• 2 Posts
  • 46 Comments
Joined 8 months ago
Cake day: November 24th, 2024


  • That is a possibility. Data from interacting with actual humans reduces the rate of model degradation, so maybe somebody does feel like they would get better results here. But they'd have to go to the trouble of sending requests to join instances and federate communities. It's not a whole lot of work, but it's slightly more overhead for a website that gets far fewer hits than reddit as of now.

    You're not naive dude, you're living in unprecedented times. It's sad to see people get jumpy at the idea that all of our interactions are becoming simulations of real ones, but in some places it literally happened. I don't even fuck with instagram, facebook, or tiktok because I've seen the brainrot there that got created because the platform incentivised it. Stay curious and don't let the bastards grind you down👍


  • I came here specifically to get away from the chatbot daycare hellhole that reddit became. Share some of your insights about these accounts and I'll tell you a little about why reddit got so bad. The fediverse doesn't really offer the same kind of incentive to somebody who's trying to train an LLM on comments, but who knows.

    On reddit, the biggest incentive for people to want to train LLMs is just the sheer amount of data there. Reddit is insanely big, and the karma system is basically a ready-made "weight" value, similar to how neural networks already score and rank info. Even if somebody notices the obvious bot account, enough people there will still interact with the bot sincerely that it gets the interaction it's trying to provoke every time.
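    The karma-as-training-weight idea above can be sketched in a few lines of Python. To be clear, this is a toy illustration, not anything reddit or any LLM trainer actually runs; the comments and scores are invented:

    ```python
    # Toy sketch: treating karma as a quality weight when curating
    # scraped comments into a training corpus. All data is invented.
    comments = [
        {"text": "insightful reply", "karma": 152},
        {"text": "low-effort spam", "karma": -4},
        {"text": "decent answer", "karma": 23},
    ]

    # Keep only comments the community upvoted...
    corpus = [c for c in comments if c["karma"] > 0]

    # ...and carry the karma along as a per-sample weight.
    weights = [c["karma"] for c in corpus]

    print(len(corpus))  # 2 comments survive the filter
    ```

    The point is just that the voting system hands a scraper a free quality signal that most other sites don't have.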

    Also it's easy as hell to set one up to run on reddit. Simply verify an email address, subscribe to r/newtoreddit and a bunch of other subs that don't require karma to comment, and then only cast votes for the first month before finally starting to leave comments. Reddit claims to screen for bot accounts, but deviating from this specific pattern of conduct is what gets a new user's comments flagged for review. In practice, reddit is only screening the real people.

    If you want to talk real tinfoil hat shit, this is probably by design. Chatbots drive up traffic and interaction not just with each other but specifically with the humans who severely inflate usage statistics to look good to advertisers: the ones who leave comments following common "redditisms" and patterns of discussion over and over and over and never get sick of saying the same things.

    Basically, I'm hoping none of these conditions exist here. So far it doesn't seem like it, since the fediverse isn't hiding ads as posts, blocking VPN users, or taking such a heavy-handed approach to moderation.


  • Sorry for going off on a tangent, and let me preface by saying I'm not criticizing your setup or desire for security at all. It's obviously adding a particular kind of physical roadblock to what stealing your stuff would require.

    But I discovered that BIOS setting at a young age and have had this burning question ever since: what exactly does it protect? It does prevent booting, but in a situation where somebody has physical access to your computer, that only really stops them from using your motherboard, right? They could still pull the drive and read it elsewhere. Is OP's use case the actual intention, where somebody would need to physically steal at least part of the computer in order to access it?