• 0 Posts
  • 7 Comments
Joined 2 months ago
Cake day: September 27th, 2025


  • I don’t think it has to be, or even should be, really. As a general rule, I don’t think it’s a great idea to let kids download stuff off the internet and run it without a knowledgeable adult at least reviewing what they’re doing, or pre-screening what software they’re allowed to use if they’re below a certain age. You can introduce kids to open source software and teach them computer skills while still putting limits on what they’re allowed to do, e.g. no installing software without asking a parent, or only testing software on an old machine that doesn’t hold sensitive data. I know I got thrown to the internet as a kid, but I don’t think that’s the best way for kids to learn.

    That said, I don’t have kids and don’t plan on having them, so I don’t know how realistic that is for kids nowadays. I don’t know if they’re still as far ahead of the adults as we were when it came to working the internet, so I recognize that this may all be clueless childless-adult nonsense.


  • Dunno. Where there are eyeballs, there’s some market for influence. Obviously someone is bothering, but as for how much money is being thrown at the fediverse at this moment, I would guess somewhere between “peanuts” and “small potatoes”. On the other hand, I imagine a bot trained here could be deployed elsewhere with little effort, similar to how a reddit bot can be deployed to lemmy with a bit of rework, so maybe it’s seen as a low-risk training ground. In any case, I don’t see this problem getting less salient as the fediverse grows.


  • Who knows what scale they’re operating at. The problem with this kind of bot is that, in theory, you only really notice the ones doing a bad job. This might be someone who wrote an LLM bot for a lark, a small-time social media botter testing a variant for fedi deployment, or an established bot trainer with dozens or hundreds of accounts who’s field-testing a more aggressive new model. I doubt you could get away with hundreds of bots like this on lemmy; the actual user pool is small enough that we’d notice hundreds of bots posting at this volume. But again, I don’t really know how I’d detect one that smelled less obviously of LLM slop than this one does. In bot detection, as in so many fields, false negatives are a real bitch to account for.


  • If I were to hazard a guess, it’s for training. Make a bot, make a bunch of posts and comments, get organic interactions, see what gets you flagged as a bot account, incorporate that data into your next version, rinse, repeat. The goal is probably a bot account that can blend in and interact without being flagged, presumably while also nudging conversations in a particular direction. Something I noticed on reddit is that the first comment can steer the entire thread, as long as it hews close enough to the general group consensus, and that kind of steering is really useful to the kinds of groups that like to influence public thinking.

    I don’t think galacticwaffle is necessarily trying to steer here; I think they’re just trying to make a bot that flies under the radar. But I imagine that kind of steering is what anyone paying for this kind of bot would use it for.


  • I don’t share your concerns about the profession. Even supposing for a moment that LLMs did deliver on the promise of making 1 human as productive as 5 humans were previously, that isn’t how for-profit industry has traditionally incorporated productivity gains. Instead, you’ll just have the same 5 humans producing 25x the output of one old-style worker. If code generation becomes less of a bottleneck (and it has been doing so for decades as frameworks and tooling have matured), there will simply be more code in the world for the code wranglers to wrangle.

    Maybe if LLMs get good enough at generating usable code (still a big if for most non-trivial jobs), some people who previously focused on low-level coding concerns will be able to specialize in higher-level concerns like directing an LLM, while others still write the low-level inputs for the LLMs, sort of like how you can write applications today without knowing the ins and outs of your CPU’s instruction set. I’m doubtful that that’s around the corner, but who knows.

    But whatever the tools are capable of, the output will be bounded by the abilities of the people operating them, and if you have good tools that are easily replicated, as software tools are, there’s no reason not to maximize your output by hiring as many people as you can afford and cranking out as much product as you can.