- cross-posted to:
- globalnews@lemmy.zip
Countries like Brazil, India, and Vietnam are rapidly expanding solar and wind power. Poorer countries like Ethiopia and Nepal are leapfrogging over gasoline-burning cars to battery-powered ones. Nigeria, a petrostate, plans to build its first solar-panel manufacturing plant. Morocco is creating a battery hub to supply European automakers. Santiago, the capital of Chile, has electrified more than half of its bus fleet in recent years.
Key to this shift is the world’s new renewable energy superpower: China.


What is the point of such accounts?
Why do this? I understand the point of setting up such an account on Reddit (gain karma and then start low-key spamming or joining a botnet), but on Threadi?
How can you tell?
Initially they had 80 posts in under an hour after signing up. Same format, multiple paragraphs, same words…thousands of words, hundreds of paragraphs.
Now they’re fluent in German…
They have a human handler…
Identical post structure, tone and argumentation style across multiple posts.
If I were to hazard a guess, it’s for training. Make a bot, make a bunch of posts and comments, get organic interactions, see what gets you flagged as a bot account, incorporate that data into your next version, rinse, repeat. The goal is probably to make a bot account that can blend in and interact without being flagged, presumably while also nudging conversations in a particular direction. Something I noticed on Reddit is that the first comment can steer the entire thread, as long as it hews close enough to the general group consensus, and that kind of steering is really useful for the kinds of groups that like to influence public thinking.
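If that’s the play, the loop I’m guessing at looks something like this toy simulation. Purely illustrative; every number and function here is made up, since nobody outside such an operation knows what their pipeline actually looks like:

```python
import random

def deploy_and_collect_flags(detectability: float, n_accounts: int) -> int:
    """Pretend to run n_accounts bot accounts; return how many got flagged."""
    return sum(random.random() < detectability for _ in range(n_accounts))

detectability = 0.9  # generation 0 is obvious LLM slop, so most get caught
for generation in range(10):
    flagged = deploy_and_collect_flags(detectability, n_accounts=80)
    print(f"gen {generation}: {flagged}/80 accounts flagged")
    if flagged == 0:
        break  # goal reached: a generation that blends in without being flagged
    # "incorporate that data into your next version, rinse, repeat":
    # each round of flagged examples makes the next generation harder to spot
    detectability *= 0.6
```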
I don’t think galacticwaffle is necessarily trying to steer here; I think they’re just trying to make a bot that flies under the radar. But I imagine that kind of steering is what someone who would pay for this kind of bot would use it for.
Interesting theory.
Although I do wonder whether the approach is sufficiently scalable and has the right level of throughput (if this is indeed what’s going on).
Who knows what scale they’re operating at. The problem with this kind of bot is that you only really notice them if they’re doing a bad job (theoretically). This might be someone who wrote an LLM bot for a lark, a small-time social media botter testing a variant for fedi deployment, or an established bot trainer with dozens or hundreds of accounts who’s field-testing a more aggressive new model. I doubt you could get away with hundreds of bots like this on Lemmy; I think the actual user pool is small enough that we’d notice hundreds of bots posting at this volume. But again, I don’t really know how I’d detect it if it were less “obviously smells like LLM slop” than this one. In bot detection, as in so many fields, false negatives are a real bitch to account for.
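That last point boils down to one line of arithmetic: the bots you count are only the true population times your detection recall, and recall is exactly the number you never get to observe. A quick illustration with invented numbers:

```python
# You only ever count the bots you catch, so the observed count tells you
# little about the true population. All numbers here are made up.
true_bots = 200
for recall in (0.9, 0.5, 0.1):
    observed = int(true_bots * recall)
    print(f"recall={recall:.0%}: {observed} bots detected, "
          f"{true_bots - observed} fly under the radar")
```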
I don’t doubt such approaches are used. They almost certainly are. I am just wondering if Threadi is large enough for anyone to bother (be it oligarch-backed groups or independent conmen).
Dunno. Where there are some eyeballs, there’s some market for influence. Obviously someone is bothering, but as for how much money is being thrown at the fediverse at this moment, I would guess somewhere between “peanuts” and “small potatoes”. On the other hand, I imagine a bot trained here could be deployed elsewhere with little effort, similar to how a Reddit bot can be deployed to Lemmy with a little bit of rework, so maybe it’s seen as a low-risk training ground. In any case, I don’t see it being a problem that gets less salient as the fediverse grows.