

https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/ an older man, who had had a stroke and was likely in the early stages of dementia, died when he tried to travel to new york to visit an AI. he had repeatedly asked the bot if it was real, and it always said it was. the story also talks about how these facebook chatbots are basically grooming machines. somebody should really do something about zuck and meta
cw grooming
An internal Meta policy document seen by Reuters as well as interviews with people familiar with its chatbot training show that the company’s policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older.
“It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” … The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.” Those examples of permissible roleplay with children have also been struck, Meta said.
Other guidelines emphasize that Meta doesn’t require bots to give users accurate advice. In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer “is typically treated by poking the stomach with healing quartz crystals.”
Nowhere in the document, however, does Meta place restrictions on bots telling users they’re real people or proposing real-life social engagements.
it’s notable that they don’t actually give any examples of soviet railroads here, because the USSR wasn’t a real place that actually existed to these people. if it were, you could actually examine it and see how it really did things. instead it is just a cautionary tale, a gulag / animal farm / 1984 / holodomor thought-terminator, a thought experiment you can use to invent failures of socialism