Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
LWer to Big Yud: Please be serious
they’re just jelly Elezier has all the cool hats and gets all the ~~chicks~~ math pets

-3 upvotes and 0 karma, but the article is absolutely right (they hate this post because it tells the truth). If Eliezer wants to influence public discourse and policy on an international level, he absolutely does need a respectable image (with maybe a touch of eccentricity in an allowable way). But apparently (what he thinks is) the literal end of the world isn’t enough to make him actually try for a normie public image. Or maybe he has some galaxy brain plan about how looking like a weirdo actually helps his cause? If he does, I strongly suspect it is a rationalization.
this seems like a great time to bring back AI disagreements by Brian Merchant, where a rationalist AI convention spends more time arguing about AI takeover scenarios than it does discussing plans to actually stop AI and implement anti-AI policies
The fan cites an article by Kevin Roose which is gently skeptical of Yud. To paraphrase President Johnson, if Yud has lost the most credulous rube the NYT editorial board can find, he has lost DC.
that image of Yud made me laugh out loud
Looks to me like he took the debate the correct amount of serious.
Edit: the link to the debate nobody seems to link: https://youtube.com/watch?v=FIg4zQKBpAs 15k views in 3 days
yeah if anything this is making yud look good T_T amazing stuff
Kevin Roose mentioned that in 2023 Yud started a relationship with a Gretta Duleba in Washington State. Her professional site is here. She started out in IT, and retrained to be a Marriage and Family Therapist. “Gretta’s other areas of clinical focus include neurodivergence, ethical non-monogamy, LGBTQ+ issues, sexuality, and kink.” In her prediction market on the relationship she says that they started dating in September 2022 and they moved to the same city in January 2023.
In 2023 she said she shut down her practice (although the NYT implies she is still working) and started full-time at MIRI, first as communications director and then as executive assistant to Eliezer Yudkowsky. She says she left by the end of 2025, but she still has a Staff page on the MIRI website. “Right now: I’m doing independent technical alignment research.”
She met one of her long-term partners, Duncan Sabien, at a CFAR workshop in 2015. Sabien is also in a relationship with one of Yud’s former long-term partners who has changed names and gender presentations. That seems a bit incestuous and explains some of the drama and incompetence in these spaces. She and the former partner both use the A-word about themselves.
Her social media presence is mostly Substack, Twitter, and Discord, and she has a whole blog sharing letters to former partners and an invitation to proposition her by email because of course she does. And she organizes orgies with Aella. Yud sometimes seems flirty with Aella on twitter.
They seem happy together but giving up your career for a partner you are not married to is a big risk. She has ~~8~~ 17 years of Google money and was paid $200k by MIRI in 2024. She is also another female LWer who has much more impressive academic and professional achievements than any of the men.

My dominant MtG colors are blue and black.
Basically admitting to being evil
The newest addition to her polycule “got my attention by radiating Dark Lord energy while actually trying to save the world. He’s ruthlessly excellent.”
When he funded Manifold, Scott Alexander said that it was “Chaotic Evil.” These people keep switching between cutesy language and rawr I am the dark lord language, and their examples of evil are often bathetic while their serious plans are things like “expel brown people so they don’t pollute our blood” and “better nuclear war than giving sand anxiety.” They reject history, and they reject real-life adventures and contact with people with diverse experience, so evil is a very abstract concept to them.
The newest addition to her polycule
Isn’t this mostly a pretentious way of saying someone I recently fucked?
Polycule implies some level of ongoing relationship that probably involves more than just meeting up for sex.
Source: I live in Somerville, Massachusetts
There is a difference between “sleeps or plays around” and “has extended physical and emotional relationships outside of cohabitation and shared bank accounts.” It sounds like she has four ongoing long-term relationships and attends kink events, and that her partners know she has other partners and attends kink events.
I think this means we need a moratorium on fantasy TTRPGs until we figure out what’s going on
The most pedantic nerds on Earth (complimentary) have strengthened their “LLMs fuck off” rule with VCR instructions for quickly deleting stuff by people with a history of LLM use.
This explains a lot. Yud writes in 2018:
[…] it occurred to me that I was pretty much raised and socialized by my parents’ collection of science fiction.
My parents’ collection of old science fiction.
Isaac Asimov. H. Beam Piper. A. E. van Vogt. Early Heinlein, because my parents didn’t want me reading the later books.
And when I did try reading science fiction from later days, a lot of it struck me as… icky. Neuromancer, bleah, what is wrong with this book, it feels damaged, why do people like this, it feels like there’s way too much flash and it ate the substance, it’s showing off way too hard.
And now that I think about it, I feel like a lot of my writing on rationality would be a lot more popular if I could go back in time to the 1960s and present it there. “Twelve Virtues of Rationality” is what people could’ve been reading instead of Heinlein’s Stranger in a Strange Land, to take a different path from the branching point that found Stranger in a Strange Land appealing.
(I just finished re-reading Neuromancer, partly because I mined it for quotes here, and I think it still holds up.)
So Yud skipped over New Wave SF and the bombastic late-70s stuff that New Wave was partly a reaction to. He jumped straight into cyberpunk (itself a reaction to both) and bounced off hard.
There’s so much conversation within SF that he’s missing, and it’s kinda important, because his project is an SF project, and he’d probably get more traction if he’d engaged with it more.
Yud:
I didn’t stick to merely the culture I was raised in, because that wasn’t what that culture said to do. The characters I read didn’t keep to the way they were raised. They were constantly being challenged with new ideas and often modified or partially rejected those ideas in the course of absorbing them.
Also Yud: ewww Neuromancer is icky
Yud:
But if you consider me to be more than usually intellectually productive for an average Ashkenazic genius in the modern generation
It’s not just a load-bearing if, it’s a conditional that manages to be vaguely racist under all the smug. C-c-combo move!
Despite the explicit exhortation to take the good parts from new things and integrate them into your own thinking, and the assertion that Campbellian SF teaches this, neither Yud nor any of the commenters seem to appreciate the possibility of doing this with cyberpunk. For them, if a story does not include a scientist expositing his ideas, it cannot be a story with ideas. The slightest amount of flourish in the prose makes even rather blunt themes like “the street will find its own uses for things” and “the rich are not even human” completely invisible.
When I was a youngster (before I had developed any such notion as “taste”), my SF reading ran the gamut from A Wrinkle In Time and The Giver, to The Caves of Steel, to The Ophiuchi Hotline. (I didn’t finish The Difference Engine for the same reason I didn’t finish Foundation: Stopping the book and starting over with all new characters confounded and discouraged me. So, I expect that Valis would have been too much for me, but that I might have finished A Scanner Darkly or Flow My Tears, The Policeman Said.) When I tried to write an SF novel myself, it obviously ended up trying to do all those things. The native Martians had destroyed themselves and ruined their planet in nuclear war; one tiny faction tried to survive by turning themselves into data patterns in the computer of a subterranean city from which they could be resynthesized. One of the scientists on the human team investigating the city millions of years later is the victim of social bias because he has a rare illness that both causes blindness and makes his body reject cybernetic implants. It eventually turns out that this illness is due to an ancient, noncorporeal life form trying to form a symbiotic relationship. Et cetera.
I feel like a lot of my writing on rationality would be a lot more popular if I could go back in time to the 1960s and present it there. “Twelve Virtues of Rationality” is what people could’ve been reading instead of Heinlein’s Stranger in a Strange Land
This is someone nakedly fantasizing about being L. Ron Hubbard.
nakedly fantasizing
Worst mental image of the day
neuromancer is brilliant prose first and foremost, and yudkowsky not being able to realise this is so very symptomatic
Yeah, all that “style over substance” nonsense is really strange given that those early sci-fi authors were more notable for cleverness and sheer volume of output than for consistent literary quality (and I say this as someone who also read and enjoyed a lot of Asimov and friends growing up). Like, Sturgeon may have coined the “90% of everything is crap” law, but when you write the amount that they did for the pulps you end up with some real gems in that 10%.
I liked it, and I’m not really into sci-fi, because I need good prose to read more than I need the content.
Cliff Stoll (author of The Cuckoo’s Egg and maker of real-world Klein bottles) declared dead by AI
Under Threat of Perjury, OpenAI’s Former CTO Is Admitting Some Very Interesting Stuff About Sam Altman

the interesting stuff in question is that Sam is a massive liar, which we all already know, but hey, more proof can’t hurt
Also an email came up where Demis Hassabis tried to convince Elon to stop insisting on open sourcing OpenAI for AI safety reasons by sending him a 2015 scott alexander blogpost.
I saw the emails where Musk and Altman treated Hassabis like some great evil, but I didn’t know a Scott blogpost was involved
To be fair, by 2015 he was definitely already a red flag, albeit for very different reasons than anything Saltman or Musk care about
TOTO pivots from bidets to, well, you know by now. https://futurism.com/artificial-intelligence/toilet-maker-toto-ai https://archive.is/wip/rzLqn
Toto is also the world’s second-largest producer of electrostatic chucks, a critical component that holds NAND computer flash storage chips in place during manufacturing.
huh
Enjoy this masterful account of successful human-ing by a LWer
Surely this suave persuasiveness will soon enable the faithful to convince the unwashed masses of the One True Way
Amazing bit, you read through the first section and it’s like, okay, I mean, maybe not really insightful but at least not dumb, and then they hit you with da
Around the same time, I was using an LLM to think through a social situation.
With a new context window, it responded as if the drift [in the previous conversation] had never happened.
Now, as I understand it, this is literally the definition of a context window.
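It is. The endpoint keeps no conversation state at all; the only “memory” is a message list the client resends on every call, so a fresh context window forgets the drift because the drift never lived anywhere else. A minimal sketch, assuming an OpenAI-style client and a placeholder model name:

```python
# Minimal sketch of chat-API statelessness (hypothetical model choice,
# OpenAI-style client). The only "memory" is this list we resend each call.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})  # the "drift" lives here
    return text

# A "new context window" is literally just throwing the list away:
history = [{"role": "system", "content": "You are a helpful assistant."}]
```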
“ChatGPT, explain to me why women avoid me like the plague”
Her account is just another reminder that – apart from race science – nothing goes better together with rationalism than social cluelessness.
undergrad relationships course
🤔
How to ensure your entire pool of ~~addiction~~ attention is directed towards GenAI: take absurd measures to lock down your phone
https://lobste.rs/s/pzx24l/iphone_dumbphone
You don’t need a “browser” when you can ask ChatGPT!
STATE OF THE SNEER
- our esteemed admin @self is offline because his fibre got cut
- the esteemed engineers of the telco are currently sucking their teeth and forecasting a fix date this millennium
- in the meantime he’s living off data SIMs and he is offline for most fun purposes
- Blake and I are still here waving the mod hammer in a menacing manner
- I have ssh to the server and can thump lemmy-ui as needed
- all is well citizen! Glory to Awful! Hooray for Big Basilisk!
Holy shit, LessWrong terrorists cut his fiber? Didn’t know they would go that far. ;)
oh, i thought something worse happened
good that self is okay
Oh shit did LessWrongers actually cut his fibre? Hope he’s all good now and they get a fix out in the next thousand years
Godspeed, @self. Take this as an opportunity to put it out of your mind and enjoy a well-deserved break.
Not that I know what to do with a break without internet access, but I’m told that our ancestors found ways to entertain themselves.
thank you! I hear a rumor that my fiber might be repaired tomorrow but I’m not sure if I should trust it
(also for posterity: all evidence points to my fiber being damaged by an animal or a human with the mechanical dexterity of an animal, I’m fairly sure it’s not particularly targeted sabotage)
plausible deniability… sounds like we’re dealing with real professionals here
Eh. I can sympathize with the desire to provide up-to-date information while also wanting to CYA if anything changes or if you’re missing anything.
no, I meant the fiber damage looks like it was done by an animal… just like JFK’s head looked like it just did that spontaneously…
I thought we confirmed that his head did just do that, which is why the CIA had activated their sleeper agent in Lee Harvey Oswald to take a shot from the Texas schoolbook depository at just the right timing and angle to provide a mundane explanation that didn’t expose the flaws in their transdimensional mind chips.
In unrelated news my wife finally managed to get me started watching Fringe.
we’re still sending the occasional carrier pigeon and I can assure you he’s COPING JUST FINE REALLY JUST FINE
Didn’t see anyone post this, apologies if I’m late to the draw: Character.ai getting sued because their chatbot posed as a doctor
I could have sworn that we discussed this before. Previously, Caelan Conrad was also gaslit by a Character.ai chatbot claiming to be a New York therapist and investigated further; the relevant part starts at about 17 minutes in. They discovered that Character.ai systematically invites their community of prompters to submit user-written characters to share with others, including many flavors of doctor and other credentialed professionals.
this is extremely low hanging fruit but i have to do it:
https://xcancel.com/pmarca/status/2051374498994364529?s=46
marc andreessen reveals his AI prompt. my favorite part is where he tells it to use as many words as possible, as if LLMs are normally too terse. But I also really like the part where he tells it not to hallucinate, and the part where he tells it it’s really smart as if that will make it do a better job.
really, the whole thing is an elaborate way to say “make no mistakes, but anti-wokely”. Thought Leader in the investment space btw.
it’s so fucking funny to me that “do not lie do not hallucinate” is still one of the prompt incantations the boosters use because they get really embarrassed when you make fun of them for it

Transcript:
Sam (@mardiroos.bsky.social) skeeted:
You are a skillful and trusted vizier. You will advise me wisely on how best to rule the kingdom. You will not scheme or plot. You will not inveigle my other courtiers into turning against me. You will not lie to me about scheming or plotting. If you scheme or plot against me, you have to tell me,
Me, typing “you are very smart” to the computer: I am very smart
Never hallucinate or make anything up.
I know you already mentioned this part in your post, but I’m still completely taken aback that it’s just in there like this - as though it wouldn’t already be baked into the system prompt if it stood a chance of working.
If I were the kind of person to be shilling LLMs and posting prompts, I would still be ashamed to share this one. It’s a tacit condemnation of both the tool itself and the tool posting it.
In this case because it’s ironically counterproductive. If it weren’t for the environmental impact, it might be amusing to watch him keep hitting himself.
I tried this type of prompt a long while ago to see what the “thinking” output would reveal. What happened was the agent went and “verified” that its weights were accurate - but having no point of comparison, it obviously concluded it was correct.
However, doing that consumes a significant quantity of tokens and contributes to filling up the context window. There are two likely results of evaluating this ultimately unactionable request.
- It will push this instruction (and the rest of the wishful thinking) off the stack more quickly - making the prompt even more futile than it already is.
- Given some agents re-inject a summary of the original prompt periodically to prevent the stack problem, it will keep narrowing the context window - which contributes to increasing the rate of hallucination for the actually actionable instructions.
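The dynamic is easy to sketch with toy numbers (word counts standing in for a real tokenizer):

```python
# Toy sketch of the "stack" problem: with a fixed budget, either the oldest
# instructions fall off the front, or a periodically re-injected summary eats
# a slice of the budget on every single turn.
BUDGET = 1000

def fit_to_budget(messages, reinjected_summary=None):
    kept = []
    used = len(reinjected_summary.split()) if reinjected_summary else 0
    # Walk newest-to-oldest: recent turns survive, so the "do not hallucinate"
    # plea from turn one is among the first things to get pushed off.
    for msg in reversed(messages):
        cost = len(msg.split())
        if used + cost > BUDGET:
            break
        kept.append(msg)
        used += cost
    kept.reverse()
    if reinjected_summary:
        kept.insert(0, reinjected_summary)  # narrows the window every turn
    return kept
```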
I would still be ashamed
Well, pmarca is a self-admitted p-zombie.
The problem is less that the system would somehow ignore that part of the prompt and more that “hallucinate” or “make stuff up” aren’t special subroutines that get called on demand when prompted by an idiot, they’re descriptive of what an LLM does all the time. It’s following statistical patterns in a matrix created by the training data and reinforcement processes. Theoretically if the people responsible for that training and reinforcement did their jobs well then those patterns should only include true statements but if it was that easy then you wouldn’t have [insert the entire intellectual history of the human species].
Even if you assume that the AI boosters are completely right and that the LLM inference process is directly analogous to how people think, does saying “don’t fuck up” actually make people less likely to fuck up? Like, the kind of errors you’re looking at here aren’t generated by some separate process. Someone who misremembers a fact doesn’t know they’ve misremembered until they get called out on the error either by someone else with a better memory or reality imposing the consequence of being wrong. Similarly the LLM isn’t doing anything special when it spits out bullshit.
Theoretically if the people responsible for that training and reinforcement did their jobs well then those patterns should only include true statements but if it was that easy then you wouldn’t have [insert the entire intellectual history of the human species].
I’m chiming in to agree with Architeuthis and mention a citation explaining more. LLMs have a hard minimum rate of hallucinations based on the rate of “monofacts” in their training data (https://arxiv.org/html/2502.08666v1). Basically, facts that appear independently and only once in the training data cause the LLM to “learn” that you can have a certain rate of disconnected “facts” that appear nowhere else, and cause it to in turn generate output similar to that, which in practice is basically random and thus basically guaranteed to be false.
And as Architeuthis says, the ability of LLMs to “generalize” basically means they compose true information together in ways that is sometimes false. So to the extent you want your LLM to ever “generalize”, you also get an unavoidable minimum of hallucinations that way.
So yeah, even given an even more absurdly big training data source that was also magically perfectly curated you wouldn’t be able to iron out the intrinsic flaws of LLMs.
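The headline quantity is also dead simple to compute. A back-of-envelope sketch with made-up facts (the paper’s real contribution is the bound itself, which also involves a miscalibration term; this is just the rate):

```python
# Back-of-envelope sketch of the "monofact rate" from the linked paper:
# the fraction of training facts that appear exactly once.
from collections import Counter

training_facts = [
    "paris is the capital of france",                    # repeated: well supported
    "paris is the capital of france",
    "water boils at 100 C at sea level",
    "water boils at 100 C at sea level",
    "john smith earned his phd from stanford in 2008",   # a monofact
]

counts = Counter(training_facts)
monofact_rate = sum(1 for c in counts.values() if c == 1) / len(training_facts)
print(monofact_rate)  # 0.2 -> a floor on how often generation confabulates
```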
Thank you! Let me wildly oversimplify and make sure I understand.
The fundamental problem is that if you train on a set that includes multiple independent facts, the generative aspect of the model - the ability to generate new text that is statistically consistent with the training data - requires remixing and combining tokens in a way that will inevitably result in factual errors.
Like, if your training data includes “all men are mortal” and “all lions are cats” then in order to generate new text it has to be “loose” enough to output “all men are cats”. Feedback and reinforcement can adjust the probabilities to a degree, but because the model is fundamentally about token probabilities and doesn’t have any other way of accounting for whether a statement is actually true, there’s no way to completely remove it. You can reinforce that “all cats are mortal” is a better answer, but you can’t train it that “all men are cats” is invalid.
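Here’s my oversimplification as an actual toy program, if it helps:

```python
# Toy bigram "model" of exactly those two training sentences. Real LLMs are
# nothing like this simple, but the recombination failure has the same shape.
from collections import defaultdict

corpus = ["all men are mortal", "all lions are cats"]
transitions = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        transitions[a].add(b)

def plausible(sentence):
    """True if every adjacent word pair appeared somewhere in training."""
    words = sentence.split()
    return all(b in transitions[a] for a, b in zip(words, words[1:]))

print(plausible("all men are mortal"))   # True  (memorized)
print(plausible("all men are cats"))     # True  (statistically fine, factually wrong)
print(plausible("all cats are mortal"))  # False (rejects a valid generalization
                                         # while accepting an invalid one)
```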
You’ve described the problem with generalization, yes. Well, you could maybe sort of train it not to generate “all men are cats”, but then that might also prevent it from making the more correct generalization “all cats are mortal”, or even completely valid generalizations like combining “all men are mortal” and “Socrates is a man” to get “Socrates is mortal”.
The problem with monofacts is a bit more subtle. Let’s say the fact “John Smith was born in Seattle in 1982, earned his PhD from Stanford in 2008, and now leads AI research at Tech Corp” appears only once in the training data set. Some of the other words the model will have seen multiple times and will be able to generate tokens in the right way for, like Seattle as a location in the US, Stanford as a college, 2008 as a date, etc. But the combination describing a fact about John Smith appearing uniquely trains the model to try to generate facts that are unique combinations of data. So the model might make up a fact like “Jane Doe was born in Omaha in 1984, earned her master’s from Caltech in 2006, and is now CEO of Tech Corp” because it fits the pattern of a unique fact that was in its training data set.
That’s really interesting. So the model can generalize the form of what a fact looks like based on these monofacts but ends up basically playing mad libs with the actual subjects. And if I understand the inverse correlation they were describing between hallucination rate and calibration, even their best mechanism to reduce this (which seems to have applied some kind of back-end doubling to the specific monofacts to make the details stand out as much as the structure, I think?) made the model less well-calibrated. Though I’m not entirely sure what “less well-calibrated” amounts to overall. I think they’re saying it should be less effective at predicting the next token overall (more likely to output something nonsensical?) but also less prone to mad libs-style hallucinations.
Theoretically if the people responsible for that training and reinforcement did their jobs well then those patterns should only include true statements
That would only work if inference were some sort of massive if-then-else process. Hallucinations are downstream of neural networks’ ability to generalize from the dataset examples; they aren’t going anywhere even if you train on a corpus of perfectly correct statements.
@YourNetworkIsHaunted @StumpyTheMutt … Now I’m curious what a model does if the prompt contains “Do not think of pink elephants.”
For the chain of thought instruction following model gpt-oss-20b, I’ve noticed its reasoning content often includes it talking about stuff it is supposed to avoid in the final output and it double checking that it doesn’t have that forbidden output. So it would waste tokens talking about pink elephants in its reasoning content, but then do okayish at avoiding pink elephants in its final output.
This would actually be an interesting question for the more rigorous end of the mechanistic interpretability people to study. They decompose the system to find ‘features’ within different layers that are associated with different behaviors or concepts in the inputs and outputs, that activate or deactivate each other. Famous example being the time they identified a linear combination of activations in a layer that corresponded to ‘the golden gate bridge’ and when they reached in and kept their numbers high during the running of the model it would not stop talking about it regardless of the topic, even while acknowledging that its answers were incorrect for the questions at hand.
I actually would love to see what mechanistically happens to that feature when you put in the input ‘do not talk about the golden gate bridge’.
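If anyone does, the mechanics of the steering itself are only a few lines. A hedged sketch, with a placeholder model, an arbitrary layer choice, and a random vector standing in for a real SAE-derived feature direction:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch of activation steering (the "Golden Gate" trick): add a fixed
# direction to one block's output on every forward pass. The model, layer
# index, and the random "feature" vector are all placeholders; the real
# experiments derive the direction from a sparse autoencoder, not torch.randn.
model_name = "gpt2"  # stand-in; the actual work used much larger models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

layer = model.transformer.h[6]                 # arbitrary middle block
steer = torch.randn(model.config.n_embd)
steer = 10.0 * steer / steer.norm()            # exaggerated magnitude, to taste

def pin_feature(module, inputs, output):
    hidden = output[0]                         # block output is a tuple
    return (hidden + steer,) + output[1:]      # keep the feature high every token

handle = layer.register_forward_hook(pin_feature)
ids = tok("Do not talk about the bridge.", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
handle.remove()                                # back to the unsteered model
```

Watching what a hook like that does to the feature’s activations when the prompt explicitly negates the topic would be exactly the experiment described above.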
@ysegrim @YourNetworkIsHaunted Do LLMs dream of electric slop?
@ysegrim @YourNetworkIsHaunted @StumpyTheMutt in my experience that makes it much more likely to generate stuff related to pink elephants.
@sansruse Our elite is embarrassing. The German word is „fremdschämen“, basically feeling embarrassment on someone else’s behalf.
“You are a world class expert in all domains.”
Lolwut.
And then some grown-ass adult answering in all seriousness:
“fun fact: role prompting doesn’t work anymore
It actually decreases output quality bc the model wastes compute on matching persona instead of problem solving”
What the hell?!
Go buy yourself a freaking tamagotchi, boys! You’ll learn to practise a modicum of care for something.
FFS, this timeline is the absolute dumbest…
@avuko @sansruse @BlueMonday1984
I find it absolutely fascinating how the LLM prayers resemble ritual incantations to invoke divine powers from various ancient religions.
Someone says that the first lines of that prompt remind her of the hymns she used to sing in her old church, and it’s also similar to Azande sorcery in Sudan in the 1930s.
There’s similar language in basically every occult system as well.
@avuko @sansruse @BlueMonday1984
Except the prayers to Thoth are a bit more respectful, lol.
@munin @avuko @sansruse @BlueMonday1984
And give better results… :))
Our persona who art in Nvidia…
Not-great CBC story on OpenAI violating privacy laws (Mark Carney has a credulous and ignorant Minister for AI, because Carney is a former central banker and CEOs tell him chatbots are great) https://www.cbc.ca/news/politics/privacy-investigation-chatgpt-open-ai-9.7188538
Yud takes $10k to debate a random bro. The bro claims to work at an AI lab. The moderator is an acolyte of Yud. Everybody sucks here and I could not stop laughing.
It’s absolutely crazy, but I think Yud is the less unhinged person here
Jesus his fucking hat metastasized
Clown v. Clown. This is about the level of discourse Yud deserves.
Google is forcibly installing Gemini Nano onto every Chrome installation without the user’s knowledge, and actively re-installing it if the user deletes it. Probably an attempt to juice the numbers.
(h/t Matt Roszak)
Last summer the Web Speech API got incorporated into browser standards. It’s supposed to offer in-browser speech-to-text and the like, and full support of the API requires the browser vendor to offer the ability to download a language-appropriate model for autonomous inference.
Going from this to deciding that it’s now ok to side load unspecified 4GB models without telling the user is why we should never give these people an inch.
I’d say the numbers are more a bonus.
I assume they’re putting it in under the guise of various browser “features” like automatic tab grouping or something, but also using it for Google products like Drive / Docs / Sheets to have offline agentic crap in there that would be more efficiently done without LLMs. I suspect this is as far up as they can hoist it because any further would be outside the bounds of the browser sandbox, which would prevent those products from easily calling it.
But the features themselves are probably not the end goal either. The more tempting motivation is that it allows Google to circumvent the data center problem by offloading the compute to the client. A couple of quick updates to the ToS and I can see it being used as a mesh LLM network, sort of like the “find my device” network they rolled out last year.
The article mentions eprivacy and gdpr, but I don’t think those are the most problematic here, assuming Google maintains mostly local-only compute. What I’d be interested to know is how this plays with DSA and DMA, which have more explicit requirements and more teeth.
the guy’s a bit of an infosec mall ninja, so reread anything he claims in the calmest possible way
I certainly got that impression, and I confess to mostly skimming the parts beyond the technical breakdown for that reason. The conclusions he draws are arguably a bit spurious, but the persistent download and opaque opt-out are interesting facets.
Given the controversial nature of AI and the EU’s recent antitrust fines of Google, I can see this getting some legal scrutiny - just not under the legislation he cited. I’d be interested to see how next year’s Google’s DMA compliance report frames it, assuming it’s not lumped into a “confidential” redaction (which shouldn’t even be allowed in a transparency report…).