Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
This week in unsettling AI ads.

The algorithms feed me variants on this one pretty frequently, always with some fantasy woman offering to be your best friend and definitely not maybe give you dirty pictures. “Normal human woman tied up in a basement” is new, though, and even skeezier than usual. I don’t know what the workflow is for these or whether there’s an actual person writing the prompts for this, and I don’t think there’s any answer that would make me less uncomfortable about it.
Wtf is up with the face in the top left circle.
Saw a remarkable take on the pro-AI parts of bsky: since DeepSeek420.69 can offer the model at like 15% of Claude’s pricing, that must mean Anthropic is operating at at least an 80% positive margin on inference, so things will work out.
In the same thread they complained about Zitron’s math being dodgy.
Another lament about the tech industry has come to my attention, as Ky Decker asks themselves “Do I belong in tech anymore?”
Pretty solid piece all around, recommend checking it out
[lament about ill effect of AI, here’s why it’s terrible]
I use AI sparingly for
this is critihype
presented without further comment, this remarkable piece of political advice
I think that, while many LessWrong readers do believe that one party is way better than the other, such that the inter-party quality variation is far larger than the intra-party quality variation, this is not true of all readers.
… Wait is this about race and iq again?
Anyway the math ain’t mathing, as there can never be a Republican above-average enough to counterbalance the fact that they are a Republican.
Okay, today’s Rat fixation that I want to rant about is “constructing hypothetical examples to justify my idiosyncratic position.” Like, I’m not even interested in arguing about whether their conclusion makes sense in their hypothetical world, I’m more curious about what kind of chain of thought leads you to speculate about that in 2026. Like, maybe I’m reading way too much into this but in practical terms it feels like “how do I justify voting for the Republicans no matter how far-right they might go, if my local Democrats try to move the tiniest bit left?” which feels like the rat/tech ethos in a nutshell.
Or maybe it’s the more traditional pastime of trying to construct arguments in favor of controversial-sounding positions so that you can feel smarter and more open-minded than everyone else.
Unfortunately, our problem right now is not Donna the below-average Democrat but Donald the fascist. And when it comes to fascists I do not ask if they are above or below average.
Emile Torres and me, rationalist enemies list #3 and #1. A photo to strike fear into the utility functions of rationalists everywhere.
https://circumstances.run/@davidgerard/116456408676175449
pic by my kid and from the premiere of Ghost in the Machine, an awesome documentary. I’m in it. tl;dr AI was always olde timey race science all the way down
What a rogue’s gallery. Truly a chilling portrait of the sworn enemies of trillions of unborn human beings.
some of us just nail the existential risk
‘top ai’. so it is a sex thing after all.
Just looking at that picture makes me hear googols of unsimulated people scream in terror.
https://theintercept.com/2026/04/23/chatgpt-ai-false-confession-interrogation-crime/
“If ChatGPT can be induced into a false confession, then who isn’t vulnerable?”
I don’t have words for how stupid this is.
Their heart seems to be in the right place (police interrogation will be exploitative and brainwashy, with no real consequences for the interrogators), but they sure chose the dumbest possible way to make their point:
Despite the claims of AI evangelists, chatbots aren’t people and haven’t achieved sentience. The differences between a chatbot and a real person, however, make Heaton’s ability to elicit a false confession more disturbing, not less.
“ChatGPT lacks many of the vulnerabilities that make people more likely to falsely confess — like stress, fatigue, and sleep deprivation,” said Saul Kassin, a professor emeritus at John Jay College who wrote the book on false confessions. “If ChatGPT can be induced into a false confession, then who isn’t vulnerable?”
Detective: “So Magic Eight Ball. I’m just gonna ask you outright. Were you the killer?”
Magic Eight Ball: “It is decidedly so.”
Some Guy: “Oh my god.”
If it’s this easy to coerce the Magic Eight Ball, what chance do the rest of us have against DOOM?!?
Only the magic eight ball has been rigged with sides reading:
- Signs point to yes
- It is decidedly so
- Absolutely. You’re so smart
- Maybe. Good question!
- There are strong reasons to think so
- Lots of people are saying it
- I can see why you’d ask that
- There isn’t a strong consensus either way
Friend of Ziz and cofounder of the ‘rationalist fleet’ pops up out of the woodwork trying to clear Ziz’s name
I find myself noticing things rather detached from the typical Ziz funnybusiness more strongly than I notice the stuff about that whole situation.
“I’m Gwen Danielson, a neuroscientist and bioengineer, who decided as a child that I would end Death (and bring people back if I could) and that I would become a dragon and help generally facilitate a fantastical transhumanist future.”
“I dream of non-Euclidean geometries, of countless worlds visible and accessible in the daytime sky, of competent infrastructure, of soul forges continually working to bring back the dead… I dream of reaching through warps in the spacetime fabric to save the dying across time”
“Signed, the dragon of creation Creatrei (cree-AH-trey) also known as Gwen Danielson or as Char and Astria (when referring to my hemis as distinct individuals)”
The reactions are fun. “This post is not actually doing a good job of making me trust you and think this conversation is safe to have[1], and I notice that as I am saying this that I am afraid that this will now somehow result in someone trying to murder me in my sleep”
soul forges continually working to bring back the dead
Even in death, duty does not end.
Ziz has always had a tendency to express her ideas through metaphors in fiction that are familiar to her. We spoke at length about Contessa and Doctor Mother from Worm; the Wardens from World of Warcraft; Frisk, Sans, and especially Undyne from Undertale; Tassadar from Starcraft; Harry and Dumbledore from HPMOR; Iji.
Does “read a second book” apply here, or is this a “read a first book” situation?
Given the Star Wars discussion alluded to in the next paragraph, I think we’re looking at “try rereading your first book while being less of a self-important dumbass.” Like, I get it, Revan is one of the best characters in that canon, and where Vader fell for very human if selfish reasons, Revan pushed even farther and was using the dark side to conquer the galaxy in order to try and save it from… being conquered by a Sith empire that drew great and terrible power from the dark side of the Force. What happened to Vader again? Oh yeah, he sought the dark side for the power to save his wife and became a great and terrible warlord by calling on his rage and despair over… killing his wife.
Like, the fact that trying to gain power through the dark side is at best a self-destructive shortcut that will undermine your actual goals is pretty goddamn consistent, and this is Star Wars Legends, a canon not exactly known for being internally consistent. I’m not saying you need to “agree” with that premise, and I think the franchise as a whole is usually too conservative, with the passivity of the light side being a big part of that. It’s just deeply absurd to me for that to be the takeaway from that story. Like all the people whose main takeaway from Jurassic Park was “man, wouldn’t it be cool if we had real dinosaurs?” and who then went on to be the victims and villains of Jurassic World.
I’ll have you know there’s lots of important WoW lore in the novels!!!
Tassadar’s probably the most telling. For those not in the know, the Protoss are noble savages modeled after samurai, templar, and Native Americans. Tassadar in particular is modeled after the stories of legendary Hiawatha and real person Geronimo, first uniting the Protoss under a single banner and then sacrificing himself in a cutscene at the end of a big battle before repeatedly re-appearing as a ghost in later titles. On one hand, Tassadar’s the most influential Protoss in the entire setting; after his death, everybody switches in-game from a greeting revering ancient hero Adun (“in taro Adun”) to a greeting mentioning new hero Tassadar (“in taro Tassadar”). But on the other hand, he’s a general and warrior deeply enmeshed in a military tradition which demands his unwavering total sacrifice in order to achieve any progress. Tassadar is a racist stereotype embodying the idea of stoic acceptance; when Protoss say “it is a good day to die” they are echoing tropes about Native American beliefs.
Not gonna touch the Undertale reference today.
God knows I love me a good dose of genre fiction, but I believe that if you’re gonna base your entire worldview on fiction you should use something that’s not second or third hand.
I feel a bit regretful sometimes that none of my copious fanfic output has inspired anyone to draw fanart. But at least no one has gotten weird about it, either.
I feel like you may be judging Tassadar too harshly. What you said is true but his defining feature is openness and empathy towards other cultures. The Conclave, the ruling body of the protoss, consider humans basically animals and blast them from orbit without a care, and they dismiss the ‘Dark Templar’ as heretics. This even though the Dark Templar are the only ones who have the magitech to kill the invading aliens. His whole arc is about rejecting prejudice and teaming up with people your culture considers inferior, not just heroic sacrifice.
I can understand what they’re saying, though. Like, his defining moment is the finale of SC1 where he does sacrifice himself and become this major culture hero. There is definitely room to question that warrior ethos and what it says about the Protoss and what that in turn says about how we think about the real-world cultures and ideas that inspired them, and I’m pretty open to those constructs not being particularly respectful. But within those background structures and the culture they describe the immediate storyline is about how the conclave and even the Khala itself is ultimately destructive and makes the Protoss more vulnerable even as it is their source of strength and identity, which feels actually pretty timely if you read it that way.
while it’s technically plausible that Ziz was involved in a minor oopsy whoopsy fucky wucky deady weady or two or six, she’s always been lovely to me, much of the time,
Uh oh
https://www.rollingstone.com/culture/culture-features/gwen-danielson-zizians-interview-1235552043/
Going on a media tour now
The foursome had been on the lot for a few months when the pandemic struck in March 2020. That same month, the price of Bitcoin — in which most of Borhanian’s life savings was invested, money that was covering much of the group’s expenses at that time — cratered. Soon after, the four of them stopped paying rent to Lind altogether.
Aella also lost much of her early earnings on crypto.
Curtis Lind reminds me of the businessman who supported Elron early on and lost most of his money.
The end where Gwen Danielson decides that Yudkowsky is their savior is tragic.
edit: The article describes Danielson as transfemme but refers to them with they/them pronouns, so I will do the same.
The back-and-forth between Gwen and LessWrong commenters is getting spicy. This definitely deserves a top-level post on SneerClub.
Not really part of the back and forth, but I find this illuminating of their recent travails, regarding it not being a step too far to prevent them from posting:
“This isn’t super relevant since it’s not like the standards are super high but ever since the enormous onslaught of LLM psychosis posters, the default of people who try to post to LW is to get rejected from posting here”
Sounds like the mods have had to deal with a lot of unbalanced people lately, and are not having it.
I’m Gwen Danielson, a neuroscientist and bioengineer, who decided as a child that I would end Death
thiel jumpscare
Is the crappy dragon fursona related to Peter Thiel being an anagram for “the reptile”?
they’re all cosplaying medieval alchemy so at least it fits a theme
Ah yup, that is definitely the type of person who’s deeply attracted to cults.
Habryka’s all, “Dammit, why do you have to come here and remind everyone where the Zizians came from?”
EDIT: This person also seems to have no concept of the finality of death, which might explain why the Zizians were so murdery.
This person also seems to have no concept of the finality of death
The god AI can perfectly simulate people, and as a copy is you, death isn’t permanent. And when you start to think this is inevitable and close, murder becomes just another way to signal how strongly you feel about a thing.
The link to the guide to setting up a retrofitted boxtruck in which to continue AI alignment research, with local copies of the Internet Archive, after civilization collapses in 2025 is fun.
SCENE: a wind-blasted desert landscape. In the foreground, a weathered truck rests on the side of a ruined highway. The windscreen is dusty and cracked, and the tyres have long since rotted away.
A PAIR OF SCAVENGERS, clad in bulky rags, approach the truck with a mixture of excitement and trepidation.
Using a CROWBAR, they force open the back doors of the truck, and exclaim
“Fuck it, Ted, it’s one of those dumb AI trucks!”
These are the kind of people who I could picture working away at a laptop in a box truck and they tell you they’re close to a breakthrough and then you get closer and the laptop isn’t on, and hasn’t been powered up for years.
I’m no fan of Greg Egan’s fiction but I am a fan of him pissing off rats:
https://www.lesswrong.com/posts/EbqJfCz9qvfptNbCQ/an-angry-review-of-greg-egan-s-didicosm
Link to short story: https://www.gregegan.net/DIDICOSM/Complete/Didicosm.html
also
https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (shared in a comment to the above)
Here’s the short story that pissed Zack off: https://asimovs.com/wp-content/uploads/2025/03/DeathGorgon_Egan.pdf
Edit: I found the time to read Didicosm and while it suffers from an Eganesque infodump (with diagrams!) it’s not nearly as bad as the LWers are wittering on about. Part of me thinks they’re mad because the main scientists are all women while the main dude is a stable, well-adjusted partner in the relationship.
I rarely find that reading fiction makes me upset […] fiction can be quite bad, but rarely do I find it personally offensive…
Rat reading a book that makes fun of AI doom in passing:
Me, a trans woman immigrant:
https://knowyourmeme.com/memes/mel-gibson-talking-to-bloody-jesus

I’m a huge fan of Greg Egan’s fiction and a huge fan of him pissing off the rats. He’s been explicitly needling them and making fun of them in his fiction for over a decade, and calmly contradicting them for over two decades, ever since noticing weirdos among his fans.
He showed up at my blog to talk math back in the ancient days when I had a comment section. And he’s active on Mastodon a fair bit.
yeah he rules
Semi-topical, there’s a new Local 58 and even the moon demon hates AI, I guess.
Two tech-related links for today, both relating to fascism.
First, tante has a new blogpost about AI being overtly fascist in nature, which he’s also posted to LinkedIn, seemingly for kicks. (Found on red site, too)
Second, the Nazi laptop company has sent a second pre-release laptop to DHH, showing they have not changed at all since they went full fash six months ago:

Tante nails it again. No notes.
That he has a LinkedIn finally explains how I first heard of him. A liberal but very startup/hustle culture brained colleague shared an anti-blockchain thing on the Slack. Always wondered how she stumbled on a comm(o|u)nist tech critic, but it must’ve been LI.
Habryka doesn’t have time to write all the crazy shit he’s mulling on, so he offers a summary.
https://www.lesswrong.com/posts/MqgwHJ93pJpaeHXs6/posts-i-don-t-have-time-to-write
Do you enjoy living in a society that takes fire safety seriously? Sucks to be you, I guess:
- Fire codes are the root of all evil
How about we just make all the mosquito nets flammable. That’s effective altruism!
Also Switzerland is a libertarian paradise apparently.
I knew some people who lived in Switzerland for a bit and they joked that it was a libertarian paradise because you can drink and smoke when you’re a teenager, the age of consent is 16 and everyone is issued a gun as part of their military service. I didn’t expect anyone to take it seriously…
Afaik Switzerland does have a unique system of federated democratic governance, one of the oldest democratic systems in Europe. The downside is that women weren’t able to vote in some Swiss cantons until the 90s.
Fire accidents seem to have the unique combination of producing extremely strong emotional responses by people in a local community, while also often being traceable to an o-ring like failure that you can over-index on.
Gee, why would people get emotional about friends and family being burnt alive. How bizarre.
Also I am not a fire expert by any means whatsoever and maybe I’m missing this guy’s point. But pretty much every account I have ever read of a fire that killed a lot of people is like “the building did not meet fire safety standards and the management had been dodging calls from the fire safety inspectors. Multiple people said the building was unsafe. On the night the fire happened the fire exits were chained shut.” Like, read about this horrendous fire that happened near where I live. There is no need to bring up o-rings. Fires in residential buildings and entertainment venues are not the same as fires on NASA spaceships.
Also fire codes do not control the size of fire engines. That’s a bad decision made by firefighters.
Unfortunately there are a few things that make courts pretty tricky to implement in practice for things like the rationality, AI safety and EA communities. Badly implemented courts also can just make things worse by creating a clear target for attack and pressure. Seems very tricky, but probably we should have more courts (or maybe not, I would need to write the post to figure it out).
yesss yesss lesswrong people’s tribunals and struggle sessions let’s gooooooooo
in b4 “Committee for AI Safety” seizes control and executes people who are too smart with a guillotine.
might LWers be the real Pol Potists? Read on to find out!
They sure seem to enjoy the idea of a Year Zero
I mean, it’s a banger of a song.
They develop a special g-meter to find people who could potentially create more efficient gpus and send them to the gulags.
Is that what “g” has stood for all this time?
I was trying to combine the scientology e-meter stuff with the iq race science g-factor stuff.
We’ll hide them at the g spot. LW would never find them there.
But despite well-documented claims to genius IQs, somehow the billionaire set ends up not on the chopping block.
The fire code thing really is an excellent example of LessWrong Brain. Fire truck drivers insist on needlessly large trucks (no citation) which makes roads 30% wider than they would otherwise be (no citation) which has “probably” “non-trivially” contributed to larger cars (no citation) leading to enough additional road fatalities to cancel out the lives saved by stricter fire codes (no citation).
The LessWrong Brain argument starts with a deliberately contrarian conclusion and proves it with a Rube Goldberg chain of logical syllogisms. Of course, citations are strictly optional, and they are free to misinterpret them as they see fit. The only real standard of each claim is “looks good to me”, but you are supposed to be impressed that they managed to string a dozen of them together to reveal some shocking, deep truth of the world that nobody else knows about. The AI 2027 nonsense is an infamous example of this.
He uses the word “fermi” which is cult jargon based on Fermi estimation, a.k.a. guessing shit with back-of-the-envelope calculations. Not exactly what you want if you want to convince people to reform fire codes, especially if you have zero citations for anything.
I guess people just aren’t rational enough, and the only reason the fire codes are so irrational is because people are emotional about fire codes. Firefighters are apparently revered as heroes, when it is the LWers who should be the heroes. After all, firefighters merely save people from fires, while LWers buy multimillion dollar mansions to talk about saving quadrillions of hypothetical people from hypothetical basilisks!
rationalism is when i pull five numbers out of my ass and multiply them together
it’s what they do instead of prayer
Yeah but never pull 9 numbers out of your ass, that would make you too smart and they will tell the gov to drone strike you.
Fire codes are the root of all evil
Ah somebody got told by their landlord not to do something. (I remember our student housing landlord (a big org) was regularly claiming ‘fire codes’ as an excuse to get rid of stuff in semi public areas. The actual fire codes didn’t demand this btw, it was just the excuse they used to stop students from filling everything with random trash).
I think building codes and zoning reform are good topics to get into in rich English-speaking countries but you have to 1) learn from actual experts not x.com/wiseAss1488, and 2) engage in local politics and policy and not just post to nerds around the world.
Is there a more generalized form of “weird hill to die on but at least you’re dead”? Because a new-to-me way for too-rich people to end their gullible lives has emerged https://www.technologyreview.com/2026/03/30/1134780/r3-bio-brainless-human-clones-full-body-replacement-john-schloendorn-aging-longevity/
https://russwilcoxdata.substack.com/p/and-the-alignment-problem-what-chinas
In June 2025, Zhao Tingyang gave a talk at Tsinghua’s Fangtang Forum. The edited transcript ran in The Paper on July 4 under the title “人工智能的伦理与思维之限” (The Ethical and Thinking Limits of AI). Near the end, Zhao wrote this:
“What requires more reflection is that attempting to ‘align’ AI with human nature and values actually contains a risk of human species suicide. Human nature is selfish, greedy, and cruel. Humans are the most dangerous biological species. Almost all religions demand the restraint of human desire; this is no accident. AI aligned with human values may well become a dangerous subject by imitating humans. Originally, AI does not possess the selfish genes of carbon-based life, so AI is actually closer to the legendary ‘human nature is fundamentally good’ kind of existence, whereas human nature is not ‘fundamentally good.’”

The alignment paradigm treats human values as the target AI should conform to. Zhao is arguing the target is the danger. An AI aligned to human values inherits the specific features of human judgment that Zhao says have produced the record of human harm. The paradigm is not incomplete. It is pointed the wrong way.
Zhao’s argument has developed across CASS, The Paper, and Wenhua Zongheng from late 2022 through 2025, from a provocative aside into a sustained critique of the alignment paradigm. In the same period, the English-language alignment and AI ethics literature produced no substantive engagement. No citations. No rebuttal. No naming. Zhao is a member of the Chinese Academy of Social Sciences Institute of Philosophy, author of the Tianxia framework, and one of the most cited philosophers working in Chinese today.
I need to think on this a little more, wasn’t on my radar.
In the same period, the English-language alignment and AI ethics literature produced no substantive engagement. No citations. No rebuttal.
Wow it’s almost like alignment and AI ethics studies is less a serious academic field and more like a prank capital likes to play on consumers.
But I also think Zhao Tingyang’s take that alignment will make AI evil because people are evil falls too much into the the-people-deserve-to-be-disempowered totalitarian state funny business side of things to be especially influential down these parts.
I asked someone from the mainland, she more or less agreed with you:
This is basically consistent with the long-standing logic of the Chinese internet: technology brings discursive power, and to give it away is to give away discursive power. AI is especially so.
To be fair, while I’m not familiar with the discourse in China, I know a lot of people (rightly) consider “alignment” as a framing to be a red flag for cranks and rats. It’s not that surprising that this attitude hasn’t been getting much recognition when the marketing departments of AI companies have been more engaged on that subject than serious academics.
guys

If it’s “agentic,” doesn’t that imply it smokes weed for you
I’m sorry, I think I need to believe that this is taking the piss in order to be able to function. It can’t be real. (It’s definitely real.)
Oh God I read their FAQ and it looks like the whole concept is to gamify smoking weed because if there’s one problem with weed it’s that it’s not addictive enough on its own. I mean the actual concept is to try and smash enough hip tech buzzwords together to extract some amount of the dwindling venture capital continuing to slosh around the valley, but if it actually happens the thing it’s going to do is take all the addictionware tactics that app developers have developed and bring them to bear on promoting drug use.
This is a serious blow to coolness, from which not even drugs will easily recover.
Wake me up when there’s an agentic butt plug
You made me look and now you all have to know there’s a library for butt plugs (written in Rust) that has LLM generated code in it:
https://github.com/buttplugio/buttplug#inclusion-of-llm-generated-code
This is how I learn that buttplug.io has fallen
I hurt myself today… to see if I still feel…
I have mixed feelings about speaking things into existence
Considering the amount of weird hentai on the internet that cannot end well.

Agentic Ripz is my new jam band.
Looks like Mythos didn’t catch this one:
Anthropic secretly installs spyware when you install Claude Desktop
Whoopsie!
It’s fine, spyware is only a risk when it’s bad people’s spyware. It’s totally fine when it’s Anthropic™-approved spyware!
As for Mythos catching things, maybe they should have used Mythos on their very own Claude Code considering that it has hilariously obvious security exploits, such as this one which inserts an arbitrary string into a shell command. Actually, never mind I don’t see anything wrong here, maybe we should burn another $20k in electricity running Mythos on it again to find out.
anthropic is the most moral ai company in the universe
Well it does have the secret “Any attempt to arrest a senior officer of OCP results in shutdown” directive.