- 13 Posts
- 102 Comments
corbin@awful.systemsto
SneerClub@awful.systems•A Post-Mortem for Geeks, Mops, and SociopathsEnglish
5·9 hours agoFundamentally, Chapman’s essay is about how subcultures transition from valuing functionality to valuing aesthetics. Subcultures start with form following function by necessity. However, people adopt the subculture because they like the surface appearance of those forms, so the subculture eventually hollows out into a system which follows the iron law of bureaucracy and becomes non-functional through over-investment in the façade and the tearing down of Chesterton’s fences. Chapman’s not the only person to notice this pattern; other instances of it, running the spectrum from right to left, include:
- Rao’s The Gervais Principle, which Chapman explicitly cites, is about how businesses operate
- Baudrillard’s Simulacra and Simulation is about how semiotic systems evolve
- Benjamin’s The Work of Art in the Age of Mechanical Reproduction is about how groups of artists establish symbols
- Debord’s The Society of the Spectacle is about how consumerist states cultivate mass consciousness through mass media
- The Marxist concept that fascism is a cancer upon liberalism, which doesn’t have a single author as far as I can tell, is about how political systems evolve under obligate capitalism
I think that seeing this pattern is fine, but worrying about it makes one into Scott Alexander, paranoid about societal manipulation and constantly worrying about in-group and out-group status. We should note the pattern but stop endorsing instances of it which attach labels to people; after all, the pattern’s fundamentally about memes, not humans.
So, on Chapman. I think that he’s a self-important nerd who reached criticality after binge-reading philosophy texts in graduate school. I could have sworn that this was accompanied by psychedelic drugs, but I can’t confirm or cite that, and I don’t think that we should underestimate the psychoactive effect of reading philosophy from the 1800s. In his own words:
[T]he central character in the book is a student at the MIT Artificial Intelligence Laboratory who discovers Continental philosophy and social theory, realizes that AI is on a fundamentally wrong track, and sets about reforming the field to incorporate those other viewpoints. That describes precisely two people in the real world: me, and my sometime-collaborator Phil Agre.
He’s explicitly not allied with our good friends, but at the same time they move in the same intellectual circles. I’m familiar with that sort of frustration. Like, he rejects neoreaction by citing Scott Alexander’s rejection of neoreaction (source); that’s a somewhat-incoherent view suggesting that he’s politically naïve. His glossary for his eternally-unfinished Continental-style tome contains the following statement on Rationalism (embedded links and formatting removed):
Rationalisms are ideologies that claim that there is some way of thinking that is the correct one, and you should always use it. Some rationalisms specifically identify which method is right and why. Others merely suppose there must be a single correct way to think, but admit we don’t know quite what it is; or they extol a vague principle like “the scientific method.” Rationalism is not the same thing as rationality, which refers to a nebulous collection of more-or-less formal ways of thinking and acting that work well for particular purposes in particular sorts of contexts.
I don’t know. Sometimes he takes Yudkowsky seriously in order to critique him. (source, source) But the critiques are always very polite, no sneering. Maybe he’s really that sort of Alan Watts character who has transcended petty squabbles. Maybe he didn’t take enough LSD. I was once on LSD at the office, working all day; I saw the entire structure of the corporation, fully understood its purpose, and — unlike Chapman, apparently — came to the conclusion that it is bad. Similarly, when I look at Yudkowsky or Yarvin trying to do philosophy, I often see bad arguments and premises. Being judgemental here is kind of important for defending ourselves from a very real alt-right snowstorm of mystic bullshit.
Okay, so in addition to the opening possibilities of being naïve and hiding his power level, I suggest that Chapman could be totally at peace or permanently rotated in five dimensions from drugs. I’ve gotta do five, so a fifth possibility is that he’s not writing for a human audience, but aiming to be crawled by LLM data-scrapers. Food for thought for this community: if you say something pseudo-profound near LessWrong then it is likely to be incorporated into LLM training data. I know of multiple other writers deliberately doing this sort of thing.
corbin@awful.systemsto
TechTakes@awful.systems•Stubsack: weekly thread for sneers not worth an entire post, week ending 14th December 2025 - awful.systemsEnglish
10·10 hours agoThe orange-site whippersnappers don’t realize how old artificial neurons are. In terms of theory, the Hebbian principle was documented in 1949 and the artificial neuron was proposed in 1943 in an article with the delightfully-dated name, “A logical calculus of the ideas immanent in nervous activity”. In 1957, Rosenblatt proposed the perceptron, and the Mark I Perceptron hardware followed in 1958; in modern parlance, it was a configurable image classifier with a single layer of hundreds-to-thousands of neurons and a square grid of dozens-to-hundreds of pixels. For comparison, MIT’s AI lab was founded in 1970. RMS would have read about artificial neurons as part of his classwork and research, although they weren’t part of MIT’s AI programme.
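The learning rule really is that old and that small. Here’s a minimal sketch of a single-layer perceptron in Python; this is my own toy example learning logical AND, not anything the Mark I actually classified, and the function names are mine:

```python
# Minimal single-layer perceptron in the spirit of Rosenblatt's
# learning rule: nudge the weights toward misclassified examples.
def train_perceptron(samples, labels, epochs=20, lr=1.0):
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is +1 or -1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified: update toward x
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Learn logical AND on {0,1}^2, which is linearly separable.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [-1, -1, -1, 1]
w, b = train_perceptron(X, Y)
```

The convergence theorem guarantees this terminates on any linearly separable dataset, which is exactly what the Mark I exploited and exactly why XOR later became the famous counterexample.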
corbin@awful.systemsto
TechTakes@awful.systems•Stubsack: weekly thread for sneers not worth an entire post, week ending 30th November 2025 - awful.systemsEnglish
7·12 days agoOh wow, that’s gloriously terse. I agree that it might be the shortest. For comparison, here are three other policies whose pages are much longer and whose message also boils down to “don’t do that”: don’t post copypasta, don’t start hoaxes, don’t start any horseshit either.
corbin@awful.systemsto
TechTakes@awful.systems•Stubsack: weekly thread for sneers not worth an entire post, week ending 30th November 2025 - awful.systemsEnglish
11·14 days agoZiz was arraigned on Monday, according to The Baltimore Banner. She apparently was not very cooperative:
As the judge asked basic questions such as whether she had read the indictment and understood the maximum possible penalties, [Ziz] LaSota chided the “mock proceedings” and said [US Magistrate Douglas R.] Miller was a “participant in an organized crime ring” led by the “states united in slavery.”
She pulled the Old Man from Scene 24 gag:
Please state your name for the record, the court clerk said. “Justice,” she replied. What is your age? “Timeless.” What year were you born? “I have been born many times.”
The lawyers have accepted that sometimes a defendant is uncooperative:
Prosecutors said the federal case would take about three days to try. Defense attorney Gary Proctor, in an apparent nod to how long what should have been a perfunctory appearance on Monday ended up taking, called the estimate “overly optimistic.”
Folks outside the USA should be reassured that this isn’t the first time that we’ve tried somebody with a loose grasp of reality and a found family of young violent women who constantly disrupt the trial; Ziz isn’t likely to walk away.
corbin@awful.systemsto
TechTakes@awful.systems•Stubsack: weekly thread for sneers not worth an entire post, week ending 23rd November 2025 - awful.systemsEnglish
1·17 days agoIndeed. I left a note on one of his blogposts correcting a common misconception (common among RAG-heavy users: that it’s “all just tokens” and the model can’t tell when you clearly substituted an unlikely word) and he showed up to clarify that he merely wanted to “start an interesting conversation” about how to improve his particular chatbots.
It’s almost like there’s a sequence: passing the Turing test, sycophancy, ELIZA effect, suggestibility, cognitive offloading, shared delusions, psychoses, conspiracy theories, authoritarian-follower personality traits, alt-right beliefs, right-wing beliefs. A mechanical Iago.
corbin@awful.systemsto
TechTakes@awful.systems•Vibe nuclear — let’s use AI shortcuts on reactor safety!English
33·21 days agoLinear no-threshold isn’t under attack, but under review. The game-theoretic conclusions haven’t changed: limit overall exposure, radiation is harmful, more radiation means more harm. The practical consequences of tweaking the model concern e.g. evacuation zones in case of emergency; excess deaths from radiation exposure are balanced against deaths caused by evacuation, so the choice of model determines the exact shape of evacuation zones. (I suspect that you know this but it’s worth clarifying for folks who aren’t doing literature reviews.)
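To make the model-choice point concrete, here’s a toy Python comparison of a linear no-threshold curve against a threshold curve. The slope and threshold values are invented for illustration, not drawn from any regulatory standard:

```python
# Toy comparison of two dose-response models for excess risk.
# SLOPE and the threshold are hypothetical numbers, chosen only
# to show where the models agree and disagree.
SLOPE = 0.05  # excess risk per sievert (illustrative)

def risk_lnt(dose_sv):
    """Linear no-threshold: every increment of dose adds risk."""
    return SLOPE * dose_sv

def risk_threshold(dose_sv, threshold_sv=0.1):
    """Threshold model: doses below the threshold add no risk."""
    return SLOPE * max(0.0, dose_sv - threshold_sv)
```

At low doses the models disagree qualitatively (LNT predicts some excess risk, the threshold model predicts none), while at high doses they differ only by a constant offset; that low-dose disagreement is where evacuation-zone boundaries get drawn.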
corbin@awful.systemsto
SneerClub@awful.systems•On Incomputable Language: An Essay on AI by Elizabeth SandiferEnglish
7·22 days agoI don’t have any experience writing physics simulators myself…
I think that this is your best path forward. Go simulate some rigid-body physics. Simulate genetics with genetic algorithms. Simulate chemistry with Petri nets. Simulate quantum computing. Simulate randomness with random-number generators. You’ll learn a lot about the limitations that arise at each step as we idealize the real world into equations that are simple enough to compute. Fundamentally, you’re proposing that Boltzmann brains are plausible, and the standard physics retort (quoting Carroll 2017, Why Boltzmann brains are bad) is that they “are cognitively unstable: they cannot simultaneously be true and justifiably believed.”
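To show how small one of these exercises can start, here’s a minimal genetic algorithm in Python evolving bitstrings toward all-ones; the population size, mutation rate, and fitness function are my own arbitrary choices:

```python
import random

# Minimal genetic algorithm on the OneMax problem: evolve
# bitstrings toward all-ones. Parameters are arbitrary.
def evolve(length=20, pop_size=30, generations=100, seed=0):
    rng = random.Random(seed)
    fitness = lambda s: sum(s)  # count of ones
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(length):         # point mutation
                if rng.random() < 0.01:
                    child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
```

Even this caricature exposes the idealizations: a fixed genome length, a fitness oracle, and mutation as independent bit-flips, none of which real genetics grants you.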
A lesser path would be to keep going with consciousness and neuroscience. In that case, go read Hofstadter 2007, I Am a Strange Loop, to understand what it could possibly mean for a pattern to be substrate-independent.
If they’re complex enough, and executed sufficiently quickly that I can converse with it in my lifetime, let me be the judge of whether I think it’s intelligent.
No, you’re likely to suffer the ELIZA Effect. Previously, on Awful, I’ve explained what’s going on in terms of memes. If you want to read a sci-fi story instead, I’d recommend Watts’ Blindsight. You are overrating the phenomenon of intelligence.
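For anyone who hasn’t seen how little machinery Weizenbaum’s original ELIZA needed, here’s a minimal sketch in the same spirit: keyword patterns plus pronoun reflection, nothing more. The rules and reflections below are my own toy examples, not Weizenbaum’s DOCTOR script:

```python
import re

# ELIZA-style responder: this pattern-match-and-reflect loop is
# the whole trick that the ELIZA Effect names. No understanding,
# just transformation of the user's own words.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w)
                    for w in fragment.lower().split())

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {}."),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."
```

If a dozen regexes convinced 1966 test subjects that a therapist was listening, a trillion-parameter autocomplete will certainly convince you that a mind is conversing.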
corbin@awful.systemsto
TechTakes@awful.systems•Data Center Watch worries that anti-AI activism is workingEnglish
5·22 days agoUnlike a bunker, a datacenter’s ventilation consists of
[] which are out of reach. The [] are heavily [], so [] unlikely to work either. However, this ventilation must be [] in order to effectively [], and that’s done by [] into the [] and [] to prevent [].
Edit: making the joke funnier.
corbin@awful.systemsto
TechTakes@awful.systems•Data Center Watch worries that anti-AI activism is workingEnglish
13·22 days agoIn my personal and professional opinion, most datacenter outages are caused by animals disturbing fiber or power lines. Consider campaigning for rewilding instead; it’s legal and statistically might be more effective.
corbin@awful.systemsto
SneerClub@awful.systems•On Incomputable Language: An Essay on AI by Elizabeth SandiferEnglish
81·23 days agoI’m going to be a little indirect and poetic here.
In Turing’s view, if a computer were to pass the Turing Test, the calculations it carried out in doing so would still constitute thought even if carried out by a clerk on a sheet of paper with no knowledge of how a teletype machine would translate them into text, or even by a distributed mass of clerks working in isolation from each other so that nothing resembling a thinking entity even exists.
Yes. In Smullyan’s view, the acoustic patterns in the air would still constitute birdsong even if whistled by a human with no beak, or even by a vibrating electromagnetically-driven membrane which is located far from the data that it is playing back, so that nothing resembling a bird even exists. Or, in Aristoteles’ view, the syntactic relationship between sentences would still constitute syllogism even if attributed to a long-dead philosopher, or even verified by a distributed mass of mechanical provers so that no single prover ever localizes the entirety of the modus ponens. In all cases, the pattern is the representation; the arrangement which generates the pattern is merely a substrate.
Consider the notion that thought is a biological process. It’s true that, if all of the atoms and cells comprising the organism can be mathematically modeled, a Turing Machine would then be able to simulate them. But it doesn’t follow from this that the Turing Machine would then generate thought. Consider the analogy of digestion. Sure, a Turing Machine could model every single molecule of a steak and calculate the precise ways in which it would move through and be broken down by a human digestive system. But all this could ever accomplish would be running a simulation of eating the steak. If you put an actual ribeye in front of a computer there is no amount of computational power that would allow the computer to actually eat and digest it.
Putting an actual ribeye in front of a human, there is no amount of computational power that would allow the human to actually eat and digest it, either. The act of eating can’t be provoked merely by thought; there must be some sort of mechanical linkage between thoughts and the relevant parts of the body. Turing & Champernowne invented a program that plays chess and also were known (apocryphally, apparently) to play “run-around-the-house chess” or “Turing chess” which involved standing up and jogging for a lap in-between chess moves. The ability to play Turing chess is cognitively embodied but the ability to play chess is merely the ability to represent and manipulate certain patterns.
At the end of the day what defines art is the existence of intention behind it — the fact that some consciousness experienced thoughts that it subsequently tried to communicate. Without that there’s simply lines on paper, splotches of color, and noise. At the risk of tautology, meaning exists because people mean things.
Art is about the expression of memes within a medium; it is cultural propagation. Memes are not thoughts, though; the fact that some consciousness experienced and communicated memes is not a product of thought but a product of memetic evolution. The only other thing that art can carry is what carries it: the patterns which emerge from the encoding of the memes upon the medium.
corbin@awful.systemsto
SneerClub@awful.systems•Yudkowsky denies the accusations! several thousand words in, and ten years after they were madeEnglish
12·24 days agoHe very much wants you to know that he knows that the Zizians are trans-coded and that he’s okay with that, he’s cool, he welcomes trans folks into Rationalism, he’s totally an ally, etc. How does he phrase that, exactly?
That cult began among, and recruited from, a vulnerable subclass of a class of people who had earlier found tolerance and shelter in what calls itself the ‘rationalist’ community. I am not explicitly naming that class of people because the vast supermajority of them have not joined murder cults, and what other people do should not be their problem.
I mean, yes in the abstract, but would it really be so hard to say that MIRI supports trans rights? What other people do, when those other people form a majority of a hateful society, is very much a problem for the trans community! So much for status signaling.
corbin@awful.systemsto
SneerClub@awful.systems•Habryka posts a NEW OFFICIAL LESSWRONG ENEMIES LIST. Guess who's #1, go on, guessEnglish
16·25 days agoThis is a list of apostates. The idea is not to actually detail the folks who do the most damage to the cult’s reputation, but to attack the few folks who were once members and left because they were no longer interested in being part of a cult. These attacks are usually motivated by emotions as much as a desire to maintain control over the rest of the cult; in all cases, the sentiment is that the apostate dared to defy leadership. Usually, attacks on apostates are backed up by some sort of enforcement mechanism, from calls for stochastic terrorism to accusations of criminality; here, there’s not actually a call to do anything external, possibly because Habryka realizes that the optics are bad but more likely because Habryka doesn’t really have much power beyond those places where he’s already an administrator. (That said, I would encourage everybody to become aware of, say, CoS’s Fair Game policy or Noisy Investigation policy to get an idea of what kinds of attacks could occur.)
There are several prominent names that aren’t here. I’d guess that Habryka hasn’t been meditating over this list for a long time; it’s just the first few people that came to mind when he wrote this note. This is somewhat reassuring, as it suggests that he doesn’t fully understand how cultural critiques of LW affect the perception of LW more broadly; he doesn’t realize how many people e.g. Breadtube reaches. Also, he doesn’t understand that folks like SBF and Yarvin do immense reputational damage to rationalist-adjacent projects, although he seems to understand that the main issue with Zizians is not that they are Cringe but that they have been accused of multiple violent felonies.
Not many sneers to choose from, but I think one commenter gets it right:
In other groups with which I’m familiar, you would kick out people you think are actually a danger, or who you think might do something that brings your group into disrepute. But otherwise, I think it’s a sign of being a cult if you kick people out for not going along with the group dogma.
corbin@awful.systemsto
TechTakes@awful.systems•Stubsack: weekly thread for sneers not worth an entire post, week ending 16th November 2025English
3·26 days agoPreviously, on Awful, I wrote up what I understand to be their core belief structure. It’s too bad that we’re not calling them the Cyclone Emoji cult.
corbin@awful.systemsto
TechTakes@awful.systems•Stubsack: weekly thread for sneers not worth an entire post, week ending 26th October 2025English
4·2 months ago“Blue Monday” was released in 1983.
corbin@awful.systemsto
TechTakes@awful.systems•Stubsack: weekly thread for sneers not worth an entire post, week ending 26th October 2025English
9·2 months agoHey now, at least the bowl of salvia has a theme, predictable effects, immersive sensations, and the ability to make people feel emotions.
corbin@awful.systemsto
TechTakes@awful.systems•Stubsack: weekly thread for sneers not worth an entire post, week ending 26th October 2025English
42·2 months agoThanks! You’re getting better with your insults; that’s a big step up from your trite classics like “sweet summer child”. As long as you’re here and not reading, let’s not read from my third link:
As a former musician, I know that there is no way to train a modern musician, or any other modern artist, without heavy amounts of copyright infringement. Copying pages at the library, copying CDs for practice, taking photos of sculptures and paintings, examining architectural blueprints of real buildings. The system simultaneously expects us to be well-cultured, and to not own our culture. I suggest that, of those two, the former is important and the latter is yet another attempt to coerce and control people via subversion of the public domain.
Maybe you’re a little busy with your Biblical work-or-starve mindset, but I encourage you to think about why we even have copyright if it must be flouted in order to become a skilled artist. It’s worth knowing that musicians don’t expect to make a living from our craft; we expect to work a day job too.
corbin@awful.systemsto
TechTakes@awful.systems•Stubsack: weekly thread for sneers not worth an entire post, week ending 26th October 2025English
51·2 months ago[Copyright i]s not for you who love to make art and prize it for its cultural impact and expressive power, but for folks who want to trade art for money.
Quoting Anarchism Triumphant, an extended sneer against copyright:
I wanted to point out something else: that our world consists increasingly of nothing but large numbers (also known as bitstreams), and that - for reasons having nothing to do with emergent properties of the numbers themselves - the legal system is presently committed to treating similar numbers radically differently. No one can tell, simply by looking at a number that is 100 million digits long, whether that number is subject to patent, copyright, or trade secret protection, or indeed whether it is “owned” by anyone at all. So the legal system we have - blessed as we are by its consequences if we are copyright teachers, Congressmen, Gucci-gulchers or Big Rupert himself - is compelled to treat indistinguishable things in unlike ways.
Or more politely, previously, on Lobsters:
Another big problem is that it’s not at all clear whether information, in the information-theoretic sense, is a medium through which expressive works can be created; that is, it’s not clear whether bits qualify for copyright. Certainly, all around the world, legal systems have assumed that bits are a medium. But perhaps bits have no color. Perhaps homomorphic encryption implies that color is unmeasurable. It is well-accepted even to legal scholars that abstract systems and mathematics aren’t patentable, although the application of this to computers clearly shows that the legal folks involved don’t understand information theory well enough.
Were we anti-copyright leftists really so invisible before, or have you been assuming that No True Leftist would be anti-copyright?
corbin@awful.systemsto
TechTakes@awful.systems•Stubsack: weekly thread for sneers not worth an entire post, week ending 26th October 2025English
10·2 months agoClosely related is a thought I had after responding to yet another paper that says hallucinations can be fixed:
I’m starting to suspect that mathematics is not an emergent skill of language models. Formally, given a fixed set of hard mathematical questions, it doesn’t appear that increasing training data necessarily improves the model’s ability to generate valid proofs answering those questions. There could be a sharp divide between memetically-trained models which only know cultural concepts and models like Gödel machines or genetic evolution which easily generate proofs but have no cultural awareness whatsoever.
corbin@awful.systemsto
TechTakes@awful.systems•Stubsack: weekly thread for sneers not worth an entire post, week ending 26th October 2025English
4·2 months ago“Not Winston Smith?” So, O’Brien?
The author also proposes a framework for analyzing claims about generative AI. I don’t know if I endorse it fully, but I agree that each of the four talking points represents a massive failure of understanding. Their LIES model is:
I would add to this a P, for Plausibility or Personhood or Personality: the incorrect claim that the bots are people. Maybe call it PILES.