Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Last Stubsack for 2025 - may 2026 bring better tidings. Credit and/or blame to David Gerard for starting this.)


https://github.com/leanprover/lean4/blob/master/.claude/CLAUDE.md
Imagine if you had to tell people “now remember to actually look at the code before changing it” – but I’m sure LLMs will replace us any day now.
Also lol this sounds frustrating:
Edit: I might be misreading this, but is this a sign of someone working on an LLM-driven release process? https://github.com/leanprover/lean4/blob/master/.claude/commands/release.md ??
So many CRITICAL and MANDATORY steps in the release instruction file. As always with AI, if it doesn’t work, just use more forceful language and capital letters. One more CRITICAL bullet point bro, that’ll fix everything.
Sadly, I am not too surprised by the developers of Lean turning towards AI. The AI people have been quite interested in Lean for a while now, since they think it is a useful tool for having AIs do math (and math = smart, you know).
Great. Now we’ll need to preserve low-background-radiation computer-verified proofs.
The whole culture of writing “system prompts” seems like utter cargo-culting to me. Like if the ST: Voyager episode “Tuvix” were instead about Lt. Barclay and Picard accidentally getting combined in the transporter, and the resulting sadboy Barcard spent the rest of his existence neurotically shouting his intricately detailed demands at the holodeck in an authoritative British tone.
If inference is all about taking derivatives in a vector space, surely there should be some marginally more deterministic method for constraining those vectors that could be readily proceduralized, instead of apparent subject-matter experts being reduced to wheedling with an imaginary friend. But I have been repeatedly assured by sane, sober experts that it simply is not so.
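To be fair, the general idea does exist under names like “constrained decoding” or “structured output”: mask out any next token that would break the required grammar or schema before sampling, so the model literally cannot emit anything outside the format. A toy sketch of the idea in plain numpy (not any particular vendor’s API):

```python
import numpy as np

def constrained_sample(logits, vocab, allowed, rng):
    """Sample one token, but only from the tokens the format currently allows."""
    mask = np.array([tok in allowed for tok in vocab])
    masked = np.where(mask, logits, -np.inf)   # forbid everything outside the format
    probs = np.exp(masked - masked.max())      # softmax over the allowed tokens only
    probs /= probs.sum()
    return rng.choice(vocab, p=probs)

# Toy example: the "grammar" only permits a yes/no answer.
vocab = np.array(["yes", "no", "maybe", "perhaps"])
logits = np.array([1.0, 0.5, 3.0, 2.0])        # the model really wants to say "maybe"
rng = np.random.default_rng(0)
print(constrained_sample(logits, vocab, {"yes", "no"}, rng))   # always "yes" or "no"
```

Whether the people selling these things expose that knob in a usable way is another question, but “plead harder in all caps” is clearly not the only conceivable option.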
One of my old teachers would send documents to the class with various pieces of information. They were a few years away from retirement and never really got word processors. They would start by putting important stuff in bold. But some important things were more important than others. They got put in bold all caps. Sometimes, information was so critical it got put in bold, underline, all caps and red font colour. At the time we made fun of the teacher, but I don’t think I could blame them. They were doing the best they could with their knowledge of the tools at the time.
Now, in the files linked above I saw the word “never” in all caps, in bold all caps, in italics, and in a normal font. Apparently, one step in the process is mandatory. Are the others optional? This is supposed to be a procedure to be followed to the letter, with each step being there for a reason. And these are supposedly computer-savvy people.
I’ll admit I did not read the scripts in detail, but this is a solved problem. The solution is a script with structured output as part of a pipeline. Why give up one of the only good things computers can do: executing a well-defined task in a deterministic way? Reading this is so exhausting…
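Something like this (completely made-up steps, not the actual Lean release procedure) already gives you a deterministic pipeline with structured output, and nobody has to beg it in bold caps:

```python
#!/usr/bin/env python3
"""Hypothetical release-step runner: the steps are placeholders, the shape is the point."""
import json
import subprocess
import sys

STEPS = [
    ("run_tests",   ["make", "test"]),
    ("build_docs",  ["make", "docs"]),
    ("tag_release", ["git", "tag", "v0.0.0-example"]),
]

def main() -> int:
    results = []
    for name, cmd in STEPS:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results.append({"step": name, "cmd": cmd, "returncode": proc.returncode})
        if proc.returncode != 0:
            # A failed step just stops the run; no ALL-CAPS pleading required.
            print(json.dumps({"status": "failed", "results": results}, indent=2))
            return 1
    print(json.dumps({"status": "ok", "results": results}, indent=2))
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Each step either succeeds or the run stops, and the JSON at the end can feed whatever comes next in the pipeline.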
It reminds me of the bizarre and ill-omened rituals my ancestors used to start a weed eater.
Yes, they are trying to automate releases.
sidenote: I don’t like how applying a mediocre-software-engineering mindset to mathematics is becoming more popular. Update your dependency (whose code you never read) to v0.4.5 for bug fixes! Why was it incorrect in the first place? Anyway, this blog post sets some good rules for reviewing computer proofs. The second-to-last comment tries to argue npm-ification is good, actually. I can’t tell if satire.
would you be willing to elaborate on this? i am just curious because i took the opposite approach (started as a mathematician, now i write bad python scripts)
The flipside to that quote is that computer programs are useful tools for mathematicians. See the Mersenne prime search, the OEIS and its search engine, the L-function database, as well as the various Python scripts and Agda, Rocq, and Lean proofs written to solve specific problems within papers. However, not everything is perfect: throwing more compute at the problem is a bad solution in general, and the stereotypical Python script hacked together for a single purpose has one-letter variable names and redundant expressions, making it hard to review. Throw vibe coding on top of it all, and that’s pretty much the extent of what I mean.
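A made-up, trivial example of the contrast I mean, since “hard to review” is vague on its own:

```python
# The kind of throwaway script I mean (invented example): this computes a
# triangular number, but a reviewer has to simulate the loop to be sure.
def f(n):
    s = 0
    for i in range(n):
        s = s + i + 1
    return s

# The same thing written so it can be checked against the claim at a glance.
def triangular_number(n: int) -> int:
    """Sum of the first n positive integers, i.e. n * (n + 1) / 2."""
    return n * (n + 1) // 2

assert f(10) == triangular_number(10) == 55
```

The first version isn’t wrong, it’s just work to check, and that work multiplies across a whole paper’s worth of scripts.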
I apologize if anything is confusing, I’m not great at communication. I also have yet to apply to a mathematics uni, so maybe this is all manageable in practice.
no need to apologize, i understand what you mean. my experience with mathematicians has been that this is really common. even the theoretical computer scientists (the “lemma, theorem, proof” kind) i have met do this kind of bullshit when they finally decide to write a line of code. hell, their pseudocode is often baffling — if you are literally unable to run the code through a machine, maybe focus on how it comes across to a human reader? nah, it’s more important that i believe it is technically correct and that no one else is able to verify it.