

re: last line: no, he never will admit or concede to a single damn thing, and that’s why every time I remember this article exists I have to reread dabblers & blowhards one more time purely for defensive catharsis
The problem with calling imaginary entities by “funny wordplay” on the slurs used against Black people and Mexicans isn’t the imaginary entities; it’s that you imply that Black people and Mexicans are something negative to be compared to. It implies that racial slurs are so trifling and inconsequential that they’re appropriate subject matter for puns; it implies racial slurs are not an act of targeted oppression.
That’s literally the opposite of calling nazis nazis. Personally I deal with nazis through the use of direct violence. The world deals with Black people and immigrants through systemic violence. There’s a process by which people get convinced that it is ok that Black people get targeted by police, and that process begins with hegemonic normalisation of supremacist values—it begins with words, with implications. Just like, for example, the process by which it becomes OK to discard the lives of disabled people begins with language that insults others based on “intelligence”.
It is contemptible to be a fascist; it is not contemptible to be a wetback. Therefore it is a good thing to insult the machines by comparing them to 1984 versificators; it is a bad thing to insult the machines by comparing them to Mexicans. The direction you insult towards matters, just like there’s a difference between violence done by the oppressor and violence done by the oppressed.
So I learned about the rise of pro-Clippy sentiment in the wake of ChatGPT and that led me on a little ramble about the ELIZA effect vs. the exercise of empathy https://awful.systems/post/5495333
I’ve often called slop “signal-shaped noise”. I think the damage already done by slop pissed all over the reservoirs of knowledge, art and culture is irreversible and long-lasting. This is the only thing generative “AI” is good at: making spam that’s hard to detect.
It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email; no more and no less. I remember how it was a small revolution, in the arms race against spammers, when statistical methods came up; everywhere we took the load off a straining SpamAssassin with rspamd (in the years before gmail devoured us all). I would argue “A Plan for Spam” launched Paul Graham’s notoriety, much more than the Lisp web stores he was so proud of. Filtering emails by keywords was no longer enough, and now you could train your computer to gradually recognise emails that looked off, for whatever definition of “off” worked for your specific inbox.
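For anyone who never had to run one of these: here’s a minimal sketch of that per-token Bayesian scoring idea, loosely in the spirit of “A Plan for Spam”. The tiny corpora and the log-odds combination are my own illustration, not Graham’s actual code or SpamAssassin’s internals.

```python
# Toy per-token Bayesian spam scoring, for illustration only.
# The corpora below are invented; a real filter trains on your own inbox.
from collections import Counter
import math
import re

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

spam_corpus = ["cheap meds click here", "click here for free money"]
ham_corpus = ["meeting notes attached", "lunch tomorrow with the team"]

spam_counts = Counter(t for msg in spam_corpus for t in tokens(msg))
ham_counts = Counter(t for msg in ham_corpus for t in tokens(msg))

def spam_probability(word):
    # Rough P(spam | word), with add-one smoothing so unseen words sit near 0.5.
    s = (spam_counts[word] + 1) / (sum(spam_counts.values()) + 2)
    h = (ham_counts[word] + 1) / (sum(ham_counts.values()) + 2)
    return s / (s + h)

def classify(message):
    # Combine per-token scores in log-odds space (naive independence assumption).
    log_odds = sum(
        math.log(p) - math.log(1 - p)
        for p in (spam_probability(t) for t in tokens(message))
    )
    return 1 / (1 + math.exp(-log_odds))  # back to a probability

print(classify("click here for cheap meds"))   # well above 0.5: looks spammy
print(classify("notes from the team meeting")) # well below 0.5: looks like ham
```

The whole trick is that the model only has to tell “looks like my mail” from “looks off”; it never needs to understand anything, which is exactly the property that gets inverted below.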
Now we have the richest people building the most expensive, energy-intensive superclusters to use the same statistical methods the other way around, to generate spam that looks like not-spam, and is therefore immune to all the filtering strategies we had developed. The same blob-like malleability that made spam filters adaptable now lets the spam generators fit their output to whatever niche they want to pollute; the noise can be shaped like any signal.
I wonder what PG is saying about gen-“AI” these days? let’s check:
“AI is the exact opposite of a solution in search of a problem,” he wrote on X. “It’s the solution to far more problems than its developers even knew existed … AI is turning out to be the missing piece in a large number of important, almost-completed puzzles.”
He shared no examples, but […]
Who would have thought that A Plan for Spam was, all along, a plan for spam.
choice quote from Elsevier’s response:
Q. Have authors consented to these hyperlinks in their scientific articles?
Yes, it is included on the signed agreement between the author and Elsevier.
Q. If I were to publish my work with Elsevier, do I risk that hyperlinks to AI summaries will be added to my papers without my consent?
Yes, because you will need to sign an agreement with Elsevier.
consent, everyone!
From gormless gray voice to misattributed sources, it can be daunting to read articles that turn out to be slop. However, incorporating the right tools and techniques can help you navigate instructionals in the age of AI. Let’s delve right in and learn some telltale signs like:
Dunno, I just enjoyed the fuck out of “Landlocked in Foreign Skin”, like it’s been a long time since I paused my life to devour a book in one sitting like this, and given that Drew Huff writes from Seattle I’m thinking they’re a USian? And I was really engrossed by Arkady Martine’s A Memory Called Empire, which resonated a lot with my experiences as a Third World immigrant, with a certain honesty in its portrayal of what it feels like to admire “culture” at a distance from a colony that I seldom see (I’m on book #2 currently). I’m more of a fantasy reader, but Octavia Butler and Le Guin’s sci-fi were absolutely formative to me, and if you ask me for one modern sci-fi series I liked besides those mentioned so far, I’d probably say Wayfarers or Monk & Robot. Plenty of good SF authors from the USA whose politics are more or less the opposite of what you describe.
The trick is I read books by queer folk, women and PoC almost exclusively. Absolutely don’t regret it, all the fun stuff is there in the margins.