It’s not always easy to distinguish between existentialism and a bad mood.

  • 6 Posts
  • 238 Comments
Joined 3 years ago
Cake day: July 2nd, 2023

  • He isn’t even trying with the yellow and orange boxes. What the fuck do “high-D toroidal attractor manifolds” and “6D helical manifolds” have to do with anything? Why are they there? And with the “(???, nothing)” business, he really thinks he can get away with nobody closely reading his charts. Maybe I should throw a box like that into my publications and see how that goes.

    It’s from another horseshit analogy that roughly boils down to both neural net inference (specifically when generating end-of-line tokens) and aspects of specific biological components of human perception being somewhat geometrically modellable. I didn’t include the entire context or a link to the substack in the OP because I didn’t care to, but here is the analogy in full:

    spoiler

    The answer was: the AI represents various features of the line breaking process as one-dimensional helical manifolds in a six-dimensional space, then rotates the manifolds in some way that corresponds to multiplying or comparing the numbers that they’re representing. You don’t need to understand what this means, so I’ve relegated my half-hearted attempt to explain it to a footnote¹. From our point of view, what’s important is that this doesn’t look like “LOL, it just sees that the last token was ree and there’s a 12.27% chance of a line break token following ree.” Next-token prediction created this system, but the system itself can involve arbitrary choices about how to represent and manipulate data.

    Human neuron interpretability is even harder than AI neuron interpretability, but probably your thoughts involve something at least as weird as helical manifolds in 6D spaces. I searched the literature for the closest human equivalent to Claude’s weird helical manifolds, and was able to find one team talking about how the entorhinal cells in the hippocampus, which help you track locations in 2D space, use “high-dimensional toroidal attractor manifolds”. You never think about these, and if Claude is conscious, it doesn’t think about its helices either². These are just the sorts of strange hacks that next-token/next-sense-datum prediction algorithms discover to encode complicated concepts onto physical computational substrate.

    re: the bolded part, I like how explicitly cherry-picking neuroscience passes for peak rationalism.



  • I like how even by ACX standards scoot’s posts on AI are pure brain damage

    One level lower down, your brain was shaped by next-sense-datum prediction - partly you learned how to do addition because only the mechanism of addition correctly predicted the next word out of your teacher’s mouth when she said “three plus three is . . . “ (it’s more complicated than this, sorry, but this oversimplification is basically true). But you don’t feel like you’re predicting anything when you’re doing a math problem. You’re just doing good, normal mathematical steps, like reciting “P.E.M.D.A.S.” to yourself and carrying the one.

    The most compelling analogy: this is like expecting humans to be “just survival-and-reproduction machines” because survival and reproduction were the optimization criteria in our evolutionary history. […] This simple analogy is slightly off, because it’s confusing two optimization levels: the outer optimization level (in humans, evolution optimizing for reproduction; in AIs, companies optimizing for profit) with the inner optimization level (in humans, next-sense-datum prediction; in AIs, next-token prediction). But the stochastic parrot people probably haven’t gotten to the point where they learn that humans are next sense-datum predictors, so the evolution/reproduction one above might make a better didactic tool.

    He also threatens an Anti-Stochastic-Parrot FAQ.

    Here’s hoping that if this happens, Bender et al. enthusiastically point out that it’s coming from a guy whose long-term master plan is to fight evil AI with eugenics. Or, if they’re feeling less charitable, that he uses the threat of evil AI to make eugenics great again.

  • That was a good read.

    Cory Doctorow wrote:

    It’s not “unethical” to scrape the web in order to create and analyze data-sets. That’s just “a search engine”

    Equating what LLMs do, and what goes into LLM web scraping, with “a search engine” is messed up. The article he links about scraping is mostly about how badly copyright works, and about how analysing trade-secret-walled data can benefit both consumers and science while occasionally being bad for citizen privacy, which you’ll recognize as mostly irrelevant to the concerns people actually raise against LLM training data providers DDoSing the fuck out of everything, and all the rest of the stuff tante does a good job of explaining.

    Cory also provides this anecdote:

    As a group of human-rights defending forensic statisticians, HRDAG has always relied on cutting edge mathematics in its analysis. With its Colombia project, HRDAG used a large language model to assign probabilities for responsibility for each killing documented in the databases it analyzed.

    That is, HRDAG was able to rigorously and legibly say, “This killing has an X% probability of having been carried out by a right-wing militia, a Y% probability of having been carried out by the FARC, and a Z% probability of being unrelated to the civil war.”

    The use of large language models — produced from vast corpuses of scraped data — to produce accurate, thorough and comprehensible accounts of the hidden crimes that accompany war and conflict is still in its infancy. But already, these techniques are changing the way we hold criminals to account and bring justice to their victims.

    Scraping to make large language models is good, actually.

    what the actual shit

    edit: I mean, he tried transformer-powered voice-to-text and liked it, and now he’s all in on the “LLMs are actually a rigorous and accurate tool” bandwagon?

    Also, the web scraping article is from 2023, but CD linked it in the recent Pluralistic post, so I assume his views haven’t changed.

  • I mean, the vibe is pretty spot on, I’ll give them that, and the premise of someone making fashy noises in an anarchist squat and getting summarily thrown out on his ass is perfectly believable, but the protagonist seems like an obvious parody character, and I think his belief that everyone he meets should adhere to the NAP is meant as a running gag.

    Also, I’d take issue with the unspoken premise that this would be an Exarcheia thing, getting your ass beat by anarchists who clocked you as a fascist (and vice versa) is a panhellenic phenomenon. Fascist here meaning less someone who likes Tucker Carlson and toothbrush mustaches, and more someone who is an organized member of a heavily nationalist sportball fan club and/or whatever is currently filling the void that Golden Dawn left but isn’t making the news.