• 0 Posts
  • 68 Comments
Joined 9 months ago
Cake day: May 16th, 2025


  • $1000 a week?? Even putting aside literally all of the other issues of AI, it is quite damning that AI cannot even beat humans on cost. AI somehow manages to screw up the one undeniable advantage of software. How do these people delude themselves into thinking that the dogshit they’re eating is good?

    As a sidenote, I think after the bubble collapses, the people who predict that there will still be some uses for genAI are mostly wrong. In large part, this is because they do not realize just how ruinously expensive it is to run these models, let alone scrape data and train them. Right now, these costs are being subsidized by venture capitalists putting their money into a furnace.






  • I study complexity theory so this is precisely my wheelhouse. I confess I did not read most of it in detail, because it does spend a ton of space working through tedious examples. This is a huge red flag for math (theoretical computer science is basically a branch of math), because if you truly have a result or idea, you need a precise statement and a mathematical proof. If you’re muddling through examples, that generally means you either don’t know what your precise statement is or you don’t have a proof. I’d say not having a precise statement is much worse, and that is what is happening here.

    Wolfram here believes that he can make big progress on stuff like P vs NP by literally just going through all the Turing machines and seeing what they do. It’s the equivalent of someone saying, “Hey, I have some ideas about the Collatz conjecture! I worked out all the numbers from 1 to 30 and they all worked.” This analogy is still too generous; integers are much easier to work with than Turing machines. After all, not all Turing machines halt, and there is literally no way to decide which ones do. Even the ones that halt can take an absurd amount of time to halt (and again, exactly how long is impossible to decide in general). Wolfram does reference the halting problem on occasion, but quickly waves it away by saying, “in lots of particular cases … it may be easy enough to tell what’s going to happen.” That is not reassuring.
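    To make the analogy concrete, here’s the entire “proof” technique as a toy Python sketch (the 10,000-step cutoff is an arbitrary safeguard I picked, since nobody has proven the Collatz iteration terminates):

```python
# "Verifying" the Collatz conjecture for 1..30: every n reaches 1,
# which proves exactly nothing about all integers -- just as cataloguing
# small Turing machines proves nothing about P vs NP.

def reaches_one(n, max_steps=10_000):
    """Apply the Collatz map until we hit 1, or give up after max_steps."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

print(all(reaches_one(n) for n in range(1, 31)))  # -> True, and still not a proof
```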

    I am also doubtful that he fully understands what P and NP really are. Complexity classes like P and NP are ultimately about problems, like “find me a solution to this set of linear equations” or “figure out how to pack these boxes in a bin.” (The second one is much harder.) Only then do you consider which problems can be solved efficiently by Turing machines. Wolfram focuses on the complexity of Turing machines, but P vs NP is about the complexity of problems. We don’t care about the “arbitrary Turing machines ‘in the wild’” that have absurd runtimes, because, again, we only care about the machines that solve the problems we want to solve.

    Also, for a machine to solve problems, it needs to take input. After all, a linear equation solving machine should work no matter what linear equations I give it. To have some understanding of even a single machine, Wolfram would need to analyze the behavior of the machine on all (infinitely many) inputs. He doesn’t even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.

    Finally, here are some quibbles about some of the strange terminology he uses. He talks about “ruliology” as some kind of field of science or math, and it seems to mean the study of how systems evolve under simple rules or something. Any field of study can be summarized in this kind of way, but in the end, a field of study needs to have theories in the scientific sense or theorems in the mathematical sense, not just observations. He also talks about “computational irreducibility”, which apparently amounts to asking what the smallest Turing machine is that computes a given function. Not only does this not help his project, but there is already a legitimate subfield of complexity theory, called meta-complexity, that is productively investigating this idea!

    Considered as an attempt at solving P vs NP, I would not disagree if someone called this crank work. I think Wolfram greatly overestimates the effectiveness of just working through a bunch of examples in comparison to having a deeper understanding of the theory. (I could make a joke about LLMs here, but I digress.)




    I’d say even the part where the article tries to formally state the theorem is not written well. Even so, it’s very clear how narrow the formal statement is. You can say that two agents agree on any statement that is common knowledge, but you have to be careful about exactly how you’re defining “agent”, “statement”, and “common knowledge”. If I actually wanted to prove a point with Aumann’s agreement theorem, I’d have to make sure my scenario fits in the mathematical framework. What is my state space? What are the partitions of the state space that model each agent’s information? Etc.
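    To be concrete about how narrow it is, the standard statement (Aumann, 1976) goes roughly like this:

```latex
% Aumann's agreement theorem (1976), roughly stated.
% \Omega is a finite state space with a common prior P;
% each agent i's private information is a partition \Pi_i of \Omega.
\textbf{Theorem.} Let $(\Omega, P)$ be a finite probability space shared as a
common prior by agents $1$ and $2$, whose information is given by partitions
$\Pi_1, \Pi_2$ of $\Omega$. Fix an event $E \subseteq \Omega$. If, at a state
$\omega$, the posteriors $q_i = P\bigl(E \mid \Pi_i(\omega)\bigr)$ are common
knowledge, then $q_1 = q_2$.
```

    Every hypothesis in that statement (common prior, information partitions, common knowledge of the exact posteriors) is a place where a real-world scenario can fail to fit.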

    The rats never seem to do the legwork that’s necessary to apply a mathematical theorem. I doubt most of them even understand the formal statement of Aumann’s theorem. Yud is all about “shut up and multiply,” but has anyone ever seen him apply Bayes’s theorem and multiply two actual probabilities? All they seem to do is pull numbers out of their ass and fit superexponential curves to 6 data points because the superintelligent AI is definitely coming in 2027.
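    For contrast, here is what “multiplying two actual probabilities” looks like; the numbers below are invented purely for illustration:

```python
# Bayes' theorem with concrete (made-up) numbers:
#   P(H | E) = P(E | H) * P(H) / P(E)

prior = 0.01           # P(H): prior probability of the hypothesis
likelihood = 0.90      # P(E | H): probability of the evidence if H is true
false_positive = 0.05  # P(E | ~H): probability of the evidence if H is false

# P(E) via the law of total probability
evidence = likelihood * prior + false_positive * (1 - prior)
posterior = likelihood * prior / evidence  # P(H | E)

print(round(posterior, 3))  # -> 0.154
```

    Even this five-line exercise requires committing to explicit numbers, which is precisely the step that never happens.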


  • The sad thing is I have some idea of what it’s trying to say. One of the many weird habits of the Rationalists is that they fixate on a few obscure mathematical theorems and then come up with their own ideas of what these theorems really mean. Their interpretations may be only loosely inspired by the actual statements of the theorems, but it does feel real good when your ideas feel as solid as math.

    One of these theorems is Aumann’s agreement theorem. I don’t know what the actual theorem says, but the LW interpretation is that any two “rational” people must eventually agree on every issue after enough discussion, whatever rational means. So if you disagree with any LW principles, you just haven’t read enough 20k word blog posts. Unfortunately, most people with “bounded levels of compute” ain’t got the time, so they can’t necessarily converge on the meta level of, never mind, screw this, I’m not explaining this shit. I don’t want to figure this out anymore.


  • Randomly stumbled upon one of the great ideas of our esteemed Silicon Valley startup founders, one that is apparently worth at least 8.7 million dollars: https://xcancel.com/ndrewpignanelli/status/1998082328715841925#m

    Excited to announce we’ve raised $8.7 Million in seed funding led by @usv with participation from [list a bunch of VC firms here]

    @intelligenceco is building the infrastructure for the one-person billion-dollar company. You still can’t use AI to actually run a business. Current approaches involve lots of custom code, narrow job functions, and old fashioned deterministic workflows. We’re going to change that.

    We’re turning Cofounder from an assistant into the first full-stack agent company platform. Teams will be able to run departments - product/engineering, sales/GTM, customer support, and ops - entirely with agents.

    Then, in 2026 we’ll be the first ones to demonstrate a software company entirely run by agents.

    $8.7 million is quite impressive, yes, but I have an even better strategy for funding them. They can use their own product and become billionaires, and now they can easily come up with $8.7 million considering that is only 0.87% of their wealth. Are these guys hiring? I also have a great deal on the Brooklyn Bridge that I need to tell them about!

    Our branding - with the sunflowers, lush greenery, and people spending time with their friends - reflects our vision for the world. That’s the world we want to build. A world where people actually work less and can spend time doing the things they love.

    We’re going to make it easy for anyone to start a company and build that life for themselves. The life they want to build, and spend every day dreaming about.

    This just makes me angry at how disconnected from reality these people are. All this talk about giving people better lives (and lots of sunflowers), and yet it is an unquestionable axiom that the only way to live a good life is to become a billionaire startup founder. These people do not have any understanding or perspective other than their narrow culture that is currently enabling the rich and powerful to plunder this country.



  • These worries are real. But in many cases, they’re about changes that haven’t come yet.

    Of all the statements that he could have made, this is one of the least self-aware. It is always the pro-AI shills who constantly talk about how AI is going to be amazing and have all these wonderful benefits next year (curve go up). I will also count the doomers who are useful idiots for the AI companies.

    The critics are the ones who look at what AI is actually doing. The informed critics look at the unreliability of AI for any useful purpose, the psychological harm it has caused to many people, the absurd amount of resources being dumped into it, the flimsy financial house of cards supporting it, and at the root of it all, the delusions of the people who desperately want it to all work out so they can be even richer. But even people who aren’t especially informed can see all the slop being shoved down their throats while not seeing any of the supposed magical benefits. Why wouldn’t they fear and loathe AI?



  • There are some comments speculating that some pro-AI people try to infiltrate anti-AI subreddits by applying for moderator positions and then shutting those subreddits down. I think this is the most reasonable explanation for why the mods of “cogsuckers” of all places are sealions for pro-AI arguments. (In the more recent posts in that subreddit, I recognized many usernames who were prominent mods in pro-AI subreddits.)

    I don’t understand what they gain from shutting down subreddits of all things. Do they really think that using these scummy tactics will somehow result in more positive opinions towards AI? Or are they trying the fascist gambit hoping that they will have so much power that public opinion won’t matter anymore? They aren’t exactly billionaires buying out media networks.


    Don’t forget the other comment saying that if you hate AI, you’re just “vice-signalling” and “telegraphing your incuruosity (sic) far and wide”. AI is just like computer graphics in the 1960s, apparently. We’re still in early days guys, we’ve only invested trillions of dollars into this and stolen the collective works of everyone on the internet, and we don’t have any better ideas than throwing more money and compute at the problem! The scaling is still working guys, look at these benchmarks that we totally didn’t pay for. Look at these models doing mathematical reasoning. Actually don’t look at those, you can’t see them because they’re proprietary and live in Canada.

    In other news, I drew a chart the other day, and I can confidently predict that my newborn baby is on track to weigh 10 trillion pounds by age 10.

    EDIT: Rich Hickey has now disabled comments. Fair enough, arguing with promptfondlers is a waste of time and sanity.


    I went deep into the Yud lore once. A single fluke SAT score served as the basis for Yud’s belief in his own world-changing importance. In middle school, he took the SAT and scored 670 verbal and 740 math (maximum 800 each), and the Midwest Talent Search contacted him to tell him that his scores were very high for a middle schooler. Despite his great pains to talk about how he tried to be humble about it, he also says that he was in the “99.9998th percentile” and “not only bright but waayy out of the ordinary.”

    I was in the math contest scene. I have good friends who did well on AP Calculus in middle school and were skilled enough at contests that they would have easily gotten an 800 on the math SAT if they had taken it. Even so, there were middle schoolers who were far more skilled than them, and I have seen other people who were far less “talented” in middle school rise to great heights later in life. As it turns out, skills can be developed through practice.

    Yud’s performance would not even be considered impressive in the math contest community, let alone justify calling him one of the most important people in the world. Perhaps at the time, he didn’t know better. But he decided to make this a core part of his self-identity. His life quickly spiraled out of control, starting with him refusing to attend high school.


  • It is how professors talk to each other in … debate halls? What the fuck? Yud really doesn’t have any clue how universities work.

    I am a PhD student right now so I have a far better idea of how professors talk to each other. The way most professors (in math/CS at least) communicate in a spoken setting is through giving talks at conferences. The cool professors use chalkboards, but most people these days use slides. As it turns out, debates are really fucking stupid for scientific research for so many reasons.

    1. Science assumes good faith from everyone, and debates are needlessly adversarial. This is why everyone just presents and listens to talks.
    2. Debates are actually really bad for the kind of deep analysis and thought needed to understand new research. If you want to seriously consider novel ideas, it’s not so easy when you’re expected to come up with a response in the next few minutes.
    3. Debates generally favor people who use good rhetoric and can package their ideas more neatly, not the people who really have more interesting ideas.
    4. If you want to justify a scientific claim, you do it with experiments and evidence (or a mathematical proof when applicable). What purpose does a debate serve?

    I think Yud’s fixation on debates and “winning” reflects what he thinks of intellectualism. For him, it is merely a means to an end. The real goal is to be superior and beat up other people.