• 0 Posts
  • 9 Comments
Cake day: May 16th, 2025


  • On one side, we have a trolley-problem thought experiment involving hypothetical children tied to hypothetical train tracks, plus some people sending him rude emails. On the other side, we have actual dead children and actual hospitals and apartments reduced to rubble. I wonder which side is more convincing to me.

    It’s the same pattern of thought as rationalists with AI: trying to fit everything they see into their apocalypse narrative while ignoring the real harms. Rationalists talk a good game about evidence, but what I see them do in practice is very different. First, use mental masturbation (excuse me, “first principles”) to arrive at some predetermined edgy narrative, then cherry-pick and misinterpret all evidence to support it. It is very important that the narratives be edgy; otherwise, what are we even writing 10,000-word blog posts for?

  • I have a lot to say about Scott, since I used to read his blog frequently and it shaped my worldview. This blog title is funny. It was quite obvious that he at least entertained, if not outright supported, the rationalists for a long time.

    For me, the final break came when he defended SBF. One of his defenses was that SBF was a nerd, so he couldn’t have had bad intentions. I share a lot of background with both SBF and Scott (we all did a lot of math contests in high school), but even I knew that being a nerd is not remotely an excuse for stealing billions of dollars.

    I feel like a lot of his worldview centers on nerds vs. everyone else. There’s this archetype of the nerd: awkward but well-intentioned, smart people who can change the world. They know better than everyone else how to improve the world, so they should be given as much power as possible. I now realize that this cultural conception of a nerd actually has very little to do with how smart or well-intentioned you really are. The rationalists aren’t very good at technical matters (experts in a field can easily spot their errors), but they pull off this culture very well.

    Recently, I watched a talk by Scott in which he mentioned an anecdote from his time at OpenAI: Ilya Sutskever asked him to come up with a formal, mathematical definition of whether “an AI loves humanity”. That actually pissed me off. I thought: can we even define whether a human loves humanity? Yeah, surely all the literature, art, and music in the world is unnecessary now; we’ve got a definition right here!

    If there’s one thing I’ve learned from all this, it’s that actions speak louder than any number of 10,000-word blog posts. Perhaps the rationalists could stop their theorycrafting for once and, you know, look at what Sam Altman and friends are actually doing.