

…I will freely admit to not knowing the norms of courtroom conduct, but isn’t having pre-established penalties for specific infractions central to the whole concept of law itself?


We are three paragraphs and one subheading down before we hit an Ayn Rand quote. This clearly bodes well.
A couple of paragraphs later we’re ignoring both the obvious philosophical discussion about creativity and the more immediate argument about why this technology is being forced on us so aggressively. As much as I’d love to rant about this, I got distracted by the next bit talking about how microexpressions will let LLMs decode emotions and whatever. I’d love to know this guy’s thoughts on that AI-powered phrenologist feature from a couple of weeks ago.


Hang on, I’ve been trying to create a whole house for this joke and I could have just used the bathroom?


What’s more plausible, that I made a bad assumption in my Fermi estimation or that all the world’s governments have been undertaking the most wildly successful cover-up for nearly a century with no leaks or failures? Clearly the latter.


Factor Fexcectorn sounds like a Roman centurion who tried to improve the army’s logistics by hitching multiple wagons together in sequence.


So I’m not double-checking their work, because that’s more of a time and energy investment than I’m prepared for here. I also do not have the perspective of someone who has actually had to make the relevant top-level decisions. But caveats aside, I think there are some interesting conclusions to be drawn here:
It’s actually heartening to see that even the LW comments open by bringing up how optimistic this analysis is about the capabilities of LLM-based systems. “Our chatbot fucked up” has some significant fiscal downsides that need to be accounted for.
The initial comparison of direct API costs is interesting because the work of setting up and running this hypothetical replacement system is not trivial and cannot reasonably be outsourced to whoever has the lowest cost of labor. I would assume that the additional requirements of setting up and running your own foundation model similarly eat through most of the benefits of vertical integration, even before we get into how radically (and therefore disastrously) that would expand the scope of most companies. Most organizations that aren’t already tech companies couldn’t do it, and those that could will likely not see the advertised returns.
I’m not sure how much of the AI bubble we’re in is even driven by an expectation of actual financial returns at this point. To what extent are we looking at an investor and managerial class that is excited to put “AI” somewhere on their reports because that’s the current Cutting Edge of Disruptive Digital Transformation into New Paradigms of Technology and Innovation and whatever else all these business idiots think they’re supposed to do all day?
I’m actually going to ignore the question of what happens to the displaced workers here, because the idea that this job is something that earns a decent living wage is just as dead whether it’s replaced by AI or outsourced to whoever has the fewest worker protections. That said, I will pour one out for my frontline IT comrades in South Africa and beyond. Whenever this question is asked, the answer is bad for us.


Finally had a chance to listen, continuing to enjoy it greatly and commenting here in lieu of having Patreon money.
I feel like some of what you talk about with Powell’s libertarian economics contrasting with his racist cultural chauvinism ties in with our good friends in Silicon Valley and the way their libertarianism seems to have moved so swiftly into technofascism and getting on board with The Guy. Being openly racist appears to have been almost like the missing piece that ties it into an internally consistent political project.


This bounced off of the earlier stub about LLM recipes to create a new cooking show: Chef Jippity. The contestants are all sous chefs at a new restaurant, with the head of the kitchen being some dumbass who blindly follows the instructions of an LLM. Can you work around the robot to create edible food or will Chef Jippity run this whole thing into the ground and lose everyone their jobs? Find out Thursday on Food Network!


Twitter adds default country tags. Immediately finds a whole bunch of foreign bots agitating about US politics. Promptly ignores that in order to be racist.


I’m going to laugh if they try to spin it as “we’re not being racist, we just wanted to get as much institutional clout as possible and avoided prominently featuring anyone from other institutions!”
As Warren Buffett might quip: only buy what you’d hold if markets closed for a decade.
And once again the conservative sandwich-heavy portfolio pays off for the hungry investor!


Harry takes a swig and immediately sees the truth: that he is the smartest specialest bestest boy in the universe. He spends the remaining 73 chapters celebrating and gloating about this fact while accomplishing nothing. So the story doesn’t meaningfully change at all.


Literally every trend he brings up to showcase this cultural stagnation is some combination of aesthetic, nonexistent, or driven by income inequality and a loss of economic security.
The whole thing has a vaguely ex-Catholic vibe where sin is simultaneously the result of evil actions on earth and also something that’s inherently part of your soul as a human being because dumb woman ate an apple. As someone who was raised in the church, to a degree it never felt unreal and actually resonated pretty hard, but also yeah, it doesn’t make a lot of sense logically.


They say the unexamined life isn’t worth living, but outsourcing the examination to an LLM gives you more time to hustle and grind, maximizing financial returns. That’s what they mean, right?


…you know the first time you mentioned this I assumed it was just for the bit but now I’m both impressed and intimidated.


So data lake and data warehouse are different words for the giant databases of business data that you can perform analytics on to understand your deep business lore or whatever. I assume that a data lake house is similar to the other two, just poorly maintained and inconvenient to access, but with a very nice UI and a boat dock.


One of the only reasons I’m hesitant to call Rationalism a cult in its own right is that Yudkowsky and friends always seem to respond to this element of cultiness by saying “oh, let me explain our in-group jargon in exhaustive detail so that you can more or less understand what we’re trying to say” rather than “you just need to buy our book and attend some meetings and talk to the guru and wear this robe…”


This is why I only hang out with groups like “terrible shit” or “bunch of self-satisfied assholes”. This has worked only to my advantage so far.
Hat tip to the person who wants to try and include DMT and other hallucinogens and psychedelics. How many of these experiences are gonna be tagged “Machine Elves” by the time anyone starts asking wtf we’re doing here?