this is Habryka talking about how his moderating skills are so powerful it takes lesswrong three fucking years to block a poster who’s actively being a drain on the site
here’s his reaction to sneerclub (specifically me - thanks Oliver!) calling LessOnline “wordy racist fest”:
A culture of loose status-focused social connection. Fellow sneerers are not trying to build anything together. They are not relying on each other for trade, coordination or anything else. They don’t need to develop protocols of communication that produce functional outcomes, they just need to have fun sneering together.
He gets us! He really gets us!
some UN-associated ACM talk I was listening to recently had someone cite a figure of (iirc) ~$800b total estimated investment[0]. haven't gotten around to fact-checking it, but there are several parts of that talk I want to write up and make more widely known. one of the people in it made some entirely AGI-pilled comments, and it's quite concerning
this talk; looks like the video is finally up on YouTube too (at the time I yanked it by pcap-ing a Zoom playout session - turns out Zoom recordings are hella aggressive about not being shareable)
the question I asked was:
response is about here
[0] edited for correctness; forget where I saw the >$1.5t number
hearing him respond like that in real time, carefully avoiding the point, makes clear the attraction of ChatGPT
Yeah, a new form of apologism I've started seeing online is "this isn't a bubble! Nobody expects an AGI, it's just Sam Altman; it will all pay off nicely from 20 million software developers worldwide spending a few grand a year each."
Which is next-level idiotic, besides the numbers just not adding up. There's only so much open source to plagiarize. It is a very niche activity! It'll plateau, and then a few months later tiny single-GPU models will catch up to this river-boiling shit.
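To see how badly the numbers fail to add up, here's a rough sanity check. The per-seat spend and investment figures below are assumptions for illustration (20 million devs at $2-3k a year, against the ~$800b investment figure from the talk upthread), not numbers from any source:

```python
# Back-of-envelope check of the "20M devs x a few grand/yr pays it all off" claim.
# All inputs are assumed illustrative figures, not sourced data.
devs = 20_000_000                 # claimed worldwide developer count
seat_low, seat_high = 2_000, 3_000  # assumed annual per-seat spend ("a few grand")
investment = 800_000_000_000      # ~$800b total estimated investment (see [0])

revenue_low = devs * seat_low     # annual revenue at the low end
revenue_high = devs * seat_high   # annual revenue at the high end
years_to_recoup = investment / revenue_high  # ignoring inference/training costs entirely

print(f"annual revenue: ${revenue_low/1e9:.0f}b-${revenue_high/1e9:.0f}b")
print(f"years just to recoup investment (best case): {years_to_recoup:.1f}")
```

Even in the most generous case, with zero operating costs, that's over a decade just to break even on the investment figure alone.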
The answer to that has always been the singularity bullshit where the biggest models just keep staying ahead by such a large factor nobody uses the small ones.
but they can also plagiarize all the code that gets sent to them from software dev companies whose employees use AI coding tools
We should be so lucky, the ensuing barrage of lawsuits about illegally cribbing company IP would probably make the book author class action damages pale in comparison.
but how would they figure out that it’s happening?
I figure eventually some proprietary work would make it into the wild via autocomplete. Copilot used to be cool with inserting other programmers' names and emails in author notes, for instance, though they seem to have started filtering that out in the meantime.
Copilot licenses let you specifically opt out of your prompts and your code being used to train new models, so it would be a big deal.