Model Evaluation and Threat Research is an AI research charity that looks into the threat of AI agents! That sounds a bit AI doomsday cult, and they take funding from the AI doomsday cult organisat…
as one of the people representing the “hero group” (for lack of a better term) your comment references: eh. I didn’t start out with all this knowledge and experience. it built up over time.
it’s more about the mode of thinking and how to engage with a problem, than it is about specific “highly skilled” stuff. the skill and experience help/contribute, they refine, they assist in filtering
the reason I make this comment is because I think it’s valuable that anyone who can do the job well gets to do the thing, and that it’s never good to gatekeep people out. let’s not unnecessarily contribute to imposter syndrome
Yeah, the glorious future where every half-as-good-as-expert developer is now only 25% as good as an expert (a level of performance also known as being “completely shit at it”), but he’s writing 10x the amount of unusable shitcode.
But when a mid-tier or entry level dev can do 60% of what a senior can do
This simply isn’t how software development skill levels work. You can’t give a tool to a new dev and have them do things experienced devs can do that new devs can’t. You can maybe get faster low tier output (though low tier output demands more review work from experienced devs so the utility of that is questionable). I’m sorry but you clearly don’t understand the topic you’re making these bold claims about.
Even pre-AI I had to deal with a project where they shoved testing and compliance at juniors for a long time. What a fucking mess it was. I had to go through every commit mentioning Coverity because they had a junior fixing Coverity-flagged “issues”. I spent at least 2 days debugging a memory corruption crash caused by one such “fix”, and then I had to spend who knows how long reviewing every such “fix”.
And don’t get me started on tests. 200+ tests, and not one of them caught several regressions in the handling of parameters that are shown early in the frigging how-to. Not some obscure corner case, the stuff you immediately run into if you just follow the documentation.
With AI all the numbers would be much larger - more commits “fixing Coverity issues” (and worse yet, fixing “issues” that the LLM sees in code), more so-called “tests” that don’t actually flag any real regressions, etc.
LLM-assisted entry-level developers merely need to be half as good as expert human unassisted developers
This isn’t even close to existing.
The theoretical cyborg-developer at that skill level would surely be introducing horrible security bugs or brittle features that don’t stand up to change
Sadly i think this is exactly what many CEOs are thinking is going to happen because they’ve been sold on openai and anthropic lies that it’s just around the corner
This is a very “nine women can make a baby in one month” situation.
The idea that two half-as-good developers can even add up to one good one is a misunderstanding of how anything works. If it worked like that, the study would be a dud because people could just run two AIs for 160% productivity.
Are these entry-level developers that are merely half as good as expert human unassisted developers in the room with us right now?
the astute reader may note a certain part of my comment addressed a particular aspect of this
You’re the one bringing up popularity in response to a substantial argument. I hope you’re okay…
and upon hearing the lesson, the journeyman went to the pub
Entry-level devs ain’t replacing anyone. One senior dev is going to be doing the work of a whole team
Okay, but that is different from the argument that entry-level developers only need to be half as good to deliver a working product.
I think more low-tier output would be a disaster.
Same as how an entry level architect can build a building 60% as tall, and that’ll last 60% as long, right?
Edit: And an entry level aerospace engineer with AI assistance will build a plane that’s 60% as good at not crashing.
I’m not looking forward to the world I believe is coming…
Get 2 and the plane will be 120% as good!
In fact if children with AI are a mere 1% as good, a school with 150 children can build 150% as good!
I am sure this is how project management works, and if it is not maybe Elon can get Grok to claim that it is. (When not busy praising Hitler.)
this brooks no argument and it’s clear we should immediately throw all available resources at ai so as to get infinite improvement!!~
(I even heard some UN policy wonk spout the AGI line recently 🙄)
is there like a character sheet somewhere so i can know where i fall on this developer spectrum
It’s going to be your INT bonus modifier, but you can get a feat that also adds the WIS modifier
For prolonged coding sessions you do need CON saving throws, but you can get advantage from drinking coffee (once per short rest)
I must have picked up a feat somewhere because I hit that shit way more than once per short rest
“I’m not scared an LLM is going to be able to replace me. I’m scared that CEOs are going to think that.”
AI->cocaine filter: Cocaine isn’t going to replace you. Someone using cocaine is going to replace you.