- cross-posted to:
- lobsters@lemmy.bestiver.se
- auai@programming.dev
I’m tired of hearing about vibecoding on Lobsters, so I’ve written up three of my side tasks for coding agents. Talk is cheap; show us the code.

There are already a couple of 'em; one, as expected, is being a sneerable little shit:
I’ve started grading, and his grade is ready to read. I didn’t define an F tier for this task, so he did not place on the tier list. The most dramatic part is the overfitting to the task at agent runtime (that is, “meta in-context learning”): the agent did quite well on the given benchmark, but at the cost of spectacular failure on anything complex outside that context.
Oh no, tasks that have actual concrete outcomes and requirements! Vibe coders’ biggest nemesis!
Then why did you submit it, dipshit?
That “kind of standards” being basic competence.
It was worth trying to start from my phone.