- cross-posted to:
- lobsters@lemmy.bestiver.se
- auai@programming.dev
I’m tired of hearing about vibecoding on Lobsters, so I’ve written up three of my side tasks for coding agents. Talk is cheap; show us the code.



It occurs to me that this audience might not immediately understand how hard the chosen tasks are. I was fairly adversarial with my task selection.
Two of them are in RPython, an old dialect of Python 2.7 that chatbots will have trouble emitting because they’re trained on the incompatible Python 3.x lineage. The odd task out asks for the bot to read Raku, which is as tough as its legendary predecessor Perl 5, and to write low-level code that is very prone to crashing. All three tasks must be done relative to a Nix flake, which is easy for folks who are used to it but not typical for bots. The third task is an open-ended optimization problem where a top score will require full-stack knowledge and a strong sense of performance heuristics; I gave two examples of how to do it, but by construction neither example can result in an S-tier score if literally copied.
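For anyone wondering why the RPython choice is adversarial, here's a minimal illustration (my own sketch, not taken from the actual tasks) of perfectly ordinary Python 2.7 code that a model steeped in 3.x training data has to fight its own priors to produce:

```python
# Plain Python 2.7 (the dialect RPython builds on). The print statements,
# the integer division, and the `long` type all break or change meaning
# under Python 3.x, which is what most coding models default to emitting.
def describe(n):
    print "checking", n              # print statement: SyntaxError in 3.x
    half = n / 2                     # integer division: 5 / 2 == 2 in 2.7
    if isinstance(n, (int, long)):   # `long` was removed in 3.x
        return "half of %d is %d" % (n, half)
    return "not an integer"

print describe(5)
```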
This test is meant to shame and embarrass those who attempt it. It also happens to be a slice of the stuff that I do in my spare time.
Let’s see if you get any takers.
There are already a couple of takers - one of 'em, as expected, is being a sneerable little shit:
I’ve started grading and his grade is ready to read. I didn’t define an F tier for this task, so he did not place on the tier list. The most dramatic part is the overfitting to the task at agent runtime (that is, “meta in-context learning”): the agent did quite well on the given benchmark, but at the cost of spectacular failure on anything complex outside that context.
Oh no, tasks that have actual concrete outcomes and requirements! Vibe coders’ biggest nemesis!
Then why did you submit it, dipshit?
That “kind of standards” being basic competence.
It was worth trying to start from my phone.