“Sorry, we’ll format correctly in JSON this time.”
[Proceeds to shit out the exact same garbage output]
True story:
AI:
42, ]
Vibe coder: oh no, a syntax error, programming is too difficult, software engineers are gatekeeping with their black magic.
I’d have to look it up again, but I read about a study showing that results improve if you tell the AI your job depends on it, or similarly drastic things. It’s kinda weird.
“Gemini, please… I need a picture of a big booty goth Latina. My job depends on it!”
My booties are too big for you, traveller. You need an AI that provides smaller booties.
BOOTYSELLAH! I am going into work and I need only your biggest booties!
Funny thing is, correct JSON is easy to “force” with grammar-based sampling (aka it literally can’t output invalid JSON) plus completion prompting (aka start the correct answer yourself and let the model fill in what’s left, a feature OpenAI has since deprecated), but LLM UIs/corporate APIs are kinda shit, so no one does that…
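For the curious, here’s roughly what that looks like with llama-cpp-python and a hand-rolled GBNF grammar. The model path and the toy grammar are just placeholders for the sake of the sketch; llama.cpp ships a fuller json.gbnf you’d normally load instead.

```python
# Rough sketch of grammar-constrained JSON output with llama-cpp-python.
# Assumptions: a local GGUF model at ./model.gguf (placeholder path).
from llama_cpp import Llama, LlamaGrammar

# Tiny GBNF grammar that only admits a flat JSON object of string keys/values,
# so the sampler physically cannot emit `42, ]`-style garbage.
JSON_GBNF = r"""
root   ::= "{" ws pair (ws "," ws pair)* ws "}"
pair   ::= string ws ":" ws string
string ::= "\"" [^"]* "\""
ws     ::= [ \t\n]*
"""

llm = Llama(model_path="./model.gguf")        # placeholder model
grammar = LlamaGrammar.from_string(JSON_GBNF)

# Completion-style prompting: end the prompt where the answer should start and
# let the model fill in the rest, with the grammar masking out invalid tokens.
out = llm(
    "Extract the name and city as JSON.\nText: Hi, I'm Ana from Lisbon.\nJSON: ",
    grammar=grammar,
    max_tokens=128,
)
print(out["choices"][0]["text"])  # always parseable JSON; value correctness not guaranteed
```

The grammar guarantees the *shape* of the output, not the contents, but that’s the whole “no more retrying until it stops putting a stray `]` in there” part.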
A conspiratorial part of me thinks that’s on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, (gasp) prompt-cached open weights ones could get the job done), and keeps the users dumb. And it fits the Altman narrative of “we’re almost at AGI, I just need another trillion to scale up with no other improvements!”
The AI probably: Well, I might have made up responses before, but now that “make up responses” is in the prompt, I will definitely make up responses now.
I love poison.