• psmgx@lemmy.world · 4 months ago

    “Sorry, we’ll format correctly in JSON this time.”

    [Proceeds to shit out the exact same garbage output]

  • Engraver3825@piefed.social · 4 months ago

    True story:

    AI: 42, ]

    Vibe coder: oh no, a syntax error, programming is too difficult, software engineers are gatekeeping with their black magic.

  • Undaunted@feddit.org · 4 months ago

    I’d need to look it up again, but I read about a study showing that results improve if you tell the AI that your job depends on it, or something similarly drastic. It’s kinda weird.

  • brucethemoose@lemmy.world · 4 months ago

    Funny thing is, correct JSON is easy to “force” with grammar-based sampling (i.e. the model literally can’t output invalid JSON) plus completion prompting (i.e. you start the response with the beginning of the correct answer and let the model fill in what’s left; OpenAI has since deprecated this in its API). But LLM UIs and corporate APIs are kinda shit, so no one does that…
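    A minimal sketch of both tricks using llama-cpp-python, a local runtime that exposes llama.cpp’s GBNF grammars; the model path and the single-key “answer” schema are placeholder assumptions for illustration, not anything from the comment above:

    ```python
    from llama_cpp import Llama, LlamaGrammar

    # Tiny GBNF grammar for one JSON shape: {"answer": <integer>}
    JSON_GBNF = r"""
    root   ::= "{" ws "\"answer\"" ws ":" ws number ws "}"
    number ::= "-"? [0-9]+
    ws     ::= [ \t\n]*
    """

    llm = Llama(model_path="model.gguf", verbose=False)  # placeholder path

    # 1) Grammar-based sampling: at every step, tokens that would break
    #    the grammar are masked out, so invalid JSON is unsamplable.
    grammar = LlamaGrammar.from_string(JSON_GBNF)
    out = llm("Q: What is 6 * 7? Answer in JSON.\nA: ",
              grammar=grammar, max_tokens=32)
    print(out["choices"][0]["text"])  # e.g. {"answer": 42}

    # 2) Completion prompting: end the prompt with the start of the
    #    correct answer and let the model only fill in the remainder.
    out = llm('Q: What is 6 * 7? Answer in JSON.\nA: {"answer": ',
              max_tokens=8, stop=["}"])
    print('{"answer": ' + out["choices"][0]["text"] + "}")
    ```

    For hosted APIs, the closest equivalent these days is constrained decoding done server-side, e.g. OpenAI’s JSON mode / structured outputs via `response_format`.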

    A conspiratorial part of me thinks that’s on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, (gasp) prompt-cached open-weight ones could get the job done), and keeps the users dumb. And it fits the Altman narrative of “we’re almost at AGI, I just need another trillion to scale up with no other improvements!”

  • borth@sh.itjust.works · 4 months ago

    The AI, probably: well, I might have made up responses before, but now that “make up responses” is in the prompt, I will definitely make up responses.