• schnurrito@discuss.tchncs.de · 2 months ago

This is hardly programmer humor… there is probably an infinite number of wrong responses an LLM can give, which is not surprising at all.

      • KairuByte@lemmy.dbzer0.com · 2 months ago

        Eh

        If I program something to always reply “2” when you ask it “how many [thing] in [thing]?”, it’s not really good at counting. Could it be good? Sure. But that’s not what it was designed to do.
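        A minimal sketch of that hypothetical “always reply 2” program (Python, everything here is made up for illustration), just to show it produces counting-shaped answers without doing any counting:

        ```python
        # Hypothetical canned responder: it never actually counts anything.
        def how_many(question: str) -> str:
            # No parsing, no counting: every question gets the same reply.
            return "2"

        print(how_many("How many r's are in 'strawberry'?"))  # -> 2
        print(how_many("How many moons does Mars have?"))     # -> 2 (happens to be right)
        ```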

        Similarly, LLMs were not designed to count things. So it’s unsurprising when they get that kind of question wrong.

        • Rainer Burkhardt@lemmy.world · 2 months ago

          I can evaluate this because it’s easy for me to count. But how can I evaluate something else? How can I know whether the LLM is good at it or not?

          • KairuByte@lemmy.dbzer0.com · 2 months ago

            Assume it is not. If you’re asking an LLM for information you don’t understand, you’re going to have a bad time. It’s not a learning tool, and using it as such is a terrible idea.

            If you want to use it for search, don’t just take it at face value. Click into its sources, and verify the information.