• SpikesOtherDog@ani.social
    10 months ago

    It makes things up, and it makes no attempt to adhere to reason when it’s making an argument.

    It hardly understands logic. I’m using it to generate content, and it continuously asserts information in ways that don’t make sense, relates things that aren’t connected, and forgets facts that don’t flow into the response.

    • mayonaise_met@feddit.nl
      10 months ago

      As I understand it as a layman who uses GPT-4 quite a lot to generate code and formulas, it doesn’t understand logic at all. Afaik, there is currently no rational process that considers whether what it’s about to say makes sense and is correct.

      It just sort of bullshits its way to an answer based on whether words seem likely according to its model.

      That’s why you can point it in the right direction and it will sometimes appear to apply reasoning and correct itself. But you can just as easily point it in the wrong direction, and it will follow that just as confidently.

      • Aceticon@lemmy.world
        10 months ago

        It has no notion of logic at all.

        It roughly works by piecing together sentences based on the probability of the various elements (mainly words, but also more complex structures) appearing in various relations to each other. Those “probability curves” (not quite probability curves, but a good enough analogy) are derived from the very large language training sets used to train them (hence LLM: Large Language Model).

        This is why you might get things like pieces of argumentation that are internally consistent (or merely familiar segments lifted from actual human posts where people are making an argument) but not consistent with each other. The thing is not building an argument following a logical thread; it’s just putting together language tokens in common ways, ways which in its training set were found associated with each other and with token structures similar to those in your question.
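
        The “next token by likelihood, not logic” idea above can be sketched with a toy example. This is a deliberately tiny stand-in (a bigram table over a made-up corpus), not how a real LLM is implemented, but it shows the core point: each word is chosen purely because it commonly followed the previous one in the training text, with no check that the result is true or coherent.

        ```python
        import random

        # Tiny made-up "training set" (hypothetical, for illustration only).
        corpus = "the cat sat on the mat the dog sat on the rug".split()

        # Count which words follow which (a bigram table): the only "knowledge"
        # this model has is co-occurrence, not meaning or logic.
        follows = {}
        for prev, nxt in zip(corpus, corpus[1:]):
            follows.setdefault(prev, []).append(nxt)

        def generate(start, length=6, seed=0):
            """Chain words together by picking likely successors at random."""
            random.seed(seed)
            out = [start]
            for _ in range(length):
                options = follows.get(out[-1])
                if not options:
                    break  # dead end: nothing ever followed this word
                out.append(random.choice(options))  # likelihood only, no reasoning
            return " ".join(out)

        print(generate("the"))
        ```

        Every generated sentence is locally plausible (each pair of adjacent words really did occur in the training text), yet the whole can drift anywhere, which is the small-scale version of an LLM producing fluent but inconsistent arguments.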