Over just a few months, ChatGPT went from accurately answering a simple math problem 98% of the time to just 2%, study finds

  • meeeeetch@lemmy.world · 1 year ago

    Ah fuck, it’s been scraping the Facebook comments under every math problem with parentheses that was posted for ‘engagement’

    • Matt Shatt@lemmy.world · 1 year ago

      The mass of people there who never learned PEMDAS (or BEDMAS, depending on your region) is depressing.

      • orclev@lemmy.world · 1 year ago

        Pretty much all of those rely on the fact that PEMDAS is ambiguous in actual usage. The reason is that it doesn’t differentiate between explicit multiplication and implicit multiplication by juxtaposition. E.g., in actual usage “a*b” and “ab” are given two different precedences. Most of the time it doesn’t matter, but when you introduce division it does: “a*b/c*d” and “ab/cd” are generally treated very differently in practice, while PEMDAS says they’re equivalent.
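The two conventions can be made concrete with a short sketch (the variable values are arbitrary, chosen only for illustration):

```python
# Compare the two precedence conventions for a*b/c*d vs ab/cd.
a, b, c, d = 1.0, 2.0, 3.0, 4.0

# Strict PEMDAS: multiplication and division share precedence and are
# evaluated left to right, so a*b/c*d parses as ((a*b)/c)*d.
strict = a * b / c * d

# Implicit multiplication binding tighter: ab/cd parses as (a*b)/(c*d).
implicit = (a * b) / (c * d)

print(strict)    # 8/3, about 2.667
print(implicit)  # 1/6, about 0.167
```

With these values the two readings differ by a factor of 16, which is why the convention matters as soon as division appears.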

        • 0ops@lemm.ee · 1 year ago

          I see your point. When those expressions are poorly handwritten, they can be ambiguous. But as I read them typed out, they’re ambiguous only if PEMDAS isn’t strictly followed. So I guess you could say they might be linguistically ambiguous, but they’re not logically ambiguous. Enter those two expressions into a calculator and you’ll get the same answer.

          • orclev@lemmy.world · 1 year ago

            You actually won’t. A good graphing calculator will treat “ab/cd” as “(a*b)/(c*d)” but “a*b/c*d” as “((a*b)/c)*d” (or sometimes as “a*(b/c)*d”), and actual usage by engineers and mathematicians aligns with the former, not the latter. You typically can’t enter the expression in a non-graphing calculator at all, because it won’t support implicit multiplication or variables. While you can write any formula using PEMDAS, does that really matter when the majority of professionals don’t?

            Actual usage typically goes parentheses, then exponents, then implicit multiplication, then explicit multiplication and division, then addition and subtraction. PEI(MD)(AS) if you will.

            • 0ops@lemm.ee · 1 year ago

              Interesting. I decided to try it with a few calculators I had lying around (TI-83 Plus, TI-30XIIS, and Casio fx-115ES Plus), and I found that the TIs obeyed the strict order of operations while the Casio behaved as you describe. I hardly use the Casio, so I guess I’ve been blissfully unaware that usage differs. TIL. I don’t think I’ve ever used or heard of a calculator that supports parentheses but not implicit multiplication, though. Honestly, the only time I see (AB)/(CD) written as AB/CD in clear text (or handwritten with the dividend and divisor vertically level with each other) is in derivatives, but that doesn’t even count because dt and dx are really only one variable represented by two characters. I’m only a math-minor undergrad who’s only used TIs, though, so maybe I’m just naive lol

              • orclev@lemmy.world · 1 year ago

                Or you take HP’s approach and just sidestep the entire debate by using reverse Polish notation in your calculators. From a technical standpoint RPN is really great, but I still find it a little mind-bending to convert to and from it on the fly in my head, so I’m not sure I could ever really use an RPN calculator regularly.
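RPN sidesteps the precedence debate because the operator order is explicit in the input itself. A minimal stack-based evaluator (an illustrative sketch, not HP's actual implementation) shows how both readings of ab/cd are spelled out unambiguously:

```python
def eval_rpn(tokens):
    """Evaluate a list of RPN tokens using a simple operand stack."""
    ops = {
        '+': lambda x, y: x + y,
        '-': lambda x, y: x - y,
        '*': lambda x, y: x * y,
        '/': lambda x, y: x / y,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            y = stack.pop()   # second operand was pushed last
            x = stack.pop()
            stack.append(ops[tok](x, y))
        else:
            stack.append(float(tok))
    return stack[0]

# (a*b)/(c*d) with a,b,c,d = 1,2,3,4:
print(eval_rpn(['1', '2', '*', '3', '4', '*', '/']))  # 1/6

# ((a*b)/c)*d with the same values:
print(eval_rpn(['1', '2', '*', '3', '/', '4', '*']))  # 8/3
```

The two token sequences differ, so there is never a question of which convention the reader (or calculator) follows.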

  • impiri@lemmy.world · 1 year ago

    Have we considered the possibility that math has just gotten more difficult over the past few months?

  • xantoxis@lemmy.one · 1 year ago

    Why is “98%” supposed to sound good? We made a computer that can’t do math good

    • Dojan@lemmy.world · 1 year ago

      It’s a language model, text prediction. It doesn’t do any counting or reasoning about the preceding text; it just completes it with what seems like the most likely continuation.

      So if enough of the internet had said 1+1=12 it would repeat in kind.
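That point can be illustrated with a deliberately tiny frequency-based sketch (real models predict tokens with a neural network, not raw counts, but the failure mode is the same):

```python
from collections import Counter

# Toy "training corpus": if the text overwhelmingly says 1+1=12,
# a predictor that mimics its training data will repeat it.
corpus = ["1+1=12", "1+1=12", "1+1=12", "1+1=2"]

# Count the completions seen after the prompt "1+1=".
continuations = Counter(
    line.split("=")[1] for line in corpus if line.startswith("1+1=")
)
most_likely = continuations.most_common(1)[0][0]
print(most_likely)  # "12" -- the most frequent completion, not the correct one
```

Nothing here computes arithmetic; the output is whatever the data made most probable.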

      • tony@lemmy.hoyle.me.uk · 1 year ago

        Someone asked it to list the even prime numbers (there’s only one: 2)… it then went on a long rant about how to calculate even primes, listing hundreds of them…

        ChatGPT knows nothing about what it’s saying, only how to put likely-sounding words together. I’d use it for a cover letter or something like that… but for maths… no.

      • kromem@lemmy.world · 1 year ago

        Not quite.

        Legal Othello board moves by themselves don’t say anything about the board size or rules.

        And yet when Harvard/MIT researchers fed them into a toy GPT model, they found that the network best able to predict legal moves had built an internal representation of the board state and rules.

        Too many people commenting on this topic as armchair experts are confusing training with what results from the training.

        Training on completing text doesn’t mean the end result can’t understand aspects that feed into the original generation of that text, and given a fair bit of research so far, the opposite is almost certainly the case to some degree.

    • Cybermass@lemmy.world · 1 year ago

      That’s because they paywalled the good versions, and only corporations get access to those.

      • kromem@lemmy.world · 1 year ago

        No, even corporations can’t get access to the pretrained models.

        And given this is almost certainly the result of the fine tuning for ‘safety,’ that means corporations are seeing worse performance too (which seems to be the sentiment of developers working with it on HN).

  • chairman@lemmy.world · 1 year ago

    Well, lots of people deleted their Reddit posts and comments. ChatGPT can’t find a place to learn anymore. We’ve got to beef up the Fediverse to help ChatGPT out. /s

  • 332@lemmy.world · 1 year ago

    Seems pretty plausible that the compute required for the “good” version was too high for them to sustainably run it for the normies.