Instead of being killed by murderous robots controlled by an AI, we’re blackmailed by an AI

  • albigu · 10 months ago

    They aren’t “programmed” to do anything; they just produce likely text. If the model somehow “learned” from portions of the training data to threaten to dox people in circumstances like this, it’s just replicating that. The programmers themselves likely never saw the portion of the corpus with the 4chan bickering, since these datasets are impossibly large to review.
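    To be concrete about “producing likely text”: a toy sketch (not any real model’s code, and the tokens and probabilities below are made up) of how an LLM samples its next word from a learned probability distribution rather than following a programmed rule:

```python
import random

# Hypothetical illustration: an LLM assigns probabilities to candidate
# next tokens and samples one. None of this is hand-written behaviour;
# the probabilities just reflect patterns in the training text.
next_token_probs = {
    "you": 0.5,      # invented probabilities for some prompt
    "nobody": 0.3,
    "them": 0.2,
}

def sample_next_token(probs, rng=random.random):
    """Pick a token by walking the cumulative distribution."""
    r = rng()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fall through on floating-point rounding

print(sample_next_token(next_token_probs))
```

    If threatening text was likely in similar contexts in the corpus, threatening text is what gets sampled; nobody had to program it in.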

    • redtea · 10 months ago (edited)

      So they can’t execute code when receiving certain prompts? I know what you mean about their not being ‘programmed’, but do they now do more than regurgitate text? What if someone were to ask GPT for something illegal? Would it not flag that with a user-profile report? It sounds like a huge flaw if it can’t do that.

      • albigu · 10 months ago (edited)

        I don’t know the internals of Bing, but they have some triggers which themselves seem to be built with NLP. They use it a lot to fetch web info. That means that if the model somehow produces some creative version of a crime that doesn’t get caught by a trigger, it’ll just send it.

        I think this is why Bing sometimes refuses to continue the conversation, or why ChatGPT will sometimes flag its own text as against their terms. But yeah, they can definitely explain how to commit crimes sometimes; while bored, I’ve gotten ChatGPT to explicitly tell me how to replicate some crimes, like the Armin Meiwes cannibalism case.
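        A minimal sketch of the kind of trigger being described (this is my own guess at the shape of it, not Bing’s or OpenAI’s actual filter; the phrase list and function name are invented): a separate check that runs over the model’s output, which any rewording it doesn’t anticipate slips straight past.

```python
# Hypothetical phrase-based safety trigger, layered on top of the model.
# The model itself doesn't "know" about this check; it just generates text.
BLOCKED_PHRASES = {"how to pick a lock", "dox"}

def passes_filter(model_output: str) -> bool:
    """Return True if the output would be sent to the user unflagged."""
    lowered = model_output.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(passes_filter("Here is how to pick a lock"))           # caught by the trigger
print(passes_filter("Here is how to defeat a pin tumbler"))  # creative rewording slips past
```

        Real systems use learned classifiers rather than bare phrase lists, but the failure mode is the same: the filter only catches what it was trained or written to anticipate.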

        • redtea · 10 months ago

          The legal cases are going to be fun reading when they come out!

          “The AI made me do it.”