• ☆ Yσɠƚԋσʂ ☆OP · 8 months ago

    I bet it’s because they originally trained it using the OpenAI API, so it inherited a lot of the biases from it.

    • Commiejones · 8 months ago

      I don’t know how these things work, but it seems strange that it would inherit censorship. Especially since DeepSeek starts to answer but then erases it and “computer says no”.

        • bobs_guns · 8 months ago

          They just don’t want to open any cans of worms. Western models are also quite censored, because LLMs are unreliable dogshit.

      • Sino-Soviet Drip · 8 months ago

        That censorship is not inherited. The censorship in the corpus of training data is, though.

        Seems like the CPC just approached someone at DeepSeek and said “fix this up before the next release” and provided a tool to determine what qualifies for removal, which is why it happens post-generation.
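
        The “erases it after streaming” behavior described above is consistent with a filter that runs on the completed output rather than inside the model. A minimal sketch of that idea, assuming a simple keyword match (all names and the topic list here are illustrative, not DeepSeek’s actual implementation):

        ```python
        # Hypothetical post-generation filter: the model streams its answer
        # first, then the full text is checked and swapped for a refusal if
        # it matches a blocked topic. Purely a sketch of the speculated flow.

        BLOCKED_TOPICS = ["example-sensitive-topic"]  # placeholder list

        REFUSAL = "Sorry, that's beyond my current scope."

        def post_generation_filter(answer: str) -> str:
            """Return the answer unchanged unless it mentions a blocked topic."""
            lowered = answer.lower()
            if any(topic in lowered for topic in BLOCKED_TOPICS):
                return REFUSAL  # the already-streamed text gets replaced
            return answer
        ```

        Because the check happens after generation, the user briefly sees the real answer before it vanishes, which matches what people report seeing.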