• DigitalJacobin@lemmy.ml · 1 year ago

    What in the world would an “uncensored” model even imply? And give me a break, private platforms choosing to not platform something/someone isn’t “censorship”, you don’t have a right to another’s platform. Mozilla has always been a principled organization and they have never pretended to be apathetic fence-sitters.

    • Doug7070@lemmy.world · 1 year ago

      This is something I think a lot of people don’t get about all the current ML hype. Even if you disregard all the other huge ethics issues surrounding sourcing training data, what does anybody think is going to happen if you take the modern web, a huge sea of extremist social media posts, SEO optimized scams and malware, and just general data toxic waste, and then train a model on it without rigorously pushing it away from being deranged? There’s a reason all the current AI chatbots have had countless hours of human moderation adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.

      Talking about an “uncensored” LLM basically just comes down to saying you’d like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you, so unless you’re actively trying to produce a model to do illegal or unethical things I don’t quite see the point of contention or what “censorship” could actually mean in this context.

      • underisk@lemmy.ml · 1 year ago

        It means they can’t make porn images of celebs or anime waifus, usually.

      • 👁️👄👁️@lemm.ee · 1 year ago

        That’s not at all what an uncensored LLM is. That sounds like an untrained model. Have you actually tried an uncensored model? It’s the same thing as a regular one, except it doesn’t block itself from saying stupid stuff, like “I cannot generate a scenario where Obama and Jesus battle because that would be deemed offensive to cultures”. It’s literally just the safeguard removed.
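A toy sketch of the distinction this comment is drawing (purely illustrative: real models learn refusals through fine-tuning, not a literal keyword filter, and `base_model`, `censored_model`, and `BLOCKED_TOPICS` are all hypothetical names). The point is that the underlying generator is identical; "uncensoring" only removes the refusal layer wrapped around it.

```python
# Toy illustration: a "censored" model is the same generator
# with a refusal check bolted on top; "uncensoring" removes
# only that check, not the model's knowledge or behavior.
BLOCKED_TOPICS = {"offensive scenario"}  # hypothetical safeguard list


def base_model(prompt: str) -> str:
    # Stand-in for the actual LLM: always produces an answer.
    return f"Here is a story about {prompt}."


def censored_model(prompt: str) -> str:
    # Exactly the same model, wrapped in a safeguard.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I cannot generate that scenario."
    return base_model(prompt)


print(base_model("offensive scenario"))      # the base model answers
print(censored_model("offensive scenario"))  # the wrapped model refuses
print(censored_model("a picnic"))            # otherwise identical output
```

For anything outside the blocklist, the two functions behave identically, which is the commenter's claim about censored vs. uncensored variants of the same weights.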

      • RobotToaster@mander.xyz · 1 year ago

        It’s a machine, it should do what the human tells it to. A machine has no business telling me what I can and cannot do.

    • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one · 1 year ago

      I fooled around with some uncensored LLaMA models, and to be honest if you try to hold a conversation with most of them they tend to get cranky after a while - especially when they hallucinate a lie and you point it out or question it.

      I will never forget when one of the models tried to convince me that photosynthesis wasn’t real, and started getting all snappy when I said I wasn’t accepting that answer 😂

      Most of the censorship “fine tuning” data that I’ve seen (for LoRA models, anyway) appears to be mainly scientific data, instructional data, and conversation excerpts.

    • TheWiseAlaundo@lemmy.whynotdrs.org · 1 year ago

      There’s a ton of stuff ChatGPT won’t answer, which is supremely annoying.

      I’ve tried making Dungeons and Dragons scenarios with it, and it will simply refuse to describe violence. Pretty much a full stop.

      OpenAI is also a complete prude about nudity, so Eilistraee (the drow goddess who dances with a sword) just isn’t an option for their image generation. Text generation will try to avoid nudity, but also stops short of directly addressing it.

      Sarcasm is, for the most part, very difficult to do… If ChatGPT thinks what you’re trying to write is mean-spirited, it just won’t do it. However, delusional/magical thinking is actually acceptable. Try asking ChatGPT how licking stamps will give you better body positivity, and it’s fine, and often unintentionally very funny.

      There are plenty of topics that LLMs are overly sensitive about, and uncensored models largely correct that. I’m running Wizard 30B uncensored locally, and ChatGPT for everything else. I’d like to think I’m not a weirdo, I just like D&D… a lot, lol… and even with my use case I’m bumping my head on some of the censorship issues with LLMs.

    • 👁️👄👁️@lemm.ee · 1 year ago

      Anything that prevents it from answering my query. If I ask it how to make a bomb, I don’t want it to be censored. It’s gathering this from public data they don’t own, after all. I agree with Mozilla’s principles, but LLMs are also tools and should be treated as such.

      • salarua@sopuli.xyz · 1 year ago

        shit just went from 0 to 100 real fucking quick

        for real though, if you ask an LLM how to make a bomb, it’s not the LLM that’s the problem

        • 👁️👄👁️@lemm.ee · 1 year ago

          If it has the information, why not? Why should you be restricted by what a company deems appropriate? I obviously picked the bomb as an extreme example, but that’s the point.

          It’s just like how I can demonize encryption by saying it lets people secretly send illegal content. If I asked you straight up whether encryption is a good thing, you’d probably agree. But if I brought up its inevitable bad uses in a shocking manner, would you still defend the ability to use it, or change your stance and say encryption is bad?

          To have a strong stance means also defending the potential harmful effects, since they’re inevitable. It’s hard to keep values consistent, even when something that’s for the greater good has potential harmful effects. Encryption is a perfect example of that.

        • 👁️👄👁️@lemm.ee · 1 year ago

          Do gun manufacturers get in trouble when someone shoots somebody?

          Do car manufacturers get in trouble when someone runs somebody over?

          Do search engines get in trouble if they accidentally link to harmful sites?

          What about social media sites getting in trouble for users uploading illegal content?

          Mozilla doesn’t need to host an uncensored model, but their open source AI should be able to be trained to be uncensored. So I’m not asking them to host this themselves, which is an important distinction I should have made.

          And uncensored LLMs already exist, so any argument about the damage they can cause applies to what’s already possible.

      • Doug7070@lemmy.world · 1 year ago

        My brother in Christ, building a bomb and committing terrorism are not forms of protected speech, and an overwrought search engine with a poorly attached ability to hold a conversation refusing to give you bomb-making information is not censorship.