A very NSFW website called Pornpen.ai is churning out an endless stream of graphic, AI-generated porn. We have mixed feelings.

    • Zikeji@programming.dev · 1 year ago

      It already is being used to make CSAM. I work for a hosting provider, and just the other day we closed an account because they were intentionally hosting AI-generated CSAM.

        • Zikeji@programming.dev · 1 year ago

          The report came from a (non-US) government agency. It wasn’t reported as AI-generated; that was what we discovered.

          But it highlights the reality: while AI-generated content may be fairly obvious for now, it won’t be forever. Real CSAM could be mixed in at some point, or, hell, the people generating it could be feeding it real CSAM to have it recreate that material in a manner that makes it harder to detect.

          So what does this mean for hosting providers? We continuously receive reports for a client, and each time we have to review them and what, use our best judgement to decide whether it’s AI-generated? Add the client to a list and ignore CSAM reports for them? Tell the government that it’s not “real CSAM” and expect it to end there?

          No legitimate hosting provider is going to knowingly host CSAM, AI-generated or not. We aren’t going to invest legal resources into defending that, nor are we going to jeopardize the mental well-being of our staff by increasing the frequency of those reports.

            • Zikeji@programming.dev · 1 year ago (edited)

              > I don’t understand the logic behind this. If it’s your job to analyze and deduce whether certain content is or is not acceptable, why shouldn’t you make assessments on a case-by-case basis?

              The bit about “ignoring it” was more in jest. We do review each report and handle it on a case-by-case basis; my point with this statement is that someone hosting questionable content is going to generate a lot of reports, regardless of whether it is illegal or not, and we won’t take an operating loss and let them keep hosting with us.

              Usually we try to determine whether or not it was intentional. If someone is hosting CSAM and is quick and responsive in resolving the issue, we generally won’t immediately terminate them for it. But even if they (our client) are a victim, we are not required to host for them, and after a certain point we will terminate them.

              So when we receive a complaint about a user hosting CSAM, review it, and see that they are hosting a site advertising itself as a place for users to distribute AI-generated CP, we aren’t going to let them continue hosting with us.

              > Even if you remove CSAM from the equation you still have to continuously sift through content and report any and all illegal activities - regardless of its frequency.

              This is not an accurate statement, at least in the U.S., where we are based. We are not (yet) required to sift through any and all content uploaded to our servers (not to mention that the complexity of such an undertaking makes it virtually impossible at our level). There have been a few proposed laws that would have changed that, as we’ve seen in the news from time to time. We are required to handle reports we receive about our clients.

              Keep in mind that when I say we are a hosting provider, I’m referring to pretty high up the chain - we provide hosting to clients that would, say, host a Lemmy instance, or a Discord bot, or a personal NextCloud server, to name a few examples. A common dynamic is how much abuse your hosting provider is willing to put up with, and if you recall the CSAM attacks on Lemmy instances, part of the discussion was the risk of getting their servers shut down.

              Which is valid: hosting providers will only put up with so much risk to their infrastructure, reputation, and/or staff. That is why people who run sites like Lemmy or image hosting services usually do want to take an active role in preventing abuse - whether or not they are legally liable won’t matter when we pull the plug because they are causing us an operating loss.

              > And it’s the right of any … [continued]

              I’m just going to reply to the rest of your statement down here; I think I did not make my intent clear enough. I originally replied to your statement about AI being used to make CP in the future with a personal anecdote about it already happening. You then asked why I defined AI-generated CP as CSAM, and I clarified. I wasn’t actually responding to the rest of that message. I was not touching the topic of what impact it might have on the actual abuse of children, merely giving my opinion as to why, legal or not, hosting providers aren’t ever going to host that content.

              The content will be hosted either way; whether it remains merely relegated to “offshore” providers, still accessible via normal means and not treated as criminal content, or becomes another part of the dark web, will be determined at some point in the future. It hasn’t become a huge issue yet, but it is rapidly approaching that point.

    • pinkdrunkenelephants@sopuli.xyz · 1 year ago (edited)

      The fact that it can make CP at all is the reason why it needs to be banned outright.

      EDIT: Counting 8 9 10 11 butthurt pedophiles afraid their new CP source will be banned