It seems crazy to me, but I've seen this concept floated in several different posts. There seem to be a number of users here who think that AI-generated CSAM will somehow reduce the number of real-life child victims.

Like the comments on this post here.

https://sh.itjust.works/post/6220815

I find this argument crazy. I don’t even know where to begin to talk about how many ways this will go wrong.

My views (which are apparently not based in fact) are that AI CSAM is not really that different from "actual" CSAM. It will still cause harm when viewed, and it is still rooted in the further victimization of the children involved.

Further, the (ridiculous) idea that making it legal will somehow reduce the number of predators by giving them an outlet that doesn't involve real, living victims completely ignores the reality of how AI content is created.

Some have compared pedophilia and child sexual assault to drug addiction, which is dubious at best and pretty offensive imo.

Using drugs has no inherent victim, and it is not predatory.

I could go on, but I'm not an expert or a social worker of any kind.

Can anyone link me articles talking about this?

  • pixxelkick@lemmy.world · 9 months ago

    Boy this sure seems like something that wouldn’t be that hard to just… do a study on, publish a paper perhaps? Get peer reviewed?

    It’s always weird for me when people have super strong opinions on topics that you could just resolve by studying and doing science on.

    “In my opinion, I think the square root of 7 outta be 3”

    Well I mean, that’s nice but you do know there’s a way we can find out what the square root of seven is, right? We can just go look and see what the actual answer is and make an informed decision surrounding that. Then you don’t need to have an “opinion” on the matter because it’s been put to rest and now we can start talking about something more concrete and meaningful… like interpreting the results of our science and figuring out what they mean.

    I’d much rather discuss the meaning of the outcomes of a study on, say, AI Generated CSAM's impact on proclivity in child predators, and hashing out if it really indicates an increase or decrease, perhaps flaws in the study, and what to do with the info.

    As opposed to just gesturing and hand waving about whether it would or wouldn't have an impact. It's pointless to argue about what color the sky outta be if we can just, you know, open the window and go see what color the sky actually is…

    • Nonameuser678@aussie.zone · 9 months ago

      I love your enthusiasm for research, but if only it were that easy. I'm a PhD researcher and my field is sexual violence. It's really not that easy to just go out and interview child sex offenders about their experiences of perpetration.

    • Skull giver@popplesburger.hilciferous.nl · 9 months ago

      Most psychological research is done on (grad) students, so it’s not as easy as you may think.

      Just imagine what the research would be like. “Hello, we’re a public, government-funded institution trying to find out if CSAM is bad. Please tell us how much illegal content you consume and how many children you have raped so we can try to find a connection.”

      We’ve had a few convicted paedos with generated CSAM now, but you need more than a few to get any kind of scientific result. Even then your data set consists purely of paedos who have been found out and convicted; the majority of them never get caught.

      AI generated CSAM is very new. You can now generate illegal images that no children have ever been abused for. Previously, the issue was easy: CSAM more realistic than a drawing is the result of a child being abused, and there’s a real victim there. Now you can generate pictures of any type of person doing anything, without any real victims.

      You’re comparing social sciences to math here, and social sciences aren’t an exact science. Different populations behave differently. Paedophiles in some remote areas just get married to kids and nobody there bats an eye, to the point where people don’t even consider it a problem. This isn’t like “what’s the square root of 7”, this is like “what’s the impact of banning white shirts on domestic violence”: it’s impossible to get straight results here.

    • PM_Your_Nudes_Please@lemmy.world · 9 months ago

      While I agree that studies would help, actually performing those studies has historically been very difficult. Because the first step to doing a study on pedophilia is actually finding a significant enough number of pedophiles who are willing and able to join the study. And that by itself is a tall order.

      Then you ask these pedophiles (who are for some reason okay with admitting to the researchers that they are, in fact, pedophiles) to self-report their crimes. And you expect them to be honest? Any statistician will tell you that self-reported data is consistently the least reliable data, and that’s doubly unreliable when you’re basically asking them to give you a confession that could send them to federal prison.

      Or maybe you try going the court records/police FOIA request route? Figure out which court cases deal with pedos, then figure out whether AI images were part of the evidence? But that has issues of its own, because you're specifically excluding all the pedos who haven't offended or been caught; you're only selecting the ones who have been taken to court, so your entire sample pool is biased. You're also missing any pedos who have sealed records or sealed evidence, which is fairly common.

      Maybe you go the anonymous route. Let people self report via a QR code or anonymous mail. But a single 4chan post could ruin your entire sample pool, and there’s nothing to stop bad actors from intentionally tainting your study. Because there are plenty of people who would jump at a chance to make pedos look even worse than they already do, to try and get AI CSAM banned.

      The harsh reality is that studies haven't been done because there simply isn't a reliable way to gather data while controlling for bias. With pedophilia being taboo, pedophiles will be dissuaded from participating, because it means potentially outing yourself as a pedophile. And at that point, your best-case scenario is having enough money to ghost your entire life.

  • Skull giver@popplesburger.hilciferous.nl · 9 months ago

    You can try to look for research, but most scientific research is either ethicists debating the issue or scientists analysing the impact of child abuse and the effectiveness of harsh punishments. There is also plenty of research into the cycle of abuse (many paedophiles were abused as children themselves) and the general impact of abuse on children and adults.

    There are no good papers about real world effects of virtual CSAM consumption. You can’t just give one population of paedophiles a load of child porn and check if they rape fewer or more kids. At best you can research the people that do get caught, usually because they’re raping kids or at least are trying to.

    A lot of research is done by action groups that may as well be called “kill all paedos”. Obviously their goals are laudable, but they’re not exactly independent researchers. Their goal isn’t “we need to understand what’s driving these people” but “we need to stop these people”. It’s like asking the Catholic Church to research homosexuality, you’re not going to get useful scientific information out of it.

    You’re also not going to get decent research on paedos through most public institutions. Imagine being a research subject and walking up to the receptionist like “hello, I’m a volunteer for your paedophilia research”. If you weren’t on a watch list before, you definitely would be now. I don’t think we’ll ever get the answers we need.

    completely ignores the reality of how AI content is created.

    If you think AI models are trained on preexisting CSAM, you don't seem to get why modern AI models are so revolutionary. The whole point of Stable Diffusion and later generations is that you don't need the thing you're trying to generate to have an equivalent in the dataset.

    You can combine child+nude the same way you can combine hotdog+wings. You don’t need pictures of winged hotdogs to generate them out of AI models.

    I’m sure there are paedophiles that will use their existing collection to train the models further, but you need quite the GPU power and technical know-how for that. This isn’t an app you can just drag pictures into, it’s a process that takes a lot of time.

    I don’t see the point of generating those images anyway, modern image models are complex enough that you don’t need the extra training.

    Some have compared pedophilia and child sexual assault to drug addiction, which is dubious at best and pretty offensive imo.

    That's a pretty stupid take. Paedophilia can have different causes, but they all stem from either mental illness or the same mechanisms that make people gay. You don't get a free hit of child porn from your friends, and doctors won't prescribe you child rape when you're having medical issues either.

    You can’t just stop being attracted to kids, just like you can’t just decide you’re into men/women now. The difference is that bi/homosexuality isn’t a problem whereas paedophilia is.

    There are also paedos who do it for some kind of power dynamic, they don’t care who they abuse, as long as they’re weak and defenceless. Those people are sick in the head and need treatment, or they need to be removed from society. Either way, their intentions aren’t child specific.

    Using drugs has no inherent victim

    Neither does jerking it to computer generated pictures. No animals are harmed when I generate a picture of Mickey Mouse butchering a pig.

    Besides, the drug world is full of violence, even in the legal circuit. Drugs are how gangs and regimes all around the world make money. If you consider CSAM consumption to be indirect support of child abuse, you should definitely consider drug consumption to be indirect support of gang violence.

    Lastly, there’s a very troubling thing I’ve noticed the majority isn’t willing to talk about: there are so, so many people out there who are attracted to kids. Not prepubescent kids, but very few 14 to 16 year old girls will not have had men approach them with sexual comments. The United States of America voted against making child marriage illegal. The amount of “I’ll just fuck this behaviour out of her” you can find online about Greta Thunberg from even before she was an adult is disturbing; people with full name and profile pictures on Facebook will sexualise and make rape threats to a child because she said something they didn’t like. There’s a certain amount of paedophilia that just gets overlooked and ignored.

    Even worse, those people aren’t included in research into paedophilia because of how “tolerated” it is. The ones that get caught and researched are the sickos who abuse tens or hundreds of children, but the people who will marry a child won’t be.

    Bottom line: this isn't something you can just Google to find an answer; the issue is just too recent. I can take an off-the-shelf image generation model and generate CSAM even when none of the training set contains any such material. No children will be harmed, yet the resulting imagery is obviously illegal. Abuse-free CSAM is going to be a massive headache for governments and lawyers in the coming years.

    • Killing_Spark@feddit.de · 9 months ago

      Very good comment all around, I just have a nitpick to this section:

      Lastly, there’s a very troubling thing I’ve noticed the majority isn’t willing to talk about: there are so, so many people out there who are attracted to kids. Not prepubescent kids, but very few 14 to 16 year old girls will not have had men approach them with sexual comments. The United States of America voted against making child marriage illegal. The amount of “I’ll just fuck this behaviour out of her” you can find online about Greta Thunberg from even before she was an adult is disturbing; people with full name and profile pictures on Facebook will sexualise and make rape threats to a child because she said something they didn’t like. There’s a certain amount of paedophilia that just gets overlooked and ignored.

      Even worse, those people aren’t included in research into paedophilia because of how “tolerated” it is. The ones that get caught and researched are the sickos who abuse tens or hundreds of children, but the people who will marry a child won’t be.

      This is actually called hebephilia/ephebophilia, which the general public treats very similarly and often subsumes under the term pedophilia. It is considered its own thing, though. To quote Wikipedia:

      Hebephilia is the strong, persistent sexual interest by adults in pubescent children who are in early adolescence, typically ages 11–14 and showing Tanner stages 2 to 3 of physical development.[1] It differs from pedophilia (the primary or exclusive sexual interest in prepubescent children), and from ephebophilia (the primary sexual interest in later adolescents, typically ages 15–18).[1][2][3] While individuals with a sexual preference for adults may have some sexual interest in pubescent-aged individuals,[2] researchers and clinical diagnoses have proposed that hebephilia is characterized by a sexual preference for pubescent rather than adult partners.[2][4]

      My guess for why it is more tolerated than straight-up pedophilia is that they have reached a more mature body that shows some or most properties of a sexually developed person. So while it's still gross and very likely detrimental to the child if pursued (it depends on the age in question; 16-18 is pretty close to adulthood), there seems to be more of an understanding for it.

      • Skull giver@popplesburger.hilciferous.nl · 9 months ago

        This is actually called hebephilia/ephebophilia

        I'm aware, but I've very rarely heard anyone make the distinction who wasn't trying to defend child porn. The general public doesn't really care for the distinction, and it muddies the waters a bit whenever it comes up. There's also a distinction between people getting off on babies and people getting off on kids, but in all cases it's simply wrong and needs to be corrected or dealt with.

        I do think attraction to pubescent kids is more tolerated than paedophilia because of the extra “adultness”, but that doesn’t make it any more right. A 12/14/16 year old kid is still just that, just a kid, no matter how much they’ll think they’ve grown up.

        In the end, the problem is the same: an adult is attracted to someone who can’t possibly consent, and the only way they’ll get what they desire is through abuse.

        • Killing_Spark@feddit.de · 9 months ago

          I do think attraction to pubescent kids is more tolerated than paedophilia because of the extra “adultness”, but that doesn’t make it any more right

          Being attracted to a pre-puberty or early-puberty child is not only considered wrong because they can't consent; it's also considered abnormal because they do not share any features of what a "normal" person would be attracted to, namely developed physical sexual traits. I don't think there is anything being muddied here.

          The physical attraction part gets muddier the further puberty progresses. There isn't really an age limit for this, as puberty works differently for everyone. The psychological/consent part gets muddier as age progresses, combined with the changes puberty makes to your personality, but it also depends on a ton of other factors, like the kind of upbringing in terms of sex ed. There is a reason that the age of consent differs vastly even between US states, and even more so internationally, even if you only include Western Europe.

          A 12/14/16 year old kid is still just that, just a kid, no matter how much they’ll think they’ve grown up.

          This might be your opinion, but many other people would say otherwise; it's not a hard fact. Especially if you go up to 16, where we allow people of this age to do all sorts of things. In the USA you can drive a car, in Germany you can buy and consume alcohol, and they are sometimes already in an apprenticeship to get into a job. People generally start becoming people and stop being kids somewhere in that range.

          So while bringing this distinction up muddies the water, it muddies the water only so far as it is already muddy, and this needs to be part of the conversation if it is to have any relation to reality.

          In the end, the problem is the same: an adult is attracted to someone who can’t possibly consent, and the only way they’ll get what they desire is through abuse.

          So in conclusion, I don't fully agree here. It's not the same; one is way worse than the other. That doesn't make it OK to get what you want through abuse from a 16-year-old, or wherever you want to set the age limit, or from anyone for that matter, but younger people need to be better protected, because typically they are easier to abuse. Where exactly that age limit lies is somewhat a matter of opinion, as the differing laws show.

  • lwuy9v5@lemmy.world · 9 months ago

    That’s so fucked up that anyone thinks that enablement is a genuine means of reduction here…

  • Killing_Spark@feddit.de · 9 months ago

    I’m just gonna put this out here and hope not to end up on a list:

    Let's do a thought experiment and be empathetic with the human being behind the predator. Ultimately they are sick, and they feel needs that cannot be met without doing something abhorrent. That is a pretty fucked-up situation to be in. Which is no excuse to become a predator! But understanding why people act the way they act is important to creating solutions.

    Most theories about humans agree that sexual needs are pretty important for self-realization. For the pedophile this presents two choices: become a monster or never reach self-realization. We have got to accept that this dilemma is the root of the problem.

    Before, there was only one option for a somewhat middle-way solution: video and image material which the consumer could rationalize as being not as bad. Note that that isn't my opinion; I agree with the popular opinion that this still harms children and needs to be illegal.

    Now, for the first time, there is a chance to cut through this dilemma by introducing a third option: generated content. This still uses the existing CSAM as a basis, but so does every database that is used to find CSAM for prevention and policing. The actual pictures and videos aren't stored in the AI model and don't need to be stored after the model has been created. With that model, more or less infinite new content can be created, which imo harms the children significantly less directly. This is, imo, different from the actual CSAM material because no one can tell who is and isn't in the base data.

    Another benefit of this approach has to do with the reason why CSAM exists in the first place. AFAIK most of this material comes from situations where the child is already being abused. At some point the abuser recognises that CSAM can get them monetary benefits and/or access to CSAM of other children. This is where I will draw a comparison to addiction, because it's kind of similar: people doing illegal stuff because they have needs they can't fulfill otherwise. If there were a place to get the "clean" stuff, far fewer people would go to the shady corner dealer.

    In the end I think there is a utilitarian argument to be made here. Even with the far-removed damage that generating CSAM via AI still deals to the actual victims, we could help people not become predators, help predators not reoffend, and most importantly prevent, or at least lessen, the amount of further real CSAM being created.

    • Surdon@lemm.ee · 9 months ago

      Except there is a good bit of evidence showing that consuming porn actively changes how we behave in relation to sex. By creating CSAM with AI, you create the depiction of a child that is a mere object for sexual gratification. That fosters a lack of empathy and an egocentric, self-gratifying viewpoint. I think that can be said of all porn, honestly. The more I learn about what porn does to our brains the more problematic I see it.

      • Killing_Spark@feddit.de · 9 months ago

        I agree with this.

        The more I learn about what porn does to our brains the more problematic I see it

        And I agree with this especially. Turns out a brain that was/is at least in part there to get us to procreate isn’t meant to get this itch scratched 24/7.

        But to answer your concern, I will draw another comparison with addiction: giving addictive drugs out like candy isn't wise, just as it wouldn't be wise to give everyone access to generated CSAM. You'd need a control mechanism so that only people who need access get access. Admittedly this will deter a few people from getting their fix from the controlled instances compared to completely free access. With drugs this seems to lead to a decrease in the amount of street-sold drugs, though, so I see no reason this wouldn't be true, at least to some extent, for CSAM.

        • Surdon@lemm.ee · 9 months ago

          I'm an advocate of safe injection sites, so I will agree somewhat here. Safe injection sites work because they identify addicts and aggressively supply them with resources to counteract the need for the addiction in the first place, all while encouraging less and less use. This is an approach that could have merit for pedophiles, but some unique issues pop up with it as well: to consume a drug, the drug must enter the body somehow, where it is metabolized.

          CSAM, on the other hand, is taken in simply by looking at it. There is no "gloves on" approach to generating or handling the content without absorbing it; the best that can be hoped for is to have it generated by someone completely 'immune' to it, which raises questions about how "sexy" they could make the content. If it doesn't "scratch the itch", the addicts will simply turn back to the real stuff.

          There is a slim argument to be made that you could actually create MORE pedophiles through classical conditioning by exposing nonpedophilic people to erotic content paired with what looks like children. You could of course have it produced and handled by recovering/in treatment pedophiles, but that sounds like it defeats the point of limited access entirely and is therefore still bad, at least to the ones in charge of distribution.

          Additionally, digital content isn’t destroyed upon consumption like a drug, and you have a more minor but still real problem of content diversion, where content made for the program is spread to those not getting the help that was meant to be paired with it. This is an issue, of course, but could be rationalized as worth it so long as at least some pedophiles were being treated.

          • Killing_Spark@feddit.de · 9 months ago

            Yes, there are a lot of open questions around this, especially about the who and how of generation, and tbh it makes me a bit uncomfortable to think about a system like this in detail, because it would have to include rating these materials on a "sexiness" scale, which feels revolting.

    • Skull giver@popplesburger.hilciferous.nl · 9 months ago

      There is some research out there that indicates that regular consumption of pornography normalises things that would otherwise be abhorrent. On the other hand, I’ve also come across one or two news articles (I annoyingly can’t find right now) that imply that pornography can be a way to get preexisting urges under control.

      If you’re already jerking off to the thought of raping kids, watching porn isn’t going to change anything. However, it’s easy to get sucked into a cycle of ever more extreme types of pornography once the normal stuff no longer gets you going.

      Victimless content may provide relief, but if it turns out to do more harm than good in the long run by normalising that type of porn, it’s still up for debate whether or not we should take action against it.

      As for the picture being stored in the AI model or not: AI models tend to store some information. I think they’re intentionally overtrained for better results. With the right prompt, you can get pretty close to an original artist’s work. Like an mp3 file, it’s not lossless storage; some details will be reconstructed rather than stored. However, the notion that AI doesn’t store anything is too simple in my opinion.

      Training a model on real CSAM is bad, because it adds the likeness of the original victims to the image model. However, you don’t need CSAM in your training set to generate it.

      As far as I can tell, we have no good research in favour of or against allowing automated CSAM. I expect it’ll come out in a couple of years. I also expect the research will show that the net result is a reduction in harm. I then expect politicians to ignore that conclusion and try to ban it regardless because of moral outrage.

      • Killing_Spark@feddit.de · 9 months ago

        You make a very similar argument to @Surdon's, and my answer is the same (in short here; my answer to the other comment is longer):

        Yes giving everyone access would be a bad idea. I parallel it to controlled substance access, which reduces black-market drug sales.

        You do have some interesting details though:

        Training a model on real CSAM is bad, because it adds the likeness of the original victims to the image model. However, you don’t need CSAM in your training set to generate it.

        This has been mentioned a few times, mostly with the idea of mixing "normal" photos of children with adult porn to generate CSAM. Is that what you are suggesting too? And do you know if this actually works? I am not familiar with the extent to which generative AI is able to combine these sorts of concepts.

        As far as I can tell, we have no good research in favour of or against allowing automated CSAM. I expect it’ll come out in a couple of years. I also expect the research will show that the net result is a reduction in harm. I then expect politicians to ignore that conclusion and try to ban it regardless because of moral outrage.

        This is more or less my expectation too, but I wouldn’t count on the research coming out in a few years. There isn’t much incentive to do actual research on the topic afaik. There isn’t much to be gained because of the probable reaction of the regulators, and much to lose with such a hot topic.

        • Skull giver@popplesburger.hilciferous.nl · 9 months ago

          This has been mentioned a few times, mostly with the idea of mixing "normal" photos of children with adult porn to generate CSAM. Is that what you are suggesting too? And do you know if this actually works? I am not familiar with the extent to which generative AI is able to combine these sorts of concepts.

          It’s not even an idea, it’s how you get CSAM out of existing models. Nobody over at OpenAI thought “hmm, let’s throw some super illegal porn into the dataset”, but pedophiles still managed to get the AI to generate that stuff. Granted, the open models aren’t always great at combining features, but commercial models are rapidly overcoming the AI weirdness you get in generated art.

          My intention isn’t to intentionally make CSAM generating models, though. The existing models are good enough for photorealistic images. Honestly, porn can become such a slippery slope that I’m not all that happy about AI generated porn existing in its current form, but I don’t think we can prevent it at this point.

          This is more or less my expectation too, but I wouldn’t count on the research coming out in a few years. There isn’t much incentive to do actual research on the topic afaik. There isn’t much to be gained because of the probable reaction of the regulators, and much to lose with such a hot topic.

          Maybe you’re right. My guess is that we’ll see more and more generated CSAM in the libraries of convicted paedophiles until eventually someone gets convicted without any non-generated imagery, which may very well bring these ethical issues to the forefront.

          Currently, all cases are about “paedophile found with stash of CSAM and some AI generated stuff as well”. That makes convictions and such quite easy because there are ethical concerns with kids being abused to create the bulk of the material.

          It’s possible the concept is never addressed, but I don’t think there’s any way to stop the spread of CSAM once you no longer need to exchange files through shady hosting services.

          • Killing_Spark@feddit.de · 9 months ago

            It’s not even an idea, it’s how you get CSAM out of existing models

            I didn't know this was a thing, tbh. I knew that you could get them to generate adult porn or combine faces with adult porn, but I didn't know they could already create realistic CSAM. I assumed they used the original material to train one of the open models. Well, that's even more horrifying.

            It’s possible the concept is never addressed, but I don’t think there’s any way to stop the spread of CSAM once you no longer need to exchange files through shady hosting services.

            I hadn't even thought about that. Exchanging these models will be significantly less risky than exchanging the actual material. Images are being scanned by cloud storage providers, and apparently archives with weak passwords are too. But no one is going to execute an AI model just to see whether or not it can produce CSAM.

            • Skull giver@popplesburger.hilciferous.nl · 9 months ago

              Exactly, and this is why I find the way AI companies approach these developments (release the models and see what happens) so troubling. They knew, or could’ve known, the risks, but in an effort to get another round of VC money, they didn’t stop to build in ways to solve ethical problems first. They released their science to create before coming up with the science to control, and now the rest of the world has to deal with the fallout.

              The only defences these companies have are "we didn't have anything illegal in our training set" and "we have no way to control the models themselves". Currently, online services run the images they generate through the system in reverse to tag the images and try to detect bad generated subjects. That's not just for law enforcement; there are English words that are used more often in porn data sets, so using those may accidentally generate porn if you're not careful.

  • Hanabie@sh.itjust.works · 9 months ago

    The way I see it, and I’m pretty sure this will get downvoted, is that pedophiles will always find new material on the net. Just like actual, normal porn, people will put it out.

    With AI-generated content, at least there’s no actual child being abused, and it can feed the need for ever new material without causing harm to any real person.

    I find the thought of kiddie porn abhorrent, and I think for every offender who actually assaults kids, there are probably a hundred who get off on porn, but this porn has to come from somewhere, and I'd rather it come from an AI.

    What’s the alternative, hunt down and eradicate every last closeted pedo on the planet? Unrealistic at best.

  • pinkdrunkenelephants@lemmy.cafe · 9 months ago

    People like that are pedo apologists and the fact that they’re not being banned from the major Lemmy instances tells us all we need to know about those worthless shitholes.

  • Meowoem@sh.itjust.works · 9 months ago

    Your statement is "I don't know what I'm talking about, but I have strong opinions." That's understandable, but if we really care about harm reduction, then it has to be an evidence-based and science-backed policy.

    I have no idea what the right thing to do is but I want whatever helps mitigate risk and harm.

  • ZILtoid1991@kbin.social · 9 months ago

    To those, who say “no actual children are involved”:

    What the fuck was the dataset trained on, then? Even regular art generators had the issue of "lolita porn" (not the drawing kind, but the "very softcore" kind with real kids!) ending up in their training material, and with current technology it's very difficult to remove it without redoing the whole dataset yet again.

    At least with drawings I can understand the point, as long as no one uses a model and it's easy to differentiate between real images and drawings (I've heard really bad things about those doing it in a "high art" style). Have I also told you how much of a disaster it would be if the line between real and fake CSAM were muddied? We already have moronic people arguing "what if someone matures faster than the others", like Yandev. We will have "what if someone gets jailed after thinking their stuff was just AI generated".

    • Chozo@kbin.social · 9 months ago

      Even regular art generators had the issue of “lolita porn” ending in their training material

      Source? I’ve never heard of this happening. I feel like it would be pretty difficult for material that’s not easily found on clearnet (where AI scrapers are sourcing their training material from) to end up in the training dataset without being very intentional.

    • xigoi@lemmy.sdf.org · 9 months ago

      What the fuck the dataset was trained on then?

      I’m pretty sure if you show an AI regular porn and regular pictures of children, it will be able to deduce what child porn looks like without any actual children being harmed.