I got baited into posting a picture of a child eating popcorn on Discord, not knowing it was associated with CSAM. The account got banned, but I don't care so much about the account as about the legal consequences. Has anyone heard of legal action against people posting it?
No. I see no way to prosecute over popcorn. If a country actually did, I would find the exit. Fast!
Not condoning child abuse
But popcorn?
Yeah.
Discord needs to moderate, so they ban, and that fulfills their legal obligations.
If it was CSAM and Discord thinks it's bad enough, they will probably forward the information to the authorities.
Now, if the authorities think it's worth an investigation and give it the proper priority, they will start one. If the investigation concludes and they still think you done goofed badly enough, they will pursue you under criminal law.
See how many ifs there are and how many people have to sign off on it? There's quadruple human review at minimum in there, and there's no way they think they can win on those charges when the evidence is goddamn popcorn.
Also, you can appeal a ban. I got auto banned on discord about 2 months ago and I appealed because I know for a fact I did nothing wrong - I was literally asleep and my last messages did not even contain profanity. I was so mad cause that account is important to me. They reinstated it - to their credit - in a matter of hours. Still, could’ve done without the heart attack.
TL;DR you’re more than safe as long as it wasn’t actual CSAM.
Yeah, like I found the exact same image posted on Twitter two years ago and it is still up.
Helpful video: https://www.youtube.com/watch?v=Kyc_ysVgBMs
But basically, the picture was a cropped frame from CSAM content, so their systems thought you were posting CSAM when you were not.
About the legal consequences, I am not a lawyer, but I don't think you will be visited by the police anytime soon, since the picture you posted isn't CSAM by itself, just a cropped portion that does not contain the material itself.
Edit: As someone said, it goes through multiple human reviews.
No, they won't send the police after you. If they wanted to, you wouldn't be online right now. It's just their stupid AI auto-flagging things.
Can someone explain this to me, cus huh?
Websites have false positives all the time, and while it sucks, it's infeasible for them to have human reviewers checking everything; it's better to have false positives than false negatives. What isn't acceptable is that the appeals process uses the exact same models as the flagging process, so it gets the exact same false positives and false negatives.
Pic related, as it was one of the first cases to reveal how broken the appeals process on most social media platforms is.