It takes weeks to get the actual data package after requesting it, so do it sooner rather than later, especially if you’ve made high-effort research, info, or debunk posts and are worried about getting banned (I’ve never seen a Reddit group more diligent about researching and sourcing their claims than the communists, and it legit helped me ditch liberalism for communism). The package has your entire Reddit account history in the form of CSV files, and it’s the best way to back up your content!
It also gives you a list of Reddit content IDs for everything that isn’t yours but that you’ve voted on, saved, or hidden. You can then use those content IDs to look up the original item, either on Reddit or on an archive site.
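For example, here’s a rough sketch of turning those IDs into lookup URLs. The CSV filename and the "id" column are assumptions about how the export is laid out, so check them against your own package first:

```python
import csv

def lookup_url(fullname: str) -> str:
    # /api/info resolves both posts (t3_) and comments (t1_) by fullname
    return f"https://www.reddit.com/api/info.json?id={fullname}"

# "post_votes.csv" and its "id" column are assumptions -- adjust to your export
with open("post_votes.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        fullname = row["id"]
        if not fullname.startswith(("t1_", "t3_")):
            fullname = "t3_" + fullname  # some exports store bare post ids
        print(lookup_url(fullname))
```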
And the downloaded content is in raw Markdown, perfect for uploading to any Reddit alternatives wink wink.
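Something like this would spit each comment body out as its own .md file. Again, "comments.csv" and its "id"/"body" columns are what I’d expect the export to look like, not a guarantee:

```python
import csv
from pathlib import Path

out = Path("markdown_backup")
out.mkdir(exist_ok=True)

# assumed export file and columns -- verify before running
with open("comments.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # bodies are already raw Markdown, so write them out as-is
        (out / f"{row['id']}.md").write_text(row["body"], encoding="utf-8")
```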
Also, once you receive it, use one of the overwrite scripts to overwrite your messages rather than delete them. Reddit won’t delete the original text internally, but most archival services will pick up an overwrite; once you delete, there’s nothing you can do to get the archives to drop the original. Take it from someone who thought they were airtight and got doxxed anyway… less exposure is better across the board.
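This is the general shape of the overwrite-then-delete idea in PRAW, not any of the existing scripts verbatim. The credentials are placeholders for your own script-type app:

```python
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder
    client_secret="YOUR_CLIENT_SECRET",  # placeholder
    username="YOUR_USERNAME",            # placeholder
    password="YOUR_PASSWORD",            # placeholder
    user_agent="overwrite-sketch/0.1",
)

PLACEHOLDER = "[overwritten]"

# walk your recent comments: overwrite first, then delete
for comment in reddit.user.me().comments.new(limit=None):
    comment.edit(PLACEHOLDER)  # archives tend to pick up the edited body
    comment.delete()
```

Note this walks your profile listing, so it only reaches whatever Reddit still shows there, which is exactly the limitation I get into next.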
I’ve actually been meaning to write a script that takes the data package, parses it, and uses the content IDs in it to overwrite and/or delete everything from your Reddit account. Every other Reddit wiping script I know of uses either the API or the web client to find your content, but Reddit stops displaying your old stuff in your account history once you generate enough content (the content is still accessible on the site, through searches or direct links for example). That isn’t a problem with the data package, because it allegedly has everything. Don’t know when I’ll get around to it though.
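If anyone wants to beat me to it, here’s roughly the shape I have in mind. The PRAW calls are real, but "comments.csv"/"posts.csv" and their "id" columns are assumptions about the export layout, so verify against your own package:

```python
import csv
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder
    client_secret="YOUR_CLIENT_SECRET",  # placeholder
    username="YOUR_USERNAME",            # placeholder
    password="YOUR_PASSWORD",            # placeholder
    user_agent="package-wipe-sketch/0.1",
)

PLACEHOLDER = "[overwritten]"

def ids_from(path: str):
    # yield the "id" column from an export CSV (assumed column name)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield row["id"]

# comments: overwrite the body, then delete
for cid in ids_from("comments.csv"):
    c = reddit.comment(id=cid)
    c.edit(PLACEHOLDER)
    c.delete()

# posts: only self posts have an editable body, but everything gets deleted
for pid in ids_from("posts.csv"):
    s = reddit.submission(id=pid)
    if s.is_self:
        s.edit(PLACEHOLDER)
    s.delete()
```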
These scripts can also generate a data dump for the account
neat