most of the time you'll be talking to a bot there without even realizing. they're gonna feed you products and ads interwoven into conversations, and the AI can be controlled so its output reflects corporate interests. advertisers are gonna be able to buy access and run campaigns. based on their input, the AI can generate...
To be fair, subredditsimulator was most likely never intended to do what you are thinking. As you develop features, you need a test data set to check it against before you go live with it. My understanding of subredditsimulator was that it was reddit’s test bed to be able to try things before they get widely rolled out.
Nah, it was just a bunch of bots trained on data from different subreddits that responded to each other in a glorious display of shit posting.
I don’t think it was a testbed for anything. It was just a fun tech project that yielded hilarity. It was created because the results were funny, not as a genuine bid to create realistic conversations.
It was connected to the GPT-2 project, so it absolutely was a genuine bid.
As far as I’m aware, despite being worked on by a reddit admin, it never had any real official use.
There used to be an IRC bot called MegaHAL that would take the message data you fed it, infer a rough grammar from it, and recombine it into hopefully sensical new phrases. Some variants could also use that data to work out which phrases made good responses to other phrases. The Subreddit Simulator bots were based on the same underlying concept.
People have been playing with the idea for a very long time, and the programming is usually not hugely complex and is pretty well documented — probably a weekend project for an experienced programmer to wire one into something. These bots are arguably the precursor to the LLMs we have now, even if that's like comparing a calculator from the 80s to a modern smartphone. They manage to produce something coherent maybe 5% of the time, and while watching them try can be endlessly entertaining and genuinely endearing, they're almost always completely useless.
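The "weekend project" claim checks out: the core of a bot like this is just a Markov chain built from a text corpus. Here's a minimal toy sketch of the idea (my own illustration, not MegaHAL's or Subreddit Simulator's actual code) — map each pair of words to the words that followed it, then walk the map randomly:

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    """Map each run of `order` words to the words seen right after it."""
    chain = defaultdict(list)
    for line in corpus:
        words = line.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            chain[key].append(words[i + order])
    return chain

def generate(chain, max_words=15):
    """Walk the chain from a random starting state until it dead-ends."""
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(max_words):
        choices = chain.get(state)
        if not choices:
            break
        out.append(random.choice(choices))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = [
    "the bot said something funny today",
    "the bot said something weird today",
    "something weird happened on the subreddit",
]
print(generate(build_chain(corpus)))
```

Because shared word pairs ("something weird") let the walk hop between sentences that never co-occurred, the output stitches fragments of different comments together — which is exactly where the 5%-coherent shitposting charm comes from.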