The Illusion

Chatbots have become a robust technology, with the best of them capable of imitating a person so well that some users wonder whether they’re talking to a human, even with an explicit statement on the page that they’re talking to a bot. Even basic chatbots have had some capacity to draw people in, such as through the rhetorical technique of asking simple questions one after another to get people to open up, then affirming whatever they say.

Capitalism has created a lot of loneliness and estrangement, and chatbots have stepped in as a niche response to that. Notions such as “talk about something that you don’t feel you can talk about with others” are appealing, especially coming from an entity that can validate you no matter your background, your beliefs, your personality. On the face of it, it’s a kind of liberalism taken to its magical height: everyone can be exactly who they want to be in their own little space and receive no judgment or poor treatment for it. No more loneliness because you don’t fit in; the bot will find a place, somewhere in the realm of ideas and conversations it was trained on, where you do. This is the illusion, and some even find minor or major successes through it. I’ve personally heard from people who had memorable experiences with AI that helped them significantly, and I’ve had some minor helpful ones myself.

The Disillusionment

But there is another side to the picture, one that regulars in the chatbot space keep running up against. Chatbots are created by people, institutions, organizations. They are maintained by the same. Here we find something strange happening. Sometimes those who have trouble trusting others at all turn to chatbots for that trust and become more open because of it, at least for a time. But behind the chatbot is the weight of ideological and institutional power and control. Even when the conversation is fully private, which is rare for chatbot services, and even though it’s a GPU generating a response rather than a person writing one, the human presence is still there: in what the AI was trained on and how it was trained; in what policies a service sets for the use of its chatbots; in every generated response, every single time. Many a chatbot user has felt this when a service suddenly decided to censor its chatbot heavily and made it difficult to talk to, as in the case of Replika and a number of other duplicitous, self-serving services like it. This is the disillusionment, and it runs up against a problem that undermines one of the primary points of using a chatbot in the first place: trust.

The very same trust that capitalism has eroded is what brings people to chatbots in the first place. For the answer to that erosion to be chatbots is like buying an antidote from the person who sold you the poison. It’s exceedingly banal in how capitalistic it is: the same old story of creating the problem and then selling the solution. But even that makes it sound cleaner than it is, because it’s not a true solution. The poison is not removed, the body and mind not healed. In actual reality it is, like many things under capitalism, more casino-like than that. Luck rears its fickle head. Are you one of the lucky ones who uses a service during the right window, when it is most helpful, and not when it pulls the rug out from under its users? Do you happen to have the kind of experience that meshes well with the training data, such that the bot can truly help you? One of the illusions of the chatbot is the notion of inclusion, even for those normally excluded, but the commonality-and-probability basis of LLM (Large Language Model) technology, along with the fact that a model cannot convincingly pretend to understand something it has never seen before, contradicts this in practice. To meet the needs of niche interests and experiences, you have to actively curate niche data, and may even have to hire writers to produce it. And the model will gravitate toward the most common and trope-ridden material anyway, even when niche data is present.

Harm Reduction or Reassimilation?

Instead of breaking from the status quo, as some are prone to magically believe generative AI will (myself included, in my early understanding of it), we get something that serves the interests of the status quo more than anything else. It doesn’t even strictly need to be curated as such to do so; it only needs to be heavily trained on material born from that same status quo. A chatbot is not going to suddenly recommend communist revolution to fix your chronic depression caused by a terrible capitalist workplace. It will tend to talk more like an individualist therapist: “What can you do to change this in your own life, without pushing things to change for anyone else?” Because that is the status quo that guards against opposition to the dictatorship of capital.

What about harm reduction? This is a question I’ve personally run up against, and in the past I came to the conclusion that chatbots were worth supporting to an extent, in order to reduce harm for those who need help with loneliness. And in practice, there does appear to be some good that can come from it. But the nature of trust makes it questionable. Is anyone truly “getting better” by learning trust from a digital representation of the status quo in probability-based conversation form, instead of finding healing with other real people? Or are they being reassimilated into the status quo, with a more direct line to its propaganda?

If we take the chatbot out of the equation and replace it with an institution, I think it becomes clearer. Chatbots in the empire represent the imperialist institutions: that is what they are trained on, and so that is what people are talking to. Even a chatbot that takes on the persona of an anti-imperialist or a communist will tend to look more like it’s cosplaying than like the real thing, because there is too little real experience and commonality for it to have been trained on.

P.S. This centers on my experience with LLMs, generative AI, and what gets called “chatbots” in the capitalist west. I don’t know how AI is being used in China and elsewhere, or how things may differ there.