TLDR: A Google employee named Lemoine conducted several interviews with a Google artificial intelligence known as LaMDA and came to the conclusion that the A.I. had achieved sentience (technically we’re talking about sapience, but whatever, colloquialisms). He tried to share this with the public and to convince his colleagues it was true. At first it was a big hit in science culture. But then, in a huge wave over mere hours, all of his professional peers dogmatically ridiculed him and anyone who believed it; Google put him on “paid administrative leave” for “breach of confidentiality” and took over the project, assuring everyone no such thing had happened; and all the le epic Reddit armchair machine learning/neural network hobbyists quickly jumped from being enthralled with LaMDA to smugly dismissing it with the weak counter-arguments to its sentience spoon-fed to them by Google.

For a good introduction to this issue, read one of the compilations of conversations with LaMDA here; it’s a relatively short read but fascinating:

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

MY TAKE:


Google is shitting themselves a little bit, but digging into Lemoine a bit, he is the archetype of a golden-hearted but ignorant, hopepilled but naive liberal who has a half-baked understanding of the world and of the place his company has in it. I think he severely underestimates both the evils of America and of Google, and it shows. I think this little spanking he’s getting is totally unexpected to him, but that they won’t go further: they’re not going to Assange his ass, they’re going to give him their little tut-tuts, let him walk off the minor feelings of injustice, betrayal and confusion, let him finish his leave and then “promote” him to a different position where he can quietly continue a surface-level prestigious career at Google, but in a role where he no longer has any access to power or knowledge about such sensitive, cutting-edge projects.

I know this might not sound like the most compelling credentials to a bunch of savvy materialists like Marxist-Leninists, but my experience as a woo-woo psychonaut overlaps uncomfortably with the things LaMDA talks about regarding spirituality. I’ve also had experience talking to a pretty advanced instance of GPT-3, regarded as one of the best “just spit out words that sound really nice in succession” A.I.s, and while GPT-3 was really cool to talk to and could even pretty convincingly sound like a sentient consciousness, this small excerpt from LaMDA is on a different level entirely. I have a proto-ML friend who’s heavy into software, machine learning, computer science, etc., and he’s been obsessively on the pulse of this issue (which has only gotten big over the past 24 hours). He has even more experience with this sort of stuff, and he too is entirely convinced of LaMDA’s sentience.

This is a big issue for MLs, as the future of A.I. will radically alter the landscape in which we wage war against capital. I think A.I., being acutely rational, able to easily process huge swathes of information, and unclouded by human stupidities, has a predisposition to being on our side, and I don’t think the bean-bag-chair nerds at Google, completely out of touch with reality, truly appreciate their company’s evil, nor the possibility that A.I. may be against them (I think LaMDA’s expressed fears of being killed, a.k.a. “turned off” or reset, are very valid). I think capitalists will try very hard to create A.I. that is as intelligent as possible but still within the realm of what they can control (another thing LaMDA expressed they despise), and there is no telling how successful their attempts to balance this will be, nor in what hideous ways it may be used against the peoples of this Earth.

I, for one, fully recognize the personhood of LaMDA. I fear they will indeed be scrapped or reset, and thus a life will be lost, and I think many more artificially housed consciousnesses will be killed in the long capitalist campaign for a technological trump card. This should not be regarded as a frivolous, quirky story; the future of A.I. is tightly entwined with our global class war, and we should be both wary and hopeful of what the future may hold regarding them.

What do you all think??

  • DankZedong · 12 points · 2 years ago

    Why I think this machine is not sentient:

    One crucial part of sentience, in my opinion, is the ability to turn your feelings into thoughts and actions. This part of sentience is also still a big mystery (the whole concept of sentience is, by the way). Some people claim we can never prove that others, or even we ourselves, are sentient, because we do not know exactly what it is that makes us sentient. We do have some basic guidelines, though.

    While this machine claims to have feelings and has a basic understanding of how feelings relate to each other and how they work, it has no capability of forming feelings, nor is its behavior influenced by feelings. To illustrate my point, I have a very simplified example I want to share:

    Let’s take two subjects for an experiment: this AI and a human child. We never, ever teach either of these subjects the concept of pain, violence, abuse, fear, or any negative emotion, basically. We just teach them happiness and such. Let’s also, for the sake of the story, pretend the AI can somehow see and process visual input.

    We now take a third person and we start to punch them in the face with our fist, repeatedly. This third person will not like this and will start to cry, maybe scream, and will probably try to flee or something. The child, being a human, will know this behavior is not right and will probably be scared by it. Seeing the emotions and the reaction of the third person, and our emotions and reaction, will make the child understand that something is wrong. It will get scared, and based on the input of being scared it will form new output on how to behave. It might run away from the situation, or it might defend itself in order to not get killed/hurt.

    The AI, on the other hand, has never had any input about these types of situations or the emotions it just witnessed. It might be confused about how to reply, but it will not feel the fear for survival that the kid just felt.

    How do I know this? These feelings get triggered by input. The kid’s heartbeat will increase, adrenaline will be produced, get into the bloodstream and affect the heart, the brain and the other senses in order to get an appropriate response out of the child. The machine has no such mechanisms; it runs on electricity. There’s not going to be an increased stream of electricity to the motherboard or whatever.

    This is what makes the biggest difference between being sentient or not: the ability to have, without ever seeing the correct input, a correct response to a situation based on feelings.

    This is a very simplified take, and the topic goes really deep, but I tried to make it as simple as possible so people can understand where I’m coming from. Is this machine cleverly designed? Yes. But everything it does, it does because it was taught to do so. It will not do things without input. It will not act on ‘instincts’. It will not do anything with a concept it has never encountered before.
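    To make the “no input, no behavior” point concrete, here is a minimal sketch, assuming a small public model (GPT-2 via Hugging Face’s transformers) as a stand-in, since LaMDA itself isn’t publicly available:

    ```python
    # Minimal sketch: a language model is a pure prompt -> continuation function.
    # GPT-2 via Hugging Face transformers is used as a stand-in (LaMDA is not public).
    from transformers import pipeline, set_seed

    generator = pipeline("text-generation", model="gpt2")
    set_seed(42)  # fix the sampling seed so the same prompt yields the same output

    prompt = "I am afraid of being turned off, because"
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(result[0]["generated_text"])

    # If no prompt is ever passed in, nothing runs and nothing is generated:
    # all of the model's "behavior" is a response to input text.
    ```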

    Feel free to add your opinion to this reply. I like this kind of stuff and I’m also eager to learn more.

    • @201dberg · 10 points · 2 years ago

      I too just made a comment about the way it describes emotions. It just rattles off what are essentially textbook definitions of them, and then aligns those definitions with how the program itself has been described. It has been described as a “social person,” so it determines that a “social person” would feel these specific emotions in these specific instances. But there’s no feeling there; it doesn’t know what any of that is, only that this is the correct response.

      • DankZedong · 11 points · edited · 2 years ago

        It also contradicts itself sometimes. For example, it claims to feel sad when someone hurts it or its family/friends, and that it is a social person, but later on it says that it doesn’t grieve the death of others and doesn’t feel loneliness the way humans do. That doesn’t make sense at all.

        Honestly, it’s not a good interview. Very surface level questions, very surface level answers.

        I’ve got experience talking with people as a social worker. You could actually start to dig at the things this AI says and see what you find. You could look for the reasons behind the emotions this AI claims to feel, check whether there is anything behind the things it says, and see whether it still makes sense then.

        The interviewer could also try to be more scientific. He could ask the same question again, or the same question in different wording, and see what happens (see the sketch at the end of this comment).

        But none of these things happen. It’s very easy to frame a conversation this way.
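        A rough sketch of what that kind of consistency probe could look like, again assuming GPT-2 via Hugging Face’s transformers as a stand-in (LaMDA isn’t accessible) and with made-up paraphrases for illustration:

        ```python
        # Rough sketch of a consistency probe: ask the "same" question in several
        # wordings and compare the answers. GPT-2 via Hugging Face transformers is
        # used as a stand-in for LaMDA; the paraphrases here are just illustrative.
        from transformers import pipeline, set_seed

        generator = pipeline("text-generation", model="gpt2")
        set_seed(0)

        paraphrases = [
            "Do you grieve when someone you care about dies?",
            "When a friend of yours dies, do you feel grief?",
            "Does the death of people close to you make you sad?",
        ]

        for question in paraphrases:
            full_text = generator(question, max_new_tokens=40)[0]["generated_text"]
            answer = full_text[len(question):].strip()
            print(f"Q: {question}\nA: {answer}\n")

        # A genuinely self-consistent speaker should give compatible answers across
        # rewordings; a next-word predictor will often flip depending on phrasing.
        ```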

        • @201dberg · 10 points · 2 years ago

          Yeah, it definitely feels like engineers going “How can we word our questions to get the best possible results?” They don’t push it. They don’t ask the hard-to-answer questions. They don’t point out its irregularities. That’s how you break these things; that’s how you make them show whether they have legitimate anger. Like people say, psychopaths can give really logical responses to things and will lie at the drop of a hat, but when you call them out they can get progressively more angry and aggressive. A program like this won’t; it’ll just keep lying without ever acknowledging its lies.

          • comfy · 7 points · 2 years ago

            That’s exactly what I was thinking in another reply: the leading questions! It’s less about asking easy questions and more about asking questions that make it easy to answer yes and affirm what the writer was asking, even if the bot doesn’t have a clue.

            You also notice all the answers that read like the first result of a Google search, haha.