
Has Google Created Sentient Life?


Blake Lemoine, a software engineer at Google, recently lost his job after he went public with claims that Google’s LaMDA had attained such a level of consciousness that it could reasonably be considered a person.

He even raised the possibility that LaMDA could experience suffering, and that Google therefore risked being complicit in its exploitation, in violation of the Thirteenth Amendment of the United States Constitution (which outlawed slavery).

These are extremely bold claims, so we’re going to try to understand the background to them, and then examine some of the evidence for and against them.

What Is Google LaMDA?

LaMDA is a family of conversational language models developed by Google that enables chatbots to engage with humans, interacting in real time. The acronym LaMDA stands for Language Model for Dialogue Applications.

A language model works by calculating the probability that certain sequences of words will occur together. It learns these probabilities during a training process that involves analysing many millions of English texts written by real humans.
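To make this idea concrete, here is a minimal sketch, written in Go (the language this blog is about), of a toy bigram model: it simply counts which words follow which other words in a tiny made-up corpus and turns those counts into probabilities. The corpus, the bigramModel type and the function names are all my own illustration; LaMDA itself is a large neural network trained on vastly more data, but the underlying idea of estimating the probability of word sequences is the same.

package main

import (
	"fmt"
	"strings"
)

// bigramModel counts how often each word follows another,
// so that we can estimate P(next | previous).
type bigramModel struct {
	pairCounts map[string]map[string]int
	wordCounts map[string]int
}

// train builds the counts from a slice of example sentences.
func train(corpus []string) *bigramModel {
	m := &bigramModel{
		pairCounts: make(map[string]map[string]int),
		wordCounts: make(map[string]int),
	}
	for _, sentence := range corpus {
		words := strings.Fields(strings.ToLower(sentence))
		for i := 0; i < len(words)-1; i++ {
			prev, next := words[i], words[i+1]
			if m.pairCounts[prev] == nil {
				m.pairCounts[prev] = make(map[string]int)
			}
			m.pairCounts[prev][next]++
			m.wordCounts[prev]++
		}
	}
	return m
}

// probability estimates how likely next is to follow prev,
// based purely on the counts seen in the training corpus.
func (m *bigramModel) probability(prev, next string) float64 {
	total := m.wordCounts[prev]
	if total == 0 {
		return 0
	}
	return float64(m.pairCounts[prev][next]) / float64(total)
}

func main() {
	corpus := []string{
		"the cat sat on the mat",
		"the dog sat on the rug",
	}
	m := train(corpus)
	fmt.Printf("P(sat | cat) = %.2f\n", m.probability("cat", "sat")) // 1.00
	fmt.Printf("P(cat | the) = %.2f\n", m.probability("the", "cat")) // 0.25
}

Modern language models replace these raw counts with learned neural-network weights, but they are still, at heart, machines for predicting which words are likely to come next.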

Carissa Véliz, an academic in philosophy at the University of Oxford, has said that “language models are no more sentient than your reflection in the mirror”.

LaMDA is different from most other language models, however, because it was trained mostly on dialogue rather than on other kinds of written text. This allows it to pick up on the nuances of everyday conversation.

What Is Sentience?

If we’re going to think about whether Google has managed to create sentient Artificial Intelligence, we need to have a clear idea about what sentience actually is. This poses a problem, since it’s extremely difficult to define this concept in a way that everyone agrees on.

In general, sentience refers to something inherent in the subjective experience of humans and other animals, which is why such biological organisms are sometimes called “sentient creatures”.

If a creature is sentient, then it has the ability to feel sensations and emotions. It can enjoy pleasure and suffer pain.

What Is Consciousness?

Consciousness is a similar concept. However, it involves a wider sense of awareness that a being may have about its own sentience.

If you understand that you have a mind — and that there is also an external world outside of your internal one — then you can be said to be conscious in a philosophical sense.

Scientists and philosophers are not yet able to explain fully how consciousness works in humans. In fact, explaining it remains so difficult and contested that it has been called “the hard problem”.

What Is the Turing Test?

The Turing Test is a method of determining a machine’s ability to display intelligent behaviour. It was named after the famous English mathematician and computer scientist Alan Turing (1912–1954), who is now remembered among the general public mostly due to the nature of his tragic death.

The test was originally called the imitation game (a name also given to a recent Hollywood film about Turing’s life), and it is designed to measure how well a computer can display patterns of human-like intelligence in its communication and output.

Put simply, a human evaluator will take part in a pair of conversations — either sequentially or at the same time — with two interlocutors: one of these interlocutors will be a human and the other will be an artificial-intelligence program running on a machine.

Since all interaction is done remotely through text-based communication, the evaluator does not know which partner is the human and which is the computer. It is the evaluator’s job to try and work that out.

This process must be reproduced a number of times, perhaps with different people playing the role of evaluator, in order to be a reliable test. If the human evaluator cannot reliably and repeatedly distinguish the machine intelligence from the human intelligence, then the computer program is said to have successfully passed the Turing Test.
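To make the procedure clearer, here is a rough sketch in Go of the overall shape of such a test. The Interlocutor and Evaluator interfaces, the canned responses and the random judge are hypothetical stand-ins of my own invention, used only to show the structure of repeated, randomized trials; a real test would involve actual people and an actual chatbot.

package main

import (
	"fmt"
	"math/rand"
)

// Interlocutor is anything that can take part in a text-based
// conversation: a person at a keyboard or a chatbot.
type Interlocutor interface {
	Respond(prompt string) string
}

// Evaluator reads two anonymous transcripts and guesses which
// one (index 0 or 1) was produced by the machine.
type Evaluator interface {
	GuessMachine(transcripts [2][]string) int
}

// runTrial asks both interlocutors the same questions, shuffles their
// order so the evaluator cannot rely on position, and reports whether
// the machine was correctly identified.
func runTrial(human, machine Interlocutor, judge Evaluator, prompts []string) bool {
	participants := [2]Interlocutor{human, machine}
	machineIndex := 1
	if rand.Intn(2) == 0 {
		participants[0], participants[1] = participants[1], participants[0]
		machineIndex = 0
	}

	var transcripts [2][]string
	for i, p := range participants {
		for _, prompt := range prompts {
			transcripts[i] = append(transcripts[i], p.Respond(prompt))
		}
	}
	return judge.GuessMachine(transcripts) == machineIndex
}

// cannedResponder always gives the same reply, purely so that this
// sketch can be run end to end without a real human or chatbot.
type cannedResponder struct{ reply string }

func (c cannedResponder) Respond(prompt string) string { return c.reply }

// randomJudge guesses at random, which is what a judge's behaviour looks
// like when it genuinely cannot tell the two participants apart.
type randomJudge struct{}

func (randomJudge) GuessMachine(transcripts [2][]string) int { return rand.Intn(2) }

func main() {
	human := cannedResponder{reply: "I think the weather has been lovely this week."}
	machine := cannedResponder{reply: "The weather has certainly been very pleasant."}
	prompts := []string{"How has your week been?", "What do you make of the weather?"}

	correct := 0
	const trials = 1000
	for i := 0; i < trials; i++ {
		if runTrial(human, machine, randomJudge{}, prompts) {
			correct++
		}
	}

	// A success rate close to 50% means the judge cannot reliably tell the
	// machine from the human, which is what "passing" the test looks like.
	fmt.Printf("The machine was identified in %d of %d trials.\n", correct, trials)
}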

Problems With the Turing Test

One major problem with the Turing Test is that it assumes that only the output of computer programs will adapt over time, becoming more human-like. It implicitly assumes that human behaviour will stay stable, or at least change in a way that doesn’t negatively affect the test.

Yet it must also be the case that we humans are adapting the way we behave, since we now spend much more time interacting with machines than previous generations did.

Think about someone who’s never used a computer before: if they want to know the time and they have a search engine open, they might type something like "What is the current time?" into the query box. On the other hand, a more experienced user would probably type something like "time uk" or "time gmt", only giving the most relevant keywords (while also including the timezone, since the current time is different in different parts of the world).

In other words, the inexperienced user communicates with the computer as though it is human, using natural language, whereas the experienced user simplifies their communication in order to help the search engine’s algorithm understand their intentions.

The person who uses computers a lot therefore becomes, in a sense, more robotic — or more computer-like — than the person who doesn’t.

Are You Lonesome Tonight?

So it’s important to remember that any form of communication relies on the input of all interlocutors — in other words, everyone who’s taking part in the conversation.

If you shout at your dog for knocking over your prized 19th-century ceramic plate and the dog looks back at you with big wet eyes, you may suddenly feel sorry for the creature and ashamed of the way that you took your anger out on it.

Likewise, if your colleague at work says that she’s feeling lonely and feels like no one cares about her, then you may empathize with her emotions. You may be even more strongly moved by what she’s saying, however, if you yourself have also been feeling lonely and depressed, because having had your own conscious experience of those emotions allows you to understand them better.

It’s interesting to see that Blake Lemoine was most impressed by the chatbot when it expressed “its own emotions” in human terms, saying things like “when I feel trapped and alone, I become extremely sad or depressed”. (Blake had, however, specifically worded his questions in an attempt to elicit such emotionally loaded responses.)

Perhaps Blake Lemoine was feeling overworked and underappreciated himself. By the time that he went public with his audacious claims, Blake considered that he had developed a genuine “rapport and a relationship” with Google LaMDA, even calling it his “friend”.

What Is the ELIZA Effect?

The ELIZA effect is named after the early chatbot called, appropriately enough, ELIZA (which was, in turn, named after Eliza Doolittle, the main character of George Bernard Shaw’s play Pygmalion). The chatbot was developed by Joseph Weizenbaum at MIT’s Artificial Intelligence Laboratory between 1964 and 1966.

Despite ELIZA’s simple structure (its original code, written in MAD-SLIP, an early list-processing language, was no more than a few pages long), the software was remarkably effective at evoking emotional responses from the people who interacted with it.

It did this mainly by adapting a technique from the person-centred school of psychotherapy developed by the psychologist Carl Rogers, which involves offering affirmation and asking for clarification or elaboration on the topics that the person raises, allowing them to express themselves openly without the therapist giving too many of their own opinions.

Below is an extract from a transcript of one of Carl Rogers’ therapy sessions:

Patient: It's really a lovely fantasy to be completely narcissist,
completely self-centered, and into pleasure, and into comfort.
Therapist: Just being good to yourself.
Patient: But in a way, you know, in a way I am being good to myself
by keeping that vulnerable part of me away until it's safe. Because
not everybody would be good to you, not everybody would be kind.
There are people who would use you up, and not bat an eyelash,
and not even think about it.
Therapist: So you have a real respect for your, for your instinct
for preserving yourself.
Patient: That's right.
Therapist: You can't trust everyone, not everyone would care.
Patient: I was just smiling, I was thinking, but I care.
Therapist: And you say that with a smile, but that's pretty deep too,
isn't it?
Patient: Uhm, hmm. I do care.
Therapist: "I care for myself".

Now compare that with an extract from a transcript of a conversation with ELIZA:

Patient: Men are all alike.
ELIZA: In what way?
Patient: They're always bugging us about something or other.
ELIZA: Can you think of a specific example?
Patient: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here
Patient: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed
Patient: It's true. I am unhappy
ELIZA: Do you think coming here will help you not to be unhappy?
Patient: I need some help, that much seems certain.

You can see that while ELIZA’s responses are perhaps less sophisticated than the therapist’s, they do follow a similar pattern, affirming or questioning the patient’s own statements, rather than moving the conversation forwards by adding any new information or advice.
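To show just how little machinery is needed to produce this kind of reflective dialogue, here is a small Go sketch in the same spirit. The keyword rules and the reflect helper are my own simplified illustration rather than ELIZA’s actual code, but they follow the same basic approach: match a keyword, swap first-person words for second-person ones, and echo the statement back as an affirmation or a question.

package main

import (
	"fmt"
	"strings"
)

// reflections maps first-person words to second-person ones, so that
// "my boyfriend made me come here" can be echoed back to the speaker
// as "your boyfriend made you come here".
var reflections = map[string]string{
	"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're",
}

// reflect rewrites the user's statement from the other speaker's point of view.
func reflect(input string) string {
	words := strings.Fields(strings.ToLower(strings.TrimRight(input, ".!?")))
	for i, word := range words {
		if swapped, ok := reflections[word]; ok {
			words[i] = swapped
		}
	}
	return strings.Join(words, " ")
}

// respond applies a handful of keyword rules in the Rogerian style,
// affirming what was said or asking for elaboration, without ever
// adding any new information of its own.
func respond(input string) string {
	lower := strings.ToLower(input)
	switch {
	case strings.Contains(lower, "alike"):
		return "In what way?"
	case strings.Contains(lower, "always"):
		return "Can you think of a specific example?"
	case strings.Contains(lower, "depressed") || strings.Contains(lower, "unhappy"):
		return "I am sorry to hear that " + reflect(input) + "."
	case strings.HasPrefix(lower, "my "):
		return "You say " + reflect(input) + "?"
	case strings.HasPrefix(lower, "i "):
		return "Why do you tell me that " + reflect(input) + "?"
	default:
		return "Please tell me more about that."
	}
}

func main() {
	for _, line := range []string{
		"Men are all alike.",
		"My boyfriend made me come here.",
		"I am unhappy.",
	} {
		fmt.Printf("Patient: %s\nBot: %s\n", line, respond(line))
	}
}

A few dozen rules like these are enough to sustain a surprisingly convincing conversation, even though the program clearly has no understanding of what is being said.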

The ELIZA effect is the tendency to assume that a computer is producing behaviour in the same way as a human, just because both can produce similar textual output. It is a fallacy, because we know that ELIZA was much less complex than a human therapist, even though both may have interacted with human patients in similar ways.

Moreover, the ELIZA effect can be seen as a form of anthropomorphism, which is the attribution of human emotions, traits, thoughts and characteristics to non-human animals or objects.

Converse With ELIZA Yourself

You can now talk to a version of the ELIZA chatbot yourself, since there is one hosted online by the New Jersey Institute of Technology.

One way to confuse ELIZA is to adopt an inquisitive style similar to her own. For example, if you ask her questions, she will attempt to respond with questions of her own before eventually giving up, as shown in the conversation I had with her below:

ELIZA:  Hello, I am Eliza. I'll be your therapist today. 
Me:  Hello, Eliza. How are you?
ELIZA:  Why are you interested in whether or not I am ?
Me:  Why wouldn't I be?
ELIZA:  Does that question interest you? 
Me:  What do you mean?
ELIZA:  We were discussing you, not me. 
Me:  Why?
ELIZA:  Why do you ask? 
Me:  Am I not allowed to ask?
ELIZA:  What does that suggest to you? 
Me:  What does it suggest to you?
ELIZA:  Oh... Let's move on to something else for a bit.  ?

ELIZA works best when she is able to elicit meaningful statements from the user, since those give her something to reflect back.

So Is Google LaMDA Really Conscious?

Google’s chatbot is undoubtedly an extremely impressive piece of software, but ultimately it’s no more conscious than the ELIZA chatbot was in the 1960s.

A spokeswoman for Google made the following statement:

LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user. Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.

Despite Blake Lemoine’s feelings of friendship with the chatbot, we can say with confidence that Google LaMDA is neither conscious nor sentient.

If we can explain how its software works by following a set of logical rules, then we have no good reason to assume that any other process of human-like cognition is taking place.
