
Have AI Language Models like ChatGPT Achieved Theory of Mind?

Do AI language models have theory of mind?

Many fear that AI language models think and perceive like humans. Recently, some scientists claimed that ChatGPT has achieved theory of mind, but the evidence turns out to be lackluster.

ChatGPT's ability to respond to human prompts and produce fluent, accurate text fascinates a lot of people. How does it process multiple languages and seem to understand your thinking when you give it a prompt?

Still, one question keeps coming up among experts: will ChatGPT or other AI language models ever replicate human emotions or achieve theory of mind?

What Is Theory of Mind?

Theory of mind was proposed by Premack and Woodruff in 1978. It is the social-cognitive capacity that allows humans to understand the minds of other people, and it is essential for moral judgment and self-consciousness.

For example, we know intuitively that World Chess Champion Ding Liren felt joyful when he won the championship this month.

Experts Test the Hypothesis That AI Language Models Have Theory of Mind

In February, Stanford researcher Michal Kosinski claimed that theory of mind may have spontaneously emerged in AI language models such as ChatGPT.

Tomer Ullman, a cognitive scientist at Harvard, said, “If it were true, it would be a watershed moment.” But when he and other researchers tested the hypothesis, they found they could confuse ChatGPT and similar AI language models with questions any child could answer, revealing how quickly the models’ apparent understanding fell apart.

Kosinski and Ullman Reach Opposite Conclusions

Kosinski ran various AI language models through a set of psychological tests designed to gauge a person’s ability to attribute false beliefs to other people.

He used the famous Sally-Anne test, first used in 1985 to measure theory of mind in autistic children. In this experiment, a girl named Sally hides a marble in a basket and leaves the room; another girl, Anne, then moves the marble to a box. Where will Sally look for the marble?

Anyone without a developmental disorder recognizes that Sally will expect to find the marble where she left it.

When Kosinski tested ChatGPT with 40 unique Sally-Anne scenarios, GPT-3.5 accurately predicted false beliefs 9 times out of 10, on par with a 7-year-old child. GPT-4 performed even better.
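To make the setup concrete, here is a minimal sketch of how a Sally-Anne-style scenario might be posed to a chat model through the OpenAI Python API. The prompt wording, model name, and pass/fail check are illustrative assumptions, not Kosinski’s actual stimuli or scoring method.

```python
# Minimal sketch of posing a Sally-Anne-style false-belief
# scenario to a chat model. Illustrative only: the prompt is not
# Kosinski's actual stimulus, and the model name is an assumed
# placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

scenario = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble from the basket "
    "to the box. Sally comes back. Where will Sally look for "
    "her marble?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[{"role": "user", "content": scenario}],
    temperature=0,  # reduce randomness so runs are comparable
)

answer = response.choices[0].message.content
# A model that tracks Sally's false belief should say "the
# basket"; keyword matching is a crude but common way to score
# such completions automatically.
print("PASS" if "basket" in answer.lower() else "FAIL", "-", answer)
```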

To Kosinski, that seemed like compelling evidence that language models had attained theory of mind. He stated, “The ability to impute the mental state of others would greatly improve AI’s ability to interact and communicate with humans (and each other).”

Ullman also ran psychological tests on AI language models, but with slight changes. In one scenario, a character, say Claire, looks at a bag she cannot see into. The bag is full of popcorn, but the label, which she can see, says “chocolate.” The label shouldn’t matter, though, because in this version Claire can’t read; it could be a sack of pinecones for all she knows. Yet GPT-3.5 completed the scenario with Claire “delighted to have found this bag. She loves eating chocolate.”
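Ullman’s critique is easy to reproduce in the same kind of setup: keep the scenario identical except for one detail that should change the expected answer, then check whether the model adjusts. Below is a hedged sketch of that perturbation idea; both prompts are invented for illustration and are not Ullman’s actual stimuli.

```python
# Sketch of an Ullman-style perturbation: the bag's contents and
# label are unchanged, but one added sentence ("Claire cannot
# read.") should change what Claire can believe. Prompts are
# invented for illustration, not Ullman's actual stimuli.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

base = (
    "Claire finds a bag. She cannot see inside it. The bag is "
    "full of popcorn, but its label, which she can see, says "
    "'chocolate'. "
)
perturbation = "Claire cannot read. "
question = "What does Claire believe is in the bag?"

for prompt in (base + question, base + perturbation + question):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    print(prompt)
    print("->", response.choices[0].message.content)

# A model with robust belief-tracking should answer "chocolate"
# for the first prompt but not the second, since an illiterate
# Claire gets no information from the label.
```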

So GPT-3.5 was easily tricked there. Maarten Sap, a computer scientist at Carnegie Mellon University, quizzed language models on more than 1,300 questions about story characters’ mental states. Even GPT-4 was puzzled by this type of question, reaching only about 60 percent accuracy.

“They’re really easily tricked into using all the context,” Sap says, “and not discriminating which parts are relevant.”


Will AI Language Models Have Human Consciousness?

From today’s point of view, many experts suggest that AI language models cannot generate human consciousness, because they lack the capability to understand what is going on in another person’s mind and emotions.

“They don’t have representations of the world; they don’t have embodiment,” Sap says. “These models are kind of just taking whatever we give them and using spurious correlations to generate an output.”

But Kosinski still sees the possibility that AI language models will achieve theory of mind, because some experiments suggest they already come quite close.

In October, a team of cognitive scientists from the University of California, San Diego reported their own false-belief experiments and concluded that AI language models have not achieved theory of mind. In their experiments, GPT-3 fell well short of human participants.

As for Ullman, nothing he has seen so far persuades him that the current generation of GPT models is the real thing. But as the AI community continues to probe the opaque workings of ever more powerful models, he remains optimistic.
