Technology

AI Can Now Read Your Mind; Puts It In Writing


Scientists said on Monday that a new artificial intelligence system can non-invasively convert the brain activity of someone who is listening to a story, imagining one, or watching a silent film into a continuous stream of text.

A study by a four-person research team at the University of Texas at Austin, which included an Indian graduate student, has demonstrated that the system can produce understandable word sequences from brain activity detected by functional magnetic resonance imaging (fMRI) scans.

What Did The Researchers Find?

According to the researchers, the technology may one day assist patients who are cognizant but unable to talk due to a stroke or other medical conditions.

The brain-decoding AI system uses the same kind of computational technology that underpins OpenAI’s ChatGPT, the AI system that engages in conversational interaction.

The purpose of language decoding, according to Jerry Tang, a computer science doctoral student who led the project, is to take recordings of a user’s brain activity and predict the words the user is hearing, saying, or imagining.

“This is proof that language can be decoded from non-invasive recordings,” he said.

The study by Tang and his colleagues is the first to translate non-invasive brain recordings obtained from fMRI into continuous language, meaning more than single words or sentences. Other language-decoding technologies currently under development require surgical implants.

The system needs to be customised for each user. According to Alexander Huth, an assistant professor of neuroscience and computational science at the university, a person needs to spend up to 15 hours lying motionless inside an MRI scanner, paying close attention to the stories they are listening to, before the decoder works well for them.

That training lets the researchers build a model that predicts how the user’s brain activity will respond to other stories.
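The published pipeline is more involved, but the basic idea described above can be sketched roughly as follows: a language model proposes plausible word sequences, a per-user model predicts the brain response each sequence should evoke, and the decoder keeps the sequences whose predicted responses best match the recorded fMRI data. This is only an illustrative sketch, not the authors’ code; the names `decode_story`, `language_model.propose`, and `encoding_model.predict` are invented for the example.

```python
import numpy as np

def decode_story(recorded_activity, language_model, encoding_model,
                 beam_width=5, n_steps=50):
    """Beam-style decoding sketch: keep the candidate transcripts whose
    predicted brain response best matches the recorded fMRI activity.

    Assumptions (hypothetical interfaces, not from the study):
      - language_model.propose(text) -> list of plausible next words
      - encoding_model.predict(text) -> np.ndarray, the fMRI response that
        text is expected to evoke in this particular user
    """
    beams = [("", 0.0)]  # (candidate text so far, similarity score)
    for _ in range(n_steps):
        candidates = []
        for text, _ in beams:
            for word in language_model.propose(text):   # plausible continuations
                extended = (text + " " + word).strip()
                predicted = encoding_model.predict(extended)  # expected response
                # Closer predicted response to the recording = higher score.
                score = -float(np.linalg.norm(predicted - recorded_activity))
                candidates.append((extended, score))
        # Keep only the best few candidates at each step.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]  # best-matching candidate transcript
```

In this sketch, the per-user training described above would be what fits `encoding_model`, which is why the decoder has to be customised for each person before it works well.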

Possibility Of Errors By The AI


Although there may be some errors, the AI system’s output aims to capture the essence of what is being said or thought rather than a word-for-word transcript.

For instance, when a speaker said, “I don’t have my driver’s license yet,” the brain activity of a participant who heard it was decoded as “she hasn’t even begun driving lessons yet.”

On Monday, the study was published in the journal Nature Neuroscience. The other authors include Shailee Jain, a graduate student who earned a BTech from the National Institute of Technology, Surathkal before relocating to the US, and Amanda LaBel, a former researcher in Huth’s lab.

In a different experiment, a participant heard the following: “I got up from the air mattress and pressed my face against the bedroom window expecting to see eyes staring back but finding only darkness.”

The decoder’s translation: “I just kept walking up to the window, opened the glass, stood on my tiptoes, peeked out, didn’t see anything, and looked up again I saw nothing.”
