
Brains react differently to artificial intelligence and human language

Humans are not particularly good at distinguishing between human voices and voices generated by artificial intelligence (AI). But our brains react differently to human and AI voices, according to a study presented today (Tuesday) at the Forum 2024 of the Federation of European Neuroscience Societies (FENS).

The study was presented by doctoral student Christine Skjegstad and was conducted by Ms Skjegstad and Professor Sascha Frühholz, both from the Department of Psychology at the University of Oslo (UiO), Norway.

“We already know that AI-generated voices have advanced to the point where they are almost indistinguishable from real human voices. It is now possible to clone a person’s voice from just a few seconds of recording, and fraudsters have used this technology to impersonate a loved one in need and trick victims into transferring money. While machine learning experts have developed technological solutions to recognize AI voices, much less is known about how the human brain responds to these voices.”

Christine Skjegstad, Department of Psychology, University of Oslo

The study involved 43 people who were asked to listen to human and AI-generated voices expressing five different emotions: neutral, angry, fearful, happy, and delighted. They were asked to identify the voices as synthetic or natural while their brains were scanned using functional magnetic resonance imaging (fMRI). fMRI detects changes in blood flow in the brain that indicate which parts of the brain are active. Participants were also asked to rate the characteristics of the voices they heard in terms of naturalness, trustworthiness, and authenticity.

Participants correctly identified human voices only 56% of the time and AI voices 50.5% of the time, meaning they were almost equally poor at recognizing both voice types and performed barely above chance.

People were more likely to correctly identify a “neutral” AI voice as AI (75% compared to 23% who correctly identified a neutral human voice as human). This suggests that people are more likely to perceive neutral voices as AI-like. Neutral female AI voices were correctly identified more often than neutral male AI voices. For happy human voices, the correct identification rate was 78%, compared to just 32% for happy AI voices. This suggests that people are more likely to perceive happiness as human-like.

Both AI and human neutral voices were perceived as the least natural, trustworthy, and authentic, while happy human voices were perceived as the most natural, trustworthy, and authentic.

However, when examining the brain images, the researchers found that human voices elicited stronger responses in areas of the brain associated with memory (right hippocampus) and empathy (right inferior frontal gyrus). AI voices elicited stronger responses in areas associated with error detection (right anterior middle cingulate cortex) and attention regulation (right dorsolateral prefrontal cortex).

Ms Skjegstad said: “My research shows that we cannot tell very accurately whether a voice is human or generated by an AI. Participants also often expressed how difficult it was for them to tell the difference between the voices. This suggests that current AI voice technology can mimic human voices to such an extent that it is difficult for humans to reliably tell them apart.

“The results also suggest a perceptual bias, where neutral voices were more likely to be perceived as AI-generated and happy voices were more likely to be perceived as more human, regardless of whether they actually were. This was particularly the case for neutral female AI voices, which may be due to our familiarity with female voice assistants like Siri and Alexa.

“Although we are not very good at distinguishing human voices from AI voices, there seems to be a difference in the brain’s response. AI voices can elicit heightened alertness, while human voices can elicit a sense of connectedness.”

The researchers now want to investigate whether personality traits such as extraversion or empathy make people more or less sensitive to the differences between human and AI voices.

Professor Richard Roche, chair of the FENS Forum Communications Committee and deputy head of the Department of Psychology at Maynooth University in Maynooth, County Kildare, Ireland, was not involved in the research. He said: “Investigating the brain’s responses to AI voices is vital as this technology continues to develop. This research will help us understand the potential cognitive and social implications of AI voice technology, which may help to support policy and ethical guidance.”

“The risk that this technology will be used to cheat and deceive people is obvious. However, there are also potential benefits, such as providing voice replacements for people who have lost their natural voice. AI voices could also be used in therapy for some mental illnesses.”

Source: Federation of European Neuroscience Societies