AI is making people sound boringly alike, scientists warn

LOS ANGELES: Artificial intelligence is eroding the subtleties, nuances and individuality of speech, writing and even thought, a group of psychologists and computer scientists are warning.

The researchers at the University of Southern California suggest that human traits such as adaptability and reasoning will become difficult to maintain the more people come to lean on chatbots.

If such "homogenisation" is not checked, the ability of people to reason intuitively or in the abstract could be weakened, according to the team's paper, published in the journal Trends in Cognitive Sciences on Wednesday.

Large language models (LLMs) are becoming "deeply embedded in people's lives," the authors write, pointing to evidence across linguistics, psychology, cognitive science and computer science to show how language and reasoning are at risk of being standardised.

"Cognitive diversity is shrinking worldwide as billions of people are using the same handful of AI chatbots for an increasing number of tasks," the researchers said, with the likely effect that creativity and problem-solving capabilities are undermined.

"When people use chatbots to help them polish their writing, for example, the writing ends up losing its stylistic individuality," the researchers pointed out.

The result is "standardised expressions and thoughts across users," according to Zhivar Sourati of the University of Southern California.

The team warned that the way chatbots function can lead people to fall in line with the modes of reasoning unfolding on the screen in front of them, which may be based on a "narrow and skewed slice of human experience," depending on the material the bot has been trained on.

"Rather than actively steering generation, users often defer to model-suggested continuations, selecting options that seem ‘good enough’ instead of crafting their own, which gradually shifts agency from the user to the model," said Sourati.

"AI developers should incorporate more real-world diversity into LLM training sets, not only to help preserve human cognitive diversity, but also to improve chatbots’ reasoning abilities," the researchers said. – dpa 