Is artificial intelligence a threat to critical thinking?


By offering immediate access to knowledge, generative AI tools may encourage superficial learning. — AFP Relaxnews

The rise of generative artificial intelligence tools is simplifying many tasks, both in everyday life and in the professional world. But at what cost?

Researchers at Microsoft Research believe that using these technologies could erode our critical thinking, especially when we delegate certain tasks entirely to AI.

To come to this conclusion, the research team, led by Lev Tankelevitch, conducted an experiment with 319 participants recruited via the crowdsourcing platform Prolific. These volunteers were asked to describe three situations in which they had used generative AI, such as ChatGPT, in their work. They were then asked to specify whether they had exercised critical thinking during these tasks and to explain how.

The results are striking: in nearly 40% of cases, the participants did not demonstrate any critical thinking. This observation raises concerns about an unintended effect of generative AI: by offering immediate access to knowledge and automating writing, these tools risk encouraging superficial learning rather than genuine acquisition of knowledge. "When studying human behavior, seemingly opposing ideas can both be true," Lev Tankelevitch told the New Scientist.

History shows that every technological innovation raises questions. Calculators were once accused of harming math skills, just as geolocation applications like Google Maps were criticized for weakening our sense of direction. Today, it is artificial intelligence that is fueling these debates, particularly because of the technology's influence on the way we analyze information.

But all is not lost. The Microsoft Research study reveals that the perception of AI plays a key role in users' intellectual engagement. "Our survey-based study suggests that when people view a task as low-stakes, they may not review outputs as critically. However, when the stakes are higher, people naturally engage in more critical evaluation," Tankelevitch told the magazine.

So, rather than limiting AI, researchers are advocating for its adaptation. Developing more transparent models, capable of explaining their reasoning, could encourage users to question the generated responses and maintain a critical perspective.

In a world saturated with information, where fake news circulates easily, distinguishing truth from falsehood is becoming a major challenge. Critical thinking is thus emerging as an essential skill, including in the world of work.

According to the latest "Future of Jobs" report from the World Economic Forum, it is even the most sought-after skill by employers, with seven in ten companies considering it indispensable in 2025.

The rise of generative artificial intelligence raises a paradox: by making knowledge more accessible, it risks, over the long term, weakening how that knowledge is analyzed. Rather than relying on these tools blindly, it is crucial to use them with discernment, favoring transparent models and continuing to exercise critical thinking. Because while AI can assist us, it should never replace our ability to think. – AFP Relaxnews
