Much of the discussion around artificial intelligence (AI) centres on concerns from workers across industries about the technology potentially making humans redundant.
Both policymakers and tech experts, however, emphasise that AI is intended to enhance human capabilities, not replace them, in the workplace and beyond.
Yet, even with these reassurances – and with some remaining unconvinced – an important question often goes overlooked: What are the implications of becoming too dependent on this technology in our daily lives? And perhaps more crucially, has that dependency already begun?
Friend or foe?
Dr Azree Nazri, head of laboratory at the Institute of Mathematical Research at Universiti Putra Malaysia, highlights that overdependence on AI has already become a concerning issue, one that is likely to persist as the technology continues to proliferate.
He compares the situation to the film Limitless, where a pill that boosts cognitive abilities plays a central role in the plot.
“Similar to how the protagonist in Limitless achieves rapid success but faces negative consequences from overuse of the drug, AI can perform tasks such as writing and data analysis with incredible efficiency. However, overdependence may reduce cognitive skills and creativity.
“The ethical challenges of privacy, bias, and the need for responsible AI use underscore the importance of human oversight and balance in its deployment.
“National AI guidelines, such as Malaysia’s AI Roadmap, emphasise the need for responsible governance, ethics, and human oversight to mitigate risks effectively,” he says.
Azree, who is also president of the Artificial Intelligence Society Malaysia, adds that AI adoption is growing rapidly: according to a study by employment firm Randstad, one in five workers is now a frequent AI user, and 81% of workers believe the technology will shape their careers.
Prof Dr Ho Chin Kuan, Asia Pacific University of Technology and Innovation (APU) vice chancellor, says that he has personally observed a “mix of positive and concerning trends” in the use of AI by students.
He says students are adept at picking up new technology, and AI tools have become increasingly popular for research, writing and problem-solving, where they can serve as a helpful supplement to the learning process. However, he adds, these tools come with certain caveats.
“There’s also a tendency for some students to over-rely on these tools, especially when it comes to writing assignments, which raises concerns about originality, critical thinking and learning outcomes.
“The challenge for educational institutions is then to encourage students to see AI as a tool to enhance their learning, not as a replacement for their own thinking and effort.
“It’s an important ongoing conversation, and it’s definitely something we’re actively working on,” he says.
From Prof Ho’s perspective, whether people are relying too much on AI is not a simple yes-or-no question, as the technology becomes increasingly embedded in everyday life.
“AI, on the one hand, provides amazing possibilities. It can automate boring stuff and give us intelligence to work on some hard problems. We can observe this in healthcare, finance and manufacturing.
“Yet, there is real concern about becoming overreliant. We will lose the ability to think critically and be flexible if we depend too much on AI in decision-making. Balance is everything,” he says.
He stresses that AI must be seen as complementary to human intellect instead of an alternative to a person’s own judgement, creativity and insight.
Bringing back balance
Since the initial AI boom, kicked off by the arrival of ChatGPT in 2022, there has been no shortage of reports about the use of AI, shining a light on both the benefits of the technology and its dark side.
In a 2023 case, two US lawyers were fined US$5,000 (RM20,650) for submitting to court a legal brief containing six fictitious cases generated by ChatGPT. The non-existent cases were even attributed to real judges.
Azree points out that some may start believing that the technology is infallible, putting blind trust into AI-generated outcomes without questioning their accuracy, while at the same time diminishing human skills like empathy and critical thinking.
He says it can pose both “privacy and ethical concerns due to biased algorithms, and increase vulnerability to technological failures”.
“Furthermore, it threatens human autonomy by allowing machines to make critical decisions,” he says, recommending a balanced approach “that maintains human judgement, ethical considerations and accountability” to mitigate such issues when integrating AI with society.
Azree emphasises the importance of verifying AI outputs by referencing a September incident involving Seputeh MP Teresa Kok. The issue stemmed from a ChatGPT translation error that incorrectly inserted the word “memalukan” (embarrassing) when translating her statement on the enforcement of halal certification for restaurant operators from Mandarin to Bahasa Malaysia.
Kok clarified that her original statement intended to highlight that the move could provoke negative reactions.
“AI users, particularly those employing technologies like ChatGPT, hold a responsibility to ensure accurate and culturally sensitive translations, as seen in the recent case,” he says.
In the case of ChatGPT’s mistranslation of Kok’s statement, he says the issue partly stemmed from how the model processes language through tokens. A token is a unit of text (a word or fragment of a word) that AI models work with, and a model’s understanding of a language improves with more comprehensive and diverse training data in that language.
“Given the relatively low token allocation for Malay (0.29%) compared to languages like English (58.20%), AI models may have less nuanced understanding of certain languages, increasing the risk of errors.
“With AI adoption growing, such incidents could become more common unless human oversight and proper validation are applied to translations, especially in sensitive contexts,” he adds.
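To make the idea of tokens concrete, here is a minimal Python sketch using OpenAI’s open source tiktoken tokeniser. This is an illustration only: the tooling inside ChatGPT is not public, the example sentences are hypothetical, and exact token counts vary between models and vocabularies.

# A minimal sketch of how an AI model splits text into tokens, using
# OpenAI's open source "tiktoken" library. Token boundaries and counts
# are illustrative; different models use different vocabularies.
import tiktoken

# cl100k_base is the encoding used by several OpenAI chat models.
enc = tiktoken.get_encoding("cl100k_base")

for text in [
    "The move could provoke negative reactions.",     # English
    "Langkah itu boleh menimbulkan reaksi negatif.",  # Malay
]:
    token_ids = enc.encode(text)
    # Decode each id back to its text fragment to see the boundaries.
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{len(token_ids):2d} tokens: {pieces}")

# Languages under-represented in a model's training data are often split
# into more, smaller fragments, leaving the model with a less nuanced
# grasp of their vocabulary and idiom.

Running a comparison like this typically shows text in a less-represented language breaking into more fragments than equivalent English text, which is one reason nuance can be lost in translation.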
Prof Ho similarly believes that the public needs to understand the capabilities, limitations and potential risks of AI in order to make informed decisions about its use in their lives.
“The future of work will involve collaboration between humans and AI (software and robots). Humans bring creativity and emotional intelligence, while AI can help with data analysis and automation.
“Building public trust in AI while educating people about the dangers of overdependence is crucial. Open dialogue, education, and ethical development can address these concerns and foster a more positive view of AI.
“Educational programmes should be designed and implemented to teach people about AI, its capabilities, limitations, and potential risks. Such programmes must be available to the general public and professionals using and developing AI systems.
“Robust regulations and oversight mechanisms are also needed to ensure that AI is developed and used responsibly, addressing privacy, security, and bias concerns,” he says.
To build AI literacy among the general public, Prof Ho recommends a multi-pronged approach, starting with incorporating the technology’s basic concepts into the academic curriculum with an emphasis on critical thinking and problem-solving.
At the same time, he says Malaysians of all ages and backgrounds need to be engaged via awareness programmes and online educational resources, adding that initiatives like the AI Untuk Rakyat (AI for the People) programme can serve as a good example of this.
“Focus on the process, not just the answers. Encourage students to question and reason,” he says. “Foster human skills and values – emphasise creativity and innovation, promote collaboration and communication, and cultivate empathy and emotional intelligence.”
“It’s a delicate balance because we want to encourage students to explore new technologies while also promoting critical thinking and original work.
“We started by having open conversations with students about AI, its capabilities and its limitations. This also includes discussing the ethical implications of using AI for academic purposes, particularly plagiarism.
“In fact, we organised a debate among students, one team arguing for the use of AI and another team against it. This enabled students to discover the good and bad for themselves,” he adds.
Lessons from the past
Older technologies that are now ubiquitous can offer some insight into navigating the impact of AI today, according to both Azree and Prof Ho.
Prof Ho highlights the early days of Internet adoption, which brought “significant concerns about privacy, security and digital divides” due to limited public understanding, saying this history underscores the importance of engaging the public early about AI.
Azree shares a similar perspective on technology adoption.
“From the introduction of calculators and personal computers, we learned that widespread accessibility and education are crucial for successful adoption.
“For instance, when calculators first became common in schools, they faced criticism for reducing basic maths skills. However, as educators integrated them effectively, they enhanced learning by focusing on problem-solving rather than manual calculations.
“Similarly, personal computers and Microsoft Office revolutionised office work, allowing users to automate tasks like data entry and word processing.
“Applying these lessons to AI, we should focus on educating the public, ensuring access, and teaching responsible usage to maximise its benefits.
“For example, integrating AI tools like Microsoft Copilot in workplace training can ensure users harness AI to improve productivity while maintaining ethical standards,” he says.