KUALA LUMPUR: Adequate infrastructure and ecosystems need to be developed with long-term use in mind for both artificial intelligence (AI) and future technologies that come after it, says Digital Minister Gobind Singh Deo.
"We speak about AI. We know what AI does. We know how it transforms everything, how it improves lives, your business, even governments. But what do you need in order to ensure that everything is AI in this country?
"So the thinking has to begin right at the start," he says during a fireside chat at the 2025 Future Tech Forum held by the Georgia Tech Institute for People and Technology.
According to Gobind, Malaysia has done well with the development of connectivity infrastructure via the 5G and 4G networks, alongside securing data centre investments from global players, which started many years ago.
"It's about building infrastructure, which means that you have the building blocks that enable you to use, not just AI, but next-generation technologies, because AI is changing in itself. We need to look ahead to the next five years, next ten years, just like we did in 2018.
"We realised the significance of data centres, and we started going around the world looking for investments, building those systems so that we can ensure that when the technology is ultimately here and ready for use, we have an ecosystem that permits us to take good advantage of it," he says.
Gobind further spoke on securing such infrastructure once it is available, so that people trust it and are willing to actually use it.
A separate panel discussion, centred around deploying AI innovations while ensuring integrity, discussed the safety measures and ethics of the technology as it steadily sees adoption in business applications.
Such innovations include use in the construction sector, as highlighted by John Lim Ji Xiong, Gamuda's chief digital officer, who says that the technology can be deployed to manage project budgets, improve decision-making, adjust plans, and even automate equipment.
Henry Yang, chief marketing officer of agentic AI provider Manus, boils integrity down to three key points, all of which should be built into an AI product.
The first, and most important according to Yang, is transparency, not only in how an AI operates, but also in how the company providing it operates.
This ranges from what exactly a large language model or agentic AI (such as Manus.AI) is doing in response to a user's prompt, allowing them to double-check and understand what exactly is going on, to the partners and services that a company uses, and the standards it complies with.
Yang also says that an AI should be honest with what it can and cannot do, rather than generating something that is fake to meet a user's request. It should also know when to ask a user to intervene to solve any issues.
Lastly, he believes that an AI tool should be safe, with minimal risks for its users, and without the chance for mistakes made to cause wider issues.
On the topic of trust in AI, Benjamin Croc, CEO of BrioHR, says that in his experience there tend to be two types of people: those who use AI and love it, and those who do not use AI and hate it.
Ding Wang, a senior researcher in responsible AI with Google Research, similarly spoke on balancing overly-trusting and under-trusting attitudes towards the technology.
"I think the way to build AI trust, with one side over-trusting and one side under-trusting, is to have a very balanced approach to it, and also to deliver the true value you need from using any of the AI products," she says.
Ding adds that those who were left out should be given the opportunity to use and benefit from AI, which would also function as a way to build trust in the technology.
She also highlights the need for AI tools to be adapted to regional contexts, rather than expecting users to adjust themselves to the technology.
Ding adds that many AI models are trained and deployed in the US, which means they may miss cultural nuances or behave differently when used in other parts of the world.
One example is cultural representation. She says that when she asks an AI model to show a family celebrating a holiday in a local context, she would much rather it reflect a local celebration by default, instead of defaulting to something more typical of the US.
