MUNICH: Microsoft has issued a strong warning about the uncontrolled use of autonomous software assistants using artificial intelligence (AI) ahead of the upcoming Munich Security Conference.
In a report released on Tuesday, researchers from the software company said AI assistants are already in use for programming in over 80% of Fortune 500 companies.
However, most companies lack clear rules for AI use, and its rapid spread poses incalculable risks, Microsoft argued. A lack of oversight by those responsible and "shadow AI" open the door to new attack methods, the report added.
Top managers unaware of AI use
"Shadow AI" refers to the use of AI applications by employees without the knowledge or official approval of the company's IT or security department.
Employees independently adopt AI tools or agents from the internet, autonomously acting computer programs, to complete their tasks more quickly, without informing anyone in the company hierarchy.
The Microsoft report sounds the alarm about a growing gap between innovation and security.
While AI usage is growing explosively, fewer than half of companies (47%) have specific security controls for generative AI, and 29% of employees already use unauthorized AI agents for their work. This creates blind spots in corporate security.
Quick deployment insecure
According to the Microsoft experts, the risk increases if companies do not take enough time when introducing AI applications.
The rapid deployment of AI agents can bypass security and compliance controls and increase the risk of shadow AI, the report said.
Malicious actors could exploit the permissions of agents and turn them into unintended double agents, Microsoft suggested. Like human employees, an agent with too much access – or incorrect instructions – can become a vulnerability.
The authors of the study emphasized that these are not theoretical risks. Recently, Microsoft's Defender team discovered a fraudulent campaign in which several actors used an AI attack technique known as "memory poisoning" to permanently manipulate the memory of AI assistants, and thus their outputs.
Limit access to data
The report recommends several countermeasures to keep the risk of using AI applications as low as possible.
Software assistants with AI should only have access to the data they absolutely need to solve their task.
Companies should also establish a central register showing which AI agents exist in the company, who owns them and what data they access. In addition, unauthorized agents must be identified and isolated. – dpa
