Microsoft warns of dangerous 'shadow AI'

MUNICH: Microsoft has issued a strong warning about the uncontrolled use of autonomous software assistants powered by artificial intelligence (AI) ahead of the upcoming Munich Security Conference.

In a report released on Tuesday, researchers from the software company said AI assistants are already in use for programming in over 80% of Fortune 500 companies.

However, most companies lack clear rules for the use of AI, the rapid spread of which poses incalculable risks, Microsoft argued. A lack of oversight by those responsible and "shadow AI" open the door to new attack methods, the report added.

Top managers unaware of AI use

"Shadow AI" refers to the use of AI applications by employees without the knowledge or official approval of the company's IT or security department.

Employees independently use AI tools or agents from the Internet, that is, autonomously acting computer programmes, to complete their tasks more quickly, without informing anyone in the company hierarchy.

The Microsoft report sounds the alarm about a growing gap between innovation and security.

While AI usage is growing explosively, not even half of the companies – or 47% – have specific security controls for generative AI, and 29% of employees are already using unauthorized AI agents for their work. This creates blind spots in corporate security.

Quick deployment insecure

According to the Microsoft experts, the risk increases if companies do not take enough time when introducing AI applications.

The rapid deployment of AI agents can bypass security and compliance controls and increase the risk of shadow AI, the report said.

Malicious actors could exploit the permissions of agents and turn them into unintended double agents, Microsoft suggested. Like human employees, an agent with too much access – or incorrect instructions – can become a vulnerability.

The authors of the report emphasized that these are not theoretical risks. Microsoft's Defender team recently discovered a fraudulent campaign in which several actors used an AI attack technique known as "memory poisoning" to permanently manipulate the memory of AI assistants, and thus the results they produce.

Limit access to data

The report recommends several countermeasures to keep the risk of using AI applications as low as possible.

AI-powered software assistants should only have access to the data they absolutely need to complete their tasks.

Companies should also establish a central register showing which AI agents exist in the company, who owns them and what data they access. In addition, unauthorized agents must be identified and isolated. – dpa
