AI agents ‘perilous’ for secure apps such as Signal, Whittaker says



Artificial intelligence agents that autonomously carry out tasks pose a threat to secure apps such as Signal, according to Meredith Whittaker, president of the Signal Foundation.

Deeper integration of AI agents into devices is "pretty perilous” for encrypted services because they require access to huge amounts of data stored in various apps, Whittaker said during an interview with Emily Chang at Bloomberg House in Davos, Switzerland, on Jan 20.

"If you give a system like that root access permissions, it can be hijacked,” said Whittaker.

The promise of AI agents is that they can carry out a wide range of tasks, from coding for developers to everyday activities like booking appointments or sending out invitations for a birthday party. But their effectiveness will depend on how freely they can tap into personal data, including contacts for friends and acquaintances. Tech companies are already struggling with how much leeway to give these agents as they act on behalf of humans.

Amazon.com Inc is suing Perplexity AI Inc, which makes an agent that can shop on behalf of users, claiming it introduced privacy vulnerabilities. Amazon is also developing its own agents, some capable of shopping for users. 

For an AI agent to act effectively on behalf of a human, it would need unilateral access to apps storing sensitive information such as credit card data and contacts, said Whittaker. Any data that the agent stores – the so-called context window – is at greater risk of being compromised, she said.

"That’s what we’ve called breaking the blood-brain barrier between the application and the operating system,” she said. "Because our encryption no longer matters if all you have to do is hijack this context window that has effectively root permissions running in your operating system to access things like all this data.”

Apart from expanded data access, AI agents also pose a security risk because the goal is to operate with little human oversight, according to a McKinsey analysis.

Signal Foundation is the nonprofit organisation behind the Signal encrypted messaging app, popular with journalists, activists and, sometimes, government officials trying to avoid scrutiny. 

It was developed by pseudonymous programmer and privacy advocate Moxie Marlinspike as a more secure alternative to services such as Meta Platforms Inc’s WhatsApp. Unlike mainstream competitors, Signal collects little metadata and, as of 2024, lets users connect via a username rather than sharing their phone number. – Bloomberg
