AI agents ‘perilous’ for secure apps such as Signal, Whittaker says



Artificial intelligence agents that autonomously carry out tasks pose a threat to secure apps such as Signal, according to Meredith Whittaker, president of the Signal Foundation.

Deeper integration of AI agents into devices is "pretty perilous" for encrypted services because they require access to huge amounts of data stored in various apps, Whittaker said during an interview with Emily Chang at Bloomberg House in Davos, Switzerland on Jan 20.

"If you give a system like that root access permissions, it can be hijacked,” said Whittaker.

The promise of AI agents is that they can carry out a wide range of tasks, from coding for developers to everyday activities like booking appointments or sending out invitations for a birthday party. But their effectiveness will depend on how freely they can tap into personal data, including contacts for friends and acquaintances. Tech companies are already struggling with how much leeway to give these agents as they act on behalf of humans.

Amazon.com Inc is suing Perplexity AI Inc, which makes an agent that can shop on behalf of users, claiming it introduced privacy vulnerabilities. Amazon is also developing its own agents, some capable of shopping for users. 

For an AI agent to act effectively on behalf of a human, it would need unilateral access to apps storing sensitive information such as credit card data and contacts, said Whittaker. Any data that the agent stores – the so-called context window – is at greater risk of being compromised, she said.

"That’s what we’ve called breaking the blood-brain barrier between the application and the operating system,” she said. "Because our encryption no longer matters if all you have to do is hijack this context window that has effectively root permissions running in your operating system to access things like all this data.”

Apart from their expanded data access, AI agents also pose a security risk because they are designed to operate with little human oversight, according to a McKinsey analysis.

Signal Foundation is the nonprofit organisation behind the Signal encrypted messaging app, popular with journalists, activists and, sometimes, government officials trying to avoid scrutiny. 

It was developed by pseudonymous programmer and privacy advocate Moxie Marlinspike as a more secure alternative to services such as Meta Platforms Inc's WhatsApp. Unlike mainstream competitors, Signal collects little metadata and, as of 2024, lets users communicate via usernames rather than sharing their phone numbers. – Bloomberg

