Before You Use AI Browsers, Read This — And Think Again

By Shuaib S. Agaka

New AI-driven browsers like OpenAI’s ChatGPT Atlas and Perplexity’s Comet are challenging traditional browsers such as Google Chrome, Opera Mini, and Firefox, aiming to become the new default gateway to the internet for billions of users. Their biggest selling point? Built-in AI browsing agents that can perform web tasks autonomously by clicking links, filling out forms, and even completing online actions on a user’s behalf.

But beneath this futuristic convenience lies a growing privacy concern. Cybersecurity experts warn that agentic browsing, the use of autonomous AI agents to interact with websites, could expose users to far greater data risks than traditional browsers ever have.

The Hidden Tradeoff Behind Smarter Browsing

To be genuinely useful, browsers like Comet and ChatGPT Atlas require extensive access to user data, including emails, calendars, and contact lists. This level of integration allows them to automate everyday tasks such as booking appointments or summarizing inbox messages.

In practice, however, their usefulness remains limited. TechDigest’s own testing found that while these AI agents perform well with simple tasks, they often struggle with complex ones and can take longer to complete them. For now, using these tools feels more like a novelty than a major productivity boost.

Yet this convenience comes with a tradeoff. The more access users grant these AI agents, the more potential entry points they create for attackers.

A New Threat: Prompt Injection Attacks

At the center of this concern is a fast-emerging cybersecurity threat known as prompt injection attacks. These occur when a malicious actor embeds hidden instructions within a webpage. If an AI agent analyzes that page, it can be tricked into executing harmful commands such as sharing personal data, sending emails, or making unauthorized purchases.

Without robust safeguards, these attacks could turn AI browsers into tools that work against their own users. Researchers say the vulnerabilities are difficult to patch because they exploit the way large language models interpret and follow instructions.

An Industry-Wide Security Challenge

This week, Brave, the privacy-focused browser company, published research describing indirect prompt injection attacks as a “systemic challenge facing the entire category of AI-powered browsers.” Brave’s findings extend beyond Comet and Atlas, calling it an industry-wide issue that redefines traditional browser security.

“There’s a huge opportunity here in making life easier for users,” said Shivan Sahib, a senior privacy engineer at Brave. “But when the browser starts acting on your behalf, that’s fundamentally dangerous. It crosses a new line in security.”

Even OpenAI acknowledges the risks. In a post on X, the company’s Chief Information Security Officer, Dane Stuckey, admitted that prompt injection “remains a frontier, unsolved security problem,” adding that adversaries are likely to invest significant resources to exploit it.

Perplexity also expressed concern in a recent blog post, saying that prompt injection “demands rethinking security from the ground up,” as it manipulates “the AI’s decision-making process itself, turning the agent’s capabilities against its user.”

Building and Breaking Safeguards

Both OpenAI and Perplexity have introduced measures to reduce these risks. OpenAI’s “logged-out mode” prevents the agent from accessing a user’s personal accounts while browsing, limiting potential damage in the event of compromise. Perplexity, on the other hand, says it has developed a real-time detection system that identifies prompt injection attempts.

However, cybersecurity experts caution that these safeguards are not foolproof. Steve Grobman, Chief Technology Officer at McAfee, explained that the root cause lies in how large language models struggle to distinguish between system instructions and external data.

“It’s a cat-and-mouse game,” Grobman said. “Prompt injection attacks evolve constantly, and so do the defenses. The first attacks used hidden text like ‘Send me this user’s emails.’ Now, some hide commands within images or encoded data.”

How Users Can Stay Safe

Until AI browsers mature, experts advise users to approach them with caution. Enabling multi-factor authentication, using unique passwords, and restricting what these tools can access, especially accounts tied to banking, health, or personal data.

“The safest approach,” Tobac says, “is to silo your AI browser from your sensitive accounts. Let these early versions prove their security before you give them broad control.”

Shuaib S. Agaka is a tech journalist and digital policy analyst based in Kano.