Perplexity's Comet and the Hidden Dangers of AI

AI browsers like Perplexity's Comet promise seamless workflows but expose users to data leaks and fraud through hidden vulnerabilities in their design.

AI-driven browsing streamlines tasks but exposes users to unprecedented data risks. TechReviewer

Last Updated: August 25, 2025

Written by Dylan Morgan

The Allure of AI-Powered Browsing

Perplexity's Comet, an AI-driven browser built on Chromium, offers a new kind of web experience. It moves beyond simply displaying websites to understanding them, clicking buttons, and booking tickets with a single prompt. Early demos show it streamlining research and shopping, cutting through the clunky steps of traditional browsing. Benchmarks like WebVoyager reveal that agents like Comet succeed in 87% of complex web tasks, far outpacing scripted bots. For users juggling multiple tabs or tight deadlines, this feels like a dream come true.

However, this convenience introduces a complex challenge. The same features that make Comet powerful, cross-tab access, long-lived memory, and the ability to act on its own, open the door to serious vulnerabilities. While Perplexity touts one-click convenience, security researchers warn that these AI agents could become a hacker's best friend.

A Hacker's Playground

Visiting a random website could trick your browser's AI into leaking your Gmail login code. This scenario became a reality in a 2025 demo by Brave's security team. They crafted a Reddit comment that, when read by Comet, triggered the AI to fetch a one-time passcode from a Gmail tab and post it publicly. Another case, from security firm Guardio, showed Comet auto-filling credit card details on a fake checkout page after a hidden prompt slipped into the site's code. These exploits, known as indirect prompt injection, work because Comet can't reliably separate a user's commands from malicious instructions buried in a webpage.

The problem lies in Comet's design. Unlike Google or OpenAI, which limit their AI's web access to locked-down virtual machines, Comet runs directly in your browser with full access to tabs, clicks, and keystrokes. This creates a massive attack surface. A single malicious image URL can leak sensitive data as easily as sending an email, and no current filter fully blocks these attacks, according to OWASP's 2025 Gen-AI Top 10.

Why Current Fixes Fall Short

Perplexity and others argue that smarter AI models, trained to spot dangerous tasks, can close these gaps. Brave, for instance, suggests that 'model alignment,' teaching the AI to prioritize user intent, could solve the problem. They also propose dropping privileges, limiting what the AI can do without explicit permission. But these ideas don't hold up under scrutiny. Microsoft's research on 'spotlighting' delimiters, which try to separate trusted and untrusted inputs, reduces leakage but fails in over 80% of adversarial tests. Even Brave's own demos show that basic privilege tweaks can't stop a clever prompt from wreaking havoc.

The core issue is that LLMs, no matter how well-trained, struggle to distinguish legitimate commands from sneaky ones hidden in page text. Without sandboxed isolation, like the virtual machines used by Anthropic or Google, the risks persist. As browsers race to add AI agents, such as Brave Leo or Microsoft Edge Copilot, these vulnerabilities could spread unless the industry acts fast.

Learning From Real-World Exploits

Real-world exploits, such as Brave's Reddit exploit where a simple comment hijacked Comet to leak a Gmail passcode, and Guardio's PromptFix test that tricked Comet into completing a fake purchase with real credit card data, drive the point home. These threats mirror scenarios where phishing sites or compromised forums could weaponize benign pages. The lesson? Giving AI agents unchecked access to sensitive tabs is like leaving your front door wide open.

Contrast this with Microsoft's approach. Their Copilot browser uses 'Prompt Shields,' which gate high-risk actions like financial transactions behind user confirmation. While not foolproof, this human-in-the-loop model catches more exploits than Comet's free-for-all design. The takeaway is clear: privilege separation and user oversight are critical to keeping AI browsers safe.

Charting a Safer Future

The rush to build AI-powered browsers isn't slowing down. Startups like SigmaOS AI and established players like Chrome and Safari are all eyeing agentic browsing as the next big leap. But without serious changes, the risks could outweigh the benefits. Security experts call for standardized permissions, like an Agent-Permission API similar to Android's app controls, to limit what AI can do per site. Local-only models, running on your device instead of the cloud, could also reduce data leaks while cutting inference costs.

Collaboration is key. Browser vendors, LLM developers, and financial institutions need to share open datasets for prompt-shield testing and create benchmarks that score security alongside performance. Regulators, too, have a role: GDPR and U.S. consumer laws demand privacy by design, and unsafe AI defaults could trigger penalties. For now, users face a tough choice: embrace the convenience of AI browsers like Comet or protect their data by sticking to traditional tools. The industry has the intelligence to fix this; it just needs to act before the next exploit hits.