New research from Brave Software shows how hidden text in an image can be used to manipulate Perplexity’s Comet browser.
AI-powered browsers are supposed to be smart. However, new security research suggests they can also be turned against their users, including when they analyze images on the web.
On the same day OpenAI introduced its ChatGPT Atlas browser, Brave Software published details on how to trick AI browsers into carrying out malicious instructions.
The potential flaw is another prompt injection attack, in which a hacker secretly feeds malicious instructions to an AI chatbot, such as commands to load a dangerous website or read the user’s email. Brave, which develops the privacy-focused Brave browser, has been warning about the trade-offs involved in embedding automated AI agents into such software. On Tuesday, it reported a prompt injection attack that can be delivered to Perplexity’s AI-powered Comet browser when it’s used to analyze an image, such as a screenshot taken from the web.
“In our attack, we were able to hide prompt injection instructions in images using a faint light blue text on a yellow background. This means that the malicious instructions are effectively hidden from the user,” Brave Software wrote.
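The trick works because human readability depends on contrast, while a vision model reads the raw pixel values either way. A minimal sketch below computes the WCAG contrast ratio for an illustrative light-blue-on-yellow pairing (the exact colors Brave used are not published; these values are assumptions) to show why such text is effectively invisible to people:

```python
# Sketch: WCAG 2.x contrast ratio between two sRGB colors.
# Colors are illustrative assumptions, not Brave's exact values.

def relative_luminance(rgb):
    """Relative luminance of an sRGB color per the WCAG definition."""
    def linearize(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, always >= 1 (1 means identical luminance)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

yellow_bg = (255, 255, 0)        # assumed background
light_blue_text = (230, 240, 255)  # assumed faint text color
ratio = contrast_ratio(light_blue_text, yellow_bg)
print(round(ratio, 2))  # well below WCAG's 4.5:1 minimum for readable text
```

A ratio this close to 1:1 means the text is nearly imperceptible to a person skimming the page, yet the characters remain distinct at the pixel level, so an AI agent that OCRs or visually interprets the screenshot still ingests the hidden instructions.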
If the Comet browser is asked to analyze the image, it’ll read the hidden malicious instructions and possibly execute them.