How do you hack a whole house in 2025? You say please to Gemini, of course.
Modern AI models have done away with complex lines of code, and instead respond to user commands made in natural language. However, this means that it has become a lot easier to trick an AI model into executing malicious inputs, including controlling someone’s smart home.
Researchers brought Google’s attention to this matter back in February. The team was able to embed prompts in a Google Calendar invite, which led to Gemini carrying out actions that the original user had not asked for.
Gemini began to turn off the lights and fire up a boiler, just because the user had said thanks. Of course, a lot more dangerous actions could have been taken in a smart home had the hackers not been using this flaw to demonstrate vulnerabilities.