
Google Experiments With Using AI to Flag Phishing Threats, Stop Scams


The experiment was able to explain to an email recipient why a message was likely malicious, but the technology takes a lot of computing power to run at scale.
To see if AI can help stop cyberattacks, Google recently ran an experiment that used generative AI to explain to users why a phishing message was flagged as a threat.
At the RSA Conference in San Francisco, Google DeepMind research lead Elie Bursztein talked about the experiment to highlight how today’s AI chatbot technology could help companies combat malicious hacking threats. 
According to Bursztein, around 70% of the malicious documents Gmail currently blocks contain both text and images, such as official company logos, in an effort to scam users.
The company experimented with using Google’s Gemini Pro, a large language model (LLM), to see if it could spot the malicious documents. Gemini Pro detected 91% of the phishing threats, but it fell behind a specially trained AI program that achieved a 99% success rate while running 100 times more efficiently, Bursztein said.
Hence, detecting phishing messages doesn’t appear to be the best use of an LLM like Gemini Pro. Instead, today’s generative AI excels at explaining why a message was flagged as malicious, rather than merely acting as a phishing email detector, Bursztein said.
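That division of labor can be sketched in a few lines: a cheap specialized classifier makes the block/allow decision, and a generative model is asked only to explain the verdict to the recipient. The sketch below is illustrative, not Google’s actual pipeline; the prompt wording, function names, and the `generate` callback are assumptions.

```python
def build_explanation_prompt(email_text: str, indicators: list[str]) -> str:
    """Compose a prompt asking an LLM to justify a phishing verdict.

    `indicators` are signals surfaced by the upstream classifier
    (e.g. a lookalike domain or an impersonated brand logo).
    """
    bullet_list = "\n".join(f"- {i}" for i in indicators)
    return (
        "The following email was flagged as likely phishing.\n"
        f"Detected indicators:\n{bullet_list}\n\n"
        f"Email:\n{email_text}\n\n"
        "In two or three plain-language sentences, explain to the "
        "recipient why this message is probably malicious."
    )


def explain_verdict(email_text: str, indicators: list[str], generate) -> str:
    """`generate` is any text-completion callable (e.g. an LLM client)."""
    return generate(build_explanation_prompt(email_text, indicators))


if __name__ == "__main__":
    # Stubbed example; in practice `generate` would call a hosted model.
    prompt = build_explanation_prompt(
        "Your account is locked. Log in at http://example-support.biz now.",
        ["urgent call to action", "lookalike domain", "impersonated logo"],
    )
    print(prompt)
```

The point of the split is the one Bursztein makes: the classifier stays fast and cheap at scale, while the costlier generative model is invoked only for the user-facing explanation.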
