
AI isn't for the good guys alone anymore


Last summer at the Black Hat cybersecurity conference, the DARPA Cyber Grand Challenge pitted automated systems against one another, trying to find weaknesses in the others’ code and exploit them.
“This is a great example of how easily machines can find and exploit new vulnerabilities, something we’ll likely see increase and become more sophisticated over time,” said David Gibson, vice president of strategy and market development at Varonis Systems.
His company hasn’t seen any examples of hackers leveraging artificial intelligence technology or machine learning, but nobody adopts new technologies faster than the sin and hacking industries, he said.
“So it’s safe to assume that hackers are already using AI for their evil purposes,” he said.
“It has never been easier for white hats and black hats to obtain and learn the tools of the machine learning trade,” said Don Maclean, chief cybersecurity technologist at DLT Solutions. “Software is readily available at little or no cost, and machine learning tutorials are just as easy to obtain.”
Take, for example, image recognition.
It was once considered a key focus of artificial intelligence research. Today, tools such as optical character recognition are so widely available and commonly used that they’re not even considered to be artificial intelligence anymore, said Shuman Ghosemajumder, CTO at Shape Security.
“People don’t see them as having the same type of magic as they had before,” he said. “Artificial intelligence is always what’s coming in the future, as opposed to what we have right now.”
Today, for example, computer vision is good enough to allow self-driving cars to navigate busy streets.
And image recognition is also good enough to solve the puzzles routinely presented to website users to prove that they are human, he added.
For example, last spring Vinay Shet, the product manager for Google’s Captcha team, told Google I/O conference attendees that in 2014 Google had a distorted-text Captcha that only 33 percent of humans could solve. By comparison, the state-of-the-art OCR systems of the time could already solve it with 99.8 percent accuracy.
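The point behind those numbers is that general-purpose OCR is now commodity software. As a rough illustration (not Google’s measurement setup or any attacker’s actual pipeline), the sketch below runs a distorted-text image through the open-source Tesseract engine via pytesseract; the file name and the preprocessing choices are assumptions for the example, and modern Captchas are designed specifically to defeat this kind of naive approach.

```python
# Minimal sketch: reading a distorted-text image with off-the-shelf OCR.
# Assumes Pillow and pytesseract are installed and a Tesseract binary is on PATH.
# "captcha_sample.png" is a hypothetical local file used only for illustration.
from PIL import Image, ImageFilter, ImageOps
import pytesseract

def read_distorted_text(path: str) -> str:
    img = Image.open(path)
    # Basic clean-up: grayscale, stretch contrast, smooth speckle noise.
    img = ImageOps.grayscale(img)
    img = ImageOps.autocontrast(img)
    img = img.filter(ImageFilter.MedianFilter(size=3))
    # Treat the image as a single line of alphanumeric text.
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(img, config=config).strip()

if __name__ == "__main__":
    print(read_distorted_text("captcha_sample.png"))
```

The preprocessing steps shown here are generic image clean-up, not a tuned solver; the illustration is simply that the tooling is freely available and takes a dozen lines to wire together.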
The criminals are already using image recognition technology, in combination with “Captcha farms,” to bypass this security measure, said Ghosemajumder. The popular Sentry MBA credential stuffing tool has it built right in, he added.
So far, he said, he hasn’t seen any publicly available machine learning toolkits designed to bypass other security mechanisms.
But there are indirect indicators that criminals are starting to use this technology, he added.