
Fool ML once, shame on you. Fool ML twice, shame on… the AI dev? If you can hoodwink one model, you may be able to trick many more


Some tips on how to stop miscreants from deceiving your code
Adversarial attacks that trick one machine-learning model can potentially be used to fool other so-called artificially intelligent systems, according to a new study.
It’s hoped the research will inform and persuade AI developers to make their smart software more robust against these transferable attacks, preventing malicious images, text, or audio that hoodwinks one trained model from tricking another similar model.
Neural networks are easily deceived by what are called adversarial attacks, in which input data that produces one output is subtly altered to produce a completely different one. For example, you could show a gun to an object classifier that correctly identifies it as a gun, then change just a small part of its coloring to fool the AI into thinking it's a red-and-blue-striped golfing umbrella. Now you can potentially slip past that smart CCTV camera scanning the crowd for weapons.
This is because machines can't tell the difference between real and fudged inputs, and will continue to operate all the same, despite spitting out incorrect answers. Adding a few pixels here and there causes an image of a banana to be classified as a toaster. And it's not just a problem that affects computer vision systems: natural language models are vulnerable too.
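To make the idea concrete, here is a minimal, hypothetical sketch of a gradient-sign perturbation (the mechanism behind many of these attacks) against a toy two-class linear model. The weights, input, and epsilon budget are all made up for illustration; a real attack would target a trained neural network and keep the change small enough that a human wouldn't notice.

```python
import numpy as np

# Toy "classifier": two class-score rows over four features.
# These weights are invented for the demo, not from any real model.
W = np.array([[1.0, -0.5, 0.3, 0.8],
              [-0.9, 0.6, -0.2, -0.7]])

def predict(x):
    return int(np.argmax(W @ x))   # index of the highest class score

x = np.array([0.9, 0.1, 0.5, 0.6])  # original input, scored as class 0
assert predict(x) == 0

# Gradient-sign step: nudge each feature in the direction that raises
# the wrong class's score relative to the right one.
grad = W[1] - W[0]                  # gradient of (score_1 - score_0) w.r.t. x
epsilon = 0.6                       # per-feature perturbation budget
x_adv = x + epsilon * np.sign(grad)

print(predict(x), predict(x_adv))   # the structured nudge flips the label
```

The key point is that the change is not random noise: it is computed from the model's own parameters, which is exactly the information a black-box attacker lacks.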
So far, most attacks have been demonstrated by feeding AI systems poisoned input data during inference, or the final decision-making stage: the part where the software predicts what it’s looking at, or listening to, and so on.
This involves trial and error if you, the attacker, do not know how the model works internally. If you're trying this against a production system, you could face consequences if the attack fails: you could set off security alarms, be identified by a facial-recognition system, trigger an AI-based network-monitoring system, or otherwise give the game away that you're trying to game an AI application.
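The trial-and-error approach can be sketched as a query loop: the attacker only gets to call the model's prediction endpoint, and every query is a chance to trip an alarm. The toy linear model below stands in for a real black-box API; the weights and inputs are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a black-box model: the attacker may only call predict(),
# never inspect W. (Invented toy weights, as before.)
W = np.array([[1.0, -0.5, 0.3, 0.8],
              [-0.9, 0.6, -0.2, -0.7]])

def predict(x):
    return int(np.argmax(W @ x))

x = np.array([0.9, 0.1, 0.5, 0.6])
original = predict(x)               # the label we want to flip

# Trial and error: try random bounded perturbations until the label
# changes, counting how many queries it took -- against a live system,
# each failed attempt risks detection.
queries = 0
winning_delta = None
for _ in range(10_000):
    delta = rng.uniform(-1.0, 1.0, size=x.shape)
    queries += 1
    if predict(x + delta) != original:
        winning_delta = delta
        break

print(f"label flipped after {queries} queries")
```

Compared with the gradient-based version, this blind search typically needs many more queries and produces clumsier perturbations, which is precisely why attacks that transfer from a model the attacker *does* control are attractive.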
