Security News: Fooling AI into seeing something that isn't there

How to Fool AI Into Seeing Something That Isn't There

Our machines are littered with security holes, because programmers are human. Humans make mistakes. In building the software that drives these computing systems, they allow code to run in the wrong place. They let the wrong data into the right place. They let in too much data. All this opens doors through which hackers can attack, and they do.

But even when artificial intelligence supplants those human programmers, risks remain. AI makes mistakes, too. As described in a new paper from researchers at Google and OpenAI, the artificial intelligence startup recently bootstrapped by Tesla founder Elon Musk, these risks are apparent in the new breed of AI that is rapidly reinventing our computing systems, and they could be particularly problematic as AI moves into security cameras, sensors, and other devices spread across the physical world. “This is really something that everyone should be thinking about,” says OpenAI researcher and ex-Googler Ian Goodfellow, who wrote the paper alongside current Google researchers Alexey Kurakin and Samy Bengio.

Seeing What Isn’t There
With the rise of deep neural networks, a form of AI that can learn discrete tasks by analyzing vast amounts of data, we're moving toward a new dynamic where we don't so much program our computing services as train them. Inside Internet giants like Facebook, Google, and Microsoft, this is already starting to happen. Feeding them millions upon millions of photos, Mark Zuckerberg and company are training neural networks to recognize faces on the world's most popular social network. Using vast collections of spoken words, Google is training neural nets to identify commands spoken into Android phones. And in the future, this is how we’ll build our intelligent robots and our self-driving cars.
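To make the "training rather than programming" shift concrete, here is a minimal sketch: instead of writing rules for recognizing handwritten digits, a small network is shown labeled examples and learns the mapping itself. The library, dataset, and network size are illustrative assumptions, not anything the companies above actually use.

```python
# A minimal sketch of "training rather than programming": no hand-written
# recognition rules, just labeled examples that a small neural network
# learns from. The library and network size are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                                  # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)                               # the "training" step
print("held-out accuracy:", net.score(X_test, y_test))
```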

Today, neural nets are quite good at recognizing faces and spoken words—not to mention objects, animals, letters, and words. But they do make mistakes—sometimes egregious mistakes. “No machine learning system is perfect,” says Kurakin. And in some cases, you can actually fool these systems into seeing or hearing things that aren’t really there.

As Kurakin explains, you can subtly alter an image so that a neural network will think it includes something it doesn’t, and these alterations may be imperceptible to the human eye—a handful of pixels added here and another there. You could change several pixels in a photo of an elephant, he says, and fool a neural net into thinking it’s a car. Researchers like Kurakin call these “adversarial examples.”
And they too are security holes.
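One well-known way researchers construct such adversarial examples is the fast gradient sign method from Goodfellow's earlier work: nudge each pixel slightly in the direction that increases the classifier's loss. Below is a minimal sketch of that idea against a stock pretrained image classifier; the model choice, epsilon, and the random stand-in input are illustrative assumptions, not the exact setup from the paper.

```python
# A minimal sketch of the fast gradient sign method (FGSM), one way to build
# adversarial examples. Assumes a pretrained torchvision classifier; the
# model, epsilon, and stand-in input below are illustrative choices.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, label, epsilon=0.03):
    """Return the image plus a small, sign-of-gradient perturbation."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage: a random stand-in image; in practice this would be a real photo
# and its true class index.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([101])                      # arbitrary ImageNet class index
x_adv = fgsm_perturb(x, y)
print("prediction before:", model(x).argmax(1).item(),
      "after:", model(x_adv).argmax(1).item())
```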

With their new paper, Kurakin, Bengio, and Goodfellow show that this can be a problem even when a neural network is used to recognize data pulled straight from a camera or some other sensor. Imagine a face recognition system that uses a neural network to control access to a top-secret facility. You could fool it into thinking you’re someone who you’re not, Kurakin says, simply by drawing some dots on your face.

Goodfellow says this same type of attack could apply to almost any form of machine learning, including not only neural networks but things like decision trees and support vector machines, machine learning methods that have been popular for more than a decade, helping computer systems make predictions based on data. In fact, he believes that similar attacks are already practiced in the real world. Financial firms, he suspects, are probably using them to fool trading systems used by competitors. “They could make a few trades designed to fool their competitors into dumping a stock at a lower price than its true value,” he says. “And then they could buy the stock up at that low price.”
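The same logic is easy to see with a simpler model. For a linear support vector machine, the decision is just a weighted sum, so shifting a point slightly past the decision boundary, against the weight vector, flips its predicted class. The synthetic dataset and overshoot factor in this sketch are illustrative choices, not anything from the paper.

```python
# A hedged sketch of an adversarial perturbation against a linear SVM:
# the decision is a weighted sum, so stepping a point just past the decision
# boundary (against the weight vector) flips its predicted class.
# The synthetic dataset and the 1% overshoot are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
svm = LinearSVC(max_iter=5000).fit(X, y)

x = X[0].copy()
w = svm.coef_[0]
f = svm.decision_function(x.reshape(1, -1))[0]   # signed score of the point
distance = abs(f) / np.linalg.norm(w)            # distance to the boundary

# Step just past the boundary, in the direction that reverses the score.
x_adv = x - np.sign(f) * (distance * 1.01) * w / np.linalg.norm(w)

print("original prediction:", svm.predict(x.reshape(1, -1))[0],
      "perturbed prediction:", svm.predict(x_adv.reshape(1, -1))[0])
```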

In their paper, Kurakin and Goodfellow fool neural nets by printing an adversarial image on a piece of paper and showing the paper to a camera. But they believe that subtler attacks could work as well, such as the dots-on-the-face example described above. “We don’t know for sure we could do that in the real world, but our research suggests that it’s possible,” Goodfellow says. “We showed that we can fool a camera, and we think there are all sorts of avenues of attack, including fooling a face recognition system with markings that wouldn’t be visible to a human.”
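For the physical-world step, the authors print the images and photograph them. As a rough, purely illustrative stand-in for that lossy capture step, the sketch below round-trips the adversarial image from the earlier FGSM sketch through JPEG re-encoding and checks whether the fooled prediction survives; the JPEG substitution is an assumption for illustration, not the authors' procedure.

```python
# A rough check of whether a perturbation survives a lossy "capture" step.
# JPEG re-encoding stands in for the print-and-photograph loop used in the
# paper (an assumption, not the authors' setup). Reuses `model` and `x_adv`
# from the FGSM sketch above.
import io
from PIL import Image
from torchvision.transforms.functional import to_pil_image, to_tensor

def simulate_camera(image_tensor, quality=75):
    """Round-trip an image tensor through lossy JPEG encoding."""
    buffer = io.BytesIO()
    to_pil_image(image_tensor.squeeze(0)).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return to_tensor(Image.open(buffer)).unsqueeze(0)

x_adv_photo = simulate_camera(x_adv)
print("adversarial prediction:", model(x_adv).argmax(1).item(),
      "after simulated capture:", model(x_adv_photo).argmax(1).item())
```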

Continue reading this article at the link at the top of the page.
 
