Building a better brain, bit by bit

Autonomous vehicles, facial recognition, disease diagnosis - machines are increasingly taking on extremely complex tasks that were traditionally handled by humans. And if it were a competition, humans would often finish firmly in second place.

“It was reported relatively recently that you can use deep learning systems to achieve better accuracy in identifying skin cancer than a human dermatologist can,” says Ting Wang, assistant professor of computer engineering.

It wasn’t always this way. Research into deep learning systems and neural networks began in the 1970s, when members of the artificial intelligence community set out to develop something that could simulate the human brain.

“The basic unit of the brain is the neuron. By connecting a bunch of neurons, they envisioned that they could simulate some functions of the human brain,” Wang says.

The limited computing power of the era ruled out any serious attempt to mimic complex brain activity, but the overarching philosophy never went away. At the time, Wang says, engineers could compute only two layers of activity and maybe a dozen neurons. Jump forward a few decades to the early 2000s, and advances in computing infrastructure, including the advent of cloud computing, revived the idea, he says.

“Now, we can build very large scale neural networks with hundreds of layers and millions of neurons,” he says, “and suddenly, because of the increase in the complexity of this architecture, and the amazing amount of data we have right now, we can do much, much more complicated things.”
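Those terms map to code fairly directly. As a rough sketch in Python (the layer sizes here are arbitrary and far smaller than the networks Wang describes), each layer is a weight matrix, and each neuron computes a weighted sum of its inputs passed through a nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer: each of the n_out neurons computes a weighted
    sum of its inputs, then applies a ReLU nonlinearity."""
    w = rng.standard_normal((x.shape[0], n_out)) * 0.1  # random weights, for illustration only
    return np.maximum(0.0, x @ w)

x = rng.standard_normal(64)  # an input with 64 features
h = layer(x, 128)            # hidden layer of 128 neurons
y = layer(h, 10)             # output layer of 10 neurons
print(y.shape)               # (10,)
```

Stacking hundreds of such layers, with weights learned from data rather than drawn at random, is what the “deep” in deep learning refers to.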

However, for every deep learning success story, there’s a vulnerability.

Adversarial inputs are manipulated inputs whose alterations are imperceptible to humans. One example, Wang says, involves the image recognition software used by autonomous vehicles. An attacker might simply flip a few pixels of an image, so that the system recognizes it as saying “65 MPH” rather than “STOP.” If the manipulation goes undetected, the result can be a traffic accident, which is why Wang says it’s critical that adversarial inputs be flagged as adversarial as quickly as possible.
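The article doesn’t specify how such attacks are constructed, but a common textbook method is the fast gradient sign method (FGSM): nudge each pixel slightly in the direction that increases the classifier’s error. A minimal sketch, assuming a differentiable PyTorch classifier that returns logits (the model, label, and epsilon below are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Shift every pixel by +/-epsilon in the direction that increases
    the loss. The change is invisible to a human eye but can flip the
    model's prediction (e.g., stop sign -> speed limit)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```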

That’s where his “Eagle Eye” comes in. It’s the name he gave to a research project funded by the National Science Foundation (NSF) to the tune of nearly $500,000. “The manipulation introduced to deep learning systems is so small a human eye cannot see it,” he says of the nickname. “That’s why we need an eagle eye - to identify these tiny pieces of adversarial inputs for what they are.”

Most defense and detection methods are static, which means that once deployed, they just sit there, and the adversary eventually finds a way to circumvent them, Wang says.

That’s why his project was designed to detect adversarial inputs in an attack-agnostic manner. “It’s adaptive to the attacks,” Wang says. “They might be attacks we are unfamiliar with, but if they follow certain patterns, we’ll be able to detect them.”

Notably, this new method of detection is independent of the system it protects, meaning it can be extended as researchers learn more about various attacks, or adapted to fit the specific needs of a particular deep learning system, Wang says. That independence also means the detection and defense mechanism isn’t a drag on overall system performance.
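The article doesn’t reveal Eagle Eye’s internals, but one way to picture an attack-agnostic, model-independent check is a wrapper that flags inputs whose predictions are unstable under tiny random perturbations: benign inputs tend to keep their labels, while adversarial ones often flip. The function name and threshold below are illustrative assumptions, not Wang’s algorithm:

```python
import torch

def looks_adversarial(model, image, trials=8, sigma=0.02, threshold=0.75):
    """Heuristic detector: re-classify the input under small random
    noise and flag it if the predicted label is unstable
    (illustrative sketch only, not the Eagle Eye algorithm)."""
    with torch.no_grad():
        base = model(image).argmax(dim=1)
        agree = sum(
            (model(image + sigma * torch.randn_like(image)).argmax(dim=1) == base)
            .float().mean().item()
            for _ in range(trials)
        ) / trials
    return agree < threshold  # low agreement -> suspicious input
```

Because a check like this only calls the model the way any other client would, it can sit in front of an existing system without retraining or modifying it, which matches the independence Wang describes.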

When the project is complete, the base product will be available for organizations and government agencies to use and build upon, and because it is NSF-funded, Wang says, it will be free to the public.

“Deep learning is the frontier of the machine learning and data analysis fields,” Wang says, and it’s an area that’s intrigued him going all the way back to his days as an undergraduate student at Zhejiang University in China.

“Before college, I never had a chance to touch computers,” he says. “That was the first time I used them, and it really opened my world.”

At the time, he got involved in a project that automated the analysis of documents so that someone could intelligently search through thousands of them and find the ones most relevant to their interests.

“It was around 1999 - the pre-Google era,” he remembers. “Not too many people really knew what a search engine was about or how it worked, but before you knew it, it changed the world.”

The speed and scope of change is one of the things that excites Wang about the possibilities of deep learning, but he also recognizes the importance of continuing to address societal issues related to transparency and security. In addition to “Eagle Eye,” he’s currently working on a project that tackles people’s privacy concerns when they use their mobile phones.

“You have a huge amount and variety of data on your mobile phone, and when you use it, you get very personalized service based on your history,” he says. “For example, your phone may note what restaurants you like. It also notes your location, and when you’re close by, perhaps it sends you a coupon.”

The trade-off between maintaining information privacy and receiving personalized service is a tricky one, and everyone draws their own line regarding what’s acceptable. But as Wang notes, “Most people don’t end up controlling the type and amount of information that gets sent to their service providers.”

He and his team of students are building a framework in which each person has explicit control over what data gets sent and how much of it, he says. “You will control the gateway, not someone else.”
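The article doesn’t describe the framework’s design, but the gateway idea can be pictured as a policy filter that strips any field the owner hasn’t explicitly allowed before data leaves the phone. The field names and policy format below are hypothetical illustrations, not the team’s implementation:

```python
def gateway_filter(payload: dict, allowed: set) -> dict:
    """Drop every field the user has not opted in to sharing."""
    return {k: v for k, v in payload.items() if k in allowed}

payload = {
    "location": "40.79,-77.86",           # where the user is right now
    "favorite_restaurants": ["Diner A"],  # dining history
    "contacts": ["..."],                  # address book
}
policy = {"favorite_restaurants"}  # the user opts in to restaurant data only

print(gateway_filter(payload, policy))  # location and contacts never leave the phone
```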