NSF CAREER winner on trustworthy software development

We live in the era of software 2.0.

An era of computer systems capable of learning desired behaviors, like how to recognize voices and faces, and of predicting outcomes, like whether a tumor is benign or malignant. Whereas its predecessor, software 1.0 if you will, is relatively straightforward, spelled out in explicit lines of code, machine learning is built on a network of mathematical transformations that parses data, finds patterns, and produces an outcome. On many tasks these systems now rival or exceed human accuracy, and they are transforming nearly every aspect of our lives, from travel to medicine to entertainment to security.

But such complexity is problematic. Machine learning systems are so large that, to expedite their creation, developers often reuse foundational building blocks, or neural network layers, known as primitive models. These models are available online, and it is often unclear who created them, or whether that creator can be trusted.

“These systems are so complicated, it doesn’t make sense to build them from scratch,” says Ting Wang, an assistant professor of computer science and engineering at Lehigh University’s P.C. Rossin College of Engineering and Applied Science. “One way to build a system quickly is to take one model from here, one from there, and put them together. But many models out there are contributed by untrusted third parties. If these models contain a vulnerability, your system inherits that vulnerability.”
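The practice Wang describes, pulling a pretrained primitive model from an online repository and building a new system around it, is commonly known as transfer learning. The minimal sketch below (assuming PyTorch and torchvision; the particular model, weights, and class count are illustrative, not from the story) shows how a reused backbone carries its downloaded weights, and anything hidden in them, directly into the new system.

```python
# Illustrative sketch: composing a system from a reused "primitive" model.
# Assumes PyTorch and torchvision are installed; resnet18 and the 2-class
# head are hypothetical choices for the example.
import torch.nn as nn
import torchvision.models as models

# Download a pretrained backbone contributed by a third party.
# Whatever is baked into these weights is inherited by the new system.
backbone = models.resnet18(weights="DEFAULT")

# Freeze the reused layers; only the new head will be trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a small task-specific head
# (e.g., two classes: benign vs. malignant).
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# From here, the developer fine-tunes only backbone.fc on their own data,
# while the bulk of the network remains exactly as it was downloaded.
```

The convenience is the point, and also the risk: the developer never inspects, let alone retrains, the millions of reused parameters, so any flaw or planted behavior in them survives intact.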
