The NSF CAREER award recognizes promising early-career researchers poised to become academic role models and leaders in their fields. In 2022, three Rossin College professors joined the distinguished group of Lehigh faculty who hold this prestigious national distinction.

Xiu Yang: Fixing the noise problem in quantum computing

The promise of quantum computing is a big one.

The ability to solve—in days—problems in areas such as mathematics, finance, and biological systems that are so complex it would take a classical computer hundreds of years to calculate.

But that ability is a long way off, and a big reason why is noise.

“The key barrier for quantum computing is noise in the device, which is a hardware issue that causes errors in computing,” says Xiu Yang, an assistant professor of industrial and systems engineering. “My research is on the algorithm level. And my goal is, given that noise in the device, what can we do about it when implementing quantum algorithms?” 

Yang recently won support from the National Science Foundation’s Faculty Early Career Development (CAREER) program for his proposal to develop methods to model the error propagation in quantum computing algorithms and filter the resulting noise in the outcomes.

Yang will use cutting-edge statistical and mathematical methods to quantify the uncertainty induced by the device noise in quantum computing algorithms. His work could help quantum computing move toward real-world adoption in a wide range of fields, such as drug development, portfolio optimization, and data encryption, where the technology is seen as a potential game-changer.

“My first goal is to model noise accumulation,” he says. “So for example, if I run a so-called iterative algorithm, the noise or the error from the device will accumulate through each iteration. It’s possible that for some algorithms, the error will be so large that the outcome of the algorithm is useless. But in other cases, it may not be that significant.”

In those cases, the noise that’s contaminating the outcome could be filtered out.

“So I first need to see how the error propagates, and then, if I know how much it contaminated the outcome, I can determine if the results are useless, or if the noise can be filtered out to get the desired outcome,” he says.
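To make the idea concrete, here is a minimal numerical sketch, not a quantum algorithm: a toy iterative computation picks up a small random error at every step, so the accumulated error can swamp a single run, but if the noise is unbiased, averaging repeated runs filters much of it back out.

```python
import random

def noisy_iterative_run(n_iters, noise_level, seed=None):
    """Toy iterative computation (repeatedly add 0.1 to a running value),
    with a small random error injected at every iteration to mimic device
    noise accumulating across the computation. A caricature of the idea,
    not an actual quantum algorithm."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(n_iters):
        x += 0.1                            # ideal update
        x += rng.gauss(0.0, noise_level)    # per-iteration device error
    return x

N_ITERS = 50
TRUE_ANSWER = 0.1 * N_ITERS  # 5.0 in the noiseless case

# A single noisy run drifts noticeably from the true answer, because the
# per-iteration errors pile up over 50 iterations...
single = noisy_iterative_run(N_ITERS, noise_level=0.2, seed=1)

# ...but if the errors are unbiased, averaging many repeated runs filters
# much of the accumulated noise out of the final outcome.
runs = [noisy_iterative_run(N_ITERS, 0.2, seed=s) for s in range(200)]
filtered = sum(runs) / len(runs)

print(f"single-run error: {abs(single - TRUE_ANSWER):.3f}")
print(f"filtered error:   {abs(filtered - TRUE_ANSWER):.3f}")
```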

To that end, Yang will investigate various types of algorithms to determine how they are affected, and whether they need to be redesigned or whether he can develop a filter instead.

“Basically, I’m analyzing the suitability of quantum algorithms on quantum computers,” he says. “So this is a quantum numerical analysis from a probabilistic perspective.”

The ultimate goal is to enable quantum computing to achieve its promise of unparalleled speed in solving highly complex problems, such as simulating physical and chemical systems that involve interactions among millions of molecules.

“Let’s say a pharmaceutical company wants to design a new drug or vaccine,” he says. “They need to understand the interaction between all those particles. If I were to use a classical computer, that process would be very slow. But with a quantum computer, it would be very, very fast.”

Yang says the award not only helps his field get a step closer to that reality but also reflects a recognition outside his community of researchers that quantum computing’s potential is worth the investment.

“This award is from both the NSF’s Division of Computing and Communication Foundations and its Division of Mathematical Sciences,” he says. “Which means that people in the math and statistics community are now getting interested in quantum computing. They realize this is a very important area, and we can make a contribution.”

Subhrajit Bhattacharya: A simpler path to supercharge robotic systems

Robotic arms are typically composed of joints, which help the arm move and perform a task. Each of those joints is capable of either rotating or extending—and the number of such possible independent movements is referred to as the robotic system’s degrees of freedom. The more joints in the arm, the more degrees of freedom it has.

“In general, robotic systems are very high-degree-of-freedom systems,” says Subhrajit Bhattacharya, an assistant professor in the Department of Mechanical Engineering and Mechanics. A robotic arm, for example, “might have 10 different joints. So if you’re trying to make the arm move and grab something, this high degree of freedom—these 10 different joints—presents 10 different variables for making that motion. The more variables you have, the more computationally expensive it is to operate that system.”
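Some back-of-the-envelope arithmetic shows why the cost climbs so quickly: if each joint angle is discretized into even a modest number of settings, the number of possible arm configurations grows exponentially with the number of joints. The joint counts and discretization in the sketch below are illustrative, not drawn from any particular system.

```python
# How the search space of a robotic arm grows with its degrees of freedom.
# Each joint is discretized into a fixed number of settings; the numbers
# below are illustrative, not taken from any particular robot.
SETTINGS_PER_JOINT = 36  # e.g., one setting per 10 degrees of rotation

for num_joints in (2, 4, 6, 10):
    configurations = SETTINGS_PER_JOINT ** num_joints
    print(f"{num_joints:2d} joints -> {configurations:.2e} possible configurations")

# 2 joints yields roughly 1.3e+03 configurations; 10 joints yields roughly
# 3.7e+15, which is why naive search over the full joint space quickly
# becomes computationally hopeless.
```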

Bhattacharya is a member of Lehigh’s Autonomous and Intelligent Robotics (AIR) Lab, and he researches ways to reduce this complexity. He received the NSF CAREER award for his proposal to use topological abstraction for robot path planning.

Topology is the mathematical study of properties preserved through the twisting, stretching, and deformation—but not the tearing—of an object.

Bhattacharya uses topology to abstract away the complexity (essentially, to remove the details) presented by all these variables within a robotic system, making the system simpler to operate. So if, for example, you’re using a robotic manipulator with 10 joints and you want it to grab something, topology would reduce the number of variables involved in that motion and provide the robot with an algorithm that allows it to reach its final destination efficiently and accurately.

“The output of the algorithm is the sequence of motions,” he says.

Bhattacharya says abstraction is currently being done by others in the field, but in an ad hoc manner. “They’re often very randomized searches. What I’m proposing is a more formal approach of doing abstraction that guarantees optimality and algorithmic completeness, which is a guarantee that we’ll find a solution if a solution exists.”
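As a loose illustration of what planning over an abstraction looks like, the sketch below searches a hand-made graph of coarse workspace “regions” with ordinary breadth-first search. This generic sketch is not Bhattacharya’s topological construction, but it shows the kind of guarantees at stake: the search is complete (it finds a route whenever one exists) and returns a route with the fewest region-to-region moves.

```python
from collections import deque

# A hand-made abstract graph: nodes are coarse regions of the workspace,
# edges mean the arm can move directly between the two regions. The regions
# and connections here are invented for illustration.
REGION_GRAPH = {
    "home": ["above_table", "side_shelf"],
    "above_table": ["home", "behind_obstacle", "grasp_zone"],
    "side_shelf": ["home", "behind_obstacle"],
    "behind_obstacle": ["above_table", "side_shelf", "grasp_zone"],
    "grasp_zone": ["above_table", "behind_obstacle"],
}

def plan_region_sequence(graph, start, goal):
    """Breadth-first search over the abstract region graph: complete (finds
    a route whenever one exists) and optimal in the number of region-to-
    region moves."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path          # sequence of regions = high-level motion plan
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                  # no route exists

print(plan_region_sequence(REGION_GRAPH, "home", "grasp_zone"))
# ['home', 'above_table', 'grasp_zone'] -- each abstract step would then be
# refined into actual joint motions by a lower-level planner.
```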

Robotic systems have become increasingly important in industries such as transportation, manufacturing, and health care, and this formalized approach can make those systems even more reliable, says Bhattacharya. In addition, his approach can be broadly applied to systems such as those that employ multiple robots, one or more flexible cables, robotic manipulators, or soft robotic arms.

“The ultimate goal is the design of systematic algorithms that can achieve these topological abstractions in an efficient manner, and in a way that is not specific to a particular system,” he says.

Sihong Xie: Pulling back the curtain on the intelligence behind AI

Machine learning models are capable of solving complex problems by analyzing an enormous amount of data. But how they computationally solve the problem is a mystery. “It’s difficult for humans to make sense of the reasoning process of the program,” says Sihong Xie, an assistant professor of computer science and engineering who won the NSF CAREER award for his proposal to make machine learning models more transparent, more robust, and more fair.

“Creating accountable machine learning is the ultimate goal of our research,” he says.

Making machine learning algorithms explainable will give human users greater confidence and trust in the models. To establish that explainability, Xie will work with domain experts to combine human knowledge with machine learning programs. He’ll incorporate the constraints that guide these professionals in their decision-making into the development of algorithms that more closely reflect human domain knowledge and reasoning patterns.

In practice, few human experts will be able to dedicate the time necessary to fully compile the constraints around any one question. To that end, Xie intends to automate the creation of such constraint checklists by collecting relevant data from the experts instead. He and his team will design another algorithm to find what he calls the sweet spot in this checklist creation: one that is sensitive enough to detect subtle but useful positives, but not so sensitive that it generates too many false positives.
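The sweet spot he describes is, at bottom, a sensitivity trade-off. Here is a minimal sketch of the idea, using invented model scores and labels rather than real expert data: sweep a detection threshold and pick the setting that catches the most true positives without letting false positives pile up.

```python
# Toy illustration of the "sweet spot": sweep a detection threshold and
# balance true positives against false positives. The model scores and
# ground-truth labels below are invented for illustration only.
items = [  # (model score, is the item actually a positive?)
    (0.95, True), (0.85, True), (0.70, True), (0.65, True),
    (0.60, True), (0.55, False), (0.45, True), (0.40, False),
    (0.35, False), (0.25, False), (0.15, False),
]

def counts_at(threshold):
    flagged = [positive for score, positive in items if score >= threshold]
    true_pos = sum(flagged)              # useful detections
    false_pos = len(flagged) - true_pos  # noise that erodes trust
    return true_pos, false_pos

best_threshold, best_score = None, None
for threshold in (0.9, 0.5, 0.3, 0.1):
    tp, fp = counts_at(threshold)
    score = tp - fp  # one simple way of weighing the two against each other
    print(f"threshold {threshold:.1f}: {tp} true positives, {fp} false positives")
    if best_score is None or score > best_score:
        best_threshold, best_score = threshold, score

print(f"sweet spot under this trade-off: threshold {best_threshold}")
```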

“Real-world data are dirty,” he says. “We want the machine program to be robust enough so that if it is dealing with reasonably dirty data, it will still generate reliable output.”

Along with questions about accountability come concerns about algorithmic fairness: If machine learning algorithms now influence what we read in our social feeds, which job postings we see, and how our loan applications are processed, how can we be sure that those choices are made ethically?

To address those concerns, Xie will use multi-objective optimization to find the most efficient solutions among competing perspectives on what’s considered fair.

“Different people, different organizations, different countries, they all have their own definition of fairness,” he says. “So we have to explore all the possible trade-offs between these different definitions, and that’s the technical challenge, because there are so many different ways to trade off. The computer has to actually search for how much each of these fairness standards has to be respected.”

He will provide algorithmic solutions that can efficiently search such trade-offs.
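One standard way to search such trade-offs, sketched below with two invented “unfairness” measures rather than any real fairness definitions from the project, is to sweep the relative weight placed on each criterion and collect the different compromises that result.

```python
# Sketch of exploring trade-offs between two competing fairness criteria via
# weighted multi-objective optimization. The two "unfairness" measures are
# invented stand-ins, not real fairness definitions.

def unfairness_a(t):
    # e.g., a gap in approval rates that grows as t moves away from 0.2
    return (t - 0.2) ** 2

def unfairness_b(t):
    # e.g., a gap in error rates that grows as t moves away from 0.8
    return (0.8 - t) ** 2

candidates = [i / 100 for i in range(101)]  # grid over a decision parameter t

# Sweep the relative weight on the two criteria; each weight picks out the
# candidate that best balances them under that particular notion of fairness.
compromises = set()
for w in [i / 10 for i in range(11)]:
    best = min(candidates,
               key=lambda t: w * unfairness_a(t) + (1 - w) * unfairness_b(t))
    compromises.add(round(best, 2))

for t in sorted(compromises):
    print(f"t={t:.2f}  unfairness_a={unfairness_a(t):.3f}  "
          f"unfairness_b={unfairness_b(t):.3f}")
# Each row is a different defensible compromise; no single one is "most fair,"
# and choosing among them is exactly the trade-off the algorithms must search.
```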

The implications of this research could be profound. Xie says that, eventually, experts could have much more confidence in artificial intelligence, and algorithms could become more responsive to social norms.

“The biggest motivation for me in conducting this research is that it has the potential to make a real social impact,” he says. “And because we always have humans in the loop, we’re going to ensure that these models inspire more confidence and treat people fairly.”