Nader Motee
“Robot learning is an intrinsically multidisciplinary subject,” says Nader Motee, who organized a recent workshop on the topic, hosted by Lehigh’s Institute for Data, Intelligent Systems, and Computation. (Photo by Douglas Benedict/Academic Image)

Networks of robots working in tandem can accomplish complex tasks, but when one machine falters, it can cause chaos. Picture a drone flying away from its fleet and failing to photograph its assigned area, or a self-driving car getting too close to another and disrupting a carefully designed platoon.

Making networks like these smarter, more functional, and more efficient is the subject of two research projects, funded by more than $1 million in grants, led by Nader Motee, an associate professor of mechanical engineering and mechanics.

With support from the Office of Naval Research, Motee is investigating how to represent streaming data (e.g., images taken by an onboard high-frame-rate camera) efficiently for feature extraction, learning, planning, and control objectives.

For context, he uses the example of map classification using a fleet of flying robots. Although a single robot could accomplish this task, he says, the process might take hours or days. The timeline shrinks to minutes when a hundred or so robots each take on a smaller, more focused piece of the task, but the robots must communicate with one another to exchange relevant information, increasing efficiency and resiliency while working in uncertain environments.

“The challenge is to figure out which pairs of robots should talk to one another, and how often, and what information to share,” Motee says.

The amount of data the cameras on these flying robots collect is staggering: anywhere from 200 to 1,000 frames per second, with each frame four to five megabytes in size. The robots must process this data in real time because storing it all would be impossible. But not all data is relevant to the success of the task, nor does every robot in the fleet need to receive every bit of data from every other robot. Representing the data efficiently is what makes real-time learning, task planning, and control possible.
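To put those numbers in perspective, here is a quick back-of-envelope calculation using the frame rates and frame sizes quoted above; the per-robot and fleet totals are illustrative, not figures from Motee's project.

```python
# Back-of-envelope data rates for one camera-equipped robot, using the
# frame rates (200-1,000 fps) and frame sizes (4-5 MB) quoted above.
frame_rate_fps = (200, 1_000)   # low and high end, frames per second
frame_size_mb = (4, 5)          # low and high end, megabytes per frame

low_mb_s = frame_rate_fps[0] * frame_size_mb[0]    # 800 MB/s per robot
high_mb_s = frame_rate_fps[1] * frame_size_mb[1]   # 5,000 MB/s (5 GB/s) per robot
print(f"Per robot: {low_mb_s:,}-{high_mb_s:,} MB/s")

# Scaled to the hundred-robot fleet mentioned above (illustrative).
fleet_size = 100
print(f"Fleet of {fleet_size}: {low_mb_s * fleet_size / 1000:,.0f}-"
      f"{high_mb_s * fleet_size / 1000:,.0f} GB/s")
```

At those rates, even a few minutes of raw footage would overwhelm onboard storage, which is why each robot must distill frames into compact features on the fly and share only what is relevant.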

Getting robots to make these determinations for themselves is a very complicated but important step in the long-term development of artificial intelligence and autonomy.

"Improving robots’ awareness and decision-making is a complicated but critical step in advancing AI."
Nader Motee

“[This work] will be relevant for time-sensitive missions and tasks when humans cannot stay in the loop to monitor the deployed robots,” Motee says. “Achieving long-term autonomy and using onboard intelligent mechanisms will help robots survive for long periods of time during their missions in uncertain environments.”

In a separate project focused on risk analysis of nonlinear dynamical networks, supported by the Air Force Office of Scientific Research, Motee seeks to improve robot planning and control by transforming a robot’s dynamic behaviors from nonlinear (in finite dimensions) to linear (in infinite dimensions).

“For instance, if we change the input signal to a robot by 10 percent,” he explains, “its output will not change by the same 10 percent. Robots’ nonlinear behavior makes control design and task planning problems very challenging.”
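The article does not name the underlying technique, but one standard way to trade nonlinearity for dimension is the Koopman-operator viewpoint sketched below; treat it as a generic illustration of the idea, not a description of Motee's specific method.

```latex
% Nonlinear dynamics on a finite-dimensional state space:
%   the state x_k evolves as x_{k+1} = f(x_k), with f nonlinear.
% Lift the dynamics to scalar "observables" g of the state. The Koopman
% operator \mathcal{K} advances observables one step along trajectories:
\[
  x_{k+1} = f(x_k), \qquad
  (\mathcal{K} g)(x) = g\bigl(f(x)\bigr), \qquad
  g(x_{k+1}) = (\mathcal{K} g)(x_k).
\]
% \mathcal{K} is a linear operator even though f is not, but it acts on an
% infinite-dimensional space of observables: nonlinear in finite dimensions
% becomes linear in infinite dimensions, which opens the door to linear
% analysis and control tools.
```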

Motee’s team will investigate what conditions make nonlinear systems—such as platoons of self-driving cars or power networks—more prone to failure.

“If a tree branch falls on a power line, it may cause that line to fail, which may cause nearby power lines to overload and fail,” he says. “These local events may result in a global power outage, or systemic failure. And with platoons of self-driving cars, if the two leading cars fail to maintain a safe distance and collide, it will result in several collisions.”
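As a purely illustrative toy model of such a cascade (an assumption for illustration, not a model from Motee's project), the sketch below redistributes a failed line's load onto its neighbors and fails any line pushed past its capacity:

```python
# Toy cascading-failure model: each line carries a load and has a capacity.
# When a line fails, its load is split among surviving neighbors; any
# neighbor pushed past capacity fails too, and the cascade continues.
# Illustrative only -- not drawn from the research described above.

def cascade(loads, capacities, neighbors, first_failure):
    failed = {first_failure}
    frontier = [first_failure]
    while frontier:
        line = frontier.pop()
        alive = [n for n in neighbors[line] if n not in failed]
        if not alive:
            continue
        share = loads[line] / len(alive)    # redistribute the lost load
        for n in alive:
            loads[n] += share
            if loads[n] > capacities[n]:    # overload -> secondary failure
                failed.add(n)
                frontier.append(n)
    return failed

# Four lines in a ring; line 0 fails first (the "tree branch" event).
loads      = {0: 8.0, 1: 6.0, 2: 6.0, 3: 6.0}
capacities = {0: 10.0, 1: 9.0, 2: 9.0, 3: 9.0}
neighbors  = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(cascade(loads, capacities, neighbors, first_failure=0))  # every line fails
```

In this toy network a single downed line eventually takes the whole ring with it, which is exactly the local-to-systemic escalation the project aims to characterize and prevent.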

This project will explore how engineers can mitigate the effects of local failures in networks and prevent systemic failures.