

Solving Ambiguity

“For robots to achieve true autonomy in the future, they must be able to assess risks before making decisions,” says Nader Motee, a professor of mechanical engineering and mechanics.

Motee recently received a nearly $680,000 grant from the Office of Naval Research to develop a novel, multi-stage, perception-based control paradigm that will essentially help robots assess risk, and ultimately make autonomous systems safer and more efficient.

We humans conduct risk analysis all the time—from how we drive to what we say and how we say it. That analysis allows us to make a decision—slow down, say “I’m sorry,” maybe not use all caps in that text message. At this moment, robots can’t do this kind of analysis, which means they can’t make decisions on their own (a relief to most of us, no doubt). But a world with autonomous robots could be a world in which we humans get a lot of meaningful assistance from machines—more help with disaster recovery, for instance. 

However, to do risk analysis, robots must first quantify the ambiguity of their perception. “In humans, our perception is based on what we’ve learned in the past,” says Motee. “But the number of samples that a robot, or a human, can be fed of any given object is limited. So there’s always ambiguity and uncertainty about what the robot is seeing. Is it a stop sign? On top of that, if there’s noise in the environment, like rain or fog or darkness, there’s uncertainty about the object itself. Is it even a sign at all? So there is uncertainty about not only the object, but also the identity of that object inside that class. So the ambiguity is the uncertainty of the uncertainty.”
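To make that “uncertainty of the uncertainty” concrete, here is a minimal sketch in Python of one standard way to split it apart. The ensemble, the numbers, and the three labels are invented for illustration and are not drawn from Motee’s work; averaging the predictions of several models and comparing entropies separates noise in the scene (aleatoric uncertainty) from the models’ disagreement about what they are seeing (epistemic uncertainty).

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a categorical distribution."""
    return -np.sum(p * np.log(p + eps), axis=-1)

# Illustrative numbers only: five hypothetical classifiers each score the
# same blurry image over three labels (stop sign, speed limit, not a sign).
probs = np.array([
    [0.70, 0.20, 0.10],
    [0.55, 0.35, 0.10],
    [0.80, 0.10, 0.10],
    [0.40, 0.45, 0.15],
    [0.65, 0.25, 0.10],
])

total = entropy(probs.mean(axis=0))  # uncertainty of the averaged prediction
aleatoric = entropy(probs).mean()    # noise in the scene itself (rain, fog, darkness)
epistemic = total - aleatoric        # disagreement between the models: roughly,
                                     # the "uncertainty of the uncertainty"

print(f"total={total:.3f}  aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}")
```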

Ambiguous perception in a robot is dangerous—consider, for example, the consequences of a self-driving car perceiving a stop sign as a speed limit sign.

It’s a real problem to solve, and Motee and his team are tackling it by quantifying the sources of uncertainty. Essentially, they want to go inside the black box of a range of perception modules—machine learning models that use visual sensing—to better understand how the models are perceiving the environment.

“The relationship between the input, which is the images, and the output, which is the labels (like traffic sign), is very complex,” he says. “But to quantify the ambiguity in the output of perception, I have to analyze these models and the relationship between these two quantities. Then I can compute, if I have some uncertainty on the input, how that would be transferred to the output.”
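One common way to make that computation, sketched below in Python, is Monte Carlo sampling: perturb the input, push each perturbed copy through the model, and measure how much the output labels move. The one-layer toy “model” and the function names here are stand-ins invented for illustration, not the team’s actual code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def propagate(model, image, noise_std=0.05, n_samples=500, seed=0):
    """Monte Carlo sketch: jitter the input with Gaussian noise (a crude
    stand-in for rain, fog, or sensor noise) and watch how that uncertainty
    shows up in the model's label probabilities."""
    rng = np.random.default_rng(seed)
    outs = np.array([model(image + rng.normal(0.0, noise_std, image.shape))
                     for _ in range(n_samples)])
    return outs.mean(axis=0), outs.std(axis=0)  # per-label mean and spread

# Toy stand-in for a trained perception module: one linear layer plus softmax.
W = np.random.default_rng(1).normal(size=(3, 64))
model = lambda x: softmax(W @ x)

image = np.random.default_rng(2).normal(size=64)  # stand-in "image" features
mean_p, std_p = propagate(model, image)
print("label probabilities:", mean_p.round(3), " spread:", std_p.round(3))
```

A wide spread on a label is the signal a downstream controller cares about: it means the perceived identity of the object is not stable under the noise the robot expects to encounter.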

That’s the first step. 

“Once we quantify the ambiguity,” he says, “we could use risk measures for decision-making purposes.”
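Conditional value-at-risk (CVaR) is one widely used risk measure in robot decision-making; whether it is the measure Motee’s team will adopt isn’t specified here, but a small sketch shows the idea. Instead of picking the action with the best average outcome, a risk-aware robot also weighs the tail of bad outcomes.

```python
import numpy as np

def cvar(losses, alpha=0.9):
    """Conditional value-at-risk: the average loss over the worst
    (1 - alpha) fraction of outcomes."""
    losses = np.sort(np.asarray(losses))
    return losses[int(np.ceil(alpha * len(losses))):].mean()

# Hypothetical sampled losses for two candidate actions at an
# ambiguously perceived sign (numbers invented for illustration).
rng = np.random.default_rng(0)
brake = rng.normal(2.0, 0.5, 1000)    # modest, predictable cost
proceed = rng.normal(1.0, 3.0, 1000)  # cheaper on average, heavier tail

for name, losses in (("brake", brake), ("proceed", proceed)):
    print(f"{name}: mean={losses.mean():.2f}  CVaR(0.9)={cvar(losses):.2f}")
```

On these made-up numbers, “proceed” wins on average cost but loses badly once the worst 10 percent of outcomes are taken into account, which is exactly the distinction a risk-aware controller needs to make.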

A robot capable of assessing risk could, in theory, make a safe decision on its next course of action. A team of robots could communicate effectively. They could also perceive the actions of the humans around them and infer how they could best assist them. 

“But they have to assess risk first to determine if their next course of action is actually going to help the humans, or make their work even harder. They’ll have to do a lot of analysis.”

And so will Motee and his team. But he finds the prospect of a future where perception modules work as a connected network exciting. They could perceive information about our health, our transportation system, and our security.

“These modules would collaborate with each other,” he says, “and hopefully create a smart society that could improve our health and our lifestyles.”  


 
