Games and Filters: a Road to Safe Robot Autonomy

Dr. Jaime Fernández Fisac

Assistant Professor

Department of Electrical and Computer Engineering 

Princeton University

About Prof. Jaime Fernández Fisac

Jaime Fernández Fisac is an Assistant Professor of Electrical and Computer Engineering at Princeton University, where he directs the Safe Robotics Laboratory and co-directs the Princeton AI4ALL summer outreach program. His research combines control theory, artificial intelligence, cognitive science, and game theory with the goal of enabling robots to operate safely in human-populated spaces in a way that is well understood, and thereby trusted, by their users and the public at large. Prior to joining the Princeton faculty, he was a Research Scientist at Waymo (formerly Google’s Self-Driving Car project) from 2019 to 2020, working on autonomous vehicle safety and interaction. He received an Engineering Diploma from the Universidad Politécnica de Madrid, Spain, in 2012, an M.Sc. in Aeronautics from Cranfield University, U.K., in 2013, and a Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley in 2019. He is a recipient of the La Caixa Foundation Fellowship, the Leon O. Chua Award, the Google Research Scholar Award, and the Sony Focused Research Award.


Abstract

Autonomous robotic systems promise to revolutionize our homes, cities, and roads—but will we trust them with our lives? This talk will take stock of today’s safety-critical robot autonomy, highlight recent advances, and offer some reasons for optimism. We will first see that a broad family of safety schemes from the last decade can be understood under the common lens of a universal safety filter theorem, lighting the way for the systematic design of next-generation safety mechanisms. We will explore reinforcement learning (RL) as a general tool to synthesize global safety filters for previously intractable robotics domains, such as walking or driving through abrupt terrain, by departing from the traditional notion of accruing rewards in favor of a safety-specific Bellman equation. We will show that this safety-RL formulation naturally allows learning from near misses and boosting robustness through adversarial gameplay. Finally, we will turn our attention to dense urban driving, where safety hinges on the autonomous vehicle’s rapidly unfolding interactions with other road users. We will examine how robots can bolster safety by leaning on their future ability to seek out missing key information about other agents’ intent or even their location. We will conclude with an outlook on future autonomous systems and the role of transparent, real-time safety proofs in generating public trust.
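To make the "safety-specific Bellman equation" mentioned in the abstract concrete: instead of summing rewards over time, the safety value of a state is the worst (minimum) instantaneous safety margin along the best-controlled trajectory, computed with a discounted fixed-point backup of the form V(x) = (1−γ)·l(x) + γ·min{ l(x), maxᵤ V(f(x,u)) }. The sketch below is purely illustrative and not material from the talk: the 1-D dynamics, state grid, control set, and margin function l(x) are all invented for the example.

```python
import numpy as np

# Toy setup (invented for illustration): scalar state x on a grid,
# controls u in {-1, 0, +1}, dynamics x' = clip(x + u).
xs = np.linspace(-5.0, 5.0, 101)
us = np.array([-1.0, 0.0, 1.0])
gamma = 0.95

# Safety margin l(x): positive iff the state is currently safe.
# Here the failure set is |x| > 3, so l(x) = 3 - |x|.
l = 3.0 - np.abs(xs)

def step_index(i, u):
    # Nearest-grid-point dynamics x' = x + u, clipped to the grid.
    x_next = np.clip(xs[i] + u, xs[0], xs[-1])
    return int(np.argmin(np.abs(xs - x_next)))

# Discounted safety Bellman backup: the value is a running *minimum*
# of margins under the best-case control, not a sum of rewards.
V = l.copy()
for _ in range(500):
    V_new = np.empty_like(V)
    for i in range(len(xs)):
        best = max(V[step_index(i, u)] for u in us)
        V_new[i] = (1.0 - gamma) * l[i] + gamma * min(l[i], best)
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new

# States with V(x) > 0 are those from which safety can be maintained
# indefinitely; a safety filter overrides any command that would leave
# this set.
```

In this toy problem the converged V is positive near the origin and negative inside the failure region, recovering the safe set whose boundary a safety filter would enforce; in the talk's setting the same backup is solved at scale with RL function approximators rather than a grid.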