CSE professor Daniel P. Lopresti co-authors Computing Community Consortium white paper calling for coordinated, measurable research efforts to shape the future of artificial intelligence

As artificial intelligence races ahead—reshaping nearly every aspect of modern life—Lehigh University computer scientist Daniel P. Lopresti is among a group of researchers asking a deceptively simple question: What would it look like if the field of computing rallied around a shared, long-term goal?

Lopresti, a professor of computer science and engineering in the P.C. Rossin College of Engineering and Applied Science, and his peers argue that while computing underpins many modern scientific breakthroughs, it has never defined a unifying “grand challenge,” nor produced a coordinated, large-scale effort comparable to the Apollo moonshot or the Human Genome Project. That absence, he says, has become increasingly consequential as AI systems are rapidly deployed across society.

“You could argue that if the push toward AI had grown out of a grand challenge—one that baked in the concerns about ethics, societal impact, and resources from the very beginning—we might be in a better place now,” says Lopresti, whose research spans machine learning, pattern recognition, AI for social good, and systems-level algorithm design.

Defining “grand challenges” for computing

Lopresti is a co-author of a recent white paper, “The Imperative for Grand Challenges in Computing,” developed through the Computing Community Consortium (CCC), an NSF-funded committee (under the umbrella of the nonprofit Computing Research Association) that convenes leading researchers to shape long-term priorities for the computing field. He has been active with CCC since 2015, serving on its council and, most recently, as vice chair and council chair, before transitioning to chair emeritus—roles in which he has helped guide community-wide visioning efforts and national research roadmaps.

The new paper examines the history of grand challenges across science and engineering, considers what such efforts could look like in computing, and outlines a set of prototype challenges designed to meet clear criteria: each must address a critical societal need, be ambitious enough to drive progress in the field yet barely feasible with current techniques, be inherently interdisciplinary, and be measurable.

“A grand challenge can’t be something like, ‘We want to build better AI,’” Lopresti says. “What does that even mean? Does that mean more fair AI or more ethical AI? You have to be able to measure success. It has to be something you either accomplish—or you don’t.”

Net-zero computing and the future of energy-efficient AI

Lopresti contributed a prototype challenge that focuses on net-zero computing—rethinking how algorithms, hardware, and systems are designed to dramatically reduce energy and resource consumption. Right now, he says, the field tends to reward research that delivers faster results, even if the underlying algorithms burn an enormous amount of energy.

For Lopresti, net-zero computing reframes what progress looks like in AI and high-performance computing. 

“There’s an optimization problem to solve if we shift focus from speed to how we use resources,” he says. “When computing is applied to real energy-use problems, the savings achieved through more efficient algorithms can outweigh the energy required to run them in the first place. That’s what makes the idea net zero. Let’s reward—and publish—the researchers who can say, ‘I achieved this incredible result using a tiny fraction of the resources others used.’ I’m fond of this idea because it’s measurable.”

Beyond any single proposal, the paper is ultimately a call to action. Its authors are urging the broader computing community—universities, professional societies, industry leaders, and funding agencies—to critique, refine, and build upon these ideas, with the aim of fostering long-term thinking in a field often driven by short-term incentives.

While it may seem as though the moment to shape AI through grand challenges has long since passed, Lopresti disagrees. It’s not too late, he says, but the window is narrowing.

“We’ve seen amazing things with AI, and we’ve also seen troubling ones,” he says. “But if we don’t step back and take a more encompassing view—one that considers all the implications—we risk missing the chance to shape where this technology goes.”

—Story by Christine Fennessy

About Daniel P. Lopresti

Daniel P. Lopresti is a professor of computer science and engineering in Lehigh University’s P.C. Rossin College of Engineering and Applied Science. He joined Lehigh in 2003 and has held a number of leadership roles, including a decade as chair of the Department of Computer Science and Engineering (ending in 2019) and a term as interim dean of the college from 2014 to 2015. He also served as director of Lehigh’s Data X Initiative and played a key role in designing the renovation of Mountaintop Building C.

Lopresti’s research spans document analysis, pattern recognition, machine learning, and applied artificial intelligence, with an emphasis on systems that interact with people and society. He has held research positions at Brown University, Bell Labs, and Lucent Technologies, and helped found the Matsushita Information Technology Laboratory in Princeton, New Jersey. An internationally recognized leader in document analysis, he has co-chaired major conferences in the field, served as co–editor-in-chief of the International Journal on Document Analysis and Recognition from 2014 through earlier this year, and sits on the editorial board of Computer Vision and Image Understanding. He has authored more than 150 peer-reviewed publications and holds 24 patents. In addition to his research, Lopresti has been active in applying AI to societal challenges, including electronic voting security and international efforts to combat human trafficking, and has served in leadership roles with the Computing Community Consortium and the International Association for Pattern Recognition.