Rising Star Lifang He
Computer scientist Lifang He envisions unified models that bring the reasoning of machine learning to clinical care

Lifang He, an associate professor of computer science and engineering (CSE), didn’t set out to build clinical tools when she began working with machine learning models. But during her doctoral research, after applying tensor-based methods to neuroimaging data in collaboration with clinicians at Northwestern University and the University of Illinois Chicago, she saw those models sharpen diagnostic accuracy in ways that proved immediately consequential.

The experience redirected her trajectory, moving her from methodological innovation toward medicine, where computational insight could shape real-world decisions.

“I was very excited about that result, so I started to work in the medical field and try to bridge the gap between machine learning and medicine,” He says.

Now at Lehigh, He focuses on developing artificial intelligence systems that can synthesize complex health data and translate patterns into clinically meaningful understanding. Her engagement with the broader AI and health research communities reflects that same translational focus.

She recently co-organized an event in the 2025 Spring Symposium Series sponsored by the Association for the Advancement of Artificial Intelligence with fellow CSE professor Mooi Choo Chuah and collaborators from Massachusetts General Hospital/Harvard Medical School and the Northwestern University Feinberg School of Medicine.

The AI for Health Symposium highlighted emerging directions in multimodal modeling, healthcare-AI partnerships, and federal priorities in AI-enabled medicine. She has also been invited to serve as area chair for this year’s ACM SIGKDD Conference on Knowledge Discovery and Data Mining in Jeju, Korea, where she will lead the “AI for Healthcare” section under the new AI for Science track.

“It’s the premier data science and AI conference,” she says, “and the new track focuses on how AI can better serve science.”

Within the regional professional community, He serves as chair of the IEEE Computer Society's Lehigh Valley section, working to strengthen connections among academia, industry, and the general public while advocating for the responsible application of new computing advances.

Across these roles and activities runs a consistent research aim: building AI systems capable of processing varied data streams with the nuance of a medical professional.

“Our goal is to build AI systems that can reason about human health, the way clinicians or doctors do,” He says.

Several proposed lines of research suggest how that goal could take shape. He and colleagues at Yale University and the University of Pennsylvania have conducted preliminary work toward multimodal AI systems that integrate neuroimaging, retinal imaging, and clinical data. A proposal building on that work is under review by the National Institutes of Health. The concept is to bridge artificial intelligence, clinical data, and translational medicine to better understand mechanisms underlying complex neurological diseases, with the long-term objective of improving early diagnosis and patient prognosis.

Central to the proposal is an eye-brain foundation model trained on expansive datasets that include retinal and brain images and clinical data—an approach motivated by evidence that the eye can serve as a window into the brain in neurological disease.

“A generalist model can uncover patterns and connections that reveal insights across many diseases,” she says. “A key innovation is its ability to handle missing and irregular longitudinal data. In real clinical settings, patients often have only one type of scan available. So it will be able to make predictions from eye images only or from brain images only, or both if they’re available.”

The framework would combine broad-based AI methods with targeted architectures optimized for particular diseases or imaging modalities. Such systems would offer a flexible foundation for a variety of clinical applications, including early detection and the prediction of disease progression.

“It will be able to support many downstream tasks with minimal additional training,” she says.

A separate collaboration with researchers at William & Mary and VCU Health outlines a complementary direction: integrating wearable sensors and mobile technologies to monitor and predict motor symptoms in Parkinson’s disease. That proposal envisions translating advanced foundation models into lightweight, real-time systems that could operate directly on smartphones, enabling continuous monitoring and more personalized care in everyday clinical settings.

For He, merging high-level versatility with the precision needed for specific medical diagnoses isn't just a coding challenge. It's the most promising route toward more responsive, patient-centered care.

“I’m excited now about the possibility of unifying the generalist and specialist models so we can meaningfully improve outcomes for patients.”