In machine learning, “optimization algorithms are becoming more and more important, for two reasons,” says Sihong Xie, an assistant professor of computer science and engineering. “One is that, as the data sets become larger and larger, you cannot process all the data at the same time; and the second reason is that we can formulate an optimization problem that will analyze why the machine learning algorithm makes a particular decision.”

The latter is particularly important when applying machine learning to make decisions involving human users. Xie and his research group are investigating the transparency of machine learning models, because, as he explains, despite all of the advances driven by the technology in recent years, there are still significant open questions to be addressed.

One of them is that the algorithms that learn from data and make predictions are still something of a black box to the end user.

For example, imagine that you and a colleague are on a social network of job-seekers, akin to LinkedIn. You’re both in the same field, and equally qualified, but as you discuss your prospects over a cup of coffee, it’s clear that your colleague has been seeing more high-quality job postings than you have.

It makes you wonder: What information did the site’s algorithm use to generate the recommendations in the first place?

And are you not seeing some postings because of your age or your gender or something in your past experience? If so, the algorithm that produced the recommendations is suboptimal because its results are unfairly discriminatory, says Xie.

In late 2021, Xie’s PhD students Jiaxin Liu and Chao Chen presented the results of their work at two renowned meetings on information retrieval and data mining (Liu, at the ACM International Conference on Information and Knowledge Management, and Chen, at the IEEE International Conference on Data Mining).

The team found that the current state-of-the-art model, known as a “graph neural network,” can actually exacerbate bias in the data it uses to make decisions. In response, they developed an optimization algorithm based on the stochastic gradient method (an approach with a not-so-well-known Lehigh connection) that finds optimal trade-offs among competing fairness goals, allowing domain experts to select the trade-off that is least harmful to all subpopulations.
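The general idea can be made concrete with a small sketch. The toy Python example below is an illustration of the technique rather than the team’s actual code: a simple logistic model is trained with minibatch stochastic gradient descent on a loss that combines prediction error with a demographic-parity penalty, and sweeping the penalty weight traces out a menu of accuracy-versus-fairness trade-offs from which a domain expert could choose.

```python
# Toy illustration (not the researchers' code): fairness/accuracy trade-offs
# explored with minibatch stochastic gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two features, a binary group label, and a binary outcome
# that is correlated with group membership (the source of the bias).
n = 2000
group = rng.integers(0, 2, size=n)                 # protected attribute (0/1)
X = rng.normal(size=(n, 2)) + 0.5 * group[:, None]
true_logits = 1.5 * X[:, 0] - X[:, 1] + 0.8 * group - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-true_logits))).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(lam, epochs=30, batch=64, lr=0.1):
    """Minibatch SGD on cross-entropy loss + lam * (demographic parity gap)^2."""
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch):
            idx = order[start:start + batch]
            xb, yb, grp = X[idx], y[idx], group[idx]
            p = sigmoid(xb @ w + b)
            # Gradient of the cross-entropy (accuracy) term.
            gw = (p - yb) @ xb / len(idx)
            gb = np.mean(p - yb)
            # Demographic-parity gap and the gradient of its square,
            # added only when the minibatch contains both groups.
            if grp.min() != grp.max():
                gap = p[grp == 1].mean() - p[grp == 0].mean()
                s = p * (1 - p)                      # sigmoid derivative
                d_gap_w = (s[grp == 1][:, None] * xb[grp == 1]).mean(axis=0) \
                        - (s[grp == 0][:, None] * xb[grp == 0]).mean(axis=0)
                d_gap_b = s[grp == 1].mean() - s[grp == 0].mean()
                gw += lam * 2 * gap * d_gap_w
                gb += lam * 2 * gap * d_gap_b
            w -= lr * gw
            b -= lr * gb
    p = sigmoid(X @ w + b)
    accuracy = np.mean((p > 0.5) == y)
    parity_gap = abs(p[group == 1].mean() - p[group == 0].mean())
    return accuracy, parity_gap

# Sweeping the fairness weight exposes the menu of trade-offs.
for lam in [0.0, 1.0, 5.0, 20.0]:
    accuracy, parity_gap = train(lam)
    print(f"lambda={lam:5.1f}  accuracy={accuracy:.3f}  parity gap={parity_gap:.3f}")
```

Larger penalty weights shrink the gap between the groups’ predicted rates at some cost in accuracy; the point of the research is that no single weight is universally “right,” so the choice is surfaced to a human expert.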

For example, the algorithm could be used to help ensure that selection for a specific job was unaffected by the applicant’s sex while potentially still allowing the company’s overall hiring rate to vary by sex if, say, women applicants tended to apply for more competitive jobs.
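The distinction is easiest to see with made-up numbers: if hire rates within each job are identical for men and women, the overall rates can still differ simply because the two groups apply to different jobs in different proportions. The short script below uses purely illustrative figures to work through one such case.

```python
# Illustrative arithmetic only; the numbers are invented, not from the study.
applications = {
    # (job, sex): (applicants, hired); hire rate is equal within each job
    ("competitive", "women"): (80, 8),    # 10% hire rate within this job
    ("competitive", "men"):   (20, 2),    # 10% hire rate within this job
    ("standard",    "women"): (20, 10),   # 50% hire rate within this job
    ("standard",    "men"):   (80, 40),   # 50% hire rate within this job
}

for sex in ("women", "men"):
    applied = sum(a for (_job, s), (a, _h) in applications.items() if s == sex)
    hired = sum(h for (_job, s), (_a, h) in applications.items() if s == sex)
    print(f"{sex}: overall hire rate {hired / applied:.0%}")

# Output:
#   women: overall hire rate 18%
#   men: overall hire rate 42%
```

Within each job the process treats the sexes identically, yet the aggregate rates diverge because women in this example apply more often to the competitive job, which is exactly the kind of trade-off a domain expert would need to judge.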

The team’s work on explainable graph neural networks also adopted the stochastic gradient method, using it to find human-friendly explanations of why a machine learning model makes favorable or unfavorable decisions for different subpopulations.
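Explanation itself can be posed as an optimization problem. The sketch below is again an illustration rather than the published method: gradient descent learns a sparse mask over the input features of a toy frozen model (a stand-in for a trained graph neural network), keeping the masked prediction faithful to the original while zeroing out as many features as possible, so the surviving features read as the explanation.

```python
# Toy illustration (not the published method): explanation as optimization
# of a sparse feature mask by gradient descent.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Frozen toy model: score = sigmoid(w . x + b), standing in for a trained GNN.
w = np.array([2.0, -0.1, 1.5, 0.05, -1.0])
b = -0.3
x = np.array([1.0, 0.8, -0.5, 1.2, 0.4])       # the instance to explain
f_x = sigmoid(w @ x + b)                        # original prediction

theta = np.zeros_like(x)                        # mask logits, m = sigmoid(theta)
alpha, lr = 0.05, 0.5                           # sparsity weight, step size

for _ in range(500):
    m = sigmoid(theta)
    f_m = sigmoid(w @ (m * x) + b)              # prediction on the masked input
    # Gradient w.r.t. m of (f_m - f_x)^2 (fidelity) plus alpha * sum(m) (sparsity).
    grad_m = 2 * (f_m - f_x) * f_m * (1 - f_m) * w * x + alpha
    theta -= lr * grad_m * m * (1 - m)          # chain rule through m = sigmoid(theta)

m = sigmoid(theta)
print("original prediction :", round(float(f_x), 3))
print("masked prediction   :", round(float(sigmoid(w @ (m * x) + b)), 3))
print("feature mask        :", np.round(m, 2))  # larger entries = features kept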

Read more about the stochastic gradient method, and how Lehigh Engineering researchers are invested in making machine learning smarter, faster, and more trustworthy, in the full article.

Story by Steve Neumann

Sihong Xie, assistant professor, Department of Computer Science and Engineering