A. Emrah Bayrak, Human-AI Collaboration
Mechanical engineering researcher earns NSF CAREER award to develop best practices for human-AI collaboration in engineering design

As artificial intelligence (AI) is inevitably woven into the workplace, teams of humans will increasingly collaborate with AI on complex design problems, such as those in the automotive and aerospace industries.

“Right now, design is mainly done by humans, and it’s based on their expertise and intuitive decision-making, which is learned over time,” says A. Emrah Bayrak, an assistant professor of mechanical engineering and mechanics who joined the Rossin College faculty earlier this year. “Usually, you’re not creating something totally new. You take something that works already, understand how it works, and make incremental changes. But introducing AI could make the process a lot faster—and potentially more innovative.”

However, best practices for integrating AI in a way that maximizes both productivity and the job satisfaction of human workers remain unclear. Bayrak recently won support from the National Science Foundation’s Faculty Early Career Development (CAREER) program for his proposal to allocate portions of complex design problems to humans and AI based on their capabilities and preferences.

The prestigious NSF CAREER award is given annually to junior faculty members across the United States who exemplify the role of teacher-scholars through outstanding research, excellent education, and the integration of education and research. Each award provides approximately $500,000 in funding over a five-year period.

Bayrak will explore the problem of dividing a complex task between human designers and artificial intelligence from both computational and experimental perspectives. On the computational side, he will use models that predict how a rational human being would explore the design of, say, the powertrain in an electric vehicle.

“We know that decision-making is a sequential process,” he says. “People will make a decision, look at the outcome, and revise their next decision accordingly. In order to maximize the range of an EV, when humans consider the design of the powertrain, they have to make decisions about gear ratios, motor size, and battery size. These are all mathematical variables that we can feed into a model to predict what the next decision should be if a human is a rational person.”
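In rough terms, such a model might resemble the sketch below, in which a simplified designer revises one powertrain decision at a time and keeps a change only if it improves a toy estimate of driving range. (The range formula, variable choices, and update rule here are illustrative assumptions, not taken from Bayrak’s models.)

```python
# Illustrative sketch of sequential, rational design decisions for an EV powertrain.
# The range model, variable choices, and update rule are invented for demonstration only.

def estimated_range_km(gear_ratio, motor_kw, battery_kwh):
    """Toy surrogate for driving range; a real study would use engineering simulation."""
    drivetrain_efficiency = 0.9 - 0.02 * abs(gear_ratio - 8.0)   # assumed best near 8:1
    consumption_kwh_per_km = 0.15 + 0.0005 * motor_kw            # bigger motor uses more energy
    return battery_kwh * drivetrain_efficiency / consumption_kwh_per_km

# Candidate values the designer may choose for each decision variable.
choices = {
    "gear_ratio": [6.0, 7.0, 8.0, 9.0, 10.0],
    "motor_kw": [100, 150, 200, 250],
    "battery_kwh": [60, 75, 90],
}

# Start from an existing design and revise one decision at a time,
# keeping a change only when the predicted range improves.
design = {"gear_ratio": 6.0, "motor_kw": 250, "battery_kwh": 60}
for _ in range(3):  # a few passes over the sequence of decisions
    for var, options in choices.items():
        best_val, best_range = design[var], estimated_range_km(**design)
        for val in options:
            trial = dict(design, **{var: val})
            r = estimated_range_km(**trial)
            if r > best_range:
                best_val, best_range = val, r
        design[var] = best_val

print(design, round(estimated_range_km(**design), 1), "km")
```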

AI, in contrast, makes decisions based on training data. Feed it data on good decisions regarding gears, motors, and batteries, and it can then estimate possible vehicle designs that will yield an acceptable range. Artificial intelligence could also use that knowledge to think about what the next design decision should be.
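A loose illustration of that idea: fit a simple surrogate model to data from past designs, then use it to score candidate next decisions. (The dataset, the linear model, and the candidate designs below are invented for demonstration; the AI agents in Bayrak’s work would be far more sophisticated.)

```python
# Illustrative sketch: an "AI" fit to past design data proposes the next decision.
# The dataset, features, and linear surrogate below are stand-ins for demonstration only.
import numpy as np

# Past designs: [gear_ratio, motor_kw, battery_kwh] and the driving range (km) each achieved.
X = np.array([
    [6.0, 200, 60],
    [8.0, 150, 75],
    [9.0, 100, 90],
    [7.0, 250, 60],
    [8.0, 100, 75],
], dtype=float)
y = np.array([310.0, 400.0, 452.0, 295.0, 430.0])

# Fit a linear surrogate of range as a function of the design variables.
A = np.hstack([X, np.ones((len(X), 1))])        # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_range(design):
    return np.append(design, 1.0) @ coef

# Score a few candidate next designs and suggest the most promising one.
candidates = np.array([
    [8.0, 120, 90.0],
    [9.0, 150, 75.0],
    [7.5, 100, 90.0],
])
best = candidates[np.argmax([predicted_range(c) for c in candidates])]
print("suggested next design:", best)
```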

Bayrak’s model will also contain different human archetypes: for example, a person who trusts AI completely, one who does not, and those who hover somewhere in the middle. The model will combine the mathematical variables that represent decision-making with the full range of archetypes to determine strategies for dividing labor between humans and AI.
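One way to picture that combination is the sketch below, in which each archetype is reduced to a single trust parameter and a decision is handed to the AI only when its assumed competence clears the archetype’s comfort threshold. (The archetypes, competence scores, and allocation rule are illustrative assumptions only.)

```python
# Illustrative sketch: human archetypes parameterized by trust in AI, plus a toy rule
# for splitting design decisions between human and AI. All numbers are assumptions.
from dataclasses import dataclass

@dataclass
class Archetype:
    name: str
    trust_in_ai: float  # 0 = never delegates to AI, 1 = always delegates

archetypes = [
    Archetype("skeptic", 0.1),
    Archetype("moderate", 0.5),
    Archetype("full-trust", 0.9),
]

# How well the AI is assumed to handle each decision (invented scores between 0 and 1).
ai_competence = {"gear_ratio": 0.8, "motor_kw": 0.6, "battery_kwh": 0.4}

def allocate(archetype):
    """Assign a decision to the AI only if its assumed competence clears the
    threshold implied by how much this archetype trusts AI."""
    threshold = 1.0 - archetype.trust_in_ai
    return {decision: ("AI" if score >= threshold else "human")
            for decision, score in ai_competence.items()}

for a in archetypes:
    print(a.name, allocate(a))
```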

Bayrak will then test those findings experimentally. Study participants will be asked to work with artificial intelligence to design a vehicle in a virtual environment.

“We give them a design problem and tell the people which decisions they’re responsible for making and which are the responsibility of the AI,” says Bayrak. “They work together, and the goal is to collect the data and see if the computational results reflect what happens in the experimental findings. In other words, do designers act as predicted by the computational models or do those designers who don’t fully trust AI end up satisfied with the division of labor?” 

The ultimate goal, he says, is not to replace humans in the workplace. Rather, it’s to develop principles for how, and to what extent, AI should be integrated into complex design projects. And those guidelines will reflect different priorities: for example, a team may want to incorporate AI as merely an assistant, or it may want to give the technology significant responsibility. Teams may also want to prioritize quick decision-making, innovation, or job satisfaction.

“The idea is that we’ll have quantitative evidence that reveals which practices work well to achieve specific objectives and which do not,” he says. “This work could potentially shape how organizations are structured in the future.”

Like all NSF CAREER proposals, Bayrak’s project also includes an educational component. He plans to turn his model into a video game of sorts that will incentivize users to create better designs to solve problems.

“I’d like to teach people the principles of statistical analysis and data science in a more practical way,” he says. “So instead of just showing them a dataset and walking them through the analysis, the game will draw them into a problem, allow them to collect data in real time, and reveal how that data informs the models—and, ultimately, guide their decision-making.”

He envisions using the game within existing data-science-related courses and in future data analytics boot camps he plans to run for PhD students.

“So many of our students here at Lehigh need to process data in their research,” he says, “and using the game in these two- to three-week-long boot camps will teach them the basic principles of how data can be analyzed and used to draw meaningful—and potentially exciting—conclusions for their research.”