There has recently been a surge of media coverage of machine learning and artificial intelligence (AI), and you would be forgiven for thinking they’re essentially the same thing: the terms are often used interchangeably.

“They overlap, but for those of us who work in this space, they are distinct,” says professor Brian Davison, chair of the Department of Computer Science and Engineering (CSE). “AI is an umbrella term that encompasses any computing technique that seems intelligent, and one technique under that umbrella is machine learning.”

More specifically, artificial intelligence is the development of systems that are capable of mimicking human intelligence to perform tasks such as image, speech, and facial recognition; decision-making; and language translation. These systems are capable of iterating to improve their performance. Programmers and developers build them with tools that include machine learning, deep learning, neural networks, computer vision, and natural language processing—all of which allow the systems to simulate the reasoning humans use to learn, problem-solve, and make decisions. 

As a subfield of AI, machine learning employs algorithms that enable computers to learn from data, identify patterns, and then act on what they find, such as making a prediction, a decision, or a recommendation. Examples of artificial intelligence that relies on machine learning abound and include virtual assistants, navigation systems, recommendation systems (think Netflix), and chatbots like ChatGPT.
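To make that idea concrete, here is a minimal sketch, not drawn from the article, of the pattern the paragraph describes: an algorithm is fit to labeled examples, then asked for a prediction about a case it has never seen. The scikit-learn library, the viewing-habits features, and the data are all illustrative assumptions.

```python
# Toy illustration only: "learning" a pattern from labeled examples,
# then using it to make a prediction. Data and feature names are invented.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [hours watched per week, avg. minutes per session]
# paired with whether the viewer liked a recommended show (1) or not (0).
X = [[10, 45], [2, 5], [8, 50], [1, 3], [12, 60], [3, 10]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier()
model.fit(X, y)  # the "learning": fit a decision rule to the labeled examples

# A recommendation-style prediction for a new, unseen viewer.
print(model.predict([[9, 40]]))  # -> [1], i.e., likely to enjoy the show
```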

To some, however, words like “intelligent” and “learning” can be fraught.

“The machine is not learning,” says professor Larry Snyder, Harvey E. Wagner Endowed Chair in the Department of Industrial and Systems Engineering (ISE) and Lehigh’s Deputy Provost for Faculty Affairs. “What it’s doing is meant to emulate what a human brain does, but the machine is not intelligent. It’s not smart. If we had called it something like ‘algorithmic methods for prediction,’ it would sound a lot less sci-fi—and a lot more accurate.”

The distinction is important, especially in the era of ChatGPT, the large language model, or LLM, that uses deep-learning algorithms and enormous datasets to recognize, summarize, translate, predict, and generate content based on user prompts. 
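As a vastly simplified illustration of “predict and generate,” consider a toy model that merely counts which word tends to follow which, then samples continuations of a prompt. This is an assumption-laden sketch, not how ChatGPT works: real LLMs use deep neural networks trained on enormous corpora, but the predict-the-next-token loop is the same basic idea.

```python
# Toy "language model": tally which word follows which in a tiny corpus,
# then generate text by repeatedly predicting a plausible next word.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": record, for each word, every word observed to follow it.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(prompt: str, length: int = 6) -> str:
    word, output = prompt, [prompt]
    for _ in range(length):
        word = random.choice(following[word])  # the "prediction" step
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g., "the cat sat on the mat and"
```

Note that the toy has no notion of meaning; it “runs through patterns, then generates something,” as Davison puts it later, without knowing that it generated anything.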

Two months after it launched last November, ChatGPT reached 100 million monthly active users, making it the fastest-growing consumer application in history, according to a UBS study. Just as fast came the backlash and concern, in particular over issues of privacy, security, intellectual property, disinformation, bias, education, and employment. 

In May, employees of ChatGPT’s creator, OpenAI—including its CEO Sam Altman—joined journalists, lecturers, and other domain experts in signing a one-sentence “Statement of AI Risk.” It reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”  

Such a warning, however, may also be misleading. 

“There are certainly reasons to worry about ChatGPT and other technologies like facial recognition, but we are very far off from a Terminator-type event where the systems of the world unite and turn against their creators,” says Davison. “Almost all of the very cool things that we see today are about being able to do a specific thing. They aren’t self-aware, they don’t understand what they’re doing, and they can’t reason about what they’re doing. Something like ChatGPT runs through patterns, then generates something, but it doesn’t know that it generated anything. I’m a sci-fi person. I watch those movies, and I think those ideas are worth thinking about. But I don’t think they’re worth panicking about.”

Davison says that tech companies will undoubtedly respond to the criticism, and ChatGPT and other similar technologies (like Google’s Bard and Microsoft’s Bing) will only get better over time.

“We’re going to be able to talk to our systems in ways that we haven’t been able to do before, and for the most part, I think that will be an advantage, and an improvement over the way the world has been,” he says.

Improving the world, of course, is at the heart of the research ethos at Lehigh. Machine learning has been a tool of researchers here for decades and is a key focus area in Lehigh’s Institute for Data, Intelligent Systems, and Computation (I-DISC). With advances in computational techniques and computer hardware, the potential for both fundamental discoveries in machine learning and new applications of it is seemingly limitless. The projects showcased here are just a sample of the range of innovative work on campus, but they reflect the moment we’re in: a world increasingly powered by AI, one that demands a more holistic approach to ensure a future that serves and protects all users.

Diving deeper into artificial intelligence

Exposing Fakes: Are you sure what you are seeing is real?

Decoding Disease: Using AI to identify the features associated with healthy and diseased tissue

Protecting Diversity: Using machine learning to preserve and revive native languages

Clouded Judgment

Why did so many of the experts who signed the “Statement of AI Risk” call their life’s work an “existential threat”? In part, it may be because they’ve released something into the world without fully understanding how it works.
