Like it or not, social media has become the new mall for kids. It’s where they want to be, and it’s a place they can easily go—often with no guidance, no oversight, and no guardrails. And when the content gets ugly or confusing or weird, it can be tough for them to know what to do.
Dominic DiFranzo, an assistant professor of computer science and engineering in Lehigh University’s P.C. Rossin College of Engineering and Applied Science, has devoted his research to helping kids better navigate the perils of social media. He and his team have recently received two grants from the National Science Foundation to develop artificial intelligence tools and techniques that will ultimately help youngsters become savvier—and healthier—digital citizens.
Both grants build upon a digital literacy platform called Social Media TestDrive that DiFranzo developed in collaboration with Cornell University. “The platform teaches kids how to do things like spot cyberbullying, recognize fake news, and create passwords,” he says. “It’s a social media simulation system, so it looks and feels like an actual site, but it’s not real. Kids can practice what they learn and make mistakes in a safe environment.”
The team launched the platform in 2019 and has since partnered with Common Sense Media to adapt Social Media TestDrive into the organization’s K-12 digital literacy curriculum.
“Social Media TestDrive is now in middle schools across the country, and more than 1 million students have used it so far,” he says. “So it’s been a really successful project, but we’ve been thinking of ways that we can leverage AI to scale it up and make it even more effective.”
The platform has 12 modules, each exploring a specific topic—cyberbullying, for instance—and each following the same four steps. The first step introduces and illustrates key terms, defining the concept of cyberbullying, for example. The second step is a guided activity; in this case, students are shown cyberbullying posts and taught strategies for responding to them.
“They’ll go through those different strategies, and do things like write an encouraging message to the victim, respond to the bully, or flag the content,” says DiFranzo.
The third step is called “free play,” and it’s where students go into the platform’s social media simulation on their own. They’re free to post, comment, and interact as they would on a real feed. But embedded in the feed are examples of cyberbullying. The hope, says DiFranzo, is that the students will notice those messages, and respond with what they’ve learned.
The fourth and final step is a reflective phase.
“That’s where they’re asked, ‘Hey, did you notice those three cyberbullying posts during the free play session?’” says DiFranzo. “And they’re asked a lot of open-ended questions, like, ‘How did you react? What should you have done? What might you do in the future? If you saw this in the real world, what could you do?’ The point is to help them reflect on what they saw and what they did, and try to apply those lessons to their own lived experiences.”
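Taken together, the four steps suggest a simple underlying structure. As a rough sketch only (Social Media TestDrive’s actual internals aren’t public, so every name and field below is hypothetical), a module might be organized like this:

    from dataclasses import dataclass

    # Hypothetical sketch of the four-step module structure described
    # above. TestDrive's actual internals aren't public; all names here
    # are illustrative.

    @dataclass
    class Post:
        author: str
        text: str
        is_cyberbullying: bool = False  # seeded example students should catch

    @dataclass
    class Module:
        topic: str                       # e.g., "cyberbullying"
        key_terms: dict[str, str]        # step 1: concepts and definitions
        guided_posts: list[Post]         # step 2: posts for the guided activity
        free_play_feed: list[Post]       # step 3: simulated feed with seeded examples
        reflection_questions: list[str]  # step 4: open-ended prompts

    cyberbullying = Module(
        topic="cyberbullying",
        key_terms={"cyberbullying": "Repeated, intentional harm inflicted online"},
        guided_posts=[Post("user42", "Nobody likes you. Just quit.", True)],
        free_play_feed=[
            Post("friend01", "Check out my new puppy!"),
            Post("user42", "You're such a loser. Delete your account.", True),
        ],
        reflection_questions=[
            "Did you notice the cyberbullying posts during free play?",
            "How did you react, and what might you do in the future?",
        ],
    )

    # The reflection step can point back to the seeded posts from free play.
    seeded = [p for p in cyberbullying.free_play_feed if p.is_cyberbullying]
    print(f"{len(seeded)} seeded cyberbullying post(s) in the free-play feed.")

Seeding the free-play feed this way is what lets the reflection step ask students about the exact posts they encountered on their own.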
DiFranzo will use one of the grants to leverage conversational AI tools like ChatGPT to help students stand up against bullying on social media. Such technologies will provide instant feedback and make the simulations on Social Media TestDrive more interactive. For example, if a student clicks on a troubling post, the system might immediately respond, “Hey, something doesn’t sound right here—what do you think you should do? Would you like to go through some scenarios of what you could say?”
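The article doesn’t detail how that instant feedback would be wired up, but a minimal sketch is straightforward. The version below uses the OpenAI Python SDK as a stand-in for “tools like ChatGPT”; the model choice, trigger logic, and prompt text are all assumptions, not the team’s design:

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical coaching prompt; not the team's actual design.
    SYSTEM_PROMPT = (
        "You are a supportive digital-literacy coach inside a social media "
        "simulation for middle schoolers. Given a post a student clicked, "
        "briefly note what seems wrong, ask what they think they should do, "
        "and offer to walk through possible responses. Use 2-3 sentences."
    )

    def feedback_for_clicked_post(post_text: str) -> str:
        """Return instant, conversational feedback on a troubling post."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": f"The student clicked this post: {post_text!r}"},
            ],
        )
        return response.choices[0].message.content

    print(feedback_for_clicked_post("You're such a loser. Delete your account."))

In a real deployment the reply would surface inside the simulation’s interface rather than a console, and its phrasing would be shaped by what students themselves say they are comfortable with, which is the focus of the co-design work described next.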
A key part of the project, he says, is working with students to better understand their comfort level when it comes to conversing with AI. Members of the team from both Pennsylvania State University and Cornell University are developing a method in which they interview groups of teens and record how they respond to various designs and scenarios.
“The students are going to be our co-designers,” he says. “This won’t be us saying, ‘The system needs to do X, Y, and Z,’ because at the end of the day, they’re the experts on what they’re experiencing. So we want to know, what are the scenarios they’re seeing in real life on their social media feeds, and what are their typical responses? We want the platform to reflect their reality.”
Working with the students, DiFranzo and his team will design interventions that augment the Social Media TestDrive platform and further teach students to be proactive when it comes to cyberbullying—to be “upstanders.”
“Cyberbullying is a huge public health crisis,” he says. “It’s not just kids saying bad words to each other on the playground anymore. It affects both their mental and physical health. One way we can help is through bystander intervention—people who see something, and then do something in response. We want people to know they’re not alone, that there are people out there who care about them and who are creating pro-social communities where they establish new norms that make it clear cyberbullying is not okay.”
The second grant will focus on digital literacy more broadly—how to confront risks like online aggression, privacy violations, phishing, and scams—using an innovative AI-based conversational intervention called Social Media Co-Pilot. The tool will simulate more complicated scenarios within Social Media TestDrive, in part by introducing characters—the bully, for example, or the victim of a scam—that students can interact with to determine the best course of action.
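The article doesn’t say how those characters will be implemented, but persona-style prompting is one plausible mechanism. In this hedged sketch, a “character” is nothing more than a persona-specific system prompt plus the running conversation history; the persona names, prompt text, and helper functions are illustrative assumptions:

    from openai import OpenAI

    client = OpenAI()

    # Hypothetical persona prompts; the Co-Pilot's actual characters and
    # safeguards are not described in the article.
    PERSONAS = {
        "scammer": (
            "Role-play a scammer sending a too-good-to-be-true prize message "
            "so a teen can practice spotting red flags. Stay age-appropriate "
            "and never ask for real personal information."
        ),
        "bully": (
            "Role-play a mild online bully so a student can practice "
            "upstander responses. Stay age-appropriate; de-escalate if the "
            "student responds constructively."
        ),
    }

    def start_roleplay(character: str) -> list[dict]:
        """Begin a conversation with the chosen character."""
        return [{"role": "system", "content": PERSONAS[character]}]

    def student_turn(history: list[dict], message: str) -> str:
        """Send the student's message and append the character's reply."""
        history.append({"role": "user", "content": message})
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        text = reply.choices[0].message.content
        history.append({"role": "assistant", "content": text})
        return text

    chat = start_roleplay("scammer")
    print(student_turn(chat, "Congrats on my prize? I never entered anything."))

Because the full history is resent on every turn, the character stays in persona across the exchange, which is what lets a student probe the scenario until they settle on the best course of action.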
“And it will give students advice as they go through the simulations, similar to how an educator might offer real-time guidance in the classroom,” says DiFranzo.
Collaborators at Cornell University are building the tool, while the Lehigh team will design it and integrate it into Social Media TestDrive. After that, they’ll incorporate extensive user feedback from teens and educators to iterate on their design. And while it’s a long way off, DiFranzo says the ultimate goal is to translate the technology into the real world in the form of an app or browser extension that would operate within a user’s actual social media. For example, he says, if a student is scrolling through Instagram, a bot could pop up and say, “Hey, the information in this post sounds sketchy. Do you want to role-play and talk about what to do?”
“It’s one thing to have an educational intervention in the classroom, but a student has to be able to remember that information when cyberbullying or misinformation occurs in the real world,” says DiFranzo. “The goal is to have this educational intervention happen instantly, as kids are experiencing their own social media. It will be like having a little angel on your shoulder, guiding you. We’re exploring how we might test, build, and prototype a tool like this, first within Social Media TestDrive, but with the goal of extending it out into real feeds.”
There’s a long road of R&D ahead, but DiFranzo is unfazed. He knows, through the success of Social Media TestDrive, that the work he’s doing is already having an impact. It’s giving kids the skills to better navigate their digital worlds.
“As researchers, our work often ends up in a publication where a few people may read it, but it doesn’t translate into the field. My work is focused on building the next generation of responsible social media users and more empathic online communities,” he says. “To see that happening, and to be a part of it, has been truly rewarding.”