The New View
How an Ursinus lab is threading machine learning and the human element.
Bill Mongan was feeling uncertain.
He had been called upon by colleagues in Drexel University’s Center for Functional Fabrics who needed help solving a problem he thought was beyond his expertise. They were attempting to decipher the data gathered by a piece of clothing woven with powerful technology that could detect concerning biological markers in women with high-risk pregnancies, or in babies susceptible to sleep apnea.
Using radio frequency identification (RFID) technology and thread that functions as a minuscule antenna, the researchers could detect a wireless signal that shifted with each breath or contraction. But the signal was full of noise, an illegible mess of information filled with artifacts created by the surrounding environment.
Enter Mongan, then an associate teaching professor of computer science at Drexel, now an associate professor at Ursinus and founder of the Human-Machine Intelligent Systems Lab. When he walked into that room, in 2013, he joined an electrical engineer, a sociologist, a fashion designer, and an OB-GYN who was still in scrubs after delivering a baby earlier that day.
“I thought, ‘I don’t belong here. This is not a place for me,’” Mongan said. “I don’t know anything about babies. I took one biology class as a kid.”
Mongan thought he would introduce himself and never see those people again. Instead, his skills allowed the team to understand the meaning of the data they were generating. By applying signal processing and unsupervised machine learning—advanced mathematical methods that use algorithms to identify otherwise invisible patterns in a data set—he helped his colleagues clean up the noise, find the signal, and identify the respiratory rate the fabric was designed to track.
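The idea behind that kind of analysis can be sketched in miniature. The toy example below is a hypothetical stand-in, not the belly band's actual pipeline: it buries a simulated breathing rhythm in random noise, then uses a basic Fourier-style frequency scan, one of the simplest signal-processing tools, to recover the breathing rate that the noise hides from the naked eye.

```python
import math
import random

def estimate_respiratory_rate(samples, sample_rate_hz):
    """Estimate the dominant breathing rate (breaths per minute) in a noisy signal.

    Minimal sketch: remove the mean, then scan a plausible respiratory band
    (0.1-0.7 Hz, or 6-42 breaths per minute) with a discrete Fourier
    transform and keep the strongest frequency bin.
    """
    n = len(samples)
    mean = sum(samples) / n
    detrended = [s - mean for s in samples]
    best_freq, best_power = 0.0, -1.0
    step = sample_rate_hz / n  # the DFT's natural frequency resolution
    f = 0.1
    while f <= 0.7:
        re = sum(x * math.cos(2 * math.pi * f * i / sample_rate_hz)
                 for i, x in enumerate(detrended))
        im = sum(x * math.sin(2 * math.pi * f * i / sample_rate_hz)
                 for i, x in enumerate(detrended))
        power = re * re + im * im
        if power > best_power:
            best_freq, best_power = f, power
        f += step
    return best_freq * 60.0

# Synthetic demo: a 0.25 Hz breathing rhythm (15 breaths/min) buried in noise.
random.seed(1)
rate = 10.0  # samples per second
signal = [math.sin(2 * math.pi * 0.25 * i / rate) + random.gauss(0, 0.8)
          for i in range(600)]
print(round(estimate_respiratory_rate(signal, rate), 1))  # prints 15.0
```

Even with noise nearly as strong as the breathing signal itself, the periodic component stands out sharply in the frequency domain, which is what makes this family of methods so effective for physiological monitoring.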
“Thank God you’re here,” the doctor told him, “because practitioners don’t know how to do any of that stuff.”
For Mongan, the experience was transformative. Aided by his collaborators’ confidence, he expanded his comfort zone to tackle a challenge that had seemed to exceed the scope of his training. He stayed on the project, known as the belly band, and his research eventually became the subject of his Ph.D. in computer engineering. Years later, he’s carried forward the lessons he learned from it: the power of pushing one’s boundaries and the critical role of interdisciplinary breadth in innovation.
Now, he is working alongside Ursinus students and faculty to explore the potential for machine learning and artificial intelligence to improve human well-being, including an intercollegiate effort to further study the belly band with those same Drexel colleagues. His lab has supported research with implications for health care, radar tracking, digital privacy, astrophysics, and education about machine learning itself—a range of fields as varied as the liberal arts will allow. It embodies an idea that first struck Mongan in that meeting a decade ago.
“This is bigger than any one of us,” he said. “This wouldn’t exist without all of us.”
Two Sides of the Coin
The Human-Machine Intelligent Systems Lab isn’t a physical space. Perhaps appropriately, given that its focus is on applying computer intelligence to real-world problems, it exists somewhere in the ether. To enter, all it takes is a computer equipped with a state-of-the-art graphics processing unit.
Mongan launched the lab after coming to Ursinus in 2020, hoping to bring research—or, as he thinks of it, “discovery” or “inquiry”—to the everyday student experience. “It felt like planting the seeds and reinvesting in the next generation,” Mongan said.
At Ursinus, he wanted to create a space for students to engage in their disparate fields with a shared language rooted in his own. The lab isn’t just for computer science students, though; even taking the college’s artificial intelligence course isn’t required. Students need only be interested in exploring how AI and machine learning affect people, and eager to find ways to improve that relationship.
The lab’s name is intentionally broad, suggesting that its work uses mathematical modeling to consider intelligent computing systems from a human-centric perspective. Its projects cohere around two goals that should go hand in hand but are too often treated as distinct concepts. One half of the lab’s vision is focused on applying machine learning to biomedical research and the internet of things—the growing conglomeration of devices containing sensors that communicate data to each other and the cloud. The other half is about ensuring the security and privacy that can keep AI horror stories from coming to fruition.
“We believe those two need to develop together,” Mongan said. “They need to happen in the same room, because they’re not going to morph together by accident.”
Cutting-Edge Concerns
Mongan recently decided to sign up for Facebook and was told, much to his surprise, that he was not a real person. Somehow, the algorithms working behind the scenes to keep the social network from being overrun by bots and scammers had determined that he was suspicious.
“That’s clearly a mistake that’s easy to fix,” Mongan said, “but there’s no person behind that decision that you can talk to and impeach.”
In his case, the harm was minimal, but similar flaws in AI systems have done significant damage. Unchecked AI (like racist chatbots, gender bias in recruiting, and discriminatory bank lending) has a tendency to exacerbate flaws in data sets that reflect humanity’s own historical faults and biases. Mongan and the members of his lab, which by now includes more than a dozen students and several faculty members, want machines to act with humans in mind—and humans to do the inverse.
“There’s this tool working either for or against people, but they don’t have the knowledge to communicate about it and make those decisions,” said Kevin Hoffman ’23, who came to Ursinus focused on biology but eventually found his way to a programming course Mongan taught.
Hoffman’s interest in AI and machine learning grew when Mongan launched the lab two years ago, creating a collaborative research setting for students to explore the evolving field. As a self-described “cautious optimist” about AI, he wants to orient his career toward “explainable AI” in the long run, helping people who don’t have extensive experience with these systems gain a common understanding of their uses and misuses so they can better navigate the world around them.
People often ask Mongan if AI is going to rise up and take us over. While he doesn’t foresee anything like that, he said he’s more concerned about a phenomenon that he sees already developing: the blind acceptance of what AI tells us and the risk that such passivity can be exploited.
“We want to be on the cutting edge of developing these systems, and we also want to be the people who are developing and deploying them responsibly,” Mongan said. “The people on that leading edge need to have that mindset or else it’s going to run away as a technology and you’re not going to get that toothpaste back in the tube.”
Across the Universe
In many ways, a liberal arts college is an environment well suited to Mongan’s mission. It offers an array of expertise that crosses academic boundaries and the type of critical thinking that can ensure research is ethically sound.
Kacey La ’25 spent his summer working with Mongan on a project that seeks to show how a neural network can be used in remote health care monitoring. By analyzing WiFi signals bouncing between off-the-shelf wireless routers placed in a room with an individual, the network can create an image of that person’s positioning within the room—and, theoretically, detect an elderly person’s dangerous fall or identify quirks in someone’s gait associated with neurological or respiratory disease. The first question La asked Mongan and Christopher Tralie, an assistant professor of math and computer science, was about the privacy concerns of tracking a person’s pose in a room, Mongan said.
“Admittedly, I wasn’t sure if I wanted to do this project,” La said. “It’s a Pandora’s box—you kind of don’t want to open it. But I’m hoping it can be used for good.”
To Mongan, La’s uncertainty presented “a very Ursinus question that a lot of people wouldn’t ask.” That type of open-minded view of AI and machine learning is how the lab balances human well-being and responsible development.
Leslie New, an assistant professor of statistics, worked with Tralie, La, and another student last summer on a project that sought to use machine learning to automatically identify bowhead whales, a 60-foot-long endangered species that is almost entirely black and therefore difficult to spot and protect. She said machine learning can’t be used for good unless those using it are thinking across disciplines.
“If you develop something in isolation,” New said, “you cause harm.”
Mongan’s lab intentionally avoids that isolation, as the diverse work associated with it demonstrates. To Mongan, machine learning offers a lens through which to view the world around us and can be applied to biology and the humanities all the same.
“The language that this provides lets you tackle the whole universe,” he said. “If you can model it, there are techniques that can inform you about it.”
The Human in the Loop
Machine learning isn’t going anywhere. Rather than fearing these systems as a dystopian weapon like Skynet from The Terminator, New urges a more optimistic view of the potential they hold.
“If we are conscientious in our development, it will actually build better equity,” she said. “It will help people’s ability to live and find those subtle patterns that are difficult for the human eye to see but that a computer with data points can see.”
Mongan wants to educate everyday people about the influence and impact these systems can have on their lives—for better and worse. If he could pull a random person off the street and ask them about the role AI plays in their life, and if that person could describe its benefits and how to mitigate its risks, then he would know he’d done his job.
“That’s the hole in the work that’s being done broadly that we’re well-positioned to address: Getting the human in the loop, that stakeholder who stands to benefit from these advances but needs to be better informed about how they work and what they do so they can be a better advocate for themselves,” Mongan said. “I’d like to help with that.”