A team of University of Maryland students trained a dog to assess the injuries of a plastic mannequin.
But one thing separates “Spot” from other trained service animals on campus: It’s a talking, AI-powered robot.
At a demonstration hosted at the E.A. Fernandez IDEA Factory on Thursday, the robotic dog approached the mannequin and used its speaker to say, “I’m here to help. Can you tell me what happened?”
Spot is designed to use its cameras and sensors to help people in mass casualty incidents, said Derek Paley, a professor of aerospace engineering who leads the university’s team.
The technology was one of dozens of AI projects showcased at the “AI for Public Good” event at this university on Thursday. Research teams gathered in multiple buildings for the event, which brought together students and faculty from seven departments.
“What we’re trying to highlight here is the breadth of ways that AI can be used,” said Hal Daumé, director of the Artificial Intelligence Interdisciplinary Institute at Maryland.
At the Brendan Iribe Center, another team showcased an AI project focused on helping people with disabilities.
Assistant information professor Stephanie Valencia demonstrated the team’s AI-assisted tools designed to help people with aphasia, a language disorder typically caused by stroke.
During the presentation, Valencia described a joke-telling application that listens to ongoing conversations through a microphone and uses a language model to suggest relevant jokes.
Unlike other accessibility solutions that require several minutes of effort to communicate one sentence, the team’s application automates the process and minimizes the typing needed to tell a joke, she said.
Valencia added that her team hopes to eventually develop smaller models that can run locally on devices.
The AI models that currently power apps like ChatGPT run in data centers, requiring users to send their personal information off-device and putting their privacy at risk, Valencia said.
Other teams at the event said running AI models locally has the added benefit of being cheaper and better for the environment because the models would not require a data center. Still, many teams said packing a lot of computing power into a small device can be challenging.
Daumé, a computer science professor who has been working with artificial intelligence for more than 25 years, said the tech industry has little incentive to change course.
“Right now in the industry sector, there’s not a lot of pressure to make [models] small,” Daumé said. “Yes, it would help with environmental impacts, but I’m not convinced that they really care.”
Levi Burner, a computer science postdoctoral researcher who was presenting robotics research at the event, echoed Daumé’s sentiments. He said the industry’s push in the opposite direction, toward ever-larger models, may be reaching its limits.
“If you look at what OpenAI and Google are doing with these things, they’re spinning up nuclear reactors and warehouses of graphics cards,” he said. “They want to scale it up and up and up, and they’re going to hit the ceiling.”
Burner’s team demonstrated a flapping-wing drone modeled after hummingbirds and equipped with AI-stabilized cameras to prepare it for flight in real-world applications.
The team’s biology-inspired approach was partly driven by practical constraints, including the need to run artificial intelligence models that work on small devices.
Daumé remains optimistic about the university’s role in shaping the future of artificial intelligence research.
He argued that if researchers can make AI models run on less powerful devices, the models could become cheaper to run, more accessible and less damaging to the environment.
“Solving that would also have a lot of knock-on consequences that would be good for the world, frankly,” he said.