
UMIACS

2/23/23

By Maryland Today Staff 

Sometimes students need an instructional pick-me-up between violin lessons. Others can’t afford as many lessons as their talent merits, or they live in a place where violin teachers are in short supply.

A new artificial intelligence-powered system under development by a University of Maryland classical violinist and a computer scientist with expertise in robotics and computer vision could fill in those gaps.

“Our project combines the expertise of traditional violin pedagogy with artificial intelligence and machine learning technology,” said Irina Muresanu, an internationally known concert violinist and an associate professor of violin in the School of Music. “Our aim is to ultimately create software that will be able to provide guidance for all string instruments, and even other instruments.”

The system is not designed to replace human expertise, but to augment it, the researchers say.

“Our system will observe the players using vision and audio, and will analyze the playing in order to give the appropriate feedback, and also to give suggestions on what to practice,” said Cornelia Fermüller, a research scientist with the Institute for Advanced Computer Studies and the Computer Vision Laboratory.
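The audio side of that analysis can be illustrated with a minimal, hypothetical sketch (the function name and the tolerance threshold are assumptions for illustration, not the team's actual software): compare a detected pitch against the nearest equal-tempered note and report whether the player is sharp or flat.

```python
import math

A4 = 440.0  # reference tuning frequency in Hz

def pitch_feedback(freq_hz, tolerance_cents=10.0):
    """Compare a detected frequency to the nearest equal-tempered note.

    Returns (note_name, deviation_in_cents, verdict).
    """
    # Distance from A4 in semitones (12-tone equal temperament).
    semitones = 12.0 * math.log2(freq_hz / A4)
    nearest = round(semitones)
    cents = (semitones - nearest) * 100.0  # 100 cents per semitone

    # Note names cycling upward from A (octave number omitted for brevity).
    names = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
    note = names[nearest % 12]

    if abs(cents) <= tolerance_cents:
        verdict = "in tune"
    elif cents > 0:
        verdict = "sharp"
    else:
        verdict = "flat"
    return note, cents, verdict
```

A real system would first extract the fundamental frequency from the microphone signal; this sketch only shows the comparison step that turns a measured pitch into actionable feedback.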

The research is funded by a 2021 Maryland Innovation Initiative Award, as well as a Grand Challenges Team Project grant announced last week.

(Video produced by Maria Herd M.A. '19)

11/7/22

The University of Maryland Strategic Partnership: MPowering the State on Friday announced the appointment of three professors from the University of Maryland, College Park (UMCP) and three from the University of Maryland, Baltimore (UMB) as MPower Professors. The professorship recognizes, incentivizes and fosters collaborations between faculty who are working together on the most pressing issues of our time.

To be considered for the MPower Professorship, faculty must take on strategic research that would be unattainable or difficult to achieve by UMB or UMCP alone, and must embrace MPower’s mission to serve the state of Maryland and its citizens. Each professor will receive $150,000, allocated over three years, to apply to their salary or to support supplemental research activities.

“The MPower Professors have shown incredible dedication and commitment to collaboration, innovation and discovery. Their work to solve major challenges and positively impact the lives of others is bolstered by this investment,” said UMB President Bruce E. Jarrell, M.D.

“The six professors selected for this honor are each working across disciplines to address the most complex challenges facing society today, bridging research and scholarship between institutions to foster innovation that will impact citizens in Maryland, across the country and around the world,” said UMCP President Darryll J. Pines.

The 2022 MPower Professors are using the latest advancements in computer science, machine learning and augmented reality to revolutionize medical care, linguistics and neuroscience; developing enhanced understanding and treatment for a range of infections and diseases; investigating cutting-edge approaches and new materials to regenerate human tissue; and examining the relationship between agriculture, energy and water to create a safer and sustainable global food supply.

 

Philip S. Resnik

Philip S. Resnik is a professor of linguistics in the UMCP College of Arts and Humanities and holds a joint appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS). He is also an affiliate professor in the Department of Computer Science in the College of Computer, Mathematical, and Natural Sciences. Resnik's research focuses on computational modeling of language that brings together linguistic knowledge, domain expertise, and machine learning methods. His current work emphasizes applications in computational social science and scientific research questions in computational cognitive neuroscience. Resnik holds two patents and has authored or co-authored more than 100 peer-reviewed articles and conference papers. In 2020, he was named a fellow of the Association for Computational Linguistics.

-----------------------

Click below to read the full announcement with the other 2022 MPower Professors.

11/2/22

By Maria Herd M.A. ’19

 

Fitness trackers and smartwatches are widely used to monitor health, activity and exercise, but they’re pretty sedentary themselves. They stay strapped on your wrist or clipped to your clothing, even though different activities are better monitored from different places on the body—your upper body for breathing, for example, or your wrist for typing or writing.

Now, researchers at the University of Maryland are putting wearable sensors on track to do their best work—literally—with a miniature robotics system capable of traversing numerous locations on the human body.

Their device, called Calico, mimics a toy train by traveling on a cloth track that can run up and down users’ limbs and around their torso, operating independently of external guidance through the use of magnets, sensors and connectors. Their paper describing the project was recently published in the ACM Journal on Interactive, Mobile, Wearable and Ubiquitous Technologies and presented at UBICOMP, a conference on ubiquitous computing.

 

IMAGE: Closeup of the wearable sensor on a wrist

“Our device is a fast, reliable and precise personal assistant that lays the groundwork for future systems,” said Anup Sathya M.S. ’21, who led Calico’s development for his master’s thesis in human-computer interaction. Sathya is now a first-year Ph.D. student in computer science at the University of Chicago.

Most wearable workout devices are limited in the type of exercises they can monitor, but Calico is versatile. For example, it can track running on a user's arm, move to the elbow to count push-ups, to the back for planks, and then to the knee to count squats.
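That relocation idea can be sketched in a few lines. The exercise-to-location map and the rep counter below are hypothetical illustrations, not the published system's actual API: the device is modeled as a lookup from exercise to track position, plus a simple threshold-crossing counter on a motion signal.

```python
# Hypothetical sketch of Calico-style relocation: each exercise maps to the
# body location named in the article, and reps are counted as rising edges
# in an acceleration-magnitude signal.

EXERCISE_LOCATIONS = {
    "running": "upper_arm",
    "push-ups": "elbow",
    "planks": "back",
    "squats": "knee",
}

def location_for(exercise):
    """Return the track position the device should relocate to."""
    return EXERCISE_LOCATIONS[exercise]

def count_reps(accel_magnitudes, threshold=1.5):
    """Count repetitions as threshold crossings in a motion signal.

    A rep is counted each time the signal rises above `threshold`
    after having been below it (a rising-edge detector).
    """
    reps, above = 0, False
    for a in accel_magnitudes:
        if a > threshold and not above:
            reps += 1
            above = True
        elif a <= threshold:
            above = False
    return reps
```

The real device adds the hard parts this sketch omits: driving the motors along the cloth track, sensing its own position with magnets, and handling transitions between track segments.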

And unlike other devices, Calico moves quickly and accurately without getting stuck on clothing or at awkward angles. “For the first time, a wearable can traverse the user’s clothing with no restrictions to their movement,” said Huaishu Peng, an assistant professor of computer science who was Sathya’s adviser at UMD.

Peng, who also has an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS), sees a future in which mini wearable devices like Calico will seamlessly integrate with humans for interaction, actuation and sensing.

He recently took Calico in a creative direction by establishing a new collaboration with Jonathan David Martin, a lecturer in Immersive Media Design; and Adriane Fang, an associate professor at the School of Theatre, Dance, and Performance Studies.

The interdisciplinary team is combining dance, music, immersive media, robotics and wearable technology into a novel and compelling series of interactive dance performances that are choreographed in real time through Calico.

First, Peng’s research group programmed Calico to instruct a dancer to execute specific movements using motion and light. Then, using their smartphones, the audience gets to collectively vote on how Calico should instruct the dancer.

The project is being funded with a $15,000 award from the Arts for All initiative, which leverages the combined power of the arts, technology and social justice to make the University of Maryland a leader in addressing grand challenges.

“The idea is to explore the dynamics and connections between human plus robot and performer plus audience,” said Peng. “In this instance, Calico will act as the ‘mediator,’ broadening art and tech participation and understanding.”

Calico’s original creators include Jiasheng Li, a second-year Ph.D. student in computer science; Ge Gao, an assistant professor in the College of Information Studies with an appointment in UMIACS; and Tauhidur Rahman, an assistant professor in data science at the University of California, San Diego.

VIDEO: Calico: Relocatable On-cloth Wearables with Fast, Reliable, and Precise Locomotion

10/28/21

There are myriad benefits to learning a new language—from conversing with people from other backgrounds, to easing international travel, to advancing your career. But acquiring a new language as an adult is not always easy, particularly if a person is trying to distinguish phonetic sounds not often heard in their native language.

With funding from the National Science Foundation (NSF), researchers in the Computational Linguistics and Information Processing (CLIP) Laboratory at the University of Maryland are exploring this phenomenon, using computational modeling to investigate learning mechanisms that can help listeners adapt their speech perception of a new language.

Naomi Feldman, an associate professor of linguistics with an appointment in the University of Maryland Institute for Advanced Computer Studies, is principal investigator of the $496K grant.

Feldman is overseeing five students in the CLIP Lab who are heavily involved in the project, including Craig Thorburn, a fourth-year doctoral student in linguistics, and Saahiti Potluri, an undergraduate double majoring in applied mathematics and finance.

For their initial work, the researchers are taking a closer look at the specific difficulties native Japanese speakers face when learning English.

As an adult, it is often difficult to alter the speech categories that people have experienced since childhood, particularly as it relates to non-native or unfamiliar speech sounds. For example, native English speakers can easily distinguish between the “r” and “l” sound, which native Japanese speakers are not accustomed to.

Feldman’s research team is developing two types of computational models based on adult perceptual learning data: probabilistic cue weighting models, which are designed to capture fast, trial-by-trial changes in listeners’ reliance on different parts of the speech signal; and reinforcement learning models, which are designed to capture longer term, implicit perceptual learning of speech sounds. Thorburn and Potluri are working on the latter models.
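The reinforcement learning side can be illustrated with a minimal, hypothetical sketch (the scalar cue, the prototype representation, and the update rule are illustrative assumptions, not the lab's actual models): a learner starts with two nearly identical category prototypes, guesses a category for each sound, and uses reward to pull the chosen prototype toward the cue, or push it away after an error.

```python
import random

def classify(prototypes, cue):
    """Pick the category whose prototype is closest to the acoustic cue."""
    return min((abs(cue - p), i) for i, p in enumerate(prototypes))[1]

def train_learner(trials, lr=0.2, seed=0):
    """Reward-driven learning of a two-category sound contrast.

    Each trial is (cue, label): `cue` is a scalar acoustic measurement
    (think of a normalized third-formant value, the cue that separates
    English /r/ from /l/) and `label` is the correct category (0 or 1).
    """
    rng = random.Random(seed)
    # An untrained listener: two near-identical category prototypes.
    prototypes = [rng.uniform(-0.1, 0.1) for _ in range(2)]
    for cue, label in trials:
        guess = classify(prototypes, cue)
        signed_reward = 1.0 if guess == label else -1.0
        # A rewarded response pulls its prototype toward the cue;
        # an unrewarded response pushes it away.
        prototypes[guess] += lr * signed_reward * (cue - prototypes[guess])
    return prototypes
```

After enough rewarded trials the two prototypes separate, so cues on either side of the boundary are classified reliably, which is a toy version of the plasticity in speech categories the project is probing.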

With guidance from Feldman, the two researchers are exploring a reward-based mechanism that research suggests is particularly effective in helping adults acquire difficult sound contrasts when learning a second language.

“We're trying to uncover the precise mechanism that makes learning so effective in this paradigm,” Thorburn says. “This appears to be a situation in which people are able to change what they learned as an infant, something we refer to as having plasticity—the ability of the brain to adapt—in one’s representations. If we can pin down what is happening in this experiment, then we might be able to understand what causes plasticity more generally.”

Potluri says that the powerful computational resources provided by UMIACS are critical to the project, noting that the model they are working with goes through hundreds of audio clips and “learns” over thousands of trials.

“The lab's servers can run these experiments in a matter of hours, whereas with less computational power it would literally take days to run a single experiment,” she says. “After running the model, we also need to analyze the massive datasets generated by the trials, and they are easier to store and manipulate—without memory concerns—on the lab's servers.”

Potluri says it was her interest in learning languages and a desire to get involved in linguistics research that drew her to apply to work in CLIP as an undergraduate. Despite having very little previous coursework in the subject, she and Feldman found that the NSF-funded project was a great area for her to exercise her knowledge in math while gaining new skills.

Feldman says the complementary skill sets of Thorburn and Potluri make them a good team to assist on the project.

“Craig and Saahiti have interests that are very interdisciplinary—spanning everything from language science to computer science to applied math—which makes them a perfect fit for research that uses computational models to study how people learn language,” she says. “Their collaborative work has already proven to be very impressive, and I am glad to have them on our team.”

—Story by Melissa Brachfeld
