
linguistics

11/15/21

By Jessica Weiss ’05

Anyone who’s ever tried having a conversation with a 1-year-old knows it can feel like very little is getting through. But according to linguistics Professor Jeffrey Lidz, there’s plenty going on behind the adorable babble and occasional slobbering.

For the past two decades, Lidz has focused on behind-the-scenes action in the youngest human minds, seeking to discern how and when infants begin to understand things like sentence structure or the difference between a noun and a verb.

Lidz co-authored—with Laurel Perkins Ph.D. ’19, now an assistant professor at UCLA—a groundbreaking study published last month in Proceedings of the National Academy of Sciences showing that syntax, or the arrangement of words and phrases to create well-formed sentences, actively develops during the second year of life. According to the researchers, 18-month-olds have developed syntax capacities on par with adults.

We recently spoke with Lidz, who is also the director of the University of Maryland Project on Children's Language Learning and one of the founders of the Infant and Child Studies Consortium, about his latest discovery and what it’s like to conduct research into the mysteries of baby talk.

When did you begin researching language in kids?
In the last year of my Ph.D., I took a course on language acquisition, which I thought was really cool. I had some questions about how that research worked, and I managed to get a postdoc at the University of Pennsylvania in 1997. So, I started researching syntax and semantics in 3- and 4-year-olds, and the thing I kept finding was that for almost every phenomenon I looked at, children had a really sophisticated knowledge of their language—even though they didn’t speak super fluently. So, the question that was driving me was: How did they get there? Language is a really, really complicated mental construct. It almost seems like an impossible task to figure out how children acquire all that knowledge.

You’ve been at the forefront of making discoveries about syntactic abilities in young children. How does your recent paper fit into the trajectory of your research?
Before the early 2000s, nobody was really studying the syntax of children between 1 and 2, because it was assumed there wasn’t anything to study: kids that age don’t talk much. But sometimes what kids say is a reflection of what they know, and sometimes what they say is much less than what they know because it’s hard to coordinate a long expression. By 18 months, kids understand that sentences are hierarchically structured, even though you can’t see that in their productions. We found that kids know about grammatical categories, like the difference between nouns and verbs, between 16 and 18 months. This most recent paper is about a central feature of language structure, which is the ability to create dependencies between words in a sentence that are far away from each other. Discovering that kids can do those computations by the time they’re 18 months is new and exciting.

How do you manage to get babies to cooperate for research studies?
It’s fun—and it’s a challenge. We try and make the lab environment an interesting place to be, and we spend a fair amount of time at the beginning of a study playing with toys, making the children feel comfortable and getting them accustomed to the environment so that when we want to take them into the room to do the study, they’re happy to go with us and interested to see what’s there. We want to make the lab feel like a mix between a dentist’s office and a preschool. In the dentist’s office everything works the way it’s supposed to work and you feel like you’re in the hands of total professionals. But we also want it to be a place that’s fun, where the kid feels happy and so do the parents. If the kids are not feeling comfortable and safe, the experiments are just not gonna work.

Are there things parents can do to help their own kids’ language development?
When my kids were little, we would play with them and figure out ways to probe what they understood. I think playing with your kids linguistically is a fun thing to do, like by seeing how they react when something is ungrammatical. You’ll learn a lot about how sophisticated their knowledge is. But I don’t think parents need to worry, generally, about language development. Children are aggressive learners, and they’re motivated to learn language because they're trying to be understood and they’re trying to understand the world around them.

-------------------

Photo: Linguistics Professor Jeff Lidz talks about adjectives with an aspiring child scientist at Family Science Days at the American Association for the Advancement of Science annual meeting in 2016. Lidz is co-author of a groundbreaking new study about how children learn syntax in language.

Photo courtesy of Maryland Language Science Center

10/28/21

There are myriad benefits to learning a new language—from conversing with people from other backgrounds, to easing international travel, to advancing your career. But acquiring a new language as an adult is not always easy, particularly if a person is trying to distinguish phonetic sounds not often heard in their native language.

With funding from the National Science Foundation (NSF), researchers in the Computational Linguistics and Information Processing (CLIP) Laboratory at the University of Maryland are exploring this phenomenon, using computational modeling to investigate learning mechanisms that can help listeners adapt their speech perception of a new language.

Naomi Feldman, an associate professor of linguistics with an appointment in the University of Maryland Institute for Advanced Computer Studies, is principal investigator of the $496K grant.

Feldman is overseeing five students in the CLIP Lab who are heavily involved in the project, including Craig Thorburn, a fourth-year doctoral student in linguistics, and Saahiti Potluri, an undergraduate double majoring in applied mathematics and finance.

For their initial work, the researchers are taking a closer look at the specific difficulties native Japanese speakers face when learning English.

As adults, people often find it difficult to alter the speech categories they have experienced since childhood, particularly when it comes to non-native or unfamiliar speech sounds. For example, native English speakers can easily distinguish between the “r” and “l” sounds, a distinction that native Japanese speakers are not accustomed to making.

Feldman’s research team is developing two types of computational models based on adult perceptual learning data: probabilistic cue weighting models, which are designed to capture fast, trial-by-trial changes in listeners’ reliance on different parts of the speech signal; and reinforcement learning models, which are designed to capture longer term, implicit perceptual learning of speech sounds. Thorburn and Potluri are working on the latter models.
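To make the first of these ideas concrete, here is a deliberately simplified, non-probabilistic sketch of the cue-reweighting intuition. It is not the lab’s actual model: the two cues (labeled F3 and F2 onset, the cues most often discussed for the English “r”/“l” contrast), the category means, the noise levels and the learning rate are all illustrative assumptions.

```python
# Illustrative toy of trial-by-trial cue reweighting (not the CLIP Lab's model):
# the listener's reliance on two acoustic cues shifts toward whichever cue
# proves more informative on each trial.
import numpy as np

rng = np.random.default_rng(0)

# Assumed category means on two normalized cues [F3 onset, F2 onset];
# in this toy world, F3 separates "r" from "l" well and F2 only weakly.
CUE_MEANS = {"r": np.array([-1.0, -0.2]),
             "l": np.array([+1.0, +0.2])}
CUE_SD = np.array([0.5, 0.8])

weights = np.array([0.5, 0.5])   # initial reliance on [F3, F2]
LR = 0.05                        # reweighting rate (arbitrary)

def classify(token, w):
    """Pick the category whose mean is closest under the current cue weights."""
    scores = {c: -np.sum(w * (token - m) ** 2) for c, m in CUE_MEANS.items()}
    return max(scores, key=scores.get)

correct = 0
for trial in range(500):
    label = rng.choice(["r", "l"])
    token = CUE_MEANS[label] + rng.normal(0.0, CUE_SD)    # noisy token
    if classify(token, weights) == label:
        correct += 1
    # Per-cue informativeness: how much closer the token is to the correct
    # category than to the competitor, on each cue separately.
    other = "l" if label == "r" else "r"
    evidence = np.abs(token - CUE_MEANS[other]) - np.abs(token - CUE_MEANS[label])
    weights = np.clip(weights + LR * evidence, 1e-3, None)
    weights /= weights.sum()                              # keep weights normalized

print("Learned cue weights [F3, F2]:", np.round(weights, 2))
print("Accuracy over 500 trials:", round(correct / 500, 2))
```

Run on these made-up numbers, the weight on the more reliable cue grows over trials, which is the qualitative behavior the cue-weighting models are designed to capture.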

With guidance from Feldman, the two researchers are exploring a reward-based mechanism that research suggests is particularly effective in helping adults acquire difficult sound contrasts when learning a second language.
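The reward-based idea can also be seen in miniature. The sketch below is an assumption-laden toy, not the team’s model: a learner hears tokens along a discretized acoustic continuum, guesses “r” or “l”, and receives only a reward signal when it is right, so the contrast is shaped by reinforcement rather than explicit labels.

```python
# Toy reward-driven (reinforcement-style) category learning, purely for
# illustration: bin counts, rates and the ground truth are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)

N_BINS = 10                       # discretized acoustic (e.g., F3) continuum
Q = np.zeros((N_BINS, 2))         # action values: column 0 = "r", column 1 = "l"
ALPHA, EPSILON = 0.1, 0.1         # learning rate, exploration rate

def true_label(bin_idx):
    """Ground truth for the toy world: low-F3 tokens are 'r', high-F3 are 'l'."""
    return 0 if bin_idx < N_BINS // 2 else 1

for trial in range(2000):
    s = int(rng.integers(N_BINS))                 # hear a token
    if rng.random() < EPSILON:                    # occasionally explore
        a = int(rng.integers(2))
    else:                                         # otherwise pick the current best guess
        a = int(np.argmax(Q[s]))
    reward = 1.0 if a == true_label(s) else 0.0   # reward stands in for explicit feedback
    Q[s, a] += ALPHA * (reward - Q[s, a])         # delta-rule update

accuracy = np.mean([np.argmax(Q[s]) == true_label(s) for s in range(N_BINS)])
print(f"Post-learning categorization accuracy across the continuum: {accuracy:.0%}")
```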

“We’re trying to uncover the precise mechanism that makes learning so effective in this paradigm,” Thorburn says. “This appears to be a situation in which people are able to change what they learned as an infant, something we refer to as having plasticity—the ability of the brain to adapt—in one’s representations. If we can pin down what is happening in this experiment, then we might be able to understand what causes plasticity more generally.”

Potluri says that the powerful computational resources provided by UMIACS are critical to the project, noting that the model they are working with goes through hundreds of audio clips and “learns” over thousands of trials.

“The lab’s servers can run these experiments in a matter of hours, whereas with less computational power it would literally take days to run a single experiment,” she says. “After running the model, we also need to analyze the massive datasets generated by the trials, and they are easier to store and manipulate—without worrying about memory issues—on the lab’s servers.”

Potluri says it was her interest in learning languages and a desire to get involved in linguistics research that drew her to apply to work in CLIP as an undergraduate. Though she had very little previous coursework in the subject, she and Feldman found that the NSF-funded project was a great area for her to exercise her knowledge of math while gaining new skills.

Feldman says the complementary skill sets of Thorburn and Potluri make them a good team to assist on the project.

“Craig and Saahiti have interests that are very interdisciplinary—spanning everything from language science to computer science to applied math—which makes them a perfect fit for research that uses computational models to study how people learn language,” she says. “Their collaborative work has already proven to be very impressive, and I am glad to have them on our team.”

—Story by Melissa Brachfeld

10/8/21

By Jessica Weiss ’05

Starting next summer, University of Maryland language scholars will have a new place to conduct their research and a new source of participants for their studies: the Planet Word museum in downtown Washington, D.C. and its visitors.

A new $440,000 grant from the National Science Foundation funds a partnership among UMD, Howard University, Gallaudet University and Planet Word to advance research and public understanding of the science of language.

For example, experiments may look at what non-signing people believe about what makes various American Sign Language signs hard or easy to learn, why it’s easier to understand the speech of people we know rather than strangers, or whether we think differently when reading a text message versus formal writing.

The experiments will be interactive and fun, said Charlotte Vaughn, an assistant research professor in UMD’s Maryland Language Science Center who is leading the project.

“Language is already the topic of conversation at the museum, so there’s an unparalleled opportunity for our studies and activities about language science to be a seamless and memorable part of visitors’ experience,” she said.

Planet Word, which opened in late 2020 and is housed in the historic Franklin School building, aims to show the depth, breadth and fun of words, language and reading. Faculty from UMD’s Maryland Language Science Center, the Department of Linguistics, the Department of Hearing and Speech Sciences and the Department of English were involved in shaping the museum’s vision and programming. The museum’s founder, Ann Friedman, has long hoped it would also be a space for research and discovery.

In addition to Vaughn, the lead project team includes Associate Professor in the Department of Hearing and Speech Sciences Yi Ting Huang and postdoc affiliate in the Department of Hearing and Speech Sciences Julie Cohen at UMD, as well as Assistant Professor of Psychology at Howard University Patrick Plummer and Assistant Professor of Linguistics at Gallaudet University Deanna Gagne. Other personnel include Jan Edwards and Rochelle Newman, both professors in the Department of Hearing and Speech Sciences at UMD; Colin Phillips, professor in the Department of Linguistics at UMD; and Laura Wagner, professor in the Department of Psychology at the Ohio State University.

Vaughn said the opportunity to partner with a historically Black university and the world's only liberal arts university for Deaf and hard-of-hearing people will allow for significant progress on issues central to the field.

“Engaging the diverse Planet Word audience in our activities will make our research stronger, more representative, and more widely accessible,” Vaughn said. “At the same time, our collaborative partnership, plus offering unique research experiences to students underrepresented in the field, works toward diversifying the future of the language sciences.”

The grant also funds the development of a training course in public-facing research, which will be offered for the first time at Planet Word next summer. Though offered through UMD, the course will be open to students from across the region. Those who take part will help lead the research studies, set to begin around the same time.

“Participating in public-facing research is an excellent opportunity for students,” said Huang. “Communicating science to broad audiences involves developing ways to hook people into engaging with questions when they have limited familiarity with the topic and unraveling scientific puzzles through the format of conversations.”

8/18/21

By Chris Carroll

As the clouds of mental illness gather, it can be difficult for patients to recognize their own symptoms and find necessary help to navigate storms like episodes of depression or schizophrenia.

With $1.2 million in new funding from the National Science Foundation, University of Maryland researchers are creating a computerized framework that could one day lead to a system capable of a mental weather forecast of sorts. It would meld language and speech analysis with machine learning and clinical expertise to help patients and mental health clinicians connect and head off crises while dealing with a sparsely resourced U.S. mental health care system.

“We’re addressing what has been called the ‘clinical white space’ in mental health care, when people are between appointments and their doctors have little ability to help monitor what’s happening with them,” said Philip Resnik, a professor of linguistics with a joint appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS) who is helping to lead the research.

The project was born with the help of a seed grant through the AI + Medicine for High Impact (AIM-HI) Challenge Awards, which bring together scholars at the University of Maryland, College Park (UMCP) with medical researchers at the University of Maryland, Baltimore (UMB) on major research initiatives that link artificial intelligence and medicine. Deanna Kelly, a professor of psychiatry at the University of Maryland School of Medicine, is another of the project’s leaders, as are electrical and computer engineering Professor Carol Espy-Wilson and computer science Assistant Professor John Dickerson, both at UMCP.

The new funding will help the research team pour their diverse expertise into a single framework, which would then be developed into a deployable system for testing in a clinical setting.

How would such a system work? Users might answer a series of questions about physical and emotional well-being, with the system employing artificial intelligence to analyze word choice and language use—Resnik’s area of focus in the project. It could also monitor the patient’s speech patterns, analyzing changes in the timing and degree of movement made by the lips and different parts of the tongue, and comparing them to a baseline sample taken from healthy control subjects or recorded earlier, when the participant was in remission, said Espy-Wilson, who has an appointment in the Institute for Systems Research.
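As a small, hypothetical illustration of the baseline-comparison step (the pause-duration feature, the data and the numbers below are assumptions for this sketch, not the team’s pipeline), one could summarize pause behavior in a recording and score how far a new session drifts from the speaker’s own earlier sessions.

```python
# Hypothetical sketch of comparing one speech feature to a personal baseline;
# not the researchers' actual system or features.
import statistics

def mean_pause(pauses):
    """Average pause duration (seconds) in one recording session."""
    return statistics.mean(pauses)

def drift_score(baseline_sessions, new_session):
    """z-score of the new session's mean pause relative to baseline sessions."""
    baseline_means = [mean_pause(p) for p in baseline_sessions]
    mu = statistics.mean(baseline_means)
    sigma = statistics.stdev(baseline_means)
    return (mean_pause(new_session) - mu) / sigma

# Made-up pause durations from three baseline sessions and one new session.
baseline = [[0.3, 0.5, 0.4], [0.6, 0.4, 0.5], [0.4, 0.3, 0.6]]
today = [0.9, 1.1, 0.8, 1.0]

print(f"Drift from baseline: {drift_score(baseline, today):+.1f} standard deviations")
```

A real system would track many such features covering articulation, timing and word choice, and would leave clinical interpretation to clinicians.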

People generally overlap neighboring sounds when speaking, beginning the next sound before finishing the previous one, a process called co-production. But someone suffering from depression, for instance, has simpler coordination, and their sounds don’t overlap to the same extent.

“You can't think as fast, you can't talk as fast when you’re depressed,” said Espy-Wilson. “And when you talk, you have more and longer pauses … You have to think more about what you want to say. The more depressed you are, the more of the psychomotor slowing you're going to have.”

While the final form of the system has yet to take shape, it could potentially live in an app on patients’ phones, and with their permission, automatically monitor their mental state and determine their level of need for clinical intervention, as well as what resources are available to help.

If the system simply directed streams of patients at already overloaded doctors or facilities with no open beds, it could potentially make things worse for everyone, said Dickerson, who has a joint appointment in UMIACS.

He’s adding his expertise to work that Resnik and Espy-Wilson have been pursuing for years, and taking on the central challenge—using an approach known in the machine learning field as the “multi-armed bandit” problem—of creating a system that can deploy limited clinical resources while simultaneously determining how to best meet a range of evolving patient needs. During development and testing, the AI system’s determinations will always be monitored by a human overseer, said Dickerson.
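For readers unfamiliar with the term, the toy sketch below shows an epsilon-greedy multi-armed bandit, one standard approach to this kind of explore-versus-exploit trade-off. The “arms” and benefit numbers are entirely hypothetical stand-ins for clinical resources, included only for illustration, not drawn from the project.

```python
# Toy epsilon-greedy multi-armed bandit; arm names and benefit values are
# hypothetical placeholders, not the project's resources or data.
import random

ARMS = ["self-guided app module", "telehealth check-in", "in-person referral"]
TRUE_BENEFIT = [0.3, 0.5, 0.7]      # assumed average benefit of each option

counts = [0] * len(ARMS)
values = [0.0] * len(ARMS)          # running estimate of each arm's benefit
EPSILON = 0.1                       # fraction of rounds spent exploring

for _ in range(1000):
    if random.random() < EPSILON:                        # explore a random option
        arm = random.randrange(len(ARMS))
    else:                                                # exploit the best estimate so far
        arm = max(range(len(ARMS)), key=lambda i: values[i])
    reward = 1.0 if random.random() < TRUE_BENEFIT[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update

for name, v, n in zip(ARMS, values, counts):
    print(f"{name}: estimated benefit {v:.2f} after {n} assignments")
```

The real research problem is harder, since resources have limited capacity and patient needs evolve over time, but the bandit framing captures the core tension between learning what works and acting on what is already known.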

The World Health Organization estimated a decade ago that the cost of treating mental health issues between 2011 and 2030 would top $16 trillion worldwide, exceeding the cost of cardiovascular diseases. The stresses of the COVID-19 pandemic have exacerbated an already high level of need, and in some cases resulted in breakdown conditions for the system, said Kelly, director of the Maryland Psychiatric Research Center’s Treatment Research Program.

As the project develops, the technology could not only connect patients with a higher level of care to prevent worsening problems (avoiding costlier care), but also might help clinicians understand which patients don’t need hospitalization. Living in the community with necessary supports is often healthier than staying in a psychiatric facility—plus it’s cheaper and frees up a hospital bed for someone who needs it, she said.

“Serious mental illness makes up a large portion of health care costs here in the U.S. and around the world,” Kelly said. “Finding a way to assist clinicians in preventing relapses and keeping people well could dramatically improve people’s lives, as well as save money.”

Aadit Tambe M.Jour. ’22 contributed to this article.
