Professor Jonathan Simon (Biology/ECE/ISR) is the Principal Investigator of a new National Institutes of Health National Institute on Deafness and Other Communication Disorders R01 grant, "Auditory Scene Analysis and Temporal Cortical Computations." The five-year, $1.5M grant started March 1, 2015. The research will further the understanding of how, in an environment with many sounds and voices, people are able to concentrate on an individual voice and understand what it is saying.

When many people in a room are talking at the same time, the sounds of their voices mix with each other before ever arriving at our ears. Despite the fact that sorting out this sound mixture, or auditory scene, into individual voices is a profoundly difficult mathematical problem, the human brain routinely accomplishes this task, and often with little apparent effort. The neural underpinnings of this task are not at all well understood. In addition, when this ability declines, for example due to hearing loss or aging, it is not known which specific mechanisms of neural processing are the most critical in preserving the remaining aspects of this ability.

Simon will use magnetoencephalography (MEG) to record from the auditory cortex of human subjects, specifically the temporally dynamic neural responses to individual sound elements and their mixtures. Linking the neural responses with their auditory stimuli and the listener's attentional state will allow inferences about the neural representations of these sounds. These neural representations are temporal: neural processing unfolds in time in response to ongoing acoustic dynamics.

Simon will use these temporal representations to investigate how complex auditory scenes are neurally encoded, from the broad mixture of the entire acoustic scene to separated individual sources, in different areas of auditory cortex, and with a special emphasis on speech. He hypothesizes that the brain's auditory cortex employs a universal neural encoding scheme, genuinely temporal in nature, which underlies not only general auditory processing but also auditory scene segregation.

Simon will determine how the auditory cortex neurally represents speech in difficult listening situations. One example is speech in noise in a reverberant environment, a highly relevant combination that can strongly undermine speech intelligibility. Another example is listening to a speaker in the presence of several competing speakers. In this case, understanding how the background (the mixture of the competing speakers) is neurally represented is of particular interest, and of direct relevance in determining how the brain segregates the foreground speech from the background.

Simon will also determine analogs of these neural speech representations for dynamic non-speech sounds, especially when the sounds are separate components of a larger acoustic scene. This will generalize what is known about speech segregation to a wider class of sounds. (While speech is very important for human listeners, most sounds are not speech.)

In addition, Simon will investigate the detailed neural mechanisms by which the auditory cortex identifies and isolates individual speakers in a complex acoustic scene. Pitch and timbre, two acoustic cues known to be important for this task, will be separately and independently modified, so that their individual contributions to the neural process of auditory scene segregation of speech may be determined.




March 5, 2015