My latest research is described here:
Project: Auditory Scene Analysis
More than half of the world's population above the age of 75 develops age-related hearing loss. These listeners have difficulty understanding speech amidst background noise, for example when listening to someone speak in a noisy cafe. Colloquially this is known as the 'cocktail party problem', which most animals are able to solve but computers cannot. However, how our brains solve this challenge is not well understood.
I explored whether monkeys are a good model of the human brain mechanisms underlying auditory segregation. Unlike humans, monkeys allow systematic invasive brain recordings that can characterise how single neurons achieve this feat. However, before one can record from a monkey brain and generalise the results to humans, it is essential to show that the underlying mechanisms are similar in both species.
I employed synthetic auditory stimuli rather than speech, as they carry no semantic confounds and help us to develop animal models. Our behavioural experiments showed that rhesus macaques are able to perform auditory segregation based on the simultaneous onset of spectral elements (temporal coherence). I conducted functional magnetic resonance imaging (fMRI) in awake behaving macaques to show that the underlying brain network is similar to that seen in humans. My study is the first to show such evidence in any animal model.
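A minimal sketch of how a temporally coherent figure-ground stimulus of this kind can be generated. All names and parameters here (chord duration, tone counts, frequency range) are illustrative assumptions, not the settings used in the actual experiments:

```python
import numpy as np

def make_figure_ground(n_chords=40, chord_dur=0.05, fs=16000,
                       n_bg_tones=10, n_fig_tones=4, seed=0):
    """Sketch of a stochastic figure-ground stimulus: each short chord
    contains randomly drawn background tones, while the 'figure' is a
    fixed set of frequencies repeated in every chord. The figure
    components therefore share onsets (temporal coherence), which is
    the cue that allows them to be segregated from the background."""
    rng = np.random.default_rng(seed)
    freq_pool = np.geomspace(200.0, 7200.0, 120)   # log-spaced candidate tones
    fig_freqs = rng.choice(freq_pool, n_fig_tones, replace=False)
    n = int(chord_dur * fs)
    t = np.arange(n) / fs
    ramp = np.hanning(2 * (n // 10))               # short onset/offset ramps
    env = np.ones(n)
    env[:n // 10] = ramp[:n // 10]
    env[-(n // 10):] = ramp[-(n // 10):]
    chords = []
    for _ in range(n_chords):
        bg_freqs = rng.choice(freq_pool, n_bg_tones, replace=False)
        chord = sum(np.sin(2 * np.pi * f * t)
                    for f in np.concatenate([bg_freqs, fig_freqs]))
        chords.append(env * chord / (n_bg_tones + n_fig_tones))
    return np.concatenate(chords), fig_freqs

stim, fig = make_figure_ground()   # 2 s of audio with an embedded 4-tone figure
```

Removing the repeated `fig_freqs` (or re-drawing them on every chord) yields a control stimulus with no coherent figure.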
Now my experiments aim to address the dynamics underlying auditory figure-ground and speech-in-noise segregation using electroencephalography (EEG) in normal-hearing humans.
Poster Summary:
Video Summary:
Project: Timbre
Timbre is a multidimensional acoustic property that allows us to distinguish two musical instruments playing an identical pitch. Spectral flux is one of the timbral dimensions that is particularly relevant for the processing of animal vocalisations and human speech. It can be characterised as the time duration over which information is conveyed.
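One common way to quantify spectral flux (a sketch, not the measure used in the study; frame length and hop are illustrative) is the frame-to-frame change in the magnitude spectrum: a sound whose information unfolds over long, stable time windows yields low flux, while rapid spectral change yields high flux:

```python
import numpy as np

def spectral_flux(x, fs, frame_len=1024, hop=512):
    """Sketch: mean frame-to-frame change of the magnitude spectrum.
    Parameters are illustrative, not those of the actual experiments."""
    frames = [x[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(x) - frame_len, hop)]
    mags = np.array([np.abs(np.fft.rfft(f)) for f in frames])
    diffs = np.diff(mags, axis=0)                     # change between frames
    return float(np.mean(np.sqrt(np.sum(diffs ** 2, axis=1))))

fs = 16000
t = np.arange(fs) / fs
steady = np.sin(2 * np.pi * 440 * t)                  # stable tone: low flux
sweep = np.sin(2 * np.pi * (200 + 1800 * t) * t)      # frequency sweep: higher flux
flux_steady = spectral_flux(steady, fs)
flux_sweep = spectral_flux(sweep, fs)
```

The steady tone's spectrum is nearly identical from frame to frame, so its flux is close to zero, while the sweep's moving spectral peak produces a much larger value.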
I explored whether monkeys are a good model of the human brain mechanisms underlying the processing of spectral flux. I conducted fMRI in awake behaving macaques using synthetic stimuli to show that the anatomical organisation of time-window processing is similar to that in humans. However, monkeys exhibit decreased sensitivity to longer time windows compared with humans.
This difference in sensitivity is surprising given the phylogenetic proximity of the two species. It might be due to the specialisation of the human brain for processing speech, which requires greater sensitivity to longer time windows. My study highlights brain mechanisms that might be unique to humans, possibly an outcome of divergent evolution alongside the development of speech.
Now my experiments aim to find the difference limen for spectral flux discrimination in humans using adaptive staircase techniques (psychophysics).
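A minimal sketch of a standard 2-down-1-up adaptive staircase of the kind used to estimate a difference limen (it converges on the ~70.7%-correct point). The observer model, starting level, and step size are hypothetical placeholders:

```python
import random

def staircase_2down1up(respond, start=8.0, step=1.0, n_reversals=8):
    """Sketch of a 2-down-1-up staircase. `respond(level)` returns True
    on a correct trial. Two consecutive correct responses make the task
    harder (smaller level); one error makes it easier. The threshold is
    estimated from the last few reversal points."""
    level, correct_streak, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_streak += 1
            if correct_streak == 2:            # two correct in a row: go down
                correct_streak = 0
                if direction == +1:            # direction changed: reversal
                    reversals.append(level)
                direction = -1
                level = max(level - step, 0.0)
        else:                                  # one error: go up
            correct_streak = 0
            if direction == -1:                # direction changed: reversal
                reversals.append(level)
            direction = +1
            level += step
    tail = reversals[-6:]
    return sum(tail) / len(tail)

# Hypothetical simulated observer: always correct above a true limen of 3,
# otherwise guessing at the 2AFC chance rate of 50%.
random.seed(1)
threshold = staircase_2down1up(lambda d: d > 3 or random.random() < 0.5)
```

With a real participant, `respond` would present a trial at the given spectral-flux difference and record the button press.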
Project: Auditory Working Memory
Auditory working memory (AWM) is the process of keeping representations of auditory objects in mind for a short duration when the sounds are no longer in the environment. This differs from phonological working memory in that these sounds cannot be assigned a semantic label.
A recent fMRI study of AWM in humans showed a network of activation spanning auditory cortex, the hippocampus, and the inferior frontal gyrus. The authors proposed a system for AWM in which sound-specific representations in auditory cortex are kept active by projections from the hippocampus and inferior frontal cortex.
My MEG project aims to understand the dynamics underlying this proposed system. What mechanisms underlie neural activity during retention?
Project: Auditory object boundary detection
A visual object might be easy to define and understand, but objects perceived via audition are also important. Auditory object analysis refers to the process of perceiving the auditory world, which requires, amongst other processes, the fundamental perceptual ability to detect boundaries between auditory objects.
Thus a fundamental question in auditory perception is how the brain detects boundaries between objects that contain multiple spectro-temporally varying components. This requires mechanisms that detect changes in the statistical rules governing object regions in frequency-time space.
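A toy sketch of this kind of statistical change detection, under assumed data: a frequency-time matrix whose activity statistics change halfway through, and a detector that compares the mean spectral profile before and after each candidate boundary. This is a stand-in illustration, not the model under study:

```python
import numpy as np

def detect_boundary(tf, win=10):
    """Sketch: find the time bin where the statistics of a frequency-time
    representation change most, by comparing the mean spectral profile in
    a window before versus after each candidate boundary."""
    n_t = tf.shape[1]
    scores = np.full(n_t, -np.inf)
    for t in range(win, n_t - win):
        before = tf[:, t - win:t].mean(axis=1)
        after = tf[:, t:t + win].mean(axis=1)
        scores[t] = np.linalg.norm(after - before)   # size of the change
    return int(np.argmax(scores))

# Toy 'texture': 40 frequency channels x 100 time bins; activity sits in the
# low channels for the first half and the high channels for the second half.
rng = np.random.default_rng(0)
tex = np.zeros((40, 100))
tex[:20, :50] = rng.random((20, 50))
tex[20:, 50:] = rng.random((20, 50))
boundary = detect_boundary(tex)   # should land near time bin 50
```

The detector peaks where the before/after spectral profiles differ most, i.e. at the statistical boundary between the two texture regions.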
My MEG project aims to understand the dynamics underlying auditory object boundary detection in humans using synthetic stimuli that we call "Acoustic Textures".