Speech Perception

I am engaged in several ongoing lines of investigation into how the brain perceives speech. I am interested in how the brain represents speech sounds, how it uses those representations to make predictions, and whether those representations affect our ability to learn new contrasts.

There is abundant evidence from phonological rule application that phonemes are a fundamental organizational unit of speech sounds, but there is less direct evidence about when and how phonemes are deployed in the course of speech perception. The initial coding of speech sounds is known to be detailed and phonetic, but there is some evidence that the auditory system opts for abstract phoneme representations when making predictions over sets of varying speech sounds.

I use a passive mismatch negativity (MMN) paradigm to measure the brain’s response to occasional deviant sounds that differ from a set of varying standard sounds.
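
To make the design concrete, here is a minimal sketch, in Python, of how such an oddball sequence might be generated. The token labels, trial count, deviant probability, and the no-adjacent-deviants constraint are illustrative assumptions, not the exact parameters of my experiments:

```python
import random

def make_oddball_sequence(standards, deviant, n_trials=500,
                          p_deviant=0.15, seed=0):
    """Generate a pseudo-random oddball sequence of stimulus labels.

    standards : list of standard tokens (e.g., varying /t/ recordings)
    deviant   : the deviant token (e.g., a /d/ recording)
    Deviants never occur back-to-back, a common MMN design constraint.
    """
    rng = random.Random(seed)
    sequence = []
    last_was_deviant = True  # forces the sequence to open with a standard
    for _ in range(n_trials):
        if not last_was_deviant and rng.random() < p_deviant:
            sequence.append(deviant)
            last_was_deviant = True
        else:
            sequence.append(rng.choice(standards))  # varying standards
            last_was_deviant = False
    return sequence

# Hypothetical stimulus labels: /t/ standards varying in voice onset time (VOT)
t_standards = ["t_vot65", "t_vot75", "t_vot85", "t_vot95"]
trials = make_oddball_sequence(t_standards, deviant="d_vot15")
```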

The brain’s MMN response can tell us about the representations and predictions generated within auditory sensory memory. Does the brain use pre-defined, abstract phoneme representations to make predictions, or does it generate novel, ad hoc representations?

By manipulating the phonetic properties of the repeated stimuli, we can see whether the brain is tracking these small differences, or whether the auditory prediction consists only of category identity. In other words, if the brain is presented with a stream of phonetically varying /t/ sounds, will it use that fine-grained phonetic information to make a highly detailed prediction, or will it simply consider /t/ versus not-/t/?
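
The contrast between these two hypotheses can be made concrete with a toy model: under a fine-grained phonetic prediction, deviance is graded with acoustic distance from the standards, while under a purely categorical prediction it is all-or-nothing. In this sketch, the VOT values and the use of VOT as the varying dimension are illustrative assumptions:

```python
import statistics

def phonetic_surprise(token_vot, standard_vots):
    """Graded deviance: distance from the standards' VOT distribution, in SDs."""
    mean = statistics.mean(standard_vots)
    sd = statistics.stdev(standard_vots)
    return abs(token_vot - mean) / sd

def categorical_surprise(token_category, standard_category):
    """All-or-nothing deviance: 0 if same phoneme category, 1 otherwise."""
    return 0.0 if token_category == standard_category else 1.0

standard_vots = [65, 75, 85, 95]  # varying /t/ standards (hypothetical, in ms)

# A within-category /t/ deviant versus a cross-category /d/ deviant:
print(phonetic_surprise(45, standard_vots))   # graded: nonzero for an odd /t/
print(categorical_surprise("t", "t"))         # categorical: 0 for any /t/
print(phonetic_surprise(15, standard_vots))   # graded: larger for /d/
print(categorical_surprise("d", "t"))         # categorical: 1 for /d/
```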

In a series of experiments, I have found no modulation of MMN amplitude based on sub-categorical phonetic distance. This seems to indicate either that the brain treats all phonetically varying /t/ sounds simply as /t/ in this predictive context, or that the MMN does not encode deviance magnitude in a straightforward way.
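
For readers unfamiliar with the measure: the MMN is conventionally quantified from the deviant-minus-standard difference wave, with amplitude taken as the mean voltage in a latency window roughly 150–250 ms after stimulus onset. A minimal sketch of that quantification (the array shapes and window here are assumptions, not my exact pipeline):

```python
import numpy as np

def mmn_amplitude(standard_erp, deviant_erp, times, window=(0.15, 0.25)):
    """Mean amplitude of the deviant-minus-standard difference wave.

    standard_erp, deviant_erp : 1-D voltage arrays for one channel
    times                     : 1-D array of time points (seconds)
    window                    : latency window for the MMN (seconds)
    """
    difference_wave = deviant_erp - standard_erp
    mask = (times >= window[0]) & (times <= window[1])
    return difference_wave[mask].mean()

# Modulation by phonetic distance could then be tested by relating amplitude
# to distance across conditions, e.g., with a simple linear fit:
#   slope, intercept = np.polyfit(phonetic_distances, mmn_amplitudes, 1)
```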

The first of these experiments was published in the 2019 special issue of Attention, Perception, & Psychophysics in honor of Randy Diehl: https://doi.org/10.3758/s13414-019-01728-1

I interpret these results as an indication that the brain is lazily using pre-defined phoneme representations under these variable conditions, rather than making more accurate predictions using ad hoc phonetic representations.

In a way, our prior knowledge is interfering with the accuracy of our perceptual/predictive system!

This is complicated by another finding: the brain can discriminate standards from deviants even when all tokens are in the same category (all /t/), suggesting that while a pre-defined category representation is being used, it may still contain phonetic information!

The implications of this finding are fascinating to me, because it may indicate that the brain “fills in” the missing phonetic information (information that the brain lazily stopped tracking!) with a prototypical phonetic value.
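
To make the “filling in” idea concrete, a stored representation might pair a category label with a prototypical phonetic value, so that untracked phonetic detail is replaced rather than simply discarded. A purely speculative toy version, extending the surprise functions sketched above (the prototype VOTs and scaling are invented for illustration):

```python
PROTOTYPES = {"t": 75.0, "d": 15.0}  # hypothetical prototypical VOTs (ms)

def prototype_surprise(token_vot, token_category, standard_category,
                       scale=10.0):
    """Hybrid deviance: category mismatch is maximal; within-category
    tokens are compared against the category's prototypical VOT."""
    if token_category != standard_category:
        return 1.0
    distance = abs(token_vot - PROTOTYPES[standard_category])
    # Graded, but capped below a full cross-category mismatch:
    return min(distance / scale, 1.0) * 0.5

# A within-category /t/ deviant far from the prototype still registers:
print(prototype_surprise(45, "t", "t"))  # nonzero: prototype mismatch
print(prototype_surprise(75, "t", "t"))  # ~0: matches the /t/ prototype
```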