Research in the ALLears lab broadly examines how listeners perceive the speech signal and understand spoken words. In particular, we are interested in the perceptual mechanisms that support spoken word understanding under less-than-ideal listening conditions (e.g., degraded speech signals) and in how principles of learning support adaptation to novel speech signals. The lab's overarching research theme is understanding the plasticity mechanisms that support speech perception in both typical perceivers (e.g., listeners with typical speech, language, and hearing) and atypical perceivers (e.g., listeners with impairments in speech, language, or hearing).
Current research areas of interest include:
🗣️ Speech Perception:
Our perception of speech is not a direct translation of the physical properties of the acoustic signal. How do listeners come to perceive the auditory signal of speech as meaningful words and sentences, and use that information to enhance future interactions?
🎧 Perceptual Learning of Speech:
Listeners routinely encounter difficult-to-understand speech signals (e.g., bad cellphone reception, accented speech, noisy environments), yet with a bit of experience, they often come to comprehend those signals rather quickly. How might structured experiences, delivered through auditory training paradigms, be designed to maximize benefits for adults learning to comprehend acoustically degraded speech signals?
🦻 Translational Rehabilitation:
Listeners with hearing loss may use hearing aids or cochlear implants to regain some degree of auditory stimulation and/or speech understanding. A current challenge in the rehabilitation literature is accounting for the variability in speech comprehension performance observed among individuals who use these devices. How can clinicians optimize rehabilitation for patients with hearing loss who use cochlear implants or hearing aids?
The ALLears lab is committed to making our research as accessible as possible to scientists, clinicians, and the general public. We embrace open science and are working to implement as many open science practices as permissible in our research. We use the OSF as a repository for preprints, preregistrations, publicly available data files, and code whenever possible. Click the icon below to access our OSF repository.
You can access our latest publications by clicking on the link below. Please note that this list is intended for personal use (downloading the papers may violate copyright law in your country).
- Drouin, J.R. & Theodore, R.M. (in prep). Many tasks, same outcome: Role of training task on learning and maintenance of noise-vocoded speech. To be submitted to Journal of the Acoustical Society of America.
- Drouin, J.R., Zysk, V., Myers, E.B., & Theodore, R.M. (in prep). Sleep-based memory consolidation stabilizes perceptual learning of noise-vocoded speech. To be submitted to Journal of Speech, Language, and Hearing Research.
- Drouin, J.R. & Theodore, R.M. (2020). Leveraging interdisciplinary perspectives to optimize auditory training for cochlear implant users. Language & Linguistics Compass, 14(9), e12394. https://doi.org/10.1111/lnc3.12394
- Drouin, J.R. & Theodore, R.M. (2018). Lexically guided perceptual learning is robust to task-based changes in listening strategy. Journal of the Acoustical Society of America, 144(2), 1089–1099. https://doi.org/10.1121/1.5047672
- Drouin, J.R., Monto, N.R., & Theodore, R.M. (2017). Talker-specificity effects in spoken language processing: Now you see them, now you don't. In The speech processing lexicon: Neurocognitive and behavioral approaches, 22 (pp. 107–128). https://doi.org/10.1515/9783110422658-006
- Drouin, J.R. & Theodore, R.M. (2016). Lexically guided perceptual learning of internal phonetic category structure. Journal of the Acoustical Society of America, 140(4), EL307–EL313. https://doi.org/10.1121/1.504767