Using speech production to detect cognitive decline and dementia progression
Experiment names: Talk of Life, Circle of Life
Language is an extremely rich signal: it varies widely across speakers and can be diagnostic of a speaker's underlying cognitive status. This line of research investigates the characteristics of speech production (e.g., the types of words produced, speech rate, use of filler words) that are measurably affected in Alzheimer's disease, other dementias, and cognitive decline, and how speech production changes as the disease progresses. In particular, this research focuses on developing automated tools that compute lexical-semantic features of speech to identify early stages of cognitive decline. We are investigating speech as a diagnostic marker both cross-sectionally, to detect people with different degrees of impairment, and longitudinally, to use current speech production to predict future degree of impairment. Ultimately, understanding how language production changes over the course of a progressive disease will allow linguistic measures to serve as a diagnostic tool that supplements existing clinical instruments.
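As a concrete illustration of the kind of features involved, here is a minimal Python sketch that computes a few lexical measures (type-token ratio, speech rate, filler rate) from a transcript. The filler list, feature names, and input format are illustrative assumptions, not the projects' actual pipeline.

```python
import re

# Illustrative set of single-word fillers (an assumption, not the study's list).
FILLERS = {"um", "uh", "er", "ah", "hmm"}

def lexical_features(transcript: str, duration_sec: float) -> dict:
    """Compute a few illustrative lexical features from a speech
    transcript; duration_sec is the audio length in seconds."""
    words = re.findall(r"[a-z']+", transcript.lower())
    n = len(words)
    if n == 0 or duration_sec <= 0:
        return {}
    return {
        # Lexical diversity: unique word types per token.
        "type_token_ratio": len(set(words)) / n,
        # Speech rate: words produced per minute.
        "speech_rate_wpm": n / (duration_sec / 60.0),
        # Proportion of tokens that are filler words.
        "filler_rate": sum(w in FILLERS for w in words) / n,
        # Mean word length, a rough proxy for lexical complexity.
        "mean_word_length": sum(map(len, words)) / n,
    }

print(lexical_features("um so I went to the the store yesterday", 6.0))
```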
Collaborators: Sara Czaja (Weill Cornell Medical College), Wally Boot (Weill Cornell Medical College), Graham Flick (Baycrest Hospital, U Toronto), John Gunstad (Kent State University), Kat Chia (Florida State University - graduate student intern)
Publications:
- Using Automatic Assessment of Speech Production to Predict Current and Future Cognitive Function in Older Adults (Ostrand & Gunstad, 2021)
- Automated Assessment of Speech Production and Prediction of MCI in Older Adults (Sanborn, Ostrand, Ciesla, & Gunstad, 2022)
- Lexical Speech Features of Spontaneous Speech in Older Persons With and Without Cognitive Impairment: Reliability Analysis (Hamrick, Sanborn, Ostrand, & Gunstad, 2023)
Grant Support:
- Center for Research and Education on Aging and Technology Enhancement (CREATE): Technology Support for Cognition and Social Engagement for Aging Adults with Mild Cognitive Impairment (MCI). NIH P01 (National Institute on Aging). Role: Consultant.
- Spontaneous Speech and Health Disparities in Risk of Cognitive Decline: WHICAP Offspring Ancillary Study. NIH R01 (National Institute on Aging). Role: Co-PI.
- Using automated speech analysis to predict cognitive decline and future Alzheimer's Disease. Cleveland Brain Health Initiative Scholars Grant. Role: Co-PI.
Partner-specificity of linguistic alignment
Experiment names: Sole Train, Have You Ever Seen The Train?, Train Train Go Away, I Can See Clearly Now (The Train Is Gone), Train in Spain, Talk Like an Egyptian, Rachel-Squared, Synpacts, Talking Care of Business
This area of research investigates statistical language learning and contextual adaptation within dialogue and conversation. A person's speech production can be affected by many aspects of the linguistic context, including recent linguistic experience, personal linguistic preferences, and characteristics of their listener. In particular, speakers adapt many properties of their speech to match those produced by their conversational partners, a process known as alignment. Do speakers learn, and align to, a partner's linguistic preferences on an individual basis, or do they adapt to the overall linguistic environment in a partner-independent way? For example, does a listener learn that a particular speaker, when given a choice, favors one syntactic structure over another, or does the listener's syntactic system simply adapt to recent experience regardless of who produced it?
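To make the partner-specificity question concrete, the following toy sketch computes a simple alignment score, the proportion of responses that reuse the prime structure, separately for each partner. The trial format and structure labels (DO = double object, PO = prepositional object) are illustrative assumptions, not the experiments' actual analysis code.

```python
from collections import defaultdict

# Each trial: (partner_id, prime_structure, response_structure).
trials = [
    ("A", "DO", "DO"), ("A", "DO", "PO"), ("A", "PO", "PO"),
    ("B", "PO", "PO"), ("B", "DO", "DO"), ("B", "PO", "DO"),
]

def alignment_by_partner(trials):
    """Proportion of responses matching the prime structure, per partner.
    Partner-specific alignment predicts these proportions should track
    each partner's own preferences; partner-independent adaptation
    predicts they should not differ by partner."""
    hits, totals = defaultdict(int), defaultdict(int)
    for partner, prime, response in trials:
        totals[partner] += 1
        hits[partner] += (response == prime)
    return {p: hits[p] / totals[p] for p in totals}

print(alignment_by_partner(trials))
```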
Collaborators: Vic Ferreira (UC San Diego), Iva Ivanova (UT El Paso), Rachel Ryskin (UC Merced), Eleanor Chodroff (University of York)
Publications:
- Rapid Lexical Alignment to a Conversational Agent (Ostrand, Ferreira, & Piorkowski, 2023)
- Learning speaker-specific structural expectations (Ostrand & Ryskin, Stage 1 Registered Report)
- It's alignment all the way down, but not all the way up: Speakers align on some features but not others within a dialogue (Ostrand & Chodroff, 2021)
- Repeat after us: Syntactic alignment is not partner-specific (Ostrand & Ferreira, 2019)
- Syntactic entrainment: The repetition of syntactic structures in event descriptions (Gruberg, Ostrand, Momma, & Ferreira, 2019)
The time course of audio-visual integration in language processing
Experiment names: Lips Don't Lie
In face-to-face speech, listeners receive both an auditory and a visual stream from their conversational partner. These two modalities necessarily enter the brain separately, but eventually are integrated so that the listener experiences a single, unified speech percept. This line of research investigates the time course and mechanisms behind this multi-sensory integration. Does this integration happen before or after lexical access? Which signal is sent to the lexicon for lexical access: the unimodal auditory signal or the integrated audio-visual percept?
Collaborators: Sheila Blumstein (Brown), Jim Morgan (Brown), Vic Ferreira (UCSD)
Publications:
- Semantic Priming from McGurk Words: Priming Depends on Perception (Dorsi, Ostrand, & Rosenblum, 2023)
- What you see isn't always what you get: Auditory word signals trump consciously perceived words in lexical access (Ostrand, Blumstein, Ferreira, & Morgan, 2016)
- When Hearing Lips and Seeing Voices Becomes Perceiving Speech: Auditory-Visual Integration in Lexical Access (Ostrand, Blumstein, & Morgan, 2011)
Changes in speech production to detect cognitive states
Speech production may reflect cognitive changes induced by mental health disorders, neurodegenerative diseases, and external states such as drug ingestion. In collaboration with many colleagues, I am involved in several exploratory projects investigating the use of linguistic markers to detect cognitive states, including Parkinson's disease medication state and drug use.
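As a rough illustration of this detection setup, the sketch below fits a simple classifier to linguistic-marker features. The synthetic data, binary labels (e.g., ON vs. OFF medication), and choice of logistic regression are placeholders for exposition, not the models or data used in these projects.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy feature matrix: rows = speech samples, columns = linguistic
# markers (e.g., speech rate, filler rate, lexical diversity).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = rng.integers(0, 2, size=40)  # hypothetical state labels

# Cross-validated accuracy of a simple linear classifier.
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```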
Collaborators: Guillermo Cecchi (IBM Research), Raquel Norel (IBM Research), Carla Agurto (IBM Research)
Publications:
- Automated computer vision assessment of hypomimia in Parkinson's disease (Abrami et al., 2021)
- Detection of Acute 3,4-Methylenedioxymethamphetamine (MDMA) Effects Across Protocols Using Automated Natural Language Processing (Agurto et al., 2020)
- Phonological markers of Oxytocin and MDMA ingestion (Agurto et al., 2017)
Last updated: October 18, 2023