According to several influential theoretical frameworks, phonological deficits in dyslexia result from reduced sensitivity to the acoustic cues that are essential for developing robust phonemic representations. Some accounts suggest that these deficits arise from impairments in rapid auditory adaptation processes that are either speech-specific or domain-general. Here, we examined the specificity of auditory adaptation deficits in dyslexia using a nonlinguistic tone anchoring (adaptation) task and a linguistic selective adaptation task in children and adults with and without dyslexia. Children and adults with dyslexia had elevated tone-frequency discrimination thresholds, but both groups benefited from anchoring to repeated stimuli to the same extent as typical readers. Additionally, although both dyslexia groups had overall reduced accuracy for speech sound identification, only the child group showed reduced categorical perception for speech. Across both age groups, individuals with dyslexia showed reduced perceptual adaptation to speech. These results highlight broad auditory perceptual deficits across development in individuals with dyslexia in both linguistic and nonlinguistic domains, alongside adaptation deficits that are specific to speech. Finally, mediation models in children and adults revealed that the causal pathways from basic perception and adaptation to phonological awareness through speech categorization were not significant. Thus, perceptual deficits may co-occur with, rather than cause, the phonological deficits in dyslexia across development.
The ability to detect and respond to linguistic errors is critical for successful reading comprehension, but these skills vary considerably across readers. In the current study, healthy adults (ages 18-35) read short discourse scenarios for comprehension while monitoring for the presence of semantic anomalies. Using a factor-analytic approach, we examined whether performance on nonlinguistic conflict monitoring tasks (Stroop, AX-CPT) would predict individual differences in neural and behavioral measures of linguistic error processing. Consistent with this hypothesis, domain-general conflict monitoring predicted both readers' end-of-trial acceptability judgments and the amplitude of a late neural response (the P600) evoked by linguistic anomalies. The influence on the P600 was nonlinear, suggesting that online neural responses to linguistic errors are shaped by both the effectiveness and the efficiency of domain-general conflict monitoring. These relationships were also highly specific and remained after controlling for variability in working memory capacity and verbal knowledge. Finally, we found that domain-general conflict monitoring also predicted individual variability in measures of reading comprehension, and that this relationship was partially mediated by behavioral measures of linguistic error detection. These findings inform our understanding of the role of domain-general executive functions in reading comprehension, with potential implications for the diagnosis and treatment of language impairments.
During language comprehension, we routinely use information from the prior context to help identify the meaning of individual words. While measures of online processing difficulty, such as reading times, are strongly influenced by contextual predictability, there is disagreement about the mechanisms underlying this lexical predictability effect, with different models predicting different linking functions – linear (Reichle, Rayner, & Pollatsek, 2003) or logarithmic (Levy, 2008). To help resolve this debate, we conducted two highly powered experiments (self-paced reading, N = 216; cross-modal picture naming, N = 36), and a meta-analysis of prior eye-tracking-while-reading studies (total N = 218). We observed a robust linear relationship between lexical predictability and word processing times across all three studies. Beyond their methodological implications, these findings also place important constraints on predictive processing models of language comprehension. In particular, these results directly contradict the empirical predictions of surprisal theory, while supporting a proportional pre-activation account of lexical prediction effects in comprehension.
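The two candidate linking functions contrasted above can be written schematically as follows (a sketch using illustrative notation, not drawn verbatim from the cited models; RT denotes word processing time, and a and b are free parameters):

```latex
% Linear linking function (proportional pre-activation):
\mathrm{RT}(w) \;=\; a \;+\; b\,\bigl[\,1 - p(w \mid \mathrm{context})\,\bigr]

% Logarithmic linking function (surprisal; Levy, 2008):
\mathrm{RT}(w) \;=\; a \;-\; b\,\log p(w \mid \mathrm{context})
```

Under the linear account, processing time decreases in direct proportion to a word's contextual probability; under the surprisal account, the cost of a fully predictable word approaches zero while the cost of an unpredictable word grows without bound as its probability falls toward zero.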
To make sense of the world around us, we must be able to segment a continuous stream of sensory inputs into discrete events. In this review, I propose that in order to comprehend events, we engage hierarchical generative models that “reverse engineer” the intentions of other agents as they produce sequential action in real time. By generating probabilistic predictions for upcoming events, generative models ensure that we are able to keep up with the rapid pace at which perceptual inputs unfold. By tracking our certainty about other agents' goals and the magnitude of prediction errors at multiple temporal scales, generative models enable us to detect event boundaries by inferring when a goal has changed. Moreover, by adapting flexibly to the broader dynamics of the environment and our own comprehension goals, generative models allow us to optimally allocate limited resources. Finally, I argue that we use generative models not only to comprehend events but also to produce events (carry out goal-relevant sequential action) and to continually learn about new events from our surroundings. Taken together, this hierarchical generative framework provides new insights into how the human brain processes events so effortlessly while highlighting the fundamental links between event comprehension, production, and learning.