In this study, we used event-related potentials (ERPs) to examine how two dimensions of emotion, valence and arousal, influence distinct stages of word processing under varying task demands. In two experiments, two groups of participants viewed the same set of single emotional and neutral words while carrying out different tasks. In both experiments, valence (pleasant, unpleasant, and neutral) was fully crossed with arousal (high and low). We found that the task made a substantial contribution to how valence and arousal modulated the late positive complex (LPC), which is thought to reflect sustained evaluative processing, particularly of emotional stimuli. When participants performed a semantic categorization task in which emotion was not directly relevant to performance, the LPC was larger for high-arousal than for low-arousal words, with no effect of valence. In contrast, when participants performed an overt valence categorization task, the LPC showed a large effect of valence, with unpleasant words eliciting the largest positivity, but no effect of arousal. These data show not only that valence and arousal act independently to influence word processing, but also that their relative contributions to prolonged evaluative neural processes are strongly shaped by situational demands (and by individual differences, as revealed in a subsequent analysis of subjective judgments).
Although there is broad agreement that top-down expectations can facilitate lexical-semantic processing, the mechanisms driving these effects remain unclear. In particular, while previous electroencephalography (EEG) research has demonstrated a reduction in the N400 response to words in a supportive context, it is often challenging to dissociate facilitation due to bottom-up spreading activation from facilitation due to top-down expectations. The goal of the current study was to determine the cortical areas specifically associated with facilitation due to top-down prediction, using magnetoencephalography (MEG) recordings supplemented by EEG and functional magnetic resonance imaging (fMRI) in a semantic priming paradigm. To modulate expectation processes while holding context constant, we manipulated the proportion of related pairs across two blocks (10% and 50% related). Event-related potential results demonstrated a larger N400 reduction when a related word was predicted, and MEG source analysis of activity in this time window (350-450 ms) localized the differential responses to left anterior temporal cortex. fMRI data from the same participants support the MEG localization, showing contextual facilitation in left anterior superior temporal gyrus for the high-expectation block only. Together, these results provide strong evidence that facilitatory effects of lexical-semantic prediction on the electrophysiological response 350-450 ms post-onset reflect modulation of activity in left anterior temporal cortex.
Since the early 2000s, several ERP studies have challenged the assumption that we always use syntactic contextual information to influence semantic processing of incoming words, as reflected by the N400 component. One approach to explaining these findings is to posit distinct semantic and syntactic processing mechanisms, each with a distinct time course. While this approach can explain specific datasets, it cannot account for the wider body of findings. I propose an alternative explanation: a dynamic generative framework in which our goal is to infer the underlying event that best explains the set of inputs encountered at any given time. Within this framework, combinations of semantic and syntactic cues with varying reliabilities are used as evidence to weight probabilistic hypotheses about this event. I further argue that the computational principles of this framework can be extended to understand how we infer situation models during discourse comprehension, and intended messages during spoken communication.
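The core computation of the framework above can be illustrated with a minimal, purely hypothetical sketch: candidate event hypotheses receive probabilistic support from semantic and syntactic cues, with each cue's influence scaled by its estimated reliability. The event labels, cue distributions, and reliability weights below are invented for illustration and are not taken from any dataset.

```python
# Minimal sketch (hypothetical numbers): reliability-weighted combination of
# semantic and syntactic cues over probabilistic event hypotheses.

def posterior(prior, likelihoods, reliabilities):
    """Weight each cue's likelihood by its reliability (0 = ignore the cue,
    1 = trust it fully), combine with the prior, and normalize."""
    scores = {}
    for event, p in prior.items():
        score = p
        for cue, weight in zip(likelihoods, reliabilities):
            score *= cue[event] ** weight
        scores[event] = score
    total = sum(scores.values())
    return {event: s / total for event, s in scores.items()}

# Two candidate events for a sentence like "the dog was bitten by the man":
prior = {"dog-bites-man": 0.7, "man-bites-dog": 0.3}           # world knowledge
semantic_cue = {"dog-bites-man": 0.9, "man-bites-dog": 0.1}    # plausibility
syntactic_cue = {"dog-bites-man": 0.1, "man-bites-dog": 0.9}   # passive morphology

# When syntax is treated as highly reliable, it can override semantic bias:
result = posterior(prior, [semantic_cue, syntactic_cue], [0.3, 1.0])
```

Lowering the reliability weight on the syntactic cue (e.g., in noisy input) shifts the inference back toward the semantically plausible event, capturing the trade-off the framework proposes.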
We consider several key aspects of prediction in language comprehension: its computational nature, the representational level(s) at which we predict, whether we use higher-level representations to predictively pre-activate lower-level representations, and whether we 'commit' in any way to our predictions, beyond pre-activation. We argue that the bulk of behavioral and neural evidence suggests that we predict probabilistically and at multiple levels and grains of representation. We also argue that we can, in principle, use higher-level inferences to predictively pre-activate information at multiple lower representational levels. We further suggest that the degree and level of predictive pre-activation might be a function of the expected utility of prediction, which, in turn, may depend on comprehenders' goals and their estimates of the relative reliability of their prior knowledge and the bottom-up input. Finally, we argue that all these properties of language understanding can be naturally explained and productively explored within a multi-representational hierarchical actively generative architecture whose goal is to infer the message intended by the producer, and in which predictions play a crucial role in explaining the bottom-up input.
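The proposal that pre-activation tracks the expected utility of prediction can be made concrete with a small sketch. Everything here is a hypothetical toy: the function, its parameters (benefit, cost, reliability), and all numeric values are invented to illustrate the qualitative claim, not drawn from the paper.

```python
# Hypothetical sketch: degree of predictive pre-activation as a function of
# expected utility, gated by the comprehender's trust in prior knowledge
# relative to the bottom-up input. All parameter values are invented.

def preactivation_gain(p_correct, benefit, cost, prior_reliability):
    """Expected utility of predicting: a benefit if the prediction is
    fulfilled, a cost if it is not, scaled by trust in prior knowledge.
    Returns a gain in [0, 1]; negative utility yields no pre-activation."""
    expected_utility = p_correct * benefit - (1 - p_correct) * cost
    return max(0.0, min(1.0, prior_reliability * expected_utility))

# Highly constraining context with trustworthy priors -> strong pre-activation:
strong = preactivation_gain(p_correct=0.9, benefit=1.0, cost=0.5,
                            prior_reliability=0.9)
# Weakly constraining context or unreliable priors -> little or none:
weak = preactivation_gain(p_correct=0.3, benefit=1.0, cost=0.5,
                          prior_reliability=0.4)
```

The point of the sketch is only the qualitative pattern: when predictions are unlikely to be fulfilled, or priors are distrusted, expected utility falls and the system should predict weakly or not at all.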
Probabilistic prediction plays a crucial role in language comprehension. When predictions are fulfilled, the resulting facilitation allows for fast, efficient processing of ambiguous, rapidly unfolding input; when predictions are not fulfilled, the resulting error signal allows us to adapt to broader statistical changes in this input. We used functional magnetic resonance imaging (fMRI) to examine the neuroanatomical networks engaged in semantic predictive processing and adaptation. We used a relatedness proportion semantic priming paradigm, in which we manipulated the probability of predictions while holding local semantic context constant. Under conditions of higher (versus lower) predictive validity, we replicated previous observations of reduced activity to semantically predictable words in the left anterior superior/middle temporal cortex, reflecting facilitated processing of targets that are consistent with prior semantic predictions. In addition, under conditions of higher (versus lower) predictive validity, we observed significant differences in the effects of semantic relatedness within the left inferior frontal gyrus and the posterior portion of the left superior/middle temporal gyrus. We suggest that, together, these two regions mediated the suppression of unfulfilled semantic predictions and the lexico-semantic processing of unrelated targets that were inconsistent with these predictions. Moreover, under conditions of higher (versus lower) predictive validity, a functional connectivity analysis showed that the left inferior frontal and left posterior superior/middle temporal gyri were more tightly interconnected with one another, as well as with the left anterior cingulate cortex.
The left anterior cingulate cortex was, in turn, more tightly connected to superior lateral frontal cortices and subcortical regions, a network that mediates rapid learning and adaptation and that may have played a role in switching to a more predictive mode of processing in response to the statistical structure of the wider environmental context. Together, these findings highlight close links between the networks mediating semantic prediction, executive function, and learning, giving new insights into how our brains are able to flexibly adapt to our environment.
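The idea of switching to a more predictive mode in response to environmental statistics can be sketched with a simple error-driven update rule. This is an illustrative toy, not a model from the study: the delta-rule form, learning rate, and trial sequence are all assumptions, chosen only to mimic a high-validity block like the relatedness proportion manipulation described above.

```python
# Hypothetical sketch: tracking predictive validity with a delta rule.
# The running estimate could then gate how strongly the system predicts
# on the next trial. Learning rate and trial sequence are invented.

def adapt(validity, fulfilled, rate=0.1):
    """Nudge the estimated predictive validity toward the observed outcome
    (1.0 if the prediction was fulfilled, 0.0 if it was violated)."""
    outcome = 1.0 if fulfilled else 0.0
    return validity + rate * (outcome - validity)

validity = 0.5                              # neutral initial estimate
for fulfilled in [True] * 8 + [False] * 2:  # a block where 80% of predictions succeed
    validity = adapt(validity, fulfilled)
# After a high-validity block, the estimate rises above its starting point,
# favoring a shift toward a more predictive mode of processing.
```

In a low-validity block (mostly unfulfilled predictions), the same rule drives the estimate down, corresponding to a retreat from prediction.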