Can a single adjective immediately influence message-building during sentence processing? We presented participants with 168 sentence contexts, such as "His skin was red from spending the day at the …" Sentences ended with either the most expected word ("beach") or a low cloze probability completion ("pool"). Nouns were preceded by adjectives that changed their relative likelihood (e.g., "neighborhood" increases the cloze probability of pool whereas "sandy" promotes beach). We asked whether participants' online processing can be rapidly updated by the adjective, changing the resulting pattern of facilitation at the noun, and, if so, whether updates unfold symmetrically, not only increasing but also decreasing the fit of particular nouns. We measured event-related potentials (ERPs) to the adjective and the noun and modeled these with respect to (a) the overall amount of updating promoted by the adjective, (b) the preadjectival cloze probability of the noun, and (c) the amount of cloze probability change for the obtained noun after the adjective. Bayesian mixed-effects analysis of N400 amplitude at the noun revealed that adjectives rapidly influenced semantic processing of the noun, but did so asymmetrically, with positive updating (reducing N400 amplitudes) having a greater effect than negative updating (increasing N400s). At the adjective, the amount of (possible) updating was not associated with any discernible ERP modulation. Overall, these results suggest the information provided by adjectives is buffered until a head noun is encountered, at which point the access of the noun's semantics is shaped in parallel by both the adjective and the sentence-level representation.
Background: The air traffic management (ATM) system has historically coped with a global increase in traffic demand, ultimately leading to increased operational complexity. When dealing with the impact of this increasing complexity on system safety, it is crucial to automatically analyse losses of separation (LoSs) using tools able to extract meaningful and actionable information from safety reports. Current research in this field mainly exploits natural language processing (NLP) to categorise the reports, with the limitations that the considered categories need to be manually annotated by experts and that general taxonomies are seldom exploited. Methods: To address the current gaps, the authors propose to perform exploratory data analysis on safety reports combining state-of-the-art techniques like topic modelling and clustering and then to develop an algorithm able to extract the Toolkit for ATM Occurrence Investigation (TOKAI) taxonomy factors from the free-text safety reports based on syntactic analysis. TOKAI is an investigation tool developed by EUROCONTROL, and its taxonomy is intended to become a standard and harmonised approach to future investigations. Results: Leveraging the LoS events reported in the public databases of the Comisión de Estudio y Análisis de Notificaciones de Incidentes de Tránsito Aéreo and the United Kingdom Airprox Board, the authors show how their proposal is able to automatically extract meaningful and actionable information from safety reports, as well as to classify their content according to the TOKAI taxonomy. The quality of the approach is also indirectly validated by checking the connection between the identified factors and the main contributor to each incident. Conclusions: The authors' results are a promising first step toward the full automation of a general analysis of LoS reports, supported by results on real-world data coming from two different sources.
In the future, the authors' proposal could be extended to other taxonomies or tailored to identify factors to be included in the safety taxonomies.
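As a rough illustration of pattern-based factor extraction from free-text reports, a minimal sketch follows. The factor labels and trigger patterns below are invented for illustration; they are not the actual TOKAI taxonomy entries, and the authors' syntactic analysis is more sophisticated than simple keyword matching.

```python
import re

# Hypothetical factor labels and trigger patterns -- NOT the real
# TOKAI taxonomy, just a toy stand-in for illustration.
FACTOR_PATTERNS = {
    "late sighting": re.compile(r"\b(saw|sighted|noticed)\b.*\blate\b"),
    "readback error": re.compile(r"\bread[- ]?back\b.*\b(error|incorrect|wrong)\b"),
    "level bust": re.compile(r"\b(climbed|descended)\b.*\bwithout clearance\b"),
}

def extract_factors(report: str) -> list:
    """Return the factor labels whose pattern matches the report text."""
    text = report.lower()
    return [label for label, pat in FACTOR_PATTERNS.items() if pat.search(text)]

report = "The pilot noticed the other aircraft late and descended without clearance."
print(extract_factors(report))  # -> ['late sighting', 'level bust']
```

A production system would replace these flat patterns with rules over a syntactic parse, as the abstract describes, so that the extractor generalises beyond fixed word orders.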
Korean can be transcribed in two different scripts, one alphabetic (Hangul) and one logographic (Hanja). How does the mental lexicon represent the contributions of multiple scripts? Hangul’s highly transparent one-to-one relationship between spellings and sounds creates homophones in spoken Korean that are also homographs in Hangul, which can only be disambiguated through Hanja. We thus tested whether native speakers encoded the semantic contributions of the different Hanja characters sharing the same homographic form in Hangul in their mental representation of Sino-Korean. Is processing modulated by the number of available meanings, that is, the size of the semantic cohort? In two cross-modal lexical decision tasks with semantic priming, participants were presented with auditory primes that were either syllables (Experiment 1) or full Sino-Korean words (Experiment 2), followed by visual Sino-Korean full word targets. In Experiment 1, reaction times were not significantly modulated by the size of the semantic cohort. However, in Experiment 2, we observed significantly faster reaction times for targets preceded by primes with larger semantic cohorts. We discuss these findings in relation to the structure of the mental lexicon for bi-scriptal languages and the representation of semantic cohorts across different scripts.
Although heritage language phonology is often argued to be fairly stable, heritage language speakers often sound noticeably different from both monolinguals and second-language learners. In order to model these types of asymmetries, I propose a theoretical framework—an integrated multilingual sound system—based on modular representations of an integrated set of phonological contrasts. An examination of general findings in laryngeal (voicing, aspiration, etc.) phonetics and phonology for heritage languages shows that procedures for pronouncing phonemes are variable and plastic, even if abstract representations may remain stable. Furthermore, an integrated multilingual sound system predicts that use of one language may require a subset of the available representations, which illuminates the mechanisms that underlie phonological transfer, attrition, and acquisition.
Countries: United Kingdom, Italy, Netherlands
Project: EC | ACT (289404)
The current study examined the effects of variability on infant event-related potential (ERP) data editing methods. A widespread approach for analyzing infant ERPs is through a trial-by-trial editing process. Researchers identify electroencephalogram (EEG) channels containing artifacts and reject trials that are judged to contain excessive noise. This process can be performed manually by experienced researchers, partially automated by specialized software, or completely automated using an artifact-detection algorithm. Here, we compared the editing process from four different editors (three human experts and an automated algorithm) on the final ERP from an existing infant EEG dataset. Findings reveal that agreement between editors was low, both for the number of included trials and for the number of interpolated channels. Critically, this variability resulted in differences in the final ERP morphology and in the statistical results of the target ERP that each editor obtained. We also analyzed sources of disagreement by estimating the EEG characteristics that each human editor considered for accepting an ERP trial. In sum, our study reveals significant variability in ERP data editing pipelines, which has important consequences for the final ERP results. These findings represent an important step toward developing best practices for ERP editing methods in infancy research.
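The fully automated end of the editing spectrum can be illustrated with a minimal peak-to-peak rejection rule. This is a generic sketch, not the algorithm used in the study: the 150 µV threshold and the data layout are assumptions chosen for illustration.

```python
def reject_trials(epochs, ptp_limit=150.0):
    """Return a list of booleans, True for each accepted trial.

    epochs: list of trials; each trial is a list of channels; each
    channel is a list of voltage samples in microvolts. A trial is
    rejected if any channel's peak-to-peak amplitude exceeds
    ptp_limit. The 150 uV default is illustrative only, not the
    criterion applied in the study.
    """
    accepted = []
    for trial in epochs:
        ptp = max(max(ch) - min(ch) for ch in trial)
        accepted.append(ptp <= ptp_limit)
    return accepted

clean = [[0.0, 10.0, -5.0]]    # one channel, peak-to-peak = 15 uV
noisy = [[0.0, 200.0, -10.0]]  # peak-to-peak = 210 uV
print(reject_trials([clean, noisy]))  # -> [True, False]
```

Human editors weigh many more cues than amplitude alone, which is precisely why the study found such low agreement between a fixed rule like this and expert judgment.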
A frequently used procedure to examine the relationship between categorical and dimensional descriptions of emotions is to ask subjects to place verbal expressions representing emotions in a continuous multidimensional emotional space. This work chooses a different approach. It aims at creating a system predicting the values of Activation and Valence (AV) directly from the sound of emotional speech utterances, without the use of its semantic content or any other additional information. The system uses X-vectors to represent the sound characteristics of the utterance and a Support Vector Regressor to estimate the AV values. The system is trained on a pool of three publicly available databases with dimensional annotation of emotions. The quality of regression is evaluated on the test sets of the same databases. Mapping of categorical emotions to the dimensional space is tested on another pool of eight categorically annotated databases. The aim of the work was to test whether, in each unseen database, the predicted values of Valence and Activation would place emotion-tagged utterances in the AV space in accordance with expectations based on Russell’s circumplex model of affective space. Due to the great variability of speech data, clusters of emotions create overlapping clouds. Their average location can be represented by centroids. A hypothesis on the position of these centroids is formulated and evaluated. The system’s ability to separate the emotions is evaluated by measuring the distance between the centroids. It can be concluded that the system works as expected and the positions of the clusters follow the hypothesized rules. Although the variance in individual measurements is still very high and the overlap of emotion clusters is large, it can be stated that the AV coordinates predicted by the system lead to an observable separation of the emotions in accordance with the hypothesis.
Knowledge from training databases can therefore be used to predict AV coordinates of unseen data of various origins. This could be used to detect high levels of stress or depression. With the appearance of more dimensionally annotated training data, the systems predicting emotional dimensions from speech sound will become more robust and usable in practical applications in call-centers, avatars, robots, information-providing systems, security applications, and the like.
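The centroid-based evaluation described above can be sketched as follows. The emotion labels and AV coordinates in the example are invented for illustration, and the upstream X-vector extraction and Support Vector Regressor stages are not shown.

```python
import math
from collections import defaultdict

def centroids(predictions):
    """predictions: list of (emotion_label, activation, valence) triples,
    e.g. the AV values a regressor assigns to test utterances.
    Returns {label: (mean_activation, mean_valence)}."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for label, a, v in predictions:
        s = sums[label]
        s[0] += a
        s[1] += v
        s[2] += 1
    return {lab: (s[0] / s[2], s[1] / s[2]) for lab, s in sums.items()}

def separation(c1, c2):
    """Euclidean distance between two centroids in AV space."""
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1])

# Invented example: high-activation negative "anger" vs. low-activation
# positive "calm" utterances should yield well-separated centroids.
preds = [("anger", 0.8, -0.6), ("anger", 0.6, -0.4), ("calm", -0.5, 0.5)]
c = centroids(preds)
print(separation(c["anger"], c["calm"]) > 1.0)  # -> True
```

In the study, the hypothesized centroid positions follow Russell's circumplex; a larger inter-centroid distance relative to the cluster spread indicates better separation of the emotion categories.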
Peter Godfrey-Smith’s Metazoa and Joseph LeDoux’s The Deep History of Ourselves present radically different big pictures regarding the nature, evolution and distribution of consciousness in animals. In this essay review, I discuss the motivations behind these big pictures and try to steer a course between them.
In their recent paper on “Challenges in mathematical cognition”, Alcock and colleagues (Alcock et al., Challenges in mathematical cognition: A collaboratively-derived research agenda, Journal of Numerical Cognition, 2, 20-41) defined a research agenda through 26 specific research questions. An important dimension of mathematical cognition almost completely absent from their discussion is the cultural constitution of mathematical cognition. Spanning work from a broad range of disciplines – including anthropology, archaeology, cognitive science, history of science, linguistics, philosophy, and psychology – we argue that for any research agenda on mathematical cognition the cultural dimension is indispensable, and we propose a set of exemplary research questions related to it.
This paper examines the empirical relationship between individuals’ cognitive and non-cognitive abilities and COVID-19 compliance behaviors using cross-country data from the Survey of Health, Ageing and Retirement in Europe (SHARE). We find that both cognitive and non-cognitive skills predict responsible health behaviors during the COVID-19 crisis. Episodic memory is the most important cognitive skill, while conscientiousness and neuroticism are the most significant personality traits. There is also some evidence of a role for an internal locus of control in compliance.