Korean can be transcribed in two different scripts, one alphabetic (Hangul) and one logographic (Hanja). How does the mental lexicon represent the contributions of multiple scripts? Hangul’s highly transparent one-to-one relationship between spellings and sounds creates homophones in spoken Korean that are also homographs in Hangul and can only be disambiguated through Hanja. We thus tested whether native speakers encode the semantic contributions of the different Hanja characters sharing the same homographic Hangul form in their mental representation of Sino-Korean. Is processing modulated by the number of available meanings, that is, the size of the semantic cohort? In two cross-modal lexical decision tasks with semantic priming, participants were presented with auditory primes that were either syllables (Experiment 1) or full Sino-Korean words (Experiment 2), followed by visual Sino-Korean full-word targets. In Experiment 1, reaction times were not significantly modulated by the size of the semantic cohort. In Experiment 2, however, we observed significantly faster reaction times for targets preceded by primes with larger semantic cohorts. We discuss these findings in relation to the structure of the mental lexicon for bi-scriptal languages and the representation of semantic cohorts across different scripts.
Although heritage language phonology is often argued to be fairly stable, heritage language speakers often sound noticeably different from both monolinguals and second-language learners. To model these types of asymmetries, I propose a theoretical framework, an integrated multilingual sound system, based on modular representations of an integrated set of phonological contrasts. An examination of general findings in laryngeal (voicing, aspiration, etc.) phonetics and phonology for heritage languages shows that procedures for pronouncing phonemes are variable and plastic, even if abstract representations may remain stable. Furthermore, an integrated multilingual sound system predicts that use of one language may require only a subset of the available representations, which illuminates the mechanisms that underlie phonological transfer, attrition, and acquisition.
The current study examined the effects of variability in infant event-related potential (ERP) data editing methods. A widespread approach to analyzing infant ERPs is a trial-by-trial editing process: researchers identify electroencephalogram (EEG) channels containing artifacts and reject trials judged to contain excessive noise. This process can be performed manually by experienced researchers, partially automated by specialized software, or completely automated using an artifact-detection algorithm. Here, we compared the editing process of four different editors (three human experts and an automated algorithm) on the final ERP from an existing infant EEG dataset. Findings reveal that agreement between editors was low, both for the number of included trials and for the number of interpolated channels. Critically, this variability resulted in differences in the final ERP morphology and in the statistical results of the target ERP that each editor obtained. We also analyzed sources of disagreement by estimating the EEG characteristics that each human editor considered when accepting an ERP trial. In sum, our study reveals significant variability in ERP data editing pipelines, with important consequences for the final ERP results. These findings represent an important step toward developing best practices for ERP editing methods in infancy research.
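To make the fully automated end of this editing spectrum concrete, the following is a minimal sketch of threshold-based artifact rejection, assuming epoched EEG stored as a NumPy array. The peak-to-peak threshold, the channel budget, and the simulated data are illustrative assumptions, not the algorithm used in the study.

```python
import numpy as np

def reject_trials(epochs, ptp_threshold_uv=150.0, max_bad_channels=2):
    """Flag trials whose peak-to-peak amplitude exceeds a threshold.

    epochs: array of shape (n_trials, n_channels, n_samples), in microvolts.
    A trial is rejected when more than `max_bad_channels` channels are noisy;
    otherwise the bad channels could be interpolated and the trial kept.
    """
    ptp = epochs.max(axis=2) - epochs.min(axis=2)  # (n_trials, n_channels)
    bad_channels = ptp > ptp_threshold_uv          # noisy-channel mask
    n_bad = bad_channels.sum(axis=1)               # bad channels per trial
    keep = n_bad <= max_bad_channels
    return keep, bad_channels

# Example: 100 trials, 32 channels, 500 samples of simulated EEG.
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 20.0, size=(100, 32, 500))
keep, bad = reject_trials(epochs)
print(f"kept {keep.sum()} of {len(keep)} trials")
```

The point of the sketch is that every free parameter (threshold, channel budget) is a place where human editors can, and evidently do, diverge.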
A frequently used procedure for examining the relationship between categorical and dimensional descriptions of emotions is to ask subjects to place verbal expressions representing emotions in a continuous multidimensional emotional space. This work chooses a different approach. It aims at creating a system that predicts the values of Activation and Valence (AV) directly from the sound of emotional speech utterances, without using their semantic content or any other additional information. The system uses X-vectors to represent the sound characteristics of an utterance and a Support Vector Regressor to estimate the AV values. The system is trained on a pool of three publicly available databases with dimensional annotation of emotions, and the quality of regression is evaluated on the test sets of the same databases. Mapping of categorical emotions to the dimensional space is tested on another pool of eight categorically annotated databases. The aim of the work was to test whether, in each unseen database, the predicted values of Valence and Activation would place emotion-tagged utterances in the AV space in accordance with expectations based on Russell’s circumplex model of affective space. Due to the great variability of speech data, clusters of emotions form overlapping clouds, whose average locations can be represented by centroids. A hypothesis about the position of these centroids is formulated and evaluated, and the system’s ability to separate the emotions is assessed by measuring the distances between centroids. The system works as expected: the positions of the clusters follow the hypothesized rules. Although the variance in individual measurements is still very high and the overlap of emotion clusters is large, the AV coordinates predicted by the system lead to an observable separation of the emotions in accordance with the hypothesis. Knowledge from training databases can therefore be used to predict the AV coordinates of unseen data of various origins, for example to detect high levels of stress or depression. As more dimensionally annotated training data become available, systems predicting emotional dimensions from speech sound will become more robust and usable in practical applications such as call centers, avatars, robots, information-providing systems, and security applications.
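The pipeline described (X-vector embeddings, per-dimension SVR, centroids per emotion category, inter-centroid distances) can be sketched as follows. This is a minimal illustration assuming precomputed embeddings and AV labels; the embedding dimensionality, the RBF kernel, and the stand-in data are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Assumed inputs: X-vector embeddings (n_utterances, 512) and AV annotations.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 512))        # stand-in for real X-vectors
y_valence = rng.uniform(-1, 1, size=200)     # stand-in for annotated valence
y_activation = rng.uniform(-1, 1, size=200)  # stand-in for annotated activation

# One regressor per dimension, as the abstract implies separate AV estimates.
valence_model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
activation_model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
valence_model.fit(X_train, y_valence)
activation_model.fit(X_train, y_activation)

# Map categorically labelled utterances into AV space and take centroids.
X_test = rng.normal(size=(50, 512))
labels = rng.choice(["anger", "sadness", "joy"], size=50)
av = np.column_stack([valence_model.predict(X_test),
                      activation_model.predict(X_test)])
centroids = {e: av[labels == e].mean(axis=0) for e in set(labels)}

# Separation measure: Euclidean distance between emotion centroids.
d = np.linalg.norm(centroids["anger"] - centroids["joy"])
print(f"anger-joy centroid distance: {d:.3f}")
```

Comparing such centroid distances against the positions predicted by Russell's circumplex is, in essence, the evaluation the abstract describes.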
Peter Godfrey-Smith’s Metazoa and Joseph LeDoux’s The Deep History of Ourselves present radically different big pictures regarding the nature, evolution, and distribution of consciousness in animals. In this essay review, I discuss the motivations behind these big pictures and try to steer a course between them.
In their recent paper on “Challenges in mathematical cognition”, Alcock and colleagues (Alcock et al., Challenges in mathematical cognition: A collaboratively-derived research agenda, Journal of Numerical Cognition, 2, 20-41) defined a research agenda through 26 specific research questions. An important dimension of mathematical cognition almost completely absent from their discussion is the cultural constitution of mathematical cognition. Spanning work from a broad range of disciplines – including anthropology, archaeology, cognitive science, history of science, linguistics, philosophy, and psychology – we argue that the cultural dimension is indispensable for any research agenda on mathematical cognition, and we propose a set of exemplary research questions related to it.
This paper examines the empirical relationship between individuals’ cognitive and non-cognitive abilities and COVID-19 compliance behaviors using cross-country data from the Survey of Health, Ageing and Retirement in Europe (SHARE). We find that both cognitive and non-cognitive skills predict responsible health behaviors during the COVID-19 crisis. Episodic memory is the most important cognitive skill, while conscientiousness and neuroticism are the most significant personality traits. There is also some evidence of a role for an internal locus of control in compliance.
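Operationally, "predict" here means regressing a compliance indicator on skill measures. The sketch below illustrates that setup with a logit model on simulated stand-in data; the variable names, coefficients, and specification are illustrative assumptions, not the paper's actual SHARE model.

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in data: compliance regressed on cognitive and
# non-cognitive measures, echoing the abstract's predictors.
rng = np.random.default_rng(0)
n = 500
episodic_memory = rng.normal(size=n)     # e.g., a word-recall score
conscientiousness = rng.normal(size=n)   # Big Five trait
neuroticism = rng.normal(size=n)
internal_locus = rng.normal(size=n)

# Assumed data-generating process for the illustration only.
latent = (0.5 * episodic_memory + 0.4 * conscientiousness
          + 0.3 * neuroticism + 0.2 * internal_locus
          + rng.logistic(size=n))
complied = (latent > 0).astype(int)      # 1 = compliant behavior

X = sm.add_constant(np.column_stack(
    [episodic_memory, conscientiousness, neuroticism, internal_locus]))
model = sm.Logit(complied, X).fit(disp=0)
print(model.summary(xname=["const", "episodic_memory",
                           "conscientiousness", "neuroticism",
                           "internal_locus"]))
```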
Much categorization behavior can be explained by family resemblance: New items are classified by comparison with previously learned exemplars. However, categorization behavior also shows a variety of dimensional biases, where the underlying space has so-called "separable" dimensions: Ease of learning categories depends on how the stimuli align with the separable dimensions of the space. For example, if a set of objects of various sizes and colors can be accurately categorized using a single separable dimension (e.g., size), then category learning will be fast, while if the category is determined by both dimensions, learning will be slow. To capture these dimensional biases, almost all models of categorization supplement family resemblance with either rule-based systems or selective attention to separable dimensions. But these models do not explain how separable dimensions initially arise; they are presumed to be unexplained psychological primitives. We develop, instead, a pure family resemblance version of the Rational Model of Categorization (RMC), which we term the Rational Exclusively Family RESemblance Hierarchy (REFRESH), which does not presuppose any separable dimensions in the space of stimuli. REFRESH infers how the stimuli are clustered and uses a hierarchical prior to learn expectations about the variability of clusters across categories. We first demonstrate the dimensional alignment of natural-category features and then show how through a lifetime of categorization experience REFRESH will learn prior expectations that clusters of stimuli will align with separable dimensions. REFRESH captures the key dimensional biases and also explains their stimulus-dependence and how they are learned and develop.
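The core idea, categorizing by cluster likelihood rather than rules or attention weights, can be illustrated with a toy sketch. This is not REFRESH itself: Gaussian mixtures with axis-aligned (diagonal) covariances stand in for the learned expectation that clusters align with separable dimensions, and the two-dimensional stimuli are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stimuli varying on two separable dimensions (e.g., size and color).
rng = np.random.default_rng(0)
cat_a = rng.normal(loc=[-1.0, 0.0], scale=[0.3, 1.0], size=(50, 2))
cat_b = rng.normal(loc=[+1.0, 0.0], scale=[0.3, 1.0], size=(50, 2))

# One mixture of clusters per category; diagonal covariances encode the
# expectation that within-category clusters align with the dimensions.
models = {
    "A": GaussianMixture(n_components=2, covariance_type="diag",
                         random_state=0).fit(cat_a),
    "B": GaussianMixture(n_components=2, covariance_type="diag",
                         random_state=0).fit(cat_b),
}

def classify(x):
    """Assign a new item to the category whose clusters best resemble it."""
    scores = {c: m.score_samples(x.reshape(1, -1))[0]
              for c, m in models.items()}
    return max(scores, key=scores.get)

print(classify(np.array([-0.8, 0.5])))  # resembles category A exemplars
```

In REFRESH proper, both the number of clusters and the expected cluster variability are inferred under a hierarchical prior rather than fixed in advance, which is what lets dimensional biases emerge from experience.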
This study aims to quantify the effect of several information sources (acoustic, higher-level linguistic, and knowledge of the language's prosodic system) on the perception of prosodic boundaries. An experiment with native and non-native participants investigating the identification of prosodic boundaries in Japanese was conducted. It revealed that non-native listeners, as well as native listeners with access only to acoustic information, can recognize boundaries better than chance. However, good boundary identification requires knowledge of both the prosodic system and higher-level information, each of which carries similar or greater weight than acoustic information alone.