Advanced search in Research products
The following results are related to Digital Humanities and Cultural Heritage. Interested in viewing more results? Visit OpenAIRE - Explore.
239 Research products, page 1 of 24

  Filters applied:
  • Digital Humanities and Cultural Heritage
  • Publications
  • Research data
  • Open Access
  • Article
  • 050105 experimental psychology
  • European Commission
  • EU

  • Open Access English
    Authors: 
    Clara D. Martin; Monika Molnar; Manuel Carreiras;
    Publisher: Nature Publishing Group
    Country: Spain
    Project: EC | BILITERACY (295362), EC | ATHEME (613465)

    Published: 13 May 2016

    The present study investigated the proactive nature of the human brain in language perception. Specifically, we examined whether early proficient bilinguals can use interlocutor identity as a cue for language prediction, using an event-related potentials (ERP) paradigm. Participants were first familiarized, through video segments, with six novel interlocutors who were either monolingual or bilingual. Then, the participants completed an audio-visual lexical decision task in which all the interlocutors uttered words and pseudo-words. Critically, the speech onset started about 350 ms after the beginning of the video. ERP waves between the onset of the visual presentation of the interlocutors and the onset of their speech significantly differed for trials where the language was not predictable (bilingual interlocutors) and trials where the language was predictable (monolingual interlocutors), revealing that visual interlocutor identity can in fact function as a cue for language prediction, even before the onset of the auditory-linguistic signal. This research was funded by the Severo Ochoa program grant SEV-2015-0490, a grant from the Spanish Ministry of Science and Innovation (PSI2012-31448), from FP7/2007-2013 Cooperation grant agreement 613465-AThEME and an ERC grant from the European Research Council (ERC-2011-ADG-295362) to M.C. We thank Antonio Ibañez for his work in stimulus preparation.

  • Publication . Other literature type . Article . 2017
    Open Access
    Authors: 
    Hilary S.Z. Wynne; Linda Wheeldon; Aditi Lahiri;
    Countries: United Kingdom, Norway
    Project: EC | MOR-PHON (695481)

    Four language production experiments examine how English speakers plan compound words during phonological encoding. The experiments tested production latencies in both delayed and online tasks for English noun-noun compounds (e.g., daytime), adjective-noun phrases (e.g., dark time), and monomorphemic words (e.g., denim). In delayed production, speech onset latencies reflect the total number of prosodic units in the target sentence. In online production, speech latencies reflect the size of the first prosodic unit. Compounds are metrically similar to adjective-noun phrases as they contain two lexical and two prosodic words. However, in Experiments 1 and 2, native English speakers treated the compounds as single prosodic units, indistinguishable from simple words, with RT data statistically different from those of the adjective-noun phrases. Experiments 3 and 4 demonstrate that compounds are also treated as single prosodic units in utterances containing clitics (e.g., dishcloths are clean), as they incorporate the verb into a single phonological word (i.e., dishcloths-are). Taken together, these results suggest that English compounds are planned as single recursive prosodic units. Our data require an adaptation of the classic model of phonological encoding to incorporate a distinction between lexical and postlexical prosodic processes, such that lexical boundaries have consequences for post-lexical phonological encoding.

  • Open Access English
    Authors: 
    Jana Hasenäcker; Olga Solaja; Davide Crepaldi;
    Country: Italy
    Project: EC | STATLEARN (679010)

    In visual word identification, readers automatically access word internal information: they recognize orthographically embedded words (e.g., HAT in THAT) and are sensitive to morphological structure (DEAL-ER, BASKET-BALL). The exact mechanisms that govern these processes, however, are not well established yet - how is this information used? What is the role of affixes in this process? To address these questions, we tested the activation of meaning of embedded word stems in the presence or absence of a morphological structure using two semantic categorization tasks in Italian. Participants made category decisions on words (e.g., is CARROT a type of food?). Some no-answers (is CORNER a type of food?) contained category-congruent embedded word stems (i.e., CORN-). Moreover, the embedded stems could be accompanied by a pseudo-suffix (-er in CORNER) or a non-morphological ending (-ce in PEACE) - this allowed gauging the role of pseudo-suffixes in stem activation. The analyses of accuracy and response times revealed that words were harder to reject as members of a category when they contained an embedded word stem that was indeed category-congruent. Critically, this was the case regardless of the presence or absence of a pseudo-suffix. These findings provide evidence that the lexical identification system activates the meaning of embedded word stems when the task requires semantic information. This study brings together research on orthographic neighbors and morphological processing, yielding results that have important implications for models of visual word processing.

  • Open Access English
    Authors: 
    Johann-Mattis List; George Starostin; Lai Yunfan;
    Country: Germany
    Project: EC | CALC (715618)
  • Publication . Article . 2012
    Open Access English
    Authors: 
    Andrew J. Martin; Sharon Peperkamp; Emmanuel Dupoux;
    Project: EC | BOOTPHON (295810)

    Before the end of the first year of life, infants begin to lose the ability to perceive distinctions between sounds that are not phonemic in their native language. It is typically assumed that this developmental change reflects the construction of language-specific phoneme categories, but how these categories are learned largely remains a mystery. Peperkamp, Le Calvez, Nadal, and Dupoux (2006) present an algorithm that can discover phonemes using the distributions of allophones as well as the phonetic properties of the allophones and their contexts. We show that a third type of information source, the occurrence of pairs of minimally differing word forms in speech heard by the infant, is also useful for learning phonemic categories and is in fact more reliable than purely distributional information in data containing a large number of allophones. In our model, learners build an approximation of the lexicon consisting of the high-frequency n-grams present in their speech input, allowing them to take advantage of top-down lexical information without needing to learn words. This may explain how infants have already begun to exhibit sensitivity to phonemic categories before they have a large receptive lexicon.
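
    A minimal sketch of the idea described above (not the authors' algorithm): approximate a lexicon with high-frequency phone n-grams, then treat two phones as distinct phonemes when they occur in a minimal pair. The toy transcriptions and phone symbols are invented for illustration:

      from collections import Counter
      from itertools import combinations

      # Toy phone-transcribed utterances: spaces separate phones, "#" separates words.
      # The inventory and transcriptions are made up purely for illustration.
      corpus = [
          "t a # d a # t a # p a t a",
          "d a # t a # p a d a # t a",
      ]

      # 1. Approximate a lexicon with high-frequency phone n-grams (here, bigrams of
      #    phones inside word boundaries), standing in for word forms the infant
      #    does not yet know.
      ngrams = Counter()
      for utt in corpus:
          for word in utt.split("#"):
              phones = word.split()
              for i in range(len(phones) - 1):
                  ngrams[tuple(phones[i:i + 2])] += 1
      proto_lexicon = {ng for ng, freq in ngrams.items() if freq >= 2}

      # 2. Two phones occurring in a minimal pair (forms differing in exactly one
      #    position) are treated as contrastive, i.e. as distinct phonemes rather
      #    than allophones of a single phoneme.
      contrastive = set()
      for a, b in combinations(proto_lexicon, 2):
          if len(a) == len(b):
              diffs = [(x, y) for x, y in zip(a, b) if x != y]
              if len(diffs) == 1:
                  contrastive.add(frozenset(diffs[0]))

      # Every pair of phones attested in a minimal pair of proto-lexicon entries.
      print(sorted(tuple(sorted(p)) for p in contrastive))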

  • Publication . Other literature type . Article . 2013
    Open Access English
    Authors: 
    Nathaniel J. Smith; Roger Levy;
    Publisher: The Authors. Published by Elsevier B.V.
    Country: United States
    Project: NSF | CAREER: Rational Language... (0953870), EC | XPERIENCE (270273)

    It is well known that real-time human language processing is highly incremental and context-driven, and that the strength of a comprehender’s expectation for each word encountered is a key determinant of the difficulty of integrating that word into the preceding context. In reading, this differential difficulty is largely manifested in the amount of time taken to read each word. While numerous studies over the past thirty years have shown expectation-based effects on reading times driven by lexical, syntactic, semantic, pragmatic, and other information sources, there has been little progress in establishing the quantitative relationship between expectation (or prediction) and reading times. Here, by combining a state-of-the-art computational language model, two large behavioral data-sets, and non-parametric statistical techniques, we establish for the first time the quantitative form of this relationship, finding that it is logarithmic over six orders of magnitude in estimated predictability. This result is problematic for a number of established models of eye movement control in reading, but lends partial support to an optimal perceptual discrimination account of word recognition. We also present a novel model in which language processing is highly incremental well below the level of the individual word, and show that it predicts both the shape and time-course of this effect. At a more general level, this result provides challenges for both anticipatory processing and semantic integration accounts of lexical predictability effects. And finally, this result provides evidence that comprehenders are highly sensitive to relative differences in predictability – even for differences between highly unpredictable words – and thus helps bring theoretical unity to our understanding of the role of prediction at multiple levels of linguistic structure in real-time language comprehension.
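
    The logarithmic relationship reported above amounts to a linear linking function from surprisal (negative log probability) to reading time. A minimal sketch with invented probabilities and coefficients (not the paper's model or data):

      import math

      # Hypothetical per-word predictability estimates from some language model.
      words = [("the", 0.30), ("old", 0.02), ("man", 0.10), ("the", 0.15), ("boats", 0.0005)]

      # Surprisal in bits: -log2 P(word | context).
      surprisals = [(w, -math.log2(p)) for w, p in words]

      # A logarithmic predictability effect is a linear effect of surprisal:
      # RT = intercept + slope * surprisal (coefficients assumed here).
      intercept_ms, slope_ms_per_bit = 250.0, 15.0
      for w, s in surprisals:
          rt = intercept_ms + slope_ms_per_bit * s
          print(f"{w:>6}  surprisal = {s:5.2f} bits  predicted RT = {rt:6.1f} ms")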

  • Open Access
    Authors: 
    Naim, Michelangelo; Katkov, Mikhail; Recanatesi, Stefano; Tsodyks, Misha;
    Project: EC | HBP SGA2 (785907), EC | HBP SGA1 (720270), EC | M-GATE (765549), NIH | Associative Processes in ... (2R01MH055687-21)

    Structured information is easier to remember and recall than random information. In real life, information exhibits multi-level hierarchical organization, such as clauses, sentences, episodes and narratives in language. Here we show that multi-level grouping emerges even when participants perform memory recall experiments with random sets of words. To quantitatively probe brain mechanisms involved in memory structuring, we consider an experimental protocol where participants perform ‘final free recall’ (FFR) of several random lists of words, each of which was first presented and recalled individually. We observe a hierarchy of grouping organizations of FFR; most notably, many participants sequentially recalled relatively long chunks of words from each list before recalling words from another list. Moreover, participants who exhibited the strongest organization during FFR achieved the highest levels of performance. Based on these results, we develop a hierarchical model of memory recall that is broadly compatible with our findings. Our study shows how highly controlled memory experiments with random and meaningless material, when combined with simple models, can be used to quantitatively probe the way meaningful information can efficiently be organized and processed in the brain so that it can be easily retrieved.

    Significance Statement: Information that people communicate to each other is highly structured. For example, a story contains meaningful elements of various degrees of complexity (clauses, sentences, episodes, etc.). Recalling a story, we are chiefly concerned with these meaningful elements and not its exact wording. Here we show that people introduce structure even when recalling random lists of words, by grouping the words into ‘chunks’ of various sizes. Doing so improves their performance. The chunks formed in this way closely correspond in size to the story elements described above. This suggests that our memory is trained to create a structure that resembles the one it typically deals with in real life, and that random material like word lists can be used to quantitatively probe these memory mechanisms.
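
    One simple way to quantify the grouping described above is to measure run lengths of consecutively recalled words that came from the same study list. A minimal sketch with invented lists and an invented recall sequence (not the authors' analysis code):

      from itertools import groupby

      # Invented example: three study lists of four words each (word -> list index).
      lists = {
          "apple": 0, "stone": 0, "river": 0, "candle": 0,
          "tiger": 1, "window": 1, "button": 1, "cloud": 1,
          "pencil": 2, "garden": 2, "mirror": 2, "rocket": 2,
      }

      # Invented final-free-recall output from one participant.
      recall = ["stone", "river", "apple", "tiger", "cloud", "window",
                "rocket", "mirror", "garden", "candle"]

      # A "chunk" is a maximal run of recalled words originating from the same list.
      list_sequence = [lists[w] for w in recall]
      chunks = [(list_id, len(list(run))) for list_id, run in groupby(list_sequence)]

      print(chunks)                                  # [(0, 3), (1, 3), (2, 3), (0, 1)]
      print("mean chunk length:", sum(n for _, n in chunks) / len(chunks))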

  • Open Access English
    Authors: 
    Arthur Paté; Lapo Boschi; Danièle Dubois; Jean-Loïc Le Carrou; Benjamin K. Holtzman;
    Publisher: HAL CCSD
    Countries: Italy, France
    Project: EC | WAVES (641943)

    Auditory display can complement visual representations in order to better interpret scientific data. A previous article showed that the free categorization of “audified seismic signals” by listeners can be explained by various geophysical parameters. The present article confirms this result and shows that the cognitive representations of listeners can be used as heuristics for the characterization of seismic signals. Free sorting tests are conducted with audified seismic signals, with the earthquake/seismometer relative location, playback audification speed, and earthquake magnitude as controlled variables. The analysis is built on partitions (categories) and verbal comments (categorization criteria). Participants from different backgrounds (acousticians or geoscientists) are contrasted in order to investigate the role of the participants' expertise. Sounds resulting from different earthquake/station distances or azimuths, from crustal structure and topography along the path of the seismic wave, and from earthquake magnitude are found to (a) be sorted into different categories and (b) elicit different verbal descriptions, mainly focused on the perceived number of events, frequency content, and background noise level. Building on these perceptual results, acoustic descriptors are computed and geophysical interpretations are proposed in order to match the verbal descriptions. Another result is the robustness of the categories with respect to the audification speed factor.
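
    Audification here means replaying a seismogram as sound at a much higher rate so that it falls into the audible range. A minimal sketch of that step, assuming a trace sampled at 100 Hz and using scipy to write an audio file (the synthetic trace, the rates, and the file name are assumptions, not the study's materials):

      import numpy as np
      from scipy.io import wavfile

      # Synthetic stand-in for a seismometer trace sampled at 100 Hz (15 minutes).
      seismic_rate_hz = 100
      trace = np.random.default_rng(0).standard_normal(seismic_rate_hz * 900)

      # Audification: write the same samples at an audio rate. Playing them back at
      # 44100 Hz speeds the signal up by a factor of 441; this speed factor is one
      # of the controlled variables in the study.
      audio_rate_hz = 44100
      audio = (trace / np.max(np.abs(trace))).astype(np.float32)  # normalize to [-1, 1]
      wavfile.write("audified_trace.wav", audio_rate_hz, audio)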

  • Open Access English
    Authors: 
    Chuang Y; Voller M; Elnaz Shafaei-Bajestan; Susanne Gahl; Peter Hendrix; Baayen Rh;
    Publisher: Zenodo
    Project: EC | WIDE (742545)

    Nonwords are often used to clarify how lexical processing takes place in the absence of semantics. This study shows that nonwords are not semantically vacuous. We used Linear Discriminative Learning (Baayen et al., 2019) to estimate the meanings of nonwords in the MALD database (Tucker et al., 2018) from the speech signal. We show that measures gauging nonword semantics significantly improve model fit for both acoustic durations and RTs. Although nonwords do not evoke meanings that afford conscious reflexion, they do make contact with the semantic space, and the angles and distances of nonwords with respect to actual words co-determine articulation and lexicality decisions.
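
    A minimal sketch of the kind of linear form-to-meaning mapping that Linear Discriminative Learning relies on, with made-up cue and semantic matrices (the actual study derives cues from the speech signal of the MALD items, which is not reproduced here):

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy data: 50 words, 20 binary form cues (e.g. triphone indicators), 10 semantic dimensions.
      C = rng.integers(0, 2, size=(50, 20)).astype(float)   # cue matrix (words x cues)
      S = rng.standard_normal((50, 10))                      # semantic matrix (words x dims)

      # LDL-style mapping: solve C @ F ≈ S by least squares.
      F, *_ = np.linalg.lstsq(C, S, rcond=None)

      # A nonword is just a new cue vector; its estimated meaning is its projection
      # into the semantic space. Its angles and distances to real-word meanings can
      # then be used as predictors of durations and reaction times, as in the abstract.
      nonword_cues = rng.integers(0, 2, size=(1, 20)).astype(float)
      s_hat = nonword_cues @ F

      cosines = (S @ s_hat.T).ravel() / (np.linalg.norm(S, axis=1) * np.linalg.norm(s_hat))
      print("closest real word index:", int(np.argmax(cosines)))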

  • Open Access English
    Authors: 
    Kun Sun; Rong Wang;
    Publisher: Universität Stuttgart
    Country: Germany
    Project: EC | WIDE (742545)

    This study applies relative entropy to a naturalistic large-scale corpus to calculate the differences among L2 (second language) learners at different levels. We chose lemmas, tokens, POS trigrams, and conjunctions to represent lexicon and grammar, and used relative entropy to detect patterns of language proficiency development across L2 groups. The results show that information-distribution discrimination regarding lexical and grammatical differences continues to increase from L2 learners at a lower level to those at a higher level. This result is consistent with the assumption that, in the course of second language acquisition, L2 learners develop towards a more complex and diverse use of language. This study also uses time-series statistical methods to process the data on L2 differences yielded by traditional frequency-based methods applied to the same L2 corpus, for comparison with the relative-entropy results. The results from the traditional methods, however, rarely show regularity. Compared with the algorithms in traditional approaches, relative entropy performs much better in detecting L2 proficiency development. In this sense, we have developed an effective and practical algorithm for stably detecting and predicting developments in L2 learners’ language proficiency. Funding: H2020 European Research Council.
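
    A minimal sketch of the core measure, relative entropy (Kullback-Leibler divergence), computed between smoothed lemma-frequency distributions of two learner-level subcorpora; the token lists below are invented stand-ins for real L2 subcorpora:

      import math
      from collections import Counter

      def kl_divergence(tokens_p, tokens_q, alpha=1.0):
          """D(P || Q) in bits over a shared vocabulary, with add-alpha smoothing."""
          vocab = set(tokens_p) | set(tokens_q)
          cp, cq = Counter(tokens_p), Counter(tokens_q)
          total_p = len(tokens_p) + alpha * len(vocab)
          total_q = len(tokens_q) + alpha * len(vocab)
          d = 0.0
          for w in vocab:
              p = (cp[w] + alpha) / total_p
              q = (cq[w] + alpha) / total_q
              d += p * math.log2(p / q)
          return d

      # Invented lemma sequences standing in for lower- and higher-level subcorpora.
      lower  = "i like dog i like cat i go school".split()
      higher = "the study suggests that learners gradually diversify their lexical choices".split()

      print("D(lower || higher) =", round(kl_divergence(lower, higher), 3), "bits")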
