Rapid extraction of gist from visual text and its influence on word recognition.
Asano, Michiko; Yokosawa, Kazuhiko
2011-01-01
Two experiments explored rapid extraction of gist from a visual text and its influence on word recognition. In both, a short text (sentence) containing a target word was presented for 200 ms and was followed by a target recognition task. Results showed that participants recognized contextually anomalous word targets less frequently than contextually consistent counterparts (Experiment 1). This context effect was obtained when sentences contained the same semantic content but with disrupted syntactic structure (Experiment 2). Results demonstrate that words in a briefly presented visual sentence are processed in parallel and that rapid extraction of sentence gist relies on a primitive representation of sentence context (termed protocontext) that is semantically activated by the simultaneous presentation of multiple words (i.e., a sentence) before syntactic processing.
Willems, Roel M; Clevis, Krien; Hagoort, Peter
2011-09-01
We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that the presentation of a visual scene that is in itself neutral intensifies the percept of fear or suspense induced by another channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared both to reading the sentence alone and to reading non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function for emotional information across domains such as vision and language.
ERIC Educational Resources Information Center
Dorman, Michael F.; Liss, Julie; Wang, Shuai; Berisha, Visar; Ludwig, Cimarron; Natale, Sarah Cook
2016-01-01
Purpose: Five experiments probed auditory-visual (AV) understanding of sentences by users of cochlear implants (CIs). Method: Sentence material was presented in auditory (A), visual (V), and AV test conditions to listeners with normal hearing and CI users. Results: (a) Most CI users report that most of the time, they have access to both A and V…
Is Broca's Area Involved in the Processing of Passive Sentences? An Event-Related fMRI Study
ERIC Educational Resources Information Center
Yokoyama, Satoru; Watanabe, Jobu; Iwata, Kazuki; Ikuta, Naho; Haji, Tomoki; Usui, Nobuo; Taira, Masato; Miyamoto, Tadao; Nakamura, Wataru; Sato, Shigeru; Horie, Kaoru; Kawashima, Ryuta
2007-01-01
We used functional magnetic resonance imaging (fMRI) to investigate whether activation in Broca's area is greater during the processing of passive versus active sentences in the brains of healthy subjects. Twenty Japanese native speakers performed a visual sentence comprehension task in which they were asked to read a visually presented sentence…
Cross-Language Priming of Word Meaning during Second Language Sentence Comprehension
ERIC Educational Resources Information Center
Yuan, Yanli; Woltz, Dan; Zheng, Robert
2010-01-01
The experiment investigated the benefit to second language (L2) sentence comprehension of priming word meanings with brief visual exposure to first language (L1) translation equivalents. Native English speakers learning Mandarin evaluated the validity of aurally presented Mandarin sentences. For selected words in half of the sentences there was…
Perea, Manuel; Jiménez, María; Martín-Suesta, Miguel; Gómez, Pablo
2015-04-01
This article explores how letter position coding is attained during braille reading and its implications for models of word recognition. When text is presented visually, the reading process easily adjusts to the jumbling of some letters (jugde-judge), with a small cost in reading speed. Two explanations have been proposed: One relies on a general mechanism of perceptual uncertainty at the visual level, and the other focuses on the activation of an abstract level of representation (i.e., bigrams) that is shared by all orthographic codes. Thus, these explanations make differential predictions about reading in a tactile modality. In the present study, congenitally blind readers read sentences presented on a braille display that tracked the finger position. The sentences either were intact or involved letter transpositions. A parallel experiment was conducted in the visual modality. Results revealed a substantially greater reading cost for the sentences with transposed-letter words in braille readers. In contrast with the findings with sighted readers, in which there is a cost of transpositions in the external (initial and final) letters, the reading cost in braille readers occurs serially, with a large cost for initial letter transpositions. Thus, these data suggest that the letter-position-related effects in visual word recognition are due to the characteristics of the visual stream.
ERIC Educational Resources Information Center
Duyck, Wouter; Van Assche, Eva; Drieghe, Denis; Hartsuiker, Robert J.
2007-01-01
Recent research on bilingualism has shown that lexical access in visual word recognition by bilinguals is not selective with respect to language. In the present study, the authors investigated language-independent lexical access in bilinguals reading sentences, which constitutes a strong unilingual linguistic context. In the first experiment,…
ERIC Educational Resources Information Center
Cacciari, C.; Bolognini, N.; Senna, I.; Pellicciari, M. C.; Miniussi, C.; Papagno, C.
2011-01-01
We used Transcranial Magnetic Stimulation (TMS) to assess whether reading literal, non-literal (i.e., metaphorical, idiomatic) and fictive motion sentences modulates the activity of the motor system. Sentences were divided into three segments visually presented one at a time: the noun phrase, the verb and the final part of the sentence. Single…
The role of visual imagery in the retention of information from sentences.
Drose, G S; Allen, G L
1994-01-01
We conducted two experiments to evaluate a multiple-code model for sentence memory that posits both propositional and visual representational systems. Both experiments involved recognition memory. The results of Experiment 1 indicated that subjects' recognition memory for concrete sentences was superior to their recognition memory for abstract sentences. Instructions to use visual imagery to enhance recognition performance yielded no effects. Experiment 2 tested the prediction that interference by a visual task would differentially affect recognition memory for concrete sentences. Results showed the interference task to have had a detrimental effect on recognition memory for both concrete and abstract sentences. Overall, the evidence provided partial support for both a multiple-code model and a semantic integration model of sentence memory.
Perceptual Span Depends on Font Size during the Reading of Chinese Sentences
ERIC Educational Resources Information Center
Yan, Ming; Zhou, Wei; Shu, Hua; Kliegl, Reinhold
2015-01-01
The present study explored the perceptual span (i.e., the physical extent of an area from which useful visual information is extracted during a single fixation) during the reading of Chinese sentences in 2 experiments. In Experiment 1, we tested whether the rightward span can go beyond 3 characters when visually similar masks were used. Results…
Object shape and orientation do not routinely influence performance during language processing.
Rommers, Joost; Meyer, Antje S; Huettig, Falk
2013-11-01
The role of visual representations during language processing remains unclear: They could be activated as a necessary part of the comprehension process, or they could be less crucial and influence performance in a task-dependent manner. In the present experiments, participants read sentences about an object. The sentences implied that the object had a specific shape or orientation. They then either named a picture of that object (Experiments 1 and 3) or decided whether the object had been mentioned in the sentence (Experiment 2). Orientation information did not reliably influence performance in any of the experiments. Shape representations influenced performance most strongly when participants were asked to compare a sentence with a picture or when they were explicitly asked to use mental imagery while reading the sentences. Thus, in contrast to previous claims, implied visual information often does not contribute substantially to the comprehension process during normal reading.
Liu, Hong; Zhang, Gaoyan; Liu, Baolin
2017-04-01
In the Chinese language, a polyphone is a kind of special character that has more than one pronunciation, with each pronunciation corresponding to a different meaning. Here, we aimed to reveal the cognitive processing of audio-visual information integration of polyphones in a sentence context using the event-related potential (ERP) method. Sentences ending with polyphones were presented to subjects simultaneously in both an auditory and a visual modality. Four experimental conditions were set in which the visual presentations were the same, but the pronunciations of the polyphones were: the correct pronunciation; another pronunciation of the polyphone; a semantically appropriate pronunciation but not the pronunciation of the polyphone; or a semantically inappropriate pronunciation but also not the pronunciation of the polyphone. The behavioral results demonstrated significant differences in response accuracies when judging the semantic meanings of the audio-visual sentences, which reflected the different demands on cognitive resources. The ERP results showed that in the early stage, abnormal pronunciations were represented by the amplitude of the P200 component. Interestingly, because the phonological information mediated access to the lexical semantics, the amplitude and latency of the N400 component changed linearly across conditions, which may reflect the gradually increased semantic mismatch in the four conditions when integrating the auditory pronunciation with the visual information. Moreover, the amplitude of the late positive shift (LPS) showed a significant correlation with the behavioral response accuracies, demonstrating that the LPS component reveals the demand of cognitive resources for monitoring and resolving semantic conflicts when integrating the audio-visual information.
Effects of speaker emotional facial expression and listener age on incremental sentence processing.
Carminati, Maria Nella; Knoeferle, Pia
2013-01-01
We report two visual-world eye-tracking experiments that investigated how and with which time course emotional information from a speaker's face affects younger (N = 32, Mean age = 23) and older (N = 32, Mean age = 64) listeners' visual attention and language comprehension as they processed emotional sentences in a visual context. The age manipulation tested predictions by socio-emotional selectivity theory of a positivity effect in older adults. After viewing the emotional face of a speaker (happy or sad) on a computer display, participants were presented simultaneously with two pictures depicting opposite-valence events (positive and negative; IAPS database) while they listened to a sentence referring to one of the events. Participants' eye fixations on the pictures while processing the sentence were increased when the speaker's face was (vs. wasn't) emotionally congruent with the sentence. The enhancement occurred from the early stages of referential disambiguation and was modulated by age. For the older adults it was more pronounced with positive faces, and for the younger ones with negative faces. These findings demonstrate for the first time that emotional facial expressions, similarly to previously-studied speaker cues such as eye gaze and gestures, are rapidly integrated into sentence processing. They also provide new evidence for positivity effects in older adults during situated sentence processing.
Grammatical Encoding and Learning in Agrammatic Aphasia: Evidence from Structural Priming
Cho-Reyes, Soojin; Mack, Jennifer E.; Thompson, Cynthia K.
2017-01-01
The present study addressed open questions about the nature of sentence production deficits in agrammatic aphasia. In two structural priming experiments, 13 aphasic and 13 age-matched control speakers repeated visually- and auditorily-presented prime sentences, and then used visually-presented word arrays to produce dative sentences. Experiment 1 examined whether agrammatic speakers form structural and thematic representations during sentence production, whereas Experiment 2 tested the lasting effects of structural priming in lags of two and four sentences. Results of Experiment 1 showed that, like unimpaired speakers, the aphasic speakers evinced intact structural priming effects, suggesting that they are able to generate such representations. Unimpaired speakers also evinced reliable thematic priming effects, whereas agrammatic speakers did so in some experimental conditions, suggesting that access to thematic representations may be intact. Results of Experiment 2 showed structural priming effects of comparable magnitude for aphasic and unimpaired speakers. In addition, both groups showed lasting structural priming effects in both lag conditions, consistent with implicit learning accounts. In both experiments, aphasic speakers with more severe language impairments exhibited larger priming effects, consistent with the “inverse preference” prediction of implicit learning accounts. The findings indicate that agrammatic speakers are sensitive to structural priming across levels of representation and that such effects are lasting, suggesting that structural priming may be beneficial for the treatment of sentence production deficits in agrammatism. PMID:28924328
Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.
Hunter, Cynthia R; Pisoni, David B
Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences. Under mild spectral degradation (eight-channel vocoding), the effect of load was present for low-predictability sentences but not for high-predictability sentences. There were also reliable downstream effects of speech degradation and sentence predictability on recall of the preload digit sequences. Long digit sequences were more easily recalled following spoken sentences that were less spectrally degraded. When digits were reported after identification of sentence-final words, short digit sequences were recalled more accurately when the spoken sentences were predictable. Extrinsic cognitive load can impair recognition of spectrally degraded spoken words in a sentence recognition task. Cognitive load affected word identification in both high- and low-predictability sentences, suggesting that load may impact both context use and lower-level perceptual processes. Consistent with prior work, LE also had downstream effects on memory for visual digit sequences. Results support the proposal that extrinsic cognitive load and LE induced by signal degradation both draw on a central, limited pool of cognitive resources that is used to recognize spoken words in sentences under adverse listening conditions.
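The spectral degradation manipulation in this study relies on noise vocoding. As a rough illustration of the technique, the following is a minimal sketch of a generic n-channel noise vocoder, not the authors' stimulus-preparation code; the band edges, filter order, and normalization are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=8000.0):
    """Minimal n-channel noise vocoder (sketch): filter speech into
    log-spaced bands, extract each band's amplitude envelope, and use it
    to modulate band-limited noise. Assumes fs > 2 * f_hi."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    noise = np.random.randn(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        band = filtfilt(b, a, speech)        # analysis band of the speech
        envelope = np.abs(hilbert(band))     # amplitude envelope (Hilbert magnitude)
        carrier = filtfilt(b, a, noise)      # noise limited to the same band
        out += envelope * carrier            # envelope-modulated noise band
    return out / (np.max(np.abs(out)) + 1e-12)  # peak-normalize
```

Reducing n_channels from eight to four coarsens the surviving spectral detail, which is the manipulation used here to vary the listening effort needed to understand the sentences.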
Can colours be used to segment words when reading?
Perea, Manuel; Tejero, Pilar; Winskel, Heather
2015-07-01
Rayner, Fischer, and Pollatsek (1998, Vision Research) demonstrated that reading unspaced text in Indo-European languages produces a substantial reading cost in word identification (as deduced from an increased word-frequency effect on target words embedded in the unspaced vs. spaced sentences) and in eye movement guidance (as deduced from landing sites closer to the beginning of the words in unspaced sentences). However, the addition of spaces between words comes with a cost: nearby words may fall outside high-acuity central vision, thus reducing the potential benefits of parafoveal processing. In the present experiment, we introduced a salient visual cue intended to facilitate the process of word segmentation without compromising visual acuity: each alternating word was printed in a different colour. Results revealed only a small reading cost for the unspaced alternating-colour sentences relative to the spaced sentences. Thus, the present data demonstrate that colour can be used to segment words for readers of spaced orthographies. Copyright © 2015 Elsevier B.V. All rights reserved.
Incremental comprehension of spoken quantifier sentences: Evidence from brain potentials.
Freunberger, Dominik; Nieuwland, Mante S
2016-09-01
Do people incrementally incorporate the meaning of quantifier expressions to understand an unfolding sentence? Most previous studies concluded that quantifiers do not immediately influence how a sentence is understood, based on the observation that online N400-effects differed from offline plausibility judgments. Those studies, however, used serial visual presentation (SVP), which involves unnatural reading. In the current ERP-experiment, we presented spoken positive and negative quantifier sentences ("Practically all/practically no postmen prefer delivering mail, when the weather is good/bad during the day"). In contrast to the results obtained in a previously reported SVP-study (Nieuwland, 2016), sentence truth-value N400 effects occurred in positive and negative quantifier sentences alike, reflecting fully incremental quantifier comprehension. This suggests that the prosodic information available during spoken language comprehension supports the generation of online predictions for upcoming words and that, at least for quantifier sentences, comprehension of spoken language may proceed more incrementally than comprehension during SVP reading. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Moving Words: Dynamic Representations in Language Comprehension
ERIC Educational Resources Information Center
Zwaan, Rolf A.; Madden, Carol J.; Yaxley, Richard H.; Aveyard, Mark E.
2004-01-01
Eighty-two participants listened to sentences and then judged whether two sequentially presented visual objects were the same. On critical trials, participants heard a sentence describe the motion of a ball toward or away from the observer (e.g., ''The pitcher hurled the softball to you''). Seven hundred and fifty milliseconds after the offset of…
Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Rahimi, Zahra; Mayahi, Anis
2017-02-01
It is important for clinicians such as speech-language pathologists and audiologists to develop more efficient procedures for assessing the development of auditory, speech, and language skills in children using hearing aids and/or cochlear implants compared with their peers with normal hearing. The aim of this study was therefore to compare the performance of 5-to-7-year-old Persian-language children with and without hearing loss on visual-only, auditory-only, and audiovisual presentations of a sentence repetition task. The research was administered as a cross-sectional study. The sample size was 92 Persian 5-7 year old children: 60 with normal hearing and 32 with hearing loss. The children with hearing loss were recruited from the Soroush rehabilitation center for Persian-language children with hearing loss in Shiraz, Iran, through a consecutive sampling method. All the children had a unilateral cochlear implant or bilateral hearing aids. The assessment tool was the Sentence Repetition Test. The study included three computer-based experiments: visual-only, auditory-only, and audiovisual. The scores were compared within and among the three groups through statistical tests at α = 0.05. The sentence repetition scores for V-only, A-only, and AV presentation differed significantly in all three groups; in other words, the highest to lowest scores belonged respectively to the audiovisual, auditory-only, and visual-only formats in the children with normal hearing (P < 0.01), cochlear implants (P < 0.01), and hearing aids (P < 0.01). In addition, there was no significant correlation between the visual-only and audiovisual sentence repetition scores across all the 5-to-7-year-old children (r = 0.179, n = 92, P = 0.088), but audiovisual sentence repetition scores were strongly correlated with auditory-only scores (r = 0.943, n = 92, P < 0.001). According to the study's findings, audiovisual integration occurs in 5-to-7-year-old Persian children using hearing aids or cochlear implants during sentence repetition, similar to their peers with normal hearing. Therefore, it is recommended that audiovisual sentence repetition be used as a clinical criterion for auditory development in Persian-language children with hearing loss. Copyright © 2016. Published by Elsevier B.V.
Processing counterfactual and hypothetical conditionals: an fMRI investigation.
Kulakova, Eugenia; Aichhorn, Markus; Schurz, Matthias; Kronbichler, Martin; Perner, Josef
2013-05-15
Counterfactual thinking is ubiquitous in everyday life and an important aspect of cognition and emotion. Although counterfactual thought has been argued to differ from processing factual or hypothetical information, imaging data which elucidate these differences on a neural level are still scarce. We investigated the neural correlates of processing counterfactual sentences under visual and aural presentation. We compared conditionals in subjunctive mood which explicitly contradicted previously presented facts (i.e. counterfactuals) to conditionals framed in indicative mood which did not contradict factual world knowledge and thus conveyed a hypothetical supposition. Our results show activation in right occipital cortex (cuneus) and right basal ganglia (caudate nucleus) during counterfactual sentence processing. Importantly the occipital activation is not only present under visual presentation but also with purely auditory stimulus presentation, precluding a visual processing artifact. Thus our results can be interpreted as reflecting the fact that counterfactual conditionals pragmatically imply the relevance of keeping in mind both factual and supposed information whereas the hypothetical conditionals imply that real world information is irrelevant for processing the conditional and can be omitted. The need to sustain representations of factual and suppositional events during counterfactual sentence processing requires increased mental imagery and integration efforts. Our findings are compatible with predictions based on mental model theory. Copyright © 2013 Elsevier Inc. All rights reserved.
Brown, Jessica A; Hux, Karen; Knollman-Porter, Kelly; Wallace, Sarah E
2016-01-01
Concomitant visual and cognitive impairments following traumatic brain injuries (TBIs) may be problematic when the visual modality serves as a primary source for receiving information. Further difficulties comprehending visual information may occur when interpretation requires processing inferential rather than explicit content. The purpose of this study was to compare the accuracy with which people with and without severe TBI interpreted information in contextually rich drawings. Fifteen adults with and 15 adults without severe TBI. Repeated-measures between-groups design. Participants were asked to match images to sentences that either conveyed explicit (ie, main action or background) or inferential (ie, physical or mental inference) information. The researchers compared accuracy between participant groups and among stimulus conditions. Participants with TBI demonstrated significantly poorer accuracy than participants without TBI extracting information from images. In addition, participants with TBI demonstrated significantly higher response accuracy when interpreting explicit rather than inferential information; however, no significant difference emerged between sentences referencing main action versus background information or sentences providing physical versus mental inference information for this participant group. Difficulties gaining information from visual environmental cues may arise for people with TBI given their difficulties interpreting inferential content presented through the visual modality.
Task effects on BOLD signal correlates of implicit syntactic processing
Caplan, David
2010-01-01
BOLD signal was measured in sixteen participants who made timed font change detection judgments in visually presented sentences that varied in syntactic structure and the order of animate and inanimate nouns. Behavioral data indicated that sentences were processed to the level of syntactic structure. BOLD signal increased in visual association areas bilaterally and left supramarginal gyrus in the contrast of sentences with object- and subject-extracted relative clauses without font changes in which the animacy order of the nouns biased against the syntactically determined meaning of the sentence. This result differs from the findings in a non-word detection task (Caplan et al, 2008a), in which the same contrast led to increased BOLD signal in the left inferior frontal gyrus. The difference in areas of activation indicates that the sentences were processed differently in the two tasks. These differences were further explored in an eye tracking study using the materials in the two tasks. Issues pertaining to how parsing and interpretive operations are affected by a task that is being performed, and how this might affect BOLD signal correlates of syntactic contrasts, are discussed. PMID:20671983
Huysmans, Elke; Bolk, Elske; Zekveld, Adriana A; Festen, Joost M; de Groot, Annette M B; Goverts, S Theo
2016-01-01
The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing. In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups. Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, while no differences were observed between the AHI and NH group. This outcome pattern sustained when comparisons were restricted to subgroups of AHI and CHI adults, matched for current auditory speech reception abilities. The data yielded no differences between groups in performance in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Also, no difference between groups was found in the sensitivity to morphosyntactic distortions when processing short masked sentences, presented visually. These data showed that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities. However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded sentences. Aspects of linguistic proficiency that are affected by CHI thus do not seem to play a role in masked sentence recognition in the visual modality.
Risse, Sarah
2014-07-15
The visual span (or "uncrowded window"), which limits the sensory information on each fixation, has been shown to determine reading speed in tasks involving rapid serial visual presentation of single words. The present study investigated whether this is also true for fixation durations during sentence reading when all words are presented at the same time and parafoveal preview of words prior to fixation typically reduces later word-recognition times. If so, a larger visual span may allow more efficient parafoveal processing and thus faster reading. In order to test this hypothesis, visual span profiles (VSPs) were collected from 60 participants and related to data from an eye-tracking reading experiment. The results confirmed a positive relationship between the readers' VSPs and fixation-based reading speed. However, this relationship was not determined by parafoveal processing. There was no evidence that individual differences in VSPs predicted differences in parafoveal preview benefit. Nevertheless, preview benefit correlated with reading speed, suggesting an independent effect on oculomotor control during reading. In summary, the present results indicate a more complex relationship between the visual span, parafoveal processing, and reading speed than initially assumed. © 2014 ARVO.
Buchweitz, Augusto; Mason, Robert A.; Tomitch, Lêda M. B.; Just, Marcel Adam
2010-01-01
The study compared the brain activation patterns associated with the comprehension of written and spoken Portuguese sentences. An fMRI study measured brain activity while participants read and listened to sentences about general world knowledge. Participants had to decide if the sentences were true or false. To mirror the transient nature of spoken sentences, visual input was presented in rapid serial visual presentation format. The results showed a common core of amodal left inferior frontal and middle temporal gyri activation, as well as modality specific brain activation associated with listening and reading comprehension. Reading comprehension was associated with more left-lateralized activation and with left inferior occipital cortex (including fusiform gyrus) activation. Listening comprehension was associated with extensive bilateral temporal cortex activation and more overall activation of the whole cortex. Results also showed individual differences in brain activation for reading comprehension. Readers with lower working memory capacity showed more activation of right-hemisphere areas (spillover of activation) and more activation in the prefrontal cortex, potentially associated with more demand placed on executive control processes. Readers with higher working memory capacity showed more activation in a frontal-posterior network of areas (left angular and precentral gyri, and right inferior frontal gyrus). The activation of this network may be associated with phonological rehearsal of linguistic information when reading text presented in rapid serial visual format. The study demonstrates the modality fingerprints for language comprehension and indicates how low- and high working memory capacity readers deal with reading text presented in serial format. PMID:21526132
Game-Based Augmented Visual Feedback for Enlarging Speech Movements in Parkinson's Disease.
Yunusova, Yana; Kearney, Elaine; Kulkarni, Madhura; Haworth, Brandon; Baljko, Melanie; Faloutsos, Petros
2017-06-22
The purpose of this pilot study was to demonstrate the effect of augmented visual feedback on acquisition and short-term retention of a relatively simple instruction to increase movement amplitude during speaking tasks in patients with dysarthria due to Parkinson's disease (PD). Nine patients diagnosed with PD, hypokinetic dysarthria, and impaired speech intelligibility participated in a training program aimed at increasing the size of their articulatory (tongue) movements during sentences. Two sessions were conducted: a baseline and training session, followed by a retention session 48 hr later. At baseline, sentences were produced at normal, loud, and clear speaking conditions. Game-based visual feedback regarding the size of the articulatory working space (AWS) was presented during training. Eight of nine participants benefited from training, increasing their sentence AWS to a greater degree following feedback as compared with the baseline loud and clear conditions. The majority of participants were able to demonstrate the learned skill at the retention session. This study demonstrated the feasibility of augmented visual feedback via articulatory kinematics for training movement enlargement in patients with hypokinesia due to PD. https://doi.org/10.23641/asha.5116840.
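The articulatory working space (AWS) named above is commonly operationalized as the convex hull of the tracked articulator positions. The sketch below shows that common operationalization as an assumption for illustration; it is not necessarily the authors' exact computation, and the simulated data are purely hypothetical.

```python
import numpy as np
from scipy.spatial import ConvexHull

def articulatory_working_space(positions):
    """positions: (n_samples, 3) array of tongue-sensor coordinates (mm)
    tracked during a sentence. Returns the convex-hull volume (mm^3),
    one common operationalization of AWS (an assumption here)."""
    return ConvexHull(positions).volume

# Hypothetical before/after comparison: enlarged movements -> larger AWS
rng = np.random.default_rng(0)
baseline = articulatory_working_space(rng.normal(scale=5.0, size=(500, 3)))
trained = articulatory_working_space(rng.normal(scale=7.0, size=(500, 3)))
print(f"AWS ratio after training: {trained / baseline:.2f}")  # > 1.0
```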
Effects of syntactic structure in the memory of concrete and abstract Chinese sentences.
Ho, C S; Chen, H C
1993-09-01
Smith (1981) found that concrete English sentences were better recognized than abstract sentences and that this concreteness effect was potent only when the concrete sentence was also affirmative; the effect reversed when the concrete sentence was negative. These results were partially replicated in Experiment 1 using materials from a very different language (i.e., Chinese): concrete-affirmative sentences were better remembered than concrete-negative and abstract sentences, but no reliable difference was found between the latter two types. In Experiment 2, the task was modified by using a visual presentation instead of an oral one as in Experiment 1. Both concrete-affirmative and concrete-negative sentences were better memorized than abstract ones in Experiment 2. The findings in the two experiments are explained by a combination of the dual-coding model and Marschark's (1985) item-specific and relational processing. The differential effects of experience with different language systems on processing verbal materials in memory are also discussed.
Summarizing Audiovisual Contents of a Video Program
NASA Astrophysics Data System (ADS)
Gong, Yihong
2003-12-01
In this paper, we focus on video programs that are intended to disseminate information and knowledge, such as news, documentaries, and seminars, and present an audiovisual summarization system that summarizes the audio and visual contents of a given video separately and then integrates the two summaries with a partial alignment. The audio summary is created by selecting the spoken sentences that best present the main content of the audio speech, while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A bipartite-graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these requirements. With the proposed system, we strive to produce a video summary that: (1) provides a natural visual and audio content overview, and (2) maximizes the coverage of both the audio and visual contents of the original video without sacrificing either of them.
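As a rough sketch of how such a bipartite alignment can be solved, the snippet below scores sentence-shot pairs and finds a maximum-affinity one-to-one matching with the Hungarian algorithm. The affinity matrix and the scipy-based solver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_summaries(score):
    """score[i, j]: assumed affinity between spoken sentence i of the audio
    summary and shot j of the visual summary (e.g., temporal overlap plus a
    bonus when the speaker's face is detected in the shot). Returns a
    maximum-affinity one-to-one alignment."""
    rows, cols = linear_sum_assignment(score, maximize=True)
    # Keep only pairings with positive affinity; others stay unaligned.
    return [(i, j) for i, j in zip(rows, cols) if score[i, j] > 0]

score = np.array([[0.9, 0.1, 0.0],   # sentence 0 vs. shots 0-2
                  [0.2, 0.0, 0.7]])  # sentence 1 vs. shots 0-2
print(align_summaries(score))        # -> [(0, 0), (1, 2)]
```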
Visual readability analysis: how to make your writings easier to read.
Oelke, Daniela; Spretke, David; Stoffel, Andreas; Keim, Daniel A
2012-05-01
We present a tool that is specifically designed to support a writer in revising a draft version of a document. In addition to showing which paragraphs and sentences are difficult to read and understand, we assist the reader in understanding why this is the case. This requires features that are expressive predictors of readability, and are also semantically understandable. In the first part of the paper, we, therefore, discuss a semiautomatic feature selection approach that is used to choose appropriate measures from a collection of 141 candidate readability features. In the second part, we present the visual analysis tool VisRA, which allows the user to analyze the feature values across the text and within single sentences. Users can choose between different visual representations accounting for differences in the size of the documents and the availability of information about the physical and logical layout of the documents. We put special emphasis on providing as much transparency as possible to ensure that the user can purposefully improve the readability of a sentence. Several case studies are presented that show the wide range of applicability of our tool. Furthermore, an in-depth evaluation assesses the quality of the measure and investigates how well users do in revising a text with the help of the tool.
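To make the feature-selection step concrete, the sketch below computes a handful of surface-level candidate readability features per sentence and ranks them by correlation with human readability ratings. The specific features and the ranking criterion are illustrative assumptions; they are not the 141 candidate features evaluated for VisRA.

```python
import re
import numpy as np

def sentence_features(sentence):
    """A few surface-level readability predictors for one sentence."""
    words = re.findall(r"[A-Za-z']+", sentence)
    n = max(len(words), 1)
    return {
        "words_per_sentence": len(words),
        "mean_word_length": sum(len(w) for w in words) / n,
        "long_word_ratio": sum(len(w) > 6 for w in words) / n,
        "type_token_ratio": len({w.lower() for w in words}) / n,
    }

def rank_features(sentences, ratings):
    """Rank candidate features by |Pearson r| with human readability ratings."""
    names = list(sentence_features(sentences[0]))
    table = np.array([[sentence_features(s)[k] for k in names]
                      for s in sentences])
    r = [abs(np.corrcoef(table[:, j], ratings)[0, 1])
         for j in range(len(names))]
    return sorted(zip(names, r), key=lambda t: -t[1])
```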
ERP correlates of German Sign Language processing in deaf native signers.
Hänel-Faulhaber, Barbara; Skotara, Nils; Kügow, Monique; Salden, Uta; Bottari, Davide; Röder, Brigitte
2014-05-10
The present study investigated the neural correlates of sign language processing of Deaf people who had learned German Sign Language (Deutsche Gebärdensprache, DGS) from their Deaf parents as their first language. Correct and incorrect signed sentences were presented sign by sign on a computer screen. At the end of each sentence the participants had to judge whether or not the sentence was an appropriate DGS sentence. Two types of violations were introduced: (1) semantically incorrect sentences containing a selectional restriction violation (implausible object); (2) morphosyntactically incorrect sentences containing a verb that was incorrectly inflected (i.e., incorrect direction of movement). Event-related brain potentials (ERPs) were recorded from 74 scalp electrodes. Semantic violations (implausible signs) elicited an N400 effect followed by a positivity. Sentences with a morphosyntactic violation (verb agreement violation) elicited a negativity followed by a broad centro-parietal positivity. ERP correlates of semantic and morphosyntactic aspects of DGS clearly differed from each other and showed a number of similarities with those observed in other signed and oral languages. These data suggest a similar functional organization of signed and oral languages despite the visual-spatial modality of sign language.
Li, X; Yang, Y; Ren, G
2009-06-16
Language is often perceived together with visual information. Recent experimental evidence indicates that, during spoken language comprehension, the brain can immediately integrate visual information with semantic or syntactic information from speech. Here we used the mismatch negativity to further investigate whether prosodic information from speech can be immediately integrated into a visual scene context, and especially the time course and automaticity of this integration process. Sixteen Chinese native speakers participated in the study. The materials included Chinese spoken sentences and picture pairs. In the audiovisual situation, relative to the concomitant pictures, the spoken sentence was appropriately accented in the standard stimuli but inappropriately accented in the two kinds of deviant stimuli. In the purely auditory situation, the speech sentences were presented without pictures. It was found that the deviants evoked mismatch responses in both audiovisual and purely auditory situations; the mismatch negativity in the purely auditory situation peaked at the same time as, but was weaker than, that evoked by the same deviant speech sounds in the audiovisual situation. This pattern of results suggests immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention.
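The mismatch response reported here is conventionally quantified as a deviant-minus-standard difference wave computed from averaged EEG epochs. The sketch below shows that standard computation; the electrode choice, search window, and peak criterion are illustrative assumptions rather than the authors' exact analysis.

```python
import numpy as np

def mmn_difference_wave(std_epochs, dev_epochs, fs, epoch_start):
    """std_epochs, dev_epochs: (n_trials, n_samples) EEG epochs from one
    electrode (e.g., Fz); fs: sampling rate in Hz; epoch_start: time of the
    first sample relative to stimulus onset, in seconds (e.g., -0.1).
    Returns the deviant-minus-standard difference wave and the latency of
    its most negative peak within an assumed 100-250 ms search window."""
    diff = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)
    lo = int((0.100 - epoch_start) * fs)
    hi = int((0.250 - epoch_start) * fs)
    peak = lo + int(np.argmin(diff[lo:hi]))  # MMN is a negativity
    return diff, peak / fs + epoch_start
```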
ERIC Educational Resources Information Center
Miolo, Giuliana; Chapman, Robins S.; Sindberg, Heidi A.
2005-01-01
The authors evaluated the roles of auditory-verbal short-term memory, visual short-term memory, and group membership in predicting language comprehension, as measured by an experimental sentence comprehension task (SCT) and the Test for Auditory Comprehension of Language--Third Edition (TACL-3; E. Carrow-Woolfolk, 1999) in 38 participants: 19 with…
Metaphors are Embodied, and so are Their Literal Counterparts
Santana, Eduardo; de Vega, Manuel
2011-01-01
This study investigates whether understanding up/down metaphors as well as semantically homologous literal sentences activates embodied representations online. Participants read orientational literal sentences (e.g., she climbed up the hill), metaphors (e.g., she climbed up in the company), and abstract sentences with similar meaning to the metaphors (e.g., she succeeded in the company). In Experiments 1 and 2, participants were asked to perform a speeded upward or downward hand motion while they were reading the sentence verb. The hand motion either matched or mismatched the direction connoted by the sentence. The results showed a meaning-action effect for metaphors and literals, that is, faster hand motion responses in the matching conditions. Notably, the matching advantage was also found for homologous abstract sentences, indicating that some abstract ideas are conceptually organized in the vertical dimension, even when they are expressed by means of literal sentences. In Experiment 3, participants responded to an upward or downward visual motion associated with the sentence verb by pressing a single key. In this case, the facilitation effect for matching visual motion-sentence meaning faded, indicating that the visual motion component is less important than the action component in conceptual metaphors. Most up and down metaphors convey emotionally positive and negative information, respectively. We suggest that metaphorical meaning elicits upward/downward movements because they are grounded on the bodily expression of the corresponding emotions. PMID:21687459
Guerra, Ernesto; Knoeferle, Pia
2014-12-01
A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiment 2 and 3) on participants' reading times for sentences that convey similarity or difference between two abstract nouns (e.g., 'Peace and war are certainly different...'). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., 'peace', 'war'). In Experiments 2 and 3, they turned but remained blank. Participants' reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences. Copyright © 2014 Elsevier B.V. All rights reserved.
Determinants of structural choice in visually situated sentence production.
Myachykov, Andriy; Garrod, Simon; Scheepers, Christoph
2012-11-01
Three experiments investigated how perceptual, structural, and lexical cues affect structural choices during English transitive sentence production. Participants described transitive events under combinations of visual cueing of attention (toward either agent or patient) and structural priming with and without semantic match between the notional verb in the prime and the target event. Speakers had a stronger preference for passive-voice sentences (1) when their attention was directed to the patient, (2) upon reading a passive-voice prime, and (3) when the verb in the prime matched the target event. The verb-match effect was the by-product of an interaction between visual cueing and verb match: the increase in the proportion of passive-voice responses with matching verbs was limited to the agent-cued condition. Persistence of visual cueing effects in the presence of both structural and lexical cues suggests a strong coupling between referent-directed visual attention and Subject assignment in a spoken sentence. Copyright © 2012 Elsevier B.V. All rights reserved.
Chakarov, Vihren; Hummel, Sibylla; Losch, Florian; Schulte-Mönting, Jürgen; Kristeva, Rumyana
2006-01-01
Background The present study was aimed at investigating the writing parameters of writer's cramp patients and control subjects during handwriting of a test sentence in the absence of visual control. Methods Eight right-handed patients with writer's cramp and eight healthy volunteers as age-matched control subjects participated in the study. The experimental task consisted of writing a test sentence fifty times on a pressure-sensitive digital board. The subjects did not have visual control over their handwriting. The writing performance was stored on a PC and analyzed off-line. Results During handwriting all patients developed a typical dystonic limb posture and reported an increase in muscular tension over the course of the experimental session. The patients were significantly slower than the controls, with lower mean vertical pressure of the pen tip on the paper, and they could not reach the endmost letter of the sentence in the given time window. No other differences in handwriting parameters were found between the two groups. Conclusion Our findings indicate that, when writing in the absence of visual feedback, writer's cramp patients are slower and cannot reach the endmost letter of the test sentence, but their level of automatization is not impaired, and their handwriting parameters are similar to those of the controls except for an even lower vertical pressure of the pen tip on the paper, which is probably due to a changed strategy under such experimental conditions. PMID:16594993
Lieberman, Amy M; Borovsky, Arielle; Mayberry, Rachel I
2018-01-01
Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eyetracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.
A Visual Literacy Approach to Developmental and Remedial Reading.
ERIC Educational Resources Information Center
Barley, Steven D.
Photography, films, and other visual materials offer a different approach to teaching reading. For example, photographs may be arranged in sequences analogous to the ways words form sentences and sentences for stories. If, as is possible, children respond first to pictures and later to words, training they receive in visual literacy may help them…
Visual Attention and Quantifier-Spreading in Heritage Russian Bilinguals
ERIC Educational Resources Information Center
Sekerina, Irina A.; Sauermann, Antje
2015-01-01
It is well established in language acquisition research that monolingual children and adult second language learners misinterpret sentences with the universal quantifier "every" and make quantifier-spreading errors that are attributed to a preference for a match in number between two sets of objects. The present Visual World eye-tracking…
ERIC Educational Resources Information Center
Weber-Fox, Christine; Hart, Laura J.; Spruill, John E., III
2006-01-01
This study examined how school-aged children process different grammatical categories. Event-related brain potentials elicited by words in visually presented sentences were analyzed according to seven grammatical categories with naturally varying characteristics of linguistic functions, semantic features, and quantitative attributes of length and…
Language-Mediated Eye Movements in the Absence of a Visual World: The "Blank Screen Paradigm"
ERIC Educational Resources Information Center
Altmann, Gerry T. M.
2004-01-01
The "visual world paradigm" typically involves presenting participants with a visual scene and recording eye movements as they either hear an instruction to manipulate objects in the scene or as they listen to a description of what may happen to those objects. In this study, participants heard each target sentence only after the corresponding…
Deaf Readers and Phrasal Verbs: Instructional Efficacy of Chunking as a Visual Tool
ERIC Educational Resources Information Center
Atwell, William R.
2013-01-01
The purpose of this study was to examine the effectiveness of a visual strategy, that of chunking, or visually bracketing, phrasal verbs in sentences in short stories. A descriptive case study design was used to compare the two instructional strategies. In this study, stories were presented to 14 severely and profoundly deaf students…
Two-year-olds can begin to acquire verb meanings in socially impoverished contexts.
Arunachalam, Sudha
2013-12-01
By two years of age, toddlers are adept at recruiting social, observational, and linguistic cues to discover the meanings of words. Here, we ask how they fare in impoverished contexts in which linguistic cues are provided, but no social or visual information is available. Novel verbs are presented in a stream of syntactically informative sentences, but the sentences are not embedded in a social context, and no visual access to the verb's referent is provided until the test phase. The results provide insight into how toddlers may benefit from overhearing contexts in which they are not directly attending to the ambient speech, and in which no conversational context, visual referent, or child-directed conversation is available. Copyright © 2013 Elsevier B.V. All rights reserved.
Communicating headings and preview sentences in text and speech.
Lorch, Robert F; Chen, Hung-Tao; Lemarié, Julie
2012-09-01
Two experiments tested the effects of preview sentences and headings on the quality of college students' outlines of informational texts. Experiment 1 found that performance was much better in the preview sentences condition than in a no-signals condition for both printed text and text-to-speech (TTS) audio rendering of the printed text. In contrast, performance in the headings condition was good for the printed text but poor for the auditory presentation because the TTS software failed to communicate nonverbal information carried by the visual headings. Experiment 2 compared outlining performance for five headings conditions during TTS presentation. Using a theoretical framework, "signaling available, relevant, accessible" (SARA) information, to provide an analysis of the information content of headings in the printed text, the manipulation of the headings systematically restored information that was omitted by the TTS application in Experiment 1. The result was that outlining performance improved to levels similar to the visual headings condition of Experiment 1. It is argued that SARA is a useful framework for guiding future development of TTS software for a wide variety of text signaling devices, not just headings.
Integration of moral values during L2 sentence processing.
Foucart, Alice; Moreno, Eva; Martin, Clara D; Costa, Albert
2015-11-01
This study reports an event-related potential (ERP) experiment examining whether valuation (i.e., one's own values) is integrated incrementally and whether it affects L2 speakers' online interpretation of the sentence. We presented Spanish native speakers and French-Spanish mid-proficiency late L2 speakers with visual sentences containing value-consistent and value-inconsistent statements (e.g., 'Nowadays, paedophilia should be prohibited/tolerated across the world.'). Participants' brain activity was recorded as they were reading the sentences and indicating whether they agreed with the statements or not. Behaviourally, the two groups revealed identical valuation. The ERP analyses showed both a semantic (N400) and an affect-related response (LPP) to value-inconsistent statements in the native group, but only an LPP in the non-native group. These results suggest that valuation is integrated online (presence of LPP) during L2 sentence comprehension but that it does not interfere with semantic processing (absence of N400).
On the domain-specificity of the visual and non-visual face-selective regions.
Axelrod, Vadim
2016-08-01
What happens in our brains when we see a face? The neural mechanisms of face processing - namely, the face-selective regions - have been extensively explored. Research has traditionally focused on visual cortex face-regions; more recently, the role of face-regions outside the visual cortex (i.e., non-visual-cortex face-regions) has been acknowledged as well. The major quest today is to reveal the functional role of each of these regions in face processing. To make progress in this direction, it is essential to understand the extent to which the face-regions, and particularly the non-visual-cortex face-regions, process only faces (i.e., face-specific, domain-specific processing) or rather are involved in more domain-general cognitive processing. In the current functional MRI study, we systematically examined the activity of the whole face-network during a face-unrelated reading task (i.e., written meaningful sentences with content unrelated to faces/people, and non-words). We found that the non-visual-cortex face-regions (i.e., right lateral prefrontal cortex and posterior superior temporal sulcus), but not the visual cortex face-regions, responded significantly more strongly to sentences than to non-words. In general, some degree of sentence selectivity was found in all non-visual-cortex face-regions. The present results highlight the possibility that the processing in the non-visual-cortex face-selective regions might not be exclusively face-specific, but rather more, or even fully, domain-general. In this paper, we illustrate how knowledge about domain-general processing in face-regions can help to advance our general understanding of face processing mechanisms. Our results therefore suggest that the problem of face processing should be approached in the broader scope of cognition in general. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Zekveld, Adriana A.; George, Erwin L. J.; Kramer, Sophia E.; Goverts, S. Theo; Houtgast, Tammo
2007-01-01
Purpose: In this study, the authors aimed to develop a visual analogue of the widely used Speech Reception Threshold (SRT; R. Plomp & A. M. Mimpen, 1979b) test. The Text Reception Threshold (TRT) test, in which visually presented sentences are masked by a bar pattern, enables the quantification of modality-aspecific variance in speech-in-noise…
Eye Movements Reveal the Dynamic Simulation of Speed in Language
ERIC Educational Resources Information Center
Speed, Laura J.; Vigliocco, Gabriella
2014-01-01
This study investigates how speed of motion is processed in language. In three eye-tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., "The lion ambled/dashed to the balloon"). Results showed that looking time to relevant objects in the visual scene was affected…
TMS-induced modulation of action sentence priming in the ventral premotor cortex.
Tremblay, Pascale; Sato, Marc; Small, Steven L
2012-01-01
Despite accumulating evidence that cortical motor areas, particularly the lateral premotor cortex, are activated during language comprehension, the question of whether motor processes help mediate the semantic encoding of language remains controversial. To address this issue, we examined whether low frequency (1 Hz) repetitive transcranial magnetic stimulation (rTMS) of the left ventral premotor cortex (PMv) can interfere with the comprehension of sentences describing manual actions, visual properties of manipulable and non-manipulable objects, and actions of the lips and mouth. Using a primed semantic decision task, sixteen participants were asked to determine for a given sentence whether or not an auditorily presented target word was congruent with the sentence. We hypothesized that if the left PMv is contributing semantic information that is used to comprehend action and object related sentences, then TMS applied over PMv should result in a disruption of semantic priming. Our results show that TMS reduces semantic priming, induces a shift in response bias, and increases response sensitivity, but does so only during the processing of manual action sentences. This suggests a preferential contribution of PMv to the processing of sentences describing manual actions compared to other types of sentences. Copyright © 2011 Elsevier Ltd. All rights reserved.
Electrophysiological Correlates of Language Processing in Schizotypal Personality Disorder
Niznikiewicz, Margaret A.; Voglmaier, Martina; Shenton, Martha E.; Seidman, Larry J.; Dickey, Chandlee C.; Rhoads, Richard; Teh, Enkeat; McCarley, Robert W.
2010-01-01
Objective This study examined whether the electrophysiological correlates of language processing found previously to be abnormal in schizophrenia are also abnormal in schizotypal individuals. The authors used the N400 component to evaluate language dysfunction in schizotypal individuals. Method Event-related potentials were recorded in 16 comparison subjects and 17 schizotypal individuals (who met full DSM-III-R criteria) to sentences presented both visually and aurally; half of the sentences ended with an expected word completion (congruent condition), and the other half ended with an unexpected word completion (incongruent condition). Results In the congruent condition, the N400 amplitude was more negative in individuals with schizotypal personality disorder than in comparison subjects in both the visual and auditory modalities. In addition, in the visual modality, the N400 latency was prolonged in the individuals with schizotypal personality disorder. Conclusions The N400 was found to be abnormal in the individuals with schizotypal personality disorder relative to comparison subjects. The abnormality was similar to the abnormality the authors’ laboratory reported earlier in schizophrenic subjects, in which the N400 amplitude was found to be more negative in both congruent and incongruent sentence completions. The N400 abnormality is consistent with the inefficient use of context. PMID:10401451
Situated Sentence Processing: The Coordinated Interplay Account and a Neurobehavioral Model
ERIC Educational Resources Information Center
Crocker, Matthew W.; Knoeferle, Pia; Mayberry, Marshall R.
2010-01-01
Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted…
[The key parameters of design research and analysis of the Chinese reading visual acuity chart].
Wang, Chen-xiao; Liu, Zhi-hui; Gao, Ji-tuo; Guo, Ying-xuan; He, Ji-cang; Qu, Jia; Lü, Fan
2013-06-01
Reading is a visual function that human beings use to understand environmental events on the basis of written materials. This study investigated the feasibility of a reading visual acuity chart for assessing reading ability by analyzing the key factors involved in the chart's design. The reading level was set at grade 3 of primary school, with Song as the font and 30 characters per sentence. Each sentence consisted of 27 commonly used Chinese characters (9 characters between any two punctuation marks) and 3 punctuation marks. There were no contextual clues between the 80 sentences selected. The characters had 13 different sizes in increments of 0.1 log unit (a factor of about 1.2589), and 2.5 pt was determined as the critical threshold. The readability test for the visual targets proceeded as follows: (1) 29 candidates with unaided or corrected visual acuity (VA) of at least 1.0 were selected to read the 80 sentences with a character size of 2.5 pt at a distance of 40 cm; (2) the time taken to read and the number of characters read incorrectly were recorded; (3) 39 sentences were selected as visual targets on the basis of reading speed, effective reading position, and total number of character strokes; (4) the 39 selected sentences were then randomly divided into 3 groups with no significant differences among the groups in the 3 factors listed in (3) by paired t-test. The resulting reading visual chart was at the level of grade 3 of primary school, with a total stroke count of 165-210 (mean 185 ± 10), 13 font sizes in 0.1 log unit increments, Song font, and 2.5 pt as the critical threshold. All candidates read with 100% accuracy at 2.5 pt, with an effective reading speed of 120.65-162.00 wpm (mean 142.93 ± 11.80) and an effective reading position of 36.03-61.48 (mean 48.85 ± 6.81). The reading tests for the 3 groups of sentences showed effective reading speeds of (142.49 ± 12.14), (142.86 ± 12.55), and (143.44 ± 11.63) wpm, respectively (t1-2 = -0.899, t2-3 = -1.295, t1-3 = -1.435). The reading positions were 48.55 ± 6.69, 48.99 ± 7.49, and 49.00 ± 6.76, respectively (t1-2 = -1.019, t2-3 = -0.019, t1-3 = -0.816). The total numbers of character strokes were 185.54 ± 7.55, 187.69 ± 13.76, and 182.62 ± 8.17, respectively (t1-2 = 0.191, t2-3 = 1.385, t1-3 = 1.686). A practical design of a Chinese reading visual chart should consider size, increment, and legibility in the selection of reading sentences. Reading visual acuity, critical threshold, and effective reading speed can be used to express reading visual function.
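The 0.1 log-unit size progression used in the chart is a geometric series: each character size equals the previous one multiplied by 10^0.1 ≈ 1.2589, the factor quoted in the abstract. A minimal sketch, assuming for illustration that the 2.5 pt critical threshold is the smallest of the 13 sizes:

```python
import numpy as np

# 13 character sizes in 0.1 log-unit steps; each size is the previous one
# scaled by 10**0.1 ~= 1.2589. Treating 2.5 pt as the smallest size is an
# assumption made for this illustration.
base_pt = 2.5
sizes = base_pt * 10 ** (0.1 * np.arange(13))
print(np.round(sizes, 2))  # [2.5, 3.15, 3.96, 4.99, ..., 25.0, 31.47, 39.62]
```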
ERIC Educational Resources Information Center
White, Sarah J.; Hirotani, Masako; Liversedge, Simon P.
2012-01-01
Two experiments are presented that examine how the visual characteristics of Japanese words influence eye movement behaviour during reading. In Experiment 1, reading behaviour was compared for words comprising either one or two kanji characters. The one-character words were significantly less likely to be fixated on first-pass, and had…
Perceptual Span in Oral Reading: The Case of Chinese
ERIC Educational Resources Information Center
Pan, Jinger; Yan, Ming; Laubrock, Jochen
2017-01-01
The present study explores the perceptual span, that is, the physical extent of the area from which useful visual information is obtained during a single fixation, during oral reading of Chinese sentences. Characters outside a window of legible text were replaced by visually similar characters. Results show that the influence of window size on the…
Verifying visual properties in sentence verification facilitates picture recognition memory.
Pecher, Diane; Zanolie, Kiki; Zeelenberg, René
2007-01-01
According to the perceptual symbols theory (Barsalou, 1999), sensorimotor simulations underlie the representation of concepts. We investigated whether recognition memory for pictures of concepts was facilitated by earlier representation of visual properties of those concepts. During study, concept names (e.g., apple) were presented in a property verification task with a visual property (e.g., shiny) or with a nonvisual property (e.g., tart). Delayed picture recognition memory was better if the concept name had been presented with a visual property than if it had been presented with a nonvisual property. These results indicate that modality-specific simulations are used for concept representation.
Reading time allocation strategies and working memory using rapid serial visual presentation.
Busler, Jessica N; Lazarte, Alejandro A
2017-09-01
Rapid serial visual presentation (RSVP) is a useful method for controlling the timing of text presentations and studying how readers' characteristics, such as working memory (WM) and reading strategies for time allocation, influence text recall. In the current study, a modified version of RSVP (Moving Window RSVP [MW-RSVP]) was used to induce longer pauses at the ends of clauses and ends of sentences when reading texts with multiple embedded clauses. We studied whether WM relates to the allocation of time at the ends of clauses or sentences in a self-paced reading task and in two MW-RSVP reading conditions (Constant MW-RSVP and Paused MW-RSVP), in which the reading rate was kept constant or pauses were induced. Higher WM-span readers were more affected by the restriction of time allocation in the MW-RSVP conditions. In addition, the recall of both higher and lower WM-span readers benefited from the paused MW-RSVP presentation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
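The Paused MW-RSVP manipulation amounts to adding fixed pauses to the per-word exposure time at clause and sentence boundaries. The sketch below illustrates such a presentation schedule; the durations and the punctuation-based boundary detection are illustrative assumptions, not the study's parameters.

```python
# A minimal sketch of an RSVP schedule with extra pauses at clause and
# sentence boundaries, in the spirit of the Paused MW-RSVP condition.

BASE_MS = 250          # per-word exposure (assumed)
CLAUSE_PAUSE_MS = 300  # added pause at clause-final words (assumed)
SENT_PAUSE_MS = 500    # added pause at sentence-final words (assumed)

def rsvp_schedule(text):
    """Return (word, exposure_ms) pairs for one-word-at-a-time display."""
    schedule = []
    for word in text.split():
        if word.endswith(","):
            extra = CLAUSE_PAUSE_MS        # end of an embedded clause
        elif word.endswith((".", "!", "?")):
            extra = SENT_PAUSE_MS          # end of the sentence
        else:
            extra = 0
        schedule.append((word, BASE_MS + extra))
    return schedule

for word, ms in rsvp_schedule("The letter, which arrived late, upset her."):
    print(f"{word:<8} {ms} ms")
```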
Mueller, Jutta L; Rueschemeyer, Shirley-Ann; Ono, Kentaro; Sugiura, Motoaki; Sadato, Norihiro; Nakamura, Akinori
2014-01-01
The present study used functional magnetic resonance imaging (fMRI) to investigate the neural correlates of language acquisition in a realistic learning environment. Japanese native speakers were trained in a miniature version of German prior to fMRI scanning. During scanning they listened to (1) familiar sentences, (2) sentences including a novel sentence structure, and (3) sentences containing a novel word while visual context provided referential information. Learning-related decreases of brain activation over time were found in a mainly left-hemispheric network comprising classical frontal and temporal language areas as well as parietal and subcortical regions and were largely overlapping for novel words and the novel sentence structure in initial stages of learning. Differences occurred at later stages of learning during which content-specific activation patterns in prefrontal, parietal and temporal cortices emerged. The results are taken as evidence for a domain-general network supporting the initial stages of language learning which dynamically adapts as learners become proficient.
ERIC Educational Resources Information Center
Huettig, Falk; McQueen, James M.
2007-01-01
Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with "beker," "beaker," for example, the display contained phonological (a beaver, "bever"), shape (a…
Event Processing in the Visual World: Projected Motion Paths during Spoken Sentence Comprehension
ERIC Educational Resources Information Center
Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue
2016-01-01
Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the…
Attention and Memory Play Different Roles in Syntactic Choice during Sentence Production
ERIC Educational Resources Information Center
Myachykov, Andriy; Garrod, Simon; Scheepers, Christoph
2018-01-01
Attentional control of referential information is an important contributor to the structure of discourse. We investigated how attention and memory interplay during visually situated sentence production. We manipulated speakers' attention to the agent or the patient of a described event by means of a referential or a dot visual cue. We also…
ERIC Educational Resources Information Center
Contemori, Carla; Carlson, Matthew; Marinis, Theodoros
2018-01-01
Previous research has shown that children demonstrate similar sentence processing reflexes to those observed in adults, but they have difficulties revising an erroneous initial interpretation when they process garden-path sentences, passives, and "wh"-questions. We used the visual-world paradigm to examine children's use of syntactic and…
The language of arithmetic across the hemispheres: An event-related potential investigation.
Dickson, Danielle S; Federmeier, Kara D
2017-05-01
Arithmetic expressions, like verbal sentences, incrementally lead readers to anticipate potential appropriate completions. Existing work in the language domain has helped us understand how the two hemispheres differently participate in and contribute to the cognitive process of sentence reading, but comparatively little work has been done with mathematical equation processing. In this study, we address this gap by examining the ERP response to provided answers to simple multiplication problems, which varied both in levels of correctness (given an equation context) and in visual field of presentation (joint attention in central presentation, or biased processing to the left or right hemisphere through contralateral visual field presentation). When answers were presented to any of the visual fields (hemispheres), there was an effect of correctness prior to the traditional N400 time window, which we interpret as a P300 in response to a detected target item (the correct answer). In addition to this response, equation answers also elicited a late positive complex (LPC) for incorrect answers. Notably, this LPC effect was most prominent in the left visual field (right hemisphere), and it was also sensitive to the confusability of the wrong answer - incorrect answers that were closely related to the correct answer elicited a smaller LPC. This suggests a special, prolonged role for the right hemisphere during answer evaluation. Copyright © 2017 Elsevier B.V. All rights reserved.
Zhang, Zhengyi; Zhang, Gaoyan; Zhang, Yuanyuan; Liu, Hong; Xu, Junhai; Liu, Baolin
2017-12-01
This study aimed to investigate the functional connectivity in the brain during the cross-modal integration of polyphonic characters in Chinese audio-visual sentences. The visual sentences were all semantically reasonable, and the audible pronunciations of the polyphonic characters in the corresponding sentence contexts varied in four conditions. To measure the functional connectivity, correlation, coherence and phase synchronization index (PSI) were used, and then multivariate pattern analysis was performed to detect the consensus functional connectivity patterns. These analyses were confined to the time windows of three event-related potential components, P200, N400 and the late positive shift (LPS), to investigate the dynamic changes of the connectivity patterns at different cognitive stages. We found that when differentiating the polyphonic characters with abnormal pronunciations from those with the appropriate ones in audio-visual sentences, significant classification results were obtained based on the coherence in the time window of the P200 component, the correlation in the time window of the N400 component, and the coherence and PSI in the time window of the LPS component. Moreover, the spatial distributions in these time windows were also different, with the recruitment of frontal sites in the time window of the P200 component, the frontal-central-parietal regions in the time window of the N400 component and the central-parietal sites in the time window of the LPS component. These findings demonstrate that the functional interaction mechanisms are different at different stages of audio-visual integration of polyphonic characters.
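Of the three connectivity measures, the phase synchronization index can be stated compactly: one common definition (the phase-locking value) is the magnitude of the mean phase-difference vector between two signals, with instantaneous phase taken from the Hilbert transform. A minimal sketch under the assumption that this standard definition matches the one used in the study:

```python
import numpy as np
from scipy.signal import hilbert

def phase_sync_index(x, y):
    """Phase synchronization index between two signals: 0 means no phase
    locking, 1 means perfect phase locking."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Illustrative check: a phase-shifted, noisy copy of a 10 Hz signal
# remains strongly phase-locked to the original.
t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t + 0.5) + 0.1 * np.random.randn(t.size)
print(phase_sync_index(x, y))  # close to 1
```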
Toward developing a standardized Arabic continuous text reading chart.
Alabdulkader, Balsam; Leat, Susan Jennifer
Near visual acuity is an essential measurement during an oculo-visual assessment. Short-duration continuous text reading charts measure reading acuity and other aspects of reading performance. There is no standardized version of such a chart in Arabic. The aim of this study is to create sentences of equal readability to use in the development of a standardized Arabic continuous text reading chart. Initially, 109 pairs of Arabic sentences were created for use in constructing a chart with a layout similar to the Colenbrander chart. They were created to have the same grade level of difficulty and physical length. Fifty-three adults and sixteen children were recruited to validate the sentences. Reading speed in correct words per minute (CWPM) and standard-length words per minute (SLWPM) was measured and errors were counted. Criteria based on reading speed and errors made in each sentence pair were used to exclude sentence pairs with more outlying characteristics, and to select the final group of sentence pairs. Forty-five sentence pairs were selected according to the elimination criteria. For adults, the average reading speed for the final sentences was 166 CWPM and 187 SLWPM, and the average number of errors per sentence pair was 0.21. Children's average reading speed for the final group of sentences was 61 CWPM and 72 SLWPM. Their average error rate was 1.71. The reliability analysis showed that the final 45 sentence pairs are highly comparable. They will be used in constructing an Arabic short-duration continuous text reading chart. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
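The two reading-rate measures can be made concrete with a short sketch. CWPM divides the number of correctly read words by reading time; SLWPM instead counts standard-length words. The six-characters-per-standard-word convention below is common in reading research, but whether this chart used exactly that definition is an assumption.

```python
def reading_rates(text, seconds, n_errors):
    """Correct words per minute (CWPM) and standard-length words per
    minute (SLWPM) for one sentence reading. A standard-length word is
    taken here as six characters including spaces (an assumption)."""
    minutes = seconds / 60
    cwpm = (len(text.split()) - n_errors) / minutes
    slwpm = (len(text) / 6) / minutes
    return cwpm, slwpm

# Example: a nine-word sentence read in 4 s with one error.
print(reading_rates("a short test sentence read aloud by one adult", 4.0, 1))
```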
Slattery, Timothy J.; Schotter, Elizabeth R.; Berry, Raymond W.; Rayner, Keith
2011-01-01
The processing of abbreviations in reading was examined with an eye movement experiment. Abbreviations were of two distinct types: Acronyms (abbreviations that can be read with the normal grapheme-phoneme correspondence rules, such as NASA) and initialisms (abbreviations in which the grapheme-phoneme correspondences are letter names, such as NCAA). Parafoveal and foveal processing of these abbreviations was assessed with the use of the boundary change paradigm (Rayner, 1975). Using this paradigm, previews of the abbreviations were either identical to the abbreviation (NASA or NCAA), orthographically legal (NUSO or NOBA), or illegal (NRSB or NRBA). The abbreviations were presented as capital letter strings within normal, predominantly lowercase sentences and also sentences in all capital letters such that the abbreviations would not be visually distinct. The results indicate that acronyms and initialisms undergo different processing during reading, and that readers can modulate their processing based on low-level visual cues (distinct capitalization) in parafoveal vision. In particular, readers may be biased to process capitalized letter strings as initialisms in parafoveal vision when the rest of the sentence is normal, lower case letters. PMID:21480754
Planning in sentence production: Evidence for the phrase as a default planning scope
Martin, Randi C.; Crowther, Jason E.; Knight, Meredith; Tamborello, Franklin P.; Yang, Chin-Lung
2010-01-01
Controversy remains as to the scope of advanced planning in language production. Smith and Wheeldon (1999) found significantly longer onset latencies when subjects described moving picture displays by producing sentences beginning with a complex noun phrase than for matched sentences beginning with a simple noun phrase. While these findings are consistent with a phrasal scope of planning, they might also be explained on the basis of: 1) greater retrieval fluency for the second content word in the simple initial noun phrase sentences and 2) visual grouping factors. In Experiments 1 and 2, retrieval fluency for the second content word was equated for the complex and simple initial noun phrase conditions. Experiments 3 and 4 addressed the visual grouping hypothesis by using stationary displays and by comparing onset latencies for the same display for sentence and list productions. Longer onset latencies for the sentences beginning with a complex noun phrase were obtained in all experiments, supporting the phrasal scope of planning hypothesis. The results indicate that in speech, as in other motor production domains, planning occurs beyond the minimal production unit. PMID:20501338
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once-familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low-frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.
Guerra, Ernesto; Knoeferle, Pia
2018-01-01
Existing evidence has shown a processing advantage (or facilitation) when representations derived from a non-linguistic context (spatial proximity depicted by gambling cards moving together) match the semantic content of an ensuing sentence. A match, inspired by conceptual metaphors such as 'similarity is closeness', would, for instance, involve the cards moving closer together while the sentence relates similarity between abstract concepts such as war and battle. However, other studies have reported a disadvantage (or interference) for congruence between the semantic content of a sentence and representations of spatial distance derived from this sort of non-linguistic context. In the present article, we investigate the cognitive mechanisms underlying the interaction between the representations of spatial distance and sentence processing. In two eye-tracking experiments, we tested the predictions of a mechanism that considers the competition, activation, and decay of visually and linguistically derived representations as key aspects in determining the qualitative pattern and time course of that interaction. Critical trials presented two playing cards, each showing a written abstract noun; the cards turned around, obscuring the nouns, and moved either farther apart or closer together. Participants then read a sentence expressing either semantic similarity or difference between these two nouns. When instructed to attend to the nouns on the cards (Experiment 1), participants' total reading times revealed interference between spatial distance (e.g., closeness) and semantic relations (similarity) as soon as the sentence explicitly conveyed similarity. But when instructed to attend to the cards (Experiment 2), cards approaching (vs. moving apart) elicited first interference (when similarity was implicit) and then facilitation (when similarity was made explicit) during sentence reading. We discuss these findings in the context of a competition mechanism of interference and facilitation effects.
ERIC Educational Resources Information Center
Chambers, Craig G.; Cooke, Hilary
2009-01-01
A spoken language eye-tracking methodology was used to evaluate the effects of sentence context and proficiency on parallel language activation during spoken language comprehension. Nonnative speakers with varying proficiency levels viewed visual displays while listening to French sentences (e.g., "Marie va decrire la poule" [Marie will…
Zhao, Jing; Kwok, Rosa K. W.; Liu, Menglian; Liu, Hanlong; Huang, Chen
2017-01-01
Reading fluency is a critical skill to improve the quality of our daily life and working efficiency. The majority of previous studies focused on oral reading fluency rather than silent reading fluency, which is a much more dominant reading mode that is used in middle and high school and for leisure reading. It is still unclear whether the oral and silent reading fluency involved the same underlying skills. To address this issue, the present study examined the relationship between the visual rapid processing and Chinese reading fluency in different modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure the visual rapid temporal and simultaneous processing respectively. These two tasks reflected the temporal and spatial dimensions of visual rapid processing separately. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured in both single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency on a sentence level. The reading fluency test in each level was conducted twice (i.e., oral reading and silent reading). Reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with the scores of the reading fluency tests. Although, the reaction time in visual 1-back task correlated with the reading speed of both oral and silent reading fluency, the comparison of the correlation coefficients revealed a closer relationship between the visual rapid simultaneous processing and silent reading. Furthermore, the visual rapid simultaneous processing exhibited a significant contribution to reading fluency in silent mode but not in oral reading mode. These findings suggest that the underlying mechanism between oral and silent reading fluency is different at the beginning of the basic visual coding. The current results also might reveal a potential modulation of the language characteristics of Chinese on the relationship between visual rapid processing and reading fluency. PMID:28119663
Neural Correlates of Bridging Inferences and Coherence Processing
ERIC Educational Resources Information Center
Kim, Sung-il; Yoon, Misun; Kim, Wonsik; Lee, Sunyoung; Kang, Eunjoo
2012-01-01
We explored the neural correlates of bridging inferences and coherence processing during story comprehension using Positron Emission Tomography (PET). Ten healthy right-handed volunteers were visually presented with three types of stories (Strong Coherence, Weak Coherence, and Control), each consisting of three sentences. The causal connectedness among…
ERIC Educational Resources Information Center
Harbusch, Karin; Hausdörfer, Annette
2016-01-01
COMPASS is an e-learning system that can visualize grammar errors during sentence production in German as a first or second language. Via drag-and-drop dialogues, it allows users to freely select word forms from a lexicon and to combine them into phrases and sentences. The system's core component is a natural-language generator that, for every new…
Interaction between language and vision: It’s momentary, abstract, and it develops
Dessalegn, Banchiamlack; Landau, Barbara
2013-01-01
In this paper, we present a case study that explores the nature and development of the mechanisms by which language interacts with and influences our ability to represent and retain information from one of our most important non-linguistic systems: vision. In previous work (Dessalegn & Landau, 2008), we showed that 4-year-olds remembered conjunctions of visual features better when the visual target was accompanied by a sentence containing an asymmetric spatial predicate (e.g., the yellow is to the left of the black) but not when the visual target was accompanied by a sentence containing a novel noun (e.g., look at the dax) or a symmetric spatial predicate (e.g., the yellow is touching the black). In this paper, we extend these findings. In three experiments, 3-, 4- and 6-year-olds were shown square blocks split in half by color vertically, horizontally or diagonally (e.g., yellow-left, black-right) and were asked to perform a delayed-matching task. We found that sentences containing spatial asymmetric predicates (e.g., the yellow is to the left of the black) and non-spatial asymmetric predicates (e.g., the yellow is prettier than the black) helped 4-year-olds, although not to the same extent. By contrast, 3-year-olds did not benefit from different linguistic instructions at all, while 6-year-olds performed at ceiling in the task with or without the relevant sentences. Our findings suggest that by age 4, the effects of language on non-linguistic tasks depend on highly abstract representations of the linguistic instructions and are momentary, seen only in the context of the task. We further speculate that language becomes more automatically engaged in nonlinguistic tasks over development. PMID:23545385
Kukona, Anuenue; Cho, Pyeong Whan; Magnuson, James S.; Tabor, Whitney
2014-01-01
Psycholinguistic research spanning a number of decades has produced diverging results with regard to the nature of constraint integration in online sentence processing. For example, evidence that language users anticipatorily fixate likely upcoming referents in advance of evidence in the speech signal supports rapid context integration. By contrast, evidence that language users activate representations that conflict with contextual constraints, or only indirectly satisfy them, supports non-integration or late integration. Here, we report on a self-organizing neural network framework that addresses one aspect of constraint integration: the integration of incoming lexical information (i.e., an incoming word) with sentence context information (i.e., from preceding words in an unfolding utterance). In two simulations, we show that the framework predicts both classic results concerned with lexical ambiguity resolution (Swinney, 1979; Tanenhaus, Leiman, & Seidenberg, 1979), which suggest late context integration, and results demonstrating anticipatory eye movements (e.g., Altmann & Kamide, 1999), which support rapid context integration. We also report two experiments using the visual world paradigm that confirm a new prediction of the framework. Listeners heard sentences like “The boy will eat the white…,” while viewing visual displays with objects like a white cake (i.e., a predictable direct object of “eat”), white car (i.e., an object not predicted by “eat,” but consistent with “white”), and distractors. Consistent with our simulation predictions, we found that while listeners fixated white cake most, they also fixated white car more than unrelated distractors in this highly constraining sentence (and visual) context. PMID:24245535
Amit, Elinor; Hoeflin, Caitlyn; Hamzah, Nada; Fedorenko, Evelina
2017-01-01
Humans rely on at least two modes of thought: verbal (inner speech) and visual (imagery). Are these modes independent, or does engaging in one entail engaging in the other? To address this question, we performed a behavioral and an fMRI study. In the behavioral experiment, participants received a prompt and were asked to either silently generate a sentence or create a visual image in their mind. They were then asked to judge the vividness of the resulting representation, and of the potentially accompanying representation in the other format. In the fMRI experiment, participants had to recall sentences or images (that they were familiarized with prior to the scanning session) given prompts, or read sentences and view images, in the control, perceptual, condition. An asymmetry was observed between inner speech and visual imagery. In particular, inner speech was engaged to a greater extent during verbal than visual thought, but visual imagery was engaged to a similar extent during both modes of thought. Thus, it appears that people generate more robust verbal representations during deliberate inner speech compared to when their intent is to visualize. However, they generate visual images regardless of whether their intent is to visualize or to think verbally. One possible interpretation of these results is that visual thinking is somehow primary, given the relatively late emergence of verbal abilities during human development and in the evolution of our species. PMID:28323162
Audiovisual Integration in Children Listening to Spectrally Degraded Speech
ERIC Educational Resources Information Center
Maidment, David W.; Kang, Hi Jee; Stewart, Hannah J.; Amitay, Sygal
2015-01-01
Purpose: The study explored whether visual information improves speech identification in typically developing children with normal hearing when the auditory signal is spectrally degraded. Method: Children (n = 69) and adults (n = 15) were presented with noise-vocoded sentences from the Children's Co-ordinate Response Measure (Rosen, 2011) in…
Imagery Induction in the Pre-Imagery Child. Technical Report No. 282.
ERIC Educational Resources Information Center
Levin, Joel R.; And Others
This study extends some recently acquired knowledge about the development of visual imagery as an associative-learning strategy. Incorporating the present findings into the data already gathered, it appears that as a facilitator, sentence production precedes imagery generation since preoperational children benefit from instructions to engage in…
Situated sentence processing: the coordinated interplay account and a neurobehavioral model.
Crocker, Matthew W; Knoeferle, Pia; Mayberry, Marshall R
2010-03-01
Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene-sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519-543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA's three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789-795). Finally, we argue that the mechanisms underlying interpretation, visual attention, and scene apprehension are not only in close temporal synchronization, but have co-adapted to optimize real-time visual grounding of situated spoken language, thus facilitating the association of linguistic, visual and motor representations that emerge during the course of our embodied linguistic experience in the world. Copyright 2009 Elsevier Inc. All rights reserved.
Measuring and Predicting Tag Importance for Image Retrieval.
Li, Shangwen; Purushotham, Sanjay; Chen, Chen; Ren, Yuzhuo; Kuo, C-C Jay
2017-12-01
Textual data such as tags and sentence descriptions are combined with visual cues to reduce the semantic gap for image retrieval applications in today's Multimodal Image Retrieval (MIR) systems. However, all tags are treated as equally important in these systems, which may result in misalignment between visual and textual modalities during MIR training. This further leads to degraded retrieval performance at query time. To address this issue, we investigate the problem of tag importance prediction, where the goal is to automatically predict the tag importance and use it in image retrieval. To achieve this, we first propose a method to measure the relative importance of object and scene tags from image sentence descriptions. Using this as the ground truth, we present a tag importance prediction model to jointly exploit visual, semantic and context cues. The Structural Support Vector Machine (SSVM) formulation is adopted to ensure efficient training of the prediction model. Then, Canonical Correlation Analysis (CCA) is employed to learn the relation between image visual features and tag importance to obtain robust retrieval performance. Experimental results on three real-world datasets show a significant performance improvement of the proposed MIR with Tag Importance Prediction (MIR/TIP) system over other MIR systems.
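The last step pairs the two views (image visual features and predicted tag importance) with CCA. Below is a minimal sketch of that step using scikit-learn on toy random data; the feature dimensionality, tag-vocabulary size, and variable names are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
visual_feats = rng.normal(size=(200, 64))    # 200 images x 64-dim visual features
tag_importance = rng.normal(size=(200, 10))  # 200 images x importance over 10 tags

# Learn a shared latent space that maximally correlates the two views.
cca = CCA(n_components=5)
vis_proj, tag_proj = cca.fit_transform(visual_feats, tag_importance)

# At query time, images can be ranked by similarity to a query's
# projection in this shared space.
print(vis_proj.shape, tag_proj.shape)  # (200, 5) (200, 5)
```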
Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults
Smayda, Kirsten E.; Van Engen, Kristin J.; Maddox, W. Todd; Chandrasekaran, Bharath
2016-01-01
Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18–35) and thirty-three older adults (ages 60–90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when both semantic and visual cues are available to the listener. PMID:27031343
Zekveld, Adriana A; Kramer, Sophia E; Rönnberg, Jerker; Rudner, Mary
2018-06-19
Speech understanding may be cognitively demanding, but it can be enhanced when semantically related text cues precede auditory sentences. The present study aimed to determine whether (a) providing text cues reduces pupil dilation, a measure of cognitive load, during listening to sentences, (b) repeating the sentences aloud affects recall accuracy and pupil dilation during recall of cue words, and (c) semantic relatedness between cues and sentences affects recall accuracy and pupil dilation during recall of cue words. Sentence repetition following text cues and recall of the text cues were tested. Twenty-six participants (mean age, 22 years) with normal hearing listened to masked sentences. On each trial, a set of four-word cues was presented visually as text preceding the auditory presentation of a sentence whose meaning was either related or unrelated to the cues. On each trial, participants first read the cue words, then listened to a sentence. Following this they spoke aloud either the cue words or the sentence, according to instruction, and finally on all trials orally recalled the cues. Peak pupil dilation was measured throughout listening and recall on each trial. Additionally, participants completed a test measuring the ability to perceive degraded verbal text information and three working memory tests (a reading span test, a size-comparison span test, and a test of memory updating). Cue words that were semantically related to the sentence facilitated sentence repetition but did not reduce pupil dilation. Recall was poorer and there were more intrusion errors when the cue words were related to the sentences. Recall was also poorer when sentences were repeated aloud. Both behavioral effects were associated with greater pupil dilation. Larger reading span capacity and smaller size-comparison span were associated with larger peak pupil dilation during listening. Furthermore, larger reading span and greater memory updating ability were both associated with better cue recall overall. Although sentence-related word cues facilitate sentence repetition, our results indicate that they do not reduce cognitive load during listening in noise with a concurrent memory load. As expected, higher working memory capacity was associated with better recall of the cues. Unexpectedly, however, semantic relatedness with the sentence reduced word cue recall accuracy and increased intrusion errors, suggesting an effect of semantic confusion. Further, speaking the sentence aloud also reduced word cue recall accuracy, probably due to articulatory suppression. Importantly, imposing a memory load during listening to sentences resulted in the absence of formerly established strong effects of speech intelligibility on the pupil dilation response. This nullified intelligibility effect demonstrates that the pupil dilation response to a cognitive (memory) task can completely overshadow the effect of perceptual factors on the pupil dilation response. This highlights the importance of taking cognitive task load into account during auditory testing. This is an open access article distributed under the Creative Commons Attribution License 4.0 (CC BY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
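Peak pupil dilation, the load measure used throughout the study, is typically computed per trial as the maximum of the baseline-corrected pupil trace within an analysis window. A minimal sketch of such a computation; the window bounds, sampling rate, and synthetic trace are illustrative assumptions rather than the study's exact parameters.

```python
import numpy as np

def peak_pupil_dilation(trace, t_ms, baseline=(-1000, 0), window=(0, 4000)):
    """Baseline-corrected peak pupil dilation for one trial. trace holds
    pupil-size samples; t_ms holds sample times relative to sentence
    onset. Window bounds are assumptions for illustration."""
    trace, t_ms = np.asarray(trace, float), np.asarray(t_ms, float)
    base = trace[(t_ms >= baseline[0]) & (t_ms < baseline[1])].mean()
    return (trace[(t_ms >= window[0]) & (t_ms <= window[1])] - base).max()

t = np.arange(-1000, 4001, 20)                          # 50 Hz sampling
trace = 4.0 + 0.3 * np.exp(-(((t - 1500) / 800) ** 2))  # synthetic peak at 1.5 s
print(round(peak_pupil_dilation(trace, t), 3))          # ~0.3
```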
Violations of information structure: an electrophysiological study of answers to wh-questions.
Cowles, H W; Kluender, Robert; Kutas, Marta; Polinsky, Maria
2007-09-01
This study investigates brain responses to violations of information structure in wh-question-answer pairs, with particular emphasis on violations of focus assignment in it-clefts (It was the queen that silenced the banker). Two types of ERP responses in answers to wh-questions were found. First, all words in the focus-marking (cleft) position elicited a large positivity (P3b) characteristic of sentence-final constituents, as did the final words of these sentences, which suggests that focused elements may trigger integration effects like those seen at sentence end. Second, the focusing of an inappropriate referent elicited a smaller, N400-like effect. The results show that comprehenders actively use structural focus cues and discourse-level restrictions during online sentence processing. These results, based on visual stimuli, were different from the brain response to auditory focus violations indicated by pitch-accent [Hruska, C., Steinhauer, K., Alter, K., & Steube, A. (2000). ERP effects of sentence accents and violations of the information structure. In Poster presented at the 13th annual CUNY conference on human sentence processing, San Diego, CA.], but similar to brain responses to newly introduced discourse referents [Bornkessel, I., Schlesewsky, M., & Friederici, A. (2003). Contextual information modulates initial processes of syntactic integration: The role of inter- versus intrasentential predictions. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 871-882.].
Troyer, Melissa; Curley, Lauren B.; Miller, Luke E.; Saygin, Ayse P.; Bergen, Benjamin K.
2014-01-01
Language comprehension requires rapid and flexible access to information stored in long-term memory, likely influenced by activation of rich world knowledge and by brain systems that support the processing of sensorimotor content. We hypothesized that while literal language about biological motion might rely on neurocognitive representations of biological motion specific to the details of the actions described, metaphors rely on more generic representations of motion. In a priming and self-paced reading paradigm, participants saw video clips or images of (a) an intact point-light walker or (b) a scrambled control and read sentences containing literal or metaphoric uses of biological motion verbs either closely or distantly related to the depicted action (walking). We predicted that reading times for literal and metaphorical sentences would show differential sensitivity to the match between the verb and the visual prime. In Experiment 1, we observed interactions between the prime type (walker or scrambled video) and the verb type (close or distant match) for both literal and metaphorical sentences, but with strikingly different patterns. We found no difference in the verb region of literal sentences for Close-Match verbs after walker or scrambled motion primes, but Distant-Match verbs were read more quickly following walker primes. For metaphorical sentences, the results were roughly reversed, with Distant-Match verbs being read more slowly following a walker compared to scrambled motion. In Experiment 2, we observed a similar pattern following still image primes, though critical interactions emerged later in the sentence. We interpret these findings as evidence for shared recruitment of cognitive and neural mechanisms for processing visual and verbal biological motion information. Metaphoric language using biological motion verbs may recruit neurocognitive mechanisms similar to those used in processing literal language but be represented in a less-specific way. PMID:25538604
Task-selective memory effects for successfully implemented encoding strategies.
Leshikar, Eric D; Duarte, Audrey; Hertzog, Christopher
2012-01-01
Previous behavioral evidence suggests that instructed strategy use benefits associative memory formation in paired associate tasks. Two such effective encoding strategies--visual imagery and sentence generation--facilitate memory through the production of different types of mediators (e.g., mental images and sentences). Neuroimaging evidence suggests that regions of the brain support memory reflecting the mental operations engaged at the time of study. That work, however, has not taken into account self-reported encoding task success (i.e., whether participants successfully generated a mediator). It is unknown, therefore, whether task-selective memory effects specific to each strategy might be found when encoding strategies are successfully implemented. In this experiment, participants studied pairs of abstract nouns under either visual imagery or sentence generation encoding instructions. At the time of study, participants reported their success at generating a mediator. Outside of the scanner, participants further reported the quality of the generated mediator (e.g., images, sentences) for each word pair. We observed task-selective memory effects for visual imagery in the left middle occipital gyrus, the left precuneus, and the lingual gyrus. No such task-selective effects were observed for sentence generation. Intriguingly, activity at the time of study in the left precuneus was modulated by the self-reported quality (vividness) of the generated mental images with greater activity for trials given higher ratings of quality. These data suggest that regions of the brain support memory in accord with the encoding operations engaged at the time of study.
Engagement of the left extrastriate body area during body-part metaphor comprehension.
Lacey, Simon; Stilla, Randall; Deshpande, Gopikrishna; Zhao, Sinan; Stephens, Careese; McCormick, Kelly; Kemmerer, David; Sathian, K
2017-03-01
Grounded cognition explanations of metaphor comprehension predict activation of sensorimotor cortices relevant to the metaphor's source domain. We tested this prediction for body-part metaphors using functional magnetic resonance imaging while participants heard sentences containing metaphorical or literal references to body parts, and comparable control sentences. Localizer scans identified body-part-specific motor, somatosensory and visual cortical regions. Both subject- and item-wise analyses showed that, relative to control sentences, metaphorical but not literal sentences evoked limb metaphor-specific activity in the left extrastriate body area (EBA), paralleling the EBA's known visual limb-selectivity. The EBA focus exhibited resting-state functional connectivity with ipsilateral semantic processing regions. In some of these regions, the strength of resting-state connectivity correlated with individual preference for verbal processing. Effective connectivity analyses showed that, during metaphor comprehension, activity in some semantic regions drove that in the EBA. These results provide converging evidence for grounding of metaphor processing in domain-specific sensorimotor cortical activity.
ERIC Educational Resources Information Center
Kilickaya, Ferit; Krajka, Jaroslaw
2012-01-01
Both teacher- and learner-made computer visuals are quite extensively reported in Computer-Assisted Language Learning literature, for instance, filming interviews, soap operas or mini-documentaries, creating storyboard projects, authoring podcasts and vodcasts, designing digital stories. Such student-made digital assets are used to present to…
Repetition Blindness: Out of Sight or Out of Mind?
ERIC Educational Resources Information Center
Morris, Alison L.; Harris, Catherine L.
2004-01-01
Does repetition blindness represent a failure of perception or of memory? In Experiment 1, participants viewed rapid serial visual presentation (RSVP) sentences. When critical words (C1 and C2) were orthographically similar, C2 was frequently omitted from serial report; however, repetition priming for C2 on a postsentence lexical decision task was…
A Theory for the Neural Basis of Language Part 2: Simulation Studies of the Model
ERIC Educational Resources Information Center
Baron, R. J.
1974-01-01
Computer simulation studies of the proposed model are presented. Processes demonstrated are (1) verbally directed recall of visual experience; (2) understanding of verbal information; (3) aspects of learning and forgetting; (4) the dependence of recognition and understanding on context; and (5) elementary concepts of sentence production. (Author)
Word-Category Violations in Patients with Broca's Aphasia: An ERP Study
ERIC Educational Resources Information Center
Wassenaar, Marlies; Hagoort, Peter
2005-01-01
An event-related brain potential experiment was carried out to investigate on-line syntactic processing in patients with Broca's aphasia. Subjects were visually presented with sentences that were either syntactically correct or contained violations of word-category. Three groups of subjects were tested: Broca patients (N=11), non-aphasic patients…
Grammatical verb aspect and event roles in sentence processing.
Madden-Lombardi, Carol; Dominey, Peter Ford; Ventre-Dominey, Jocelyne
2017-01-01
Two experiments examine how grammatical verb aspect constrains our understanding of events. According to linguistic theory, an event described in the perfect aspect (John had opened the bottle) should evoke a mental representation of a finished event with focus on the resulting object, whereas an event described in the imperfective aspect (John was opening the bottle) should evoke a representation of the event as ongoing, including all stages of the event, and focusing all entities relevant to the ongoing action (instruments, objects, agents, locations, etc.). To test this idea, participants saw rebus sentences in the perfect and imperfective aspect, presented one word at a time, self-paced. In each sentence, the instrument and the recipient of the action were replaced by pictures (John was using/had used a *corkscrew* to open the *bottle* at the restaurant). Time to process the two images as well as speed and accuracy on sensibility judgments were measured. Although experimental sentences always made sense, half of the object and instrument pictures did not match the temporal constraints of the verb. For instance, in perfect sentences aspect-congruent trials presented an image of the corkscrew closed (no longer in-use) and the wine bottle fully open. The aspect-incongruent yet still sensible versions either replaced the corkscrew with an in-use corkscrew (open, in-hand) or the bottle image with a half-opened bottle. In this case, the participant would still respond "yes", but with longer expected response times. A three-way interaction among Verb Aspect, Sentence Role, and Temporal Match on image processing times showed that participants were faster to process images that matched rather than mismatched the aspect of the verb, especially for resulting objects in perfect sentences. A second experiment replicated and extended the results to confirm that this was not due to the placement of the object in the sentence. These two experiments extend previous research, showing how verb aspect drives not only the temporal structure of event representation, but also the focus on specific roles of the event. More generally, the findings of visual match during online sentence-picture processing are consistent with theories of perceptual simulation.
Understanding Metaphors: Is the Right Hemisphere Uniquely Involved?
ERIC Educational Resources Information Center
Kacinik, Natalie A.; Chiarello, Christine
2007-01-01
Two divided visual field priming experiments examined cerebral asymmetries for understanding metaphors varying in sentence constraint. Experiment 1 investigated ambiguous words (e.g., SWEET and BRIGHT) with literal and metaphoric meanings in ambiguous and unambiguous sentence contexts, while Experiment 2 involved standard metaphors (e.g., "The…
Neger, Thordis M.; Rietveld, Toni; Janse, Esther
2014-01-01
Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly. PMID:25225475
Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.
Coco, Moreno I; Keller, Frank; Malcolm, George L
2016-11-01
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices.
Speech identification in noise: Contribution of temporal, spectral, and visual speech cues.
Kim, Jeesun; Davis, Chris; Groot, Christopher
2009-12-01
This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to have only amplitude envelope cues or both amplitude envelope and spectral cues and were presented with/without visual speech. In Experiment 1, IEEE sentences were presented in quiet and noise. For in-quiet presentation, speech identification was enhanced by the addition of both spectral and visual speech cues. Due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place information and vowel height, whereas with visual speech, they facilitated lip rounding, with little impact on the transmission of place information.
Sensing the Sentence: An Embodied Simulation Approach to Rhetorical Grammar
ERIC Educational Resources Information Center
Rule, Hannah J.
2017-01-01
This article applies the neuroscientific concept of embodied simulation--the process of understanding language through visual, motor, and spatial modalities of the body--to rhetorical grammar and sentence-style pedagogies. Embodied simulation invigorates rhetorical grammar instruction by attuning writers to the felt effects of written language,…
Illusory correlation: a function of availability or representativeness heuristics?
MacDonald, M G
2000-08-01
The present study sought to investigate the illusory correlation phenomenon by experimentally manipulating the availability of information through the use of the "lag" effect (Madigan, 1969). Seventy-four university students voluntarily participated in this study. Similar to Starr and Katkin's (1969) methodology, subjects were visually presented with each possible combination of four experimental problem descriptions and four sentence completions that were paired and shown twice at each of four lags (i.e., with 0, 2, 8 and 20 intervening items). Subjects were required to make judgements concerning the frequency with which sentence completions and problem descriptions co-occurred. In agreement with previous research (Starr & Katkin, 1969), the illusory correlation effect was found for specific descriptions and sentence completions. Results also yielded a significant effect of lag for mean ratings between 0 and 2 lags; however, there was no reliable increase in judged co-occurrence at lags 8 and 20. Evidence failed to support the hypothesis that greater availability, through the experimental manipulation of lag, would result in increased frequency of co-occurrence judgements. Findings indicate that, in the present study, the illusory correlation effect is probably due to a situational bias based on the representativeness heuristic.
Audiovisual sentence recognition not predicted by susceptibility to the McGurk effect.
Van Engen, Kristin J; Xie, Zilong; Chandrasekaran, Bharath
2017-02-01
In noisy situations, visual information plays a critical role in the success of speech communication: listeners are better able to understand speech when they can see the speaker. Visual influence on auditory speech perception is also observed in the McGurk effect, in which discrepant visual information alters listeners' auditory perception of a spoken syllable. When hearing /ba/ while seeing a person saying /ga/, for example, listeners may report hearing /da/. Because these two phenomena have been assumed to arise from a common integration mechanism, the McGurk effect has often been used as a measure of audiovisual integration in speech perception. In this study, we test whether this assumed relationship exists within individual listeners. We measured participants' susceptibility to the McGurk illusion as well as their ability to identify sentences in noise across a range of signal-to-noise ratios in audio-only and audiovisual modalities. Our results do not show a relationship between listeners' McGurk susceptibility and their ability to use visual cues to understand spoken sentences in noise, suggesting that McGurk susceptibility may not be a valid measure of audiovisual integration in everyday speech processing.
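Setting the signal-to-noise ratios used in such sentence-in-noise tests typically amounts to scaling the masker relative to the speech RMS. A minimal sketch of that standard computation (generic stimulus-preparation code, not the authors'):

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the masker so the speech-to-noise RMS ratio equals snr_db, then mix."""
    noise = noise[:len(speech)]                   # trim masker to stimulus length
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + gain * noise

# Example with synthetic signals: a tone standing in for speech, white noise as masker.
fs = 16000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 220 * t)
noise = np.random.default_rng(0).standard_normal(2 * fs)
mixed = mix_at_snr(speech, noise, snr_db=-2.0)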
Cohn, Neil; Paczynski, Martin
2013-01-01
Agents consistently appear prior to Patients in sentences, manual signs, and drawings, and Agents are responded to faster when presented in visual depictions of events. We hypothesized that this “Agent advantage” reflects Agents’ role in event structure. We investigated this question by manipulating the depictions of Agents and Patients in preparatory actions in a wordless visual narrative. We found that Agents elicited a greater degree of predictions regarding upcoming events than Patients, that Agents are viewed longer than Patients, independent of serial order, and that visual depictions of actions are processed more quickly following the presentation of an Agent versus a Patient. Taken together these findings support the notion that Agents initiate the building of event representation. We suggest that Agent First orders facilitate the interpretation of events as they unfold and that the saliency of Agents within visual representations of events is driven by anticipation of upcoming events. PMID:23959023
Knoeferle, Pia; Urbach, Thomas P.; Kutas, Marta
2010-01-01
To re-establish picture-sentence verification – discredited possibly for its over-reliance on post-sentence response time (RT) measures – as a task for situated comprehension, we collected event-related brain potentials (ERPs) as participants read a subject-verb-object sentence, and RTs indicating whether or not the verb matched a previously depicted action. For mismatches (vs matches), speeded RTs were longer, verb N400s over centro-parietal scalp larger, and ERPs to the object noun more negative. RTs (congruence effect) correlated inversely with the centro-parietal verb N400s, and positively with the object ERP congruence effects. Verb N400s, object ERPs, and verbal working memory scores predicted more variance in RT effects (50%) than N400s alone. Thus, (1) verification processing is not all post-sentence; (2) simple priming cannot account for these results; and (3) verification tasks can inform studies of situated comprehension. PMID:20701712
Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker
2016-06-17
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667
Thurlow, W R
1980-01-01
Messages were presented which moved from right to left along an electronic alphabetic display which was varied in "window" size from 4 through 32 letter spaces. Deaf subjects signed the messages they perceived. Relatively few errors were made even at the highest rate of presentation, which corresponded to a typing rate of 60 words/min. It is concluded that many deaf persons can make effective use of a small visual display. A reduced cost is then possible for visual communication instruments for these people through reduced display size. Deaf subjects who can profit from a small display can be located by a sentence test administered by tape recorder which drives the display of the communication device by means of the standard code of the deaf teletype network.
Eye movements during listening reveal spontaneous grammatical processing.
Huette, Stephanie; Winter, Bodo; Matlock, Teenie; Ardell, David H; Spivey, Michael
2014-01-01
Recent eye-tracking research typically relies on constrained, goal-oriented visual contexts in which participants view a small array of objects on a computer screen and perform some overt decision or identification. Eye-tracking paradigms that use pictures as a measure of word or sentence comprehension are sometimes criticized as ecologically invalid because pictures and explicit tasks are not always present during language comprehension. This study compared the comprehension of sentences with two different grammatical forms: the past progressive (e.g., was walking), which emphasizes the ongoing nature of actions, and the simple past (e.g., walked), which emphasizes the end-state of an action. The results showed that the distribution and timing of eye movements mirror the underlying conceptual structure of this linguistic difference in the absence of any visual stimuli or task constraint: Fixations were shorter and saccades were more dispersed across the screen, as if thinking about more dynamic events when listening to the past progressive stories. Thus, eye movement data suggest that visual inputs or an explicit task are unnecessary to solicit analog representations of features such as movement, which could be a key perceptual component of grammatical comprehension.
The effect of simultaneous text on the recall of noise-degraded speech.
Grossman, Irina; Rajan, Ramesh
2017-05-01
Written and spoken language utilize the same processing system, enabling text to modulate speech processing. We investigated how simultaneously presented text affected speech recall in babble noise using a retrospective recall task. Participants were presented with text-speech sentence pairs in multitalker babble noise and then prompted to recall what they heard or what they read. In Experiment 1, sentence pairs were either congruent or incongruent and they were presented in silence or at 1 of 4 noise levels. Audio and Visual control groups were also tested with sentences presented in only 1 modality. Congruent text facilitated accurate recall of degraded speech; incongruent text had no effect. Text and speech were seldom confused for each other. A consideration of participants' language backgrounds found that monolingual English speakers outperformed early multilinguals at recalling degraded speech; however, the effects of text on speech processing were analogous. Experiment 2 considered whether the benefit provided by matching text was maintained when the congruency of the text and speech became more ambiguous, owing to the addition of partially mismatching text-speech sentence pairs that differed only on their final keyword and to the use of low signal-to-noise ratios. The experiment focused on monolingual English speakers; the results showed that even though participants commonly confused text for speech during incongruent text-speech pairings, these confusions could not fully account for the benefit provided by matching text. Thus, we uniquely demonstrate that congruent text benefits the recall of noise-degraded speech.
Video, An Extra Dimension to the Study of Literature.
ERIC Educational Resources Information Center
Bouman, Lenny
1996-01-01
Focuses on advantages of video as a tool in teaching literature in a foreign language class. Emphasizes that use of visual aids, such as video, can help the reader overcome his limitations in comprehending vocabulary meanings and context of sentences and lists two ways in which a film version of a story can be presented: in nonstop viewing or in…
Negation in context: Evidence from the visual world paradigm.
Orenes, Isabel; Moxey, Linda; Scheepers, Christoph; Santamaría, Carlos
2016-01-01
The literature assumes that negation is more difficult to understand than affirmation, but this might depend on the pragmatic context. The goal of this paper is to show that pragmatic knowledge modulates the unfolding processing of negation due to the previous activation of the negated situation. To test this, we used the visual world paradigm. In this task, we presented affirmative (e.g., her dad was rich) and negative sentences (e.g., her dad was not poor) while participants viewed two images of the affirmed and denied entities. The critical sentence in each item was preceded by one of three types of contexts: an inconsistent context (e.g., She supposed that her dad had little savings) that activates the negated situation (a poor man), a consistent context (e.g., She supposed that her dad had enough savings) that activates the actual situation (a rich man), or a neutral context (e.g., her dad lived on the other side of town) that activates neither of the two models previously suggested. The results corroborated our hypothesis. Pragmatics is implicated in the unfolding processing of negation. We found an increase in fixations on the target compared to the baseline for negative sentences at 800 ms in the neutral context, 600 ms in the inconsistent context, and 1450 ms in the consistent context. Thus, when the negated situation has been previously introduced via an inconsistent context, negation is facilitated.
Optimal linguistic expression in negotiations depends on visual appearance
Sakamoto, Maki; Kwon, Jinhwan; Tamada, Hikaru; Hirahara, Yumi
2018-01-01
We investigate the influence of the visual appearance of a negotiator on persuasiveness within the context of negotiations. Psychological experiments were conducted to quantitatively analyze the relationship between visual appearance and the use of language. Male and female participants were shown three female and male photographs, respectively. They were asked to report how they felt about each photograph using a seven-point semantic differential (SD) scale for six affective factors (positive impression, extraversion, intelligence, conscientiousness, emotional stability, and agreeableness). Participants then answered how they felt about each negotiation scenario (they were presented with pictures and a situation combined with negotiation sentences) using a seven-point SD scale for seven affective factors (positive impression, extraversion, intelligence, conscientiousness, emotional stability, agreeableness, and degree of persuasion). Two experiments were conducted using different participant groups depending on the negotiation situations. Photographs with good or bad appearances were found to show high or low degrees of persuasion, respectively. A multiple regression equation was obtained, indicating the importance of the three language factors (euphemistic, honorific, and sympathy expressions) to impressions made during negotiation. The result shows that there are optimal negotiation sentences based on various negotiation factors, such as visual appearance and use of language. For example, persons with good appearance might worsen their impression during negotiations by using certain language, although their initial impression was positive, and persons with bad appearance could effectively improve their impressions in negotiations through their use of language, although the final impressions of their negotiation counterpart might still be more negative than those for persons with good appearance. In contrast, the impressions made by persons of normal appearance were not easily affected by their use of language. The results of the present study have significant implications for future studies of effective negotiation strategies considering visual appearance as well as gender. PMID:29621361
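The multiple regression mentioned above relates language-factor ratings to the degree-of-persuasion rating. A minimal sketch of fitting such an equation by ordinary least squares; all ratings and variable names are hypothetical, not the study's data:

import numpy as np

# Hypothetical mean seven-point SD-scale scores per negotiation scenario:
# columns = euphemistic, honorific, and sympathy expression ratings.
X = np.array([[5.2, 4.8, 3.9],
              [3.1, 5.5, 4.2],
              [6.0, 3.7, 5.1],
              [4.4, 4.9, 4.6]])
y = np.array([5.0, 4.1, 5.6, 4.7])        # degree-of-persuasion ratings

# Add an intercept column and solve for coefficients by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *rest = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # [intercept, b_euphemistic, b_honorific, b_sympathy]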
The Role of Working Memory and Contextual Constraints in Children's Processing of Relative Clauses
ERIC Educational Resources Information Center
Weighall, Anna R.; Altmann, Gerry T. M.
2011-01-01
An auditory sentence comprehension task investigated the extent to which the integration of contextual and structural cues was mediated by verbal memory span with 32 English-speaking six- to eight-year-old children. Spoken relative clause sentences were accompanied by visual context pictures which fully (depicting the actions described within the…
Gradiency and Visual Context in Syntactic Garden-Paths
ERIC Educational Resources Information Center
Farmer, Thomas A.; Anderson, Sarah E.; Spivey, Michael J.
2007-01-01
Through recording the streaming x- and y-coordinates of computer-mouse movements, we report evidence that visual context provides an immediate constraint on the resolution of syntactic ambiguity in the visual-world paradigm. This finding converges with previous eye-tracking results that support a constraint-based account of sentence processing, in…
Electrophysiological signatures of phonological and semantic maintenance in sentence repetition.
Meltzer, Jed A; Kielar, Aneta; Panamsky, Lilia; Links, Kira A; Deschamps, Tiffany; Leigh, Rosie C
2017-08-01
Verbal short-term memory comprises resources for phonological rehearsal, which have been characterized anatomically, and for maintenance of semantic information, which are less understood. Sentence repetition tasks tap both processes interactively. To distinguish brain activity involved in phonological vs. semantic maintenance, we recorded magnetoencephalography during a sentence repetition task, incorporating three manipulations emphasizing one mechanism over the other. Participants heard sentences or word lists and attempted to repeat them verbatim after a 5-second delay. After MEG, participants completed a cued recall task testing how much they remembered of each sentence. Greater semantic engagement relative to phonological rehearsal was hypothesized for 1) sentences vs. word lists, 2) concrete vs. abstract sentences, and 3) well recalled vs. poorly recalled sentences. During auditory perception and the memory delay period, we found highly left-lateralized activation in the form of 8-30 Hz event-related desynchronization. Compared to abstract sentences, concrete sentences recruited posterior temporal cortex bilaterally, demonstrating a neural signature for the engagement of visual imagery in sentence maintenance. Maintenance of arbitrary word lists recruited right hemisphere dorsal regions, reflecting increased demands on phonological rehearsal. Sentences that were ultimately poorly recalled in the post-test also elicited extra right hemisphere activation when they were held in short-term memory, suggesting increased demands on phonological resources. Frontal midline theta oscillations also reflected phonological rather than semantic demand, being increased for word lists and poorly recalled sentences. These findings highlight distinct neural resources for phonological and semantic maintenance, with phonological maintenance associated with stronger oscillatory modulations.
Time course of action representations evoked during sentence comprehension.
Heard, Alison W; Masson, Michael E J; Bub, Daniel N
2015-03-01
The nature of hand-action representations evoked during language comprehension was investigated using a variant of the visual-world paradigm in which eye fixations were monitored while subjects viewed a screen displaying four hand postures and listened to sentences describing an actor using or lifting a manipulable object. Displayed postures were related to either a functional (using) or volumetric (lifting) interaction with an object that matched or did not match the object mentioned in the sentence. Subjects were instructed to select the hand posture that matched the action described in the sentence. Even before the manipulable object was mentioned in the sentence, some sentence contexts allowed subjects to infer the object's identity and the type of action performed with it, and eye fixations immediately favored the corresponding hand posture. This effect was assumed to be the result of ongoing motor or perceptual imagery in which the action described in the sentence was mentally simulated. In addition, the hand posture related to the manipulable object mentioned in a sentence, but not related to the described action (e.g., a writing posture in the context of a sentence that describes lifting, but not using, a pencil), was favored over other hand postures not related to the object. This effect was attributed to motor resonance arising from conceptual processing of the manipulable object, without regard to the remainder of the sentence context.
ERIC Educational Resources Information Center
Amenta, Simona; Marelli, Marco; Crepaldi, Davide
2015-01-01
In this eye-tracking study, we investigated how semantics inform morphological analysis at the early stages of visual word identification in sentence reading. We exploited a feature of several derived Italian words, that is, that they can be read in a "morphologically transparent" way or in a "morphologically opaque" way…
ERIC Educational Resources Information Center
Devauchelle, Anne-Dominique; Oppenheim, Catherine; Rizzi, Luigi; Dehaene, Stanislas; Pallier, Christophe
2009-01-01
Priming effects have been well documented in behavioral psycholinguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a…
Task relevance induces momentary changes in the functional visual field during reading.
Kaakinen, Johanna K; Hyönä, Jukka
2014-02-01
In the research reported here, we examined whether task demands can induce momentary tunnel vision during reading. More specifically, we examined whether the size of the functional visual field depends on task relevance. Forty participants read an expository text with a specific task in mind while their eye movements were recorded. A display-change paradigm with random-letter strings as preview masks was used to study the size of the functional visual field within sentences that contained task-relevant and task-irrelevant information. The results showed that orthographic parafoveal-on-foveal effects and preview benefits were observed for words within task-irrelevant but not task-relevant sentences. The results indicate that the size of the functional visual field is flexible and depends on the momentary processing demands of a reading task. The higher cognitive processing requirements experienced when reading task-relevant text rather than task-irrelevant text induce momentary tunnel vision, which narrows the functional visual field.
Koeritzer, Margaret A; Rogers, Chad S; Van Engen, Kristin J; Peelle, Jonathan E
2018-03-15
The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. https://doi.org/10.23641/asha.5848059.
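The recognition-memory index d' used above comes from signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch of that textbook computation (with a common log-linear correction for rates of 0 or 1; not the authors' analysis code):

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection d' with a log-linear (add 0.5) correction."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 40 hits / 10 misses on old sentences, 12 false alarms / 38 correct rejections.
print(d_prime(40, 10, 12, 38))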
Cannito, Michael P; Chorna, Lesya B; Kahane, Joel C; Dworkin, James P
2014-05-01
This study evaluated the hypotheses that sentence production by speakers with adductor (AD) and abductor (AB) spasmodic dysphonia (SD) may be differentially influenced by consonant voicing and manner features, in comparison with healthy, matched, nondysphonic controls. This was a prospective, single-blind study, using a between-groups, repeated measures design for the independent variables of perceived voice quality and sentence duration. Sixteen subjects with ADSD and 10 subjects with ABSD, as well as 26 matched healthy controls produced four short, simple sentences that were systematically loaded with voiced or voiceless consonants of either obstruent or continuant manner categories. Experienced voice clinicians, who were "blind" as to speakers' group affiliations, used visual analog scaling to judge the overall voice quality of each sentence. Acoustic sentence durations were also measured. Speakers with ABSD or ADSD demonstrated significantly poorer than normal voice quality on all sentences. Speakers with ABSD exhibited longer than normal duration for voiceless consonant sentences. Speakers with ADSD had poorer voice quality for voiced than for voiceless consonant sentences. Speakers with ABSD had longer durations for voiceless than for voiced consonant sentences. The two subtypes of SD exhibit differential performance on the basis of consonant voicing in short, simple sentences; however, each subgroup manifested voicing-related differences on a different variable (voice quality vs sentence duration). Findings suggest different underlying pathophysiological mechanisms for ABSD and ADSD. Findings also support inclusion of short, simple sentences containing voiced or voiceless consonants as part of the diagnostic protocol for SD, with measurement of sentence duration in addition to judgments of voice quality severity.
Helping Remedial Readers Master the Reading Vocabulary through a Seven Step Method.
ERIC Educational Resources Information Center
Aaron, Robert L.
1981-01-01
An outline of seven important steps for teaching vocabulary development includes components of language development, visual memory, visual-auditory perception, speeded recall, spelling, reading the word in a sentence, and word comprehension in written context. (JN)
Flying under the radar: figurative language impairments in focal lesion patients
Ianni, Geena R.; Cardillo, Eileen R.; McQuire, Marguerite; Chatterjee, Anjan
2014-01-01
Despite the prevalent and natural use of metaphor in everyday language, the neural basis of this powerful communication device remains poorly understood. Early studies of brain-injured patients suggested the right hemisphere plays a critical role in metaphor comprehension, but more recent patient and neuroimaging studies do not consistently support this hypothesis. One explanation for this discrepancy is the challenge in designing optimal tasks for brain-injured populations. As traditional aphasia assessments do not assess figurative language comprehension, we designed a new metaphor comprehension task to consider whether impaired metaphor processing is missed by standard clinical assessments. Stimuli consisted of 60 pairs of moderately familiar metaphors and closely matched literal sentences. Sentences were presented visually in a randomized order, followed by four adjective-noun answer choices (target + three foil types). Participants were instructed to select the phrase that best matched the meaning of the sentence. We report the performance of three focal lesion patients and a group of 12 healthy, older controls. Controls performed near ceiling in both conditions, with slightly more accurate performance on literal than metaphoric sentences. While the Western Aphasia Battery (Kertesz, 1982) and the objects and actions naming battery (Druks and Masterson, 2000) indicated minimal to no language difficulty, our metaphor comprehension task indicated three different profiles of metaphor comprehension impairment in the patients’ performance. Single case statistics revealed comparable impairment on metaphoric and literal sentences, disproportionately greater impairment on metaphors than literal sentences, and selective impairment on metaphors. We conclude our task reveals that patients can have selective metaphor comprehension deficits. These deficits are not captured by traditional neuropsychological language assessments, suggesting overlooked communication difficulties. PMID:25404906
Effects of sentence-structure complexity on speech initiation time and disfluency.
Tsiamtsiouris, Jim; Cairns, Helen Smith
2013-03-01
There is general agreement that stuttering is caused by a variety of factors; language formulation and speech motor control are two factors implicated in previous research, yet the exact nature of their effects is still not well understood. Our goal was to test the hypothesis that sentences of high structural complexity would incur greater processing costs than sentences of low structural complexity, and that these costs would be higher for adults who stutter than for adults who do not stutter. Fluent adults and adults who stutter participated in an experiment that required memorization of a sentence classified as low or high structural complexity, followed by production of that sentence upon a visual cue. Both groups of speakers initiated most sentences significantly faster in the low structural complexity condition than in the high structural complexity condition. Adults who stutter were overall slower in speech initiation than were fluent speakers, but there were no significant interactions between complexity and group. However, adults who stutter produced significantly more disfluencies in sentences of high structural complexity than in those of low complexity. After reading this article, the learner will be able to: (a) identify integral parts of all well-known models of adult sentence production; (b) summarize the way that sentence structure might negatively influence speech production processes; (c) discuss whether sentence structure influences speech initiation time and disfluencies. Copyright © 2012 Elsevier Inc. All rights reserved.
Borovsky, Arielle; Burns, Erin; Elman, Jeffrey L.; Evans, Julia L.
2015-01-01
One remarkable characteristic of speech comprehension in typically developing (TD) children and adults is the speed with which the listener can integrate information across multiple lexical items to anticipate upcoming referents. Although children with Specific Language Impairment (SLI) show lexical deficits (Sheng & McGregor, 2010) and slower speed of processing (Leonard et al., 2007), relatively little is known about how these deficits manifest in real-time sentence comprehension. In this study, we examine lexical activation in the comprehension of simple transitive sentences in adolescents with a history of SLI and age-matched, TD peers. Participants listened to sentences of the form Article-Agent-Action-Article-Theme (e.g., The pirate chases the ship) while viewing pictures of four objects that varied in their relationship to the Agent and Action of the sentence (i.e., Target, Agent-Related, Action-Related, and Unrelated). Adolescents with SLI were as fast as their TD peers to fixate on the sentence’s final item (the Target) but differed in their post-action-onset visual fixations to the Action-Related item. Additional exploratory analyses of the spatial distribution of their visual fixations revealed that the SLI group had a qualitatively different pattern of fixations to object images than did the control group. The findings indicate that adolescents with SLI integrate lexical information across words to anticipate likely or expected meanings with the same relative fluency and speed as do their TD peers. However, the failure of the SLI group to show increased fixations to Action-Related items after the onset of the action suggests lexical integration deficits that result in failure to consider alternate sentence interpretations. PMID:24099807
Scan Patterns Predict Sentence Production in the Cross-Modal Processing of Visual Scenes
ERIC Educational Resources Information Center
Coco, Moreno I.; Keller, Frank
2012-01-01
Most everyday tasks involve multiple modalities, which raises the question of how the processing of these modalities is coordinated by the cognitive system. In this paper, we focus on the coordination of visual attention and linguistic processing during speaking. Previous research has shown that objects in a visual scene are fixated before they…
Acoustic and perceptual effects of overall F0 range in a lexical pitch accent distinction
NASA Astrophysics Data System (ADS)
Wade, Travis
2002-05-01
A speaker's overall fundamental frequency range is generally considered a variable, nonlinguistic element of intonation. This study examined the precision with which overall F0 is predictable based on previous intonational context and the extent to which it may be perceptually significant. Speakers of Tokyo Japanese produced pairs of sentences differing lexically only in the presence or absence of a single pitch accent as responses to visual and prerecorded speech cues presented in an interactive manner. F0 placement of high tones (previously observed to be relatively variable in pitch contours) was found to be consistent across speakers and uniformly dependent on the intonation of the different sentences used as cues. In a subsequent perception experiment, continuous manipulations of these same sentences between typical accented and typical non-accent-containing versions were presented to Japanese listeners for lexical identification. Results showed that listeners' perception was not significantly altered in compensation for artificial manipulation of preceding intonation. Implications are discussed within an autosegmental analysis of tone. The current results are consistent with the notion that pitch range (i.e., specific vertical locations of tonal peaks) does not simply vary gradiently across speakers and situations but constitutes a predictable part of the phonetic specification of tones.
Visual functions and disability in diabetic retinopathy patients
Shrestha, Gauri Shankar; Kaiti, Raju
2013-01-01
Purpose: This study was undertaken to find correlations between visual functions and visual disabilities in patients with diabetic retinopathy. Method: A cross-sectional study was carried out among 38 visually impaired diabetic retinopathy subjects at the Low Vision Clinic of B.P. Koirala Lions Centre for Ophthalmic Studies, Kathmandu. The subjects underwent assessment of distance and near visual acuity, objective and subjective refraction, contrast sensitivity, color vision, and central and peripheral visual fields. The visual disabilities of each subject in their daily lives were evaluated using a standard questionnaire. Multiple regression analysis between visual functions and visual disabilities index was assessed. Result: The majority of subjects (42.1%) were of the age group 60–70 years. Best corrected visual acuity was found to be 0.73 ± 0.2 in the better eye and 0.93 ± 0.27 in the worse eye, which was significantly different at p = 0.002. Visual disability scores were significantly higher for legibility of letters (1.2 ± 0.3) and sentences (1.4 ± 0.4), and least for clothing (0.7 ± 0.3). Visual disability index for legibility of letters and sentences was significantly correlated with near visual acuity and peripheral visual field. Contrast sensitivity was also significantly correlated with the visual disability index and total scores. Conclusion: Impairment of near visual acuity, contrast sensitivity, and peripheral visual field correlated significantly with different types of visual disability. Hence, these clinical tests should be an integral part of the visual assessment of diabetic eyes. PMID:24646899
Visual functions and disability in diabetic retinopathy patients.
Shrestha, Gauri Shankar; Kaiti, Raju
2014-01-01
This study was undertaken to find correlations between visual functions and visual disabilities in patients with diabetic retinopathy. A cross-sectional study was carried out among 38 visually impaired diabetic retinopathy subjects at the Low Vision Clinic of B.P. Koirala Lions Centre for Ophthalmic Studies, Kathmandu. The subjects underwent assessment of distance and near visual acuity, objective and subjective refraction, contrast sensitivity, color vision, and central and peripheral visual fields. The visual disabilities of each subject in their daily lives were evaluated using a standard questionnaire. Multiple regression analysis between visual functions and visual disabilities index was assessed. The majority of subjects (42.1%) were of the age group 60-70 years. Best corrected visual acuity was found to be 0.73±0.2 in the better eye and 0.93±0.27 in the worse eye, which was significantly different at p=0.002. Visual disability scores were significantly higher for legibility of letters (1.2±0.3) and sentences (1.4±0.4), and least for clothing (0.7±0.3). Visual disability index for legibility of letters and sentences was significantly correlated with near visual acuity and peripheral visual field. Contrast sensitivity was also significantly correlated with the visual disability index, and total scores. Impairment of near visual acuity, contrast sensitivity, and peripheral visual field correlated significantly with different types of visual disability. Hence, these clinical tests should be an integral part of the visual assessment of diabetic eyes. Copyright © 2013 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.
Memory Effects in Syntactic ERP Tasks
ERIC Educational Resources Information Center
Sabourin, Laura; Stowe, Laurie
2004-01-01
The study presented here investigated the role of memory in normal sentence processing by looking at ERP effects to normal sentences and sentences containing grammatical violations. Sentences where the critical word was in the middle of the sentence were compared to sentences where the critical word always occurred in sentence-final position.…
Design of short Italian sentences to assess near vision performance.
Calossi, Antonio; Boccardo, Laura; Fossetti, Alessandro; Radner, Wolfgang
2014-01-01
To develop and validate 28 short Italian sentences for the construction of the Italian version of the Radner Reading Chart, to simultaneously measure near visual acuity and reading speed. 41 sentences were constructed in the Italian language, following the procedure defined by Radner, to obtain "sentence optotypes" with comparable structure and with the same lexical and grammatical difficulty. Sentences were statistically selected and tested in 211 normal, non-presbyopic, native Italian-speaking persons. The most equally matched sentences in terms of reading speed and number of reading errors were selected. To assess the validity of the reading speed results obtained with the 28 selected short sentences, we compared the reading speed and reading errors with the average obtained by reading two long 4th-grade paragraphs (97 and 90 words) under the same conditions. The overall mean reading speed of the tested persons was 189 ± 26 wpm. The 28 sentences most similar in terms of reading times were selected, achieving a coefficient of variation (the relative SD) of 2.2%. The reliability analyses yielded an overall Cronbach's alpha coefficient of 0.98. The correlation between the short sentences and the long paragraphs was high (r = 0.85, P < 0.0001). The 28 short single Italian sentence optotypes were highly comparable in syntactical structure, number, position, and length of words, lexical difficulty, and reading length. The resulting Italian Radner Reading Chart is precise (high consistency) and practical (short sentences) and is therefore useful in research and clinical practice for simultaneously measuring near reading acuity and reading speed. Copyright © 2013 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.
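The two selection statistics named in the abstract are straightforward to compute. Below is a minimal sketch, assuming a participants-by-sentences matrix of reading times; the simulated data are illustrative placeholders, not the study's measurements.

```python
import numpy as np

def coefficient_of_variation(sentence_means):
    """Relative SD (in %) of per-sentence mean reading times."""
    return 100 * np.std(sentence_means, ddof=1) / np.mean(sentence_means)

def cronbach_alpha(times):
    """Cronbach's alpha for a (participants x sentences) matrix."""
    times = np.asarray(times, dtype=float)
    k = times.shape[1]                         # number of sentences (items)
    item_vars = times.var(axis=0, ddof=1)      # variance of each sentence
    total_var = times.sum(axis=1).var(ddof=1)  # variance of participant totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 211 readers x 28 sentences, reading times in seconds.
rng = np.random.default_rng(0)
times = rng.normal(loc=4.5, scale=0.5, size=(211, 28))

print(f"CV of sentence means: {coefficient_of_variation(times.mean(axis=0)):.1f}%")
print(f"Cronbach's alpha:     {cronbach_alpha(times):.2f}")
```

With real data, the subset of sentences whose mean reading times keep the CV lowest would be retained, which is how a coefficient of variation as low as 2.2% across the 28 selected sentences can be achieved.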
Emotion computing using Word Mover's Distance features based on Ren_CECps.
Ren, Fuji; Liu, Ning
2018-01-01
In this paper, we propose an emotion-separation method (SeTF·IDF) to assign the emotion labels of sentences with different values, which yields a better visual effect than the values represented by TF·IDF in the visualization of Ren_CECps, a multi-label Chinese emotional corpus. Inspired by the marked improvement in the visualization map produced by the changed distances among the sentences, we are, to our knowledge, the first group to utilize the Word Mover's Distance (WMD) algorithm as a feature representation in Chinese text emotion classification. Our experiments show that in both the 80%-training/20%-testing and 50%-training/50%-testing splits of Ren_CECps, WMD features obtain the best F1 scores and show a greater improvement than feature vectors of the same dimension obtained by a dimension-reduced TF·IDF method. Comparison experiments on an English corpus also show the effectiveness of WMD features in the cross-language field.
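For readers unfamiliar with WMD as a feature representation, the sketch below shows one common construction: describing each sentence by its WMD to a small set of reference ("anchor") sentences, computed with gensim. The anchor scheme, vector file path, and tokenized examples are illustrative assumptions, not the paper's exact recipe.

```python
# Requires gensim plus the POT (or pyemd) package for wmdistance.
from gensim.models import KeyedVectors

# Hypothetical path; any word2vec-format embedding file would do.
kv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

# Illustrative emotion-anchor sentences (pre-tokenized).
anchors = [
    ["this", "makes", "me", "very", "happy"],         # joy anchor
    ["i", "am", "filled", "with", "deep", "sorrow"],  # sorrow anchor
]

def wmd_features(tokens, anchors, kv):
    """Represent a sentence by its WMD to each anchor sentence."""
    return [kv.wmdistance(tokens, anchor) for anchor in anchors]

features = wmd_features(["today", "is", "a", "joyful", "day"], anchors, kv)
# `features` can then be fed to any standard classifier (SVM, logistic
# regression) for multi-label emotion prediction.
```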
Sodhi-Berry, Nita; Knuiman, Matthew; Preen, David B; Alan, Janine; Morgan, Vera A
2015-12-01
Little is known about whether or how offenders use mental health services after sentence completion. This study aimed to determine the likelihood of such service use by adult (18-44 years) first-time offenders up to 5 years after sentence completion, and to identify possible predictor variables. Pre-sentence and post-sentence mental health service use was obtained from whole-population linked administrative data on 23,661 adult offenders. Cox proportional hazard models were used to determine which socio-demographic, offending and pre-sentence health service variables were associated with such post-sentence service use. The estimated 5-year probability of any post-sentence mental health service use was 12% for offenders who had not previously used such services, but still only 42% for those who had. For the latter, the best predictors of post-sentence use were past psychiatric diagnosis and history of self-harm; history of self-harm also predicted post-sentence use among new mental health service users, as did past physical illness. Indigenous offenders had a greater likelihood of service use for any mental disorder or for substance use disorders than non-Indigenous offenders, irrespective of pre-sentence use. Among those with pre-sentence service contact, imprisoned offenders were less likely to use mental health services after sentence than those under community penalties; in the absence of such contact, socio-economic disadvantage and geographic accessibility were associated with a greater likelihood of post-sentence use. Our findings highlight the discontinuity of mental healthcare for most sentenced offenders, but especially prisoners, and suggest a need for better management strategies for these vulnerable groups with mental disorders. Copyright © 2014 John Wiley & Sons, Ltd.
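As a concrete illustration of the modelling approach named above, here is a minimal Cox proportional hazards sketch using the lifelines package; the column names and toy data are illustrative stand-ins, not the study's linked administrative variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy data: time to first post-sentence mental health service contact (years),
# censored at 5 years of follow-up.
df = pd.DataFrame({
    "years_to_service_use": [0.8, 5.0, 2.3, 5.0, 1.1, 3.7],
    "used_service":         [1,   0,   1,   0,   1,   1],   # event indicator
    "prior_diagnosis":      [1,   0,   1,   0,   0,   1],
    "history_self_harm":    [1,   0,   0,   0,   1,   1],
    "imprisoned":           [0,   1,   0,   1,   0,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_service_use", event_col="used_service")
cph.print_summary()  # hazard ratios for each predictor
```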
Influence of Visual Information on the Intelligibility of Dysarthric Speech
ERIC Educational Resources Information Center
Keintz, Connie K.; Bunton, Kate; Hoit, Jeannette D.
2007-01-01
Purpose: To examine the influence of visual information on speech intelligibility for a group of speakers with dysarthria associated with Parkinson's disease. Method: Eight speakers with Parkinson's disease and dysarthria were recorded while they read sentences. Speakers performed a concurrent manual task to facilitate typical speech production.…
Structured Natural-Language Descriptions for Semantic Content Retrieval of Visual Materials.
ERIC Educational Resources Information Center
Tam, A. M.; Leung, C. H. C.
2001-01-01
Proposes a structure for natural language descriptions of the semantic content of visual materials that requires descriptions to be (modified) keywords, phrases, or simple sentences, with components that are grammatical relations common to many languages. This structure makes it easy to implement a collection's descriptions as a relational…
ERIC Educational Resources Information Center
Chung-Fat-Yim, Ashley; Peterson, Jordan B.; Mar, Raymond A.
2017-01-01
Previous studies on discourse have employed a self-paced sentence-by-sentence paradigm to present text and record reading times. However, presenting discourse this way does not mirror real-world reading conditions; for example, this paradigm prevents regressions to earlier portions of the text. The purpose of the present study is to investigate…
Most, Tova; Aviner, Chen
2009-01-01
This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust. The emotional content was placed upon the same neutral sentence. The stimuli were presented in auditory, visual, and combined auditory-visual modes. The results revealed better auditory identification by the participants with NH in comparison to all groups of participants with hearing loss (HL). No differences were found among the groups with HL in each of the 3 modes. Although auditory-visual perception was better than visual-only perception for the participants with NH, no such differentiation was found among the participants with HL. The results question the efficiency of some currently used CIs in providing the acoustic cues required to identify the speaker's emotional state.
Effects of word frequency and modality on sentence comprehension impairments in people with aphasia.
DeDe, Gayle
2012-05-01
It is well known that people with aphasia have sentence comprehension impairments. The present study investigated whether lexical factors contribute to sentence comprehension impairments in both the auditory and written modalities, using online measures of sentence processing. People with aphasia and non-brain-damaged controls participated in the experiment (n = 8 per group). Twenty-one sentence pairs containing high- and low-frequency words were presented in self-paced listening and reading tasks. The sentences were syntactically simple and differed only in the critical words. The dependent variables were response times for critical segments of the sentence and accuracy on the comprehension questions. The results showed that word frequency influences performance on measures of sentence comprehension in people with aphasia. The accuracy data on the comprehension questions suggested that people with aphasia have more difficulty understanding sentences containing low-frequency words in the written compared to the auditory modality. Both group and single-case analyses of the response time data also indicated that people with aphasia experience more difficulty with reading than listening. Sentence comprehension in people with aphasia is influenced by word frequency and presentation modality.
From a Gloss to a Learning Tool: Does Visual Aids Enhance Better Sentence Comprehension?
ERIC Educational Resources Information Center
Sato, Takeshi; Suzuki, Akio
2012-01-01
The aim of this study is to optimize CALL environments as a learning tool rather than a gloss, focusing on the learning of polysemous words which refer to spatial relationship between objects. A lot of research has already been conducted to examine the efficacy of visual glosses while reading L2 texts and has reported that visual glosses can be…
Word Order and Voice Influence the Timing of Verb Planning in German Sentence Production.
Sauppe, Sebastian
2017-01-01
Theories of incremental sentence production make different assumptions about when speakers encode information about described events and, accordingly, when verbs are selected. An eye tracking experiment on German testing the predictions from linear and hierarchical incrementality about the timing of event encoding and verb planning is reported. In the experiment, participants described depictions of two-participant events with sentences that differed in voice and word order. Verb-medial active sentences and actives and passives with sentence-final verbs were compared. Linear incrementality predicts that sentences with verbs placed early differ from verb-final sentences because verbs are assumed to only be planned shortly before they are articulated. By contrast, hierarchical incrementality assumes that speakers start planning with relational encoding of the event. A weak version of hierarchical incrementality assumes that only the action is encoded at the outset of formulation and selection of lexical verbs only occurs shortly before they are articulated, leading to the prediction of different fixation patterns for verb-medial and verb-final sentences. A strong version of hierarchical incrementality predicts no differences between verb-medial and verb-final sentences because it assumes that verbs are always lexically selected early in the formulation process. Growth curve analyses of fixations to agent and patient characters in the described pictures, together with an influence of character humanness but not of visual salience on speakers' choice of active or passive voice, suggest that while verb planning does not necessarily occur early during formulation, speakers of German always create an event representation early.
Ma, Tengfei; Chen, Ran; Dunlap, Susan; Chen, Baoguo
2016-01-01
This paper presents the results of an experiment that investigated the effects of the number and presentation order of high-constraint sentences on the semantic processing of unknown second language (L2) words (pseudowords) through reading. All participants were Chinese native speakers who learned English as a foreign language. In the experiment, sentence constraint and the order of sentences of different constraint were manipulated in English sentences, as was the L2 proficiency level of participants. We found that a larger number of high-constraint sentences supported L2 word learning, except in the condition in which the high-constraint exposure was presented first. Moreover, when the number of high-constraint sentences was the same, learning was significantly better when the first exposure was a high-constraint exposure. No proficiency-level effects were found. Our results provide direct evidence that L2 word learning benefits from high-quality language input and from first presentations of such input.
Association with emotional information alters subsequent processing of neutral faces
Riggs, Lily; Fujioka, Takako; Chan, Jessica; McQuiggan, Douglas A.; Anderson, Adam K.; Ryan, Jennifer D.
2014-01-01
The processing of emotional as compared to neutral information is associated with different patterns in eye movement and neural activity. However, the ‘emotionality’ of a stimulus can be conveyed not only by its physical properties, but also by the information that is presented with it. There is very limited work examining how emotional information may influence the immediate perceptual processing of otherwise neutral information. We examined how presenting an emotion label for a neutral face may influence subsequent processing by using eye movement monitoring (EMM) and magnetoencephalography (MEG) simultaneously. Participants viewed a series of faces with neutral expressions. Each face was followed by a unique negative or neutral sentence to describe that person, and then the same face was presented in isolation again. Viewing of faces paired with a negative sentence was associated with increased early viewing of the eye region and increased neural activity between 600 and 1200 ms in emotion processing regions such as the cingulate, medial prefrontal cortex, and amygdala, as well as posterior regions such as the precuneus and occipital cortex. Viewing of faces paired with a neutral sentence was associated with increased activity in the parahippocampal gyrus during the same time window. By monitoring behavior and neural activity within the same paradigm, these findings demonstrate that emotional information alters subsequent visual scanning and the neural systems that are presumably invoked to maintain a representation of the neutral information along with its emotional details. PMID:25566024
Sentence Combining: Everything for Everybody or Something for Somebody.
ERIC Educational Resources Information Center
Ney, James W.
Sentence combining exercises present material to the students to be mastered by processes similar to memorization. By taking ideas in short sentences and compacting them into larger sentences, students become familiar with the relationships between the ideas in the short sentences. At its best, sentence combining is a process that requires the…
Miller, Christi W; Stewart, Erin K; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A; Tremblay, Kelly
2017-08-16
This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types, as well as in the presence of visual cues. Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was conducted under unaided conditions. A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase.
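A minimal sketch of the kind of linear mixed model reported above, fit with statsmodels; subjects enter as a random intercept, and the variable names and simulated data are illustrative stand-ins for the study's measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_trials = 76, 8
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "visual_cues": np.tile([0, 1], n_subj * n_trials // 2),
    "noise_type": np.tile(["steady", "babble"], n_subj * n_trials // 2),
    "pta": np.repeat(rng.normal(45, 10, n_subj), n_trials),  # pure-tone average
})
# Simulated scores: a visual-cue benefit and a pure-tone-average penalty.
df["score"] = (50 + 15 * df["visual_cues"] - 0.5 * df["pta"]
               + rng.normal(0, 5, len(df)))

model = smf.mixedlm("score ~ visual_cues + noise_type + pta",
                    data=df, groups=df["subject"]).fit()
print(model.summary())  # fixed-effect estimates per predictor
```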
Use of Closed-Circuit Television with a Severely Visually Impaired Young Child.
ERIC Educational Resources Information Center
Miller-Wood, D. J.; And Others
1990-01-01
A closed-circuit television system was used with a five-year-old girl with severely limited vision to develop visual skills, especially skills related to concept formation. At the end of training, the girl could recognize lines, forms, shapes, letters, numbers, and words and could read short sentences. (Author/JDD)
Visual or Auditory Processing Style and Strategy Effectiveness.
ERIC Educational Resources Information Center
Weed, Keri; Ryan, Ellen Bouchard
In a study that investigated differences in the processing styles of beginning readers, a Pictograph Sentence Memory Test (PSMT) was administered to first and second grade students to determine their processing style as well as to assess instructional effects. Based on their responses to the PSMT, the children were classified as either visual or…
Brandão, Lenisa; Monção, Ana Maria; Andersson, Richard; Holmqvist, Kenneth
2014-01-01
The goal of this study was to investigate whether on-topic visual cues can serve as aids for the maintenance of discourse coherence and informativeness in autobiographical narratives of persons with Alzheimer's disease (AD). The experiment consisted of three randomized conversation conditions: one without prompts, showing a blank computer screen; an on-topic condition, showing a picture and a sentence about the conversation; and an off-topic condition, showing a picture and a sentence which were unrelated to the conversation. Speech was recorded while visual attention was examined using eye tracking to measure how long participants looked at cues and the face of the listener. Results suggest that interventions using visual cues in the form of images and written information are useful to improve discourse informativeness in AD. This study demonstrated the potential of using images and short written messages as means of compensating for the cognitive deficits which underlie uninformative discourse in AD. Future studies should further investigate the efficacy of language interventions based in the use of these compensation strategies for AD patients and their family members and friends.
Payne, Brennan R; Stine-Morrow, Elizabeth A L
2014-06-01
We report a secondary data analysis investigating age differences in the effects of clause and sentence wrap-up on reading time distributions during sentence comprehension. Residual word-by-word self-paced reading times were fit to the ex-Gaussian distribution to examine age differences in the effects of clause and sentence wrap-up on both the location and shape of participants' reaction time (RT) distributions. The ex-Gaussian distribution showed good fit to the data in both younger and older adults. Sentence wrap-up increased the central tendency, the variability, and the tail of the distribution, and these effects were exaggerated among the old. In contrast, clause wrap-up influenced the tail of the distribution only, and did so differentially for older adults. Effects were confirmed via nonparametric vincentile plots. Individual differences in visual acuity, working memory, speed of processing, and verbal ability were differentially related to ex-Gaussian parameters reflecting wrap-up effects on underlying reading time distributions. These findings argue against simple pause mechanisms to explain end-of-clause and end-of-sentence reading time patterns; rather, the findings are consistent with a cognitively effortful view of wrap-up and suggest that age and individual differences in attentional allocation to semantic integration during reading, as revealed by RT distribution analyses, play an important role in sentence understanding. PsycINFO Database Record (c) 2014 APA, all rights reserved.
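For readers unfamiliar with the ex-Gaussian, it is the convolution of a Gaussian (parameters mu and sigma) with an exponential (parameter tau), so effects confined to tau selectively stretch the distribution's tail. A minimal fitting sketch with SciPy's exponnorm follows; the simulated reading times are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated residual reading times (ms): Gaussian body plus exponential tail.
rts = rng.normal(300, 40, 500) + rng.exponential(80, 500)

# SciPy parameterizes the ex-Gaussian as exponnorm with shape K = tau / sigma.
K, loc, scale = stats.exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale
print(f"mu = {mu:.0f} ms, sigma = {sigma:.0f} ms, tau = {tau:.0f} ms")
```

In this framework, sentence wrap-up shifting mu and inflating tau, while clause wrap-up affects tau only, is exactly the dissociation the authors report.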
EEG Correlates of Song Prosody: A New Look at the Relationship between Linguistic and Musical Rhythm
Gordon, Reyna L.; Magne, Cyrille L.; Large, Edward W.
2011-01-01
Song composers incorporate linguistic prosody into their music when setting words to melody, a process called “textsetting.” Composers tend to align the expected stress of the lyrics with strong metrical positions in the music. The present study was designed to explore the idea that temporal alignment helps listeners to better understand song lyrics by directing listeners’ attention to instances where strong syllables occur on strong beats. Three types of textsettings were created by aligning metronome clicks with all, some or none of the strong syllables in sung sentences. Electroencephalographic recordings were taken while participants listened to the sung sentences (primes) and performed a lexical decision task on subsequent words and pseudowords (targets, presented visually). Comparison of misaligned and well-aligned sentences showed that temporal alignment between strong/weak syllables and strong/weak musical beats were associated with modulations of induced beta and evoked gamma power, which have been shown to fluctuate with rhythmic expectancies. Furthermore, targets that followed well-aligned primes elicited greater induced alpha and beta activity, and better lexical decision task performance, compared with targets that followed misaligned and varied sentences. Overall, these findings suggest that alignment of linguistic stress and musical meter in song enhances musical beat tracking and comprehension of lyrics by synchronizing neural activity with strong syllables. This approach may begin to explain the mechanisms underlying the relationship between linguistic and musical rhythm in songs, and how rhythmic attending facilitates learning and recall of song lyrics. Moreover, the observations reported here coincide with a growing number of studies reporting interactions between the linguistic and musical dimensions of song, which likely stem from shared neural resources for processing music and speech. PMID:22144972
Effects of Word Frequency and Modality on Sentence Comprehension Impairments in People with Aphasia
DeDe, Gayle
2014-01-01
Purpose: It is well known that people with aphasia have sentence comprehension impairments. The present study investigated whether lexical factors contribute to sentence comprehension impairments in both the auditory and written modalities using on-line measures of sentence processing. Methods: People with aphasia and non-brain-damaged controls participated in the experiment (n=8 per group). Twenty-one sentence pairs containing high and low frequency words were presented in self-paced listening and reading tasks. The sentences were syntactically simple and differed only in the critical words. The dependent variables were response times for critical segments of the sentence and accuracy on the comprehension questions. Results: The results showed that word frequency influences performance on measures of sentence comprehension in people with aphasia. The accuracy data on the comprehension questions suggested that people with aphasia have more difficulty understanding sentences containing low frequency words in the written compared to auditory modality. Both group and single case analyses of the response time data also pointed to more difficulty with reading than listening. Conclusions: The results show that sentence comprehension in people with aphasia is influenced by word frequency and presentation modality. PMID:22294411
Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina
2017-11-22
Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10, 9 female, 1 male) and sighted control (n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible. Copyright © 2017 the authors.
Kanjlia, Shipra; Merabet, Lotfi B.
2017-01-01
Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the “VWFA” is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10, 9 female, 1 male) and sighted control (n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible. PMID:29061700
ERIC Educational Resources Information Center
Akin, Judy O'Neal
1978-01-01
Sample sentence-combining lessons developed to accompany the first-year A-LM German textbook are presented. The exercises are designed for language manipulation practice; they involve breaking down more complex sentences into simpler sentences and the subsequent recombination into complex sentences. All language skills, and particularly writing,…
Real-time comprehension of wh- movement in aphasia: Evidence from eyetracking while listening
Dickey, Michael Walsh; Choy, JungWon Janet; Thompson, Cynthia K.
2007-01-01
Sentences with non-canonical wh- movement are often difficult for individuals with agrammatic Broca's aphasia to understand (Caramazza & Zurif, 1976, inter alia). However, the explanation of this difficulty remains controversial, and little is known about how individuals with aphasia try to understand such sentences in real time. This study uses an eyetracking while listening paradigm (Tanenhaus, et al., 1995) to examine agrammatic aphasic individuals' on-line comprehension of movement sentences. Participants' eye movements were monitored while they listened to brief stories and looked at visual displays depicting elements mentioned in the story; the stories were followed by comprehension probes involving wh- movement. In line with previous results for young normal listeners (Sussman & Sedivy, 2003), the study finds that both older unimpaired control participants (n=8) and aphasic individuals (n=12) showed visual evidence of successful automatic comprehension of wh- questions (like “Who did the boy kiss that day at school?”). Specifically, both groups fixated on a picture corresponding to the moved element (“who,” the person kissed in the story) at the position of the verb. Interestingly, aphasic participants showed qualitatively different fixation patterns for trials eliciting correct and incorrect responses. Aphasic individuals looked first to the moved-element picture and then to a competitor following the verb in the incorrect trials, indicating initially correct automatic processing. However, they only showed looks to the moved-element picture for the correct trials, parallel to control participants. Furthermore, aphasic individuals' fixations during movement sentences were just as fast as control participants' fixations. These results are unexpected under slowed-processing accounts of aphasic comprehension deficits, in which the source of failed comprehension should be delayed application of the same processing routines used in successful comprehension. This pattern is also unexpected if aphasic individuals are using qualitatively different strategies to comprehend such sentences, as under impaired-representation accounts of agrammatism (Grodzinsky, 1990, 2000; Mauner, Fromkin & Cornell, 1993). Instead, it suggests that agrammatic aphasic individuals may process wh- questions similarly to unimpaired individuals, but that this process often fails. However, even in cases of failed comprehension, aphasic individuals showed visual evidence of successful automatic processing. PMID:16844211
Lau, Johnny King L; Humphreys, Glyn W; Douis, Hassan; Balani, Alex; Bickerton, Wai-Ling; Rotshtein, Pia
2015-01-01
We report a lesion-symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions linked to both poor object naming and impairment in general visual-speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with the models proposing that object naming relies on a left-lateralised language dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and the more general difficulties in language processing.
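A minimal sketch of the principal component step described above, assuming a patients-by-tasks matrix of scores on the four visual speech production tests; the placeholder data are random, so with real scores one would inspect the loadings for the shared/unique pattern the authors report.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

tasks = ["object_naming", "sentence_production",
         "sentence_reading", "nonword_reading"]
scores = np.random.default_rng(3).normal(size=(280, 4))  # placeholder scores

pca = PCA(n_components=2)
component_scores = pca.fit_transform(StandardScaler().fit_transform(scores))

# Print each component's loadings on the four tasks; in the study, one
# component loaded on all tasks ('shared') and one isolated object naming.
for i, loadings in enumerate(pca.components_):
    print(f"component {i}:", dict(zip(tasks, loadings.round(2))))
```

The per-patient component scores (`component_scores`) are what would then be entered into the voxel-wise lesion-symptom mapping.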
Meneghetti, Chiara; Labate, Enia; Pazzaglia, Francesca; Hamilton, Colin; Gyselinck, Valérie
2017-05-01
This study examines the involvement of spatial and visual working memory (WM) in the construction of flexible spatial models derived from survey and route descriptions. Sixty young adults listened to environment descriptions, 30 from a survey perspective and the other 30 from a route perspective, while they performed spatial (spatial tapping [ST]) and visual (dynamic visual noise [DVN]) secondary tasks - believed to overload the spatial and visual WM components, respectively - or no secondary task (control, C). Their mental representations of the environment were tested by free recall and a verification test with both route and survey statements. Results showed that, for both recall tasks, accuracy was worse in the ST than in the C or DVN conditions. In the verification test, both ST and DVN reduced accuracy for sentences testing spatial relations from the perspective opposite to the one learnt, relative to sentences testing the learnt perspective; only ST interfered more strongly than the C condition for opposite-perspective sentences. Overall, these findings indicate that both visual and spatial WM, and especially the latter, are involved in the construction of perspective-flexible spatial models. © 2016 The British Psychological Society.
ERIC Educational Resources Information Center
Obrecht, Dean H.
This report contrasts the results of a rigidly specified, pattern-oriented approach to learning Spanish with an approach that emphasizes the origination of sentences by the learner in direct response to stimuli. Pretesting and posttesting statistics are presented and conclusions are discussed. The experimental method, which required the student to…
Heim, Stefan; von Tongeln, Franziska; Hillen, Rebekka; Horbach, Josefine; Radach, Ralph; Günther, Thomas
2018-06-19
The Landolt paradigm is a visual scanning task intended to evoke reading-like eye movements in the absence of orthographic or lexical information, thus allowing the dissociation of (sub-)lexical vs. visual processing. To that end, all letters in real-word sentences are exchanged for closed Landolt rings, with 0, 1, or 2 open Landolt rings as targets in each Landolt sentence. A preliminary fMRI block-design study (Hillen et al. in Front Hum Neurosci 7:1-14, 2013) demonstrated that the Landolt paradigm has a special neural signature, recruiting the right IPS and SPL as part of the endogenous attention network. However, in that analysis, the brain responses to target detection could not be separated from those involved in processing Landolt stimuli without targets. The present study reports two fMRI experiments testing whether the targets or the Landolt stimuli per se led to the right IPS/SPL activation. Experiment 1 was an event-related re-analysis of the Hillen et al. (Front Hum Neurosci 7:1-14, 2013) data. Experiment 2 was a replication study with a new sample and identical procedures. In both experiments, the right IPS/SPL were recruited in the Landolt condition as compared to orthographic stimuli, even in the absence of any target in the stimulus, indicating that the properties of the Landolt task itself trigger this right parietal activation. These findings are discussed against the background of behavioural and neuroimaging studies of healthy reading as well as developmental and acquired dyslexia. Consequently, this neuroimaging evidence might encourage the use of the Landolt paradigm in the context of examining reading disorders, as it taps into the orientation of visual attention during reading-like scanning of stimuli without interfering sub-lexical information.
Emotion computing using Word Mover’s Distance features based on Ren_CECps
2018-01-01
In this paper, we propose an emotion-separation method (SeTF·IDF) to assign the emotion labels of sentences with different values, which yields a better visual effect than the values represented by TF·IDF in the visualization of Ren_CECps, a multi-label Chinese emotional corpus. Inspired by the marked improvement in the visualization map produced by the changed distances among the sentences, we are, to our knowledge, the first group to utilize the Word Mover’s Distance (WMD) algorithm as a feature representation in Chinese text emotion classification. Our experiments show that in both the 80%-training/20%-testing and 50%-training/50%-testing splits of Ren_CECps, WMD features obtain the best F1 scores and show a greater improvement than feature vectors of the same dimension obtained by a dimension-reduced TF·IDF method. Comparison experiments on an English corpus also show the effectiveness of WMD features in the cross-language field. PMID:29624573
The development of a multimedia online language assessment tool for young children with autism.
Lin, Chu-Sui; Chang, Shu-Hui; Liou, Wen-Ying; Tsai, Yu-Show
2013-10-01
This study aimed to provide early childhood special education professionals with a standardized and comprehensive language assessment tool for the early identification of language learning characteristics (e.g., hyperlexia) of young children with autism. In this study, we used computer technology to develop a multimedia online language assessment tool that presents auditory or visual stimuli. This online comprehensive language assessment consists of six subtests: decoding, homographs, auditory vocabulary comprehension, visual vocabulary comprehension, auditory sentence comprehension, and visual sentence comprehension. Three hundred typically developing children and 35 children with autism from Tao-Yuan County in Taiwan, aged 4-6, participated in this study. The Cronbach α values of the six subtests ranged from .64 to .97. The variance explained by the six subtests ranged from 14% to 56%, the concurrent validity of each subtest with the Peabody Picture Vocabulary Test-Revised ranged from .21 to .45, and the predictive validity of each subtest with the WISC-III ranged from .47 to .75. The assessment tool was also found to differentiate children with autism with up to 92% accuracy. These results indicate that this assessment tool has both adequate reliability and validity. Additionally, all 35 children with autism completed the entire assessment without exhibiting any extremely troubling behaviors. However, future research is needed to increase the sample size of both typically developing children and young children with autism and to overcome the technical challenges associated with internet issues. Copyright © 2013 Elsevier Ltd. All rights reserved.
Optical Phonetics and Visual Perception of Lexical and Phrasal Stress in English
ERIC Educational Resources Information Center
Scarborough, Rebecca; Keating, Patricia; Mattys, Sven L.; Cho, Taehong; Alwan, Abeer
2009-01-01
In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of…
Strategies in Reading Comprehension: III. Visual Imagery as a Psychological Process.
ERIC Educational Resources Information Center
Levin, Joel R.; Divine-Hawkins, Patricia
The viability of visual imagery as a prose-learning process was evaluated in two experiments with elementary school children in this study. In experiment one, two concrete ten-sentence passages were constructed. The attributes of two subclasses were contrasted in each passage (two kinds of monkeys in one passage, and two kinds of cars in the…
ERIC Educational Resources Information Center
Hustad, Katherine C.; Dardis, Caitlin M.; Mccourt, Kelly A.
2007-01-01
This study examined the independent and interactive effects of visual information and linguistic class of words on intelligibility of dysarthric speech. Seven speakers with dysarthria participated in the study, along with 224 listeners who transcribed speech samples in audiovisual (AV) or audio-only (AO) listening conditions. Orthographic…
Ruthmann, Katja; Schacht, Annekathrin
2017-01-01
Emotional stimuli attract attention and lead to increased activity in the visual cortex. The present study investigated the impact of personal relevance on emotion processing by presenting emotional words within sentences that referred to participants’ significant others or to unknown agents. In event-related potentials, personal relevance increased visual cortex activity within 100 ms after stimulus onset and the amplitudes of the Late Positive Complex (LPC). Moreover, personally relevant contexts gave rise to augmented pupillary responses and higher arousal ratings, suggesting a general boost of attention and arousal. Finally, personal relevance increased emotion-related ERP effects starting around 200 ms after word onset; effects for negative words compared to neutral words were prolonged in duration. Source localizations of these interactions revealed activations in prefrontal regions, in the visual cortex and in the fusiform gyrus. Taken together, these results demonstrate the high impact of personal relevance on reading in general and on emotion processing in particular. PMID:28541505
Development of a Low-Cost, Noninvasive, Portable Visual Speech Recognition Program.
Kohlberg, Gavriel D; Gal, Ya'akov Kobi; Lalwani, Anil K
2016-09-01
Loss of speech following tracheostomy and laryngectomy severely limits communication to simple gestures and facial expressions that are largely ineffective. To facilitate communication in these patients, we seek to develop a low-cost, noninvasive, portable, and simple visual speech recognition program (VSRP) to convert articulatory facial movements into speech. A Microsoft Kinect-based VSRP was developed to capture spatial coordinates of lip movements and translate them into speech. The articulatory speech movements associated with 12 sentences were used to train an artificial neural network classifier. The accuracy of the classifier was then evaluated on a separate, previously unseen set of articulatory speech movements. The VSRP was successfully implemented and tested in 5 subjects. It achieved an accuracy rate of 77.2% (65.0%-87.6% for the 5 speakers) on a 12-sentence data set. The mean time to classify an individual sentence was 2.03 milliseconds (1.91-2.16). We have demonstrated the feasibility of a low-cost, noninvasive, portable VSRP based on Kinect to accurately predict speech from articulation movements in clinically trivial time. This VSRP could be used as a novel communication device for aphonic patients. © The Author(s) 2016.
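A minimal sketch of the kind of classifier described above: a small feed-forward network mapping lip-movement features to one of 12 sentence classes. The feature format (flattened, resampled lip coordinates) and the random data are assumptions for illustration, and scikit-learn stands in for the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n_utterances = 240
n_features = 50 * 6   # e.g., 50 resampled frames x 6 lip-landmark coordinates
X = rng.normal(size=(n_utterances, n_features))  # placeholder trajectories
y = rng.integers(0, 12, size=n_utterances)       # 12 sentence classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
# With random features this hovers near chance (~8%); with real lip
# trajectories the paper reports 77.2% on the 12-sentence set.
print(f"held-out accuracy: {clf.score(X_te, y_te):.1%}")
```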
Lorusso, Maria Luisa; Burigo, Michele; Borsa, Virginia; Molteni, Massimo
2015-01-01
Forty native Italian children (age 6–15) performed a sentence plausibility judgment task. ERP recordings were available for 12 children with specific language impairment (SLI), 11 children with nonverbal learning disabilities (NVLD), and 13 control children. Participants listened to verb-object combinations and judged them as acceptable or unacceptable. Stimuli belonged to four conditions, where concreteness and congruency were manipulated. All groups made more errors responding to abstract and to congruent sentences. Moreover, SLI participants performed worse than NVLD participants with abstract sentences. ERPs were analyzed in the time window 300–500 ms. SLI children show atypical, reversed effects of concreteness and congruence as compared to control and NVLD children, respectively. The results suggest that linguistic impairments disrupt abstract language processing more than visual-motor impairments. Moreover, ROI and SPM analyses of ERPs point to a predominant involvement of the left rather than the right hemisphere in the comprehension of figurative expressions. PMID:26246693
Resolving Conflicts Between Syntax and Plausibility in Sentence Comprehension
Andrews, Glenda; Ogden, Jessica E.; Halford, Graeme S.
2017-01-01
Comprehension of plausible and implausible object- and subject-relative clause sentences with and without prepositional phrases was examined. Undergraduates read each sentence then evaluated a statement as consistent or inconsistent with the sentence. Higher acceptance of consistent than inconsistent statements indicated reliance on syntactic analysis. Higher acceptance of plausible than implausible statements reflected reliance on semantic plausibility. There was greater reliance on semantic plausibility and lesser reliance on syntactic analysis for more complex object-relatives and sentences with prepositional phrases than for less complex subject-relatives and sentences without prepositional phrases. Comprehension accuracy and confidence were lower when syntactic analysis and semantic plausibility yielded conflicting interpretations. The conflict effect on comprehension was significant for complex sentences but not for less complex sentences. Working memory capacity predicted resolution of the syntax-plausibility conflict in more and less complex items only when sentences and statements were presented sequentially. Fluid intelligence predicted resolution of the conflict in more and less complex items under sequential and simultaneous presentation. Domain-general processes appear to be involved in resolving syntax-plausibility conflicts in sentence comprehension. PMID:28458748
Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation
Banks, Briony; Gowen, Emma; Munro, Kevin J.; Adank, Patti
2015-01-01
Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker’s facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants’ eye gaze was recorded to verify that they looked at the speaker’s face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation. PMID:26283946
Creativity, Comprehension, Conversation and the Hippocampal Region: New Data and Theory
MacKay, Donald G.; Goldstein, Rutherford
2017-01-01
Present findings indicate that hippocampal region (HR) damage impairs aspects of everyday language comprehension and production that require creativity, defined as the ability to form new internal representations that satisfy relevant constraints for being useful or valuable in the real world. In two studies, seventeen people participated in extensive face-to-face interviews: sixteen normal individuals and H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Study 1 demonstrated deficits in H.M.'s comprehension of creative but not routine aspects of the interviews, extending to the real world twelve prior demonstrations that H.M. understands routine but not novel aspects of experimentally constructed sentences. These deficits reflected his HR damage, but not his cerebellar damage, his explicit or declarative memory problems, an inability to comprehend or recall the instructions, forgetting, poor visual acuity, motoric slowing, time pressure, deficits in visual scanning or attentional allocation, lack of motivation, or excessive memory load in the tasks. Study 2 demonstrated similar deficits in H.M.'s ability to produce creative but not routine aspects of conversational discourse, extending findings in five prior sentence production experiments to real-world creativity. We discuss conceptual frameworks for explaining relations between new-and-useful creativity and the HR. PMID:29130066
The Distribution of Fixation Durations during Reading: Effects of Stimulus Quality
ERIC Educational Resources Information Center
White, Sarah J.; Staub, Adrian
2012-01-01
Participants' eye movements were recorded as they read single sentences presented normally, presented entirely in faint text, or presented normally except for a single faint word. Fixations were longer when the entire sentence was faint than when the sentence was presented normally. In addition, fixations were much longer on a single faint word…
ERIC Educational Resources Information Center
Scholes, Robert J.; And Others
The effects of sentence imitation and picture verification on the recall of subsequent digits were studied. Stimuli consisted of 20 sentences, each sentence followed by a string of five digit names, and five structural types of sentences were presented. Subjects were instructed to listen to the sentence and digit string and then either immediately…
Wolff, Susann; Schlesewsky, Matthias; Hirotani, Masako; Bornkessel-Schlesewsky, Ina
2008-11-01
We present two ERP studies on the processing of word order variations in Japanese, a language that is well suited to shedding further light on the implications of word order freedom for neurocognitive approaches to sentence comprehension. Experiment 1 used auditory presentation and revealed that initial accusative objects elicit increased processing costs in comparison to initial subjects (in the form of a transient negativity) only when followed by a prosodic boundary. A similar effect was observed with visual presentation in Experiment 2, though only for accusative, and not for dative, objects. These results support a relational account of word order processing, in which the costs of comprehending an object-initial word order are determined by the linearization properties of the initial object in relation to the linearization properties of possible upcoming arguments. In the absence of a prosodic boundary, the possibility of subject omission in Japanese makes it likely that the initial accusative is the only argument in the clause. Hence, no upcoming arguments are expected and no linearization problem can arise. A prosodic boundary or visual segmentation, by contrast, indicates an object-before-subject word order, thereby leading to a mismatch between argument "prominence" (e.g., in terms of thematic roles) and linear order. This mismatch is alleviated when the initial object is itself highly prominent (e.g., a dative, which can bear the higher-ranking thematic role in a two-argument relation). We argue that the processing mechanism at work here can be distinguished from more general aspects of "dependency processing" in object-initial sentences.
van Hoesel, Richard J M
2015-04-01
One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, first, an auditory-only cueing phrase spoken by one of four talkers was selected and presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only and was spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better-performing ear alone was 9 dB for the audio-visual mode and 3 dB for audition alone. Comparison of bilateral performance for audio-visual and audition-alone conditions showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations and indicates greater everyday speech benefits and a more favorable cost-benefit ratio than estimated to date.
Stewart, Erin K.; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A.; Tremblay, Kelly
2017-01-01
Purpose: This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types, as well as in the presence of visual cues. Method: Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, two measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. Results: A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. Conclusion: The contribution of WM in explaining unaided speech recognition in noise was negligible and was not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed. PMID:28744550
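A linear mixed model of the kind described above, with speech recognition scores predicted by visual cues, noise type, WM, and pure-tone average and a random intercept per listener, can be sketched briefly. This is a hypothetical specification in Python with statsmodels, not the authors' analysis code; the data file and column names (score, visual_cues, noise_type, wm_span, pta, participant) are assumptions.

```python
# Minimal sketch of a linear mixed model for speech recognition in noise,
# with random intercepts per participant. All names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("speech_recognition.csv")  # hypothetical long-format data

model = smf.mixedlm(
    "score ~ visual_cues + noise_type + wm_span + pta",  # fixed effects
    data=df,
    groups=df["participant"],  # random intercept for each listener
)
result = model.fit()
print(result.summary())  # in the study, visual cues and PTA were the significant predictors
```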
Matching voice and face identity from static images.
Mavica, Lauren W; Barenholtz, Elan
2013-04-01
Previous research has suggested that people are unable to correctly choose which unfamiliar voice and static image of a face belong to the same person. Here, we present evidence that people can perform this task with greater than chance accuracy. In Experiment 1, participants saw photographs of two same-gender models while simultaneously listening to a voice recording of one of the models pictured in the photographs, and chose which of the two faces they thought belonged to the same model as the recorded voice. We included three conditions: (a) the visual stimuli were frontal headshots (including the neck and shoulders) and the auditory stimuli were recordings of spoken sentences; (b) the visual stimuli only contained cropped faces and the auditory stimuli were full sentences; (c) we used the same pictures as Condition 1, but the auditory stimuli were recordings of a single word. In Experiment 2, participants performed the same task as in Condition 1 of Experiment 1 but with the stimuli presented in sequence. Participants also rated the models' faces and voices along multiple "physical" dimensions (e.g., weight) or "personality" dimensions (e.g., extroversion); the degree of agreement between the ratings for each model's face and voice was compared to performance for that model in the matching task. In all three conditions, we found that participants chose, at better than chance levels, which faces and voices belonged to the same person. Performance in the matching task was not correlated with the degree of agreement on any of the rated dimensions.
CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset
Cao, Houwei; Cooper, David G.; Keutmann, Michael K.; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini
2014-01-01
People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). A total of 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-valued intensity values for the perceived emotion were collected using crowd-sourcing from 2,443 raters. Human recognition of the intended emotion is 40.9% for audio-only, 58.2% for visual-only, and 63.6% for audio-visual data. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. Accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be recognized well from a single modality. The large dataset we introduce can be used to probe other questions concerning the audio-visual perception of emotion. PMID:25653738
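As a hedged illustration of how the per-modality recognition rates above could be computed from crowd-sourced ratings, here is a small Python aggregation. The file layout and column names (modality, perceived_emotion, intended_emotion) are assumptions for the sketch, not the released CREMA-D format.

```python
# Recognition accuracy per presentation modality from crowd ratings.
# Columns are hypothetical: one row per (clip, rater) judgment.
import pandas as pd

ratings = pd.read_csv("crema_d_ratings.csv")
ratings["correct"] = ratings["perceived_emotion"] == ratings["intended_emotion"]
print(ratings.groupby("modality")["correct"].mean())
# Expected pattern per the paper: audio ~0.409, visual ~0.582, audio-visual ~0.636
```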
A No-Grammar Approach to Sentence Power: John C. Mellon's Sentence-Combining Games.
ERIC Educational Resources Information Center
Cooper, Charles R.
1971-01-01
This study is concerned with increasing the rate at which children progress toward more highly differentiated sentence structure. The study recommends sentence-combining practices that will accelerate this progress. The two main purposes of grammar study have been to prevent errors in writing and to present the full range of sentence structures…
Do Example Sentences Work in Direct Vocabulary Learning?
ERIC Educational Resources Information Center
Baicheng, Zhang
2009-01-01
In the present study of language learning, three presentation modes (varying from providing or not providing example sentences by the teacher and by the students themselves) have been utilised to examine the effectiveness of using example sentences in vocabulary presentation and learning activities. The study is of 58 English majors as the…
Sedda, Anna; Petito, Sara; Guarino, Maria; Stracciari, Andrea
2017-07-14
Most studies to date show impaired recognition of facial displays of disgust in Parkinson disease. A general impairment in disgust processing in patients with Parkinson disease might adversely affect their social interactions, given the relevance of this emotion for human relations. However, despite the importance of faces, disgust is also expressed through other formats of visual stimuli, such as sentences and visual images. The aim of our study was to explore disgust processing in a sample of patients affected by Parkinson disease, by means of various tests tackling not only facial recognition but also other formats of visual stimuli through which disgust can be recognized. Our results confirm that patients are impaired in recognizing facial displays of disgust. Further analyses show that patients are also impaired and slower for other facial expressions, with the only exception of happiness. Notably, however, patients with Parkinson disease processed visual images and sentences as well as controls did. Our findings show a dissociation between different formats of visual stimuli of disgust, suggesting that Parkinson disease is not characterized by a general compromising of disgust processing, as often suggested. The involvement of the basal ganglia-frontal cortex system might spare some cognitive components of emotional processing, related to memory and culture, at least for disgust. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Chen, Ai-Hong; Buari, Noor Halilah; Jufri, Shauqiah
2017-01-01
Passages with continuous sentences are commonly used for the assessment of reading performance related to visual function, and rehabilitation in optometric practices. Passages created in native languages are crucial for a reliable interpretation in a real scenario. This study aimed to report the development of SAH Reading Passage Compendium…
Fu, Kun; Jin, Junqi; Cui, Runpeng; Sha, Fei; Zhang, Changshui
2017-12-01
Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image captioning system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience, where attention shifts among the visual regions; such transitions impose a thread of ordering in visual perception. This alignment characterizes the flow of latent meaning, which encodes what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system and contrast it to published results on several popular datasets, using both automatic evaluation metrics and human evaluation. We show that either region-based attention or scene-specific contexts improves systems without those components. Furthermore, combining these two modeling ingredients attains state-of-the-art performance.
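The region-based attention step described above (attention shifting among visual regions as each word is generated) can be sketched compactly. The following is a minimal, self-contained illustration of generic soft attention over region features, not the authors' implementation; all dimensions, weight matrices, and names are assumed for the example.

```python
# Soft attention over CNN region features during word generation (sketch).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(regions, hidden, W_r, W_h):
    """Score each image region against the decoder state and pool them.

    regions: (n_regions, d_img) per-region CNN features
    hidden:  (d_hid,) decoder state after the previously generated words
    """
    scores = regions @ W_r @ (W_h @ hidden)  # relevance of each region
    alpha = softmax(scores)                  # attention shifts among regions
    context = alpha @ regions                # pooled visual context vector
    return alpha, context

# Toy example: 5 regions with 8-dim features, 6-dim decoder state.
rng = np.random.default_rng(0)
regions = rng.normal(size=(5, 8))
hidden = rng.normal(size=6)
W_r, W_h = rng.normal(size=(8, 6)), rng.normal(size=(6, 6))
alpha, context = attend(regions, hidden, W_r, W_h)
print(alpha.round(3), context.shape)  # weights sum to 1; context conditions the next word
```

In the full system, the pooled context vector would feed the language model's next-word distribution, and a scene-specific context would further adapt that distribution to the scene type.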
Brandão, Lenisa; Monção, Ana Maria; Andersson, Richard; Holmqvist, Kenneth
2014-01-01
Objective: The goal of this study was to investigate whether on-topic visual cues can serve as aids for the maintenance of discourse coherence and informativeness in autobiographical narratives of persons with Alzheimer's disease (AD). Methods: The experiment consisted of three randomized conversation conditions: one without prompts, showing a blank computer screen; an on-topic condition, showing a picture and a sentence about the conversation; and an off-topic condition, showing a picture and a sentence which were unrelated to the conversation. Speech was recorded while visual attention was examined using eye tracking to measure how long participants looked at cues and the face of the listener. Results: Results suggest that interventions using visual cues in the form of images and written information are useful to improve discourse informativeness in AD. Conclusion: This study demonstrated the potential of using images and short written messages as means of compensating for the cognitive deficits which underlie uninformative discourse in AD. Future studies should further investigate the efficacy of language interventions based on the use of these compensation strategies for AD patients and their family members and friends. PMID:29213914
Mashal, Nira; Faust, Miriam; Hendler, Talma; Jung-Beeman, Mark
2008-01-01
The present study examined the role of the left (LH) and right (RH) cerebral hemispheres in processing alternative meanings of idiomatic sentences. We conducted two experiments using ambiguous idioms with plausible literal interpretations as stimuli. In the first experiment we tested hemispheric differences in accessing either the literal or the idiomatic meaning of idioms for targets presented to either the left or the right visual field. In the second experiment, we used functional magnetic resonance imaging (fMRI) to define regional brain activation patterns in healthy adults processing either the idiomatic meaning of idioms or the literal meanings of either idioms or literal sentences. According to the Graded Salience Hypothesis (GSH, Giora, 2003), a selective RH involvement in the processing of nonsalient meanings, such as literal interpretations of idiomatic expressions, was expected. Results of the two experiments were consistent with the GSH predictions and show that literal interpretations of idioms are accessed faster than their idiomatic meanings in the RH. The fMRI data showed that processing the idiomatic interpretation of idioms and the literal interpretations of literal sentences involved LH regions whereas processing the literal interpretation of idioms was associated with increased activity in right brain regions including the right precuneus, right middle frontal gyrus (MFG), right posterior middle temporal gyrus (MTG), and right anterior superior temporal gyrus (STG). We suggest that these RH areas are involved in semantic ambiguity resolution and in processing nonsalient meanings of conventional idiomatic expressions.
Real-time comprehension of wh- movement in aphasia: evidence from eyetracking while listening.
Dickey, Michael Walsh; Choy, JungWon Janet; Thompson, Cynthia K
2007-01-01
Sentences with non-canonical wh- movement are often difficult for individuals with agrammatic Broca's aphasia to understand. However, the explanation of this difficulty remains controversial, and little is known about how individuals with aphasia try to understand such sentences in real time. This study uses an eyetracking-while-listening paradigm to examine agrammatic aphasic individuals' on-line comprehension of movement sentences. Participants' eye movements were monitored while they listened to brief stories and looked at visual displays depicting elements mentioned in the stories. The stories were followed by comprehension probes involving wh- movement. In line with previous results for young normal listeners [Sussman, R. S., & Sedivy, J. C. (2003). The time-course of processing syntactic dependencies: evidence from eye movements. Language and Cognitive Processes, 18, 143-161], the study finds that both older unimpaired control participants (n=8) and aphasic individuals (n=12) showed visual evidence of successful automatic comprehension of wh- questions (like "Who did the boy kiss that day at school?"). Specifically, both groups fixated on a picture corresponding to the moved element ("who," the person kissed in the story) at the position of the verb. Interestingly, aphasic participants showed qualitatively different fixation patterns for trials eliciting correct and incorrect responses. Aphasic individuals looked first to the moved-element picture and then to a competitor following the verb in the incorrect trials. However, they only showed looks to the moved-element picture for the correct trials, parallel to control participants. Furthermore, aphasic individuals' fixations during movement sentences were just as fast as control participants' fixations. These results are unexpected under slowed-processing accounts of aphasic comprehension deficits, in which the source of failed comprehension should be delayed application of the same processing routines used in successful comprehension. This pattern is also unexpected if aphasic individuals are using qualitatively different strategies than normals to comprehend such sentences, as under impaired-representation accounts of agrammatism. Instead, it suggests that agrammatic aphasic individuals may process wh- questions similarly to unimpaired individuals, but that this process often fails to facilitate off-line comprehension of sentences with wh- movement.
Manfredi, Mirella; Cohn, Neil; Kutas, Marta
2017-06-01
Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoeic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.
Word Order and Linguistic Factors in the Second Language Processing of Spanish Passive Sentences
ERIC Educational Resources Information Center
Lee, James F.
2017-01-01
The present study examines how second language learners (L2) assign the thematic roles of agent/patient in Spanish passive sentences with "ser" (often referred to as the true passive) when it is their initial exposure to this structure. The target sentences were preceded by a contextual sentence. After hearing the two sentences,…
The Identification of Word Meaning from Sentence Contexts: An Effect of Presentation Order.
ERIC Educational Resources Information Center
Ammon, Paul R.; Graves, Jack A.
Sixty fourth- and fifth-grade children listened to six series of six sentences each, with each sentence in a series containing the same artificial word. The task was to assign to the artificial word a meaning which would fit all sentence contexts in the series. Preliminary data provided an estimate of the probability that a particular sentence,…
Sentence Position and Syntactic Complexity of Stuttering in Early Childhood: A Longitudinal Study
Buhr, Anthony P.; Zebrowski, Patricia M.
2009-01-01
The purpose of the present investigation was to assess longitudinal word- and sentence-level measures of stuttering in young children. Participants included 12 stuttering and non-stuttering children between 36 and 71 months of age at the initial visit who exhibited a range of stuttering rates. Parent-child spontaneous speech samples were obtained over a period of two years at six-month intervals. Each speech sample was transcribed, and both stuttering-like disfluencies (SLDs) and other disfluencies (ODs) were coded. Word- and sentence-level measures of SLDs were used to assess linguistic characteristics of stuttering. Results of the word-level analysis indicated that stuttering was most likely to occur at the sentence-initial position, but that a tendency to stutter on function words was present only at the sentence-initial position. Results of the sentence-level analyses indicated that sentences containing ODs and those containing SLDs were both significantly longer and more complex than fluent sentences, but did not differ from each other. Word- and sentence-level measures also did not change across visits. Results were taken to suggest that both SLDs and ODs originate during the same stage of sentence planning. PMID:19948270
Processing Code-Switching in Algerian Bilinguals: Effects of Language Use and Semantic Expectancy
Kheder, Souad; Kaan, Edith
2016-01-01
Using a cross-modal naming paradigm, this study investigated the effect of sentence constraint and language use on the expectancy of a language switch during listening comprehension. Sixty-five Algerian bilinguals who habitually code-switch between Algerian Arabic and French (AA-FR) but not between Standard Arabic and French (SA-FR) listened to sentence fragments and named a visually presented French target NP out loud. Participants' speech onset times were recorded. The sentence context was either highly semantically constraining toward the French NP or not. The language of the sentence context was either Algerian Arabic or Standard Arabic, but the target NP was always in French, thus creating two code-switching contexts: a typical and recurrent code-switching context (AA-FR) and a non-typical code-switching context (SA-FR). Results revealed a semantic constraint effect indicating that the French switches were easier to process in the high-constraint than in the low-constraint context. In addition, the effect size of semantic constraint was significant in the more typical code-switching context (AA-FR), suggesting that language use influences the processing of switching between languages. The effect of semantic constraint was also modulated by code-switching habits and the proficiency of L2 French. Semantic constraint was reduced in bilinguals who frequently code-switch and in bilinguals with high proficiency in French. Results are discussed with regard to the bilingual interactive activation model (Dijkstra and Van Heuven, 2002) and the control process model of code-switching (Green and Wei, 2014). PMID:26973559
Context and the Spelling-to-Sound Regularity Effect in Pronunciation.
ERIC Educational Resources Information Center
Parkin, Alan J.; Ilett, Alison
1986-01-01
Examines how spelling-to-sound irregularity affects pronunciation latencies when words are presented in a sentence, and concludes that pronunciation latencies are strongly affected by the type of preceding sentence, with the specific sentences producing shorter latencies than the general sentences. (HOD)
Sentence processing selectivity in Broca's area: evidence for structure but not syntactic movement.
Rogalsky, Corianne; Almeida, Diogo; Sprouse, Jon; Hickok, Gregory
The role of Broca's area in sentence processing is hotly debated. Prominent hypotheses include that Broca's area supports sentence comprehension via syntax-specific processes ("syntactic movement" in particular), hierarchical structure building or working memory. In the present fMRI study we adopt a within subject, across task approach using targeted sentence-level contrasts and non-sentential comparison tasks to address these hypotheses regarding the role of Broca's area in sentence processing. For clarity, we have presented findings as three experiments: (i) Experiment 1 examines selectivity for a particular type of sentence construction, namely those containing syntactic movement. Standard syntactic movement distance effects in Broca's area were replicated but no difference was found between movement and non-movement sentences in Broca's area at the group level or consistently in individual subjects. (ii) Experiment 2 examines selectivity for sentences versus non-sentences, to assess claims regarding the role of Broca's area in hierarchical structure building. Group and individual results differ, but both identify subregions of Broca's area that are selective for sentence structure. (iii) Experiment 3 assesses whether activations in Broca's area are selective for sentences when contrasted with simple subvocal articulation. Group results suggest shared resources for sentence processing and articulation in Broca's area, but individual subject analyses contradict this finding. We conclude that Broca's area is not selectively involved in processing syntactic movement, but that subregions are selectively responsive to sentence structure. Our findings also reinforce Fedorenko & Kanwisher's call for the use of more individual subject analyses in functional imaging studies of sentence processing in Broca's area, as group findings can obscure selective response patterns.
Proficiency and sentence constraint effects on second language word learning.
Ma, Tengfei; Chen, Baoguo; Lu, Chunming; Dunlap, Susan
2015-07-01
This paper presents an experiment that investigated the effects of L2 proficiency and sentence constraint on semantic processing of unknown L2 words (pseudowords). All participants were Chinese native speakers who learned English as a second language. In the experiment, we used a whole-sentence presentation paradigm with a delayed semantic relatedness judgment task. Both higher- and lower-proficiency L2 learners could make use of the high-constraint sentence context to judge the meaning of novel pseudowords, and higher-proficiency L2 learners outperformed lower-proficiency L2 learners in all conditions. These results demonstrate that both L2 proficiency and sentence constraint affect subsequent word learning among second language learners. We extended L2 word learning into a sentence context, replicated the sentence constraint effects previously found among native speakers, and found proficiency effects in L2 word learning. Copyright © 2015 Elsevier B.V. All rights reserved.
Pupillary dynamics reveal computational cost in sentence planning.
Sevilla, Yamila; Maldonado, Mora; Shalóm, Diego E
2014-01-01
This study investigated the computational cost associated with grammatical planning in sentence production. We measured people's pupillary responses as they produced spoken descriptions of depicted events. We manipulated the syntactic structure of the target by training subjects to use different types of sentences following a colour cue. The results showed a greater increase in pupil size for the production of passive and object-dislocated sentences than for active canonical subject-verb-object sentences, indicating that more cognitive effort is associated with more complex noncanonical thematic order. We also manipulated the time at which the cue that triggered structure-building processes was presented. Differential increase in pupil diameter for more complex sentences was shown to rise earlier as the colour cue was presented earlier, suggesting that the observed pupillary changes are due to differential demands in relatively independent structure-building processes during grammatical planning. Task-evoked pupillary responses provide a reliable measure to study the cognitive processes involved in sentence production.
Neurobiology of Self-Awareness in Schizophrenia: an fMRI Study
Shad, Mujeeb U.; Keshavan, Matcheri S.; Steinberg, Joel L.; Mihalakos, Perry; Thomas, Binu P.; Motes, Michael A.; Soares, Jair C.; Tamminga, Carol A.
2012-01-01
Self-awareness (SA) is one of the core domains of higher cortical functions and is frequently compromised in schizophrenia. Deficits in SA have been associated with functional and psychosocial impairment in this patient population. However, despite its clinical significance, only a few studies have examined the neural substrates of self-referential processing in schizophrenia. The aim of this study was to assess self-awareness in schizophrenia using a functional magnetic resonance imaging (fMRI) paradigm designed to elicit judgments of self-reference in a simulated social context. While scanned, volunteers looked at visually-displayed sentences that had the volunteer’s own first name (self-directed sentence-stimulus) or an unknown other person’s first name (other-directed sentence-stimulus) as the grammatical subject of the sentence. The volunteers were asked to discern whether each sentence-stimulus was about the volunteer personally (during a self-referential cue epoch) or whether each statement was about someone else (during an other-referential cue epoch). We predicted that individuals with schizophrenia would demonstrate altered functional activation to self- and other-directed sentence-stimuli as compared to controls. Fifteen controls and seventeen schizophrenia volunteers completed clinical assessments and the SA fMRI task on a Philips 3.0 T Achieva system. The results showed significantly greater activation in schizophrenia compared to controls in cortical midline structures in response to self- vs. other-directed sentence-stimuli. These findings support results from earlier studies and demonstrate selective alteration in the activation of cortical midline structures associated with evaluations of self-reference in schizophrenia as compared to controls. PMID:22480958
Event processing in the visual world: Projected motion paths during spoken sentence comprehension.
Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue
2016-05-01
Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye-movements is constrained by the visual world and the fact that perceivers rarely fixate regions of empty space. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
ERIC Educational Resources Information Center
Cunnings, Ian; Fotiadou, Georgia; Tsimpli, Ianthi
2017-01-01
In a visual world paradigm study, we manipulated gender congruence between a subject pronoun and two antecedents to investigate whether second language (L2) learners with a null subject first language (L1) acquire and process overt subject pronouns in a nonnull subject L2 in a nativelike way. We also investigated whether L2 speakers revise an…
32 CFR 16.4 - Sentencing procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
Title 32 (National Defense), Vol. 1 (2011-07-01). Section 16.4, Sentencing procedures: Department of Defense, Office of the Secretary of Defense, Military Commissions. … relevant to sentencing. 32 CFR 9.6(e)(10) permits the Prosecution and Defense to present information to aid…
32 CFR 16.4 - Sentencing procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 32 (National Defense), Vol. 1 (2010-07-01). Section 16.4, Sentencing procedures: Department of Defense, Office of the Secretary of Defense, Military Commissions. … relevant to sentencing. 32 CFR 9.6(e)(10) permits the Prosecution and Defense to present information to aid…
32 CFR 16.4 - Sentencing procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
Title 32 (National Defense), Vol. 1 (2013-07-01). Section 16.4, Sentencing procedures: Department of Defense, Office of the Secretary of Defense, Military Commissions. … relevant to sentencing. 32 CFR 9.6(e)(10) permits the Prosecution and Defense to present information to aid…
32 CFR 16.4 - Sentencing procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
Title 32 (National Defense), Vol. 1 (2012-07-01). Section 16.4, Sentencing procedures: Department of Defense, Office of the Secretary of Defense, Military Commissions. … relevant to sentencing. 32 CFR 9.6(e)(10) permits the Prosecution and Defense to present information to aid…
Visual Grouping in Accordance With Utterance Planning Facilitates Speech Production.
Zhao, Liming; Paterson, Kevin B; Bai, Xuejun
2018-01-01
Research on language production has focused on the process of utterance planning and involved studying the synchronization between visual gaze and the production of sentences that refer to objects in the immediate visual environment. However, it remains unclear how the visual grouping of these objects might influence this process. To shed light on this issue, the present research examined the effects of the visual grouping of objects in a visual display on utterance planning in two experiments. Participants produced utterances of the form "The snail and the necklace are above/below/on the left/right side of the toothbrush" for displays containing these referents (e.g., a snail, a necklace, and a toothbrush). These objects were grouped using classic Gestalt principles of color similarity (Experiment 1) and common region (Experiment 2) so that the induced perceptual grouping was congruent or incongruent with the required phrasal organization. The results showed that speech onset latencies were shorter in congruent than incongruent conditions. The findings therefore reveal that the congruency between the visual grouping of referents and the required phrasal organization can influence speech production. Such findings suggest that, when language is produced in a visual context, speakers make use of both visual and linguistic cues to plan utterances.
Knoeferle, Pia; Crocker, Matthew W; Scheepers, Christoph; Pickering, Martin J
2005-02-01
Studies monitoring eye-movements in scenes containing entities have provided robust evidence for incremental reference resolution processes. This paper addresses the less studied question of whether depicted event scenes can affect processes of incremental thematic role-assignment. In Experiments 1 and 2, participants inspected agent-action-patient events while listening to German verb-second sentences with initial structural and role ambiguity. The experiments investigated the time course with which listeners could resolve this ambiguity by relating the verb to the depicted events. Such verb-mediated visual event information allowed early disambiguation on-line, as evidenced by anticipatory eye-movements to the appropriate agent/patient role filler. We replicated this finding while investigating the effects of intonation. Experiment 3 demonstrated that when the verb was sentence-final and thus did not establish early reference to the depicted events, linguistic cues alone enabled disambiguation before people encountered the verb. Our results reveal the on-line influence of depicted events on incremental thematic role-assignment and disambiguation of local structural and role ambiguity. In consequence, our findings require a notion of reference that includes actions and events in addition to entities (e.g., Jackendoff, Semantics and Cognition, 1983), and argue for a theory of on-line sentence comprehension that exploits a rich inventory of semantic categories.
Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo
2015-01-01
Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention. PMID:25745395
Effects of motion speed in action representations
van Dam, Wessel O.; Speed, Laura J.; Lai, Vicky T.; Vigliocco, Gabriella; Desai, Rutvik H.
2017-01-01
Grounded cognition accounts of semantic representation posit that brain regions traditionally linked to perception and action play a role in grounding the semantic content of words and sentences. Sensory-motor systems are thought to support partially abstract simulations through which conceptual content is grounded. However, which details of sensory-motor experience are included in, or excluded from these simulations, is not well understood. We investigated whether sensory-motor brain regions are differentially involved depending on the speed of actions described in a sentence. We addressed this issue by examining the neural signature of relatively fast (The old lady scurried across the road) and slow (The old lady strolled across the road) action sentences. The results showed that sentences that implied fast motion modulated activity within the right posterior superior temporal sulcus and the angular and middle occipital gyri, areas associated with biological motion and action perception. Sentences that implied slow motion resulted in greater signal within the right primary motor cortex and anterior inferior parietal lobule, areas associated with action execution and planning. These results suggest that the speed of described motion influences representational content and modulates the nature of conceptual grounding. Fast motion events are represented more visually whereas motor regions play a greater role in representing conceptual content associated with slow motion. PMID:28160739
Retrieval of Sentence Sequences for an Image Stream via Coherence Recurrent Convolutional Networks.
Park, Cesc Chunseong; Kim, Youngjin; Kim, Gunhee
2018-04-01
We propose an approach for retrieving a sequence of natural sentences for an image stream. Since general users often take a series of pictures of their experiences, much online visual information exists in the form of image streams, and it is better to take the whole image stream into consideration when producing natural language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both the input and output dimensions to a sequence of images and a sequence of sentences. For retrieving a coherent flow of multiple sentences for a photo stream, we propose a multimodal neural architecture called coherence recurrent convolutional network (CRCN), which consists of convolutional neural networks, bidirectional long short-term memory (LSTM) networks, and an entity-based local coherence model. Our approach directly learns from a vast user-generated resource of blog posts as text-image parallel training data. We collect more than 22K unique blog posts with 170K associated images for the travel topics of NYC, Disneyland, Australia, and Hawaii. We demonstrate that our approach outperforms other state-of-the-art image captioning methods for text sequence generation, using both quantitative measures and user studies via Amazon Mechanical Turk.
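The CRCN combines two signals when ranking candidate sentence sequences: how well each sentence matches its image, and how coherently the sentences flow as a sequence. The sketch below illustrates that combined-scoring idea only; the dot-product compatibility, the adjacent-sentence coherence proxy, and all names are assumptions standing in for the paper's CNN/BLSTM/entity-grid components.

```python
# Combined image-sentence compatibility + local coherence scoring (sketch).
import numpy as np

def compatibility(image_feats, sent_embs):
    # Sum of dot-product matches between aligned image/sentence pairs.
    return sum(float(img @ sent) for img, sent in zip(image_feats, sent_embs))

def coherence(sent_embs):
    # Proxy for entity-based local coherence: adjacent-sentence similarity.
    return sum(float(a @ b) for a, b in zip(sent_embs, sent_embs[1:]))

def score_sequence(image_feats, sent_embs, lam=0.5):
    # Retrieve the candidate sequence that maximizes the combined score.
    return compatibility(image_feats, sent_embs) + lam * coherence(sent_embs)

# Toy example: a 3-image stream and two candidate 3-sentence sequences.
rng = np.random.default_rng(1)
images = [rng.normal(size=16) for _ in range(3)]
cand_a = [rng.normal(size=16) for _ in range(3)]
cand_b = [rng.normal(size=16) for _ in range(3)]
best = max([cand_a, cand_b], key=lambda c: score_sequence(images, c))
print("picked:", "A" if best is cand_a else "B")
```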
Normative Data on Audiovisual Speech Integration Using Sentence Recognition and Capacity Measures
Altieri, Nicholas; Hudock, Daniel
2016-01-01
Objective: The ability to use visual speech cues and integrate them with auditory information is important, especially in noisy environments and for hearing-impaired (HI) listeners. Providing data on measures of integration skills that encompass accuracy and processing speed will benefit researchers and clinicians. Design: The study consisted of two experiments: first, accuracy scores were obtained using CUNY sentences, and capacity measures that assessed reaction-time distributions were obtained from a monosyllabic word recognition task. Study sample: We report data on two measures of integration obtained from a sample of 86 young and middle-age adult listeners. Results: Capacity showed a positive correlation with accuracy measures of audiovisual benefit obtained from sentence recognition. More relevant, factor analysis indicated that a single-factor model captured audiovisual speech integration better than models containing more factors. Capacity exhibited strong loadings on the factor, while the accuracy-based measures from sentence recognition exhibited weaker loadings. Conclusions: Results suggest that a listener's integration skills may be assessed optimally using a measure that incorporates both processing speed and accuracy. PMID:26853446
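The factor-analytic claim above (a single-factor model captures audiovisual integration better than multi-factor models) can be illustrated with a short model comparison. This is a hedged sketch using scikit-learn's FactorAnalysis, not the authors' analysis; the data file and the standardized measure columns are assumptions.

```python
# Compare 1-factor vs. 2-factor models of integration measures (sketch).
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical matrix: rows = 86 listeners; columns = capacity plus
# sentence-based audiovisual-benefit accuracy measures (standardized).
X = np.loadtxt("integration_measures.csv", delimiter=",")

for k in (1, 2):
    fa = FactorAnalysis(n_components=k).fit(X)
    print(k, "factors, mean log-likelihood:", fa.score(X))
    print("loadings:\n", fa.components_)  # capacity should load strongly on the single factor
```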
The language of future-thought: an fMRI study of embodiment and tense processing.
Gilead, Michael; Liberman, Nira; Maril, Anat
2013-01-15
The ability to comprehend and represent the temporal properties of an occurrence is a crucial aspect of human language and cognition. Despite advances in neurolinguistic research into semantic processing, surprisingly little is known regarding the mechanisms which support the comprehension of temporal semantics. We used fMRI to investigate neural activity associated with processing of concrete and abstract sentences across the three temporal categories: past, present, and future. Theories of embodied cognition predict that concreteness-related activity would be evident in sensory and motor areas regardless of tense. Contrastingly, relying upon construal level theory we hypothesized that: (1) the neural markers associated with concrete language processing would appear for past and present tense sentences, but not for future sentences; (2) future tense sentences would activate intention-processing areas. Consistent with our first prediction, the results showed that activation in the parahippocampal gyrus differentiated between concrete and abstract sentences for past and present tense sentences, but not for future sentences. Not consistent with our second prediction, future tense sentences did not activate most of the regions that are implicated in the processing of intentions, but only activated the vmPFC. We discuss the implications of the current results to theories of embodied cognition and tense semantics. Copyright © 2012 Elsevier Inc. All rights reserved.
Grammar for College Writing: A Sentence-Composing Approach
ERIC Educational Resources Information Center
Killgallon, Don; Killgallon, Jenny
2010-01-01
Across America, in thousands of classrooms, from elementary school to high school, the time-tested sentence-composing approach has given students tools to become better writers. Now the authors present a much anticipated sentence-composing grammar worktext for college writing. This book presents a new and easier way to understand grammar: (1) Noun…
Kurashige, Hiroki; Yamashita, Yuichi; Hanakawa, Takashi; Honda, Manabu
2018-01-01
Knowledge acquisition is a process in which one actively selects a piece of information from the environment and assimilates it with prior knowledge. However, little is known about the neural mechanism underlying selectivity in knowledge acquisition. Here we executed a 2-day human experiment to investigate the involvement of characteristic spontaneous activity resembling a so-called “preplay” in selectivity in sentence comprehension, an instance of knowledge acquisition. On day 1, we presented 10 sentences (prior sentences) that were difficult to understand on their own. On the following day, we first measured the resting-state functional magnetic resonance imaging (fMRI). Then, we administered a sentence comprehension task using 20 new sentences (posterior sentences). The posterior sentences were also difficult to understand on their own, but some could be associated with prior sentences to facilitate their understanding. Next, we measured the posterior sentence-induced fMRI to identify the neural representation. From the resting-state fMRI, we extracted the appearances of activity patterns similar to the neural representations for posterior sentences. Importantly, the resting-state fMRI was measured before giving the posterior sentences, and thus such appearances could be considered as preplay-like or prototypical neural representations. We compared the intensities of such appearances with the understanding of posterior sentences. This gave a positive correlation between these two variables, but only if posterior sentences were associated with prior sentences. Additional analysis showed the contribution of the entorhinal cortex, rather than the hippocampus, to the correlation. The present study suggests that prior knowledge-based arrangement of neural activity before an experience contributes to the active selection of information to be learned. Such arrangement prior to an experience resembles preplay activity observed in the rodent brain. In terms of knowledge acquisition, the present study leads to a new view of the brain (or more precisely of the brain’s knowledge) as an autopoietic system in which the brain (or knowledge) selects what it should learn by itself, arranges preplay-like activity as a position for the new information in advance, and actively reorganizes itself. PMID:29662446
Rosemann, Stephanie; Thiel, Christiane M
2018-07-15
Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input effects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss. Copyright © 2018 Elsevier Inc. All rights reserved.
Effects of Word Frequency and Modality on Sentence Comprehension Impairments in People with Aphasia
ERIC Educational Resources Information Center
DeDe, Gayle
2012-01-01
Purpose: It is well known that people with aphasia have sentence comprehension impairments. The present study investigated whether lexical factors contribute to sentence comprehension impairments in both the auditory and written modalities using online measures of sentence processing. Method: People with aphasia and non brain-damaged controls…
The Role of Constraints in Creative Sentence Production
ERIC Educational Resources Information Center
Haught, Catrinel
2015-01-01
Two experiments explored how people create novel sentences referring to given entities presented either in line drawings or in nouns. The line drawings yielded more creative sentences than the words, both as rated by judges and objectively by a measure of the amount of information that the sentences conveyed. A hypothesis about the cognitive…
Imageability effects on sentence judgement by right-brain-damaged adults
Lederer, Lisa Guttentag; Scott, April Gibbs; Tompkins, Connie A.; Dickey, Michael W.
2009-01-01
Background For decades researchers assumed visual image generation was the province of the right hemisphere. The lack of corresponding evidence was only recently noted, yet conflicting results still leave open the possibility that the right hemisphere plays a role. This study assessed imagery generation in adult participants with and without right hemisphere damage (RHD). Imagery was operationalised as the activation of representations retrieved from long-term memory similar to those that underlie sensory experience, in the absence of the usual sensory stimulation, and in the presence of communicative stimuli. Aims The primary aim of the study was to explore the widely held belief that there is an association between the right hemisphere and imagery generation ability. We also investigated whether visual and visuo-motor imagery generation abilities differ in adults with RHD. Methods & Procedures Participants included 34 adults with unilateral RHD due to cerebrovascular accident and 38 adults who served as non-brain-damaged (NBD) controls. To assess the potential effects of RHD on the processing of language stimuli that differ in imageability, participants performed an auditory sentence verification task. Participants listened to high- and low-imageability sentences from Eddy and Glass (1981) and indicated whether each sentence was true or false. The dependent measures for this task were performance accuracy and response times (RT). Outcomes & Results In general, accuracy was higher, and response time lower, for low-imagery than for high-imagery items. Although NBD participants’ RTs for low-imagery items were significantly faster than those for high-imagery items, this difference disappeared in the group with RHD. We confirmed that this result was not due to a speed–accuracy trade-off or to syntactic differences between stimulus sets. A post hoc analysis also suggested that the group with RHD was selectively impaired in motor, rather than visual, imagery generation. Conclusions The disproportionately high RT of participants with RHD in response to low-imagery items suggests that these items had other properties that made their verification difficult for this population. The nature and extent of right hemisphere patients’ deficits in processing different types of imagery should be considered. In addition, the capacity of adults with RHD to generate visual and motor imagery should be investigated separately in future studies. PMID:20054429
Self-corrected elaboration and spacing effects in incidental memory.
Toyota, Hiroshi
2006-04-01
The present study investigated the effect of self-corrected elaboration on incidental memory as a function of presentation type (massed vs. spaced) and sentence frame (image vs. nonimage). In the self-corrected elaboration condition, subjects were presented with a target word and an incongruous sentence frame and asked to correct the target to make a common sentence, whereas in the experimenter-corrected elaboration condition they were asked to rate the appropriateness of the congruous word presented; both conditions were followed by a free recall test. Self-corrected elaboration was superior to experimenter-corrected elaboration only for certain combinations of presentation type and sentence frame. These results are discussed in terms of the effectiveness of self-corrected elaboration.
The effect of fMRI task combinations on determining the hemispheric dominance of language functions.
Niskanen, Eini; Könönen, Mervi; Villberg, Ville; Nissi, Mikko; Ranta-Aho, Perttu; Säisänen, Laura; Karjalainen, Pasi; Aikiä, Marja; Kälviäinen, Reetta; Mervaala, Esa; Vanninen, Ritva
2012-04-01
The purpose of this study was to establish the most suitable combination of functional magnetic resonance imaging (fMRI) language tasks for clinical use in determining language dominance and to define the variability in laterality index (LI) and activation power between different combinations of language tasks. Activation patterns of different fMRI analyses of five language tasks (word generation, responsive naming, letter task, sentence comprehension, and word pair) were defined for 20 healthy volunteers (16 right-handed). LIs and sums of T values were calculated for each task separately and for four combinations of tasks in predefined regions of interest. Variability in terms of activation power and lateralization was defined in each analysis. In addition, the visual assessment of lateralization of language functions based on the individual fMRI activation maps was conducted by an experienced neuroradiologist. A combination analysis of word generation, responsive naming, and sentence comprehension was the most suitable in terms of activation power, robustness to detect essential language areas, and scanning time. In general, combination analyses of the tasks provided higher overall activation levels than single tasks and reduced the number of outlier voxels disturbing the calculation of LI. A combination of auditory and visually presented tasks that activate different aspects of language functions with sufficient activation power may be a useful task battery for determining language dominance in patients.
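The laterality index mentioned above is conventionally computed from suprathreshold activation in homologous left- and right-hemisphere regions of interest; the abstract does not give the authors' exact formula, so the following sketch assumes the standard (L - R)/(L + R) form over summed T values (the threshold and the data are hypothetical).

```python
import numpy as np

def laterality_index(t_left, t_right, threshold=2.0):
    """(L - R) / (L + R) over suprathreshold T values in homologous
    ROIs. Ranges from -1 (right-lateralized) to +1 (left-lateralized).
    The threshold value is an assumption, not taken from the study."""
    left = np.sum(t_left[t_left > threshold])
    right = np.sum(t_right[t_right > threshold])
    total = left + right
    return 0.0 if total == 0 else (left - right) / total

# Hypothetical T-value maps for a left and a right language ROI
rng = np.random.default_rng(0)
t_left = rng.normal(2.5, 1.0, 500)
t_right = rng.normal(1.5, 1.0, 500)
print(laterality_index(t_left, t_right))  # > 0 suggests left dominance
```

Summing T values rather than counting voxels makes the index less sensitive to the exact threshold, which is one reason combined-task analyses with fewer outlier voxels stabilize the LI.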
[Intermodal timing cues for audio-visual speech recognition].
Hashimoto, Masahiro; Kumashiro, Masaharu
2004-06-01
The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than under the audio-alone condition. On the other hand, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech seemed to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.
Desjardins, Jamie L
2016-01-01
Older listeners with hearing loss may exert more cognitive resources to maintain a level of listening performance similar to that of younger listeners with normal hearing. Unfortunately, this increase in cognitive load, which is often conceptualized as increased listening effort, may come at the cost of cognitive processing resources that might otherwise be available for other tasks. The purpose of this study was to evaluate the independent and combined effects of a hearing aid directional microphone and a noise reduction (NR) algorithm on reducing the listening effort older listeners with hearing loss expend on a speech-in-noise task. Participants were fitted with study-worn, commercially available behind-the-ear hearing aids. Listening effort on a sentence recognition in noise task was measured using an objective auditory-visual dual-task paradigm. The primary task required participants to repeat sentences presented in quiet and in a four-talker babble. The secondary task was a digital visual pursuit rotor-tracking test, for which participants were instructed to use a computer mouse to track a moving target around an ellipse that was displayed on a computer screen. Each of the two tasks was presented separately and concurrently at a fixed overall speech recognition performance level of 50% correct with and without the directional microphone and/or the NR algorithm activated in the hearing aids. In addition, participants reported how effortful it was to listen to the sentences in quiet and in background noise in the different hearing aid listening conditions. Fifteen older listeners with mild sloping to severe sensorineural hearing loss participated in this study. Listening effort in background noise was significantly reduced with the directional microphones activated in the hearing aids. However, there was no significant change in listening effort with the hearing aid NR algorithm compared to no noise processing. Correlation analysis between objective and self-reported ratings of listening effort showed no significant relation. Directional microphone processing effectively reduced the cognitive load of listening to speech in background noise. This is significant because it is likely that listeners with hearing impairment will frequently encounter noisy speech in their everyday communications. American Academy of Audiology.
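As an illustration of the dual-task logic above: listening effort is commonly quantified as the decrement in secondary-task (here, rotor-tracking) performance from the single-task to the dual-task condition. The abstract does not state the exact formula, so this sketch assumes the conventional proportional dual-task cost; the scores are hypothetical.

```python
def dual_task_cost(single_score, dual_score):
    """Proportional drop in secondary-task performance under concurrent
    listening; a larger cost is read as greater listening effort."""
    return 100.0 * (single_score - dual_score) / single_score

# Hypothetical tracking accuracy (% time on target)
print(dual_task_cost(82.0, 61.5))  # 25.0 -> 25% dual-task cost
```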
Sentence comprehension in agrammatic aphasia: history and variability to clinical implications.
Johnson, Danielle; Cannizzaro, Michael S
2009-01-01
Individuals with Broca's aphasia often present with deficits in their ability to comprehend non-canonical sentences. This has been contrastingly characterized as a systematic loss of specific grammatical abilities or as individual variability in the dynamics between processing load and resource availability. The present study investigated sentence level comprehension in participants with Broca's aphasia in an attempt to integrate these contrasting views into a clinically useful process. Two participants diagnosed with Broca's aphasia were assessed using a sentence-to-picture matching paradigm and a truth-value judgement task, across sentence constructions thought to be problematic for this population. The data demonstrate markedly different patterns of performance between participants, as well as variability within participants (e.g. by sentence type). These findings support the notion of individual performance variability in persons with aphasia. Syntactic theory was instructive for assessing sentence level comprehension, leading to a clinically relevant process of identifying treatment targets considering both performance variability and syntactic complexity for this population.
Ultrasound visual feedback in articulation therapy following partial glossectomy.
Blyth, Katrina M; Mccabe, Patricia; Madill, Catherine; Ballard, Kirrie J
2016-01-01
Disordered speech is common following treatment for tongue cancer; however, there is insufficient high-quality evidence to guide clinical decision making about treatment. This study investigated the use of ultrasound tongue imaging as a visual feedback tool to guide tongue placement during articulation therapy with two participants following partial glossectomy. A Phase I multiple baseline design across behaviors was used to investigate the therapeutic effect of ultrasound visual feedback during speech rehabilitation. Percent consonants correct and speech intelligibility at the sentence level were used to measure acquisition, generalization and maintenance of speech skills for treated and untreated related phonemes, while unrelated phonemes were tested to demonstrate experimental control. Swallowing and oromotor measures were also taken to monitor change. Sentence intelligibility was not a sensitive measure of speech change, but both participants demonstrated significant change in percent consonants correct for treated phonemes. One participant also demonstrated generalization to non-treated phonemes. Control phonemes, along with swallow and oromotor measures, remained stable throughout the study. This study establishes the therapeutic benefit of ultrasound visual feedback in speech rehabilitation following partial glossectomy. Readers will be able to explain why and how tongue cancer surgery impacts articulation precision. Readers will also be able to explain the acquisition, generalization and maintenance effects in the study. Copyright © 2016. Published by Elsevier Inc.
The Thematic Structure of the Sentence in English and Polish.
ERIC Educational Resources Information Center
Szwedek, Aleksander
An important feature of the sentence in any language is its thematic structure, new/given information organization. It has been found that in English, where word order is grammatically determined, the thematic structure is signalled by the place of the sentence stress. If an indefinite noun (new information) is present in the sentence, it bears…
Retrieval of Sentence Relations: Semantic vs. Syntactic Deep Structure.
ERIC Educational Resources Information Center
Perfetti, Charles A.
Two experiments on unaided and cued recall of sentences presented in context to college students are reported in this study. Key nouns in the sentences were arranged to have uniform surface functions, but to vary independently in deep syntactic category and semantic function. Cued recall for sentences in which the semantic function of actor and…
Recalibration of vocal affect by a dynamic face.
Baart, Martijn; Vroomen, Jean
2018-04-25
Perception of vocal affect is influenced by the concurrent sight of an emotional face. We demonstrate that the sight of an emotional face also can induce recalibration of vocal affect. Participants were exposed to videos of a 'happy' or 'fearful' face in combination with a slightly incongruous sentence with ambiguous prosody. After this exposure, ambiguous test sentences were rated as more 'happy' when the exposure phase contained 'happy' instead of 'fearful' faces. This auditory shift likely reflects recalibration that is induced by error minimization of the inter-sensory discrepancy. In line with this view, when the prosody of the exposure sentence was non-ambiguous and congruent with the face (without audiovisual discrepancy), aftereffects went in the opposite direction, likely reflecting adaptation. Our results demonstrate, for the first time, that perception of vocal affect is flexible and can be recalibrated by slightly discrepant visual information.
Speech-in-speech perception and executive function involvement
Perrone-Bertolotti, Marcela; Tassin, Maxime
2017-01-01
The present study investigated the link between speech-in-speech perception capacities and four executive function components: response suppression, inhibitory control, switching and working memory. We constructed a cross-modal semantic priming paradigm using a written target word and a spoken prime word embedded in one of two concurrent auditory sentences (a cocktail-party situation). The prime and target were semantically related or unrelated. Participants had to perform a lexical decision task on visual target words while simultaneously listening to only one of the two spoken sentences. The attention of the participant was manipulated: the prime was either in the sentence the participant listened to or in the ignored one. In addition, we evaluated the executive function abilities of participants (switching cost, inhibitory-control cost and response-suppression cost) and their working memory span. Correlation analyses were performed between the executive and priming measures. Our results showed a significant interaction between attention and semantic priming: we observed a significant priming effect in the attended but not in the ignored condition. Only priming effects obtained in the ignored condition were significantly correlated with some of the executive measures. However, no correlation between priming effects and working memory capacity was found. Overall, these results confirm, first, the role of attention in the semantic priming effect and, second, the involvement of executive functions in speech-in-noise understanding. PMID:28708830
Neural networks mediating sentence reading in the deaf
Hirshorn, Elizabeth A.; Dye, Matthew W. G.; Hauser, Peter C.; Supalla, Ted R.; Bavelier, Daphne
2014-01-01
The present work addresses the neural bases of sentence reading in deaf populations. To better understand the relative role of deafness and spoken language knowledge in shaping the neural networks that mediate sentence reading, three populations with different degrees of English knowledge and depth of hearing loss were included—deaf signers, oral deaf and hearing individuals. The three groups were matched for reading comprehension and scanned while reading sentences. A similar neural network of left perisylvian areas was observed, supporting the view of a shared network of areas for reading despite differences in hearing and English knowledge. However, differences were observed, in particular in the auditory cortex, with deaf signers and oral deaf showing greatest bilateral superior temporal gyrus (STG) recruitment as compared to hearing individuals. Importantly, within deaf individuals, the same STG area in the left hemisphere showed greater recruitment as hearing loss increased. To further understand the functional role of such auditory cortex re-organization after deafness, connectivity analyses were performed from the STG regions identified above. Connectivity from the left STG toward areas typically associated with semantic processing (BA45 and thalami) was greater in deaf signers and in oral deaf as compared to hearing. In contrast, connectivity from left STG toward areas identified with speech-based processing was greater in hearing and in oral deaf as compared to deaf signers. These results support the growing literature indicating recruitment of auditory areas after congenital deafness for visually-mediated language functions, and establish that both auditory deprivation and language experience shape its functional reorganization. Implications for differential reliance on semantic vs. phonological pathways during reading in the three groups are discussed. PMID:24959127
Why do children pay more attention to grammatical morphemes at the ends of sentences?
Sundara, Megha
2018-05-01
Children pay more attention to the beginnings and ends of sentences than to the middle. In natural speech, the ends of sentences are prosodically and segmentally enhanced; they are also privileged by sensory and recall advantages. We contrasted whether acoustic enhancement or sensory and recall-related advantages are necessary and sufficient for the salience of grammatical morphemes at the ends of sentences. We measured 22-month-olds' listening times to grammatical and ungrammatical sentences with third person singular -s. Crucially, by cross-splicing the speech stimuli, acoustic enhancement and sensory and recall advantages were fully crossed. Only children presented with the verb in sentence-final position, a position with sensory and recall advantages, distinguished between the grammatical and ungrammatical sentences. Thus, sensory and recall advantages alone were necessary and sufficient to make grammatical morphemes at the ends of sentences salient. These general processing constraints privilege the ends of sentences over middles, regardless of acoustic enhancement.
Schweppe, Judith; Rummer, Ralf; Bormann, Tobias; Martin, Randi C
2011-12-01
We present one experiment and a neuropsychological case study to investigate to what extent phonological and semantic representations contribute to short-term sentence recall. We modified Potter and Lombardi's (1990) intrusion paradigm, in which retention of a word list interferes with sentence recall: the list includes a semantically related lure that is expected to intrude into sentence recall. In our version, lure words are either semantically related to target words in the sentence or semantically plus phonologically related. With healthy participants, intrusions are more frequent when lure and target overlap phonologically in addition to semantically than when they overlap solely semantically. When this paradigm is applied to a patient with a phonological short-term memory impairment, both lure types induce the same number of intrusions. These findings indicate that phonological information is usually retained in sentence recall in addition to semantic information.
Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.
2017-01-01
Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797
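The "optimal combination" account in the conclusion above is usually formalized as maximum-likelihood cue integration, in which each unimodal estimate is weighted by its reliability (inverse variance). The abstract does not specify the authors' model, so this is a generic sketch of that idea with made-up numbers.

```python
def ml_integration(est_a, var_a, est_v, var_v):
    """Inverse-variance-weighted combination of an auditory and a
    visual estimate; the combined variance is never larger than the
    variance of the better single cue."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    combined = w_a * est_a + (1 - w_a) * est_v
    combined_var = 1 / (1 / var_a + 1 / var_v)
    return combined, combined_var

# Degrading the audio (larger var_a) shifts weight toward vision,
# mirroring the larger visual benefit for vocoded speech.
print(ml_integration(est_a=0.6, var_a=4.0, est_v=0.8, var_v=1.0))
```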
Carroll, Rebecca; Uslar, Verena; Brand, Thomas; Ruigendijk, Esther
The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., on the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of or interact with top-down processing mechanisms associated with grammatical sentence structure. Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured on-line at syntactically critical processing points as well as two control points to capture differences in processing mechanisms. An off-line comprehension task served as an additional indicator of sentence (mis)interpretation, and enforced syntactic processing. The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation. Hearing impairment adds perceptual processing load during sentence processing, but affects grammatical processing beyond the word level to the same degree as in normal hearing, with minor differences in processing mechanisms. The data contribute to our understanding of individual differences in speech perception and language understanding. The authors interpret their results within the Ease of Language Understanding model.
Oscillatory EEG dynamics underlying automatic chunking during sentence processing.
Bonhage, Corinna E; Meyer, Lars; Gruber, Thomas; Friederici, Angela D; Mueller, Jutta L
2017-05-15
Sentences are easier to remember than random word sequences, likely because linguistic regularities facilitate chunking of words into meaningful groups. The present electroencephalography study investigated the neural oscillations modulated by this so-called sentence superiority effect during the encoding and maintenance of sentence fragments versus word lists. We hypothesized a chunking-related modulation of neural processing during the encoding and retention of sentences (i.e., sentence fragments) as compared to word lists. Time-frequency analysis revealed a two-fold oscillatory pattern for the memorization of sentences: Sentence encoding was accompanied by higher delta amplitude (4 Hz), originating from regions processing both syntax and semantics (bilateral superior/middle temporal regions and fusiform gyrus). Subsequent sentence retention was reflected in decreased theta (6 Hz) and beta/gamma (27-32 Hz) amplitude instead. Notably, whether participants simply read or properly memorized the sentences did not impact chunking-related activity during encoding. Therefore, we argue that the sentence superiority effect is grounded in highly automatized language processing mechanisms, which generate meaningful memory chunks irrespective of task demands. Copyright © 2017 Elsevier Inc. All rights reserved.
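Amplitude at the reported frequencies (4 Hz delta, 6 Hz theta, 27-32 Hz beta/gamma) can be extracted with any standard time-frequency decomposition; the authors' exact pipeline is not given in the abstract, so this sketch uses a plain short-time Fourier transform on a hypothetical EEG channel with an assumed sampling rate.

```python
import numpy as np
from scipy.signal import stft

fs = 500  # assumed sampling rate (Hz)
eeg = np.random.default_rng(0).normal(size=10 * fs)  # hypothetical channel

# 2 s windows give 0.5 Hz frequency resolution
f, t, Z = stft(eeg, fs=fs, nperseg=2 * fs)
amplitude = np.abs(Z)  # time-frequency amplitude

delta = amplitude[(f >= 3.5) & (f <= 4.5)].mean()
theta = amplitude[(f >= 5.5) & (f <= 6.5)].mean()
beta_gamma = amplitude[(f >= 27) & (f <= 32)].mean()
print(delta, theta, beta_gamma)
```

In practice, encoding- and retention-period amplitudes would be averaged within condition-specific time windows and compared between sentence and word-list trials.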
Visual information can hinder working memory processing of speech.
Mishra, Sushmit; Lunner, Thomas; Stenfelt, Stefan; Rönnberg, Jerker; Rudner, Mary
2013-08-01
The purpose of the present study was to evaluate the new Cognitive Spare Capacity Test (CSCT), which measures aspects of working memory capacity for heard speech in the audiovisual and auditory-only modalities of presentation. In Experiment 1, 20 young adults with normal hearing performed the CSCT and an independent battery of cognitive tests. In the CSCT, they listened to and recalled 2-digit numbers according to instructions inducing executive processing at 2 different memory loads. In Experiment 2, 10 participants performed a less executively demanding free recall task using the same stimuli. CSCT performance demonstrated an effect of memory load and was associated with independent measures of executive function and inference making but not with general working memory capacity. Audiovisual presentation was associated with lower CSCT scores but higher free recall performance scores. CSCT is an executively challenging test of the ability to process heard speech. It captures cognitive aspects of listening related to sentence comprehension that are quantitatively and qualitatively different from working memory capacity. Visual information provided in the audiovisual modality of presentation can hinder executive processing in working memory of nondegraded speech material.
Cognitive Load in Voice Therapy Carry-Over Exercises.
Iwarsson, Jenny; Morris, David Jackson; Balling, Laura Winther
2017-01-01
The cognitive load generated by online speech production may vary with the nature of the speech task. This article examines 3 speech tasks used in voice therapy carry-over exercises, in which a patient is required to adopt and automatize new voice behaviors, ultimately in daily spontaneous communication. Twelve subjects produced speech in 3 conditions: rote speech (weekdays), sentences in a set form, and semispontaneous speech. Subjects simultaneously performed a secondary visual discrimination task for which response times were measured. On completion of each speech task, subjects rated their experience on a questionnaire. Response times from the secondary, visual task were found to be shortest for the rote speech, longer for the semispontaneous speech, and longest for the sentences within the set framework. Principal components derived from the subjective ratings were found to be linked to response times on the secondary visual task. Acoustic measures reflecting fundamental frequency distribution and vocal fold compression varied across the speech tasks. The results indicate that consideration should be given to the selection of speech tasks during the process leading to automation of revised speech behavior and that self-reports may be a reliable index of cognitive load.
Sentence comprehension in autism: thinking in pictures with decreased functional connectivity
Kana, Rajesh K.; Keller, Timothy A.; Cherkassky, Vladimir L.; Minshew, Nancy J.; Just, Marcel Adam
2015-01-01
Comprehending high-imagery sentences like The number eight when rotated 90 degrees looks like a pair of eyeglasses involves the participation and integration of several cortical regions. The linguistic content must be processed to determine what is to be mentally imaged, and then the mental image must be evaluated and related to the sentence. A theory of cortical underconnectivity in autism predicts that the interregional collaboration required between linguistic and imaginal processing in this task would be underserved in autism. This functional MRI study examined brain activation in 12 participants with autism and 13 age- and IQ-matched control participants while they processed sentences with either high- or low-imagery content. The analysis of functional connectivity among cortical regions showed that the language and spatial centres in the participants with autism were not as well synchronized as in controls. In addition to the functional connectivity differences, there was also a group difference in activation. In the processing of low-imagery sentences (e.g. Addition, subtraction and multiplication are all math skills), the use of imagery is not essential to comprehension. Nevertheless, the autism group activated parietal and occipital brain regions associated with imagery for comprehending both the low and high-imagery sentences, suggesting that they were using mental imagery in both conditions. In contrast, the control group showed imagery-related activation primarily in the high-imagery condition. The findings provide further evidence of underintegration of language and imagery in autism (and hence expand the understanding of underconnectivity) but also show that people with autism are more reliant on visualization to support language comprehension. PMID:16835247
Shi, Lu-Feng; Koenig, Laura L
2016-01-01
Non-native listeners do not recognize English sentences as effectively as native listeners, especially in noise. It is not entirely clear to what extent such group differences arise from differences in relative weight of semantic versus syntactic cues. This study quantified the use and weighting of these contextual cues via Boothroyd and Nittrouer's j and k factors. The j represents the probability of recognizing sentences with or without context, whereas the k represents the degree to which context improves recognition performance. Four groups of 13 normal-hearing young adult listeners participated. One group consisted of native English monolingual (EMN) listeners, whereas the other three consisted of non-native listeners contrasting in their language dominance and first language: English-dominant Russian-English, Russian-dominant Russian-English, and Spanish-dominant Spanish-English bilinguals. All listeners were presented three sets of four-word sentences: high-predictability sentences included both semantic and syntactic cues, low-predictability sentences included syntactic cues only, and zero-predictability sentences included neither semantic nor syntactic cues. Sentences were presented at 65 dB SPL binaurally in the presence of speech-spectrum noise at +3 dB SNR. Listeners orally repeated each sentence and recognition was calculated for individual words as well as the sentence as a whole. Comparable j values across groups for high-predictability, low-predictability, and zero-predictability sentences suggested that all listeners, native and non-native, utilized contextual cues to recognize English sentences. Analysis of the k factor indicated that non-native listeners took advantage of syntax as effectively as EMN listeners. However, only English-dominant bilinguals utilized semantics to the same extent as EMN listeners; semantics did not provide a significant benefit for the two non-English-dominant groups. When combined, semantics and syntax benefitted EMN listeners significantly more than all three non-native groups of listeners. Language background influenced the use and weighting of semantic and syntactic cues in a complex manner. A native language advantage existed in the effective use of both cues combined. A language-dominance effect was seen in the use of semantics. No first-language effect was present for the use of either or both cues. For all non-native listeners, syntax contributed significantly more to sentence recognition than semantics, possibly due to the fact that semantics develops more gradually than syntax in second-language acquisition. The present study provides evidence that Boothroyd and Nittrouer's j and k factors can be successfully used to quantify the effectiveness of contextual cue use in clinically relevant, linguistically diverse populations.
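Boothroyd and Nittrouer's factors have closed textbook forms: j relates whole-sentence recognition to word recognition via p_sentence = p_word^j, and k expresses the context benefit as an exponent on the error probability, p_with = 1 - (1 - p_without)^k. The sketch below implements those standard forms; the study's exact parameterization may differ, and the scores are hypothetical.

```python
import math

def j_factor(p_word, p_sentence):
    """p_sentence = p_word ** j. A j near the number of words implies
    the words are recognized independently; a smaller j implies that
    context binds the words together."""
    return math.log(p_sentence) / math.log(p_word)

def k_factor(p_without, p_with):
    """p_with = 1 - (1 - p_without) ** k. k > 1 quantifies how much
    context reduces the probability of a recognition error."""
    return math.log(1 - p_with) / math.log(1 - p_without)

# Hypothetical scores: 70% word and 50% sentence recognition;
# 50% recognition without context rising to 80% with context.
print(j_factor(0.70, 0.50))  # ~1.94
print(k_factor(0.50, 0.80))  # ~2.32
```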
Carey, Daniel; Mercure, Evelyne; Pizzioli, Fabrizio; Aydelott, Jennifer
2014-12-01
The effects of ear of presentation and competing speech on N400s to spoken words in context were examined in a dichotic sentence priming paradigm. Auditory sentence contexts with a strong or weak semantic bias were presented in isolation to the right or left ear, or with a competing signal presented in the other ear at an SNR of -12 dB. Target words were congruent or incongruent with the sentence meaning. Competing speech attenuated N400s to both congruent and incongruent targets, suggesting that the demand imposed by a competing signal disrupts the engagement of semantic comprehension processes. Bias strength affected N400 amplitudes differentially depending upon ear of presentation: weak contexts presented to the le/RH produced a more negative N400 response to targets than strong contexts, whereas no significant effect of bias strength was observed for sentences presented to the re/LH. The results are consistent with a model of semantic processing in which the RH relies on integrative processing strategies in the interpretation of sentence-level meaning. Copyright © 2014 Elsevier Ltd. All rights reserved.
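Presenting the competing signal at an SNR of -12 dB amounts to scaling the masker relative to the target's RMS level before mixing. A minimal sketch of that scaling (the signals and sampling rate are hypothetical):

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Scale the masker so the target-to-masker power ratio equals
    snr_db, then sum; negative SNR means the masker is more intense."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(target) / (rms(masker) * 10 ** (snr_db / 20))
    return target + gain * masker

rng = np.random.default_rng(0)
target = rng.normal(size=16000)  # hypothetical 1 s sentence at 16 kHz
masker = rng.normal(size=16000)  # hypothetical competing speech
mixed = mix_at_snr(target, masker, snr_db=-12)
```

In a dichotic design like the one above, the target and the scaled competing signal would be routed to opposite ears rather than summed into one channel; the SNR calculation is the same.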
ERIC Educational Resources Information Center
Dambacher, Michael; Dimigen, Olaf; Braun, Mario; Wille, Kristin; Jacobs, Arthur M.; Kliegl, Reinhold
2012-01-01
Three ERP experiments examined the effect of word presentation rate (i.e., stimulus onset asynchrony, SOA) on the time course of word frequency and predictability effects in sentence reading. In Experiments 1 and 2, sentences were presented word-by-word in the screen center at an SOA of 700 and 490ms, respectively. While these rates are typical…
ERIC Educational Resources Information Center
Freedle, Roy; Hall, William S.
A total of 34 children, ages 2 and a half to 6, were presented with sentences for imitation that either violated or honored a prenominal adjective ordering rule, which requires that size adjectives must precede color adjectives. Two response measures were evaluated in terms of these sentence types: latency to begin a sentence imitation and recall…
Zaitchik, Deborah; Walker, Caren; Miller, Saul; LaViolette, Pete; Feczko, Eric; Dickerson, Bradford C
2010-07-01
By age 2, children attribute referential mental states such as perceptions and emotions to themselves and others, yet it is not until age 4 that they attribute representational mental states such as beliefs. This raises an interesting question: is attribution of beliefs different from attribution of perceptions and emotions in terms of its neural substrate? To address this question with a high degree of anatomic specificity, we partitioned the TPJ, a broad area often found to be recruited in theory of mind tasks, into 2 neuroanatomically specific regions of interest: Superior Temporal Sulcus (STS) and Inferior Parietal Lobule (IPL). To maximize behavioral specificity, we designed a tightly controlled verbal task comprised of sets of single sentences--sentences identical except for the type of mental state specified in the verb (belief, emotion, perception, syntax control). Results indicated that attribution of beliefs more strongly recruited both regions of interest than did emotions or perceptions. This is especially surprising with respect to STS, since it is widely reported in the literature to mediate the detection of referential states--among them emotions and perceptions--rather than the inference of beliefs. An explanation is offered that focuses on the differences between verbal stimuli and visual stimuli, and between a process of sentence comprehension and a process of visual detection. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Facilitation of listening comprehension by visual information under noisy listening condition
NASA Astrophysics Data System (ADS)
Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi
2009-02-01
Comprehension of a sentence was measured under a wide range of delays between auditory and visual stimuli in an environment with low auditory clarity (-10 dB and -15 dB pink-noise levels). Results showed that the image aided comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (= 132 ms) or less; that the image did not aid comprehension when the delay was 8 frames (= 264 ms) or more; and that in some cases at the largest delay (32 frames), the video image interfered with comprehension.
Oh, Soo Hee; Donaldson, Gail S.; Kong, Ying-Yee
2016-01-01
Objectives Previous studies have documented the benefits of bimodal hearing as compared with a CI alone, but most have focused on the importance of bottom-up, low-frequency cues. The purpose of the present study was to evaluate the role of top-down processing in bimodal hearing by measuring the effect of sentence context on bimodal benefit for temporally interrupted sentences. It was hypothesized that low-frequency acoustic cues would facilitate the use of contextual information in the interrupted sentences, resulting in greater bimodal benefit for the higher context (CUNY) sentences than for the lower context (IEEE) sentences. Design Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise band vocoded sentences were presented to one ear with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Speech recognition scores were measured in three listening conditions: vocoder-alone, vocoder combined with LP speech, and vocoder combined with LPHCs. Temporally interrupted versions of the CUNY and IEEE sentences were used to assess listeners’ ability to fill in missing segments of speech by using top-down linguistic processing. Sentences were square-wave gated at a rate of 5 Hz with a 50 percent duty cycle. Three vocoder channel conditions were tested for each type of sentence (8, 12, and 16 channels for CUNY; 12, 16, and 32 channels for IEEE) and bimodal benefit was compared for similar amounts of spectral degradation (matched-channel comparisons) and similar ranges of baseline performance. Two gain measures, percentage-point gain and normalized gain, were examined. Results Significant effects of context on bimodal benefit were observed when LP speech was presented to the residual-hearing ear. For the matched-channel comparisons, CUNY sentences showed significantly higher normalized gains than IEEE sentences for both the 12-channel (20 points higher) and 16-channel (18 points higher) conditions. For the individual gain comparisons that used a similar range of baseline performance, CUNY sentences showed bimodal benefits that were significantly higher (7 percentage points, or 15 points normalized gain) than those for IEEE sentences. The bimodal benefits observed here for temporally interrupted speech were considerably smaller than those observed in an earlier study that used continuous speech (Kong et al., 2015). Further, unlike previous findings for continuous speech, no bimodal benefit was observed when LPHCs were presented to the LP ear. Conclusions Findings indicate that linguistic context has a significant influence on bimodal benefit for temporally interrupted speech and support the hypothesis that low-frequency acoustic information presented to the residual-hearing ear facilitates the use of top-down linguistic processing in bimodal hearing. However, bimodal benefit is reduced for temporally interrupted speech as compared to continuous speech, suggesting that listeners’ ability to restore missing speech information depends not only on top-down linguistic knowledge, but also on the quality of the bottom-up sensory input. PMID:27007220
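Two quantities above have simple closed forms: the 5 Hz, 50%-duty-cycle interruption is a periodic on/off gate (100 ms on, 100 ms off per 200 ms cycle), and normalized gain rescales the bimodal benefit by the headroom above baseline. A sketch under assumed parameters:

```python
import numpy as np

def square_gate(signal, fs, rate_hz=5.0, duty=0.5):
    """Periodically silence the waveform: at 5 Hz with a 50% duty
    cycle, each 200 ms period is 100 ms on followed by 100 ms off."""
    t = np.arange(len(signal)) / fs
    gate = (t % (1.0 / rate_hz)) < (duty / rate_hz)
    return signal * gate

def normalized_gain(baseline_pct, bimodal_pct):
    """Benefit relative to available headroom:
    (bimodal - baseline) / (100 - baseline), in percent."""
    return 100.0 * (bimodal_pct - baseline_pct) / (100.0 - baseline_pct)

speech = np.random.default_rng(0).normal(size=32000)  # hypothetical 2 s at 16 kHz
interrupted = square_gate(speech, fs=16000)
print(normalized_gain(baseline_pct=40.0, bimodal_pct=55.0))  # 25.0
```

Normalized gain is useful here precisely because the vocoder-alone baselines differ across channel conditions; a raw percentage-point gain would conflate benefit with baseline headroom.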
Cho-Reyes, Soojin; Thompson, Cynthia K.
2015-01-01
Background Verbs and sentences are often impaired in individuals with aphasia, and differential impairment patterns are associated with different types of aphasia. With currently available test batteries, however, it is challenging to provide a comprehensive profile of aphasic language impairments because they do not examine syntactically important properties of verbs and sentences. Aims This study presents data derived from the Northwestern Assessment of Verbs and Sentences (NAVS; Thompson, 2011), a new test battery designed to examine syntactic deficits in aphasia. The NAVS includes tests for verb naming and comprehension, and production of verb argument structure in simple active sentences, with each examining the effects of the number and optionality of arguments. The NAVS also tests production and comprehension of canonical and non-canonical sentences. Methods & Procedures A total of 59 aphasic participants (35 agrammatic and 24 anomic) were tested using a set of action pictures. Participants produced verbs or sentences for the production subtests and identified pictures corresponding to auditorily provided verbs or sentences for the comprehension subtests. Outcomes & Results The agrammatic group, compared to the anomic group, performed significantly more poorly on all subtests except verb comprehension, and for both groups comprehension was less impaired than production. On verb naming and argument structure production tests both groups exhibited difficulty with three-argument verbs, affected by the number and optionality of arguments. However, production of sentences using three-argument verbs was more impaired in the agrammatic, compared to the anomic, group. On sentence production and comprehension tests, the agrammatic group showed impairments in all types of non-canonical sentences, whereas the anomic group exhibited difficulty primarily with the most difficult, object relative, structures. Conclusions Results show that verb and sentence deficits seen in individuals with agrammatic aphasia are largely influenced by syntactic complexity; however, individuals with anomic aphasia appear to exhibit these impairments only for the most complex forms of verbs and sentences. The present data indicate that the NAVS is useful for characterising verb and sentence deficits in people with aphasia. PMID:26379358
Wu, Chung-Hsien; Chiu, Yu-Hsien; Guo, Chi-Shiang
2004-12-01
This paper proposes a novel approach to the generation of Chinese sentences from ill-formed Taiwanese Sign Language (TSL) for people with hearing impairments. First, a sign icon-based virtual keyboard is constructed to provide a visualized interface to retrieve sign icons from a sign database. A proposed language model (LM), based on a predictive sentence template (PST) tree, integrates a statistical variable n-gram LM and linguistic constraints to deal with the translation problem from ill-formed sign sequences to grammatical written sentences. The PST tree trained by a corpus collected from the deaf schools was used to model the correspondence between signed and written Chinese. In addition, a set of phrase formation rules, based on trigger pair category, was derived for sentence pattern expansion. These approaches improved the efficiency of text generation and the accuracy of word prediction and, therefore, improved the input rate. For the assessment of practical communication aids, a reading-comprehension training program with ten profoundly deaf students was undertaken in a deaf school in Tainan, Taiwan. Evaluation results show that the literacy aptitude test and subjective satisfactory level are significantly improved.
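The PST tree itself is not specified in the abstract, but the statistical n-gram component it integrates can be illustrated with a minimal bigram predictor; the corpus below consists of hypothetical English glosses standing in for written Chinese.

```python
from collections import Counter, defaultdict

# Hypothetical training corpus (glossed)
corpus = [["I", "want", "drink", "water"],
          ["I", "want", "eat", "rice"],
          ["he", "want", "drink", "tea"]]

bigrams = defaultdict(Counter)
for sent in corpus:
    for w1, w2 in zip(sent, sent[1:]):
        bigrams[w1][w2] += 1

def predict_next(word, k=2):
    """Rank candidate next words by relative bigram frequency; a PST
    tree would additionally prune candidates that violate sentence
    templates or other linguistic constraints."""
    total = sum(bigrams[word].values())
    return [(w, c / total) for w, c in bigrams[word].most_common(k)]

print(predict_next("want"))  # [('drink', 0.67), ('eat', 0.33)]
```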
Processing of Numerical and Proportional Quantifiers
ERIC Educational Resources Information Center
Shikhare, Sailee; Heim, Stefan; Klein, Elise; Huber, Stefan; Willmes, Klaus
2015-01-01
Quantifier expressions like "many" and "at least" are part of a rich repository of words in language representing magnitude information. The role of numerical processing in comprehending quantifiers was studied in a semantic truth value judgment task, asking adults to quickly verify sentences about visual displays using…
Automatic processing of pragmatic information in the human brain: a mismatch negativity study.
Zhao, Ming; Liu, Tao; Chen, Feiyan
2018-05-23
Language comprehension involves pragmatic information processing, which allows world knowledge to influence the interpretation of a sentence. This study explored whether pragmatic information can be processed automatically during spoken sentence comprehension. The experiment adopted the mismatch negativity (MMN) paradigm to capture neurophysiological indicators of automatic processing of spoken sentences. Pragmatically incorrect ('Foxes have wings') and correct ('Butterflies have wings') sentences were used as the experimental stimuli. In condition 1, the pragmatically correct sentence was the deviant and the pragmatically incorrect sentence was the standard stimulus; the assignment was reversed in condition 2. The results showed that MMN effects were induced within 60-120 and 220-260 ms when the pragmatically incorrect sentence served as the deviant stimulus, but not under the reverse assignment. The results indicated that the human brain can monitor for incorrect pragmatic information in the inattentive state and can automatically process pragmatic information at the beginning of spoken sentence comprehension.
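The MMN itself is conventionally obtained as the deviant-minus-standard difference waveform, quantified in a priori windows such as the 60-120 ms window reported above. A sketch with assumed epoch parameters and hypothetical single-trial data:

```python
import numpy as np

fs = 500  # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)  # epoch from -100 to 500 ms

rng = np.random.default_rng(0)
standard_erps = rng.normal(size=(200, t.size))  # hypothetical trials
deviant_erps = rng.normal(size=(50, t.size))

# MMN = average deviant response minus average standard response
mmn = deviant_erps.mean(axis=0) - standard_erps.mean(axis=0)

# Mean amplitude in the early window where an effect was reported
early = (t >= 0.060) & (t <= 0.120)
print(mmn[early].mean())
```

Because deviance is defined by stimulus probability, swapping which sentence is rare (as in conditions 1 and 2 above) dissociates pragmatic correctness from simple acoustic change detection.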
Phonological Substitution Errors in L2 ASL Sentence Processing by Hearing M2L2 Learners
ERIC Educational Resources Information Center
Williams, Joshua; Newman, Sharlene
2016-01-01
In the present study we aimed to investigate phonological substitution errors made by hearing second language (M2L2) learners of American Sign Language (ASL) during a sentence translation task. Learners saw sentences in ASL that were signed by either a native signer or a M2L2 learner. Learners were to simply translate the sentence from ASL to…
Revisiting Huey: on the importance of the upper part of words during reading.
Perea, Manuel
2012-12-01
Recent research has shown that the upper part of words enjoys an advantage over the lower part of words in the recognition of isolated words. The goal of the present article was to examine how removing the upper/lower part of the words influences eye movement control during silent normal reading. The participants' eye movements were monitored when reading intact sentences and when reading sentences in which the upper or the lower portion of the text was deleted. Results showed a greater reading cost (longer fixations) when the upper part of the text was removed than when the lower part of the text was removed (i.e., it influenced when to move the eyes). However, there was little influence on the initial landing position on a target word (i.e., on the decision as to where to move the eyes). In addition, lexical-processing difficulty (as inferred from the magnitude of the word frequency effect on a target word) was affected by text degradation. The implications of these findings for models of visual-word recognition and reading are discussed.
Enhancing biomedical text summarization using semantic relation extraction.
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
2011-01-01
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance and our results are better than the MEAD system, a well-known tool for text summarization.
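Stage 3 of the pipeline, selecting sentences that interpret the relevant relations, can be approximated by scoring each sentence by how many relevant (subject, predicate, object) triples it covers. SemRep's actual output format is not reproduced here; the triples and sentences below are hypothetical.

```python
# Hypothetical relations judged relevant to the concept "H1N1"
relevant = {("H1N1", "CAUSES", "fever"),
            ("oseltamivir", "TREATS", "H1N1")}

# Candidate sentences paired with the triples extracted from them
sentences = [
    ("H1N1 infection frequently causes high fever.",
     {("H1N1", "CAUSES", "fever")}),
    ("Oseltamivir is widely used to treat H1N1.",
     {("oseltamivir", "TREATS", "H1N1")}),
    ("Influenza surveillance began decades ago.", set()),
]

def summarize(sentences, relevant, top_k=2):
    """Rank sentences by coverage of relevant triples; keep the top_k
    that cover at least one relevant relation."""
    ranked = sorted(sentences, key=lambda s: len(s[1] & relevant),
                    reverse=True)
    return [text for text, triples in ranked[:top_k] if triples & relevant]

print(summarize(sentences, relevant))
```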
Language-driven anticipatory eye movements in virtual reality.
Eichert, Nicole; Peeters, David; Hagoort, Peter
2018-06-01
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.
Frequency Interference in Children's Recognition of Sentence Information
ERIC Educational Resources Information Center
Levin, Joel R.; And Others
1978-01-01
Children listened to sentences under two instructional sets (imagery or repetition) and answered multiple-choice alternatives--either identical or similar in meaning to correct information in the sentences, and either including or not including previously presented irrelevant information. The sources of interference predicted from recognition memory…
Salis, Christos; Hwang, Faustina; Howard, David; Lallini, Nicole
2017-02-01
Although the roles of verbal short-term and working memory on spoken sentence comprehension skills in persons with aphasia have been debated for many years, the development of treatments to mitigate verbal short-term and working memory deficits as a way of improving spoken sentence comprehension is a new avenue in treatment research. In this article, we review and critically appraise this emerging evidence base. We also present new data from five persons with aphasia in a replication of a previously reported treatment that had resulted in some improvement of spoken sentence comprehension in a person with aphasia. The replicated treatment did not result in improvements in sentence comprehension. We offer recommendations for future research in this admittedly weak, but clinically important, research avenue that would help improve our understanding of how short-term and working memory training produces improvement in sentence comprehension.
Deficit-Lesion Correlations in Syntactic Comprehension in Aphasia
Caplan, David; Michaud, Jennifer; Hufford, Rebecca; Makris, Nikos
2015-01-01
The effects of lesions on syntactic comprehension were studied in thirty-one people with aphasia (PWA). Participants were tested for the ability to parse and interpret four types of syntactic structures and elements -- passives, object extracted relative clauses, reflexives and pronouns -- in three tasks -- object manipulation, sentence picture matching with full sentence presentation and sentence picture matching with self-paced listening presentation. Accuracy, end-of-sentence RT and self-paced listening times for each word were measured. MR scans were obtained and analyzed for total lesion volume and for lesion size in 48 cortical areas. Lesion size in several areas of the left hemisphere was related to accuracy in particular sentence types in particular tasks and to self-paced listening times for critical words in particular sentence types. The results support a model of brain organization that includes areas that are specialized for the combination of particular syntactic and interpretive operations and the use of the meanings produced by those operations to accomplish task-related operations. PMID:26688433
Deficit-lesion correlations in syntactic comprehension in aphasia.
Caplan, David; Michaud, Jennifer; Hufford, Rebecca; Makris, Nikos
2016-01-01
The effects of lesions on syntactic comprehension were studied in thirty-one people with aphasia (PWA). Participants were tested for the ability to parse and interpret four types of syntactic structures and elements - passives, object extracted relative clauses, reflexives and pronouns - in three tasks - object manipulation, sentence picture matching with full sentence presentation and sentence picture matching with self-paced listening presentation. Accuracy, end-of-sentence RT and self-paced listening times for each word were measured. MR scans were obtained and analyzed for total lesion volume and for lesion size in 48 cortical areas. Lesion size in several areas of the left hemisphere was related to accuracy in particular sentence types in particular tasks and to self-paced listening times for critical words in particular sentence types. The results support a model of brain organization that includes areas that are specialized for the combination of particular syntactic and interpretive operations and the use of the meanings produced by those operations to accomplish task-related operations.
Shen, Wei; Qu, Qingqing; Li, Xingshan
2016-07-01
In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.
Constructive Memory in Conserving and Nonconserving First Graders
ERIC Educational Resources Information Center
Prawat, Richard S.; Cancelli, Anthony
1976-01-01
This study assessed the recognition by conserving and nonconserving first graders, of true and false premise and inference sentences following story presentations. Conservers performed slightly better than nonconservers on sentences other than true inference sentences, thus indicating that concrete mental operations are related to the process of…
Reduced efficiency of audiovisual integration for nonnative speech.
Yi, Han-Gyol; Phelps, Jasmine E B; Smiljanic, Rajka; Chandrasekaran, Bharath
2013-11-01
The role of visual cues in native listeners' perception of speech produced by nonnative speakers has not been extensively studied. Native perception of English sentences produced by native English and Korean speakers in audio-only and audiovisual conditions was examined. Korean speakers were rated as more accented in audiovisual than in the audio-only condition. Visual cues enhanced word intelligibility for native English speech but less so for Korean-accented speech. Reduced intelligibility of Korean-accented audiovisual speech was associated with implicit visual biases, suggesting that listener-related factors partially influence the efficiency of audiovisual integration for nonnative speech perception.
Integrating mechanisms of visual guidance in naturalistic language production.
Coco, Moreno I; Keller, Frank
2015-05-01
Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
Gender differences in identifying emotions from auditory and visual stimuli.
Waaramaa, Teija
2017-12-01
The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to gain better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or by a shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. Both genders recognized the emotional stimuli better from visual than from auditory information. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.
Phonological Advance Planning in Sentence Production
ERIC Educational Resources Information Center
Oppermann, Frank; Jescheniak, Jorg D.; Schriefers, Herbert
2010-01-01
Our study addresses the scope of phonological advance planning during sentence production using a novel experimental procedure. The production of German sentences in various syntactic formats (SVO, SOV, and VSO) was cued by presenting pictures of the agents of previously memorized agent-action-patient scenes. To tap the phonological activation of…
A Multiple-Channel Model of Task-Dependent Ambiguity Resolution in Sentence Comprehension
ERIC Educational Resources Information Center
Logacev, Pavel; Vasishth, Shravan
2016-01-01
Traxler, Pickering, and Clifton (1998) found that ambiguous sentences are read faster than their unambiguous counterparts. This so-called "ambiguity advantage" has presented a major challenge to classical theories of human sentence comprehension (parsing) because its most prominent explanation, in the form of the unrestricted race model…
Factors Affecting Sentence Severity for Young Adult Offenders.
ERIC Educational Resources Information Center
Greenwood, Peter W.; And Others
This document analyzes the sentencing of young adult defendants in comparison with older adult and younger juvenile offenders, and disputes prior research which held that young adults received more lenient sentencing, perhaps because of the restrictions on disclosing juvenile delinquency histories. The document presents data from samples of young…
The effect of contextual constraint on parafoveal processing in reading
Schotter, Elizabeth R.; Lee, Michelle; Reiderman, Michael; Rayner, Keith
2015-01-01
Semantic preview benefit in reading is an elusive and controversial effect because empirical studies do not always (but sometimes) find evidence for it. Its presence seems to depend on (at least) the language being read, visual properties of the text (e.g., initial letter capitalization), the type of relationship between preview and target, and as shown here, semantic constraint generated by the prior sentence context. Schotter (2013) reported semantic preview benefit for synonyms, but not semantic associates when the preview/target was embedded in a neutral sentence context. In Experiment 1, we embedded those same previews/targets into constrained sentence contexts and in Experiment 2 we replicated the effects reported by Schotter (2013; in neutral sentence contexts) and Experiment 1 (in constrained contexts) in a within-subjects design. In both experiments, we found an early (i.e., first-pass) apparent preview benefit for semantically associated previews in constrained contexts that went away in late measures (e.g., total time). These data suggest that sentence constraint (at least as manipulated in the current study) does not operate by making a single word form expected, but rather generates expectations about what kinds of words are likely to appear. Furthermore, these data are compatible with the assumption of the E-Z Reader model that early oculomotor decisions reflect “hedged bets” that a word will be identifiable and, when wrong, lead the system to identify the wrong word, triggering regressions. PMID:26257469
[Cognitive aging mechanism of signaling effects on the memory for procedural sentences].
Yamamoto, Hiroki; Shimada, Hideaki
2006-08-01
The aim of this study was to clarify the cognitive aging mechanism of signaling effects on memory for procedural sentences. Participants were 60 younger adults (college students) and 60 older adults. Each age group was divided in two: half were presented with procedural sentences containing signals that highlighted their top-level structure, and the other half with procedural sentences without signals. Both groups were requested to perform a sentence arrangement task and a reconstruction task. Each task was composed of procedural sentences with or without signals. Results indicated that signaling supported changes in strategy utilization during the successive organizational processes and that these changes in strategy utilization improved memory for procedural sentences. Moreover, age-related factors interfered with these signaling effects. This study clarified the cognitive aging mechanism of signaling effects: signaling supports changes in strategy utilization during organizational processes at encoding, and this mediation promotes memory for procedural sentences, although age-related disuse of the strategy restrains older adults' memory for procedural sentences.
Writing Deaf: Textualizing Deaf Literature
ERIC Educational Resources Information Center
Harmon, Kristen
2007-01-01
In this article, the author discusses why it is difficult to transliterate American Sign Language (ASL) and the visual realities of a deaf individual's life into creative texts written in English. Even on the sentence level, she says, written English resists the unsettling presence of transliteration across modalities. A sign cannot be "said." If…
Using Comic Strips in Language Classes
ERIC Educational Resources Information Center
Csabay, Noémi
2006-01-01
The author believes that using comic strips in language-learning classes has three main benefits. First, comic strips motivate younger learners. Second, they provide a context and logically connected sentences to help language learning. Third, their visual information is helpful for comprehension. The author argues that comic strips can be used in…
Attention Therapy Improves Reading Comprehension in Adjudicated Teens in a Residential Facility
ERIC Educational Resources Information Center
Shelley-Tremblay, John; Langhinrichsen-Rohling, Jennifer; Eyer, Joshua
2012-01-01
This study quantified the influence of visual Attention Therapy (AT) on reading skills and Coherent Motion Threshold (CMT) in adjudicated teens with moderate reading disabilities (RD) residing in a residential alternative sentencing program. Forty-two students with below-average reading scores were identified using standardized reading…
Spatial and Linguistic Aspects of Visual Imagery in Sentence Comprehension
ERIC Educational Resources Information Center
Bergen, Benjamin K.; Lindsay, Shane; Matlock, Teenie; Narayanan, Srini
2007-01-01
There is mounting evidence that language comprehension involves the activation of mental imagery of the content of utterances (Barsalou, 1999; Bergen, Chang, & Narayan, 2004; Bergen, Narayan, & Feldman, 2003; Narayan, Bergen, & Weinberg, 2004; Richardson, Spivey, McRae, & Barsalou, 2003; Stanfield & Zwaan, 2001; Zwaan, Stanfield, & Yaxley, 2002).…
Enhancing Speech Intelligibility: Interactions among Context, Modality, Speech Style, and Masker
ERIC Educational Resources Information Center
Van Engen, Kristin J.; Phelps, Jasmine E. B.; Smiljanic, Rajka; Chandrasekaran, Bharath
2014-01-01
Purpose: The authors sought to investigate interactions among intelligibility-enhancing speech cues (i.e., semantic context, clearly produced speech, and visual information) across a range of masking conditions. Method: Sentence recognition in noise was assessed for 29 normal-hearing listeners. Testing included semantically normal and anomalous…
ERIC Educational Resources Information Center
Chang, Xin; Wang, Pei
2016-01-01
To investigate the influence of L2 proficiency and syntactic similarity on English passive sentence processing, the present ERP study asked 40 late Chinese-English bilinguals (27 females and 13 males, mean age = 23.88) with high or intermediate L2 proficiency to read the sentences carefully and to indicate for each sentence whether or not it was…
Sauppe, Sebastian
2016-01-01
Studies on anticipatory processes during sentence comprehension often focus on the prediction of postverbal direct objects. In subject-initial languages (the target of most studies so far), however, the position in the sentence, the syntactic function, and the semantic role of arguments are often conflated. For example, in the sentence "The frog will eat the fly" the syntactic object ("fly") is at the same time also the last word and the patient argument of the verb. It is therefore not apparent which kind of information listeners orient to for predictive processing during sentence comprehension. A visual world eye tracking study on the verb-initial language Tagalog (Austronesian) tested what kind of information listeners use to anticipate upcoming postverbal linguistic input. The grammatical structure of Tagalog makes it possible to test whether listeners' anticipatory gaze behavior is guided by predictions of the linear order of words, by syntactic functions (e.g., subject/object), or by semantic roles (agent/patient). Participants heard sentences of the type "Eat frog fly" or "Eat fly frog" (both meaning "The frog will eat the fly") while looking at displays containing an agent referent ("frog"), a patient referent ("fly") and a distractor. The verb carried morphological marking that allowed the order and syntactic function of agent and patient to be inferred. After having heard the verb, listeners fixated on the agent irrespective of its syntactic function or position in the sentence. While hearing the first-mentioned argument, listeners fixated on the corresponding referent in the display and then initiated saccades to the last-mentioned referent before it was encountered. The results indicate that listeners used verbal semantics to identify referents and their semantic roles early; information about word order or syntactic functions did not influence anticipatory gaze behavior directly after the verb was heard. In this verb-initial language, event semantics takes early precedence during the comprehension of sentences, while arguments are anticipated closer in time to when they are encountered. The current experiment thus helps to better understand anticipation during language processing by employing linguistic structures not available in previously studied subject-initial languages.
Weber-Fox, Christine; Hart, Laura J; Spruill, John E
2006-07-01
This study examined how school-aged children process different grammatical categories. Event-related brain potentials elicited by words in visually presented sentences were analyzed according to seven grammatical categories with naturally varying characteristics of linguistic functions, semantic features, and quantitative attributes of length and frequency. The categories included nouns, adjectives, verbs, pronouns, conjunctions, prepositions, and articles. The findings indicate that by the age of 9-10 years, children exhibit robust neural indicators differentiating grammatical categories; however, it is also evident that development of language processing is not yet adult-like at this age. The current findings are consistent with the hypothesis that for beginning readers a variety of cues and characteristics interact to affect processing of different grammatical categories and indicate the need to take into account linguistic functions, prosodic salience, and grammatical complexity as they relate to the development of language abilities.
An Activation-Based Model of Sentence Processing as Skilled Memory Retrieval
ERIC Educational Resources Information Center
Lewis, Richard L.; Vasishth, Shravan
2005-01-01
We present a detailed process theory of the moment-by-moment working-memory retrievals and associated control structure that subserve sentence comprehension. The theory is derived from the application of independently motivated principles of memory and cognitive skill to the specialized task of sentence parsing. The resulting theory construes…
The Effects of Syntactic Complexity on Processing Sentences in Noise
ERIC Educational Resources Information Center
Carroll, Rebecca; Ruigendijk, Esther
2013-01-01
This paper discusses the influence of stationary (non-fluctuating) noise on processing and understanding of sentences, which vary in their syntactic complexity (with the factors canonicity, embedding, ambiguity). It presents data from two RT-studies with 44 participants testing processing of German sentences in silence and in noise. Results show a…
If Practice Makes Perfect, Why Does Familiarity Breed Contempt?
ERIC Educational Resources Information Center
McCreesh, Bernadine
1999-01-01
Investigated whether college-level second language learners would learn better from an exercise in which they repeated the original sentence they got wrong or when presented with a different, parallel sentence. Results found that some students preferred to redo the same sentence, while others preferred a different one. One main difference was in…
Investigating the Effects of Veridicality on Age Differences in Verbal Working Memory
ERIC Educational Resources Information Center
Shake, Matthew C.; Perschke, Meghan K.
2013-01-01
In the typical loaded verbal working memory (WM) span task (e.g., Daneman & Carpenter, 1980), participants judge the veridicality of a series of sentences while simultaneously storing the sentence final word for later recall. Performance declines as the number of sentences is increased; aging exacerbates this decline. The present study examined…
Condensed Representation of Sentences in Graphic Displays of Text Structures.
ERIC Educational Resources Information Center
Craven, Timothy C.
1990-01-01
Discusses ways in which sentences may be represented in a condensed form in graphic displays of a sentence dependency structure. A prototype of a text structure management system, TEXNET, is described; a quantitative evaluation of automatic abbreviation schemes is presented; full-text compression is discussed; and additional research is suggested.…
When Translation Makes the Difference: Sentence Processing in Reading and Translation
ERIC Educational Resources Information Center
Macizo, Pedro; Bajo, M. Teresa
2004-01-01
In two experiments we compared normal reading and reading for translation of object relative sentences presented word-by-word. In Experiment 1, professional translators were asked either to read and repeat Spanish sentences, or to read and translate them into English. In addition, we manipulated the availability of pragmatic information given in…
ERIC Educational Resources Information Center
Datchuk, Shawn M.; Kubina, Richard M., Jr.
2017-01-01
The present study used a multiple-baseline, single-case experimental design to investigate the effects of a multicomponent intervention on construction of simple sentences and word sequences. The intervention entailed sequential delivery of sentence instruction and frequency building to a performance criterion and paragraph instruction.…
Kawakami, A; Hatta, T; Kogure, T
2001-12-01
Relative engagements of the orthographic and semantic codes in Kanji and Hiragana word recognition were investigated. In Exp. 1, subjects judged whether pairs of Kanji words (prime and target) presented sequentially were physically identical to each other in the word condition. In the sentence condition, subjects decided whether the target word was valid for the prime sentence presented in advance. The results showed that response times to target words orthographically similar to the prime were significantly slower than to semantically related target words in the word condition, and that this was also the case in the sentence condition. In Exp. 2, subjects judged whether the target word written in Hiragana was physically identical to the prime word in the word condition. In the sentence condition, subjects decided if the target word was valid for the previously presented prime sentence. Analysis indicated that response times to orthographically similar words were slower than to semantically related words in the word condition but not in the sentence condition, wherein the response times to the semantically and orthographically similar words were largely the same. Based on these results, the differential contributions of orthographic and semantic codes in the cognitive processing of Japanese Kanji and Hiragana words were discussed.
Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E
2014-01-01
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).
Eberhardt, Silvio P.; Auer Jr., Edward T.; Bernstein, Lynne E.
2014-01-01
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee’s lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566
Liuzza, Marco Tullio; Candidi, Matteo; Aglioti, Salvatore Maria
2011-01-01
Background: Theories of embodied language suggest that the motor system is differentially called into action when processing motor-related versus abstract content words or sentences. It has recently been shown that processing negative polarity action-related sentences modulates neural activity of premotor and motor cortices. Methods and Findings: We sought to determine whether reading negative polarity sentences brought about differential modulation of cortico-spinal motor excitability depending on whether the sentences were hand-action related or abstract. Facilitatory paired-pulse transcranial magnetic stimulation (pp-TMS) was applied to the primary motor representation of the right hand, and the recorded amplitude of induced motor-evoked potentials (MEP) was used to index M1 activity during passive reading of either hand-action-related or abstract content sentences presented in both negative and affirmative polarity. Results showed that cortico-spinal excitability was affected by sentence polarity only in the hand-action-related condition. Indeed, in keeping with previous TMS studies, reading positive polarity, hand-action-related sentences suppressed cortico-spinal reactivity. This effect was absent when reading hand-action-related negative polarity sentences. Moreover, no modulation of cortico-spinal reactivity was associated with either negative or positive polarity abstract sentences. Conclusions: Our results indicate that grammatical cues prompting motor negation reduce the cortico-spinal suppression associated with reading affirmative action sentences, and thus suggest that the motor simulative processes underlying embodiment may involve even syntactic features of language. PMID:21347305
Binding and unbinding the auditory and visual streams in the McGurk effect.
Nahorna, Olha; Berthommier, Frédéric; Schwartz, Jean-Luc
2012-08-01
Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the McGurk effect. It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would involve binding together the appropriate pieces of audio and video information before fusion per se in a second stage. Then it should be possible to design experiments leading to unbinding. It is shown here that if a given McGurk stimulus is preceded by an incoherent audiovisual context, the amount of McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed on video sentences or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage "binding and fusion" model for audiovisual speech perception.
Metusalem, Ross; Kutas, Marta; Urbach, Thomas P.; Elman, Jeffrey L.
2016-01-01
During incremental language comprehension, the brain activates knowledge of described events, including knowledge elements that constitute semantic anomalies in their linguistic context. The present study investigates hemispheric asymmetries in this process, with the aim of advancing our understanding of the neural basis and functional properties of event knowledge activation during incremental comprehension. In a visual half-field event-related brain potential (ERP) experiment, participants read brief discourses in which the third sentence contained a word that was either highly expected, semantically anomalous but related to the described event, or semantically anomalous but unrelated to the described event. For both visual fields of target word presentation, semantically anomalous words elicited N400 ERP components of greater amplitude than did expected words. Crucially, event-related anomalous words elicited a reduced N400 relative to event-unrelated anomalous words only with left visual field/right hemisphere presentation. This result suggests that right hemisphere processes are critical to the activation of event knowledge elements that violate the linguistic context, and in doing so informs existing theories of hemispheric asymmetries in semantic processing during language comprehension. Additionally, this finding coincides with past research suggesting a crucial role for the right hemisphere in elaborative inference generation, raises interesting questions regarding hemispheric coordination in generating event-specific linguistic expectancies, and more generally highlights the possibility of functional dissociation between event knowledge activation for the generation of elaborative inferences and for linguistic expectancies. PMID:26878980
Metusalem, Ross; Kutas, Marta; Urbach, Thomas P; Elman, Jeffrey L
2016-04-01
During incremental language comprehension, the brain activates knowledge of described events, including knowledge elements that constitute semantic anomalies in their linguistic context. The present study investigates hemispheric asymmetries in this process, with the aim of advancing our understanding of the neural basis and functional properties of event knowledge activation during incremental comprehension. In a visual half-field event-related brain potential (ERP) experiment, participants read brief discourses in which the third sentence contained a word that was either highly expected, semantically anomalous but related to the described event (Event-Related), or semantically anomalous but unrelated to the described event (Event-Unrelated). For both visual fields of target word presentation, semantically anomalous words elicited N400 ERP components of greater amplitude than did expected words. Crucially, Event-Related anomalous words elicited a reduced N400 relative to Event-Unrelated anomalous words only with left visual field/right hemisphere presentation. This result suggests that right hemisphere processes are critical to the activation of event knowledge elements that violate the linguistic context, and in doing so informs existing theories of hemispheric asymmetries in semantic processing during language comprehension. Additionally, this finding coincides with past research suggesting a crucial role for the right hemisphere in elaborative inference generation, raises interesting questions regarding hemispheric coordination in generating event-specific linguistic expectancies, and more generally highlights the possibility of functional dissociation of event knowledge activation for the generation of elaborative inferences and for linguistic expectancies.
Schreitmüller, Stefan; Frenken, Miriam; Bentz, Lüder; Ortmann, Magdalene; Walger, Martin; Meister, Hartmut
Watching a talker's mouth is beneficial for speech reception (SR) in many communication settings, especially in noise and when hearing is impaired. Measures of audiovisual (AV) SR can be valuable in the framework of diagnosing or treating hearing disorders. This study addresses the lack of standardized methods in many languages for assessing lipreading, AV gain, and integration. A new method is validated that supplements a German speech audiometric test with visualizations of the synthetic articulation of an avatar, which was used because it can lip-sync auditory speech in a highly standardized way. Three hypotheses were formed according to the literature on AV SR that used live or filmed talkers. It was tested whether the respective effects could be reproduced with synthetic articulation: (1) cochlear implant (CI) users have a higher visual-only SR than normal-hearing (NH) individuals, and younger individuals obtain higher lipreading scores than older persons. (2) Both CI and NH gain from AV over unimodal (auditory or visual) presentation of sentences in noise. (3) Both CI and NH listeners efficiently integrate complementary auditory and visual speech features. In a controlled, cross-sectional study with 14 experienced CI users (mean age 47.4) and 14 NH individuals (mean age 46.3, similar broad age distribution), lipreading, AV gain, and integration on a German matrix sentence test were assessed. Visual speech stimuli were synthesized by the articulation of the Talking Head system "MASSY" (Modular Audiovisual Speech Synthesizer), which displayed standardized articulation with respect to the visibility of German phones. In line with the hypotheses and previous literature, CI users had a higher mean visual-only SR than NH individuals (CI, 38%; NH, 12%; p < 0.001). Age was correlated with lipreading such that, within each group, younger individuals obtained higher visual-only scores than older persons (rCI = -0.54; p = 0.046; rNH = -0.78; p < 0.001). Both CI and NH benefited from AV over unimodal speech, as indexed by calculations of the measures visual enhancement and auditory enhancement (each p < 0.001). Both groups efficiently integrated complementary auditory and visual speech features, as indexed by calculations of the measure integration enhancement (each p < 0.005). Given the good agreement between results from the literature and the outcome of supplementing an existing validated auditory test with synthetic visual cues, the introduced method can be considered an interesting candidate for clinical and scientific applications to assess measures important for AV SR in a standardized manner. This could be beneficial for optimizing the diagnosis and treatment of individual listening and communication disorders, such as cochlear implantation.
ERIC Educational Resources Information Center
Mack, Jennifer E.; Thompson, Cynthia K.
2017-01-01
Purpose: The present study tested whether (and how) language treatment changed online sentence processing in individuals with aphasia. Method: Participants with aphasia (n = 10) received a 12-week program of Treatment of Underlying Forms (Thompson & Shapiro, 2005) focused on production and comprehension of passive sentences. Before and after…
ERIC Educational Resources Information Center
Vanhoutte, Sarah; De Letter, Miet; Corthals, Paul; Van Borsel, John; Santens, Patrick
2012-01-01
The present study examined language production skills in Parkinson's disease (PD) patients. A unique cued sentence generation task was created in order to reduce demands on memory and attention. Differences in sentence production abilities according to disease severity and cognitive impairments were assessed. Language samples were obtained from 20…
ERIC Educational Resources Information Center
Goldman, Susan R.
The comprehension of the Minimum Distance Principle was examined in three experiments, using the "tell/promise" sentence construction. Experiment one compared the listening and reading comprehension of singly presented sentences, e.g. "John tells Bill to bake the cake" and "John promises Bill to bake the cake." The…
Some Effects of Television Screen Size and Viewer Distance on Recognition of Short Sentences.
ERIC Educational Resources Information Center
Lewin, Earl P.
A study investigated changes in recognition time for short sentences presented on television screens of varying sizes with viewers at varying distances. In a posttest only control group design, subjects in several different groups viewed a series of similar sentences under conditions where screen size and distance from the screen were varied. The…
Vision improvement in pilots with presbyopia following perceptual learning.
Sterkin, Anna; Levy, Yuval; Pokroy, Russell; Lev, Maria; Levian, Liora; Doron, Ravid; Yehezkel, Oren; Fried, Moshe; Frenkel-Nir, Yael; Gordon, Barak; Polat, Uri
2017-11-24
Israeli Air Force (IAF) pilots continue flying combat missions after the symptoms of natural near-vision deterioration, termed presbyopia, begin to be noticeable. Because modern pilots rely on the displays of the aircraft control and performance instruments, near visual acuity (VA) is essential in the cockpit. We aimed to apply a method previously shown to improve visual performance of presbyopes, and to test whether presbyopic IAF pilots can overcome the limitation imposed by presbyopia. Participants were selected by the IAF aeromedical unit as having at least initial presbyopia and trained using a structured personalized perceptual learning method (GlassesOff application), based on detecting briefly presented low-contrast Gabor stimuli under conditions of spatial and temporal constraints, from a distance of 40 cm. Our results show that despite their initial visual advantage over age-matched peers, training resulted in robust improvements in various basic visual functions, including static and temporal VA, stereoacuity, spatial crowding, contrast sensitivity and contrast discrimination. Moreover, improvements generalized to higher-level tasks, such as sentence reading and aerial photography interpretation (specifically designed to reflect IAF pilots' expertise in analyzing noisy low-contrast input). In concert with earlier suggestions, gains in visual processing speed may account, at least in part, for the observed training-induced improvements.
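For reference, the training stimuli described here are low-contrast Gabor patches, i.e., a sinusoidal grating under a Gaussian envelope. The following numpy sketch generates such a patch; all parameter values are illustrative and not those of the GlassesOff protocol.

```python
import numpy as np

def gabor_patch(size=128, wavelength=16.0, sigma=20.0,
                orientation_deg=0.0, contrast=0.1):
    """Low-contrast Gabor: sinusoidal carrier times a Gaussian envelope.

    Returns luminance values in [0, 1] around a 0.5 mean-gray background.
    Parameter values are illustrative, not the trained protocol's values.
    """
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    theta = np.deg2rad(orientation_deg)
    # Coordinate along the carrier's direction of modulation.
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    carrier = np.cos(2 * np.pi * x_theta / wavelength)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return 0.5 + 0.5 * contrast * carrier * envelope

patch = gabor_patch()
print(patch.shape, patch.min().round(3), patch.max().round(3))
```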
Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension
Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.
2016-01-01
The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility to study retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects, when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974
(Pea)nuts and Bolts of Visual Narrative: Structure and Meaning in Sequential Image Comprehension
ERIC Educational Resources Information Center
Cohn, Neil; Paczynski, Martin; Jackendoff, Ray; Holcomb, Phillip J.; Kuperberg, Gina R.
2012-01-01
Just as syntax differentiates coherent sentences from scrambled word strings, the comprehension of sequential images must also use a cognitive system to distinguish coherent narrative sequences from random strings of images. We conducted experiments analogous to two classic studies of language processing to examine the contributions of narrative…
Right Hemisphere Metaphor Processing? Characterizing the Lateralization of Semantic Processes
ERIC Educational Resources Information Center
Schmidt, Gwen L.; DeBuse, Casey J.; Seger, Carol A.
2007-01-01
Previous laterality studies have implicated the right hemisphere in the processing of metaphors, however it is not clear if this result is due to metaphoricity per se or another aspect of semantic processing. Three divided visual field experiments varied metaphorical and literal sentence familiarity. We found a right hemisphere advantage for…
Strategy Access Rods: A Hands-On Approach.
ERIC Educational Resources Information Center
Worthing, Bernadette; Laster, Barbara
2002-01-01
Describes Strategy Access Rods (SARs), balsa-wood, prism-like or rectangular rods on which a one-sentence reading strategy phrase in the first person is printed. Notes SARs serve as a visual, auditory, kinesthetic, and tactile reminder of the strategies available to developing readers. Discusses use of SARs for word recognition and comprehension.…
ERIC Educational Resources Information Center
Tucha, Oliver; Lange, Klaus W.
2005-01-01
Two experiments were performed regarding the effect of conscious control on handwriting fluency in healthy adults and ADHD children. First, 26 healthy students were asked to write a sentence under different conditions. The results indicate that automated handwriting movements are independent from visual feedback. Second, the writing performance of…
From Seeing to Saying: Perceiving, Planning, Producing
ERIC Educational Resources Information Center
Kuchinsky, Stefanie Ellen
2009-01-01
Given the amount of visual information in a scene, how do speakers determine what to talk about first? One hypothesis is that speakers start talking about what has attentional priority, while another is that speakers first extract the scene gist, using the obtained relational information to generate a rudimentary sentence plan before retrieving…
ERIC Educational Resources Information Center
Stipancic, Kaila L.; Tjaden, Kris; Wilding, Gregory
2016-01-01
Purpose: This study obtained judgments of sentence intelligibility using orthographic transcription for comparison with previously reported intelligibility judgments obtained using a visual analog scale (VAS) for individuals with Parkinson's disease and multiple sclerosis and healthy controls (K. Tjaden, J. E. Sussman, & G. E. Wilding, 2014).…
Preserved Visual Language Identification Despite Severe Alexia
ERIC Educational Resources Information Center
Di Pietro, Marie; Ptak, Radek; Schnider, Armin
2012-01-01
Patients with letter-by-letter alexia may have residual access to lexical or semantic representations of words despite severely impaired overt word recognition (reading). Here, we report a multilingual patient with severe letter-by-letter alexia who rapidly identified the language of written words and sentences in French and English while he had…
Fine-Tuned: Phonology and Semantics Affect First- to Second-Language Zooming In
ERIC Educational Resources Information Center
Elston-Guttler, Kerrie E.; Gunter, Thomas C.
2009-01-01
We investigate how L1 phonology and semantics affect the processing of interlingual homographs by manipulating language context before, and auditory input during, a visual experiment in the L2. Three experiments contained German-English homograph primes ("gift" = German "poison") in English sentences and were performed by German (L1) learners of…
"Pushing the Button While Pushing the Argument": Motor Priming of Abstract Action Language
ERIC Educational Resources Information Center
Schaller, Franziska; Weiss, Sabine; Müller, Horst M.
2017-01-01
In a behavioral study we analyzed the influence of visual action primes on abstract action sentence processing. We thereby aimed at investigating mental motor involvement during processes of meaning constitution of action verbs in abstract contexts. In the first experiment, participants executed either congruous or incongruous movements parallel…
Deaf Readers’ Response to Syntactic Complexity: Evidence from Self-Paced Reading
Traxler, Matthew J.; Corina, David P.; Morford, Jill P.; Hafer, Sarah; Hoversten, Liv J.
2013-01-01
This study was designed to determine the feasibility of using self-paced reading methods to study deaf readers and to assess how deaf readers respond to two syntactic manipulations. Three groups of participants read the test sentences: deaf readers, hearing monolingual English readers, and hearing bilingual readers whose second language was English. In Experiment 1, participants read sentences containing subject relative or object relative clauses. The test sentences contained semantic information that influences on-line processing outcomes (Traxler et al., 2002; 2005). All of the participant groups had greater difficulty processing sentences containing object relative clauses. This difficulty was reduced when helpful semantic cues were present. In Experiment 2, participants read active voice and passive voice sentences. The sentences were processed similarly by all three groups. Comprehension accuracy was higher in hearing readers than in deaf readers. Within deaf readers, native signers read the sentences faster and comprehended them to a higher degree than did non-native signers. These results indicate that self-paced reading is a useful method for studying sentence interpretation among deaf readers. PMID:23868696
Tamaoka, Katsuo; Asano, Michiko; Miyaoka, Yayoi; Yokosawa, Kazuhiko
2014-04-01
Using the eye-tracking method, the present study depicted pre- and post-head processing for simple scrambled sentences of head-final languages. Three versions of simple Japanese active sentences with ditransitive verbs were used: namely, (1) SO₁O₂V canonical, (2) SO₂O₁V single-scrambled, and (3) O₁O₂SV double-scrambled order. First pass reading times indicated that the third noun phrase just before the verb in both single- and double-scrambled sentences required longer reading times compared to canonical sentences. Re-reading times (the sum of all fixations minus the first pass reading) showed that all noun phrases including the crucial phrase before the verb in double-scrambled sentences required longer re-reading times than those required for single-scrambled sentences; single-scrambled sentences had no difference from canonical ones. Therefore, a single filler-gap dependency can be resolved in pre-head anticipatory processing whereas two filler-gap dependencies require much greater cognitive loading than a single case. These two dependencies can be resolved in post-head processing using verb agreement information.
Attention blinks for selection, not perception or memory: reading sentences and reporting targets.
Potter, Mary C; Wyble, Brad; Olejarczyk, Jennifer
2011-12-01
In whole report, a sentence presented sequentially at the rate of about 10 words/s can be recalled accurately, whereas if the task is to report only two target words (e.g., red words), the second target suffers an attentional blink if it appears shortly after the first target. If these two tasks are carried out simultaneously, is there an attentional blink, and does it affect both tasks? Here, sentence report was combined with report of two target words (Experiments 1 and 2) or two inserted target digits, Arabic numerals or word digits (Experiments 3 and 4). When participants reported only the targets, an attentional blink was always observed. When they reported both the sentence and the targets, sentence report was quite accurate, but there was an attentional blink in picking out the targets when they were part of the sentence. When the targets were extra digits inserted in the sentence, there was no blink when viewers also reported the sentence. These results challenge some theories of the attentional blink: blinks result from online selection, not perception or memory.
Hohlfeld, Annette; Martín-Loeches, Manuel; Sommer, Werner
2015-01-01
The present study contributes to the discussion on the automaticity of semantic processing. Whereas most previous research investigated semantic processing at the word level, the present study addressed semantic processing during sentence reading. A dual-task paradigm was combined with the recording of event-related brain potentials. Previous research on word-level processing reported different patterns of interference with the N400 by additional tasks: attenuation of amplitude or delay of latency. In the present study, we presented Spanish sentences that were semantically correct or contained a semantic violation in a critical word. At different intervals preceding the critical word, a tone was presented that required a high-priority choice response. At short intervals/high temporal overlap between the tasks, the mean amplitude of the N400 was reduced relative to long intervals/low temporal overlap, but there were no shifts of peak latency. We propose that processing at the sentence level exerts a protective effect against the additional task. This is in accord with the attentional sensitization model (Kiefer & Martens, 2010), which suggests that semantic processing is an automatic process that can be enhanced by the currently activated task set. The present experimental sentences also induced a P600, which is taken as an index of integrative processing. Additional task effects are comparable to those in the N400 time window and are briefly discussed. PMID:26203312
The visual attention span deficit in Chinese children with reading fluency difficulty.
Zhao, Jing; Liu, Menglian; Liu, Hanlong; Huang, Chen
2018-02-01
With reading development, some children fail to learn to read fluently. However, reading fluency difficulty (RFD) has not been fully investigated. The present study explored the underlying mechanism of RFD from the aspect of visual attention span. Fourteen Chinese children with RFD and fourteen age-matched normal readers participated. The visual 1-back task was adopted to examine visual attention span. Reaction time and accuracy were recorded, and relevant d-prime (d') scores were computed. Results showed that children with RFD exhibited lower accuracy and lower d' values than the controls did in the visual 1-back task, revealing a visual attention span deficit. Further analyses on d' values revealed that the attention distribution seemed to exhibit an inverted U-shaped pattern without lateralization for normal readers, but a W-shaped pattern with a rightward bias for children with RFD, which was discussed based on between-group variation in reading strategies. Results of the correlation analyses showed that visual attention span was associated with reading fluency at the sentence level for normal readers, but was related to reading fluency at the single-character level for children with RFD. The different patterns in correlations between groups revealed that visual attention span might be affected by the variation in reading strategies. The current findings extend previous data from alphabetic languages to Chinese, a logographic language with a particularly deep orthography, and have implications for reading-dysfluency remediation.
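The d' scores mentioned here follow the standard equal-variance signal-detection formula, d' = z(hit rate) - z(false-alarm rate). Below is a minimal sketch of that computation, assuming a log-linear correction for extreme rates (one common convention; the paper's exact correction is not stated).

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), equal-variance SDT.

    Uses the log-linear correction (add 0.5 to each cell) so that
    rates of exactly 0 or 1 do not produce infinite z-scores. The
    paper does not state which correction was used; this choice is
    illustrative.
    """
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

# Example: 40 targets with 36 hits; 60 non-targets with 6 false alarms.
print(round(d_prime(36, 4, 6, 54), 2))
```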
Kovalenko, Lyudmyla Y; Chaumon, Maximilien; Busch, Niko A
2012-07-01
Semantic processing of verbal and visual stimuli has been investigated in semantic violation or semantic priming paradigms in which a stimulus is either related or unrelated to a previously established semantic context. A hallmark of semantic priming is the N400 event-related potential (ERP), a deflection of the ERP that is more negative for semantically unrelated target stimuli. The majority of studies investigating the N400 and semantic integration have used verbal material (words or sentences), and standardized stimulus sets with norms for semantic relatedness have been published for verbal but not for visual material. However, semantic processing of visual objects (as opposed to words) is an important issue in research on visual cognition. In this study, we present a set of 800 pairs of semantically related and unrelated visual objects. The images were rated for semantic relatedness by a sample of 132 participants. Furthermore, we analyzed low-level image properties and matched the two semantic categories according to these features. An ERP study confirmed the suitability of this image set for evoking a robust N400 effect of semantic integration. Additionally, using a general linear modeling approach of single-trial data, we also demonstrate that low-level visual image properties and semantic relatedness are in fact only minimally overlapping. The image set is available for download from the authors' website. We expect that the image set will facilitate studies investigating mechanisms of semantic and contextual processing of visual stimuli.
Audiovisual integration in children listening to spectrally degraded speech.
Maidment, David W; Kang, Hi Jee; Stewart, Hannah J; Amitay, Sygal
2015-02-01
The study explored whether visual information improves speech identification in typically developing children with normal hearing when the auditory signal is spectrally degraded. Children (n=69) and adults (n=15) were presented with noise-vocoded sentences from the Children's Co-ordinate Response Measure (Rosen, 2011) in auditory-only or audiovisual conditions. The number of bands was adaptively varied to modulate the degradation of the auditory signal, with the number of bands required for approximately 79% correct identification calculated as the threshold. The youngest children (4- to 5-year-olds) did not benefit from accompanying visual information, in comparison to 6- to 11-year-old children and adults. Audiovisual gain also increased with age in the child sample. The current data suggest that children younger than 6 years of age do not fully utilize visual speech cues to enhance speech perception when the auditory signal is degraded. This evidence not only has implications for understanding the development of speech perception skills in children with normal hearing but may also inform the development of new treatment and intervention strategies that aim to remediate speech perception difficulties in pediatric cochlear implant users.
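The adaptive procedure is not spelled out in the abstract, but a threshold of roughly 79% correct is what a classic 3-down/1-up staircase converges to (about 79.4%). The sketch below is a generic illustration of that rule under assumed details (the one-band step size, reversal counting, and the respond callback are all assumptions, not the authors' procedure):

```python
def three_down_one_up(start_bands, respond, n_reversals=8):
    """Generic 3-down/1-up staircase over the number of vocoder bands.
    Three consecutive correct responses -> one band fewer (harder);
    any error -> one band more (easier). respond(bands) runs one trial
    and returns True if the sentence was identified correctly."""
    bands, run, last_step, reversals = start_bands, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(bands):
            run += 1
            step = -1 if run == 3 else 0
        else:
            run, step = 0, 1
        if step:
            run = 0
            if last_step and step != last_step:
                reversals.append(bands)  # direction change = reversal
            last_step = step
            bands = max(1, bands + step)
    # Threshold: mean band count over the final reversals.
    tail = reversals[-6:]
    return sum(tail) / len(tail)
```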
Who was the agent? The neural correlates of reanalysis processes during sentence comprehension.
Hirotani, Masako; Makuuchi, Michiru; Rüschemeyer, Shirley-Ann; Friederici, Angela D
2011-11-01
Sentence comprehension is a complex process. Besides identifying the meaning of each word and processing the syntactic structure of a sentence, it requires the computation of thematic information, that is, information about who did what to whom. The present fMRI study investigated the neural basis for thematic reanalysis (reanalysis of the thematic roles initially assigned to noun phrases in a sentence) and its interplay with syntactic reanalysis (reanalysis of the underlying syntactic structure originally constructed for a sentence). Thematic reanalysis recruited a network consisting of Broca's area, that is, the left pars triangularis (LPT), and the left posterior superior temporal gyrus, whereas only LPT showed greater sensitivity to syntactic reanalysis. These data provide direct evidence for a functional neuroanatomical basis for two linguistically motivated reanalysis processes during sentence comprehension. Copyright © 2010 Wiley-Liss, Inc.
Binocular coordination in response to stereoscopic stimuli
NASA Astrophysics Data System (ADS)
Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.
2009-02-01
Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
ERIC Educational Resources Information Center
Cimpian, Andrei; Meltzer, Trent J.; Markman, Ellen M.
2011-01-01
Generic sentences (e.g., "Birds lay eggs") convey generalizations about entire categories and may thus be an important source of knowledge for children. However, these sentences cannot be identified by a simple rule, requiring instead the integration of multiple cues. The present studies focused on 3- to 5-year-olds' (N = 91) use of…
Two Studies of the Syntactic Knowledge of Young Children. A Preliminary Report.
ERIC Educational Resources Information Center
Smith, Carlota S.
This paper deals with two experiments whose purposes are to investigate the linguistic competence of young children and their receptivity to adult speech. In the free response experiment, imperative sentences were presented to 1 1/2- to 2 1/2-year-olds. The sentences were minimal (a single noun), telegraphic, or full adult sentences. The youngest…
ERIC Educational Resources Information Center
Gamble, Charles W.; Hamblin, Arthur G.
1986-01-01
Discusses the use of a sentence completion instrument predicated on Lazarus' multimodal system. The instrument, entitled The Multimodal Sentence Completion Form for Children (MSCF-C), is designed to systematically assess client needs and assist in identifying intervention strategies. Presents a case study of a 12-year-old, sixth-grade student.…
The Role of Sentence Recall in Reading and Language Skills of Children with Learning Difficulties
ERIC Educational Resources Information Center
Alloway, Tracy Packiam; Gathercole, Susan Elizabeth
2005-01-01
The present study explores the relationship between sentence recall and reading and language skills in a group of 7--11-year-old children with learning difficulties. While recent studies have found that performance on sentence recall tasks plays a role in learning, it is possible that this contribution is a reflection of shared resources with…
ERIC Educational Resources Information Center
Tian, Shuang; Murao, Remi
2016-01-01
The present study examined the use of prosody in semantic and syntactic disambiguation by means of comparison between Japanese and Chinese speakers' production of English sentences. In Chinese and Japanese, lexical prosody is more prominent than sentence prosody, and the sentential meaning contrast is usually realized through particles or a change…
Sentence Imitation as a Marker of SLI in Czech: Disproportionate Impairment of Verbs and Clitics
ERIC Educational Resources Information Center
Smolík, Filip; Vávru, Petra
2014-01-01
Purpose: The authors examined sentence imitation as a potential clinical marker of specific language impairment (SLI) in Czech and its use to identify grammatical markers of SLI. Method: Children with SLI and the age- and language-matched control groups (total N = 57) were presented with a sentence imitation task, a receptive vocabulary task, and…
The effects of four variables on the intelligibility of synthesized sentences
NASA Astrophysics Data System (ADS)
Conroy, Carol; Raphael, Lawrence J.; Bell-Berti, Fredericka
2003-10-01
The experiments reported here examined the effects of four variables on the intelligibility of synthetic speech: (1) listener age, (2) listener experience, (3) speech rate, and (4) the presence versus absence of interword pauses. The stimuli, eighty IEEE-Harvard Sentences, were generated by a DynaVox augmentative/alternative communication device equipped with a DECtalk synthesizer. The sentences were presented to four groups of 12 listeners each: children (9-11 years), teens (14-16 years), young adults (20-25 years), and adults (38-45 years). In the first experiment the sentences were heard at four rates: 105, 135, 165, and 195 wpm; in the second experiment half of the sentences (presented at two rates: 135 and 165 wpm) contained 250-ms interword pauses. Conditions in both experiments were counterbalanced and no sentence was presented twice. Results indicated a consistent decrease in error rates with increased exposure to the synthesized speech for all age groups. Error rates also varied inversely with listener age. Effects of rate variation were inconsistent across listener groups and between experiments. The presence versus absence of pauses affected listener groups differently: the youngest listeners had higher error rates, and the older listeners lower error rates, when interword pauses were included in the stimuli. [Work supported by St. John's University and New York City Board of Education, Technology Solutions, District 75.]
Enhancing Biomedical Text Summarization Using Semantic Relation Extraction
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
2011-01-01
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization. PMID:21887336
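The relation-level retrieval stage can be illustrated with a toy version in which the SemRep output for each sentence is represented as (subject, predicate, object) triples. The data format and scoring rule here are illustrative assumptions, not the authors' implementation:

```python
def summarize(sentences, relations_per_sentence, query_concept, top_k=3):
    """Rank sentences by how many of their extracted relations involve the
    query concept (e.g., 'H1N1'), then return the top-k as the summary."""
    scored = []
    for sent, rels in zip(sentences, relations_per_sentence):
        score = sum(query_concept in (subj, obj) for subj, _, obj in rels)
        scored.append((score, sent))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [sent for score, sent in scored[:top_k] if score > 0]

# Toy usage with stand-in relations (SemRep would supply these):
sents = ["H1N1 causes fever.", "Aspirin treats headache."]
rels = [[("H1N1", "CAUSES", "fever")], [("Aspirin", "TREATS", "headache")]]
print(summarize(sents, rels, "H1N1", top_k=1))  # ['H1N1 causes fever.']
```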
Kleinman, Daniel; Runnqvist, Elin; Ferreira, Victor S.
2015-01-01
Comprehenders predict upcoming speech and text on the basis of linguistic input. How many predictions do comprehenders make for an upcoming word? If a listener strongly expects to hear the word “sock”, is the word “shirt” partially expected as well, is it actively inhibited, or is it ignored? The present research addressed these questions by measuring the “downstream” effects of prediction on the processing of subsequently presented stimuli using the cumulative semantic interference paradigm. In three experiments, subjects named pictures (sock) that were presented either in isolation or after strongly constraining sentence frames (“After doing his laundry, Mark always seemed to be missing one…”). Naming sock slowed the subsequent naming of the picture shirt – the standard cumulative semantic interference effect. However, although picture naming was much faster after sentence frames, the interference effect was not modulated by the context (bare vs. sentence) in which either picture was presented. According to the only model of cumulative semantic interference that can account for such a pattern of data, this indicates that comprehenders pre-activated and maintained the pre-activation of best sentence completions (sock) but did not maintain the pre-activation of less likely completions (shirt). Thus, comprehenders predicted only the most probable completion for each sentence. PMID:25917550
Exploring Use of the Coordinate Response Measure in a Multitalker Babble Paradigm
Kidd, Gary R.; Fogerty, Daniel
2017-01-01
Purpose Three experiments examined the use of competing coordinate response measure (CRM) sentences as a multitalker babble. Method In Experiment I, young adults with normal hearing listened to a CRM target sentence in the presence of 2, 4, or 6 competing CRM sentences with synchronous or asynchronous onsets. In Experiment II, the condition with 6 competing sentences was explored further. Three stimulus conditions (6 talkers saying same sentence, 1 talker producing 6 different sentences, and 6 talkers each saying a different sentence) were evaluated with different methods of presentation. Experiment III examined the performance of older adults with hearing impairment in a subset of conditions from Experiment II. Results In Experiment I, performance declined with increasing numbers of talkers and improved with asynchronous sentence onsets. Experiment II identified conditions under which an increase in the number of talkers led to better performance. In Experiment III, the relative effects of the number of talkers, messages, and onset asynchrony were the same for young and older listeners. Conclusions Multitalker babble composed of CRM sentences has masking properties similar to other types of multitalker babble. However, when the number of different talkers and messages are varied independently, performance is best with more talkers and fewer messages. PMID:28249093
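A sketch of how such a babble mixture might be constructed with synchronous or asynchronous onsets (the random-offset scheme and all parameters are assumptions for illustration; the paper's exact stimulus construction is not given here):

```python
import numpy as np

def mix_crm_babble(target, maskers, fs, asynchronous=True,
                   max_offset_s=0.5, seed=None):
    """Sum a target sentence with competing sentences. With asynchronous
    onsets, each masker starts at a random delay; with synchronous onsets,
    all sentences start together. Inputs are 1-D sample arrays."""
    rng = np.random.default_rng(seed)
    max_off = int(max_offset_s * fs)
    length = max(len(m) for m in maskers + [target]) + max_off
    mix = np.zeros(length)
    mix[:len(target)] += target
    for m in maskers:
        off = int(rng.integers(0, max_off + 1)) if asynchronous else 0
        mix[off:off + len(m)] += m
    return mix
```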
Fogerty, Daniel
2014-01-01
The present study investigated the importance of overall segment amplitude and intrinsic segment amplitude modulation of consonants and vowels to sentence intelligibility. Sentences were processed according to three conditions that replaced consonant or vowel segments with noise matched to the long-term average speech spectrum. Segments were replaced with (1) low-level noise that distorted the overall sentence envelope, (2) segment-level noise that restored the overall syllabic amplitude modulation of the sentence, and (3) segment-modulated noise that further restored faster temporal envelope modulations during the vowel. Results from the first experiment demonstrated an incremental benefit with increasing resolution of the vowel temporal envelope. However, amplitude modulations of replaced consonant segments had a comparatively minimal effect on overall sentence intelligibility scores. A second experiment selectively noise-masked preserved vowel segments in order to equate overall performance of consonant-replaced sentences to that of the vowel-replaced sentences. Results demonstrated no significant effect of restoring consonant modulations during the interrupting noise when existing vowel cues were degraded. A third experiment demonstrated greater perceived sentence continuity with the preservation or addition of vowel envelope modulations. Overall, results support previous investigations demonstrating the importance of vowel envelope modulations to the intelligibility of interrupted sentences. PMID:24606291
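A simplified sketch of the three replacement conditions (spectral shaping to the long-term average speech spectrum is omitted, and the segment boundaries, window length, and low-level scaling factor are assumptions):

```python
import numpy as np

def noise_replace(signal, segments, fs, condition=3, win_ms=10):
    """Replace each (start, end) sample span with noise.
    condition 1: low-level flat noise (overall envelope distorted);
    condition 2: flat noise at the segment's RMS (syllabic envelope kept);
    condition 3: noise modulated by the segment's smoothed envelope."""
    out = signal.astype(float).copy()
    win = np.ones(max(1, int(fs * win_ms / 1000)))
    win /= win.size
    for start, end in segments:
        seg = out[start:end]
        noise = np.random.randn(seg.size)
        if condition == 1:
            noise *= 0.1 * np.sqrt(np.mean(seg ** 2))   # low level
        elif condition == 2:
            noise *= np.sqrt(np.mean(seg ** 2))          # RMS-matched, flat
        else:
            noise *= np.convolve(np.abs(seg), win, mode="same")  # modulated
        out[start:end] = noise
    return out
```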
Rogalsky, Corianne
2009-01-01
Numerous studies have identified an anterior temporal lobe (ATL) region that responds preferentially to sentence-level stimuli. It is unclear, however, whether this activity reflects a response to syntactic computations or some form of semantic integration. This distinction is difficult to investigate with the stimulus manipulations and anomaly detection paradigms traditionally implemented. The present functional magnetic resonance imaging study addresses this question via a selective attention paradigm. Subjects monitored for occasional semantic anomalies or occasional syntactic errors, thus directing their attention to semantic integration, or syntactic properties of the sentences. The hemodynamic response in the sentence-selective ATL region (defined with a localizer scan) was examined during anomaly/error-free sentences only, to avoid confounds due to error detection. The majority of the sentence-specific region of interest was equally modulated by attention to syntactic or compositional semantic features, whereas a smaller subregion was only modulated by the semantic task. We suggest that the sentence-specific ATL region is sensitive to both syntactic and integrative semantic functions during sentence processing, with a smaller portion of this area preferentially involved in the latter. This study also suggests that selective attention paradigms may be effective tools to investigate the functional diversity of networks involved in sentence processing. PMID:18669589
Treatment of sentence comprehension and production in aphasia: is there cross-modal generalisation?
Adelt, Anne; Hanne, Sandra; Stadie, Nicole
2016-09-09
Exploring generalisation following treatment of language deficits in aphasia can provide insights into the functional relation of the cognitive processing systems involved. In the present study, we first review treatment outcomes of interventions targeting sentence processing deficits and, second, report a treatment study examining the occurrence of practice effects and generalisation in sentence comprehension and production. In order to explore the potential linkage between the processing systems involved in comprehending and producing sentences, we investigated whether improvements generalise within modalities (i.e., uni-modal generalisation in comprehension or in production) and/or across modalities (i.e., cross-modal generalisation from comprehension to production or vice versa). Two individuals with aphasia displaying co-occurring deficits in sentence comprehension and production were trained on complex, non-canonical sentences in both modalities. Two evidence-based treatment protocols were applied in a crossover intervention study, with the sequence of treatment phases randomly allocated. Both participants benefited significantly from treatment, leading to uni-modal generalisation in both comprehension and production. However, cross-modal generalisation did not occur. The magnitude of uni-modal generalisation in sentence production was related to participants' sentence comprehension performance prior to treatment. These findings support the assumption of modality-specific sub-systems for sentence comprehension and production, linked uni-directionally from comprehension to production.
The Influence of Topic Status on Written and Spoken Sentence Production
Cowles, H. Wind; Ferreira, Victor S.
2012-01-01
Four experiments investigate the influence of topic status and givenness on how speakers and writers structure sentences. The results of these experiments show that when a referent is previously given, it is more likely to be produced early in both sentences and word lists, confirming prior work showing that givenness increases the accessibility of given referents. When a referent is previously given and assigned topic status, it is even more likely to be produced early in a sentence, but not in a word list. Thus, there appears to be an early mention advantage for topics that is present in both written and spoken modalities, but is specific to sentence production. These results suggest that information-structure constructs like topic exert an influence that is not based only on increased accessibility, but also reflects mapping to syntactic structure during sentence production. PMID:22408281
Changes across age groups in self-choice elaboration effects on incidental memory.
Toyota, Hiroshi; Konishi, Tomoko
2004-08-01
The present study investigated age differences in the effects of self-choice elaboration and experimenter-provided elaboration on incidental memory. Adults, sixth graders, and second graders chose which of two sentence frames a target word fit better in the self-choice elaboration condition, judged whether each target made sense in its sentence frame in the experimenter-provided elaboration condition, and then completed free recall tests. Only adults recalled targets in image sentences better under self-choice elaboration than under experimenter-provided elaboration. In contrast, self-choice elaboration was far superior for the recall of targets in nonimage sentences only for second graders. Thus, the effects of self-choice elaboration were determined both by age and by type of sentence frame.
Schuster, Sarah; Hawelka, Stefan; Hutzler, Florian; Kronbichler, Martin; Richlan, Fabio
2016-01-01
Word length, frequency, and predictability count among the most influential variables during reading. Their effects are well-documented in eye movement studies, but pertinent evidence from neuroimaging stems primarily from single-word presentations. We investigated the effects of these variables during reading of whole sentences with simultaneous eye-tracking and functional magnetic resonance imaging (fixation-related fMRI). Increasing word length was associated with increasing activation in occipital areas linked to visual analysis. Additionally, length elicited a U-shaped modulation (i.e., least activation for medium-length words) within a brain stem region presumably linked to eye movement control. These effects, however, were diminished when accounting for multiple fixation cases. Increasing frequency was associated with decreasing activation within left inferior frontal, superior parietal, and occipito-temporal regions. The function of the latter region (hosting the putative visual word form area) was originally considered to be limited to sublexical processing. An exploratory analysis revealed that increasing predictability was associated with decreasing activation within middle temporal and inferior frontal regions previously implicated in memory access and unification. The findings are discussed with regard to their correspondence with findings from single-word presentations and with regard to neurocognitive models of visual word recognition, semantic processing, and eye movement control during reading. PMID:27365297
Richlan, Fabio; Gagl, Benjamin; Hawelka, Stefan; Braun, Mario; Schurz, Matthias; Kronbichler, Martin; Hutzler, Florian
2014-10-01
The present study investigated the feasibility of using self-paced eye movements during reading (measured by an eye tracker) as markers for calculating hemodynamic brain responses measured by functional magnetic resonance imaging (fMRI). Specifically, we were interested in whether the fixation-related fMRI analysis approach was sensitive enough to detect activation differences between reading material (words and pseudowords) and nonreading material (line and unfamiliar Hebrew strings). Reliable reading-related activation was identified in left hemisphere superior temporal, middle temporal, and occipito-temporal regions including the visual word form area (VWFA). The results of the present study are encouraging insofar as fixation-related analysis could be used in future fMRI studies to clarify some of the inconsistent findings in the literature regarding the VWFA. Our study is the first step in investigating specific visual word recognition processes during self-paced natural sentence reading via simultaneous eye tracking and fMRI, thus aiming at an ecologically valid measurement of reading processes. We provided the proof of concept and methodological framework for the analysis of fixation-related fMRI activation in the domain of reading research. © The Author 2013. Published by Oxford University Press.
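In a fixation-related analysis, fixation onsets serve as event markers for the fMRI model. Below is a minimal sketch of one amplitude-modulated regressor, assuming a stick-function construction at TR resolution (real pipelines work on a finer microtime grid and use a canonical HRF; none of this is the authors' code):

```python
import numpy as np

def fixation_regressor(onsets_s, word_property, tr_s, n_scans, hrf):
    """Stick function at each fixation onset, scaled by the (mean-centred)
    word property (e.g., length or frequency), convolved with an HRF
    sampled at the TR."""
    sticks = np.zeros(n_scans)
    vals = np.asarray(word_property, float)
    vals -= vals.mean()  # centre so the regressor codes the modulation only
    for t, v in zip(onsets_s, vals):
        idx = int(round(t / tr_s))
        if idx < n_scans:
            sticks[idx] += v
    return np.convolve(sticks, hrf)[:n_scans]
```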
Role of semantic paradigms for optimization of language mapping in clinical FMRI studies.
Zacà, D; Jarso, S; Pillai, J J
2013-10-01
The optimal paradigm choice for language mapping in clinical fMRI studies is challenging due to the variability in activation among different paradigms, the contribution to activation of cognitive processes other than language, and the difficulties in monitoring patient performance. In this study, we compared language localization and lateralization between 2 commonly used clinical language paradigms and 3 newly designed dual-choice semantic paradigms to define a streamlined and adequate language-mapping protocol. Twelve healthy volunteers performed 5 language paradigms: Silent Word Generation, Sentence Completion, Visual Antonym Pair, Auditory Antonym Pair, and Noun-Verb Association. Group analysis was performed to assess statistically significant differences in fMRI percentage signal change and lateralization index among these paradigms in 5 ROIs: the inferior frontal gyrus, superior frontal gyrus, and middle frontal gyrus for expressive language activation, and the middle temporal gyrus and superior temporal gyrus for receptive language activation. In the expressive ROIs, Silent Word Generation was the most robust and best lateralizing paradigm (greater percentage signal change and lateralization index than the semantic paradigms at the P < .01 and P < .05 levels, respectively). In the receptive ROIs, Sentence Completion and Noun-Verb Association were the most robust activators (greater percentage signal change than the other paradigms, P < .01). All paradigms except Auditory Antonym Pair lateralized well (the lateralization index for Auditory Antonym Pair was significantly lower than for the other paradigms, P < .05). The combination of Silent Word Generation and ≥1 visual semantic paradigm, such as Sentence Completion and Noun-Verb Association, is adequate to determine language localization and lateralization; Noun-Verb Association has the additional advantage of permitting objective monitoring of patient performance.
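The abstract does not define its lateralization index, so the conventional formula is assumed here: LI = (L - R)/(L + R) over activation measures in homologous left and right ROIs.

```python
def lateralization_index(left, right):
    """LI = (L - R) / (L + R), where L and R are activation measures
    (e.g., suprathreshold voxel counts) in homologous ROIs.
    +1 = fully left-lateralized, -1 = fully right-lateralized."""
    return (left - right) / (left + right)

print(lateralization_index(820, 310))  # hypothetical counts -> ~0.45
```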
An Analysis of Errors in Written English Sentences: A Case Study of Thai EFL Students
ERIC Educational Resources Information Center
Sermsook, Kanyakorn; Liamnimit, Jiraporn; Pochakorn, Rattaneekorn
2017-01-01
The purposes of the present study were to examine the language errors in the writing of English-major students at a Thai university and to explore the sources of those errors. This study focused mainly on sentences because the researcher found that errors in Thai EFL students' sentence construction may lead to miscommunication. 104 pieces of writing…
ERIC Educational Resources Information Center
Zhu, Shufeng; Wong, Lena L. N.; Wang, Bin; Chen, Fei
2017-01-01
Purpose: The aim of the present study was to evaluate the influence of lexical tone contour and age on sentence perception in quiet and in noise conditions in Mandarin-speaking children ages 7 to 11 years with normal hearing. Method: Test materials were synthesized Mandarin sentences, each word with a manipulated lexical contour, that is, normal…
Montgomery, James W; Gillam, Ronald B; Evans, Julia L
2016-12-01
Compared with same-age typically developing peers, school-age children with specific language impairment (SLI) exhibit significant deficits in spoken sentence comprehension. They also demonstrate a range of memory limitations. Whether these 2 deficit areas are related is unclear. The present review article aims to (a) review 2 main theoretical accounts of SLI sentence comprehension and various studies supporting each and (b) offer a new, broader, more integrated memory-based framework to guide future SLI research, as we believe the available evidence favors a memory-based perspective of SLI comprehension limitations. We reviewed the literature on the sentence comprehension abilities of English-speaking children with SLI from 2 theoretical perspectives. The sentence comprehension limitations of children with SLI appear to be more fully captured by a memory-based perspective than by a syntax-specific deficit perspective. Although a memory-based view appears to be the better account of SLI sentence comprehension deficits, this view requires refinement and expansion. Current memory-based perspectives of adult sentence comprehension, with proper modification, offer SLI investigators new, more integrated memory frameworks within which to study and better understand the sentence comprehension abilities of children with SLI.
To fly or not to fly? The automatic influence of negation on language-space associations.
Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara
2015-09-01
Embodied models of language understanding propose a close association between language comprehension and sensorimotor processes. Specifically, they suggest that meaning representation is grounded in modal experiences. Converging evidence suggests that words automatically activate spatial processing. For example, words such as 'sky' ('ground') facilitate motor and visual processing associated with upper (lower) space. However, very little is known regarding the influence of linguistic operators such as negation on these language-space associations. If these associations play a crucial role for language understanding beyond the word level, one would expect linguistic operators to automatically influence or modify these language-space associations. Participants read sentences describing an event implying an upward or a downward motion in an affirmative or negated version (e.g. The granny looks to the sky/ground vs. The granny does not look to the sky/ground). Subsequently, participants responded with an upward or downward arm movement according to the colour of a dot on the screen. The results showed that the motion direction implied in the sentences influenced subsequent spatially directed motor responses. For affirmative sentences, arm movements were faster if they matched the movement direction implied in the sentence. This language-space association was modified by the negation operator. Our results show that linguistic operators, such as negation, automatically modify language-space associations. Thus, language-space associations seem to reflect language processes beyond pure word-based activations.
The effect of character contextual diversity on eye movements in Chinese sentence reading.
Chen, Qingrong; Zhao, Guoxia; Huang, Xin; Yang, Yiming; Tanenhaus, Michael K
2017-12-01
Chen, Huang, et al. (Psychonomic Bulletin & Review, 2017) found that when reading two-character Chinese words embedded in sentence contexts, contextual diversity (CD), a measure of the proportion of texts in which a word appears, affected fixation times on words, whereas when CD was controlled, frequency did not affect reading times. Two experiments used the same experimental designs to examine whether there are frequency effects for the first character of two-character words when CD is controlled. In Experiment 1, yoked triples of characters were rotated through the same sentence frame: a control character, a character matched to the control in CD but lower in frequency, and a character matched to the control in frequency but higher in CD. In Experiment 2, each character from a larger set was embedded in a separate sentence frame, allowing for a larger difference in log frequency than in Experiment 1 (0.8 vs. 0.4). In both experiments, early and later eye movement measures were significantly shorter for characters with higher CD than for characters with lower CD, with no effects of character frequency. These results place constraints on models of visual word recognition and suggest ways in which Chinese can be used to tease apart the nature of context effects in word recognition and language processing in general.
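Contextual diversity as defined above (the proportion of texts in which an item appears) is simple to compute from a corpus; representing each document as a set of tokens is an assumption of this sketch:

```python
def contextual_diversity(item, documents):
    """CD = proportion of documents containing the item at least once.
    `documents` is an iterable of token sets (one set per text)."""
    docs = list(documents)
    return sum(item in doc for doc in docs) / len(docs)

# Toy corpus of three tokenized texts:
corpus = [{"我", "看", "书"}, {"书", "店"}, {"天", "空"}]
print(contextual_diversity("书", corpus))  # appears in 2 of 3 -> ~0.67
```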
Shedding Light on Words and Sentences: Near-Infrared Spectroscopy in Language Research
ERIC Educational Resources Information Center
Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell; Obrig, Hellmuth
2012-01-01
Investigating the neuronal network underlying language processing may contribute to a better understanding of how the brain masters this complex cognitive function with surprising ease and how language is acquired at a fast pace in infancy. Modern neuroimaging methods permit to visualize the evolvement and the function of the language network. The…
ERIC Educational Resources Information Center
Troyer, Melissa; Borovsky, Arielle
2017-01-01
In infancy, maternal socioeconomic status (SES) is associated with real-time language processing skills, but whether or not (and if so, how) this relationship carries into adulthood is unknown. We explored the effects of maternal SES in college-aged adults on eye-tracked, spoken sentence comprehension tasks using the visual world paradigm. When…
Iijima, Kazuki; Sakai, Kuniyoshi L.
2014-01-01
Predictive syntactic processing plays an essential role in language comprehension. In our previous study using Japanese object-verb (OV) sentences, we showed that the left inferior frontal gyrus (IFG) responses to a verb increased at 120–140 ms after the verb onset, indicating predictive effects caused by a preceding object. To further elucidate the automaticity of the predictive effects in the present magnetoencephalography study, we examined whether a subliminally presented verb (“subliminal verb”) enhanced the predictive effects on the sentence-final verb (“target verb”) unconsciously, i.e., without awareness. By presenting a subliminal verb after the object, enhanced predictive effects on the target verb would be detected in the OV sentences when the transitivity of the target verb matched with that of the subliminal verb (“congruent condition”), because the subliminal verb just after the object could determine the grammaticality of the sentence. For the OV sentences under the congruent condition, we observed significantly increased left IFG responses at 140–160 ms after the target verb onset. In contrast, responses in the precuneus and midcingulate cortex (MCC) were significantly reduced for the OV sentences under the congruent condition at 110–140 and 280–300 ms, respectively. By using partial Granger causality analyses for the OV sentences under the congruent condition, we revealed a bidirectional interaction between the left IFG and MCC at 60–160 ms, as well as a significant influence from the MCC to the precuneus. These results indicate that a top-down influence from the left IFG to the MCC, and then to the precuneus, is critical in syntactic decisions, whereas the MCC shares its task-set information with the left IFG to achieve automatic and predictive processes of syntax. PMID:25404899
2018-05-01
Reports an error in "Objectifying the subjective: Building blocks of metacognitive experiences in conflict tasks" by Laurence Questienne, Anne Atas, Boris Burle and Wim Gevers (Journal of Experimental Psychology: General, 2018[Jan], Vol 147[1], 125-131). In this article, the second sentence of the second paragraph of the Data Processing section is incorrect due to a production error. The second sentence should read as follows: RTs slower/shorter than Median ± 3 Median Absolute Deviations computed by participant were removed. (The following abstract of the original article appeared in record 2017-52065-001.) Metacognitive appraisals are essential for optimizing our information processing. In conflict tasks, metacognitive appraisals can result from different interrelated features (e.g., motor activity, visual awareness, response speed). Thanks to an original approach combining behavioral and electromyographic measures, the current study objectified the contribution of three features (reaction time [RT], motor hesitation with and without response competition, and visual congruency) to the subjective experience of urge-to-err in a priming conflict task. Both RT and motor hesitation with response competition were major determinants of metacognitive appraisals. Importantly, motor hesitation in absence of response competition and visual congruency had limited effect. Because science aims to rely on objectivity, subjective experiences are often discarded from scientific inquiry. The current study shows that subjectivity can be objectified. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
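The corrected trimming rule (median ± 3 MADs, computed per participant) translates directly to code; the function below is an illustration, not the authors' implementation:

```python
import numpy as np

def mad_trim(rts, criterion=3.0):
    """Keep RTs within median +/- criterion * MAD (median absolute
    deviation), computed over a single participant's RTs."""
    rts = np.asarray(rts, float)
    med = np.median(rts)
    mad = np.median(np.abs(rts - med))
    return rts[np.abs(rts - med) <= criterion * mad]
```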
Pinheiro, Ana P; Dias, Marcelo; Pedrosa, João; Soares, Ana P
2017-04-01
During social communication, words and sentences play a critical role in the expression of emotional meaning. The Minho Affective Sentences (MAS) were developed to respond to the lack of a standardized sentence battery with normative affective ratings: 192 neutral, positive, and negative declarative sentences were strictly controlled for psycholinguistic variables such as numbers of words and letters and per-million word frequency. The sentences were designed to represent examples of each of the five basic emotions (anger, sadness, disgust, fear, and happiness) and of neutral situations. These sentences were presented to 536 participants who rated the stimuli using both dimensional and categorical measures of emotions. Sex differences were also explored. Additionally, we probed how personality, empathy, and mood from a subset of 40 participants modulated the affective ratings. Our results confirmed that the MAS affective norms are valid measures to guide the selection of stimuli for experimental studies of emotion. The combination of dimensional and categorical ratings provided a more fine-grained characterization of the affective properties of the sentences. Moreover, the affective ratings of positive and negative sentences were not only modulated by participants' sex, but also by individual differences in empathy and mood state. Together, our results indicate that, in their quest to reveal the neurofunctional underpinnings of verbal emotional processing, researchers should consider not only the role of sex, but also of interindividual differences in empathy and mood states, in responses to the emotional meaning of sentences.
Semantic context effects and priming in word association.
Zeelenberg, René; Pecher, Diane; Shiffrin, Richard M; Raaijmakers, Jeroen G W
2003-09-01
Two experiments investigated priming in word association, an implicit memory task. In the study phase of Experiment 1, semantically ambiguous target words were presented in sentences that biased their interpretation. The appropriate interpretation of the target was either congruent or incongruent with the cue presented in a subsequent word association task. Priming (i.e., a higher proportion of target responses relative to a nonstudied baseline) was obtained for the congruent condition, but not for the incongruent condition. In Experiment 2, study sentences emphasized particular meaning aspects of nonambiguous targets. The word association task showed a higher proportion of target responses for targets studied in the more congruent sentence context than for targets studied in the less congruent sentence context. These results indicate that priming in word association depends largely on the storage of information relating the cue and target.
Niikuni, Keiyu; Muramoto, Toshiaki
2014-06-01
This study explored the effects of a comma on the processing of structurally ambiguous Japanese sentences with a semantic bias. A previous study has shown that a comma which is incompatible with an ambiguous sentence's semantic bias affects the processing of the sentence, but the effects of a comma that is compatible with the bias are unclear. In the present study, we examined the role of a comma compatible with the sentence's semantic bias using the self-paced reading method, which enabled us to determine the reading times for the region of the sentence where readers would be expected to solve the ambiguity using semantic information (the "target region"). The results show that a comma significantly increases the reading time of the punctuated word but decreases the reading time in the target region. We concluded that even if the semantic information provided might be sufficient for disambiguation, the insertion of a comma would affect the processing cost of the ambiguity, indicating that readers use both the comma and semantic information in parallel for sentence processing.
Enjoying vs. smiling: Facial muscular activation in response to emotional language.
Fino, Edita; Menegatti, Michela; Avenanti, Alessio; Rubini, Monica
2016-07-01
The present study examined whether emotionally congruent facial muscular activation, a somatic index of emotional language embodiment, can be elicited by reading subject-verb sentences composed of action verbs that refer directly to facial expressions (e.g., Mario smiles), but also by reading more abstract state verbs, which provide more direct access to the emotions felt by the agent (e.g., Mario enjoys). To address this issue, we measured facial electromyography (EMG) while participants evaluated state-verb and action-verb sentences. We found that emotional sentences containing both verb categories had valence-congruent effects on emotional ratings and corresponding facial muscle activations. As expected, state-verb sentences were judged with higher valence ratings than action-verb sentences. Moreover, although emotion-congruent facial activations were similar for the two linguistic categories, in a late temporal window we found a tendency for greater EMG modulation when reading action relative to state verb sentences. These results support embodied theories of language comprehension and suggest that understanding emotional action and state verb sentences relies on partially dissociable motor and emotional processes. Copyright © 2016 Elsevier B.V. All rights reserved.
Wendt, Dorothea; Brand, Thomas; Kollmeier, Birger
2014-01-01
An eye-tracking paradigm was developed for use in audiology in order to enable online analysis of the speech comprehension process. This paradigm should be useful in assessing impediments in speech processing. In this paradigm, two scenes, a target picture and a competitor picture, were presented simultaneously with an aurally presented sentence that corresponded to the target picture. At the same time, eye fixations were recorded using an eye-tracking device. The effect of linguistic complexity on language processing time was assessed from eye fixation information by systematically varying linguistic complexity. This was achieved with a sentence corpus containing seven German sentence structures. A novel data analysis method computed the average tendency to fixate the target picture as a function of time during sentence processing. This allowed identification of the point in time at which the participant understood the sentence, referred to as the decision moment. Systematic differences in processing time were observed as a function of linguistic complexity. These differences in processing time may be used to assess the efficiency of cognitive processes involved in resolving linguistic complexity. Thus, the proposed method enables a temporal analysis of the speech comprehension process and has potential applications in speech audiology and psychoacoustics.
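The "decision moment" analysis can be pictured by reducing trial-wise fixation data to a target-minus-competitor tendency curve and finding when it first clears a threshold. The threshold value, sampling rate, and array layout below are assumptions for illustration, not the authors' exact computation:

```python
import numpy as np

def decision_moment(target_fix, competitor_fix, threshold=0.5, sample_ms=2):
    """target_fix / competitor_fix: boolean arrays (n_trials, n_samples)
    marking gaze on each picture. Returns the first time (ms) at which the
    tendency to fixate the target exceeds the threshold, or None."""
    tendency = target_fix.mean(axis=0) - competitor_fix.mean(axis=0)
    above = np.flatnonzero(tendency >= threshold)
    return int(above[0]) * sample_ms if above.size else None
```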
Perceptual invariance of coarticulated vowels over variations in speaking rate.
Stack, Janet W; Strange, Winifred; Jenkins, James J; Clarke, William D; Trent, Sonja A
2006-04-01
This study examined the perception and acoustics of a large corpus of vowels spoken in consonant-vowel-consonant syllables produced in citation-form (lists) and spoken in sentences at normal and rapid rates by a female adult. Listeners correctly categorized the speaking rate of sentence materials as normal or rapid (2% errors) but did not accurately classify the speaking rate of the syllables when they were excised from the sentences (25% errors). In contrast, listeners accurately identified the vowels produced in sentences spoken at both rates when presented the sentences and when presented the excised syllables blocked by speaking rate or randomized. Acoustical analysis showed that formant frequencies at syllable midpoint for vowels in sentence materials showed "target undershoot" relative to citation-form values, but little change over speech rate. Syllable durations varied systematically with vowel identity, speaking rate, and voicing of final consonant. Vowel-inherent-spectral-change was invariant in direction of change over rate and context for most vowels. The temporal location of maximum F1 frequency further differentiated spectrally adjacent lax and tense vowels. It was concluded that listeners were able to utilize these rate- and context-independent dynamic spectrotemporal parameters to identify coarticulated vowels, even when sentential information about speaking rate was not available.
A grammar-based semantic similarity algorithm for natural language sentences.
Lee, Ming Che; Chang, Jia Wei; Hsieh, Tung Cheng
2014-01-01
This paper presents a grammar- and semantic-corpus-based similarity algorithm for natural language sentences. Natural language, as opposed to an "artificial language" such as a computer programming language, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even ontology-based approaches that extend to concept similarity comparison instead of co-occurring terms/words, may fail to determine a good match when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of a corpus-based ontology and grammatical rules to overcome these problems. Experiments on two well-known benchmarks demonstrate that the proposed algorithm yields a significant performance improvement on sentences/short texts with arbitrary syntax and structure.
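For context on what such algorithms compute, here is a toy ontology-based sentence similarity of the kind the paper aims to improve on, using WordNet through NLTK. The bidirectional best-match averaging is an assumption of this sketch, and it is not the authors' grammar-based algorithm:

```python
# Requires a one-time: import nltk; nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def word_sim(w1, w2):
    """Best WordNet path similarity across all synset pairs; falls back to
    exact-match scoring for words WordNet does not cover."""
    sims = [s1.path_similarity(s2) or 0.0
            for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(sims, default=1.0 if w1 == w2 else 0.0)

def sentence_sim(a, b):
    """Average best-match word similarity, taken in both directions."""
    ta, tb = a.lower().split(), b.lower().split()
    def directed(xs, ys):
        return sum(max(word_sim(x, y) for y in ys) for x in xs) / len(xs)
    return (directed(ta, tb) + directed(tb, ta)) / 2

print(sentence_sim("a dog chased the cat", "the cat fled from a dog"))
```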
Brief Report: Judicial Attitudes Regarding the Sentencing of Offenders with High Functioning Autism
Berryessa, Colleen M.
2016-01-01
This brief report presents preliminary data on the attitudes of judges toward the sentencing of offenders with High Functioning Autism (HFA). Semi-structured telephone interviews were conducted with twenty-one California Superior Court Judges. Interviews were qualitatively coded and constant comparative analysis was utilized. Findings revealed that judges consider HFA as both a mitigating and an aggravating factor in sentencing, and knowledge of an offender's disorder could potentially help judges understand why a criminal action might have been committed. Judges voiced concerns about the criminal justice system's ability to effectively help or offer sentencing options for offenders with HFA. Finally, judges reported that they are focused on using their judicial powers and influence to provide treatment and other resources during sentencing. PMID:27106568
Knoll, L J; Obleser, J; Schipke, C S; Friederici, A D; Brauer, J
2012-08-01
Children's language skills develop rapidly with increasing age, and several studies indicate that they use language- and age-specific strategies to understand complex sentences. In the present experiment, functional magnetic resonance imaging (fMRI) and behavioral measures were used to investigate the acquisition of case-marking cues for sentence interpretation in the developing brain of German preschool children with a mean age of 6 years. Short sentences were presented auditorily, consisting of a transitive verb and two case-marked arguments with canonical subject-initial or noncanonical object-initial word order. Overall group results revealed mainly left-hemispheric activation in the perisylvian cortex, with increased activation in the inferior parietal cortex (IPC) and the anterior cingulate cortex (ACC) for object-initial compared to subject-initial sentences. However, single-subject analysis suggested two distinct activation patterns within the group, which allowed a classification into two subgroups. One subgroup showed the predicted activation increase in the left inferior frontal gyrus (IFG) for the more difficult object-initial compared to subject-initial sentences, while the other group showed the reverse effect. This activation in the left IFG can be taken to reflect the degree to which adult-like sentence processing strategies, necessary to integrate case-marking information, are applied. Additional behavioral data from language development tests show that these two subgroups differ in their grammatical knowledge. Together with these behavioral findings, the results indicate that the use of a particular processing strategy is not dependent on age as such, but rather on the child's individual grammatical knowledge and the ability to use specific language cues for successful sentence comprehension. Copyright © 2012 Elsevier Inc. All rights reserved.
Kollmeier, Birger; Brand, Thomas
2015-01-01
The main objective of this study was to investigate the extent to which hearing impairment influences the duration of sentence processing. An eye-tracking paradigm is introduced that provides an online measure of how hearing impairment prolongs processing of linguistically complex sentences; this measure uses eye fixations recorded while the participant listens to a sentence. Eye fixations toward a target picture (which matches the aurally presented sentence) were measured in the presence of a competitor picture. Based on the recorded eye fixations, the single target detection amplitude, which reflects the tendency of the participant to fixate the target picture, was used as a metric to estimate the duration of sentence processing. The single target detection amplitude was calculated for sentence structures with different levels of linguistic complexity and for different listening conditions: in quiet and in two different noise conditions. Participants with hearing impairment spent more time processing sentences, even at high levels of speech intelligibility. In addition, the relationship between the proposed online measure and listener-specific factors, such as hearing aid use and cognitive abilities, was investigated. Longer processing durations were measured for participants with hearing impairment who were not accustomed to using a hearing aid. Moreover, significant correlations were found between sentence processing duration and individual cognitive abilities (such as working memory capacity or susceptibility to interference). These findings are discussed with respect to audiological applications. PMID:25910503
Understanding environmental sounds in sentence context.
Uddin, Sophia; Heald, Shannon L M; Van Hedger, Stephen C; Klos, Serena; Nusbaum, Howard C
2018-03-01
There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions. Copyright © 2017 Elsevier B.V. All rights reserved.
The role of working memory in inferential sentence comprehension.
Pérez, Ana Isabel; Paolieri, Daniela; Macizo, Pedro; Bajo, Teresa
2014-08-01
Existing literature on inference making is large and varied. Trabasso and Magliano (Discourse Process 21(3):255-287, 1996) proposed the existence of three types of inferences: explicative, associative and predictive. In addition, the authors suggested that these inferences were related to working memory (WM). In the present experiment, we investigated whether WM capacity plays a role in our ability to answer comprehension sentences that require text information based on these types of inferences. Participants with high and low WM span read two narratives with four paragraphs each. After each paragraph was read, they were presented with four true/false comprehension sentences. One required verbatim information and the other three implied explicative, associative and predictive inferential information. Results demonstrated that only the explicative and predictive comprehension sentences required WM: participants with high verbal WM were more accurate in giving explanations and also faster at making predictions relative to participants with low verbal WM span; in contrast, no WM differences were found in the associative comprehension sentences. These results are interpreted in terms of the causal nature underlying these types of inferences.
MacKay, Donald G; James, Lori E; Hadley, Christopher B
2008-04-01
To test conflicting hypotheses regarding amnesic H.M.'s language abilities, this study examined H.M.'s sentence production on the Language Competence Test (Wiig & Secord, 1988). The task for H.M. and 8 education-, age-, and IQ-matched controls was to describe pictures using a single grammatical sentence containing prespecified target words. The results indicated selective deficits in H.M.'s picture descriptions: H.M. produced fewer single grammatical sentences, included fewer target words, and described the pictures less completely and accurately than did the controls. However, H.M.'s deficits diminished with repeated processing of unfamiliar stimuli and disappeared for familiar stimuli-effects that help explain why other researchers have concluded that H.M.'s language production is intact. Besides resolving the conflicting hypotheses, present results replicated other well-controlled sentence production results and indicated that H.M.'s language and memory exhibit parallel deficits and sparing. Present results comport in detail with binding theory but pose problems for current systems theories of H.M.'s condition.
Oba, Sandra I.; Galvin, John J.; Fu, Qian-Jie
2014-01-01
Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether post-training gains in performance were due to improved auditory perception or to generally improved attention, memory and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory were assessed in ten CI users before, during, and after training with a non-auditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Post-training gains were much smaller with the non-auditory VDS training than observed in previous auditory training studies with CI users. The results suggest that post-training gains observed in previous studies were not solely attributable to improved attention or memory, and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception. PMID:23516087
Language-guided visual processing affects reasoning: the role of referential and spatial anchoring.
Dumitru, Magda L; Joergensen, Gitte H; Cruickshank, Alice G; Altmann, Gerry T M
2013-06-01
Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously oriented fixations over a particular scene. The consequences of this atypical visual processing are yet to be determined. Here, we investigated the integration of visual and linguistic input during a reasoning task. Participants listened to sentences containing conjunctions or disjunctions (Nancy examined an ant and/or a cloud) and looked at visual scenes containing two pictures that either matched or mismatched the nouns. Degree of match between nouns and pictures (referential anchoring) and between their expected and actual spatial positions (spatial anchoring) affected fixations as well as judgments. We conclude that language induces incremental processing of visual scenes, which in turn becomes susceptible to reasoning errors during the language-meaning verification process. Copyright © 2013 Elsevier Inc. All rights reserved.
Gated audiovisual speech identification in silence vs. noise: effects on time and accuracy
Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker
2013-01-01
This study investigated the degree to which audiovisual presentation (compared to auditory-only presentation) affected isolation point (IPs, the amount of time required for the correct identification of speech stimuli using a gating paradigm) in silence and noise conditions. The study expanded on the findings of Moradi et al. (under revision), using the same stimuli, but presented in an audiovisual instead of an auditory-only manner. The results showed that noise impeded the identification of consonants and words (i.e., delayed IPs and lowered accuracy), but not the identification of final words in sentences. In comparison with the previous study by Moradi et al., it can be concluded that the provision of visual cues expedited IPs and increased the accuracy of speech stimuli identification in both silence and noise. The implication of the results is discussed in terms of models for speech understanding. PMID:23801980
Syntactic learning by mere exposure--an ERP study in adult learners.
Mueller, Jutta L; Oberecker, Regine; Friederici, Angela D
2009-07-29
Artificial language studies have revealed the remarkable ability of humans to extract syntactic structures from a continuous sound stream by mere exposure. However, it remains unclear whether the processes acquired in such tasks are comparable to those applied during normal language processing. The present study compares the ERPs to auditory processing of simple Italian sentences in native and non-native speakers after brief exposure to Italian sentences of a similar structure. The sentences contained a non-adjacent dependency between an auxiliary and the morphologically marked suffix of the verb. Participants were presented with four alternating learning and testing phases. During learning phases only correct sentences were presented, while during testing phases 50 percent of the sentences contained a grammatical violation. The non-native speakers successfully learned the dependency and displayed an N400-like negativity and a subsequent anteriorly distributed positivity in response to rule violations. The native Italian group showed an N400 followed by a P600 effect. The presence of the P600 suggests that native speakers applied a grammatical rule. In contrast, non-native speakers appeared to use a lexical form-based processing strategy. Thus, the processing mechanisms acquired in the language learning task were only partly comparable to those applied by competent native speakers.
Evidence for the role of shape in mental representations of similes.
van Weelden, Lisanne; Schilperoord, Joost; Maes, Alfons
2014-03-01
People mentally represent the shapes of objects. For instance, the mental representation of an eagle is different when one thinks about a flying or resting eagle. This study examined the role of shape in mental representations of similes (i.e., metaphoric comparisons). We tested the prediction that when people process a simile they will mentally represent the entities of the comparison as having a similar shape. We conducted two experiments in which participants read sentences that either did (experimental sentences) or did not (control sentences) invite comparing two entities. For the experimental sentences, the ground of the comparison was explicit in Experiment 1 ("X has the ability to Z, just like Y") and implicit in Experiment 2 ("X is like Y"). After having read the sentence, participants were presented with line drawings of the two objects, which were either similarly or dissimilarly shaped. They judged whether both objects were mentioned in the preceding sentence. For the experimental sentences, recognition latencies were shorter for similarly shaped objects than for dissimilarly shaped objects. For the control sentences, we did not find such an effect of similarity in shape. These findings suggest that a perceptual symbol of shape is activated when processing similes. © 2013 Cognitive Science Society, Inc.
"Visual" Cortex of Congenitally Blind Adults Responds to Syntactic Movement.
Lane, Connor; Kanjlia, Shipra; Omaki, Akira; Bedny, Marina
2015-09-16
Human cortex is composed of specialized networks that support functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity offer unique insights into this question. In congenitally blind individuals, "visual" cortex responds to auditory and tactile stimuli. Remarkably, recent evidence suggests that occipital areas participate in language processing. We asked whether in blindness, occipital cortices: (1) develop domain-specific responses to language and (2) respond to a highly specialized aspect of language: syntactic movement. Nineteen congenitally blind and 18 sighted participants took part in two fMRI experiments. We report that in congenitally blind individuals, but not in sighted controls, "visual" cortex is more active during sentence comprehension than during a sequence memory task with nonwords, or a symbolic math task. This suggests that areas of occipital cortex become selective for language, relative to other similar higher-cognitive tasks. Crucially, we find that these occipital areas respond more to sentences with syntactic movement but do not respond to the difficulty of math equations. We conclude that regions within the visual cortex of blind adults are involved in syntactic processing. Our findings suggest that the cognitive function of human cortical areas is largely determined by input during development. Human cortex is made up of specialized regions that perform different functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity show that cortical areas can change function from one sensory modality to another. Here we demonstrate that input during development can alter cortical function even more dramatically. In blindness a subset of "visual" areas becomes specialized for language processing. Crucially, we find that the same "visual" areas respond to a highly specialized and uniquely human aspect of language: syntactic movement. These data suggest that human cortex has broad functional capacity during development, and input plays a major role in determining functional specialization. Copyright © 2015 the authors.
ERIC Educational Resources Information Center
Tamaoka, Katsuo; Asano, Michiko; Miyaoka, Yayoi; Yokosawa, Kazuhiko
2014-01-01
Using the eye-tracking method, the present study depicted pre- and post-head processing for simple scrambled sentences of head-final languages. Three versions of simple Japanese active sentences with ditransitive verbs were used: namely, (1) SO₁O₂V canonical, (2) SO₂O₁V single-scrambled, and (3)…
Efficient Learning for the Poor: New Insights into Literacy Acquisition for Children
ERIC Educational Resources Information Center
Abadzi, Helen
2008-01-01
Reading depends on the speed of visual recognition and capacity of short-term memory. To understand a sentence, the mind must read it fast enough to capture it within the limits of the short-term memory. This means that children must attain a minimum speed of fairly accurate reading to understand a passage. Learning to read involves "tricking" the…
The Time Course of Argument Reactivation Revealed: Using the Visual World Paradigm
ERIC Educational Resources Information Center
Koring, Loes; Mak, Pim; Reuland, Eric
2012-01-01
Previous research has found that the single argument of unaccusative verbs (such as "fall") is reactivated during sentence processing, but the argument of agentive verbs (such as "jump") is not. An open question so far was whether this difference in processing is caused by a difference in thematic roles the verbs assign, or a difference in…
ERIC Educational Resources Information Center
Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann
2013-01-01
Blind people can learn to understand speech at ultra-high syllable rates (ca. 20 syllables/s), a capability associated with hemodynamic activation of the central-visual system. To further elucidate the neural mechanisms underlying this skill, magnetoencephalographic (MEG) measurements during listening to sentence utterances were cross-correlated…
ERIC Educational Resources Information Center
Takenishi, Michelle; Takenishi, Hal
This book describes "Writing Pictures," a daily developmental writing exercise in which students visualize a moment in their life of their own choosing, sketch it quickly, and write four guided sentences in paragraph format about it. Beginning with level one, it takes students through the basic format, and, with time, students progress upward…
ERIC Educational Resources Information Center
Swanson, H. Lee; Lussier, Cathy; Orosco, Michael
2013-01-01
This study investigated the role of strategy instruction and cognitive abilities on word problem solving accuracy in children with math difficulties (MD). Elementary school children (N = 120) with and without MD were randomly assigned to 1 of 4 conditions: general-heuristic (e.g., underline question sentence), visual-schematic presentation…
Perceptual Decoding Processes for Language in a Visual Mode and for Language in an Auditory Mode.
ERIC Educational Resources Information Center
Myerson, Rosemarie Farkas
The purpose of this paper is to gain insight into the nature of the reading process through an understanding of the general nature of sensory processing mechanisms which reorganize and restructure input signals for central recognition, and an understanding of how the grammar of the language functions in defining the set of possible sentences in…
Semantic Dementia and Persisting Wernicke's Aphasia: Linguistic and Anatomical Profiles
Ogar, JM; Baldo, JV; Wilson, SM; Brambati, SM; Miller, BL; Dronkers, NF; Gorno-Tempini, ML
2011-01-01
Few studies have directly compared the clinical and anatomical characteristics of patients with progressive aphasia to those of patients with aphasia caused by stroke. In the current study we examined fluent forms of aphasia in these two groups, specifically semantic dementia (SD) and persisting Wernicke's aphasia (WA) due to stroke. We compared 10 patients with SD to 10 age- and education-matched patients with WA in three language domains: language comprehension (single words and sentences), spontaneous speech and visual semantics. Neuroanatomical involvement was analyzed using disease-specific image analysis techniques: voxel-based morphometry (VBM) for patients with SD and overlays of lesion masks in patients with WA. Patients with SD and WA were both impaired on tasks that involved visual semantics, but patients with SD were less impaired in spontaneous speech and sentence comprehension. The anatomical findings showed that different regions were most affected in the two disorders: the left anterior temporal lobe in SD and the left posterior middle temporal gyrus in chronic WA. This study highlights that the two syndromes classically associated with language comprehension deficits in aphasia due to stroke and neurodegenerative disease are clinically distinct, most likely due to distinct distributions of damage in the temporal lobe. PMID:21315437
Syntactic Prediction in Language Comprehension: Evidence From Either…or
Staub, Adrian; Clifton, Charles
2006-01-01
Readers’ eye movements were monitored as they read sentences in which two noun phrases or two independent clauses were connected by the word or (NP-coordination and S-coordination, respectively). The word either could be present or absent earlier in the sentence. When either was present, the material immediately following or was read more quickly, across both sentence types. In addition, there was evidence that readers misanalyzed the S-coordination structure as an NP-coordination structure only when either was absent. The authors interpret the results as indicating that the word either enabled readers to predict the arrival of a coordination structure; this predictive activation facilitated processing of this structure when it ultimately arrived, and in the case of S-coordination sentences, enabled readers to avoid the incorrect NP-coordination analysis. The authors argue that these results support parsing theories according to which the parser can build predictable syntactic structure before encountering the corresponding lexical input. PMID:16569157
Direct brain recordings reveal hippocampal rhythm underpinnings of language processing.
Piai, Vitória; Anderson, Kristopher L; Lin, Jack J; Dewar, Callum; Parvizi, Josef; Dronkers, Nina F; Knight, Robert T
2016-10-04
Language is classically thought to be supported by perisylvian cortical regions. Here we provide intracranial evidence linking the hippocampal complex to linguistic processing. We used direct recordings from the hippocampal structures to investigate whether theta oscillations, pivotal in memory function, track the amount of contextual linguistic information provided in sentences. Twelve participants heard sentences that were either constrained ("She locked the door with the") or unconstrained ("She walked in here with the") before presentation of the final word ("key"), shown as a picture that participants had to name. Hippocampal theta power increased for constrained relative to unconstrained contexts during sentence processing, preceding picture presentation. Our study implicates hippocampal theta oscillations in a language task using natural language associations that do not require memorization. These findings reveal that the hippocampal complex contributes to language in an active fashion, relating incoming words to stored semantic knowledge, a necessary process in the generation of sentence meaning.
Punctuation effects in English and Esperanto texts
NASA Astrophysics Data System (ADS)
Ausloos, M.
2010-07-01
A statistical physics study of punctuation effects on sentence lengths is presented for written texts: Alice in Wonderland and Through the Looking-Glass. The translation of the first text into Esperanto is also considered as a test for the role of punctuation in defining a style, and for contrasting natural and artificial, but written, languages. Several log-log plots of the sentence-length-rank relationship are presented for the major punctuation marks. Different power laws are observed with characteristic exponents. The exponent can take a value much less than unity (ca. 0.50 or 0.30) depending on how a sentence is defined. The texts are also mapped into time series based on the word frequencies. The quantitative differences between the original and translated texts are very minute at the exponent level. It is argued that sentences seem to be more reliable than word distributions in discussing an author's style.
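The rank-size analysis described above is straightforward to reproduce. The sketch below is a minimal illustration, not the paper's exact procedure: the toy text, tokenization, and choice of sentence delimiter are all assumptions. It ranks sentences by word count and estimates the power-law exponent as the slope of a least-squares line in log-log space:

    import re
    import numpy as np

    def rank_length_exponent(text, delimiters=r"[.?!]"):
        # Split the text into "sentences" at the chosen punctuation marks;
        # the paper's point is that the exponent depends on this choice.
        sentences = [s.strip() for s in re.split(delimiters, text) if s.strip()]
        # Sentence lengths in words, sorted into a rank-size distribution.
        lengths = sorted((len(s.split()) for s in sentences), reverse=True)
        ranks = np.arange(1, len(lengths) + 1)
        # A power law length ~ C * rank**(-alpha) is a straight line in
        # log-log space, so alpha is the negated least-squares slope.
        slope, _ = np.polyfit(np.log(ranks), np.log(lengths), 1)
        return -slope

    toy_text = ("Alice was beginning to get very tired of sitting by her sister. "
                "She peeped into the book. It had no pictures in it. "
                "What is the use of a book without pictures?")
    print(rank_length_exponent(toy_text))

Rerunning such a function with different delimiter sets (e.g., adding commas or semicolons) reproduces the qualitative observation that the fitted exponent shifts with the definition of a sentence.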
Advanced Course in Engineering (ACE) - Cyber Security Boot Camp
2008-04-01
third person, and avoid the second person. Use the present tense or the simple past tense only. Do not use future, present progressive or past...and simplicity. It promotes short sentences, direct voices and active verbs. It favors past and present tenses over subjunctives. It avoids...progressive tenses. Do not mix tenses in the same paragraph, let alone in the same sentence. Use the present tense to describe activity, and use the
Dog Theft: A Case for Tougher Sentencing Legislation.
Harris, Lauren K
2018-05-22
Dogs, and other companion animals, are currently classed as "property" in theft sentencing legislation for England and Wales. This means that offenders who steal dogs are given similar sentences to those that steal inanimate objects. This review presents the argument that the penalty for dog theft should be more severe than for the theft of non-living property. Evidence of the unique bond between dogs and humans, and discussion of the implications of labelling a living being as mere "property" are used to support this argument. The review concludes that the Sentencing Council's guidelines should be amended so that offences involving the theft of a companion animal are deemed to be a Category 2 offence or above. The review further proposes that "theft of a companion animal" should be listed in the Sentencing Council's guidelines as an aggravating factor.
A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences
Chang, Jia Wei; Hsieh, Tung Cheng
2014-01-01
This paper presents a grammar- and semantic-corpus-based similarity algorithm for natural language sentences. Natural language, in opposition to “artificial language”, such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of co-occurrence terms/words, may not always determine a perfect match when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure. PMID:24982952
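As a rough illustration of the general idea of combining corpus-derived word similarity with sentence-level alignment (this is not the authors' algorithm, which relies on a corpus-based ontology and grammatical rules; the toy corpus and the alignment scheme here are invented for exposition):

    from collections import Counter
    from itertools import combinations
    import math

    # Toy corpus; a real system would use a large corpus or an ontology.
    corpus = [
        "the dog chased the cat",
        "the cat chased the mouse",
        "a student reads a book",
        "a teacher reads a paper",
    ]

    # Build co-occurrence vectors: words sharing a sentence co-occur.
    vectors = {}
    for sentence in corpus:
        for w1, w2 in combinations(set(sentence.split()), 2):
            vectors.setdefault(w1, Counter())[w2] += 1
            vectors.setdefault(w2, Counter())[w1] += 1

    def word_sim(a, b):
        # Cosine similarity of co-occurrence vectors; identical words match fully.
        if a == b:
            return 1.0
        va, vb = vectors.get(a, Counter()), vectors.get(b, Counter())
        dot = sum(va[k] * vb[k] for k in va)
        norm = math.sqrt(sum(v * v for v in va.values())) * \
               math.sqrt(sum(v * v for v in vb.values()))
        return dot / norm if norm else 0.0

    def sentence_sim(s1, s2):
        # Align each word with its best match in the other sentence and average.
        w1, w2 = s1.split(), s2.split()
        best1 = sum(max(word_sim(a, b) for b in w2) for a in w1) / len(w1)
        best2 = sum(max(word_sim(b, a) for a in w1) for b in w2) / len(w2)
        return (best1 + best2) / 2

    print(sentence_sim("the dog chased the cat", "the cat chased the mouse"))

A grammar-aware variant would additionally weight each aligned pair by the syntactic role of the words, so that, for example, matching main verbs contributes more than matching determiners.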
Generating Descriptions of Motion from Cognitive Representations
ERIC Educational Resources Information Center
Keil, Benjamin
2010-01-01
This dissertation presents a novel method of sentence generation, drawing on the insight from Cognitive Semantics (Talmy, 2000a,b) that the effect of uttering a sentence is to evoke a Cognitive Representation in the mind of the listener. Under the assumption that this Cognitive Representation is also present in the speaker and defines (part of)…
An architecture for encoding sentence meaning in left mid-superior temporal cortex
Frankland, Steven M.; Greene, Joshua D.
2015-01-01
Human brains flexibly combine the meanings of words to compose structured thoughts. For example, by combining the meanings of “bite,” “dog,” and “man,” we can think about a dog biting a man, or a man biting a dog. Here, in two functional magnetic resonance imaging (fMRI) experiments using multivoxel pattern analysis (MVPA), we identify a region of left mid-superior temporal cortex (lmSTC) that flexibly encodes “who did what to whom” in visually presented sentences. We find that lmSTC represents the current values of abstract semantic variables (“Who did it?” and “To whom was it done?”) in distinct subregions. Experiment 1 first identifies a broad region of lmSTC whose activity patterns (i) facilitate decoding of structure-dependent sentence meaning (“Who did what to whom?”) and (ii) predict affect-related amygdala responses that depend on this information (e.g., “the baby kicked the grandfather” vs. “the grandfather kicked the baby”). Experiment 2 then identifies distinct, but neighboring, subregions of lmSTC whose activity patterns carry information about the identity of the current “agent” (“Who did it?”) and the current “patient” (“To whom was it done?”). These neighboring subregions lie along the upper bank of the superior temporal sulcus and the lateral bank of the superior temporal gyrus, respectively. At a high level, these regions may function like topographically defined data registers, encoding the fluctuating values of abstract semantic variables. This functional architecture, which in key respects resembles that of a classical computer, may play a critical role in enabling humans to flexibly generate complex thoughts. PMID:26305927
Cutter, Michael G; Drieghe, Denis; Liversedge, Simon P
2018-04-25
In the current study we investigated whether readers adjust their preferred saccade length (PSL) during reading on a trial-by-trial basis. The PSL refers to the distance between a saccade launch site and saccade target (i.e., the word center during reading) when participants neither undershoot nor overshoot this target (McConkie, Kerr, Reddix, & Zola in Vision Research, 28, 1107-1118, 1988). The tendency for saccades longer or shorter than the PSL to undershoot or overshoot their target is referred to as the range error. Recent research by Cutter, Drieghe, and Liversedge (Journal of Experimental Psychology: Human Perception and Performance, 2017) has shown that the PSL becomes shorter when readers are presented with 30 consecutive sentences made exclusively of three-letter words, and longer when presented with 30 consecutive sentences made exclusively of five-letter words. We replicated and extended this work, this time presenting participants with these uniform sentences in an unblocked design. We found that adaptation still occurred across different sentence types despite participants having only one trial to adapt. Our analyses suggested that this effect was driven by the length of the words readers were making saccades away from, rather than the length of the words in the rest of the sentence. We propose an account of the range error in which readers use parafoveal word length information to estimate the length of a saccade between the center of two parafoveal words (termed the Centre-Based Saccade Length) prior to landing on the first of these words.
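To make the range-error notion concrete, a toy calculation (not the authors' model; the slope and the 7-character PSL are illustrative placeholders loosely inspired by the oculomotor literature) can express the systematic landing error as proportional to how far the launch-to-target distance departs from the preferred saccade length:

    def predicted_landing_error(launch_to_target, psl=7.0, slope=0.5):
        """Systematic range error: saccades shorter than the preferred saccade
        length (PSL) tend to overshoot, longer ones to undershoot.
        Slope and PSL (in character spaces) are illustrative placeholders."""
        return slope * (psl - launch_to_target)

    for distance in (3, 7, 11):
        err = predicted_landing_error(distance)
        print(f"launch-to-target {distance} chars -> landing error {err:+.1f} chars")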
Sentence comprehension following moderate closed head injury in adults.
Leikin, Mark; Ibrahim, Raphiq; Aharon-Peretz, Judith
2012-09-01
The current study explores sentence comprehension impairments among adults following moderate closed head injury. It was hypothesized that if the factor of syntactic complexity significantly affects sentence comprehension in these patients, it would testify to the existence of syntactic processing deficit along with working-memory problems. Thirty-six adults (18 closed head injury patients and 18 healthy controls matched in age, gender, and IQ) participated in the study. A picture-sentence matching task together with various tests for memory, language, and reading abilities were used to explore whether sentence comprehension impairments exist as a result of a deficit in syntactic processing or of working-memory dysfunction. Results indicate significant impairment in sentence comprehension among adults with closed head injury compared with their non-head-injured peers. Results also reveal that closed head injury patients demonstrate considerable decline in working memory, short-term memory, and semantic knowledge. Analysis of the results shows that memory impairment and syntactic complexity contribute significantly to sentence comprehension difficulties in closed head injury patients. At the same time, the presentation mode (spoken or written language) was found to have no effect on comprehension among adults with closed head injury, and their reading abilities appear to be relatively intact.
Wendt, Dorothea; Kollmeier, Birger; Brand, Thomas
2015-04-24
The main objective of this study was to investigate the extent to which hearing impairment influences the duration of sentence processing. An eye-tracking paradigm is introduced that provides an online measure of how hearing impairment prolongs processing of linguistically complex sentences; this measure uses eye fixations recorded while the participant listens to a sentence. Eye fixations toward a target picture (which matches the aurally presented sentence) were measured in the presence of a competitor picture. Based on the recorded eye fixations, the single target detection amplitude, which reflects the tendency of the participant to fixate the target picture, was used as a metric to estimate the duration of sentence processing. The single target detection amplitude was calculated for sentence structures with different levels of linguistic complexity and for different listening conditions: in quiet and in two different noise conditions. Participants with hearing impairment spent more time processing sentences, even at high levels of speech intelligibility. In addition, the relationship between the proposed online measure and listener-specific factors, such as hearing aid use and cognitive abilities, was investigated. Longer processing durations were measured for participants with hearing impairment who were not accustomed to using a hearing aid. Moreover, significant correlations were found between sentence processing duration and individual cognitive abilities (such as working memory capacity or susceptibility to interference). These findings are discussed with respect to audiological applications. © The Author(s) 2015.
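The abstract does not spell out how the single target detection amplitude is computed; as a loose proxy for the underlying idea, one can summarize per-sample fixations as a running target-minus-competitor preference whose growth over the trial reflects how quickly the listener settles on the target picture (the coding scheme and data below are invented for illustration):

    import numpy as np

    def target_advantage(fixations):
        """fixations: per-sample codes, 1 = target picture,
        -1 = competitor picture, 0 = elsewhere. Returns a curve whose
        amplitude grows as the listener settles on the target.
        This is an illustrative proxy, not the published metric."""
        fixations = np.asarray(fixations, dtype=float)
        # Running proportion of target-minus-competitor fixations.
        return np.cumsum(fixations) / np.arange(1, len(fixations) + 1)

    # Simulated trial: the listener starts undecided, then fixates the target.
    trial = [0, 0, -1, -1, 0, 1, 1, 1, 1, 1, 1, 1]
    curve = target_advantage(trial)
    print(f"final target-fixation advantage: {curve[-1]:.2f}")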
Neural correlates of processing sentences and compound words in Chinese
Hung, Yi-Hui; Tzeng, Ovid; Wu, Denise H.
2017-01-01
Sentence reading involves multiple linguistic operations, including processing of lexical and compositional semantics and determining structural and grammatical relationships among words. Previous studies on Indo-European languages have associated the left anterior temporal lobe (aTL) and left inferior frontal gyrus (IFG) with reading sentences compared to reading unstructured word lists. To examine whether these brain regions are also involved in reading a typologically distinct language with limited morphosyntax and lack of agreement between sentential arguments, an fMRI study was conducted to compare passive reading of Chinese sentences, unstructured word lists, and disconnected character lists that are created by only changing the order of an identical set of characters. Similar to previous findings from other languages, stronger activation was found in mainly left-lateralized anterior temporal regions (including aTL) for reading sentences compared to unstructured word and character lists. On the other hand, stronger activation was identified in the left posterior temporal sulcus for reading unstructured words compared to unstructured characters. Furthermore, reading unstructured word lists compared to sentences evoked stronger activation in left IFG and left inferior parietal lobule. Consistent with the literature on Indo-European languages, the present results suggest that left anterior temporal regions subserve sentence-level integration, while left IFG supports restoration of sentence structure. In addition, left posterior temporal sulcus is associated with morphological compounding. Taken together, reading Chinese sentences engages the same network as reading other languages, with particular reliance on integration of semantic constituents. PMID:29194453
Amichetti, Nicole M; Atagi, Eriko; Kong, Ying-Yee; Wingfield, Arthur
The increasing numbers of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on effectiveness of word recognition. Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated as the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. This sensitivity, however, also resulted in a greater degree of interference from other words that might also be activated by the context, with negative effects on ease of word recognition. These results are consistent with an age-related inhibition deficit extending to the domain of semantic constraints on word recognition.
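Response entropy of the kind described here is a Shannon entropy over the distribution of completions elicited by a sentence frame in norming data; a minimal sketch (the example completions are invented) is:

    import math
    from collections import Counter

    def response_entropy(responses):
        """Shannon entropy (bits) of the distribution of completions that a
        sentence context elicits: higher when many different responses are
        produced with similar probability, i.e., more competition."""
        counts = Counter(responses)
        total = sum(counts.values())
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # Invented norming data for "She swept the floor with a ..."
    low_entropy = ["broom"] * 18 + ["mop"] * 2
    high_entropy = ["broom"] * 6 + ["mop"] * 5 + ["brush"] * 5 + ["rag"] * 4
    print(response_entropy(low_entropy), response_entropy(high_entropy))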
Zhu, Shufeng; Wong, Lena L N; Wang, Bin; Chen, Fei
2017-07-12
The aim of the present study was to evaluate the influence of lexical tone contour and age on sentence perception in quiet and in noise conditions in Mandarin-speaking children ages 7 to 11 years with normal hearing. Test materials were synthesized Mandarin sentences, each word with a manipulated lexical contour, that is, normal contour, flat contour, or a tone contour randomly selected from the four Mandarin lexical tone contours. A convenience sample of 75 Mandarin-speaking participants with normal hearing, ages 7, 9, and 11 years (25 participants in each age group), was selected. Participants were asked to repeat the synthesized speech in quiet and in speech spectrum-shaped noise at 0 dB signal-to-noise ratio. In quiet, sentence recognition by the 11-year-old children was similar to that of adults, and misrepresented lexical tone contours did not have a detrimental effect. However, the performance of children ages 9 and 7 years was significantly poorer. The performance of all three age groups, especially the younger children, declined significantly in noise. The present research suggests that lexical tone contour plays an important role in Mandarin sentence recognition, and misrepresented tone contours result in greater difficulty in sentence recognition in younger children. These results imply that maturation and/or language use experience play a role in the processing of tone contours for Mandarin speech understanding, particularly in noise.
Raes, Filip; Hermans, Dirk; Williams, J. Mark G.; Eelen, Paul
2007-01-01
Overgeneral memory (OGM) has been proposed as a vulnerability factor for depression (Williams et al., 2007) or depressive reactivity to stressful life-events (e.g., Gibbs & Rude, 2004). Traditionally, a cue word procedure known as the Autobiographical Memory Test (AMT; Williams & Broadbent, 1986) is used to assess OGM. Although frequently and validly used in clinical populations, there is evidence suggesting that the AMT is insufficiently sensitive to measure OGM in non-clinical groups. Study 1 evaluated the usefulness of a sentence completion method to assess OGM in non-clinical groups, as an alternative to the AMT. Participants were 197 students who completed the AMT, the Sentence Completion for Events from the Past Test (SCEPT), a depression measure, and visual analogue scales assessing ruminative thinking. Results showed that the mean proportion of overgeneral responses was markedly higher for the SCEPT than for the standard AMT. Also, overgeneral responding on the SCEPT was positively associated to depression scores and depressive rumination scores, whereas overgeneral responding on the AMT was not. Results suggest that the SCEPT, relative to the AMT, is a more sensitive instrument to measure OGM, at least in non-clinical populations. Study 2 further showed that this enhanced sensitivity is most likely due to the omission of the instruction to be specific rather than to the SCEPT's sentence completion format (as opposed to free recall to cue words). PMID:17613793
Voice similarity in identical twins.
Van Gysel, W D; Vercammen, J; Debruyne, F
2001-01-01
If people are asked to visually discriminate the two individuals of a monozygotic twin (MT) pair, they mostly run into trouble. Does this problem also exist when listening to twin voices? Twenty female and 10 male MT voice pairs were randomly assembled with one "strange" voice to create voice trios. The listeners (10 female students in Speech and Language Pathology) were asked to label the twins (voices 1-2, 1-3 or 2-3) in two conditions: two standard sentences read aloud and a 2.5-second midsection of a sustained /a/. The proportion of correctly labelled twins was 82% (sentences) and 63% (sustained /a/) for female voices, and 74% and 52% for male voices, all significantly greater than chance (33%). The acoustic analysis revealed a high intra-twin correlation for the speaking fundamental frequency (SFF) of the sentences and the fundamental frequency (F0) of the sustained /a/. The voice pitch could thus have been a useful characteristic in the perceptual identification of the twins. We conclude that there is a greater perceptual resemblance between the voices of identical twins than between voices without genetic relationship. The identification, however, is not perfect. The voice pitch possibly contributes to the correct twin identifications.
Tjaden, Kris; Sussman, Joan E; Wilding, Gregory E
2014-06-01
The perceptual consequences of rate reduction, increased vocal intensity, and clear speech were studied in speakers with multiple sclerosis (MS), Parkinson's disease (PD), and healthy controls. Seventy-eight speakers read sentences in habitual, clear, loud, and slow conditions. Sentences were equated for peak amplitude and mixed with multitalker babble for presentation to listeners. Using a computerized visual analog scale, listeners judged intelligibility or speech severity as operationally defined in Sussman and Tjaden (2012). Loud and clear but not slow conditions improved intelligibility relative to the habitual condition. With the exception of the loud condition for the PD group, speech severity did not improve above habitual and was reduced relative to habitual in some instances. Intelligibility and speech severity were strongly related, but relationships for disordered speakers were weaker in clear and slow conditions versus habitual. Both clear and loud speech show promise for improving intelligibility and maintaining or improving speech severity in multitalker babble for speakers with mild dysarthria secondary to MS or PD, at least as these perceptual constructs were defined and measured in this study. Although scaled intelligibility and speech severity overlap, the metrics further appear to have some separate value in documenting treatment-related speech changes.
Zekveld, Adriana A; Festen, Joost M; Kramer, Sophia E
2013-08-01
In this study, the authors assessed the influence of masking level (29% or 71% sentence perception) and test modality on the processing load during language perception as reflected by the pupil response. In addition, the authors administered a delayed cued stimulus recall test to examine whether processing load affected the encoding of the stimuli in memory. Participants performed speech and text reception threshold tests, during which the pupil response was measured. In the cued recall test, the first half of correctly perceived sentences was presented, and participants were asked to complete the sentences. Reading and listening span tests of working memory capacity were presented as well. Regardless of test modality, the pupil response indicated higher processing load in the 29% condition than in the 71% correct condition. Cued recall was better for the 29% condition. The consistent effect of masking level on the pupil response during listening and reading support the validity of the pupil response as a measure of processing load during language perception. The absent relation between pupil response and cued recall may suggest that cued recall is not directly related to processing load, as reflected by the pupil response.
Neural correlates of Korean proverb processing: A functional magnetic resonance imaging study.
Yi, You Gyoung; Kim, Dae Yul; Shim, Woo Hyun; Oh, Joo Young; Kim, Sung Hyun; Kim, Ho Sung
2017-10-01
The Korean language is based on a syntactic system that is different from other languages. This study investigated the processing area of the Korean proverb in comparison with the literal sentence using functional magnetic resonance imaging. In addition, the effect of opacity and transparency of proverbs on the activation pattern, when familiarity is set to the same condition, was also examined. The experimental stimuli included 36 proverbs and 18 literal sentences. A cohort of 15 healthy participants silently read each sentence for 6 s. A total of 18 opaque proverbs, 18 transparent proverbs, and 18 literal sentences were presented pseudo-randomly in one of three predesigned sequences. Compared with the literal sentences, a significant activation pattern was observed in the left hemisphere, including the left inferior frontal gyrus, in association with the proverbs. Compared with the transparent proverbs, opaque proverbs elicited more activation in the right supramarginal gyrus and precuneus. Our study confirmed that the left inferior frontal gyrus mediates the retrieval and/or selection of semantic knowledge in the Korean language. The present findings indicated that the right precuneus and the right supramarginal gyrus may be involved in abstract language processing.
Rapid L2 Word Learning through High Constraint Sentence Context: An Event-Related Potential Study
Chen, Baoguo; Ma, Tengfei; Liang, Lijuan; Liu, Huanhuan
2017-01-01
Previous studies have found quantity of exposure, i.e., frequency of exposure (Horst et al., 1998; Webb, 2008; Pellicer-Sánchez and Schmitt, 2010), is important for second language (L2) contextual word learning. Besides this factor, context constraint and L2 proficiency level have also been found to affect contextual word learning (Pulido, 2003; Tekmen and Daloglu, 2006; Elgort et al., 2015; Ma et al., 2015). In the present study, we adopted the event-related potential (ERP) technique and chose high constraint sentences as reading materials to further explore the effects of quantity of exposure and proficiency on L2 contextual word learning. Participants were Chinese learners of English with different English proficiency levels. For each novel word, there were four high constraint sentences with the critical word at the end of the sentence. Learners read sentences and made semantic relatedness judgment afterwards, with ERPs recorded. Results showed that in the high constraint condition where each pseudoword was embedded in four sentences with consistent meaning, N400 amplitude upon this pseudoword decreased significantly as learners read the first two sentences. High proficiency learners responded faster in the semantic relatedness judgment task. These results suggest that in high quality sentence contexts, L2 learners could rapidly acquire word meaning without multiple exposures, and L2 proficiency facilitated this learning process. PMID:29375420
ERIC Educational Resources Information Center
Blything, Liam P.; Davies, Robert; Cain, Kate
2015-01-01
The present study investigated 3- to 7-year-olds' (N = 91) comprehension of two-clause sentences containing the temporal connectives before or after. The youngest children used an order of mention strategy to interpret the relation between clauses: They were more accurate when the presentation order matched the chronological order of events:…
Mobayyen, Forouzan; de Almeida, Roberto G
2005-03-01
One hundred and forty normal undergraduate students participated in a Proactive Interference (PI) experiment with sentences containing verbs from four different semantic and morphological classes (lexical causatives, morphological causatives, and morphologically complex and simplex perception verbs). Past research has shown significant PI build-up effects for semantically and morphologically complex verbs in isolation (de Almeida & Mobayyen, 2004). The results of the present study show that, when embedded into sentence contexts, semantically and morphologically complex verbs do not produce significant PI build-up effects. Different verb classes, however, yield different recall patterns: sentences with semantically complex verbs (e.g., causatives) were recalled significantly better than sentences with semantically simplex verbs (e.g., perception verbs). The implications for the nature of both verb-conceptual representations and category-specific semantic deficits are discussed.
Caplan, David; Michaud, Jennifer; Hufford, Rebecca
2013-10-01
Sixty-one persons with aphasia (PWA) were tested on syntactic comprehension in three tasks: sentence-picture matching, sentence-picture matching with auditory moving window presentation, and object manipulation. There were significant correlations of performances on sentences across tasks. First factors on which all sentence types loaded in unrotated factor analyses accounted for most of the variance in each task. Dissociations in performance between sentence types that differed minimally in their syntactic structures were not consistent across tasks. These results replicate previous results with smaller samples and provide important validation of basic aspects of aphasic performance in this area of language processing. They point to the role of a reduction in processing resources and of the interaction of task demands and parsing and interpretive abilities in the genesis of patient performance. Copyright © 2013 Elsevier Inc. All rights reserved.
2016-01-01
In a touch-screen paradigm, we recorded 3- to 7-year-olds’ (N = 108) accuracy and response times (RTs) to assess their comprehension of 2-clause sentences containing before and after. Children were influenced by order: performance was most accurate when the presentation order of the 2 clauses matched the chronological order of events: “She drank the juice, before she walked in the park” (chronological order) versus “Before she walked in the park, she drank the juice” (reverse order). Differences in RTs for correct responses varied by sentence type: accurate responses were made more speedily for sentences that afforded an incremental processing of meaning. An independent measure of memory predicted this pattern of performance. We discuss these findings in relation to children’s knowledge of connective meaning and the processing requirements of sentences containing temporal connectives. PMID:27690492
ERIC Educational Resources Information Center
Kidd, Evan; Stewart, Andrew J.; Serratrice, Ludovica
2011-01-01
In this paper we report on a visual world eye-tracking experiment that investigated the differing abilities of adults and children to use referential scene information during reanalysis to overcome lexical biases during sentence processing. The results showed that adults incorporated aspects of the referential scene into their parse as soon as it…
Putting lexical constraints in context into the visual-world paradigm.
Novick, Jared M; Thompson-Schill, Sharon L; Trueswell, John C
2008-06-01
Prior eye-tracking studies of spoken sentence comprehension have found that the presence of two potential referents, e.g., two frogs, can guide listeners toward a Modifier interpretation of Put the frog on the napkin... despite strong lexical biases associated with Put that support a Goal interpretation of the temporary ambiguity (Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M. & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634; Trueswell, J. C., Sekerina, I., Hill, N. M. & Logrip, M. L. (1999). The kindergarten-path effect: Studying on-line sentence processing in young children. Cognition, 73, 89-134). This pattern is not expected under constraint-based parsing theories: cue conflict between the lexical evidence (which supports the Goal analysis) and the visuo-contextual evidence (which supports the Modifier analysis) should result in uncertainty about the intended analysis and partial consideration of the Goal analysis. We reexamined these put studies (Experiment 1) by introducing a response time-constraint and a spatial contrast between competing referents (a frog on a napkin vs. a frog in a bowl). If listeners immediately interpret on the... as the start of a restrictive modifier, then their eye movements should rapidly converge on the intended referent (the frog on something). However, listeners showed this pattern only when the phrase was unambiguously a Modifier (Put the frog that's on the...). Syntactically ambiguous trials resulted in transient consideration of the Competitor animal (the frog in something). A reading study was also run on the same individuals (Experiment 2) and performance was compared between the two experiments. Those individuals who relied heavily on lexical biases to resolve a complement ambiguity in reading (The man heard/realized the story had been...) showed increased sensitivity to both lexical and contextual constraints in the put-task; i.e., increased consideration of the Goal analysis in 1-Referent Scenes, but also adeptness at using spatial constraints of prepositions (in vs. on) to restrict referential alternatives in 2-Referent Scenes. These findings cross-validate visual world and reading methods and support multiple-constraint theories of sentence processing in which individuals differ in their sensitivity to lexical contingencies.
Watkins, Greg D; Swanson, Brett A; Suaning, Gregg J
2018-02-22
Cochlear implant (CI) sound processing strategies are usually evaluated in clinical studies involving experienced implant recipients. Metrics which estimate the capacity to perceive speech for a given set of audio and processing conditions provide an alternative means to assess the effectiveness of processing strategies. The aim of this research was to assess the ability of the output signal to noise ratio (OSNR) to accurately predict speech perception. It was hypothesized that, compared with the other metrics evaluated in this study, (1) OSNR would have equivalent or better accuracy and (2) OSNR would be the most accurate in the presence of variable levels of speech presentation. For the first time, the accuracy of OSNR as a metric which predicts speech intelligibility was compared, in a retrospective study, with that of the input signal to noise ratio (ISNR) and the short-term objective intelligibility (STOI) metric. Because STOI measured audio quality at the input to a CI sound processor, a vocoder was applied to the sound processor output and STOI was also calculated for the reconstructed audio signal (vocoder short-term objective intelligibility [VSTOI] metric). The figures of merit calculated for each metric were Pearson correlation of the metric and a psychometric function fitted to sentence scores at each predictor value (Pearson sigmoidal correlation [PSIG]), epsilon insensitive root mean square error (RMSE*) of the psychometric function and the sentence scores, and the statistical deviance of the fitted curve to the sentence scores (D). Sentence scores were taken from three existing data sets of Australian Sentence Test in Noise (AuSTIN) results. The AuSTIN tests were conducted with experienced users of the Nucleus CI system. The score for each sentence was the proportion of morphemes the participant correctly repeated. In data set 1, all sentences were presented at 65 dB sound pressure level (SPL) in the presence of four-talker babble noise. Each block of sentences used an adaptive procedure, with the speech presented at a fixed level and the ISNR varied. In data set 2, sentences were presented at 65 dB SPL in the presence of stationary speech weighted noise, street-side city noise, and cocktail party noise. An adaptive ISNR procedure was used. In data set 3, sentences were presented at levels ranging from 55 to 89 dB SPL with two automatic gain control configurations and two fixed ISNRs. For data set 1, the ISNR and OSNR were equally most accurate. STOI was significantly different for deviance (p = 0.045) and RMSE* (p < 0.001). VSTOI was significantly different for RMSE* (p < 0.001). For data set 2, ISNR and OSNR had an equivalent accuracy which was significantly better than that of STOI for PSIG (p = 0.029) and VSTOI for deviance (p = 0.001), RMSE*, and PSIG (both p < 0.001). For data set 3, OSNR was the most accurate metric and was significantly more accurate than VSTOI for deviance, RMSE*, and PSIG (all p < 0.001). ISNR and STOI were unable to predict the sentence scores for this data set. The study results supported the hypotheses. OSNR was found to have an accuracy equivalent to or better than ISNR, STOI, and VSTOI for tests conducted at a fixed presentation level and variable ISNR. OSNR was a more accurate metric than VSTOI for tests with fixed ISNRs and variable presentation levels. Overall, OSNR was the most accurate metric across the three data sets.
OSNR holds promise as a prediction metric which could potentially improve the effectiveness of sound processor research and CI fitting.
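The figures of merit above all derive from fitting a psychometric function to sentence scores as a function of a predictor metric such as OSNR. As a minimal sketch (not the study's actual pipeline: the logistic form, the data values, and the plain RMSE below are illustrative assumptions, whereas the study used an epsilon-insensitive RMSE*), the fit and scoring might look like this:

```python
# Minimal sketch: fit a logistic psychometric function to sentence scores
# against a candidate predictor (e.g., OSNR in dB), then score the fit.
# All names and data values here are illustrative, not from the study.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def psychometric(x, midpoint, slope):
    """Logistic function mapping predictor value to proportion correct."""
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

# Hypothetical data: predictor value and morpheme-correct proportion
osnr = np.array([-12, -9, -6, -3, 0, 3, 6], dtype=float)
scores = np.array([0.05, 0.15, 0.40, 0.55, 0.80, 0.92, 0.97])

params, _ = curve_fit(psychometric, osnr, scores, p0=[-3.0, 1.0])
fitted = psychometric(osnr, *params)

r, _ = pearsonr(fitted, scores)                  # analogue of the PSIG figure
rmse = np.sqrt(np.mean((fitted - scores) ** 2))  # plain RMSE, not RMSE*
print(f"midpoint={params[0]:.2f} dB, slope={params[1]:.2f}, "
      f"r={r:.3f}, RMSE={rmse:.3f}")
```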
Subtle Increases in Interletter Spacing Facilitate the Encoding of Words during Normal Reading
Perea, Manuel; Gomez, Pablo
2012-01-01
Background Several recent studies have revealed that words presented with a small increase in interletter spacing are identified faster than words presented with the default interletter spacing (i.e., w a t e r faster than water). Modeling work has shown that this advantage occurs at an early encoding level. Given the implications of this finding for the ease of reading in the new digital era, here we examined whether the beneficial effect of small increases in interletter spacing can be generalized to a normal reading situation. Methodology We conducted an experiment in which the participant’s eyes were monitored when reading sentences varying in interletter spacing: i) sentences were presented with the default (0.0) interletter spacing; ii) sentences presented with a +1.0 interletter spacing; and iii) sentences presented with a +1.5 interletter spacing. Principal Findings Results showed shorter fixation duration times as an inverse function of interletter spacing (i.e., fixation durations were briefest with +1.5 spacing and longest with the default spacing). Conclusions Subtle increases in interletter spacing facilitate the encoding of the fixated word during normal reading. Thus, interletter spacing is a parameter that may affect the ease of reading, and it could be adjustable in future implementations of e-book readers. PMID:23082178
Foucart, Alice; Garcia, Xavier; Ayguasanosa, Meritxell; Thierry, Guillaume; Martin, Clara; Costa, Albert
2015-08-01
The present study investigated how pragmatic information is integrated during L2 sentence comprehension. We put forward that the differences often observed between L1 and L2 sentence processing may reflect differences in how various types of information are used to process a sentence, and not necessarily differences between native and non-native linguistic systems. Based on the idea that when a cue is missing or distorted, one relies more on other cues available, we hypothesised that late bilinguals favour the cues that they master during sentence processing. To verify this hypothesis we investigated whether late bilinguals take the speaker's identity (inferred from the voice) into account when incrementally processing speech and whether this affects their online interpretation of the sentence. To do so, we adapted the study by Van Berkum, J.J.A., Van den Brink, D., Tesink, C.M.J.Y., Kos, M., Hagoort, P., 2008 (J. Cogn. Neurosci. 20(4), 580-591), in which sentences with either semantic violations or pragmatic inconsistencies were presented. While both the native and the non-native groups showed a similar response to semantic violations (N400), their response to speakers' inconsistencies slightly diverged; late bilinguals showed a positivity much earlier than native speakers (LPP). These results suggest that, like native speakers, late bilinguals process semantic and pragmatic information incrementally; however, what seems to differ between L1 and L2 processing is the time-course of the different processes. We propose that this difference may originate from late bilinguals' sensitivity to pragmatic information and/or their ability to efficiently make use of the information provided by the sentence context to generate expectations in relation to pragmatic information during L2 sentence comprehension. In other words, late bilinguals may rely more on speaker identity than native speakers do when they face semantic integration difficulties. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Liability for loss of chance in neurological conditions in the Spanish public healthcare system].
Sardinero-García, Carlos; Santiago-Sáez, Andrés; Bravo-Llatas, M Del Carmen; Perea-Pérez, Bernardo; Albarrán-Juan, M Elena; Labajo-González, Elena; Benito-León, Julián
To analyse the sentences due to loss of chance that were passed by the Contentious-Administrative Court (i.e., in public medicine), in which both the origin of the disease to be treated and the damages were neurological. We analysed the 90 sentences concerning neurological conditions that referred to the concept of loss of chance that were passed in Spain from 2003 (year of the first sentence) until May 2014. Of the 90 sentences, 52 (57.8%) were passed due to diagnostic error and 30 (33.3%) due to inadequate treatment. A total of 72 (80.0%) of the sentences were passed from 2009 onwards, which equates to more than a 300% increase with respect to the 18 (20.0%) issued in the first six years of the study (from 2003 to 2008). Most of the patients (66.7%) were men, and 61.1% presented sequelae. Hypoxic-ischaemic encephalopathy (14.4%) and spinal cord disorders (14.4%) were the most common conditions to lead to sentencing. The litigant activity due to loss of chance in neurological disease in the Spanish public healthcare system has significantly increased in the last few years. The sentences were mainly passed because of diagnostic error or inadequate treatment. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
Repetition and comprehension of spoken sentences by reading-disabled children.
Shankweiler, D; Smith, S T; Mann, V A
1984-11-01
The language problems of reading-disabled elementary school children are not confined to written language alone. These children often exhibit problems of ordered recall of verbal materials that are equally severe whether the materials are presented in printed or in spoken form. Sentences that pose problems of pronoun reference might be expected to place a special burden on short-term memory because close grammatical relationships obtain between words that are distant from one another. With this logic in mind, third-grade children with specific reading disability and classmates matched for age and IQ were tested on five sentence types, each of which poses a problem in assigning pronoun reference. On one occasion the children were tested for comprehension of the sentences by a forced-choice picture verification task. On a later occasion they received the same sentences as a repetition test. Good and poor readers differed significantly in immediate recall of the reflexive sentences, but not in comprehension of them as assessed by picture choice. It was suggested that the pictures provided cues which lightened the memory load, a possibility that could explain why the poor readers were not demonstrably inferior in comprehension of the sentences even though they made significantly more errors than the good readers in recalling them.
Baldwin, Carryl L; Struckman-Johnson, David
2002-01-15
Speech displays and verbal response technologies are increasingly being used in complex, high workload environments that require the simultaneous performance of visual and manual tasks. Examples of such environments include the flight decks of modern aircraft, advanced transport telematics systems providing in-vehicle route guidance and navigational information, and mobile communication equipment in emergency and public safety vehicles. Previous research has established an optimum range for speech intelligibility. However, the potential for variations in presentation levels within this range to affect attentional resources and cognitive processing of speech material has not been examined previously. Results of the current experimental investigation demonstrate that as presentation level increases within this 'optimum' range, participants in high workload situations make fewer sentence-processing errors and generally respond faster. Processing errors were more sensitive to changes in presentation level than were measures of reaction time. Implications of these findings are discussed in terms of their application for the design of speech communications displays in complex multi-task environments.
Hackett, Paul M. W.
2016-01-01
When behavior is interpreted in a reliable manner (i.e., robustly across different situations and times) its explained meaning may be seen to possess hermeneutic consistency. In this essay I present an evaluation of the hermeneutic consistency that I propose may be present when the research tool known as the mapping sentence is used to create generic structural ontologies. I also claim that theoretical and empirical validity is a likely result of employing the mapping sentence in research design and interpretation. These claims are non-contentious within the realm of quantitative psychological and behavioral research. However, I extend the scope of both facet theory based research and claims for its structural utility, reliability and validity to philosophical and qualitative investigations. I assert that the hermeneutic consistency of a structural ontology is a product of a structural representation's ontological components and the mereological relationships between these ontological sub-units: the mapping sentence seminally allows for the depiction of such structure. PMID:27065932
Prediction is Production: The missing link between language production and comprehension.
Martin, Clara D; Branzi, Francesca M; Bar, Moshe
2018-01-18
Language comprehension often involves the generation of predictions. It has been hypothesized that such prediction-for-comprehension entails actual language production. Recent studies provided evidence that the production system is recruited during language comprehension, but the link between production and prediction during comprehension remains hypothetical. Here, we tested this hypothesis by comparing prediction during sentence comprehension (primary task) in participants having the production system either available or not (non-verbal versus verbal secondary task). In the primary task, sentences containing an expected or unexpected target noun-phrase were presented during electroencephalography recording. Prediction, measured as the magnitude of the N400 effect elicited by the article (expected versus unexpected), was hindered only when the production system was taxed during sentence context reading. The present study provides the first direct evidence that the availability of the speech production system is necessary for generating lexical prediction during sentence comprehension. Furthermore, these important results provide an explanation for the recruitment of language production during comprehension.
Gilead, Michael; Liberman, Nira; Maril, Anat
2014-01-01
Conscious thought involves an interpretive inner monologue pertaining to our waking experiences. Previous studies focused on the mechanisms that allow us to remember externally presented stimuli, but the neurobiological basis of the ability to remember one's internal mentations remains unknown. In order to investigate this question, we presented participants with sentences and scanned their neural activity using functional magnetic resonance imaging (fMRI) as they incidentally produced spontaneous internal mentations. After the scan, we presented the sentences again and asked participants to describe the specific thoughts they had during the initial presentation of each sentence. We categorized experimental trials for each participant according to whether they resulted in subsequently reported internal mentations or not. The results show that activation within classic language processing areas was associated with participants' ability to recollect their thoughts. Activation within mostly right lateralized and medial "default-mode network" regions was associated with not reporting such thoughts.
Wlotko, Edward W.; Federmeier, Kara D.
2015-01-01
Predictive processing is a core component of normal language comprehension, but the brain may not engage in prediction to the same extent in all circumstances. This study investigates the effects of timing on anticipatory comprehension mechanisms. Event-related brain potentials (ERPs) were recorded while participants read two-sentence mini-scenarios previously shown to elicit prediction-related effects for implausible items that are categorically related to expected items (‘They wanted to make the hotel look more like a tropical resort. So along the driveway they planted rows of PALMS/PINES/TULIPS.’). The first sentence of every pair was presented in its entirety and was self-paced. The second sentence was presented word-by-word with a fixed stimulus onset asynchrony (SOA) of either 500 ms or 250 ms that was manipulated in a within-subjects blocked design. Amplitudes of the N400 ERP component are taken as a neural index of demands on semantic processing. At 500 ms SOA, implausible words related to predictable words elicited reduced N400 amplitudes compared to unrelated words (PINES vs. TULIPS), replicating past studies. At 250 ms SOA this prediction-related semantic facilitation was diminished. Thus, timing is a factor in determining the extent to which anticipatory mechanisms are engaged. However, we found evidence that prediction can sometimes be engaged even under speeded presentation rates. Participants who first read sentences in the 250 ms SOA block showed no effect of semantic similarity for this SOA, although these same participants showed the effect in the second block with 500 ms SOA. However, participants who first read sentences in the 500 ms SOA block continued to show the N400 semantic similarity effect in the 250 ms SOA block. These findings add to results showing that the brain flexibly allocates resources to most effectively achieve comprehension goals given the current processing environment. PMID:25987437
Bader, Markus
2018-01-01
This paper presents three acceptability experiments investigating German verb-final clauses in order to explore possible sources of sentence complexity during human parsing. The point of departure was De Vries et al.'s (2011) generalization that sentences with three or more crossed or nested dependencies are too complex for being processed by the human parsing mechanism without difficulties. This generalization is partially based on findings from Bach et al. (1986) concerning the acceptability of complex verb clusters in German and Dutch. The first experiment tests this generalization by comparing two sentence types: (i) sentences with three nested dependencies within a single clause that contains three verbs in a complex verb cluster; (ii) sentences with four nested dependencies distributed across two embedded clauses, one center-embedded within the other, each containing a two-verb cluster. The results show that sentences with four nested dependencies are judged as acceptable as control sentences with only two nested dependencies, whereas sentences with three nested dependencies are judged as only marginally acceptable. This argues against De Vries et al.'s (2011) claim that the human parser can process no more than two nested dependencies. The results are used to refine the Verb-Cluster Complexity Hypothesis of Bader and Schmid (2009a). The second and the third experiment investigate sentences with four nested dependencies in more detail in order to explore alternative sources of sentence complexity: the number of predicted heads to be held in working memory (storage cost in terms of the Dependency Locality Theory [DLT], Gibson, 2000) and the length of the involved dependencies (integration cost in terms of the DLT). Experiment 2 investigates sentences for which storage cost and integration cost make conflicting predictions. The results show that storage cost outweighs integration cost. Experiment 3 shows that increasing integration cost in sentences with two degrees of center embedding leads to decreased acceptability. Taken together, the results argue in favor of a multifactorial account of the limitations on center embedding in natural languages. PMID:29410633
Cognitive control and its impact on recovery from aphasic stroke
Warren, Jane E.; Geranmayeh, Fatemeh; Woodhead, Zoe; Leech, Robert; Wise, Richard J. S.
2014-01-01
Aphasic deficits are usually only interpreted in terms of domain-specific language processes. However, effective human communication and tests that probe this complex cognitive skill are also dependent on domain-general processes. In the clinical context, it is a pragmatic observation that impaired attention and executive functions interfere with the rehabilitation of aphasia. One system that is important in cognitive control is the salience network, which includes dorsal anterior cingulate cortex and adjacent cortex in the superior frontal gyrus (midline frontal cortex). This functional imaging study assessed domain-general activity in the midline frontal cortex, which was remote from the infarct, in relation to performance on a standard test of spoken language in 16 chronic aphasic patients both before and after a rehabilitation programme. During scanning, participants heard simple sentences, with each listening trial followed immediately by a trial in which they repeated back the previous sentence. Listening to sentences in the context of a listen–repeat task was expected to activate regions involved in both language-specific processes (speech perception and comprehension, verbal working memory and pre-articulatory rehearsal) and a number of task-specific processes (including attention to utterances and attempts to overcome pre-response conflict and decision uncertainty during impaired speech perception). To visualize the same system in healthy participants, sentences were presented to them as three-channel noise-vocoded speech, thereby impairing speech perception and assessing whether this evokes domain general cognitive systems. As expected, contrasting the more difficult task of perceiving and preparing to repeat noise-vocoded speech with the same task on clear speech demonstrated increased activity in the midline frontal cortex in the healthy participants. The same region was activated in the aphasic patients as they listened to standard (undistorted) sentences. Using a region of interest defined from the data on the healthy participants, data from the midline frontal cortex was obtained from the patients. Across the group and across different scanning sessions, activity correlated significantly with the patients’ communicative abilities. This correlation was not influenced by the sizes of the lesion or the patients’ chronological ages. This is the first study that has directly correlated activity in a domain general system, specifically the salience network, with residual language performance in post-stroke aphasia. It provides direct evidence in support of the clinical intuition that domain-general cognitive control is an essential factor contributing to the potential for recovery from aphasic stroke. PMID:24163248
DeCaro, Renee; Peelle, Jonathan E; Grossman, Murray; Wingfield, Arthur
2016-01-01
Reduced hearing acuity is among the most prevalent of chronic medical conditions among older adults. An experiment is reported in which comprehension of spoken sentences was tested for older adults with good hearing acuity or with a mild-to-moderate hearing loss, and young adults with age-normal hearing. Comprehension was measured by participants' ability to determine the agent of an action in sentences that expressed this relation with a syntactically less complex subject-relative construction or a syntactically more complex object-relative construction. Agency determination was further challenged by inserting a prepositional phrase into sentences between the person performing an action and the action being performed. As a control, prepositional phrases of equivalent length were also inserted into sentences in a non-disruptive position. Effects on sentence comprehension of age, hearing acuity, prepositional phrase placement and sound level of stimulus presentations appeared only for comprehension of sentences with the more syntactically complex object-relative structures. Working memory as tested by reading span scores accounted for a significant amount of the variance in comprehension accuracy. Once working memory capacity and hearing acuity were taken into account, chronological age among the older adults contributed no further variance to comprehension accuracy. Results are discussed in terms of the positive and negative effects of sensory-cognitive interactions in comprehension of spoken sentences and lend support to a framework in which domain-general executive resources, notably verbal working memory, play a role in both linguistic and perceptual processing.
Reconciling Time, Space and Function: A New Dorsal-Ventral Stream Model of Sentence Comprehension
ERIC Educational Resources Information Center
Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias
2013-01-01
We present a new dorsal-ventral stream framework for language comprehension which unifies basic neurobiological assumptions (Rauschecker & Scott, 2009) with a cross-linguistic neurocognitive sentence comprehension model (eADM; Bornkessel & Schlesewsky, 2006). The dissociation between (time-dependent) syntactic structure-building and…
Measuring effectiveness of semantic cues in degraded English sentences in non-native listeners.
Shi, Lu-Feng
2014-01-01
This study employed Boothroyd and Nittrouer's (1988) k to directly quantify the effectiveness of native versus non-native listeners' use of semantic cues. Listeners were presented with speech-perception-in-noise sentences processed at three levels of concurrent multi-talker babble and reverberation. For each condition, 50 sentences with multiple semantic cues and 50 with minimal semantic cues were randomly presented. Listeners verbally reported and wrote down the target words. The metric, k, was derived from percent-correct scores for sentences with and without semantics. Ten native and 33 non-native listeners participated. The presence of semantics increased recognition benefit by over 250% for natives, but access to semantics remained limited for non-native listeners (90-135%). The k was comparable across conditions for native listeners, but level-dependent for non-natives. The k for non-natives was significantly different from 1 in all conditions, suggesting that semantic cues, though reduced in importance in difficult conditions, were helpful for non-natives. Non-natives as a group were not as effective as natives in using semantics to facilitate English sentence recognition. Poor listening conditions were particularly adverse to the use of semantics in non-natives, who may rely on clear acoustic-phonetic cues before benefitting from semantic cues when recognizing connected speech.
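The k metric itself is a one-line computation. Below is a minimal sketch assuming the standard Boothroyd-Nittrouer formulation (the abstract does not restate the formula), with hypothetical proportion-correct scores:

```python
import math

def k_factor(p_with_context, p_without_context):
    """Boothroyd-Nittrouer k: ratio of log error probabilities.
    Derived from p_with = 1 - (1 - p_without) ** k, so k = 1 means
    no contextual benefit and k > 1 means context reduces errors."""
    return math.log(1.0 - p_with_context) / math.log(1.0 - p_without_context)

# Hypothetical proportion-correct scores for one listening condition
print(round(k_factor(0.85, 0.60), 2))  # strong context benefit: k ~ 2.1
print(round(k_factor(0.62, 0.60), 2))  # little benefit: k close to 1
```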
Possibility of death sentence has divergent effect on verdicts for Black and White defendants.
Glaser, Jack; Martin, Karin D; Kahn, Kimberly B
2015-12-01
When anticipating the imposition of the death penalty, jurors may be less inclined to convict defendants. On the other hand, minority defendants have been shown to be treated more punitively, particularly in capital cases. Given that the influence of anticipated sentence severity on verdicts may vary as a function of defendant race, the goal of this study was to test the independent and interactive effects of these factors. We conducted a survey-embedded experiment with a nationally representative sample to examine the effect on verdicts of sentence severity as a function of defendant race, presenting respondents with a triple murder trial summary that manipulated the maximum penalty (death vs. life without parole) and the race of the defendant. Respondents who were told life-without-parole was the maximum sentence were not significantly more likely to convict Black (67.7%) than White (66.7%) defendants. However, when death was the maximum sentence, respondents presented with Black defendants were significantly more likely to convict (80.0%) than were those with White defendants (55.1%). The results indicate that the death penalty may be a cause of racial disparities in criminal justice, and implicate threats to civil rights and to effective criminal justice. © 2015 APA, all rights reserved.
Musical Emotions: Functions, Origins, Evolution
2010-01-01
…might be contentious) neural mechanisms added to our perception of originally mechanical properties of the ear. I’ll add that Helmholtz did not touch the main… significant part of conceptual perception is an unconscious process; for example, visual perception takes about 150 ms, which is a long time when measured… missing in terms of neural mechanisms? How do children learn which words and sentences correspond to which objects and situations? Many psychologists…
Are Emojis Creating a New or Old Visual Language for New Generations? A Socio-Semiotic Study
ERIC Educational Resources Information Center
Alshenqeeti, Hamza
2016-01-01
The increasing use of emojis, digital images that can represent a word or feeling in a text or email, and the fact that they can be strung together to create a sentence with real and full meaning raises the question of whether they are creating a new language amongst technologically savvy youth, or devaluing existing language. There is however a…
Buil-Legaz, Lucia; Aguilar-Mediavilla, Eva; Adrover-Roig, Daniel
2016-10-01
Language development in children with Specific Language Impairment (SLI) is still poorly understood, especially if children with SLI are bilingual. This study describes the longitudinal trajectory of several linguistic abilities in bilingual children with SLI relative to bilingual control children matched by age and socioeconomic status. A set of measures of non-word repetition, sentence repetition, phonological awareness, rapid automatic naming and verbal fluency were collected at three time points, from 6 to 12 years of age, using a prospective longitudinal design. Results revealed that, at all ages, children with SLI obtained lower values in measures of sentence repetition, non-word repetition, phonological fluency and phonological awareness (without visual cues) when compared to typically developing children. Other measures, such as rapid automatic naming, improved over time, given that differences at 6 years of age did not persist at later points of testing. Other linguistic measures, such as phonological awareness (with visual cues) and semantic fluency, were equivalent between the two groups across time. Children with SLI manifest persistent difficulties in tasks that involve manipulating segments of words and maintaining verbal units active in phonological working memory, while other abilities, such as access to underlying phonological representations, are unaffected.
Semantic dementia and persisting Wernicke's aphasia: linguistic and anatomical profiles.
Ogar, J M; Baldo, J V; Wilson, S M; Brambati, S M; Miller, B L; Dronkers, N F; Gorno-Tempini, M L
2011-04-01
Few studies have directly compared the clinical and anatomical characteristics of patients with progressive aphasia to those of patients with aphasia caused by stroke. In the current study we examined fluent forms of aphasia in these two groups, specifically semantic dementia (SD) and persisting Wernicke's aphasia (WA) due to stroke. We compared 10 patients with SD to 10 age- and education-matched patients with WA in three language domains: language comprehension (single words and sentences), spontaneous speech and visual semantics. Neuroanatomical involvement was analyzed using disease-specific image analysis techniques: voxel-based morphometry (VBM) for patients with SD and overlays of digitized lesion reconstructions in patients with WA. Patients with SD and WA were both impaired on tasks that involved visual semantics, but patients with SD were less impaired in spontaneous speech and sentence comprehension. The anatomical findings showed that different regions were most affected in the two disorders: the left anterior temporal lobe in SD and the left posterior middle temporal gyrus in chronic WA. This study highlights that the two syndromes classically associated with language comprehension deficits in aphasia due to stroke and neurodegenerative disease are clinically distinct, most likely due to distinct distributions of damage in the temporal lobe. Copyright © 2010 Elsevier Inc. All rights reserved.
Page mode reading with simulated scotomas: a modest effect of interline spacing on reading speed.
Bernard, Jean-Baptiste; Scherlen, Anne-Catherine; Castet, Eric
2007-12-01
Crowding is thought to be one potent limiting factor of reading in peripheral vision. While several studies investigated how crowding between horizontally adjacent letters or words can influence eccentric reading, little attention has been paid to the influence of vertically adjacent lines of text. The goal of this study was to examine the dependence of page mode reading performance (speed and accuracy) on interline spacing. A gaze-contingent visual display was used to simulate a visual central scotoma while normally sighted observers read meaningful French sentences following MNREAD principles. The sensitivity of this new material to low-level factors was confirmed by showing strong effects of perceptual learning, print size and scotoma size on reading performance. In contrast, reading speed was only slightly modulated by interline spacing even for the largest range tested: a 26% gain for a 178% increase in spacing. This modest effect sharply contrasts with the dramatic influence of vertical word spacing found in a recent RSVP study. This discrepancy suggests either that vertical crowding is minimized when reading meaningful sentences, or that the interaction between crowding and other factors such as attention and/or visuo-motor control is dependent on the paradigm used to assess reading speed (page vs. RSVP mode).
Effect(s) of Language Tasks on Severity of Disfluencies in Preschool Children with Stuttering.
Zamani, Peyman; Ravanbakhsh, Majid; Weisi, Farzad; Rashedi, Vahid; Naderi, Sara; Hosseinzadeh, Ayub; Rezaei, Mohammad
2017-04-01
Speech disfluency in children can increase or decrease depending on the type of linguistic task presented to them. In this study, the effect of sentence imitation and sentence modeling on the severity of speech disfluencies in preschool children with stuttering is investigated. In this cross-sectional descriptive analytical study, 58 children with stuttering (29 with mild stuttering and 29 with moderate stuttering) and 58 typical children aged between 4 and 6 years old participated. The severity of speech disfluencies was assessed by SSI-3 and TOCS before and after each task. In boys with mild stuttering, the mean stuttering severity scores in the two tasks of sentence imitation and sentence modeling were [Formula: see text] and [Formula: see text], respectively ([Formula: see text]). In boys with moderate stuttering, the stuttering severity scores in the two tasks were [Formula: see text] and [Formula: see text], respectively ([Formula: see text]). In girls with mild stuttering, the stuttering severity scores in the two tasks of sentence imitation and sentence modeling were [Formula: see text] and [Formula: see text], respectively ([Formula: see text]). In girls with moderate stuttering, the mean stuttering severity scores in the two tasks were [Formula: see text] and [Formula: see text], respectively ([Formula: see text]). In typical children of both genders, speech disfluency scores did not differ significantly between the two tasks ([Formula: see text]). In preschool children with mild stuttering and their typically fluent peers, performing the tasks of sentence imitation and sentence modeling did not increase the severity of stuttering. However, in preschool children with moderate stuttering, the sentence modeling task increased the stuttering severity score.
DTU BCI speller: an SSVEP-based spelling system with dictionary support.
Vilic, Adnan; Kjaer, Troels W; Thomsen, Carsten E; Puthusserypady, S; Sorensen, Helge B D
2013-01-01
In this paper, a new brain computer interface (BCI) speller, named DTU BCI speller, is introduced. It is based on the steady-state visual evoked potential (SSVEP) and features dictionary support. The system focuses on simplicity and user friendliness by using a single electrode for signal acquisition and displaying stimuli on a liquid crystal display (LCD). Nine healthy subjects participated in writing full sentences after a five-minute introduction to the system, and obtained an information transfer rate (ITR) of 21.94 ± 15.63 bits/min. The average number of characters written per minute (CPM) was 4.90 ± 3.84, with a best case of 8.74 CPM. All subjects reported systematically on different user friendliness measures, and the overall results indicated the potential of the DTU BCI Speller system. For subjects with high classification accuracies, the introduced dictionary approach greatly reduced the time it took to write full sentences.
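The ITR figure quoted above depends on the number of selectable targets, the classification accuracy, and the selection rate. The abstract does not state which ITR definition was used; the widely used Wolpaw formula is sketched below, with hypothetical numbers that are not from the study:

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits/min.
    Assumes 0 < accuracy <= 1 and n_targets >= 2."""
    n, p = n_targets, accuracy
    if p == 1.0:
        bits_per_selection = math.log2(n)
    else:
        bits_per_selection = (math.log2(n) + p * math.log2(p)
                              + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits_per_selection * selections_per_min

# Hypothetical speller settings: 9 flicker targets, 90% accuracy,
# 10 selections per minute (illustrative values only).
print(f"{wolpaw_itr(9, 0.90, 10):.1f} bits/min")
```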
Luo, Sean X; Shinall, Jacqueline A; Peterson, Bradley S; Gerber, Andrew J
2016-08-01
Adults with autism spectrum disorder (ASD) may describe other individuals differently compared with typical adults. In this study, we first asked participants to describe closely related individuals such as parents and close friends with 10 positive and 10 negative characteristics. We then used standard natural language processing methods to digitize and visualize these descriptions. The complex patterns of these descriptive sentences exhibited a difference in semantic space between individuals with ASD and control participants. Machine learning algorithms were able to automatically detect and discriminate between these two groups. Furthermore, we showed that these descriptive sentences from adults with ASD exhibited fewer connections as defined by word-word co-occurrences in descriptions, and these connections in words formed a less "small-world" like network. Autism Res 2016, 9: 846-853. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
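The co-occurrence connectivity and "small-world" comparison described above can be illustrated compactly. The sketch below is not the study's pipeline; the sentences, the tokenization, and the random-graph baseline are illustrative assumptions:

```python
# Build a word co-occurrence graph from descriptive sentences and compare
# its clustering with a size-matched random graph, the basis of
# "small-world" indices. Sentences and tokenization are illustrative.
import itertools
import networkx as nx

sentences = [
    "my mother is kind and patient",
    "my friend is patient and funny",
    "my father is kind but strict",
]

G = nx.Graph()
for s in sentences:
    words = set(s.split())
    # link every pair of words that co-occur in the same description
    G.add_edges_from(itertools.combinations(words, 2))

clustering = nx.average_clustering(G)
path_len = nx.average_shortest_path_length(G)  # connected for this toy input
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
print(f"clustering={clustering:.2f}, mean path={path_len:.2f}, "
      f"random-graph clustering={nx.average_clustering(R):.2f}")
```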
Chen, Xuqian; Yang, Wei; Ma, Lijun; Li, Jiaxin
2018-01-01
Recent findings have shown that information about changes in an object's environmental location in the context of discourse is stored in working memory during sentence comprehension. However, in these studies, changes in the object's location were always consistent with world knowledge (e.g., in “The writer picked up the pen from the floor and moved it to the desk,” the floor and the desk are both common locations for a pen). How do people accomplish comprehension when the object-location information in working memory is inconsistent with world knowledge (e.g., a pen being moved from the floor to the bathtub)? In two visual world experiments, with a “look-and-listen” task, we used eye-tracking data to investigate comprehension of sentences that described location changes under different conditions of appropriateness (i.e., the object and its location were typically vs. unusually coexistent, based on world knowledge) and antecedent context (i.e., contextual information that did vs. did not temporarily normalize unusual coexistence between object and location). Results showed that listeners' retrieval of the critical location was affected by both world knowledge and working memory, and the effect of world knowledge was reduced when the antecedent context normalized unusual coexistence of object and location. More importantly, activation of world knowledge and working memory seemed to change during the comprehension process. These results are important because they demonstrate that interference between world knowledge and information in working memory appears to be activated dynamically during sentence comprehension. PMID:29520249
The course of language functions after temporal lobe epilepsy surgery: a prospective study.
Giovagnoli, A R; Parente, A; Didato, G; Manfredi, V; Deleo, F; Tringali, G; Villani, F
2016-12-01
Anterior temporal lobectomy (ATL) within the language-dominant hemisphere can impair naming. This prospective study examined the pre-operative to post-operative course of different language components, clarifying which changes are relevant within the short-term and long-term outcome of language. Patients with drug-resistant temporal lobe epilepsy (TLE) were evaluated using the Token, Boston Naming and Word Fluency tests assessing sentence comprehension and word-finding on visual, semantic or phonemic cues. A total of 106 patients were evaluated before and 6 months, 1 and 2 years after ATL; 60 patients were also evaluated after 5 years and 38 controls were assessed at baseline. Seizure outcome was comparable between the left and right TLE patients. Before surgery, naming and word fluency were impaired in the left and right TLE patients, whereas sentence comprehension was normal. After left or right ATL, word fluency progressively improved, naming showed early worsening and late improvement after left ATL and progressive improvement after right ATL, and sentence comprehension did not change. At the 5-year follow-up, naming improvement was clinically significant in 31% and 71% of the left and right TLE patients, respectively. Pre-operative naming, ATL laterality, schooling, and post-operative seizure frequency and number of antiepileptic drugs predicted post-operative naming. Pre-operative word fluency and schooling predicted post-operative word fluency. Left or right TLE can impair word-finding but not sentence comprehension. After ATL, word-finding may improve for a long time, depending on TLE laterality, seizure control and mental reserve. These findings may clarify prognosis prior to treatment. © 2016 EAN.
The Best Question: Explaining the Projection Behavior of Factives
ERIC Educational Resources Information Center
Simons, Mandy; Beaver, David; Roberts, Craige; Tonhauser, Judith
2017-01-01
This article deals with projection in factive sentences. The article first challenges standard assumptions by presenting a series of detailed observations about the interpretations of factive sentences in context, showing that what implication projects, if any, is quite variable and that projection is tightly constrained by prosodic and contextual…
Second Language Writing Classification System Based on Word-Alignment Distribution
ERIC Educational Resources Information Center
Kotani, Katsunori; Yoshimi, Takehiko
2010-01-01
The present paper introduces an automatic classification system for assisting second language (L2) writing evaluation. This system, which classifies sentences written by L2 learners as either native speaker-like or learner-like sentences, is constructed by machine learning algorithms using word-alignment distributions as classification features…
Examining Teacher Grades Using Rasch Measurement Theory
ERIC Educational Resources Information Center
Randall, Jennifer; Engelhard, George, Jr.
2009-01-01
In this study, we present an approach to questionnaire design within educational research based on Guttman's mapping sentences and Many-Facet Rasch Measurement Theory. We designed a 54-item questionnaire using Guttman's mapping sentences to examine the grading practices of teachers. Each item in the questionnaire represented a unique student…
Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension
ERIC Educational Resources Information Center
Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica
2015-01-01
When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…
On the nature of hand-action representations evoked during written sentence comprehension.
Bub, Daniel N; Masson, Michael E J
2010-09-01
We examine the nature of motor representations evoked during comprehension of written sentences describing hand actions. We distinguish between two kinds of hand actions: a functional action, applied when using the object for its intended purpose, and a volumetric action, applied when picking up or holding the object. In Experiment 1, initial activation of both action representations was followed by selection of the functional action, regardless of sentence context. Experiment 2 showed that when the sentence was followed by a picture of the object, clear context-specific effects on evoked action representations were obtained. Experiment 3 established that when a picture of an object was presented alone, the time course of both functional and volumetric actions was the same. These results provide evidence that representations of object-related hand actions are evoked as part of sentence processing. In addition, we discuss the conditions that elicit context-specific evocation of motor representations. 2010 Elsevier B.V. All rights reserved.
Zeng, Tao; Mao, Wen; Lu, Qing
2016-05-25
Scalp-recorded event-related potentials are known to be sensitive to particular aspects of sentence processing. The N400 component is widely recognized as an effect closely related to lexical-semantic processing. The absence of an N400 effect in participants performing tasks in Indo-European languages has been considered evidence that failed syntactic category processing blocks lexical-semantic integration and that syntactic structure building is a prerequisite of semantic analysis. An event-related potential experiment was designed to investigate whether such syntactic primacy can be considered to apply equally to Chinese sentence processing. Besides correct middle constructions, sentences with either a single semantic or a single syntactic violation, as well as a combined syntactic and semantic anomaly, were used in the present research. Results showed that both the purely semantic and the combined violation induced a broad negativity in the 300-500 ms time window, indicating the independence of lexical-semantic integration. These findings provided solid evidence that lexical-semantic parsing plays a crucial role in Chinese sentence comprehension.
Interpreting Quantifier Scope Ambiguity: Evidence of Heuristic First, Algorithmic Second Processing
Dwivedi, Veena D.
2013-01-01
The present work suggests that sentence processing requires both heuristic and algorithmic processing streams, where the heuristic processing strategy precedes the algorithmic phase. This conclusion is based on three self-paced reading experiments in which the processing of two-sentence discourses was investigated, where context sentences exhibited quantifier scope ambiguity. Experiment 1 demonstrates that such sentences are processed in a shallow manner. Experiment 2 uses the same stimuli as Experiment 1 but adds questions to ensure deeper processing. Results indicate that reading times are consistent with a lexical-pragmatic interpretation of number associated with context sentences, but responses to questions are consistent with the algorithmic computation of quantifier scope. Experiment 3 shows the same pattern of results as Experiment 2, despite using stimuli with different lexical-pragmatic biases. These effects suggest that language processing can be superficial, and that deeper processing, which is sensitive to structure, only occurs if required. Implications for recent studies of quantifier scope ambiguity are discussed. PMID:24278439
The role of prominence in Spanish sentence comprehension: An ERP study.
Gattei, Carolina A; Tabullo, Ángel; París, Luis; Wainselboim, Alejandro J
2015-11-01
Prominence is the hierarchical relation among arguments that allows us to understand 'Who did what to whom' in a sentence. The present study aimed to provide evidence about the role of prominence information for the incremental interpretation of arguments in Spanish. We investigated the time course of neural correlates associated with the comprehension of sentences that require a reversal of argument prominence hierarchization. We also studied how the amount of available prominence information may affect the incremental build-up of verbal expectations. The ERP data revealed that at the disambiguating verb region, object-initial sentences (only one argument available) elicited a centro-parietal negativity with a peak at 400 ms post-onset. Subject-initial sentences (two arguments available) yielded a broadly distributed positivity at around 650 ms. This dissociation suggests that argument interpretation may depend on the arguments' morphosyntactic features, and also on the amount of prominence information available before the verb is encountered. Copyright © 2015 Elsevier Inc. All rights reserved.
Ng, Manwa L; Chen, Yang
2011-12-01
The present study examined English sentence stress produced by native Cantonese speakers who were speaking English as a second language (ESL). Cantonese ESL speakers' proficiency in English stress production, as perceived by English-speaking listeners, was also studied. Acoustical parameters associated with sentence stress, including fundamental frequency (F0), vowel duration, and intensity, were measured from the English sentences produced by 40 Cantonese ESL speakers. Data were compared with those obtained from 40 native speakers of American English. The speech samples were also judged by eight listeners who were native speakers of American English for placement, degree, and naturalness of stress. Results showed that Cantonese ESL speakers were able to use F0, vowel duration, and intensity to differentiate sentence stress patterns. Yet, both female and male Cantonese ESL speakers exhibited consistently higher F0 in stressed words than English speakers. Overall, Cantonese ESL speakers were found to be proficient in using duration and intensity to signal sentence stress, in a way comparable with English speakers. In addition, F0 and intensity were found to correlate closely with perceptual judgements of the degree and naturalness of stress.
Relating (Un)acceptability to Interpretation. Experimental Investigations on Negation
Etxeberria, Urtzi; Tubau, Susagna; Deprez, Viviane; Borràs-Comes, Joan; Espinal, M. Teresa
2018-01-01
Although contemporary linguistic studies routinely use unacceptable sentences to determine the boundary of what falls outside the scope of grammar, investigations far more rarely take into consideration the possible interpretations of such sentences, perhaps because these interpretations are commonly prejudged as irrelevant or unreliable across speakers. In this paper we provide the results of two experiments in which participants had to make parallel acceptability and interpretation judgments of sentences presenting various types of negative dependencies in Basque and in two varieties of Spanish (Castilian Spanish and Basque Country Spanish). Our results show that acceptable sentences are uniformly assigned a single negation reading in the two languages. However, while unacceptable sentences consistently convey single negation in Basque, they are interpreted at chance in both varieties of Spanish. These results confirm that judgment data that distinguish between acceptable and unacceptable negative utterances can inform us not only about an adult’s grammar of his/her particular language but also about interesting cross-linguistic differences. We conclude that the acceptability and interpretation of (un)grammatical negative sentences can serve linguistic theory construction by helping to disentangle basic assumptions about the nature of various negative dependencies. PMID:29456515
Sentence processing and verbal working memory in a white-matter-disconnection patient.
Meyer, Lars; Cunitz, Katrin; Obleser, Jonas; Friederici, Angela D
2014-08-01
The Arcuate Fasciculus/Superior Longitudinal Fasciculus (AF/SLF) is the white-matter bundle that connects posterior superior temporal and inferior frontal cortex. Its causal functional role in sentence processing and verbal working memory is currently under debate. While impairments of sentence processing and verbal working memory often co-occur in patients suffering from AF/SLF damage, it is unclear whether these impairments result from shared white-matter damage to the verbal-working-memory network. The present study sought to specify the behavioral consequences of focal AF/SLF damage for sentence processing and verbal working memory, which were assessed in a single patient suffering from a cleft-like lesion spanning the deep left superior temporal gyrus, sparing most surrounding gray matter. While tractography suggests that the ventral fronto-temporal white-matter bundle is intact in this patient, the AF/SLF was not visible to tractography. In line with the hypothesis that the AF/SLF is causally involved in sentence processing, the patient's performance was selectively impaired on sentences that jointly involve both complex word orders and long word-storage intervals. However, the patient was unimpaired on sentences that only involved long word-storage intervals without complex word orders. In contrast, the patient performed generally worse than a control group across standard verbal-working-memory tests. We conclude that the AF/SLF not only plays a causal role in sentence processing, linking regions of the left dorsal inferior frontal gyrus to the temporo-parietal region, but also plays a crucial role in verbal working memory, linking regions of the left ventral inferior frontal gyrus to the left temporo-parietal region. Together, the specific sentence-processing impairment and the more general verbal-working-memory impairment may imply that the AF/SLF subserves both sentence processing and verbal working memory, possibly with the AF and the SLF each supporting one of these functions. Copyright © 2014 Elsevier Ltd. All rights reserved.
The development and validation of the Closed-set Mandarin Sentence (CMS) test.
Tao, Duo-Duo; Fu, Qian-Jie; Galvin, John J; Yu, Ya-Feng
2017-09-01
Matrix-styled sentence tests offer a closed-set paradigm that may be useful when evaluating speech intelligibility. Ideally, sentence test materials should reflect the distribution of phonemes within the target language. We developed and validated the Closed-set Mandarin Sentence (CMS) test to assess Mandarin speech intelligibility in noise. CMS test materials were selected to be familiar words and to represent the natural distribution of vowels, consonants, and lexical tones found in Mandarin Chinese. Ten key words in each of five categories (Name, Verb, Number, Color, and Fruit) were produced by a native Mandarin talker, resulting in a total of 50 words that could be combined to produce 100,000 unique sentences. Normative data were collected in 10 normal-hearing, adult Mandarin-speaking Chinese listeners using a closed-set test paradigm. Two test runs were conducted for each subject, and 20 sentences per run were randomly generated while ensuring that each word was presented only twice in each run. First, the levels of the words in each category were adjusted to produce equal intelligibility in noise. Test-retest reliability for word-in-sentence recognition was excellent according to Cronbach's alpha (0.952). After the category level adjustments, speech reception thresholds (SRTs) for sentences in noise, defined as the signal-to-noise ratio (SNR) that produced 50% correct whole-sentence recognition, were adaptively measured by adjusting the SNR according to the correctness of the response. The mean SRT was -7.9 (SE=0.41) and -8.1 (SE=0.34) dB for runs 1 and 2, respectively. The mean standard deviation across runs was 0.93 dB, and paired t-tests showed no significant difference between runs 1 and 2 (p=0.74) despite random sentences being generated for each run and each subject. The results suggest that the CMS provides a large stimulus set with which to repeatedly and reliably measure Mandarin-speaking listeners' speech understanding in noise using a closed-set paradigm.
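The adaptive SRT tracking described above lends itself to a simple staircase implementation. The sketch below is a minimal illustration, not the authors' exact procedure: it assumes a 1-down/1-up rule on whole-sentence correctness (which converges near the 50%-correct point), and the step size, starting SNR, trial count, and the `present_trial` callback are all hypothetical.

```python
# Minimal 1-down/1-up staircase for estimating a speech reception threshold
# (SRT), the SNR producing ~50% correct whole-sentence recognition.
# Step size, starting SNR, and trial count are illustrative assumptions.

def run_adaptive_srt(present_trial, start_snr=0.0, step_db=2.0, n_trials=20):
    """present_trial(snr) -> True if the whole sentence was repeated correctly."""
    snr = start_snr
    reversals, last_direction = [], None
    for _ in range(n_trials):
        correct = present_trial(snr)
        direction = -1 if correct else +1  # harder after a hit, easier after a miss
        if last_direction is not None and direction != last_direction:
            reversals.append(snr)          # record the SNR at each reversal
        last_direction = direction
        snr += direction * step_db
    # A common convention: estimate the SRT as the mean SNR at the reversals.
    return sum(reversals) / len(reversals) if reversals else snr
```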
Conway, Christopher M.; Deocampo, Joanne A.; Walk, Anne M.; Anaya, Esperanza M.; Pisoni, David B.
2015-01-01
Purpose: The authors investigated the ability of deaf children with cochlear implants (CIs) to use sentence context to facilitate the perception of spoken words. Method: Deaf children with CIs (n = 24) and an age-matched group of children with normal hearing (n = 31) were presented with lexically controlled sentences and were asked to repeat each sentence in its entirety. Performance was analyzed at each of 3 word positions of each sentence (first, second, and third key word). Results: Whereas the children with normal hearing showed robust effects of contextual facilitation—improved speech perception for the final words in a sentence—the deaf children with CIs on average showed no such facilitation. Regression analyses indicated that for the deaf children with CIs, Forward Digit Span scores significantly predicted accuracy scores for all 3 positions, whereas performance on the Stroop Color and Word Test, Children's Version (Golden, Freshwater, & Golden, 2003) predicted how much contextual facilitation was observed at the final word. Conclusions: The pattern of results suggests that some deaf children with CIs do not use sentence context to improve spoken word recognition. The inability to use sentence context may be due to possible interactions between language experience and cognitive factors that affect the ability to successfully integrate temporal–sequential information in spoken language. PMID:25029170
Memory mechanisms supporting syntactic comprehension.
Caplan, David; Waters, Gloria
2013-04-01
Efforts to characterize the memory system that supports sentence comprehension have historically drawn extensively on short-term memory as a source of mechanisms that might apply to sentences. The focus of these efforts has changed significantly in the past decade. As a result of changes in models of short-term working memory (ST-WM) and developments in models of sentence comprehension, the effort to relate entire components of an ST-WM system, such as those in the model developed by Baddeley (Nature Reviews Neuroscience 4: 829-839, 2003) to sentence comprehension has largely been replaced by an effort to relate more specific mechanisms found in modern models of ST-WM to memory processes that support one aspect of sentence comprehension--the assignment of syntactic structure (parsing) and its use in determining sentence meaning (interpretation) during sentence comprehension. In this article, we present the historical background to recent studies of the memory mechanisms that support parsing and interpretation and review recent research into this relation. We argue that the results of this research do not converge on a set of mechanisms derived from ST-WM that apply to parsing and interpretation. We argue that the memory mechanisms supporting parsing and interpretation have features that characterize another memory system that has been postulated to account for skilled performance-long-term working memory. We propose a model of the relation of different aspects of parsing and interpretation to ST-WM and long-term working memory.
The Effect of Dioptric Blur on Reading Performance
Chung, Susana T.L.; Jarvis, Samuel H.; Cheung, Sing-Hang
2013-01-01
Little is known about the systematic impact of blur on reading performance. The purpose of this study was to quantify the effect of dioptric blur on reading performance in a group of normally sighted young adults. We measured monocular reading performance and visual acuity for 19 observers with normal vision, for five levels of optical blur (no blur, 0.5, 1, 2 and 3D). Dioptric blur was induced using convex trial lenses placed in front of the testing eye, with the pupil dilated and in the presence of a 3 mm artificial pupil. Reading performance was assessed using eight versions of the MNREAD Acuity Chart. For each level of dioptric blur, observers read aloud sentences on one of these charts, from large to small print. Reading time for each sentence and the number of errors made were recorded and converted to reading speed in words per minute. Visual acuity was measured using 4-orientation Landolt C stimuli. For all levels of dioptric blur, reading speed increased with print size up to a certain print size and then remained constant at the maximum reading speed. By fitting nonlinear mixed-effects models, we found that the maximum reading speed was minimally affected by blur up to 2D, but was ~23% slower for 3D of blur. When the amount of blur increased from 0 (no blur) to 3D, the threshold print size (the print size corresponding to 80% of the maximum reading speed) increased from 0.01 to 0.88 logMAR, reading acuity worsened from −0.16 to 0.58 logMAR, and visual acuity worsened from −0.19 to 0.64 logMAR. The similar rates of change with blur for threshold print size, reading acuity and visual acuity imply that visual acuity is a good predictor of threshold print size and reading acuity. Like visual acuity, reading performance is susceptible to the degrading effect of optical blur. For increasing amounts of blur, larger print sizes are required to attain the maximum reading speed. PMID:17442363
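The rise-then-plateau relationship between reading speed and print size reported above is commonly summarized with a two-limb ("broken-stick") function whose parameters are the maximum reading speed and the threshold print size. The sketch below fits such a function to hypothetical single-subject data with SciPy; it illustrates the functional form only, not the nonlinear mixed-effects model used in the study, and the data points are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(print_size, max_speed, threshold, slope):
    """Log reading speed: constant at max_speed for print sizes at or above
    the threshold, falling off linearly below it."""
    return np.where(print_size >= threshold,
                    max_speed,
                    max_speed - slope * (threshold - print_size))

# Hypothetical data: print size in logMAR, speed in log10 words per minute.
x = np.array([-0.1, 0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
y = np.array([1.60, 1.90, 2.10, 2.20, 2.25, 2.24, 2.26, 2.25])

(max_speed, threshold, slope), _ = curve_fit(broken_stick, x, y, p0=[2.2, 0.3, 2.0])
print(f"max speed: {10**max_speed:.0f} wpm, threshold print size: {threshold:.2f} logMAR")
```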
The effect of orthographic neighborhood in the reading span task.
Robert, Christelle; Postal, Virginie; Mathey, Stéphanie
2015-04-01
This study aimed at examining whether and to what extent orthographic neighborhood of words influences performance in a working memory span task. Twenty-five participants performed a reading span task in which final words to be memorized had either no higher frequency orthographic neighbor or at least one. In both neighborhood conditions, each participant completed three series of two, three, four, or five sentences. Results indicated an interaction between orthographic neighborhood and list length. In particular, an inhibitory effect of orthographic neighborhood on recall appeared in list length 5. A view is presented suggesting that words with higher frequency neighbors require more resources to be memorized than words with no such neighbors. The implications of the results are discussed with regard to memory processes and current models of visual word recognition.
Ishizuka, K; Kashiwakura, M; Oiji, A
1998-05-01
In order to explore a possible association between psychiatric symptoms and eye movements, 32 patients with schizophrenia were examined using an eye mark recorder in combination with the Positive and Negative Syndrome Scale, and were compared with 32 controls. Four types of figures were presented to the subjects: geometrical figures, drawings, story drawings, and sentences. Mean eye fixation time was significantly longer and mean eye scanning length was significantly shorter for the patients than for controls, not only in response to the geometric figures, but also in response to the story drawings. Eye fixation time and scanning velocity were positively correlated with degrees of thought disturbance. The number of eye fixations, eye fixation time and scanning velocity were negatively correlated with degree of depressive tendency.
Müller, Jana Annina; Wendt, Dorothea; Kollmeier, Birger; Brand, Thomas
2016-01-01
The aim of this study was to validate a procedure for performing the audio-visual paradigm introduced by Wendt et al. (2015) with reduced practical challenges. The original paradigm records eye fixations using an eye tracker and calculates the duration of sentence comprehension based on a bootstrap procedure. In order to reduce practical challenges, we first reduced the measurement time by evaluating a smaller measurement set with fewer trials. The results of 16 listeners showed effects comparable to those obtained when testing the original full measurement set on a different group of listeners. Secondly, we introduced electrooculography as an alternative technique for recording eye movements. The correlation between the results of the two recording techniques (eye tracker and electrooculography) was r = 0.97, indicating that both methods are suitable for estimating the processing duration of individual participants. Similar changes in processing duration arising from sentence complexity were found using the eye tracker and the electrooculography procedure. Thirdly, the time course of eye fixations was estimated with an alternative procedure, growth curve analysis, which is more commonly used in recent studies analyzing eye tracking data. The results of the growth curve analysis were compared with the results of the bootstrap procedure. Both analysis methods show similar processing durations. PMID:27764125
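The bootstrap estimate of processing duration mentioned above can be schematized as: resample trials with replacement, compute the mean target-fixation curve for each resample, and read off the time at which that curve first exceeds a criterion. The sketch below illustrates that general idea only; the criterion, data layout, and summary statistics are assumptions for illustration, not the exact procedure of Wendt et al. (2015).

```python
import numpy as np

def crossing_time(trials, times, criterion=0.5):
    """First time at which the mean target-fixation proportion exceeds the
    criterion. `trials` is an (n_trials, n_timepoints) array of 0/1 target
    fixations sampled at the instants in `times`."""
    above = np.nonzero(trials.mean(axis=0) > criterion)[0]
    return times[above[0]] if above.size else np.nan

def bootstrap_duration(trials, times, n_boot=1000, seed=1):
    """Bootstrap distribution of the crossing time across resampled trials."""
    rng = np.random.default_rng(seed)
    n = trials.shape[0]
    est = np.array([crossing_time(trials[rng.integers(0, n, n)], times)
                    for _ in range(n_boot)])
    return np.nanmean(est), np.nanstd(est)  # point estimate and its spread
```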
Wang, J Jessica; Ali, Muna; Frisson, Steven; Apperly, Ian A
2016-09-01
Basic competence in theory of mind is acquired during early childhood. Nonetheless, evidence suggests that the ability to take others' perspectives in communication improves continuously from middle childhood to the late teenage years. This indicates that theory of mind performance undergoes protracted developmental changes after the acquisition of basic competence. Currently, little is known about the factors that constrain children's performance or that contribute to age-related improvement. A sample of 39 8-year-olds and 56 10-year-olds were tested on a communication task in which a speaker's limited perspective needed to be taken into account and the complexity of the speaker's utterance varied. Our findings showed that 10-year-olds were generally less egocentric than 8-year-olds. Children of both ages committed more egocentric errors when a speaker uttered complex sentences compared with simple sentences. Both 8- and 10-year-olds were affected by the demand to integrate complex sentences with the speaker's limited perspective and to a similar degree. These results suggest that long after children's development of simple visual perspective-taking, their use of this ability to assist communication is substantially constrained by the complexity of the language involved. Copyright © 2015 Elsevier Inc. All rights reserved.
The Influence of Biomedical Information and Childhood History on Sentencing.
Kim, JongHan; Boytos, Abby; Seong, Yoori; Park, Kwangbai
2015-01-01
A recent trend in court is for defense attorneys to introduce brain scans and other forms of biomedical information (BI) into criminal trials as mitigating evidence. The present study investigates how BI, when considered in combination with a defendant's childhood information (CI), can influence the length of a defendant's sentence. We hypothesized that certain combinations of BI and CI result in shorter sentences because they suggest that the defendant poses less of a threat to society. Participants were asked to read accounts of the trial of a murder suspect and, based on the information therein, recommend a sentence as if they were the judge. The defendant was diagnosed with psychopathy, but biomedical information regarding that diagnosis was included or excluded depending on the BI condition. The defendant was further described as growing up in either a loving or abusive family. The results showed that, if BI was present in the trial account, the defendant from an abusive family was perceived as less of a threat to society and received a shorter recommended sentence than if the defendant had been from a loving family. If BI was absent from the account, the pattern was reversed: the defendant from a loving family was perceived as less of a threat to society and received a shorter recommended sentence than if he had been from an abusive family. Implications for the use of BI and CI in court trials are discussed, as well as their relationship to free will and the function of punishment as retribution and utility. Copyright © 2015 John Wiley & Sons, Ltd.
On the role of attention for the processing of emotions in speech: sex differences revisited.
Schirmer, Annett; Kotz, Sonja A; Friederici, Angela D
2005-08-01
In a previous cross-modal priming study [A. Schirmer, A.S. Kotz, A.D. Friederici, Sex differentiates the role of emotional prosody during word processing, Cogn. Brain Res. 14 (2002) 228-233.], we found that women integrated emotional prosody and word valence earlier than men. Both sexes showed a smaller N400 in the event-related potential to emotional words when these words were preceded by a sentence with congruous compared to incongruous emotional prosody. However, women showed this effect with a 200-ms interval between prime sentence and target word whereas men showed the effect with a 750-ms interval. The present study was designed to determine whether these sex differences prevail when attention is directed towards the emotional content of prosody and word meaning. To this end, we presented the same prime sentences and target words as in our previous study. Sentences were spoken with happy or sad prosody and followed by a congruous or incongruous emotional word or pseudoword. The interval between sentence offset and target onset was 200 ms. In addition to performing a lexical decision, participants were asked to decide whether or not a word matched the emotional prosody of the preceding sentence. The combined lexical and congruence judgment failed to reveal differences in emotional-prosodic priming between men and women. Both sexes showed smaller N400 amplitudes to emotionally congruent compared to incongruent words. This suggests that the presence of sex differences in emotional-prosodic priming depends on whether or not participants are instructed to take emotional prosody into account.
Jordan, Timothy R; McGowan, Victoria A; Kurtev, Stoyan; Paterson, Kevin B
2016-02-01
When reading from left to right, useful information acquired during each fixational pause is widely assumed to extend 14 to 15 characters to the right of fixation but just 3 to 4 characters to the left, and certainly no further than the beginning of the fixated word. However, this leftward extent is strikingly small and seems inconsistent with other aspects of reading performance and with the general horizontal symmetry of visual input. Accordingly, 2 experiments were conducted to examine the influence of text located to the left of fixation during each fixational pause using an eye-tracking paradigm in which invisible boundaries were created in sentence displays. Each boundary corresponded to the leftmost edge of each word so that, as each sentence was read, the normal letter content of text to the left of each fixated word was corrupted by letter replacements that were either visually similar or visually dissimilar to the originals. The proximity of corrupted text to the left of fixation was maintained at 1, 2, 3, or 4 words from the left boundary of each fixated word. In both experiments, relative to completely normal text, reading performance was impaired when each type of letter replacement was up to 2 words to the left of fixated words but letter replacements further from fixation produced no impairment. These findings suggest that key aspects of reading are influenced by information acquired during each fixational pause from much further leftward than is usually assumed. Some of the implications of these findings for reading are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
DeCaro, Renee; Peelle, Jonathan E.; Grossman, Murray; Wingfield, Arthur
2016-01-01
Reduced hearing acuity is among the most prevalent of chronic medical conditions among older adults. An experiment is reported in which comprehension of spoken sentences was tested for older adults with good hearing acuity or with a mild-to-moderate hearing loss, and young adults with age-normal hearing. Comprehension was measured by participants’ ability to determine the agent of an action in sentences that expressed this relation with a syntactically less complex subject-relative construction or a syntactically more complex object-relative construction. Agency determination was further challenged by inserting a prepositional phrase into sentences between the person performing an action and the action being performed. As a control, prepositional phrases of equivalent length were also inserted into sentences in a non-disruptive position. Effects on sentence comprehension of age, hearing acuity, prepositional phrase placement and sound level of stimulus presentations appeared only for comprehension of sentences with the more syntactically complex object-relative structures. Working memory as tested by reading span scores accounted for a significant amount of the variance in comprehension accuracy. Once working memory capacity and hearing acuity were taken into account, chronological age among the older adults contributed no further variance to comprehension accuracy. Results are discussed in terms of the positive and negative effects of sensory–cognitive interactions in comprehension of spoken sentences and lend support to a framework in which domain-general executive resources, notably verbal working memory, play a role in both linguistic and perceptual processing. PMID:26973557
The influence of sense-contingent argument structure frequencies on ambiguity resolution in aphasia.
Huck, Anneline; Thompson, Robin L; Cruice, Madeline; Marshall, Jane
2017-06-01
Verbs with multiple senses can show varying argument structure frequencies, depending on the underlying sense. When acknowledge is used to mean 'recognise', it takes a direct object (DO), but when it is used to mean 'admit' it prefers a sentence complement (SC). The purpose of this study was to investigate whether people with aphasia (PWA) can exploit such meaning-structure probabilities during the reading of temporarily ambiguous sentences, as demonstrated for neurologically healthy individuals (NHI) in a self-paced reading study (Hare et al., 2003). Eleven people with mild or moderate aphasia and eleven neurologically healthy control participants read sentences while their eyes were tracked. Using adapted materials from the study by Hare et al., target sentences containing an SC structure (e.g. He acknowledged (that) his friends would probably help him a lot) were presented following a context prime that biased either a direct object (DO-bias) or sentence complement (SC-bias) reading of the verbs. Half of the stimulus sentences did not contain that, making the post-verbal noun phrase (his friends) structurally ambiguous. Both groups of participants were influenced by structural ambiguity as well as by the context bias, indicating that PWA can, like NHI, use their knowledge of a verb's sense-based argument structure frequency during online sentence reading. However, the individuals with aphasia showed delayed reading patterns and some individual differences in their sensitivity to context and ambiguity cues. These differences compared to the NHI may contribute to difficulties in sentence comprehension in aphasia. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sentences with core knowledge violations increase the size of N400 among paranormal believers.
Lindeman, Marjaana; Cederström, Sebastian; Simola, Petteri; Simula, Anni; Ollikainen, Sara; Riekki, Tapani
2008-01-01
A major problem in research on paranormal beliefs is that the concept of "paranormality" remains to be adequately defined. The aim of this study was to empirically justify the following definition: paranormal beliefs are beliefs in physical, biological, or psychological phenomena that contain core ontological attributes of one of the other two categories [e.g., a stone (physical) having thoughts (psychological)]. We hypothesized that individuals who believe in paranormal phenomena are slower in understanding whether sentences with core knowledge violations are literally true than skeptics, and that this difference would be reflected by a more negative N400. Ten believers and 10 skeptics (six men, age range 23-49) participated in the study. Event-related potentials (N400) were recorded as the participants read 210 three-word Finnish sentences, of which 70 were normal ("The house has a history"), 70 were anomalies ("The house writes its history") and 70 included violations of core knowledge ("The house knows its history"). The participants were presented with a question that contextualized the sentences: "Is this sentence literally true?" While the N400 effects were similar for normal and anomalous sentences among the believers and the skeptics, a more negative N400 effect was found among the believers than among the skeptics for sentences with core knowledge violations. The results support the new definition of "paranormality", because participants who believed in paranormal phenomena appeared to find it more difficult to construct a reasonable interpretation of the sentences with core knowledge violations than the skeptics did as indicated by the N400.
Optical phonetics and visual perception of lexical and phrasal stress in English.
Scarborough, Rebecca; Keating, Patricia; Mattys, Sven L; Cho, Taehong; Alwan, Abeer
2009-01-01
In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of facial movement, which were generally larger and faster in the stressed condition. In a visual perception experiment, 16 perceivers identified the location of stress in forced-choice judgments of video clips of these utterances (without audio). Phrasal stress was better perceived than lexical stress. The relation of the visual intelligibility of the prosody of these utterances to the optical characteristics of their production was analyzed to determine which cues are associated with successful visual perception. While most optical measures were correlated with perception performance, chin measures, especially Chin Opening Displacement, contributed the most to correct perception independently of the other measures. Thus, our results indicate that the information for visual stress perception is mainly associated with mouth opening movements.
Zhao, Jing; Liu, Menglian; Liu, Hanlong; Huang, Chen
2018-02-16
It has been suggested that orthographic transparency and age changes may affect the relationship between visual attention span (VAS) deficit and reading difficulty. The present study explored the developmental trend of VAS in children with developmental dyslexia (DD) in Chinese, a logographic language with a deep orthography. Fifty-seven Chinese children with DD and fifty-four age-matched normal readers participated. The visual 1-back task was adopted to examine VAS. Phonological and morphological awareness tests, and reading tests at the single-character and sentence levels, were used for reading skill measurements. Results showed that only high graders with dyslexia exhibited lower accuracy than the controls in the VAS task, revealing an increased VAS deficit with development in the dyslexics. Moreover, the developmental trajectory analyses demonstrated that the dyslexics seemed to exhibit an atypical but not delayed pattern in their VAS development as compared to the controls. A correlation analysis indicated that VAS was only associated with morphological awareness for dyslexic readers in high grades. Further regression analysis showed that VAS skills and morphological awareness made separate and significant contributions to single-character reading for high graders with dyslexia. These findings suggest an increasing developmental trend in the relationship between VAS skills and reading (dis)ability in Chinese.
The Perception of "Sine-Wave Speech" by Adults with Developmental Dyslexia.
ERIC Educational Resources Information Center
Rosner, Burton S.; Talcott, Joel B.; Witton, Caroline; Hogg, James D.; Richardson, Alexandra J.; Hansen, Peter C.; Stein, John F.
2003-01-01
"Sine-wave speech" sentences contain only four frequency-modulated sine waves, lacking many acoustic cues present in natural speech. Adults with (n=19) and without (n=14) dyslexia were asked to reproduce orally sine-wave utterances in successive trials. Results suggest comprehension of sine-wave sentences is impaired in some adults with…
ERIC Educational Resources Information Center
Sadoski, Mark; And Others
1993-01-01
Presents and tests a theoretically derived causal model of the recall of sentences. Notes that the causal model identifies familiarity and concreteness as causes of comprehensibility; familiarity, concreteness, and comprehensibility as causes of interestingness; and all the identified variables as causes of both immediate and delayed recall.…
Determining the Scope of English Quantifiers.
1978-06-01
experimentation, the following "flashcard" mode of presentation was adopted. Each sentence was typed on a file card, and submitted to the informant to ... (... of section 4 are from Reinhart 1976). I suspect that my flashcard technique -- where informants read a sentence typed on a file card and paraphrase
Effectiveness of Automated Chinese Sentence Scoring with Latent Semantic Analysis
ERIC Educational Resources Information Center
Liao, Chen-Huei; Kuo, Bor-Chen; Pai, Kai-Chih
2012-01-01
Automated scoring by means of Latent Semantic Analysis (LSA) has been introduced lately to improve the traditional human scoring system. The purposes of the present study were to develop a LSA-based assessment system to evaluate children's Chinese sentence construction skills and to examine the effectiveness of LSA-based automated scoring function…
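Generic LSA scoring pipelines build a term-document matrix, reduce it with truncated SVD, and score a response by its cosine similarity to one or more reference sentences in the latent space. The scikit-learn sketch below shows that generic pipeline on toy data; the corpus, reference, and response strings are placeholders, and this is a sketch of the technique, not the authors' system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus; a real system would be trained on a large graded collection.
corpus = ["the dog chased the cat", "a cat ran from a dog",
          "children read books at school", "the student wrote a sentence"]
reference = "the dog chased the cat"
response = "a cat was chased by the dog"

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)          # term-document matrix
lsa = TruncatedSVD(n_components=2).fit(X)     # latent semantic space

ref_vec = lsa.transform(vectorizer.transform([reference]))
resp_vec = lsa.transform(vectorizer.transform([response]))
print(f"LSA similarity: {cosine_similarity(ref_vec, resp_vec)[0, 0]:.2f}")
```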
No Fear of Commitment: Children's Incremental Interpretation in English and Japanese Wh-Questions
ERIC Educational Resources Information Center
Omaki, Akira; Davidson White, Imogen; Goro, Takuya; Lidz, Jeffrey; Phillips, Colin
2014-01-01
Much work on child sentence processing has demonstrated that children are able to use various linguistic cues to incrementally resolve temporary syntactic ambiguities, but they fail to use syntactic or interpretability cues that arrive later in the sentence. The present study explores whether children incrementally resolve filler-gap dependencies,…
EEG: Elements of English Grammar: Rules Explained Simply.
ERIC Educational Resources Information Center
Van Winkle, Harold
Intended to help interested people speak and write more correctly through self-instruction, this book presents the basic rules of standard English grammar in an easy-to-understand manner. The book's six chapters are as follows: (1) The Sentence; (2) Parts of Speech; (3) Case; (4) Modifiers; (5) Agreement; and (6) Building Better Sentences. The…
ERIC Educational Resources Information Center
Riches, N. G.; Loucas, T.; Baird, G.; Charman, T.; Simonoff, E.
2010-01-01
Background: Recent studies have indicated that many children with autism spectrum disorders present with language difficulties that are similar to those of children with specific language impairments, leading some to argue for similar structural deficits in these two disorders. Aims: Repetition of sentences involving long-distance dependencies was…
Memory Illusion in High-Functioning Autism and Asperger's Disorder
ERIC Educational Resources Information Center
Kamio, Yoko; Toichi, Motomi
2007-01-01
In this study, 13 individuals with high-functioning autism (HFA), 15 individuals with Asperger's disorder (AD), and age-, and IQ-matched controls were presented a list of sentences auditorily. Participants then evaluated semantically related but new sentences and reported whether they were old or new. The total rates of false recognition for…
The Sensitivity of the Right Hemisphere to Contextual Information in Sentences
ERIC Educational Resources Information Center
Gouldthorp, Bethanie; Coney, Jeffrey
2009-01-01
One explanation for the inconsistencies in research examining the sentence comprehension abilities of the right hemisphere (RH) is the presence of confounding variables that have generally served to disadvantage the processing capacities of the RH. As such, the present study aimed to investigate hemispheric differences in the use of message-level…
Reading in a Root-Based-Morphology Language: The Case of Arabic.
ERIC Educational Resources Information Center
Abu-Rabia, S.
2002-01-01
Reviews the reading process in Arabic as a function of vowels and sentence context. Reviews reading accuracy and reading comprehension results in light of cross-cultural reading to develop a more comprehensive reading theory. Presents the phonology, morphology and sentence context of Arabic in two suggested reading models for poor/beginner Arabic…
Bögels, Sara; Schriefers, Herbert; Vonk, Wietske; Chwilla, Dorothee J; Kerkhofs, Roel
2013-11-01
This ERP study investigates whether a superfluous prosodic break (i.e., a prosodic break that does not coincide with a syntactic break) has more severe processing consequences during auditory sentence comprehension than a missing prosodic break (i.e., the absence of a prosodic break at the position of a syntactic break). Participants listened to temporarily ambiguous sentences involving a prosody-syntax match or mismatch. The disambiguation of these sentences was always lexical in nature in the present experiment. This contrasts with a related study by Pauker, Itzhak, Baum, and Steinhauer (2011), where the disambiguation was of a lexical type for missing PBs and of a prosodic type for superfluous PBs. Our results converge with those of Pauker et al. (2011): superfluous prosodic breaks lead to more severe processing problems than missing prosodic breaks. Importantly, the present results extend those of Pauker et al. (2011) by showing that this holds when the disambiguation is always lexical in nature. Furthermore, our results show that the way listeners use prosody can change over the course of the experiment, which has consequences for future studies. © 2013 Elsevier Ltd. All rights reserved.
Toledo Piza, Carolina M. J.; de Macedo, Elizeu C.; Miranda, Monica C.; Bueno, Orlando F. A.
2014-01-01
The analysis of cognitive processes underpinning reading and writing skills may help to distinguish different reading ability profiles. The present study used a Brazilian reading and writing battery to compare performance of students with dyslexia with two individually matched control groups: one contrasting on reading competence but not age, and the other contrasting on age but not reading competence. Participants were 28 individuals with dyslexia (19 boys) with a mean age of 9.82 (SD ± 1.44) drawn from public and private schools. These were matched to: (1) an age control group (AC) of 26 good readers with a mean age of 9.77 (SD ± 1.44) matched by age, sex, years of schooling, and type of school; (2) a reading control group (RC) of 28 younger controls with a mean age of 7.82 (SD ± 1.06) matched by sex, type of school, and reading level. All groups were tested on four tasks from the Brazilian Reading and Writing Assessment battery (“BALE”): Written Sentence Comprehension Test (WSCT); Spoken Sentence Comprehension Test (OSCT); Picture-Print Writing Test (PPWT 1.1-Writing); and the Reading Competence Test (RCT). These tasks evaluate reading and listening comprehension for sentences, spelling, and reading of isolated words and pseudowords (non-words). The dyslexia group scored lower and took longer to complete tasks than the AC group. Compared with the RC group, there were no differences in total scores on reading or oral comprehension tasks. However, dyslexics presented slower reading speeds, longer completion times, and lower scores on spelling tasks, even compared with younger controls. Analysis of types of errors on word and pseudoword reading items showed students with dyslexia scoring lower for pseudoword reading than the other two groups. These findings suggest that the dyslexics' overall scores were similar to those of younger readers. However, specific phonological and visual decoding deficits showed that the two groups differ in terms of underpinning reading strategies. PMID:25132829
A Preliminary Empirical Evaluation of Virtual Reality as a Training Tool for Visual-Spatial Tasks
1993-05-01
Hillsdale, NJ: Lawrence Erlbaum Associates. Craik, F.I.M., & Lockhart, R.S. (1972). Levels of processing: A framework for memory research. Journal of ... short-term memory (Bower, 1972; Kanigel, 1981), elaborative rehearsal in short-term memory, and subsequent retrieval from long-term memory (Craik & Lockhart, 1972; Chase & Ericsson, 1981), and the superiority of gist over verbatim recall of sentences (Bransford & Franks, 1971). Even memory for simple
Understanding Charts and Graphs.
1987-07-28
"notational." English, then, is obviously not a notational system because ambiguous words or sentences are possible, whereas musical notation is notational ... how lines and regions are detected and organized; these principles grow out of discoveries about human visual information processing. A syntactic ... themselves name other colors (e.g., the word "red" is printed in blue ink; this is known as the "Stroop effect"). Similarly, if "left" and "right" are
Rubio Ballester, Belén; Nirme, Jens; Duarte, Esther; Cuxart, Ampar; Rodriguez, Susana; Verschure, Paul; Duff, Armin
2015-11-27
Unfortunately, in the original version of this article [1] the sentence "This project was supported through ERC project cDAC (FP7-IDEAS-ERC 341196), EC H2020 project socSMCs (H2020-EU.1.2.2. 641321) and MINECO project SANAR (Gobierno de España)" was missing from the acknowledgements. The acknowledgements have been correctly included in full in this erratum.
Neural Basis of Action Understanding: Evidence from Sign Language Aphasia.
Rogalsky, Corianne; Raphel, Kristin; Tomkovicz, Vivian; O'Grady, Lucinda; Damasio, Hanna; Bellugi, Ursula; Hickok, Gregory
2013-01-01
The neural basis of action understanding is a hotly debated issue. The mirror neuron account holds that motor simulation in fronto-parietal circuits is critical to action understanding, including speech comprehension, while others emphasize the ventral stream in the temporal lobe. Evidence from speech strongly supports the ventral stream account, but evidence from manual gesture comprehension (e.g., in limb apraxia) has led to contradictory findings. Here we present a lesion analysis of sign language comprehension. Sign language is an excellent model for studying mirror system function in that it bridges the gap between the visual-manual system, in which mirror neurons are best characterized, and language systems, which have represented a theoretical target of mirror neuron research. Twenty-one lifelong deaf signers with focal cortical lesions performed two tasks: one involving the comprehension of individual signs and the other involving comprehension of signed sentences (commands). Participants' lesions, as indicated on MRI or CT scans, were mapped onto a template brain to explore the relationship between lesion location and sign comprehension measures. Single sign comprehension was not significantly affected by left hemisphere damage. Sentence sign comprehension impairments were associated with left temporal-parietal damage. We found that damage to mirror-system-related regions in the left frontal lobe was not associated with deficits on either of these comprehension tasks. We conclude that the mirror system is not critically involved in action understanding.
Combinatorial semantics strengthens angular-anterior temporal coupling.
Molinaro, Nicola; Paz-Alonso, Pedro M; Duñabeitia, Jon Andoni; Carreiras, Manuel
2015-04-01
The human semantic combinatorial system allows us to create a wide range of new meanings from a finite number of existing representations. The present study investigates the neural dynamics underlying the semantic processing of different conceptual constructions, based on predictions from previous neuroanatomical models of the semantic processing network. In two experiments, participants read sentences for comprehension containing noun-adjective pairs in three different conditions: prototypical (Redundant), nonsense (Anomalous) and low-typical but composable (Contrastive). In Experiment 1 we examined the processing costs associated with reading these sentences and found a processing dissociation between Anomalous and Contrastive word pairs, compared to prototypical (Redundant) stimuli. In Experiment 2, functional connectivity results showed strong co-activation across conditions between inferior frontal gyrus (IFG) and posterior middle temporal gyrus (MTG), as well as between these two regions and middle frontal gyrus (MFG), anterior temporal cortex (ATC) and fusiform gyrus (FG), consistent with previous neuroanatomical models. Importantly, processing of low-typical (but composable) meanings relative to prototypical and anomalous constructions was associated with a stronger positive coupling between ATC and angular gyrus (AG). Our results underscore the critical role of IFG-MTG co-activation during semantic processing and how other relevant nodes within the semantic processing network come into play to handle visual-orthographic information, to maintain multiple lexical-semantic representations in working memory and to combine existing representations while creatively constructing meaning. Copyright © 2015 Elsevier Ltd. All rights reserved.
Skotara, Nils; Salden, Uta; Kügow, Monique; Hänel-Faulhaber, Barbara; Röder, Brigitte
2012-05-03
To examine which language functions depend on early experience, the present study compared deaf native signers, deaf non-native signers and hearing German native speakers while processing German sentences. The participants watched simple written sentences while event-related potentials (ERPs) were recorded. At the end of each sentence they were asked to judge whether the sentence was correct or not. Two types of violations were introduced in the middle of the sentence: a semantically implausible noun or a violation of subject-verb number agreement. The results showed a similar ERP pattern after semantic violations (an N400 followed by a positivity) in all three groups. After syntactic violations, native German speakers and native signers of German Sign Language (DGS) with German as a second language (L2) showed a left anterior negativity (LAN) followed by a P600, whereas no LAN, but instead a negativity over the right hemisphere, was found in deaf participants with a delayed onset of first language (L1) acquisition. The P600 of this group had a smaller amplitude and a different scalp distribution compared to German native speakers. The results of the present study suggest that language deprivation in early childhood alters the cerebral organization of syntactic language processing mechanisms for L2. Semantic language processing, in contrast, was unaffected.
Eye Movement Evidence for Hierarchy Effects on Memory Representation of Discourses
Wu, Yingying; Yang, Xiaohong; Yang, Yufang
2016-01-01
In this study, we applied the text-change paradigm to investigate whether and how discourse hierarchy affected the memory representation of a discourse. Three kinds of three-sentence discourses were constructed. In the hierarchy-high condition and the hierarchy-low condition, the three sentences of the discourses were hierarchically organized and the last sentence of each discourse was located at the high level and the low level of the discourse hierarchy, respectively. In the linear condition, the three sentences of the discourses were linearly organized. Critical words were always located at the last sentence of the discourses. These discourses were successively presented twice and the critical words were changed to semantically related words in the second presentation. The results showed that during the early processing stage, the critical words were read for longer times when they were changed in the hierarchy-high and the linear conditions, but not in the hierarchy-low condition. During the late processing stage, the changed-critical words were again found to induce longer reading times only when they were in the hierarchy-high condition. These results suggest that words in a discourse have better memory representation when they are located at the higher rather than at the lower level of the discourse hierarchy. Global discourse hierarchy is established as an important factor in constructing the mental representation of a discourse. PMID:26789002
Processing grammatical gender in Dutch: Evidence from eye movements.
Brouwer, Susanne; Sprenger, Simone; Unsworth, Sharon
2017-07-01
Previous research has demonstrated that grammatical gender in Dutch is typically acquired late. Most of this work used production data only, and consequently children's knowledge of Dutch gender may have been underestimated. In this study, therefore, we examined whether 49 4- to 7-year-old Dutch-speaking children (and 19 adult controls) were able to use gender marking in the article preceding the object label during online sentence processing to (a) anticipate the upcoming object label or to (b) facilitate the processing of that label as it is presented. In addition, we investigated whether children's online processing and production of gender marking on articles were related. In an eye-tracking task, participants were presented with sentences and visual displays with two objects, representing nouns of either the same gender (uninformative) or different genders (informative). Children were divided into a non-targetlike group and a targetlike group on the basis of their scores for neuter nouns in the production task. Our analyses examined whether participants could use gender marking anticipatorily (i.e., before the onset of the noun) and facilitatively (i.e., from noun onset). Results showed that Dutch-speaking adults and children who were successful in production used gender marking anticipatorily. However, children who did not systematically produce gender-marked articles used gender marking only facilitatively. These findings reveal that successful online comprehension may in part be possible before targetlike production is completely in place, but at the same time targetlike production may be a trigger for online comprehension to be completely successful. Copyright © 2017 Elsevier Inc. All rights reserved.
Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G
2009-03-01
Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages seemingly without difficulty. This paper shows a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded in a robot in order to demonstrate correct semantic understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.
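Associative memories of the kind referenced above are often modeled as auto-associators that complete a noisy cue to the nearest stored pattern. The following is a minimal Hopfield-style sketch of that general mechanism on assumed toy patterns; it is an illustration of the principle, not the authors' architecture, which combines many such memories into a full language system.

```python
import numpy as np

def train(patterns):
    """Store bipolar (+1/-1) patterns in a Hebbian weight matrix."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, cue, steps=10):
    """Complete a noisy cue by iterating synchronous threshold updates."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)  # ties at zero resolve to +1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])
W = train(patterns)
noisy = np.array([1, -1, 1, -1, -1, -1])  # pattern 0 with one flipped unit
print(recall(W, noisy))                   # recovers pattern 0
```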
Presentation format effects in working memory: the role of attention.
Foos, Paul W; Goolkasian, Paula
2005-04-01
Four experiments are reported in which participants attempted to remember three or six concrete nouns, presented as pictures, spoken words, or printed words, while also verifying the accuracy of sentences. Hypotheses meant to explain the higher recall of pictures and spoken words over printed words were tested. Increasing the difficulty and changing the type of processing task from arithmetic to a visual/spatial reasoning task did not influence recall. An examination of long-term modality effects showed that those effects were not sufficient to explain the superior performance with spoken words and pictures. Only when we manipulated the allocation of attention to the items in the storage task, by requiring the participants to articulate the items and by presenting the stimulus items under a degraded condition, were we able to reduce or remove the effect of presentation format. The findings suggest that the better recall of pictures and spoken words over printed words results from the fact that, under normal presentation conditions, printed words receive less processing attention than pictures and spoken words do.
Getting ahead of yourself: Parafoveal word expectancy modulates the N400 during sentence reading
Stites, Mallory C.; Payne, Brennan R.; Federmeier, Kara D.
2017-01-18
An important question in the reading literature regards the nature of the semantic information readers can extract from the parafovea (i.e., the next word in a sentence). Recent eye-tracking findings have found a semantic parafoveal preview benefit under many circumstances, and findings from event-related brain potentials (ERPs) also suggest that readers can at least detect semantic anomalies parafoveally. We use ERPs to ask whether fine-grained aspects of semantic expectancy can affect the N400 elicited by a word appearing in the parafovea. In an RSVP-with-flankers paradigm, sentences were presented word by word, flanked 2° bilaterally by the previous and upcoming words. Stimuli consisted of high constraint sentences that were identical up to the target word, which could be expected, unexpected but plausible, or anomalous, as well as low constraint sentences that were always completed with the most expected ending. Findings revealed an N400 effect to the target word when it appeared in the parafovea, which was graded with respect to the target's expectancy and congruency within the sentence context. Moreover, when targets appeared at central fixation, this graded congruency effect was mitigated, suggesting that the semantic information gleaned from parafoveal vision functionally changes the semantic processing of those words when foveated.
The effect of foveal and parafoveal masks on the eye movements of older and younger readers.
Rayner, Keith; Yang, Jinmian; Schuett, Susanne; Slattery, Timothy J
2014-06-01
In the present study, we examined foveal and parafoveal processing in older compared with younger readers by using gaze-contingent paradigms with 4 conditions. Older and younger readers read sentences in which the text was either a) presented normally, b) the foveal word was masked as soon as it was fixated, c) all of the words to the left of the fixated word were masked, or d) all of the words to the right of the fixated word were masked. Although older and younger readers both found reading when the fixated word was masked quite difficult, the foveal mask increased sentence reading time more than 3-fold (3.4) for the older readers (in comparison with the control condition in which the sentence was presented normally) compared with the younger readers who took 1.3 times longer to read sentences in the foveal mask condition (in comparison with the control condition). The left and right parafoveal masks did not disrupt reading as severely as the foveal mask, though the right mask was more disruptive than the left mask. Also, there was some indication that the younger readers found the right mask condition relatively more disruptive than the left mask condition. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Herbert, C; Kissler, J
2014-09-26
In sentences such as dogs cannot fly/bark, evaluation of the truth-value of the sentence is assumed to occur after the negation has been integrated into the sentence structure. Moreover, negation processing and truth-value processing are considered effortful processes, whereas processing of the semantic relatedness of the words within sentences is thought to occur automatically. In the present study, modulation of event-related brain potentials (N400 and late positive potential, LPP) was investigated during an implicit task (silent listening) and active truth-value evaluation to test these theoretical assumptions and to determine whether truth-value evaluation would be modulated by the way participants processed the negated information implicitly prior to truth-value verification. Participants first listened to negated sentences and then evaluated these sentences for their truth-value in an active evaluation task. During passive listening, the LPP was generally more pronounced for targets in false negative (FN) than true negative (TN) sentences, indicating enhanced attention allocation to semantically-related but false targets. N400 modulation by truth-value (FN>TN) was observed in 11 out of 24 participants. However, during active evaluation, processing of semantically-unrelated but true targets (TN) elicited larger N400 and LPP amplitudes as well as a pronounced frontal negativity. This pattern was particularly prominent in those 11 individuals whose N400 modulation during silent listening indicated that they were more sensitive to violations of the truth-value than to semantic priming effects. The results provide evidence for implicit truth-value processing during silent listening to negated sentences and for task dependence related to inter-individual differences in implicit negation processing. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
BIOSSES: a semantic sentence similarity estimation system for the biomedical domain.
Sogancioglu, Gizem; Öztürk, Hakime; Özgür, Arzucan
2017-07-15
The amount of information available in textual format is rapidly increasing in the biomedical domain. Therefore, natural language processing (NLP) applications are becoming increasingly important to facilitate the retrieval and analysis of these data. Computing the semantic similarity between sentences is an important component in many NLP tasks including text retrieval and summarization. A number of approaches have been proposed for semantic sentence similarity estimation for generic English. However, our experiments showed that such approaches do not effectively cover biomedical knowledge and produce poor results for biomedical text. We propose several approaches for sentence-level semantic similarity computation in the biomedical domain, including string similarity measures and measures based on the distributed vector representations of sentences learned in an unsupervised manner from a large biomedical corpus. In addition, ontology-based approaches are presented that utilize general and domain-specific ontologies. Finally, a supervised regression based model is developed that effectively combines the different similarity computation metrics. A benchmark data set consisting of 100 sentence pairs from the biomedical literature is manually annotated by five human experts and used for evaluating the proposed methods. The experiments showed that the supervised semantic sentence similarity computation approach obtained the best performance (0.836 correlation with gold standard human annotations) and improved over the state-of-the-art domain-independent systems up to 42.6% in terms of the Pearson correlation metric. A web-based system for biomedical semantic sentence similarity computation, the source code, and the annotated benchmark data set are available at: http://tabilab.cmpe.boun.edu.tr/BIOSSES/ . gizemsogancioglu@gmail.com or arzucan.ozgur@boun.edu.tr. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
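The best-performing configuration described above combines several similarity measures with supervised regression. The sketch below illustrates that general recipe, pairing a character-overlap (string) measure with an averaged word-vector cosine measure and fitting a linear regressor to annotated similarity scores; the toy vectors and training pairs are placeholders, not the BIOSSES resources or its exact feature set.

```python
import difflib
import numpy as np
from sklearn.linear_model import LinearRegression

def string_sim(s1, s2):
    """Character-level similarity in [0, 1]."""
    return difflib.SequenceMatcher(None, s1, s2).ratio()

def vector_sim(s1, s2, word_vecs):
    """Cosine similarity of averaged word vectors; unknown words are skipped."""
    def embed(s):
        vs = [word_vecs[w] for w in s.split() if w in word_vecs]
        return np.mean(vs, axis=0) if vs else np.zeros(3)
    a, b = embed(s1), embed(s2)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Toy embeddings and annotated pairs (0-4 similarity scale); a real system
# would use vectors learned from a large biomedical corpus.
word_vecs = {"gene": np.array([1.0, 0.0, 0.0]), "protein": np.array([0.9, 0.1, 0.0]),
             "expression": np.array([0.0, 1.0, 0.0]), "levels": np.array([0.0, 0.9, 0.1])}
pairs = [("gene expression", "protein expression", 3.2),
         ("gene levels", "protein levels", 3.0),
         ("gene expression", "gene expression", 4.0)]

X = np.array([[string_sim(a, b), vector_sim(a, b, word_vecs)] for a, b, _ in pairs])
y = np.array([score for _, _, score in pairs])
model = LinearRegression().fit(X, y)  # learn a weight per similarity metric
print(model.predict(X).round(2))
```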
Working Memory Training and Speech in Noise Comprehension in Older Adults.
Wayne, Rachel V.; Hamilton, Cheryl; Jones Huyck, Julia; Johnsrude, Ingrid S.
2016-01-01
Understanding speech in the presence of background sound can be challenging for older adults. Speech comprehension in noise appears to depend on working memory and executive-control processes (e.g., Heald and Nusbaum, 2014), and their augmentation through training may have rehabilitative potential for age-related hearing loss. We examined the efficacy of adaptive working-memory training (Cogmed; Klingberg et al., 2002) in 24 older adults, assessing generalization to other working-memory tasks (near transfer) and to other cognitive domains (far transfer) using a cognitive test battery, including the Reading Span test, which is sensitive to working memory (e.g., Daneman and Carpenter, 1980). We also assessed far transfer to speech-in-noise performance, including a closed-set sentence task (Kidd et al., 2008). To examine the effect of cognitive training on the benefit obtained from semantic context, we also assessed transfer to open-set sentences; half were semantically coherent (high-context) and half were semantically anomalous (low-context). Subjects completed 25 sessions (0.5–1 h each; 5 sessions/week) of both adaptive working-memory training and placebo training over 10 weeks in a crossover design. Subjects' scores on the adaptive working-memory training tasks improved as a result of training. However, training did not transfer to other working-memory tasks, nor to tasks recruiting other cognitive domains. We did not observe any training-related improvement in speech-in-noise performance. Measures of working memory correlated with the intelligibility of low-context, but not high-context, sentences, suggesting that sentence context may reduce the load on working memory. The Reading Span test correlated significantly only with a test of visual episodic memory, suggesting that the Reading Span test is not a pure test of working memory, as is commonly assumed. PMID:27047370
Bilateral parietal contributions to spatial language.
Conder, Julie; Fridriksson, Julius; Baylis, Gordon C; Smith, Cameron M; Boiteau, Timothy W; Almor, Amit
2017-01-01
It is commonly held that language is largely lateralized to the left hemisphere in most individuals, whereas spatial processing is associated with right-hemisphere regions. In recent years, a number of neuroimaging studies have yielded conflicting results regarding the role of language and spatial-processing areas in processing language about space (e.g., Carpenter, Just, Keller, Eddy, & Thulborn, 1999; Damasio et al., 2001). In the present study, we used sparse-scanning event-related functional magnetic resonance imaging (fMRI) to investigate the neural correlates of spatial language, that is, language used to communicate the spatial relationship of one object to another. During scanning, participants listened to sentences about object relationships that were either spatial or non-spatial in nature (color or size relationships). Sentences describing spatial relationships elicited more activation in the superior parietal lobule (SPL) and precuneus bilaterally than sentences describing size or color relationships. Activation of the precuneus suggests that spatial sentences elicit spatial mental imagery, while activation of the SPL suggests that sentences containing spatial language involve the integration of two distinct sets of information: linguistic and spatial. Copyright © 2016 Elsevier Inc. All rights reserved.
What is it that lingers? Garden-path (mis)interpretations in younger and older adults.
Malyutina, Svetlana; den Ouden, Dirk-Bart
2016-01-01
Previous research has shown that comprehenders do not always conduct a full (re)analysis of temporarily ambiguous "garden-path" sentences. The present study used a sentence-picture matching task to investigate what kind of representations are formed when full reanalysis is not performed: Do comprehenders "blend" two incompatible representations as a result of shallow syntactic processing, or do they erroneously maintain the initial incorrect parse without incorporating new information, and does this vary with age? Twenty-five younger and fifteen older adults performed a multiple-choice sentence-picture matching task with stimuli including early-closure garden-path sentences. The results suggest that the type of erroneous representation is affected by linguistic variables, such as sentence structure, verb type, and semantic plausibility, as well as by age. Older adults' response patterns indicate an increased reliance on inferencing based on lexical and semantic cues, with a lower bar for accepting an initial parse and a weaker drive to reanalyse a syntactic representation. Among younger adults, there was a tendency to blend two representations into a single interpretation, even when this was not licensed by the syntax.
ERIC Educational Resources Information Center
Ma, Dongmei; Yu, Xiaoru; Zhang, Haomin
2017-01-01
The present study aimed to investigate second language (L2) word-level and sentence-level automatic processing among English as a foreign language students through a comparative analysis of students with different proficiency levels. As a multidimensional and dynamic construct, automaticity is conceptualized as processing speed, stability, and…