Sample records for agrammatic aphasic speakers

  1. Action Naming in Anomic Aphasic Speakers: Effects of Instrumentality and Name Relation

    ERIC Educational Resources Information Center

    Jonkers, Roel; Bastiaanse, Roelien

    2007-01-01

    Many studies reveal effects of verb type on verb retrieval, mainly in agrammatic aphasic speakers. In the current study, two factors that might play a role in action naming in anomic aphasic speakers were considered: the conceptual factor instrumentality and the lexical factor name relation to a noun. Instrumental verbs were shown to be better…

  2. Why Reference to the Past Is Difficult for Agrammatic Speakers

    ERIC Educational Resources Information Center

    Bastiaanse, Roelien

    2013-01-01

    Many studies have shown that verb inflections are difficult to produce for agrammatic aphasic speakers: they are frequently omitted and substituted. The present article gives an overview of our search to understand why this is the case. The hypothesis is that grammatical morphology referring to the past is selectively impaired in agrammatic…

  3. Grammatical Encoding and Learning in Agrammatic Aphasia: Evidence from Structural Priming

    PubMed Central

    Cho-Reyes, Soojin; Mack, Jennifer E.; Thompson, Cynthia K.

    2017-01-01

    The present study addressed open questions about the nature of sentence production deficits in agrammatic aphasia. In two structural priming experiments, 13 aphasic and 13 age-matched control speakers repeated visually- and auditorily-presented prime sentences, and then used visually-presented word arrays to produce dative sentences. Experiment 1 examined whether agrammatic speakers form structural and thematic representations during sentence production, whereas Experiment 2 tested the lasting effects of structural priming at lags of two and four sentences. Results of Experiment 1 showed that, like unimpaired speakers, the aphasic speakers evinced intact structural priming effects, suggesting that they are able to generate such representations. Unimpaired speakers also evinced reliable thematic priming effects, whereas agrammatic speakers did so in some experimental conditions, suggesting that access to thematic representations may be intact. Results of Experiment 2 showed structural priming effects of comparable magnitude for aphasic and unimpaired speakers. In addition, both groups showed lasting structural priming effects in both lag conditions, consistent with implicit learning accounts. In both experiments, aphasic speakers with more severe language impairments exhibited larger priming effects, consistent with the “inverse preference” prediction of implicit learning accounts. The findings indicate that agrammatic speakers are sensitive to structural priming across levels of representation and that such effects are lasting, suggesting that structural priming may be beneficial for the treatment of sentence production deficits in agrammatism. PMID:28924328

  4. The neural correlates of agrammatism: Evidence from aphasic and healthy speakers performing an overt picture description task

    PubMed Central

    Schönberger, Eva; Heim, Stefan; Meffert, Elisabeth; Pieperhoff, Peter; da Costa Avelar, Patricia; Huber, Walter; Binkofski, Ferdinand; Grande, Marion

    2014-01-01

    Functional brain imaging studies have improved our knowledge of the neural localization of language functions and the functional reorganization after a lesion. However, the neural correlates of agrammatic symptoms in aphasia remain largely unknown. The present fMRI study examined the neural correlates of morpho-syntactic encoding and agrammatic errors in continuous language production by combining three approaches. First, the neural mechanisms underlying natural morpho-syntactic processing in a picture description task were analyzed in 15 healthy speakers. Second, agrammatic-like speech behavior was induced in the same group of healthy speakers by limiting utterance length, in order to study the underlying functional processes. In a third approach, five agrammatic participants performed the picture description task to gain insight into the neural correlates of agrammatism and the functional reorganization of language processing after stroke. In all approaches, utterances were analyzed for syntactic completeness, complexity, and morphology. Event-related data analysis was conducted by defining every clause-like unit (CLU) as an event with its onset time and duration. Agrammatic and correct CLUs were contrasted. Due to the small sample size as well as heterogeneous lesion sizes and sites, with lesion foci in the insula, inferior frontal, superior temporal and inferior parietal areas, the activation patterns in the agrammatic speakers were analyzed at the single-subject level. In the group of healthy speakers, posterior temporal and inferior parietal areas were associated with greater morpho-syntactic demands in complete and complex CLUs. The intentional manipulation of morpho-syntactic structures and the omission of function words were associated with additional inferior frontal activation. Overall, the results revealed that the investigation of the neural correlates of agrammatic language production can be reasonably conducted with an overt language production paradigm.

  5. Grammatical Planning Units during Real-Time Sentence Production in Speakers with Agrammatic Aphasia and Healthy Speakers

    ERIC Educational Resources Information Center

    Lee, Jiyeon; Yoshida, Masaya; Thompson, Cynthia K.

    2015-01-01

    Purpose: Grammatical encoding (GE) is impaired in agrammatic aphasia; however, the nature of such deficits remains unclear. We examined grammatical planning units during real-time sentence production in speakers with agrammatic aphasia and control speakers, testing two competing models of GE. We queried whether speakers with agrammatic aphasia…

  6. Grammatical Planning Units During Real-Time Sentence Production in Speakers With Agrammatic Aphasia and Healthy Speakers.

    PubMed

    Lee, Jiyeon; Yoshida, Masaya; Thompson, Cynthia K

    2015-08-01

    Grammatical encoding (GE) is impaired in agrammatic aphasia; however, the nature of such deficits remains unclear. We examined grammatical planning units during real-time sentence production in speakers with agrammatic aphasia and control speakers, testing two competing models of GE. We queried whether speakers with agrammatic aphasia produce sentences word by word without advanced planning or whether hierarchical syntactic structure (i.e., verb argument structure; VAS) is encoded as part of the advanced planning unit. Experiment 1 examined production of sentences with a predefined structure (i.e., "The A and the B are above the C") using eye tracking. Experiment 2 tested production of transitive and unaccusative sentences without a predefined sentence structure in a verb-priming study. In Experiment 1, both speakers with agrammatic aphasia and young and age-matched control speakers used word-by-word strategies, selecting the first lemma (noun A) only prior to speech onset. However, in Experiment 2, unlike controls, speakers with agrammatic aphasia preplanned transitive and unaccusative sentences, encoding VAS before speech onset. Speakers with agrammatic aphasia show incremental, word-by-word production for structurally simple sentences, requiring retrieval of multiple noun lemmas. However, when sentences involve functional (thematic to grammatical) structure building, advanced planning strategies (i.e., VAS encoding) are used. This early use of hierarchical syntactic information may provide a scaffold for impaired GE in agrammatism.

  7. Extended turn construction and test question sequences in the conversations of three speakers with agrammatic aphasia

    PubMed Central

    Beckley, Firle; Best, Wendy; Johnson, Fiona; Edwards, Susan; Maxim, Jane

    2013-01-01

    The application of Conversation Analysis (CA) to the investigation of agrammatic aphasia reveals that utterances produced by speakers with agrammatism engaged in everyday conversation differ significantly from utterances produced in response to decontextualised assessment and therapy tasks. Early studies have demonstrated that speakers with agrammatism construct turns from sequences of nouns, adjectives, discourse markers and conjunctions, packaged by a distinct pattern of prosody. This article presents examples of turn construction methods deployed by three people with agrammatism as they take an extended turn, in order to recount a past event, initiate a discussion or have a disagreement. This is followed by examples of sequences occurring in the talk of two of these speakers that result in different, and more limited, turn construction opportunities, namely “test” questions asked in order to initiate a new topic of talk, despite the conversation partner knowing the answer. The contrast between extended turns and test question sequences illustrates the effect of interactional context on aphasic turn construction practices, and the potential of less than optimal sequences to mask turn construction skills. It is suggested that the interactional motivation for test question sequences in these data is to invite people with aphasia to contribute to conversation, rather than to practise saying words in an attempt to improve language skills. The idea that test question sequences may have their origins in early attempts to deal with acute aphasia, and the potential for conversation partnerships to become “stuck” in such interactional patterns after they may have outlived their usefulness, are discussed with a view to clinical implications. PMID:23848370

  8. Sentence Comprehension in Swahili-English Bilingual Agrammatic Speakers

    ERIC Educational Resources Information Center

    Abuom, Tom O.; Shah, Emmah; Bastiaanse, Roelien

    2013-01-01

    For this study, sentence comprehension was tested in Swahili-English bilingual agrammatic speakers. The sentences were controlled for four factors: (1) order of the arguments (base vs. derived); (2) embedding (declarative vs. relative sentences); (3) overt use of the relative pronoun "who"; (4) language (English and Swahili). Two…

  9. Comprehension of Sentences with Stylistic Inversion by French Aphasic Patients

    ERIC Educational Resources Information Center

    Rigalleau, Francois; Baudiffier, Vanessa; Caplan, David

    2004-01-01

    Three French-speaking agrammatic aphasics and three French-speaking Conduction aphasics were tested for comprehension of Active, Passive, Cleft-Subject, Cleft-Object, and Cleft-Object sentences with Stylistic Inversion using an object manipulation test. The agrammatic patients consistently reversed thematic roles in the latter sentence type, and…

  10. Tense and Agreement in German Agrammatism

    ERIC Educational Resources Information Center

    Wenzlaff, Michaela; Clahsen, Harald

    2004-01-01

    This study presents results from sentence-completion and grammaticality-judgment tasks with 7 German-speaking agrammatic aphasics and 7 age-matched control subjects examining tense and subject-verb agreement marking. For both experimental tasks, we found that the aphasics achieved high correctness scores for agreement, while tense marking was…

  11. Time reference in agrammatic aphasia: A cross-linguistic study

    PubMed Central

    Bastiaanse, Roelien; Bamyaci, Elif; Hsu, Chien-Ju; Lee, Jiyeon; Duman, Tuba Yarbay; Thompson, Cynthia K.

    2015-01-01

    It has been shown across several languages that verb inflection is difficult for agrammatic aphasic speakers. In particular, Tense inflection is vulnerable. Several theoretical accounts for this have been posed, for example, a pure syntactic one suggesting that the Tense node is unavailable due to its position in the syntactic tree (Friedmann & Grodzinsky, 1997); one suggesting that the interpretable features of the Tense node are underspecified (Burchert, Swoboda-Moll, & De Bleser, 2005; Wenzlaff & Clahsen, 2004, 2005); and a morphosemantic one, arguing that the diacritic Tense features are affected in agrammatism (Faroqi-Shah & Dickey, 2009; Lee, Milman, & Thompson, 2008). However, recent findings (Bastiaanse, 2008) and a reanalysis of some oral production studies (e.g. Lee et al., 2008; Nanousi, Masterson, Druks, & Atkinson, 2006) suggest that both Tense and Aspect are impaired and, most importantly, that reference to the past is selectively impaired, both through simple verb forms (such as simple past in English) and through periphrastic verb forms (such as the present perfect, ‘has V-ed’, in English). It will be argued that reference to the past is discourse linked and reference to the present and future is not (Zagona, 2003, in press). In line with Avrutin’s (2000) theory, which suggests that discourse linking is impaired in Broca’s aphasia, the PAst DIscourse LInking Hypothesis (PADILIH) has been formulated. Three predictions were tested: (1) patients with agrammatic aphasia are selectively impaired in use of grammatical morphology associated with reference to the past, whereas inflected forms which refer to the present and future are relatively spared; (2) this impairment is language-independent; and (3) this impairment will occur in both production and comprehension. Agrammatic Chinese, English and Turkish speakers were tested with the Test for Assessing Reference of Time (TART; Bastiaanse, Jonkers, & Thompson, unpublished). Results showed that both the

  12. Syntax and conversation in aphasia. A strategic restrictive use of Spanish and Catalan connector QUE by aphasic speakers.

    PubMed

    Hernández-Sacristán, Carlos; Rosell-Clari, Vicent

    2009-10-01

    Oral conversational data are deemed to be a relevant empirical source when it comes to formulating and supporting hypotheses about cognitive processes involved in aphasic linguistic production. With this assumption in mind, free conversational uses of the Spanish and Catalan connector QUE by fluent and non-fluent aphasic speakers are examined by contrasting them with normal speakers' (i.e. conversational partners') productions. Strictly ungrammatical uses in aphasic speakers are practically non-existent in free conversation. Nevertheless, these data permit one to characterize the aphasic production of the morpheme QUE as restrictive--to different degrees--with respect to normal production. Moreover, this restriction, selectively affecting the types of syntactic environments examined, can be considered strategic in nature: it is guided by some kind of knowledge about the administration of remnant linguistic resources.

  13. Functional categories in agrammatism: evidence from Greek.

    PubMed

    Stavrakaki, Stavroula; Kouvava, Sofia

    2003-07-01

    The aim of this study is twofold. First, to investigate the use of functional categories by two Greek agrammatic aphasics. Second, to discuss the implications of our findings for the characterization of the deficit in agrammatism. The functional categories under investigation were the following: definite and indefinite articles, personal pronouns, aspect, tense, subject-verb agreement, wh-pronouns, complementizers and the mood marker na (=to). Based on data collected through different methods, it is argued that the deficit in agrammatism cannot be described in terms of a structural account but rather by means of difficulties in the implementation of grammatical knowledge.

  14. Binding in agrammatic aphasia: Processing to comprehension

    PubMed Central

    Janet Choy, Jungwon; Thompson, Cynthia K.

    2010-01-01

    Background Theories of comprehension deficits in Broca’s aphasia have largely been based on the pattern of deficit found with movement constructions. However, some studies have found comprehension deficits with binding constructions, which do not involve movement. Aims This study investigates online processing and offline comprehension of binding constructions, such as reflexive (e.g., himself) and pronoun (e.g., him) constructions in unimpaired and aphasic individuals in an attempt to evaluate theories of agrammatic comprehension. Methods & Procedures Participants were eight individuals with agrammatic Broca’s aphasia and eight age-matched unimpaired individuals. We used eyetracking to examine online processing of binding constructions while participants listened to stories. Offline comprehension was also tested. Outcomes & Results The eye movement data showed that individuals with Broca’s aphasia were able to automatically process the correct antecedent of reflexives and pronouns. In addition, their syntactic processing of binding was not delayed compared to normal controls. Nevertheless, offline comprehension of both pronouns and reflexives was significantly impaired compared to the control participants. This comprehension failure was reflected in the aphasic participants’ eye movements at sentence end, where fixations to the competitor increased. Conclusions These data suggest that comprehension difficulties with binding constructions seen in agrammatic aphasic patients are not due to a deficit in automatic syntactic processing or delayed processing. Rather, they point to a possible deficit in lexical integration. PMID:20535243

  15. Mental Representation of Prepositional Compounds: Evidence from Italian Agrammatic Patients

    ERIC Educational Resources Information Center

    Mondini, S.; Luzzatti, C.; Saletta, P.; Allamano, N.; Semenza, C.

    2005-01-01

    The processing of Prepositional compounds (typical Neo-latin noun-noun modifications where a head noun is modified by a prepositional phrase, e.g., mulino a vento, windmill) was preliminarily studied with a group of six agrammatic aphasic patients, and, in more detail, with a further agrammatic patient (MB). Omission was the most frequent error…

  16. Parallel functional category deficits in clauses and nominal phrases: The case of English agrammatism

    PubMed Central

    Wang, Honglei; Yoshida, Masaya; Thompson, Cynthia K.

    2015-01-01

    Individuals with agrammatic aphasia exhibit restricted patterns of impairment of functional morphemes; however, the syntactic characterization of the impairment is controversial. Previous studies have focused on functional morphology in clauses only. This study extends the empirical domain by testing functional morphemes in English nominal phrases in aphasia and comparing patients’ impairment to their impairment of functional morphemes in English clauses. In the linguistics literature, it is assumed that clauses and nominal phrases are structurally parallel but exhibit inflectional differences. The results of the present study indicated that aphasic speakers evinced similar impairment patterns in clauses and nominal phrases. These findings are consistent with the Distributed Morphology Hypothesis (DMH), suggesting that the source of functional morphology deficits among agrammatics relates to difficulty implementing rules that convert inflectional features into morphemes. Our findings, however, are inconsistent with the Tree Pruning Hypothesis (TPH), which suggests that patients have difficulty building complex hierarchical structures. PMID:26379370

  17. The forgotten grammatical category: Adjective use in agrammatic aphasia.

    PubMed

    Meltzer-Asscher, Aya; Thompson, Cynthia K

    2014-07-01

    In contrast to nouns and verbs, the use of adjectives in agrammatic aphasia has not been systematically studied. However, because of the linguistic and psycholinguistic attributes of adjectives, some of which overlap with nouns and some with verbs, analysis of adjective production is important for testing theories of word class production deficits in agrammatism. The objective of the current study was to compare adjective use in agrammatic and healthy individuals, focusing on three factors: overall adjective production rate, production of predicative and attributive adjectives, and production of adjectives with complex argument structure. Narratives elicited from 14 agrammatic and 14 control participants were coded for open class grammatical category production (i.e., nouns, verbs, adjectives), with each adjective also coded for its syntactic environment (attributive/predicative) and argument structure. Overall, agrammatic speakers used adjectives in proportions similar to those of cognitively healthy speakers. However, they exhibited a greater proportion of predicative adjectives and a lesser proportion of attributive adjectives, compared to controls. Additionally, agrammatic participants produced adjectives with less complex argument structure than controls. The overall normal-like frequency of adjectives produced by agrammatic speakers suggests that agrammatism involves neither an inherent difficulty with adjectives as a word class or with predication, nor a deficit in processing low-imageability words. However, agrammatic individuals' reduced production of attributive adjectives and adjectives with complements extends previous findings of an adjunction deficit and of impairment in complex argument structure processing, respectively, to the adjectival domain. The results suggest that these deficits are not tied to a specific grammatical category.

  18. The forgotten grammatical category: Adjective use in agrammatic aphasia

    PubMed Central

    Meltzer-Asscher, Aya; Thompson, Cynthia K.

    2014-01-01

    Background In contrast to nouns and verbs, the use of adjectives in agrammatic aphasia has not been systematically studied. However, because of the linguistic and psycholinguistic attributes of adjectives, some of which overlap with nouns and some with verbs, analysis of adjective production is important for testing theories of word class production deficits in agrammatism. Aims The objective of the current study was to compare adjective use in agrammatic and healthy individuals, focusing on three factors: overall adjective production rate, production of predicative and attributive adjectives, and production of adjectives with complex argument structure. Method & Procedures Narratives elicited from 14 agrammatic and 14 control participants were coded for open class grammatical category production (i.e., nouns, verbs, adjectives), with each adjective also coded for its syntactic environment (attributive/predicative) and argument structure. Outcomes & Results Overall, agrammatic speakers used adjectives in proportions similar to those of cognitively healthy speakers. However, they exhibited a greater proportion of predicative adjectives and a lesser proportion of attributive adjectives, compared to controls. Additionally, agrammatic participants produced adjectives with less complex argument structure than controls. Conclusions The overall normal-like frequency of adjectives produced by agrammatic speakers suggests that agrammatism involves neither an inherent difficulty with adjectives as a word class or with predication, nor a deficit in processing low-imageability words. However, agrammatic individuals’ reduced production of attributive adjectives and adjectives with complements extends previous findings of an adjunction deficit and of impairment in complex argument structure processing, respectively, to the adjectival domain. The results suggest that these deficits are not tied to a specific grammatical category. PMID:24882945

  19. Agrammatism in Jordanian-Arabic Speakers

    ERIC Educational Resources Information Center

    Albustanji, Yusuf Mohammed

    2009-01-01

    Agrammatism is a frequent sequela of Broca's aphasia that manifests itself in omission and/or substitution of the grammatical morphemes in spontaneous and constrained speech. The hierarchical structure of syntactic trees has been proposed as an account for difficulty across grammatical morphemes (e.g., tense, agreement, and negation). Supporting…

  20. Tense and Agreement Dissociations in German Agrammatic Speakers: Underspecification Vs. Hierarchy

    ERIC Educational Resources Information Center

    Burchert, F.; Swoboda-Moll, M.; Bleser, R.D.

    2005-01-01

    The aim of the present paper was to investigate whether German agrammatic production data are compatible with the Tree-Pruning-Hypothesis (TPH; Friedmann & Grodzinsky, 1997). The theory predicts unidirectional patterns of dissociation in agrammatic production data with respect to Tense and Agreement. However, there was evidence of a double…

  21. Semantic, Lexical, and Phonological Influences on the Production of Verb Inflections in Agrammatic Aphasia

    ERIC Educational Resources Information Center

    Faroqi-Shah, Yasmeen; Thompson, Cynthia K.

    2004-01-01

    Verb inflection errors, often seen in agrammatic aphasic speech, have been attributed to either impaired encoding of diacritical features that specify tense and aspect, or to impaired affixation during phonological encoding. In this study we examined the effect of semantic markedness, word form frequency and affix frequency, as well as accuracy…

  22. La perception des morphemes grammaticaux chez les aphasiques (The Perception of Grammatical Morphemes in Aphasics). Montreal Working Papers in Linguistics, Vol. 2.

    ERIC Educational Resources Information Center

    Goodenough, Cheryl; And Others

    Studies have indicated that agrammatic aphasics tend to realize morphemes with a high level of semantic value more successfully. A study sought to examine how varying the information content of the article affects its comprehension by aphasic listeners. The appropriateness and the semantic significance of the function words "the" and "a" were varied with…

  23. Psychogenic or neurogenic origin of agrammatism and foreign accent syndrome in a bipolar patient: a case report.

    PubMed

    Poulin, Stéphane; Macoir, Joël; Paquet, Nancy; Fossard, Marion; Gagnon, Louis

    2007-01-04

    Foreign accent syndrome (FAS) is a rare speech disorder characterized by the appearance of a new accent, different from the speaker's native language and perceived as foreign by the speaker and the listener. In most of the reported cases, FAS follows stroke but has also been found following traumatic brain injury, cerebral haemorrhage and multiple sclerosis. In very few cases, FAS was reported in patients presenting with psychiatric disorders but the link between this condition and FAS was confirmed in only one case. In this report, we present the case of FG, a bipolar patient presenting with language disorders characterized by a foreign accent and agrammatism, initially categorized as being of psychogenic origin. The patient had an extensive neuropsychological and language evaluation as well as brain imaging exams. In addition to FAS and agrammatism, FG also showed a working memory deficit and executive dysfunction. Moreover, these clinical signs were related to altered cerebral activity on an FDG-PET scan that showed diffuse hypometabolism in the frontal, parietal and temporal lobes bilaterally as well as a focal deficit in the area of the anterior left temporal lobe. When compared to the MRI, these deficits were related to asymmetric atrophy, which was retrospectively seen in the left temporal and frontal opercular/insular region without a focal lesion. To our knowledge, FG is the first case of FAS imaged with an 18F-FDG-PET scan. The nature and type of neuropsychological and linguistic deficits, supported by neuroimaging data, exclude a neurotoxic or neurodegenerative origin for this patient's clinical manifestations. For similar reasons, a psychogenic etiology is also highly improbable. To account for the FAS and agrammatism in FG, various explanations have been ruled out. Because of the focal deficit seen on the brain imaging, involving the left insular and anterior temporal cortex, two brain regions frequently involved in aphasic syndrome but also in FAS, a

  24. A characterization of verb use in Turkish agrammatic narrative speech.

    PubMed

    Arslan, Seçkin; Bamyacı, Elif; Bastiaanse, Roelien

    2016-01-01

    This study investigates the characteristics of narrative-speech production and the use of verbs in Turkish agrammatic speakers (n = 10) compared to non-brain-damaged controls (n = 10). To elicit narrative-speech samples, personal interviews and storytelling tasks were conducted. Turkish has a large and regular verb inflection paradigm where verbs are inflected for evidentiality (i.e. direct versus indirect evidence available to the speaker). In particular, we explored the general characteristics of the speech samples (e.g. utterance length) and the uses of lexical, finite and non-finite verbs and direct and indirect evidentials. The results show that speech rate is slow, the number of verbs per utterance is lower than normal, and verb diversity is reduced in the agrammatic speakers. Verb inflection is relatively intact; however, a trade-off pattern between inflection for direct evidentials and verb diversity is found. The implications of the data are discussed in connection with narrative-speech production studies on other languages.

  25. Electrophysiological responses to argument structure violations in healthy adults and individuals with agrammatic aphasia

    PubMed Central

    Kielar, Aneta; Meltzer-Asscher, Aya; Thompson, Cynthia

    2012-01-01

    Sentence comprehension requires processing of argument structure information associated with verbs, i.e. the number and type of arguments that they select. Many individuals with agrammatic aphasia show impaired production of verbs with greater argument structure density. The extent to which these participants also show argument structure deficits during comprehension, however, is unclear. Some studies find normal access to verb arguments, whereas others report impaired ability. The present study investigated verb argument structure processing in agrammatic aphasia by examining event-related potentials associated with argument structure violations in healthy young and older adults as well as aphasic individuals. A semantic violation condition was included to investigate possible differences in sensitivity to semantic and argument structure information during sentence processing. Results for the healthy control participants showed a negativity followed by a positive shift (N400-P600) in the argument structure violation condition, as found in previous ERP studies (Friederici & Frisch, 2000; Frisch, Hahne, & Friederici, 2004). In contrast, individuals with agrammatic aphasia showed a P600, but no N400, response to argument structure mismatches. Additionally, compared to the control groups, the agrammatic participants showed an attenuated, but relatively preserved, N400 response to semantic violations. These data show that agrammatic individuals do not demonstrate normal real-time sensitivity to verb argument structure requirements during sentence processing. PMID:23022079

  26. Morphological and Phonological Factors in the Production of Verbal Inflection in Adult L2 Learners and Patients with Agrammatic Aphasia

    ERIC Educational Resources Information Center

    Szupica-Pyrzanowski, Malgorzata

    2009-01-01

    Failure to supply inflection is common in adult L2 learners of English and agrammatic aphasics (AAs), who are known to resort to bare verb forms. Among attempts to explain the absence of inflection are competing morphological and phonological explanations. In the L2 acquisition literature, omission of inflection is explained in terms of: mapping…

  27. Nasal Consonant Production in Broca's and Wernicke's Aphasics: Speech Deficits and Neuroanatomical Correlates

    ERIC Educational Resources Information Center

    Kurowski, Kathleen M.; Blumstein, Sheila E.; Palumbo, Carole L.; Waldstein, Robin S.; Burton, Martha W.

    2007-01-01

    The present study investigated the articulatory implementation deficits of Broca's and Wernicke's aphasics and their potential neuroanatomical correlates. Five Broca's aphasics, two Wernicke's aphasics, and four age-matched normal speakers produced consonant-vowel-(consonant) real word tokens consisting of [m, n] followed by [i, e, a, o, u]. Three…

  8. Language deficits, localization, and grammar: evidence for a distributive model of language breakdown in aphasic patients and neurologically intact individuals.

    PubMed

    Dick, F; Bates, E; Wulfeck, B; Utman, J A; Dronkers, N; Gernsbacher, M A

    2001-10-01

    Selective deficits in aphasic patients' grammatical production and comprehension are often cited as evidence that syntactic processing is modular and localizable in discrete areas of the brain (e.g., Y. Grodzinsky, 2000). The authors review a large body of experimental evidence suggesting that morpho-syntactic deficits can be observed in a number of aphasic and neurologically intact populations. They present new data showing that receptive agrammatism is found not only over a range of aphasic groups, but is also observed in neurologically intact individuals processing under stressful conditions. The authors suggest that these data are most compatible with a domain-general account of language, one that emphasizes the interaction of linguistic distributions with the properties of an associative processor working under normal or suboptimal conditions.

  9. Template construction grammar: from visual scene description to language comprehension and agrammatism.

    PubMed

    Barrès, Victor; Lee, Jinyong

    2014-01-01

    How does the language system coordinate with our visual system to yield flexible integration of linguistic, perceptual, and world-knowledge information when we communicate about the world we perceive? Schema theory is a computational framework that allows the simulation of perceptuo-motor coordination programs on the basis of known brain operating principles such as cooperative computation and distributed processing. We first present its application to a model of language production, SemRep/TCG, which combines a semantic representation of visual scenes (SemRep) with Template Construction Grammar (TCG) as a means to generate verbal descriptions of a scene from its associated SemRep graph. SemRep/TCG combines the neurocomputational framework of schema theory with the representational format of construction grammar in a model linking eye-tracking data to visual scene descriptions. We then offer a conceptual extension of TCG to include language comprehension and address data on the role of both world knowledge and grammatical semantics in the comprehension performances of agrammatic aphasic patients. This extension introduces a distinction between heavy and light semantics. The TCG model of language comprehension offers a computational framework to quantitatively analyze the distributed dynamics of language processes, focusing on the interactions between grammatical, world knowledge, and visual information. In particular, it reveals interesting implications for the understanding of the various patterns of comprehension performances of agrammatic aphasics measured using sentence-picture matching tasks. This new step in the life cycle of the model serves as a basis for exploring the specific challenges that neurolinguistic computational modeling poses to the neuroinformatics community.

  10. Perception of co-speech gestures in aphasic patients: a visual exploration study during the observation of dyadic conversations.

    PubMed

    Preisig, Basil C; Eggenberger, Noëmi; Zito, Giuseppe; Vanbellingen, Tim; Schumacher, Rahel; Hopfner, Simone; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Müri, René M

    2015-03-01

    Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less. Copyright © 2014 Elsevier Ltd. All rights reserved.
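
    The fixation measures reported above (cumulative and mean fixation duration per region of interest) reduce to a simple aggregation over detected fixations. The sketch below is purely illustrative: the (ROI, duration) input format, the ROI labels, and the function name are assumptions, not the authors' actual analysis pipeline.

```python
from collections import defaultdict

def fixation_stats(fixations):
    """fixations: list of (roi, duration_ms) pairs, one per detected fixation."""
    totals = defaultdict(float)  # cumulative fixation duration per ROI
    counts = defaultdict(int)    # number of fixations per ROI
    for roi, duration in fixations:
        totals[roi] += duration
        counts[roi] += 1
    # Mean fixation duration = cumulative duration / number of fixations.
    means = {roi: totals[roi] / counts[roi] for roi in totals}
    return dict(totals), means

# Toy data: three fixations on the speaker's face, one on the hands.
totals, means = fixation_stats(
    [("face", 300.0), ("face", 250.0), ("hands", 120.0), ("face", 200.0)]
)
```

    In a full analysis these aggregates would be computed separately per condition cell (co-speech gesture present/absent, gaze direction, group) before running the reported interaction tests.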

  11. Compound nouns in spoken language production by speakers with aphasia compared to neurologically healthy speakers: an exploratory study.

    PubMed

    Eiesland, Eli Anne; Lind, Marianne

    2012-03-01

    Compounds are words made up of at least two other words (lexemes); they exhibit both lexical and syntactic characteristics, which makes them particularly interesting for the study of language processing. Most studies of compounds and language processing have been based on data from experimental single word production and comprehension tasks. To enhance the ecological validity of morphological processing research, data from other contexts, such as discourse production, need to be considered. This study investigates the production of nominal compounds in semi-spontaneous spoken texts by a group of speakers with fluent types of aphasia compared to a group of neurologically healthy speakers. The speakers with aphasia produce significantly fewer nominal compound types in their texts than the non-aphasic speakers, and the compounds they produce exhibit fewer different types of semantic relations than the compounds produced by the non-aphasic speakers. The results are discussed in relation to theories of language processing.

  12. Production of non-canonical sentences in agrammatic aphasia: limits in representation or rule application?

    PubMed

    Burchert, Frank; Meissner, Nadine; De Bleser, Ria

    2008-02-01

    The study reported here compares two linguistically informed hypotheses on agrammatic sentence production, the TPH [Friedmann, N., & Grodzinsky, Y. (1997). Tense and agreement in agrammatic production: Pruning the syntactic tree. Brain and Language, 56, 397-425.] and the DOP [Bastiaanse, R., & van Zonneveld, R. (2005). Sentence production with verbs of alternating transitivity in agrammatic Broca's aphasia. Journal of Neurolinguistics, 18, 59-66]. To explain impaired production of non-canonical sentences in agrammatism, the TPH basically relies on deleted or pruned clause structure positions in the left periphery, whereas the DOP appeals to limitations in the application of movement rules. Certain non-canonical sentences such as object-questions and object-relative clauses require the availability of nodes in the left periphery as well as movement to these nodes. In languages with relatively fixed word order such as English, the relevant test cases generally involve a coincidence of left periphery and movement, such that the predictions of the TPH and the DOP are identical although for different reasons. In languages with relatively free word order such as German, on the other hand, it is possible to devise specific tests of the different predictions due to the availability of scrambling. Scrambled object sentences, for example, do not involve the left periphery but do require application of movement in a domain below the left periphery. A study was conducted with German agrammatic subjects which elicited canonical sentences without object movement and non-canonical scrambled sentences with object movement. The results show that agrammatic speakers have a particular problem with the production of scrambled sentences. Further evidence reported in the study from spontaneous speech, elicitation of object relatives, questions and passives and with different agrammatic subjects confirms that non-canonical sentences are generally harder to produce for agrammatics. These…

  13. Semantic Interference during Object Naming in Agrammatic and Logopenic Primary Progressive Aphasia (PPA)

    ERIC Educational Resources Information Center

    Thompson, Cynthia K.; Cho, Soojin; Price, Charis; Wieneke, Christina; Bonakdarpour, Borna; Rogalski, Emily; Weintraub, Sandra; Mesulam, M-Marsel

    2012-01-01

    This study examined the time course of object naming in 21 individuals with primary progressive aphasia (PPA) (8 agrammatic (PPA-G); 13 logopenic (PPA-L)) and healthy age-matched speakers (n=17) using a semantic interference paradigm with related and unrelated interfering stimuli presented at stimulus onset asynchronies (SOAs) of -1000, -500, -100…

  14. Aphasic variant of Alzheimer disease

    PubMed Central

    Sridhar, Jaiashre; Rader, Benjamin; Martersteck, Adam; Chen, Kewei; Cobia, Derin; Thompson, Cynthia K.; Weintraub, Sandra; Bigio, Eileen H.; Mesulam, M.-Marsel

    2016-01-01

    Objective: To identify features of primary progressive aphasia (PPA) associated with Alzheimer disease (AD) neuropathology. A related objective was to determine whether logopenic PPA is a clinical marker for AD. Methods: A total of 139 prospectively enrolled participants with a root diagnosis of PPA constituted the reference set. Those with autopsy or biomarker evidence of AD, and who had been evaluated at mild disease stages (Aphasia Quotient ≥85), were included (n = 19). All had quantitative language testing and APOE genotyping. Fifteen had MRI morphometry. Results: Impaired word-finding was the universal presenting complaint in the aphasic AD group. PPA clinical subtype was logopenic (n = 13) and agrammatic (n = 6). Fluency, repetition, naming, and grammaticality ranged from preserved to severely impaired. All had relative preservation of word comprehension. Eight of the 15 aphasic participants with AD showed no appreciable cortical atrophy at the individual level on MRI. As a group, atrophy was asymmetrically concentrated in the left perisylvian cortex. APOE ε4 frequency was not elevated. Conclusions: There is a close, but not obligatory, association between logopenic PPA and AD. No language measure, with the possible exception of word comprehension, can confirm or exclude AD in PPA. Biomarkers are therefore essential for diagnosis. Asymmetry of cortical atrophy and normal APOE ε4 prevalence constitute deviations from typical AD. These and additional neuropathologic features suggest that AD has biological subtypes, one of which causes PPA. Better appreciation of this fact should promote the inclusion of individuals with PPA and positive AD biomarkers into relevant clinical trials. PMID:27566743

  15. Analysis of VOT in Turkish Speakers with Aphasia

    ERIC Educational Resources Information Center

    Kopkalli-Yavuz, Handan; Mavis, Ilknur; Akyildiz, Didem

    2011-01-01

    Studies investigating voicing onset time (VOT) production by speakers with aphasia have shown that nonfluent aphasics show a deficit in the articulatory programming of speech sounds based on the range of VOT values produced by aphasic individuals. If the VOT value lies between the normal range of VOT for the voiced and voiceless categories, then…

  16. Verb and auxiliary movement in agrammatic Broca’s aphasia

    PubMed Central

    Bastiaanse, Roelien; Thompson, Cynthia K.

    2011-01-01

    Verb production in agrammatic Broca’s aphasia has repeatedly been shown by a number of investigators to be impaired. Not only is the number of verbs produced often significantly reduced, but verb inflections and auxiliaries are often omitted as well (e.g., Bastiaanse, Jonkers, & Moltmaker-Osinga, 1996; Saffran, Berndt, & Schwartz, 1989; Thompson, Shapiro, Li, & Schendel, 1994, 1997). It has been suggested that these problems are, in part, caused by the fact that finite verbs need to be moved from their base-generated position to inflectional nodes in the syntactic tree (e.g., Bastiaanse & Van Zonneveld, 1998). Others have suggested that production deficits in agrammatism can be predicted based on the position that certain structures take in the syntactic tree (Friedmann & Grodzinsky, 1997; Hagiwara, 1995). If the former theory is correct, several predictions can be made. First of all, the discrepancy between production of finite verbs in the matrix and embedded clause that has been found for Dutch (Bastiaanse & Van Zonneveld, 1998) should not be observed in English, since the word order of the matrix and embedded clause are the same in the latter language. Second, if verb movement (including movement of auxiliaries) is problematic for speakers with agrammatic aphasia, then a hierarchy in the production of auxiliaries in yes/no questions, auxiliaries, and finite verbs in declarative sentences in English would be expected, since the former has been moved and the latter two are in base-generated position. In the present paper, these hypotheses were tested in a cross-linguistic study of Dutch and English. Results showed that the position in the syntactic tree does not predict deficit patterns; rather the critical factor appears to relate to whether or not verb or auxiliary movement is required. PMID:12590917

  17. A model of serial order problems in fluent, stuttered and agrammatic speech.

    PubMed

    Howell, Peter

    2007-10-01

    Many models of speech production have attempted to explain dysfluent speech. Most models assume that the disruptions that occur when speech is dysfluent arise because the speakers make errors while planning an utterance. In this contribution, a model of the serial order of speech is described that does not make this assumption. It involves the coordination or 'interlocking' of linguistic planning and execution stages at the language-speech interface. The model is examined to determine whether it can distinguish two forms of dysfluent speech (stuttered and agrammatic speech) that are characterized by iteration and omission of whole words and parts of words.

  18. Aphasic variant of Alzheimer disease: Clinical, anatomic, and genetic features.

    PubMed

    Rogalski, Emily; Sridhar, Jaiashre; Rader, Benjamin; Martersteck, Adam; Chen, Kewei; Cobia, Derin; Thompson, Cynthia K; Weintraub, Sandra; Bigio, Eileen H; Mesulam, M-Marsel

    2016-09-27

    To identify features of primary progressive aphasia (PPA) associated with Alzheimer disease (AD) neuropathology. A related objective was to determine whether logopenic PPA is a clinical marker for AD. A total of 139 prospectively enrolled participants with a root diagnosis of PPA constituted the reference set. Those with autopsy or biomarker evidence of AD, and who had been evaluated at mild disease stages (Aphasia Quotient ≥85), were included (n = 19). All had quantitative language testing and APOE genotyping. Fifteen had MRI morphometry. Impaired word-finding was the universal presenting complaint in the aphasic AD group. PPA clinical subtype was logopenic (n = 13) and agrammatic (n = 6). Fluency, repetition, naming, and grammaticality ranged from preserved to severely impaired. All had relative preservation of word comprehension. Eight of the 15 aphasic participants with AD showed no appreciable cortical atrophy at the individual level on MRI. As a group, atrophy was asymmetrically concentrated in the left perisylvian cortex. APOE ε4 frequency was not elevated. There is a close, but not obligatory, association between logopenic PPA and AD. No language measure, with the possible exception of word comprehension, can confirm or exclude AD in PPA. Biomarkers are therefore essential for diagnosis. Asymmetry of cortical atrophy and normal APOE ε4 prevalence constitute deviations from typical AD. These and additional neuropathologic features suggest that AD has biological subtypes, one of which causes PPA. Better appreciation of this fact should promote the inclusion of individuals with PPA and positive AD biomarkers into relevant clinical trials. © 2016 American Academy of Neurology.

  19. A mapping theory of agrammatic comprehension deficits.

    PubMed

    O'Grady, William; Lee, Miseon

    2005-01-01

    This paper offers evidence for the Isomorphic Mapping Hypothesis, which holds that individuals with agrammatic aphasia tend to have difficulty comprehending sentences in which the order of NPs is not aligned with the structure of the corresponding event. We begin by identifying a set of constructions in English and Korean for which the IMH makes predictions distinct from those of canonical order and trace-based theories of agrammatic comprehension. Then, drawing on data involving the interpretation of those patterns by English-speaking and Korean-speaking agrammatics, we argue for the conceptual and empirical superiority of the isomorphic mapping account.

  20. Real-time comprehension of wh- movement in aphasia: Evidence from eyetracking while listening

    PubMed Central

    Dickey, Michael Walsh; Choy, JungWon Janet; Thompson, Cynthia K.

    2007-01-01

    Sentences with non-canonical wh- movement are often difficult for individuals with agrammatic Broca's aphasia to understand (Caramazza & Zurif, 1976, inter alia). However, the explanation of this difficulty remains controversial, and little is known about how individuals with aphasia try to understand such sentences in real time. This study uses an eyetracking while listening paradigm (Tanenhaus et al., 1995) to examine agrammatic aphasic individuals' on-line comprehension of movement sentences. Participants' eye-movements were monitored while they listened to brief stories and looked at visual displays depicting elements mentioned in the story. These stories were followed by comprehension probes involving wh- movement. In line with previous results for young normal listeners (Sussman & Sedivy, 2003), the study finds that both older unimpaired control participants (n=8) and aphasic individuals (n=12) showed visual evidence of successful automatic comprehension of wh- questions (like “Who did the boy kiss that day at school?”). Specifically, both groups fixated on a picture corresponding to the moved element (“who,” the person kissed in the story) at the position of the verb. Interestingly, aphasic participants showed qualitatively different fixation patterns for trials eliciting correct and incorrect responses. Aphasic individuals looked first to the moved-element picture and then to a competitor following the verb in the incorrect trials, indicating initially correct automatic processing. However, they only showed looks to the moved-element picture for the correct trials, parallel to control participants. Furthermore, aphasic individuals' fixations during movement sentences were just as fast as control participants' fixations. These results are unexpected under slowed-processing accounts of aphasic comprehension deficits, in which the source of failed comprehension should be delayed application of the same processing routines used in successful…

  1. Tracking the development of agrammatic aphasia: A tensor-based morphometry study.

    PubMed

    Whitwell, Jennifer L; Duffy, Joseph R; Machulda, Mary M; Clark, Heather M; Strand, Edythe A; Senjem, Matthew L; Gunter, Jeffrey L; Spychalla, Anthony J; Petersen, Ronald C; Jack, Clifford R; Josephs, Keith A

    2017-05-01

    Agrammatic aphasia can be observed in neurodegenerative disorders and has been traditionally linked with damage to Broca's area, although there have been disagreements concerning whether damage to Broca's area is necessary or sufficient for the development of agrammatism. We aimed to investigate the neuroanatomical correlates of the emergence of agrammatic aphasia utilizing a unique cohort of patients with primary progressive apraxia of speech (PPAOS) that did not have agrammatism at baseline but developed agrammatic aphasia over time. Twenty PPAOS patients were recruited and underwent detailed speech/language assessments and 3T MRI at two visits, approximately two years apart. None of the patients showed evidence of agrammatism in writing or speech at baseline. Eight patients developed aphasia at follow-up (progressors) and 12 did not (non-progressors). Tensor-based morphometry utilizing symmetric normalization (SyN) was used to assess patterns of grey matter atrophy and voxel-based morphometry was used to assess patterns of grey matter loss at baseline. The progressors were younger at onset and more likely to show distorted sound substitutions or additions compared to non-progressors. Both groups showed change over time in premotor and motor cortices, posterior frontal lobe, basal ganglia, thalamus and midbrain, but the progressors showed greater rates of atrophy in left pars triangularis, thalamus and putamen compared to non-progressors. The progressors also showed greater grey matter loss in pars triangularis and putamen at baseline. This cohort provided a unique opportunity to assess the anatomical changes that accompany the development of agrammatic aphasia. The results suggest that damage to a network of regions including Broca's area, thalamus and basal ganglia is responsible for the development of agrammatic aphasia in PPAOS. Clinical and neuroimaging abnormalities were also present before the onset of agrammatism that could help improve prognosis in…

  2. Real-time comprehension of wh- movement in aphasia: evidence from eyetracking while listening.

    PubMed

    Dickey, Michael Walsh; Choy, JungWon Janet; Thompson, Cynthia K

    2007-01-01

    Sentences with non-canonical wh- movement are often difficult for individuals with agrammatic Broca's aphasia to understand (Caramazza & Zurif, 1976, inter alia). However, the explanation of this difficulty remains controversial, and little is known about how individuals with aphasia try to understand such sentences in real time. This study uses an eyetracking while listening paradigm to examine agrammatic aphasic individuals' on-line comprehension of movement sentences. Participants' eye-movements were monitored while they listened to brief stories and looked at visual displays depicting elements mentioned in the stories. The stories were followed by comprehension probes involving wh- movement. In line with previous results for young normal listeners [Sussman, R. S., & Sedivy, J. C. (2003). The time-course of processing syntactic dependencies: evidence from eye movements. Language and Cognitive Processes, 18, 143-161], the study finds that both older unimpaired control participants (n=8) and aphasic individuals (n=12) showed visual evidence of successful automatic comprehension of wh- questions (like "Who did the boy kiss that day at school?"). Specifically, both groups fixated on a picture corresponding to the moved element ("who," the person kissed in the story) at the position of the verb. Interestingly, aphasic participants showed qualitatively different fixation patterns for trials eliciting correct and incorrect responses. Aphasic individuals looked first to the moved-element picture and then to a competitor following the verb in the incorrect trials. However, they only showed looks to the moved-element picture for the correct trials, parallel to control participants. Furthermore, aphasic individuals' fixations during movement sentences were just as fast as control participants' fixations. These results are unexpected under slowed-processing accounts of aphasic comprehension deficits, in which the source of failed comprehension should be delayed application of the same processing…

  3. Limb apraxia in aphasic patients.

    PubMed

    Ortiz, Karin Zazo; Mantovani-Nagaoka, Joana

    2017-11-01

    Limb apraxia is usually associated with left cerebral hemisphere damage, with numerous case studies involving aphasic patients. The aim of this study was to verify the occurrence of limb apraxia in aphasic patients and analyze its nature. This study involved 44 healthy volunteers and 28 aphasic patients matched for age and education. All participants were assessed using a limb apraxia battery comprising subtests evaluating lexical-semantic aspects related to the comprehension/production of gestures as well as motor movements. Aphasics had worse performances on many tasks related to conceptual components of gestures. The difficulty found in the imitation of dynamic gesture tasks also indicated that there were specific motor difficulties in gesture planning. These results reinforce the importance of conducting limb apraxia assessment in aphasic patients and also highlight pantomime difficulties as a good predictor for semantic disturbances.

  4. Verb deficits in Alzheimer’s disease and agrammatism: Implications for lexical organization

    PubMed Central

    Kim, Mikyong; Thompson, Cynthia K.

    2011-01-01

    This study examined the nature of verb deficits in 14 individuals with probable Alzheimer’s Disease (PrAD) and nine with agrammatic aphasia. Production was tested, controlling both semantic and syntactic features of verbs, using noun and verb naming, sentence completion, and narrative tasks. Noun and verb comprehension and a grammaticality judgment task also were administered. Results showed that while both PrAD and agrammatic subjects showed impaired verb naming, the syntactic features of verbs (i.e., argument structure) influenced agrammatic, but not Alzheimer’s disease patients’ verb production ability. That is, agrammatic patients showed progressively greater difficulty with verbs associated with more arguments, as has been shown in previous studies (e.g., Kim & Thompson, 2000; Thompson, 2003; Thompson, Lange, Schneider, & Shapiro, 1997), suggesting a syntactic basis for verb production deficits in agrammatism. Conversely, the semantic complexity of verbs affected PrAD, but not agrammatic, patients’ performance, suggesting “bottom-up” breakdown in their verb lexicon, paralleling that of nouns, resulting from the degradation or loss of semantic features of verbs. PMID:14698726

  5. Analysis of prototypical narratives produced by aphasic individuals and cognitively healthy subjects

    PubMed Central

    Silveira, Gabriela; Mansur, Letícia Lessa

    2015-01-01

    Aphasia can globally or selectively affect comprehension and production of verbal and written language. Discourse analysis can aid language assessment and diagnosis. Objective: [1] To explore narratives that produce a number of valid indicators for diagnosing aphasia in speakers of Brazilian Portuguese. [2] To analyze the macrostructural aspects of the discourse of normal individuals. [3] To analyze the macrostructural aspects of the discourse of aphasic individuals. Methods: The macrostructural aspects of three narratives produced by aphasic individuals and cognitively healthy subjects were analyzed. Results: A total of 30 volunteers were examined comprising 10 aphasic individuals (AG) and 20 healthy controls (CG). The CG included 5 males. The CG had a mean age of 38.9 years (SD=15.61) and mean schooling of 13 years (SD=2.67) whereas the AG had a mean age of 51.7 years (SD=17.3) and mean schooling of 9.1 years (SD=3.69). Participants were asked to narrate three fairy tales as a basis for analyzing the macrostructure of discourse. Comparison of the three narratives revealed no statistically significant difference in number of propositions produced by the groups. A significant negative correlation was found between age and number of propositions produced. Also, statistically significant differences were observed in the number of propositions produced by the individuals in the CG and the AG for the three tales. Conclusion: It was concluded that the three tales are applicable for discourse assessment, containing a similar number of propositions and differentiating aphasic individuals and cognitively healthy subjects based on analysis of the macrostructure of discourse. PMID:29213973

  6. The influence of phonological context on the sound errors of a speaker with Wernicke's aphasia.

    PubMed

    Goldmann, R E; Schwartz, M F; Wilshire, C E

    2001-09-01

    A corpus of phonological errors produced in narrative speech by a Wernicke's aphasic speaker (R.W.B.) was tested for context effects using two new methods for establishing chance baselines. A reliable anticipatory effect was found using the second method, which estimated chance from the distance between phoneme repeats in the speech sample containing the errors. Relative to this baseline, error-source distances were shorter than expected for anticipations, but not perseverations. R.W.B.'s anticipation/perseveration ratio measured intermediate between a nonaphasic error corpus and that of a more severe aphasic speaker (both reported in Schwartz et al., 1994), supporting the view that the anticipatory bias correlates with severity. Finally, R.W.B.'s anticipations favored word-initial segments, although errors and sources did not consistently share word or syllable position. Copyright 2001 Academic Press.

  7. Preserved processing of musical structure in a person with agrammatic aphasia.

    PubMed

    Slevc, L Robert; Faroqi-Shah, Yasmeen; Saxena, Sadhvi; Okada, Brooke M

    2016-12-01

    Evidence for shared processing of structure (or syntax) in language and in music conflicts with neuropsychological dissociations between the two. However, while harmonic structural processing can be impaired in patients with spared linguistic syntactic abilities (Peretz, I. (1993). Auditory atonalia for melodies. Cognitive Neuropsychology, 10, 21-56. doi:10.1080/02643299308253455), evidence for the opposite dissociation, preserved harmonic processing despite agrammatism, is largely lacking. Here, we report one such case: HV, a former musician with Broca's aphasia and agrammatic speech, was impaired in making linguistic, but not musical, acceptability judgments. Similarly, she showed no sensitivity to linguistic structure, but normal sensitivity to musical structure, in implicit priming tasks. To our knowledge, this is the first non-anecdotal report of a patient with agrammatic aphasia demonstrating preserved harmonic processing abilities, supporting claims that aspects of musical and linguistic structure rely on distinct neural mechanisms.

  8. Comprehension of Idioms in Turkish Aphasic Participants.

    PubMed

    Aydin, Burcu; Barin, Muzaffer; Yagiz, Oktay

    2017-12-01

    Brain damaged participants offer an opportunity to evaluate the cognitive and linguistic processes and make assumptions about how the brain works. Cognitive linguists have been investigating the underlying mechanisms of idiom comprehension to unravel the ongoing debate on hemispheric specialization in figurative language comprehension. The aim of this study is to evaluate and compare the comprehension of idiomatic expressions in left brain damaged (LBD) aphasic, right brain damaged (RBD) and healthy control participants. Idiom comprehension in eleven LBD aphasic participants, ten RBD participants and eleven healthy control participants was assessed with three tasks: String to Picture Matching Task, Literal Sentence Comprehension Task and Oral Idiom Definition Task. The results of the tasks showed that in the overall idiom comprehension category, the left brain-damaged aphasic participants interpret idioms more literally compared to right brain-damaged participants. What is more, there is a significant difference in opaque idiom comprehension implying that left brain-damaged aphasic participants perform worse compared to right brain-damaged participants. On the other hand, there is no statistically significant difference in scores of transparent idiom comprehension between the left brain-damaged aphasic and right brain-damaged participants. This result also contributes to the idea that while the figurative processing system is damaged in LBD aphasics, the literal comprehension mechanism is spared to some extent. The results of this study support the view that idiom comprehension sites are mainly left lateralized. Furthermore, the results of this study are consistent with Giora's Graded Salience Hypothesis.

  9. Production of Non-Canonical Sentences in Agrammatic Aphasia: Limits in Representation or Rule Application?

    ERIC Educational Resources Information Center

    Burchert, Frank; Meissner, Nadine; De Bleser, Ria

    2008-01-01

    The study reported here compares two linguistically informed hypotheses on agrammatic sentence production, the TPH [Friedmann, N., & Grodzinsky, Y. (1997). "Tense and agreement in agrammatic production: Pruning the syntactic tree." "Brain and Language," 56, 397-425.] and the DOP [Bastiaanse, R., & van Zonneveld, R. (2005). "Sentence production…

  10. Automatic speech recognition (ASR) based approach for speech therapy of aphasic patients: A review

    NASA Astrophysics Data System (ADS)

    Jamal, Norezmi; Shanta, Shahnoor; Mahmud, Farhanahani; Sha'abani, MNAH

    2017-09-01

    This paper reviews state-of-the-art automatic speech recognition (ASR) based approaches for speech therapy of aphasic patients. Aphasia is a condition in which the affected person suffers from a speech and language disorder resulting from a stroke or brain injury. Since there is a growing body of evidence indicating the possibility of improving symptoms at an early stage, ASR based solutions are increasingly being researched for speech and language therapy. ASR is a technology that transcribes human speech into text by matching it against the system's library. This is particularly useful in speech rehabilitation therapies because it provides accurate, real-time evaluation of speech input from an individual with a speech disorder. ASR based approaches for speech therapy recognize the speech input from the aphasic patient and provide real-time feedback on their mistakes. However, the accuracy of ASR depends on many factors, such as phoneme recognition, speech continuity, speaker and environmental differences, as well as the depth of knowledge on human language understanding. Hence, the review examines recent developments in ASR technologies and their performance for individuals with speech and language disorders.
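    The feedback step described above (recognize the patient's utterance, compare it with the target prompt, and point out mistakes) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not code from any of the reviewed systems: the recognizer output is stubbed as a plain string, and the helper `score_attempt` with its word-level sequence matching is a hypothetical design choice.

    ```python
    # Minimal sketch of ASR-based therapy feedback: compare a recognized
    # transcript against the target prompt and report how close it was.
    # The ASR front end is stubbed out; only the feedback logic is shown.
    from difflib import SequenceMatcher

    def score_attempt(target: str, attempt: str):
        """Return (similarity ratio, words missing from the attempt)."""
        t_words = target.lower().split()
        a_words = attempt.lower().split()
        ratio = SequenceMatcher(None, t_words, a_words).ratio()
        missed = [w for w in t_words if w not in a_words]
        return ratio, missed

    # Pretend the recognizer returned "the cat sat on mat" for this prompt.
    ratio, missed = score_attempt("the cat sat on the mat", "the cat sat on mat")
    print(round(ratio, 3), missed)  # → 0.909 []
    ```

    In a deployed system the attempt string would come from the ASR decoder, and word-level matching would likely be replaced by phoneme-level alignment, which the review identifies as a key accuracy factor.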

  11. An Investigation of Luria's Hypothesis on Prompting in Aphasic Naming Disturbances.

    ERIC Educational Resources Information Center

    Li, Edith Chin; Canter, Gerald J.

    1987-01-01

    The study investigated A. R. Luria's hypothesis that aphasic subgroups (Broca's, conduction, Wernicke's, and anomic aphasics) would respond differentially to phonemic prompts. Results, with the exception of the anomic aphasic group, supported Luria's predictions. (Author/DB)

  12. Time Course of Grammatical Encoding in Agrammatism

    ERIC Educational Resources Information Center

    Lee, Jiyeon

    2011-01-01

    Producing a sentence involves encoding a preverbal message into a grammatical structure by retrieving lexical items and integrating them into a functional (semantic-to-grammatical) structure. Individuals with agrammatism are impaired in this grammatical encoding process. However, it is unclear what aspect of grammatical encoding is impaired and…

  13. Aphasic and amnesic patients' verbal vs. nonverbal retentive abilities.

    PubMed

    Cermak, L S; Tarlow, S

    1978-03-01

    Four different groups of patients (aphasics, alcoholic Korsakoffs, chronic alcoholics, and control patients) were asked to detect either repeated words presented orally, repeated words presented visually, repeated pictures or repeated shapes, during the presentation of a list of similarly constructed stimuli. It was discovered that on the verbal tasks, the number of words intervening between repetitions had more effect on the aphasics than on the other groups of patients. However, for the nonverbal picture repetition and shape repetition tasks, the aphasics' performance was normal, while the alcoholic Korsakoff patients were most affected by the number of intervening items. It was concluded that the aphasics' memory deficit demonstrated by the use of this paradigm was specific to the presentation of verbal material.

  14. Proform-Antecedent Linking in Individuals with Agrammatic Aphasia: A Test of the Intervener Hypothesis.

    PubMed

    Engel, Samantha; Shapiro, Lewis P; Love, Tracy

    2018-02-01

    To evaluate processing and comprehension of pronouns and reflexives in individuals with agrammatic (Broca's) aphasia and age-matched control participants. Specifically, we evaluate processing and comprehension patterns in terms of a specific hypothesis, the Intervener Hypothesis, which posits that the difficulty of individuals with agrammatic (Broca's) aphasia results from similarity-based interference caused by the presence of an intervening NP between two elements of a dependency chain. We used an eye tracking-while-listening paradigm to investigate real-time processing (Experiment 1) and a sentence-picture matching task to investigate final interpretive comprehension (Experiment 2) of sentences containing proforms in complement phrase and subject relative constructions. Individuals with agrammatic aphasia demonstrated a greater proportion of gazes to the correct referent of reflexives relative to pronouns and significantly greater comprehension accuracy of reflexives relative to pronouns. These results provide support for the Intervener Hypothesis, previous support for which comes from studies of Wh- questions and unaccusative verbs, and we argue that this account provides an explanation for the deficits of individuals with agrammatic aphasia across a growing set of sentence constructions. The current study extends this hypothesis beyond filler-gap dependencies to referential dependencies and allows us to refine the hypothesis in terms of the structural constraints that meet the description of the Intervener Hypothesis.

  15. Application of binaural beat phenomenon with aphasic patients.

    PubMed

    Barr, D F; Mullin, T A; Herbert, P S

    1977-04-01

    We investigated whether six aphasics and six normal subjects could binaurally fuse two slightly differing frequencies of constant amplitude. The aphasics were subdivided into two groups: (1) two men who had had mild cerebrovascular accidents (CVAs) during the past 15 months; (2) four men who had had severe CVAs during the last 15 months. Two tones of different frequency but equal intensity were presented dichotically to the subjects at 40 dB sensation level. All subjects had normal hearing at 500 Hz (0 to 25 dB). All six normal subjects and the two aphasics who had had mild CVAs could hear the binaural beats. The four aphasics who had had severe CVAs could not hear them. The resulting 2 x 2 design was analysed using a chi-square test with Yates' correction and was found to be significantly different (p < .05). Two theories are presented to explain these findings: the "depression theory" and the "temporal time-sequencing theory." Therapeutic implications are also discussed relative to cerebral and/or brain stem involvement in the fusion of binaural stimuli.
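    The reported statistic can be reproduced from first principles. The sketch below computes a chi-square test with Yates' continuity correction for a 2 x 2 table; the counts used (8 listeners who heard the beats versus 4 who did not, split by group) are an illustrative reconstruction from the abstract, not the authors' raw table.

    ```python
    # Chi-square test with Yates' continuity correction for a 2x2 table,
    # computed from first principles (no external stats library needed).
    # Counts are an illustrative reconstruction from the abstract, NOT the
    # authors' raw data: rows = group (mild/normal vs. severe CVA),
    # columns = outcome (heard beats vs. did not).

    def yates_chi_square(a, b, c, d):
        """chi2 = N * (|ad - bc| - N/2)^2 / ((a+b)(c+d)(a+c)(b+d))"""
        n = a + b + c + d
        num = n * (abs(a * d - b * c) - n / 2) ** 2
        den = (a + b) * (c + d) * (a + c) * (b + d)
        return num / den

    chi2 = yates_chi_square(8, 0, 0, 4)
    print(round(chi2, 3))   # → 7.922
    print(chi2 > 3.841)     # → True: exceeds the .05 critical value, df = 1
    ```

    With these hypothetical counts the statistic clears the 3.841 critical value for one degree of freedom, matching the abstract's report of significance at p < .05.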

  16. Acquisition of a Non-Vocal 'Language' by Aphasic Children

    ERIC Educational Resources Information Center

    Hughes, Jennifer

    1974-01-01

    Aphasic children were taught to communicate via a system of visual symbols devised by Premack (1969) for use with chimpanzees. Subjects readily learned to express several language functions in this way. "Premackese" is better viewed as a communication system than as a language. It may be that aphasic children lack some specifically linguistic ability.…

  17. Reference Assignment: Using Language Breakdown to Choose between Theoretical Approaches

    ERIC Educational Resources Information Center

    Ruigendijk, Esther; Vasic, Nada; Avrutin, Sergey

    2006-01-01

    We report results of an experimental study with Dutch agrammatic aphasics that investigated their ability to interpret pronominal elements in transitive clauses and Exceptional Case Marking constructions (ECM). Using the obtained experimental results as a tool, we distinguish between three competing linguistic theories that aim at determining…

  18. The use of the picture–word interference paradigm to examine naming abilities in aphasic individuals

    PubMed Central

    Hashimoto, Naomi; Thompson, Cynthia K.

    2015-01-01

    Background Although naming deficits are well documented in aphasia, on-line measures of naming processes have been little investigated. The use of on-line measures may offer further insight into the nature of aphasic naming deficits that would otherwise be difficult to interpret when using off-line measures. Aims The temporal activation of semantic and phonological processes was tracked in older normal control and aphasic individuals using a picture–word interference paradigm. The purpose of the study was to examine how word interference results can augment and/or corroborate standard language testing in the aphasic group, as well as to examine temporal patterns of activation in the aphasic group when compared to a normal control group. Methods & Procedures A total of 20 older normal individuals and 11 aphasic individuals participated. Detailed measures of each aphasic individual's language and naming skills were obtained. A visual picture–word interference paradigm was used in which the words bore either a semantic, phonological, or no relationship to 25 pictures. These competitor words were presented at stimulus onset asynchronies of −300 ms, +300 ms, and 0 ms. Outcomes & Results Analyses of naming RTs in both groups revealed significant early semantic interference effects, mid-semantic interference effects, and mid-phonological facilitation effects. A matched control-aphasic group comparison revealed no differences in the temporal activation of effects during the course of naming. Partial support for this RT pattern was found in the aphasic naming error pattern. The aphasic group also demonstrated greater semantic interference effects (SIEs) and phonological facilitation effects (PFEs) compared to the matched control group, which indicated disruptions of the phonological processing stage. Analyses of behavioural performances of the aphasic group corroborated this finding. Conclusions The aphasic naming RT results were unexpected given the results from the priming literature, which has supported the idea of slowed or

  19. Variation in the pattern of omissions and substitutions of grammatical morphemes in the spontaneous speech of so-called agrammatic patients.

    PubMed

    Miceli, G; Silveri, M C; Romani, C; Caramazza, A

    1989-04-01

    We describe the patterns of omissions (and substitutions) of freestanding grammatical morphemes and the patterns of substitutions of bound grammatical morphemes in 20 so-called agrammatic patients. Extreme variation was observed in the patterns of omissions and substitutions of grammatical morphemes, both in terms of the distribution of errors for different grammatical morphemes and in terms of the distribution of omissions versus substitutions. Results are discussed in the context of current debates concerning the possibility of a theoretically motivated distinction between the clinical categories of agrammatism and paragrammatism and, more generally, concerning the theoretical usefulness of any clinical category. The conclusion is reached that the observed heterogeneity in the production of grammatical morphemes among putatively agrammatic patients renders the clinical category of agrammatism, and by extension all other clinical categories from the classical classification scheme (e.g., Broca's aphasia, Wernicke's aphasia, and so forth) to more recent classificatory attempts (e.g., surface dyslexia, deep dysgraphia, and so forth), theoretically useless.

  20. Pragmatic-mode mediation of sentence comprehension among aphasic bilinguals and hispanophones.

    PubMed

    Schnitzer, M L

    1989-01-01

    A test of sentence comprehension administered in four input-output modality combinations to a group of aphasic bilinguals and monolingual hispanophones provides evidence that aphasics tend to use pragmatic-mode (in the sense of Givón, 1979, On Understanding Grammar, New York: Academic Press) strategies in approaching this task. When five factors were identified and dichotomized with respect to the pragmatic-mode/syntactic-mode dimension, the patients performed significantly better on items classified as pragmatic than on those classified as syntactic, in both languages. The results support a vertical/hierarchical view of aphasic language dissolution.

  1. Interpretation of Pronouns in VP-Ellipsis Constructions in Dutch Broca's and Wernicke's Aphasia

    ERIC Educational Resources Information Center

    Vasic, Nada; Avrutin, Sergey; Ruigendijk, Esther

    2006-01-01

    In this paper, we investigate the ability of Dutch agrammatic Broca's and Wernicke's aphasics to assign reference to possessive pronouns in elided VP constructions. The assumption is that the comprehension problems in these two populations have different sources that are revealed in distinct patterns of responses. The focus is primarily on the…

  2. Neighbourhood Density Effects in Auditory Non-Word Processing in Aphasic Listeners

    ERIC Educational Resources Information Center

    Janse, Esther

    2009-01-01

    This study investigates neighbourhood density effects on lexical decision performance (both accuracy and response times) of aphasic patients. Given earlier results on lexical activation and deactivation in Broca's and Wernicke's aphasia, the prediction was that smaller neighbourhood density effects would be found for Broca's aphasic patients,…

  3. Making non-fluent aphasics speak: sing along!

    PubMed

    Racette, Amélie; Bard, Céline; Peretz, Isabelle

    2006-10-01

    A classic observation in neurology is that aphasics can sing words they cannot pronounce otherwise. To further assess this claim, we investigated the production of sung and spoken utterances in eight brain-damaged patients suffering from a variety of speech disorders as a consequence of a left-hemisphere lesion. In Experiment 1, the patients were tested in the repetition and recall of words and notes of familiar material. Lyrics of familiar songs, as well as words of proverbs and prayers, were not better pronounced in singing than in speaking. Notes were better produced than words. In Experiment 2, the aphasic patients repeated and recalled lyrics from novel songs. Again, they did not produce more words in singing than in speaking. In Experiment 3, when allowed to sing or speak along with an auditory model while learning novel songs, aphasics repeated and recalled more words when singing than when speaking. Reduced speed or shadowing cannot account for this advantage of singing along over speaking in unison. The results suggest that singing in synchrony with an auditory model (choral singing) is more effective than choral speech, at least in French, in improving word intelligibility, because choral singing may entrain more than one auditory-vocal interface. Thus, choral singing appears to be an effective means of speech therapy.

  4. Eye movements during reading in aphasics.

    PubMed

    Klingelhöfer, J; Conrad, B

    1984-01-01

    In 40 normal subjects and in 21 patients with anomic, Wernicke's, and Broca's aphasia, eye movements were recorded with DC-EOG during reading of two standardized texts and analysed with respect to the number of fixations and regressions and reading time. Patients with these aphasic syndromes developed different internal strategies of saccadic organization: patients with Wernicke's aphasia showed increasing difficulty in working through the text, with a tendency to make smaller leaps over the line and an almost complete disintegration of the saccadic structure ("strategy of small and smallest steps"). The saccadic pattern in Broca's aphasics was clearly better preserved. During oral reading there was a characteristic increase in fixation times and number of regressions ("motor waiting and searching behaviour"). Patients with anomic aphasia showed alterations most similar to the reading behaviour of unskilled normal readers.

  5. Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study.

    PubMed

    Eggenberger, Noëmi; Preisig, Basil C; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M

    2016-01-01

    Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task. Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. In aphasic patients, the incongruent condition resulted in a significant decrease of accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase the accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.

  6. Production of Verb Tense in Agrammatic Aphasia: A Meta-Analysis and Further Data

    PubMed Central

    Faroqi-Shah, Yasmeen; Friedman, Laura

    2015-01-01

    In a majority of languages, the time of an event is expressed by marking tense on the verb. There is substantial evidence that the production of verb tense in sentences is more severely impaired than other functional categories in persons with agrammatic aphasia. The underlying source of this verb tense impairment is less clear, particularly in terms of the relative contribution of conceptual-semantic and processing demands. This study aimed to provide a more precise characterization of verb tense impairment by examining whether there is dissociation within tenses (due to conceptual-semantic differences) and an effect of experimental task (mediated by processing limitations). Two sources of data were used: a meta-analysis of published research (which yielded 143 datasets) and new data from 16 persons with agrammatic aphasia. Tensed verbs were significantly more impaired than neutral (nonfinite) verbs, but there were no consistent differences between past, present, and future tenses. Overall, tense accuracy was mediated by task, such that the picture description task was the most challenging relative to sentence completion, sentence production priming, and grammaticality judgment. An interaction between task and tense revealed a past tense disadvantage for the sentence production priming task. These findings indicate that verb tense impairment is exacerbated by the processing demands of the elicitation task and that the conceptual-semantic differences between tenses are too subtle to show differential performance in agrammatism. PMID:26457004

  7. Training verb argument structure production in agrammatic aphasia: Behavioral and neural recovery patterns

    PubMed Central

    Thompson, Cynthia K.; Riley, Ellyn A.; den Ouden, Dirk-Bart; Meltzer-Asscher, Aya; Lukic, Sladjana

    2013-01-01

    Introduction Neuroimaging and lesion studies indicate a left hemisphere network for verb and verb argument structure processing, involving both frontal and temporoparietal brain regions. Although their verb comprehension is generally unimpaired, it is well known that individuals with agrammatic aphasia often present with verb production deficits, characterized by an argument structure complexity hierarchy, indicating faulty access to argument structure representations for production and integration into syntactic contexts. Recovery of verb processing in agrammatism, however, has received little attention and no studies have examined the neural mechanisms associated with improved verb and argument structure processing. In the present study we trained agrammatic individuals on verbs with complex argument structure in sentence contexts and examined generalization to verbs with less complex argument structure. The neural substrates of improved verb production were examined using functional magnetic resonance imaging (fMRI). Methods Eight individuals with chronic agrammatic aphasia participated in the study (four experimental and four control participants). Production of three-argument verbs in active sentences was trained using a sentence generation task emphasizing the verb’s argument structure and the thematic roles of sentential noun phrases. Before and after training, production of trained and untrained verbs was tested in naming and sentence production and fMRI scans were obtained, using an action naming task. Results Significant pre- to post-training improvement in trained and untrained (one- and two-argument) verbs was found for treated, but not control, participants, with between-group differences found for verb naming, production of verbs in sentences, and production of argument structure. 
fMRI activation derived from post-treatment compared to pre-treatment scans revealed upregulation in cortical regions implicated in verb and argument structure processing.

  8. Verb and sentence production and comprehension in aphasia: Northwestern Assessment of Verbs and Sentences (NAVS)

    PubMed Central

    Cho-Reyes, Soojin; Thompson, Cynthia K.

    2015-01-01

    Background Verbs and sentences are often impaired in individuals with aphasia, and differential impairment patterns are associated with different types of aphasia. With currently available test batteries, however, it is challenging to provide a comprehensive profile of aphasic language impairments because they do not examine syntactically important properties of verbs and sentences. Aims This study presents data derived from the Northwestern Assessment of Verbs and Sentences (NAVS; Thompson, 2011), a new test battery designed to examine syntactic deficits in aphasia. The NAVS includes tests for verb naming and comprehension, and production of verb argument structure in simple active sentences, with each examining the effects of the number and optionality of arguments. The NAVS also tests production and comprehension of canonical and non-canonical sentences. Methods & Procedures A total of 59 aphasic participants (35 agrammatic and 24 anomic) were tested using a set of action pictures. Participants produced verbs or sentences for the production subtests and identified pictures corresponding to auditorily provided verbs or sentences for the comprehension subtests. Outcomes & Results The agrammatic group, compared to the anomic group, performed significantly more poorly on all subtests except verb comprehension, and for both groups comprehension was less impaired than production. On verb naming and argument structure production tests both groups exhibited difficulty with three-argument verbs, affected by the number and optionality of arguments. However, production of sentences using three-argument verbs was more impaired in the agrammatic, compared to the anomic, group. On sentence production and comprehension tests, the agrammatic group showed impairments in all types of non-canonical sentences, whereas the anomic group exhibited difficulty primarily with the most difficult, object relative, structures. 
Conclusions Results show that verb and sentence deficits seen in

  9. Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study

    PubMed Central

    Eggenberger, Noëmi; Preisig, Basil C.; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M.

    2016-01-01

    Background Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task. Method Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. Results In aphasic patients, the incongruent condition resulted in a significant decrease of accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase the accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Conclusion Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients’ comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes. PMID:26735917

  10. A Taiwanese Mandarin Main Concept Analysis (TM-MCA) for quantification of aphasic oral discourse.

    PubMed

    Kong, Anthony Pak-Hin; Yeh, Chun-Chih

    2015-01-01

    Various quantitative systems have been proposed to examine aphasic oral narratives in English. A clinical tool for assessing discourse produced by Cantonese-speaking persons with aphasia (PWA), namely Main Concept Analysis (MCA), was developed recently for quantifying the presence, accuracy and completeness of a narrative. Similar tools for Mandarin speakers are currently absent. The first aim is to develop and establish the validity of the Taiwanese Mandarin Main Concept Analysis (TM-MCA) for the Mandarin-speaking population in Taiwan, given the paucity of related investigations. Another aim is to establish the influence of age and education level on Taiwanese Mandarin speakers' oral narrative abilities. The third purpose is to examine how well the TM-MCA could distinguish between native speakers with and without aphasia in Taiwan. The final aim is to examine the reliability and validity of the TM-MCA. Eight speech-language pathologists (SLPs) and eight neurologically intact participants were involved to establish the TM-MCA main concepts. Another 36 neurologically intact participants and 10 PWA participated to validate the TM-MCA by contrasting their performance. Both age and educational level affected the oral discourse performance among the neurologically intact adults. Significant differences on the TM-MCA measures were noted between the control group and the group with aphasia. Moreover, the degree of aphasia significantly affected the oral discourse of PWA. The TM-MCA is a culturally appropriate quantitative system for the Taiwanese Mandarin population. It can be used to supplement standardized aphasia tests to help SLPs make more informative decisions not only on clinical diagnosis but also on treatment planning. © 2015 Royal College of Speech and Language Therapists.

  11. Comparing the production of complex sentences in Persian patients with post-stroke aphasia and non-damaged people with normal speaking.

    PubMed

    Mehri, Azar; Ghorbani, Askar; Darzi, Ali; Jalaie, Shohreh; Ashayeri, Hassan

    2016-01-05

    Cerebrovascular disease leading to stroke is the most common cause of aphasia. Speakers with agrammatic non-fluent aphasia have difficulties producing movement-derived sentences such as passive sentences, topicalized constituents, and Wh-questions. To assess the production of complex sentences, a set of passive, topicalized and focused sentences was designed for Persian-speaking patients with non-fluent aphasia. Patients' performance in sentence production was then tested and compared with healthy non-damaged subjects. In this cross-sectional study, a task was designed to assess the different types of sentences (active, passive, topicalized and focused) adapted to Persian structures. Seven Persian patients with post-stroke non-fluent agrammatic aphasia (5 men and 2 women) and seven healthy non-damaged subjects participated in this study. Computed tomography (CT) or magnetic resonance imaging (MRI) showed that all the patients had a single left hemisphere lesion involving the middle cerebral artery (MCA) territory, Broca's area and its underlying white matter. In addition, based on the Bedside version of the Persian Western Aphasia Battery (P-WAB-1), all of them were diagnosed with moderate Broca's aphasia. The production task for Persian complex sentences was then administered. There was a significant difference between the four types of sentences in patients with aphasia [degrees of freedom (df) = 3, P < 0.001]. All the patients performed worse than the healthy participants in all four types of sentence production (P < 0.050). In general, it is concluded that topicalized and focused sentences, as non-canonical complex sentences in Persian, are very difficult for patients with agrammatic non-fluent aphasia to produce. It appears that sentences with A-movement are simpler for these patients than sentences involving A'-movement, since they involve shorter movements than topicalized and focused sentences.

  12. Melodic intonation in the rehabilitation of Romanian aphasics with bucco-lingual apraxia.

    PubMed

    Popovici, M; Mihăilescu, L

    1992-01-01

    The main objective of the present study was to assess the efficiency of melodic intonation therapy (MIT) in the rehabilitation of Romanian aphasics. Eighty predominantly Broca's aphasics received melodic intonation therapy when no other therapy methods had proved effective. The speech therapist intoned the target word, then intoned it together with the patient, and finally let the patient continue alone. The control group comprised 80 aphasics treated with other therapy methods. Each patient, regardless of group, was tested twice, before and after therapy. Since most of the patients displayed severe language disorders, and other therapy methods had failed to rehabilitate them, MIT was considered an efficient method in the early stages of Broca's aphasia with bucco-lingual apraxia.

  13. Linguistic Structures in Stereotyped Aphasic Speech

    ERIC Educational Resources Information Center

    Buckingham, Hugh W., Jr.; And Others

    1975-01-01

    The linguistic structure of specific introductory type clauses, which appear at a relatively high frequency in the utterances of a severely brain damaged fluent aphasic with neologistic jargon speech, is examined. The analysis is restricted to one fifty-six-year-old male patient who suffered massive subdural hematoma. (SCC)

  14. [Course studies of spontaneous speech and graphic achievements in 175 aphasics].

    PubMed

    Leischner, A; Mattes, K

    1982-01-01

    In 175 aphasic patients with agraphia, the course of expressive oral and graphic performance was compared. Spontaneous speech and writing and the writing of dictated words and sentences were investigated and evaluated. In addition, several peculiarities of this syndrome were examined. The investigations showed that the relationship between expressive oral and graphic performance changes in the course of improvement, depending on the type of aphasia. In the first testing period, no difference was found in the performance of patients with total aphasia and motor-amnesic aphasia; in the groups of mixed aphasics and sensory-amnesic aphasics, however, oral performance predominated over writing. Investigations at later periods showed that in cases of total aphasia the oral performance improved more, whereas in cases of motor-amnesic and sensory-amnesic aphasia the graphic performance improved more.

  15. Testing Idiom Comprehension in Aphasic Patients: The Effects of Task and Idiom Type

    ERIC Educational Resources Information Center

    Papagno, C.; Caporali, A.

    2007-01-01

    Idiom comprehension in 15 aphasic patients was assessed with three tasks: a sentence-to-picture matching task, a sentence-to-word matching task and an oral definition task. The results of all three tasks showed that the idiom comprehension in aphasic patients was impaired compared to that of the control group, and was significantly affected by the…

  16. Aphasic Speech in Interaction: Relearning to Communicate by Gesture When a Word Is Lacking

    ERIC Educational Resources Information Center

    Colon De Carvajal, Isabel; Teston-Bonnard, Sandra

    2015-01-01

    Resolving the inability to produce a word through a gestural realization is often a compensatory strategy used with aphasic patients. However, context and interpersonal knowledge between participants are also essential factors for finding or guessing the right word or the right gesture. In the "Interactions between Aphasic people &…

  17. Comparison for aphasic and control subjects of eye movements hypothesized in neurolinguistic programming.

    PubMed

    Dooley, K O; Farmer, A

    1988-08-01

    Neurolinguistic programming's hypothesized eye movements were measured independently using videotapes of 10 nonfluent aphasic and 10 control subjects matched for age and sex. Chi-squared analysis indicated that eye-position responses were significantly different for the groups. Although earlier research has not supported the hypothesized eye positions for normal subjects, the present findings support the contention that eye-position responses may differ between neurologically normal and aphasic individuals.

  18. Adaptation to Early-Stage Nonfluent/Agrammatic Variant Primary Progressive Aphasia: A First-Person Account.

    PubMed

    Douglas, Joanne T

    2014-06-01

    Primary progressive aphasia (PPA) is a young-onset neurodegenerative disorder characterized by declining language ability. The nonfluent/agrammatic variant of PPA (PPA-G) has the core features of agrammatism in language production and effortful, halting speech. As with other frontotemporal spectrum disorders, there is currently no cure for PPA, nor is it possible to slow the course of progression. The primary goal of treatment is therefore palliative in nature. However, there is a paucity of published information about strategies to make meaningful improvements to the quality of life of people with PPA, particularly in the early stages of the disease where any benefit could most be appreciated by the affected person. This report describes a range of strategies and adaptations designed to improve the quality of life of a person with early-stage PPA-G, based on my experience under the care of a multidisciplinary medical team. © The Author(s) 2014.

  19. In vivo signatures of nonfluent/agrammatic primary progressive aphasia caused by FTLD pathology

    PubMed Central

    Caso, Francesca; Mandelli, Maria Luisa; Henry, Maya; Gesierich, Benno; Bettcher, Brianne M.; Ogar, Jennifer; Filippi, Massimo; Comi, Giancarlo; Magnani, Giuseppe; Sidhu, Manu; Trojanowski, John Q.; Huang, Eric J.; Grinberg, Lea T.; Miller, Bruce L.; Dronkers, Nina; Seeley, William W.

    2014-01-01

    Objective: To identify early cognitive and neuroimaging features of sporadic nonfluent/agrammatic variant of primary progressive aphasia (nfvPPA) caused by frontotemporal lobar degeneration (FTLD) subtypes. Methods: We prospectively collected clinical, neuroimaging, and neuropathologic data in 11 patients with sporadic nfvPPA with FTLD-tau (nfvPPA-tau, n = 9) or FTLD–transactive response DNA binding protein pathology of 43 kD type A (nfvPPA-TDP, n = 2). We analyzed patterns of cognitive and gray matter (GM) and white matter (WM) atrophy at presentation in the whole group and in each pathologic subtype separately. We also considered longitudinal clinical data. Results: At first evaluation, regardless of pathologic FTLD subtype, apraxia of speech (AOS) was the most common cognitive feature and atrophy involved the left posterior frontal lobe. Each pathologic subtype showed few distinctive features. At presentation, patients with nfvPPA-tau presented with mild to moderate AOS, mixed dysarthria with prominent hypokinetic features, clear agrammatism, and atrophy in the GM of the left posterior frontal regions and in left frontal WM. While speech and language deficits were prominent early, within 3 years of symptom onset, all patients with nfvPPA-tau developed significant extrapyramidal motor signs. At presentation, patients with nfvPPA-TDP had severe AOS, dysarthria with spastic features, mild agrammatism, and atrophy in left posterior frontal GM only. Selective mutism occurred early, when general neurologic examination only showed mild decrease in finger dexterity in the right hand. Conclusions: Clinical features in sporadic nfvPPA caused by FTLD subtypes relate to neurodegeneration of GM and WM in frontal motor speech and language networks. We propose that early WM atrophy in nfvPPA is suggestive of FTLD-tau pathology while early selective GM loss might be indicative of FTLD-TDP. PMID:24353332

  20. Making Non-Fluent Aphasics Speak: Sing along!

    ERIC Educational Resources Information Center

    Racette, Amelie; Bard, Celine; Peretz, Isabelle

    2006-01-01

    A classic observation in neurology is that aphasics can sing words they cannot pronounce otherwise. To further assess this claim, we investigated the production of sung and spoken utterances in eight brain-damaged patients suffering from a variety of speech disorders as a consequence of a left-hemisphere lesion. In Experiment 1, the patients were…

  1. Comprehension of main ideas and details in coherent and noncoherent discourse by aphasic and nonaphasic listeners.

    PubMed

    Wegner, M L; Brookshire, R H; Nicholas, L E

    1984-01-01

    Aphasic and nonaphasic listeners' comprehension of main ideas and details within coherent and noncoherent narrative discourse was examined. Coherent paragraphs contained one topic to which all sentences in the paragraph related. Noncoherent paragraphs contained a change in topic with every third or fourth sentence. Each paragraph contained four main ideas and one or more details that related to each main idea. Listeners' responses to yes/no questions following each paragraph yielded the following results: (1) Nonaphasic listeners comprehended the paragraphs better than aphasic listeners. (2) Both aphasic and nonaphasic listeners comprehended main ideas better than they comprehended details. (3) Coherence did not affect comprehension of main ideas for either group. (4) Coherence did not affect comprehension of details by nonaphasic subjects. (5) Coherence affected comprehension of details by aphasic subjects, and their comprehension of details in coherent paragraphs was worse than their comprehension of details in noncoherent paragraphs. There was no significant correlation between Token Test scores and measures of paragraph comprehension.

  2. Taboo: A Novel Paradigm to Elicit Aphasia-Like Trouble-Indicating Behaviour in Normally Speaking Individuals

    ERIC Educational Resources Information Center

    Meffert, Elisabeth; Tillmanns, Eva; Heim, Stefan; Jung, Stefanie; Huber, Walter; Grande, Marion

    2011-01-01

    Two important research lines in neuro- and psycholinguistics are studying natural or experimentally induced slips of the tongue and investigating the symptom patterns of aphasic individuals. Only a few studies have focused on explaining aphasic symptoms by provoking aphasic symptoms in healthy speakers. While all experimental techniques have so far…

  3. The development and validation of the Visual Analogue Self-Esteem Scale (VASES).

    PubMed

    Brumfitt, S M; Sheeran, P

    1999-11-01

    To develop a visual analogue measure of self-esteem and test its psychometric properties. Two correlational studies involving samples of university students and aphasic speakers. Two hundred and forty-three university students completed multiple measures of self-esteem, depression and anxiety, as well as measures of transitory mood and social desirability (Study 1). Two samples of aphasic speakers (N = 14 and N = 20) completed the Visual Analogue Self-Esteem Scale (VASES), the Rosenberg (1965) self-esteem scale, and measures of depression and anxiety (Study 2). Study 1 found evidence of good internal and test-retest reliability, construct validity, and convergent and discriminant validity for a 10-item VASES. Study 2 demonstrated good internal reliability among aphasic speakers. The VASES is a short, easy-to-administer measure of self-esteem that possesses good psychometric properties.

  4. Development and Standardization of a New Cognitive Assessment Test Battery for Chinese Aphasic Patients: A Preliminary Study.

    PubMed

    Wu, Ji-Bao; Lyu, Zhi-Hong; Liu, Xiao-Jia; Li, Hai-Peng; Wang, Qi

    2017-10-05

    Nonlinguistic cognitive impairment has become an important issue for aphasic patients, but there are currently few neuropsychological tests for assessing it. To obtain more information on the cognitive impairment of aphasic patients, this study aimed to develop a new cognitive assessment battery for aphasic patients, the Non-language-based Cognitive Assessment (NLCA), and to evaluate its utility in Chinese-speaking patients with aphasia. The NLCA consists of five nonverbal tests, which assess five nonlinguistic cognitive domains: visuospatial function, attention, memory, reasoning, and executive function. All tests are modified from the nonverbal items of existing tests, with some adaptations to the characteristics of Chinese culture. The NLCA was administered to 157 participants (57 aphasic patients, 50 mild cognitive impairment (MCI) patients, and 50 normal controls) and compared with other well-established neuropsychological tests in terms of reliability, validity, and utility. The NLCA was fully applicable in the MCI patients and the normal controls, and worked in almost all of the aphasic patients (57/62 patients, 91.9%). The NLCA scores were 66.70 ± 6.30, 48.67 ± 15.04, and 77.58 ± 2.56 for the MCI group, the aphasic group, and the control group, respectively, and a significant difference was found among the three groups (F = 118.446, P < 0.001). Cronbach's alpha for the NLCA, as an index of internal consistency, was 0.805, and the test-retest and interrater reliability were adequate (r = 0.977 and r = 0.970, respectively). The correlations between the cognitive subtests and their validation instruments were between 0.540 and 0.670 (all P < 0.05). Spearman's correlation analysis indicated that the coefficient of internal consistency of each subtest itself was higher than that with other subtests. When choosing a Montreal Cognitive Assessment score of <26 as the diagnostic criterion for cognitive impairment, the area under
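
    Cronbach's alpha, the internal-consistency index reported for the NLCA, can be computed directly from a subjects-by-items score matrix. The sketch below is illustrative only (it is not the study's code, and the function name and sample data are ours):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                            # number of items
    item_var = scores.var(axis=0, ddof=1).sum()    # sum of per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of each subject's total
    return (k / (k - 1)) * (1.0 - item_var / total_var)
```

    Perfectly parallel items yield alpha = 1, while items that covary only weakly pull alpha toward 0, which is why 0.805 is conventionally read as adequate internal consistency.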

  5. Adaptive significance of right hemisphere activation in aphasic language comprehension

    PubMed Central

    Meltzer, Jed A.; Wagage, Suraji; Ryder, Jennifer; Solomon, Beth; Braun, Allen R.

    2013-01-01

    Aphasic patients often exhibit increased right hemisphere activity during language tasks. This may represent takeover of function by regions homologous to the left-hemisphere language networks, maladaptive interference, or adaptation of alternate compensatory strategies. To distinguish between these accounts, we tested language comprehension in 25 aphasic patients using an online sentence-picture matching paradigm while measuring brain activation with MEG. Linguistic conditions included semantically irreversible (“The boy is eating the apple”) and reversible (“The boy is pushing the girl”) sentences at three levels of syntactic complexity. As expected, patients performed well above chance on irreversible sentences, and at chance on reversible sentences of high complexity. Comprehension of reversible non-complex sentences ranged from nearly perfect to chance, and was highly correlated with offline measures of language comprehension. Lesion analysis revealed that comprehension deficits for reversible sentences were predicted by damage to the left temporal lobe. Although aphasic patients activated homologous areas in the right temporal lobe, such activation was not correlated with comprehension performance. Rather, patients with better comprehension exhibited increased activity in dorsal fronto-parietal regions. Correlations between performance and dorsal network activity occurred bilaterally during perception of sentences, and in the right hemisphere during a post-sentence memory delay. These results suggest that effortful reprocessing of perceived sentences in short-term memory can support improved comprehension in aphasia, and that strategic recruitment of alternative networks, rather than homologous takeover, may account for some findings of right hemisphere language activation in aphasia. PMID:23566891

  6. APHASIC CHILDREN, IDENTIFICATION AND EDUCATION BY THE ASSOCIATION METHOD.

    ERIC Educational Resources Information Center

    MCGINNIS, MILDRED A.

    This book is designed to define aphasia and its characteristics, to present a procedure for teaching language to aphasic children, and to apply this procedure to elementary school subjects. Other handicapping conditions which complicate the diagnosis of aphasia are presented by means of case studies. Characteristics of two types of…

  7. The Effects of Three Types of Verbal Cues on the Accuracy and Latency of Aphasic Subjects' Naming Responses.

    ERIC Educational Resources Information Center

    Teubner-Rhodes, Louise A.

    This study deals with word retrieval problems of aphasic patients. This word-finding difficulty is a common characteristic of aphasics and many methods have been used by aphasia clinicians to attempt to remediate word retrieval skills. Cueing, one of the methods used, presumably facilitates word-finding by supplying additional information to the…

  8. Training in rapid auditory processing ameliorates auditory comprehension in aphasic patients: a randomized controlled pilot study.

    PubMed

    Szelag, Elzbieta; Lewandowska, Monika; Wolak, Tomasz; Seniow, Joanna; Poniatowska, Renata; Pöppel, Ernst; Szymaszek, Aneta

    2014-03-15

    Experimental studies have often reported close associations between rapid auditory processing and language competency. The present study was aimed at improving auditory comprehension in aphasic patients following specific training in the perception of temporal order (TO) of events. We tested 18 aphasic patients showing both comprehension and TO perception deficits. Auditory comprehension was assessed by the Token Test, phonemic awareness and Voice-Onset-Time Test. The TO perception was assessed using auditory Temporal-Order-Threshold, defined as the shortest interval between two consecutive stimuli, necessary to report correctly their before-after relation. Aphasic patients participated in eight 45-minute sessions of either specific temporal training (TT, n=11) aimed to improve sequencing abilities, or control non-temporal training (NT, n=7) focussed on volume discrimination. The TT yielded improved TO perception; moreover, a transfer of improvement was observed from the time domain to the language domain, which was untrained during the training. The NT did not improve either the TO perception or comprehension in any language test. These results are in agreement with previous literature studies which proved ameliorated language competency following the TT in language-learning-impaired or dyslexic children. Our results indicated for the first time such benefits also in aphasic patients. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Contrasting Effects of Phonological Priming in Aphasic Word Production

    ERIC Educational Resources Information Center

    Wilshire, Carolyn E.; Saffran, Eleanor M.

    2005-01-01

    Two fluent aphasics, IG and GL, performed a phonological priming task in which they repeated an auditory prime then named a target picture. The two patients both had selective deficits in word production: they were at or near ceiling on lexical comprehension tasks, but were significantly impaired in picture naming. IG's naming errors included both…

  10. Selective impairment of masculine gender processing: evidence from a German aphasic.

    PubMed

    Seyboth, Margret; Blanken, Gerhard; Ehmann, Daniela; Schwarz, Falke; Bormann, Tobias

    2011-12-01

    The present single case study describes the performance of the German aphasic E.M. who exhibited a severe impairment of grammatical gender processing in masculine nouns but relatively spared performance regarding feminine and neuter ones. This error pattern was assessed with tests of gender assignment to orally or visually presented words, with oral or written responses, and with tests of gender congruency decision on noun phrases. The pattern occurred across tasks and modalities, thus suggesting a gender-specific impairment at a modality-independent level of processing. It was sensitive to frequency, thus supporting the assumption that access to gender features as part of grammatical processing is frequency sensitive. Besides being the first description of a gender-specific impairment in an aphasic subject, the data therefore have implications regarding the modelling of representation and processing of grammatical gender information within the mental lexicon.

  11. Acquired dyslexia in Serbian speakers with Broca's and Wernicke's aphasia.

    PubMed

    Vuković, Mile; Vuković, Irena; Miller, Nick

    2016-01-01

    This study examined patterns of acquired dyslexia in Serbian aphasic speakers, comparing profiles of groups with Broca's versus Wernicke's aphasia. The study also looked at the relationship of reading and auditory comprehension and between reading comprehension and reading aloud in these groups. Participants were 20 people with Broca's and 20 with Wernicke's aphasia. They were asked to read aloud and to understand written material from the Serbian adaptation of the Boston Diagnostic Aphasia Examination. A Serbian Word Reading Aloud Test was also used. The people with Broca's aphasia achieved better results in reading aloud and in reading comprehension than those with Wernicke's aphasia. Those with Wernicke's aphasia showed significantly more semantic errors than those with Broca's aphasia who had significantly more morphological and phonological errors. From the data we inferred that lesion sites accorded with previous work on networks associated with Broca's and Wernicke's aphasia and with a posterior-anterior axis for reading processes centred on (left) parietal-temporal-frontal lobes. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Cognitive control and its impact on recovery from aphasic stroke

    PubMed Central

    Warren, Jane E.; Geranmayeh, Fatemeh; Woodhead, Zoe; Leech, Robert; Wise, Richard J. S.

    2014-01-01

    Aphasic deficits are usually only interpreted in terms of domain-specific language processes. However, effective human communication and tests that probe this complex cognitive skill are also dependent on domain-general processes. In the clinical context, it is a pragmatic observation that impaired attention and executive functions interfere with the rehabilitation of aphasia. One system that is important in cognitive control is the salience network, which includes dorsal anterior cingulate cortex and adjacent cortex in the superior frontal gyrus (midline frontal cortex). This functional imaging study assessed domain-general activity in the midline frontal cortex, which was remote from the infarct, in relation to performance on a standard test of spoken language in 16 chronic aphasic patients both before and after a rehabilitation programme. During scanning, participants heard simple sentences, with each listening trial followed immediately by a trial in which they repeated back the previous sentence. Listening to sentences in the context of a listen–repeat task was expected to activate regions involved in both language-specific processes (speech perception and comprehension, verbal working memory and pre-articulatory rehearsal) and a number of task-specific processes (including attention to utterances and attempts to overcome pre-response conflict and decision uncertainty during impaired speech perception). To visualize the same system in healthy participants, sentences were presented to them as three-channel noise-vocoded speech, thereby impairing speech perception and assessing whether this evokes domain general cognitive systems. As expected, contrasting the more difficult task of perceiving and preparing to repeat noise-vocoded speech with the same task on clear speech demonstrated increased activity in the midline frontal cortex in the healthy participants. The same region was activated in the aphasic patients as they listened to standard (undistorted

  13. Manual versus Automated Narrative Analysis of Agrammatic Production Patterns: The Northwestern Narrative Language Analysis and Computerized Language Analysis

    ERIC Educational Resources Information Center

    Hsu, Chien-Ju; Thompson, Cynthia K.

    2018-01-01

    Purpose: The purpose of this study is to compare the outcomes of the manually coded Northwestern Narrative Language Analysis (NNLA) system, which was developed for characterizing agrammatic production patterns, and the automated Computerized Language Analysis (CLAN) system, which has recently been adopted to analyze speech samples of individuals…

  14. Apraxia of Speech and Phonological Errors in the Diagnosis of Nonfluent/Agrammatic and Logopenic Variants of Primary Progressive Aphasia

    ERIC Educational Resources Information Center

    Croot, Karen; Ballard, Kirrie; Leyton, Cristian E.; Hodges, John R.

    2012-01-01

    Purpose: The International Consensus Criteria for the diagnosis of primary progressive aphasia (PPA; Gorno-Tempini et al., 2011) propose apraxia of speech (AOS) as 1 of 2 core features of nonfluent/agrammatic PPA and propose phonological errors or absence of motor speech disorder as features of logopenic PPA. We investigated the sensitivity and…

  15. [An integrated model for examination of aphasic patients and evaluation of treatment results].

    PubMed

    Ansink, B J; Vanneste, J A; Endtz, L J

    1980-02-01

    This article is an overview of the literature on integrated, multidisciplinary examination of aphasic patients, its consequences for treatment, and the evaluation of the results thereof; the need for virtually standardized methods of investigation for each language is stressed.

  16. Mimicking aphasic semantic errors in normal speech production: evidence from a novel experimental paradigm.

    PubMed

    Hodgson, Catherine; Lambon Ralph, Matthew A

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study utilised a novel method: tempo picture naming. Experiment 1 showed that, compared to standard deadline naming tasks, participants made more errors on the tempo picture naming tasks. Further, RTs were longer and more errors were produced for living items than for non-living items, a pattern seen in both semantic dementia and semantically impaired stroke aphasic patients. Experiment 2 showed that providing the initial phoneme as a cue enhanced performance, whereas providing an incorrect phonemic cue further reduced performance. These results support the contention that the tempo picture naming paradigm reduces the time allowed for controlled semantic processing, causing increased error rates. This experimental procedure would, therefore, appear to mimic the performance of aphasic patients with multi-modal semantic impairment that results from poor semantic control rather than from the degradation of semantic representations observed in semantic dementia [Jefferies, E. A., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia vs. semantic dementia: A case-series comparison. Brain, 129, 2132-2147]. Further implications for theories of semantic cognition and models of speech processing are discussed.

  17. The use of main concept analysis to measure discourse production in Cantonese-speaking persons with aphasia: a preliminary report.

    PubMed

    Kong, Anthony Pak-Hin

    2009-01-01

    Discourse produced by speakers with aphasia contains rich and valuable information that helps researchers understand the manifestation of aphasia and helps clinicians plan specific treatment components for their clients. Various approaches to investigating aphasic discourse have been proposed in the English literature. However, this is not the case in Chinese. As a result, clinical evaluation of aphasic discourse has not been common practice. This problem is further compounded by the lack of validated stimuli that are culturally appropriate for language elicitation. The purpose of this study was twofold: (a) to develop and validate four sequential pictorial stimuli for eliciting language samples from Cantonese speakers with aphasia, and (b) to investigate the use of main concept measurement, a clinically oriented quantitative system, to analyze the elicited language samples. Twenty speakers with aphasia and ten normal speakers participated in this study. The aphasic group produced significantly less key information than the normal group. More importantly, a strong relationship was found between aphasia severity and production of main concepts. The inter-rater and intra-rater reliability results suggested that the scoring system is reliable, and the test-retest results yielded strong, significant correlations across two testing sessions one to three weeks apart. Readers will demonstrate better understanding of (1) the development and validation of newly devised sequential pictorial stimuli to elicit oral language production, and (2) the use of main concept measurement to quantify aphasic connected speech in Cantonese Chinese.

  18. Analysis of Spoken Narratives in a Marathi-Hindi-English Multilingual Aphasic Patient

    ERIC Educational Resources Information Center

    Karbhari-Adhyaru, Medha

    2010-01-01

    In a multilingual country such as India, the probability that clinicians may not have command over different languages used by aphasic patients is very high. Since formal tests in different languages are limited, assessment of people from diverse linguistic backgrounds presents speech- language pathologists with many challenges. With a view to…

  19. Storage Costs and Heuristics Interact to Produce Patterns of Aphasic Sentence Comprehension Performance

    PubMed Central

    Clark, David Glenn

    2012-01-01

    Background: Despite general agreement that aphasic individuals exhibit difficulty understanding complex sentences, the nature of sentence complexity itself is unresolved. In addition, aphasic individuals appear to make use of heuristic strategies for understanding sentences. This research is a comparison of predictions derived from two approaches to the quantification of sentence complexity, one based on the hierarchical structure of sentences, and the other based on dependency locality theory (DLT). Complexity metrics derived from these theories are evaluated under various assumptions of heuristic use. Method: A set of complexity metrics was derived from each general theory of sentence complexity and paired with assumptions of heuristic use. Probability spaces were generated that summarized the possible patterns of performance across 16 different sentence structures. The maximum likelihood of comprehension scores of 42 aphasic individuals was then computed for each probability space and the expected scores from the best-fitting points in the space were recorded for comparison to the actual scores. Predictions were then compared using measures of fit quality derived from linear mixed effects models. Results: All three of the metrics that provide the most consistently accurate predictions of patient scores rely on storage costs based on the DLT. Patients appear to employ an Agent–Theme heuristic, but vary in their tendency to accept heuristically generated interpretations. Furthermore, the ability to apply the heuristic may be degraded in proportion to aphasia severity. Conclusion: DLT-derived storage costs provide the best prediction of sentence comprehension patterns in aphasia. Because these costs are estimated by counting incomplete syntactic dependencies at each point in a sentence, this finding suggests that aphasia is associated with reduced availability of cognitive resources for maintaining these dependencies. PMID:22590462

  20. Storage costs and heuristics interact to produce patterns of aphasic sentence comprehension performance.

    PubMed

    Clark, David Glenn

    2012-01-01

    Despite general agreement that aphasic individuals exhibit difficulty understanding complex sentences, the nature of sentence complexity itself is unresolved. In addition, aphasic individuals appear to make use of heuristic strategies for understanding sentences. This research is a comparison of predictions derived from two approaches to the quantification of sentence complexity, one based on the hierarchical structure of sentences, and the other based on dependency locality theory (DLT). Complexity metrics derived from these theories are evaluated under various assumptions of heuristic use. A set of complexity metrics was derived from each general theory of sentence complexity and paired with assumptions of heuristic use. Probability spaces were generated that summarized the possible patterns of performance across 16 different sentence structures. The maximum likelihood of comprehension scores of 42 aphasic individuals was then computed for each probability space and the expected scores from the best-fitting points in the space were recorded for comparison to the actual scores. Predictions were then compared using measures of fit quality derived from linear mixed effects models. All three of the metrics that provide the most consistently accurate predictions of patient scores rely on storage costs based on the DLT. Patients appear to employ an Agent-Theme heuristic, but vary in their tendency to accept heuristically generated interpretations. Furthermore, the ability to apply the heuristic may be degraded in proportion to aphasia severity. DLT-derived storage costs provide the best prediction of sentence comprehension patterns in aphasia. Because these costs are estimated by counting incomplete syntactic dependencies at each point in a sentence, this finding suggests that aphasia is associated with reduced availability of cognitive resources for maintaining these dependencies.
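
    The DLT storage-cost idea described in this record (counting incomplete syntactic dependencies at each point in a sentence) can be illustrated with a toy sketch. This is our illustration, not the author's implementation, and the dependency arcs in the example are hand-supplied:

```python
def storage_costs(n_words, arcs):
    """Toy DLT-style storage cost per word position.

    An arc (i, j) links word i and word j (0-indexed; direction is
    irrelevant here). The cost at position p is the number of
    dependencies already opened but not yet completed, i.e. arcs
    whose left end has been read but whose right end has not.
    """
    return [sum(1 for i, j in arcs if min(i, j) <= p < max(i, j))
            for p in range(n_words)]
```

    For "the boy pushed the girl" with hand-coded arcs det(boy, the), nsubj(pushed, boy), det(girl, the), obj(pushed, girl), the cost peaks at the second "the", where two dependencies are simultaneously open; on this view, sentences whose structures force many dependencies to stay open at once tax the maintenance resources that the study argues are reduced in aphasia.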

  1. [The significance of the Montessori method and phenomenon with a particular view to the therapy of the aphasics (author's transl)].

    PubMed

    Birchmeier-Nussbaumer, A K

    1980-05-01

    The methods of the Italian physician Maria Montessori influenced the development of modern learning practices. There is general agreement that the Montessori phenomenon is personality-forming. Aspects of this method that are relevant for the rehabilitation of the brain-damaged and, in particular, of aphasics are presented. Possible shifts of emphasis within the therapist - method - patient relationship are analysed. Examples are used to outline the extent to which an increasingly patient-oriented therapy can influence the development of the aphasic patient.

  2. Partially supervised speaker clustering.

    PubMed

    Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S

    2012-05-01

    Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment of the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the Euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm—linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the “bag of acoustic features” representation and statistical
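
The core intuition of the cosine-versus-Euclidean argument can be sketched in a few lines. In the sketch below the supervectors are random stand-ins (a real system would stack the adapted GMM component means of each utterance), so only the geometric point is illustrated, not the paper's pipeline.

```python
# Hedged sketch: comparing utterances by cosine distance between their
# GMM mean supervectors. The "directional scattering" observation is that
# utterances from the same speaker tend to share a direction in supervector
# space while differing in magnitude, so cosine distance (direction only)
# separates speakers better than Euclidean distance.

import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(0)
# Two simulated "speakers", two utterances each: same direction,
# different magnitude (a deliberately idealized stand-in).
base_a, base_b = rng.normal(size=512), rng.normal(size=512)
utt = [base_a, 1.8 * base_a, base_b, 0.6 * base_b]

same = cosine_distance(utt[0], utt[1])   # ~0.0: magnitude is ignored
diff = cosine_distance(utt[0], utt[2])   # ~1.0 for random high-dim vectors
print(same < diff)
```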

  3. Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse

    PubMed Central

    Kong, Anthony Pak-Hin; Law, Sam-Po; Wat, Watson Ka-Chun; Lai, Christy

    2015-01-01

    The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, by 131 normal speakers, 48 individuals with aphasia and their controls. All participants were native Cantonese speakers. It was found that the severity of aphasia and verbal-semantic impairment was associated with significantly more co-verbal gestures. However, there was no relationship between right-sided hemiplegia and gesture employment. Moreover, significantly more gestures were employed by the speakers with aphasia, but about 10% of them did not gesture. Among those who used gestures, content-carrying gestures, including iconic, metaphoric, deictic gestures, and emblems, served the function of enhancing language content and providing information additional to the language content. As for the non-content carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval or with no specific functions. The above findings would enhance our understanding of the use of various forms of co-verbal gestures in aphasic discourse production and their functions. Speech-language pathologists may also refer to the current annotation system and the results to guide clinical evaluation and remediation of gestures in aphasia. PMID:26186256

  4. A Taiwanese Mandarin Main Concept Analysis (TM-MCA) for Quantification of Aphasic Oral Discourse

    ERIC Educational Resources Information Center

    Kong, Anthony Pak-Hin; Yeh, Chun-Chih

    2015-01-01

    Background: Various quantitative systems have been proposed to examine aphasic oral narratives in English. A clinical tool for assessing discourse produced by Cantonese-speaking persons with aphasia (PWA), namely Main Concept Analysis (MCA), was developed recently for quantifying the presence, accuracy and completeness of a narrative. Similar…

  5. Bilingualism delays the onset of behavioral but not aphasic forms of frontotemporal dementia.

    PubMed

    Alladi, Suvarna; Bak, Thomas H; Shailaja, Mekala; Gollahalli, Divyaraj; Rajan, Amulya; Surampudi, Bapiraju; Hornberger, Michael; Duggirala, Vasanta; Chaudhuri, Jaydip Ray; Kaul, Subhash

    2017-05-01

    Bilingualism has been found to delay onset of dementia and this has been attributed to an advantage in executive control in bilinguals. However, the relationship between bilingualism and cognition is complex, with costs as well as benefits to language functions. To further explore the cognitive consequences of bilingualism, the study used Frontotemporal dementia (FTD) syndromes to examine whether bilingualism modifies the age at onset of behavioral and language variants of FTD differently. Case records of 193 patients presenting with FTD (121 of them bilingual) were examined and the age at onset of the first symptoms was compared between monolinguals and bilinguals. A significant effect of bilingualism delaying the age at onset of dementia was found in behavioral variant FTD (5.7 years) but not in progressive nonfluent aphasia (0.7 years), semantic dementia (0.5 years), corticobasal syndrome (0.4 years), progressive supranuclear palsy (4.3 years) and FTD-motor neuron disease (3 years). On dividing all patients into predominantly behavioral and predominantly aphasic groups, age at onset in the bilingual behavioral group (62.6 years) was over 6 years higher than in the monolingual patients (56.5 years, p=0.006), while there was no difference in the aphasic FTD group (60.9 vs. 60.6 years, p=0.851). The bilingual effect on age of bvFTD onset was shown independently of other potential confounding factors such as education, gender, occupation, and urban vs rural dwelling of subjects. To conclude, bilingualism delays the age at onset in the behavioral but not in the aphasic variants of FTD. The results are in line with similar findings based on research in stroke and with the current views of the interaction between bilingualism and cognition, pointing to advantages in executive functions and disadvantages in lexical tasks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Transcranial direct current stimulation improves word retrieval in healthy and nonfluent aphasic subjects.

    PubMed

    Fiori, Valentina; Coccia, Michela; Marinelli, Chiara V; Vecchi, Veronica; Bonifazi, Silvia; Ceravolo, M Gabriella; Provinciali, Leandro; Tomaiuolo, Francesco; Marangolo, Paola

    2011-09-01

    A number of studies have shown that modulating cortical activity by means of transcranial direct current stimulation (tDCS) affects performances of both healthy and brain-damaged subjects. In this study, we investigated the potential of tDCS to enhance associative verbal learning in 10 healthy individuals and to improve word retrieval deficits in three patients with stroke-induced aphasia. In healthy individuals, tDCS (20 min, 1 mA) was applied over Wernicke's area (position CP5 of the International 10-20 EEG System) while they learned 20 new "words" (legal nonwords arbitrarily assigned to 20 different pictures). The healthy subjects participated in a randomized counterbalanced double-blind procedure in which they were subjected to one session of anodic tDCS over left Wernicke's area, one sham session over this location and one session of anodic tDCS stimulating the right occipito-parietal area. Each experimental session was performed during a different week (over three consecutive weeks) with 6 days of intersession interval. Over 2 weeks, three aphasic subjects participated in a randomized double-blind experiment involving intensive language training for their anomic difficulties in two tDCS conditions. Each subject participated in five consecutive daily sessions of anodic tDCS (20 min, 1 mA) and sham stimulation over Wernicke's area while they performed a picture-naming task. By the end of each week, anodic tDCS had significantly improved their accuracy on the picture-naming task. Both normal subjects and aphasic patients also had shorter naming latencies during anodic tDCS than during sham condition. At two follow-ups (1 and 3 weeks after the end of treatment), performed only in two aphasic subjects, response accuracy and reaction times were still significantly better in the anodic than in the sham condition, suggesting a long-term effect on recovery of their anomic disturbances.

  7. Widening the temporal window: Processing support in the treatment of aphasic language production

    PubMed Central

    Linebarger, Marcia; McCall, Denise; Virata, Telana; Berndt, Rita Sloan

    2007-01-01

    Investigations of language processing in aphasia have increasingly implicated performance factors such as slowed activation and/or rapid decay of linguistic information. This approach is supported by studies utilizing a communication system (SentenceShaper™) which functions as a “processing prosthesis.” The system may reduce the impact of processing limitations by allowing repeated refreshing of working memory and by increasing the opportunity for aphasic subjects to monitor their own speech. Some aphasic subjects are able to produce markedly more structured speech on the system than they are able to produce spontaneously, and periods of largely independent home use of SentenceShaper have been linked to treatment effects, that is, to gains in speech produced without the use of the system. The purpose of the current study was to follow up on these studies with a new group of subjects. A second goal was to determine whether repeated, unassisted elicitations of the same narratives at baseline would give rise to practice effects, which could undermine claims for the efficacy of the system. PMID:17069883

  8. The Development of More Efficient Measures for Evaluating Language Impairments in Aphasic Patients.

    ERIC Educational Resources Information Center

    Phillips, Phyllis P.; Halpin, Gerald

    Because it generally took over an hour to administer the Porch Index of Communicative Ability (PICA), a shorter but comparable version of the test was developed. The original test was designed to quantify aphasic patients' ability level on common communicative tasks and consisted of 18 ten-item subtests. Each item resulted in a proficiency rating,…

  9. Phonological Processing of Second Language Phonemes: A Selective Deficit in a Bilingual Aphasic.

    ERIC Educational Resources Information Center

    Eviatar, Zohar; Leikin, Mark; Ibrahim, Raphiq

    1999-01-01

    A case study of a Russian-Hebrew bilingual woman with transcortical sensory aphasia showed that overall, aphasic symptoms were similar in the two languages, with Hebrew somewhat more impaired. The woman revealed a difference in her ability to perceive phonemes in the context of Hebrew words that depended on whether they were presented in a Russian…

  10. The comprehension of ambiguous idioms in aphasic patients.

    PubMed

    Cacciari, Cristina; Reati, Fabiola; Colombo, Maria Rosa; Padovani, Roberto; Rizzo, Silvia; Papagno, Costanza

    2006-01-01

    The ability to understand ambiguous idioms was assessed in 15 aphasic patients with preserved comprehension at the single-word level. A string-to-word matching task was used. Patients were requested to choose one among four alternatives: a word associated with the figurative meaning of the idiom string; a word semantically associated with the last constituent of the idiom string; and two unrelated words. The results showed that patients' performance was impaired with respect to a group of matched controls, with patients showing a frontal and/or temporal lesion being the most impaired. A significant number of semantically associated errors were produced, suggesting an impairment of inhibition mechanisms and/or of recognition/activation of the idiomatic meaning.

  11. A multimodal neuroimaging study of a case of crossed nonfluent/agrammatic primary progressive aphasia.

    PubMed

    Spinelli, Edoardo G; Caso, Francesca; Agosta, Federica; Gambina, Giuseppe; Magnani, Giuseppe; Canu, Elisa; Blasi, Valeria; Perani, Daniela; Comi, Giancarlo; Falini, Andrea; Gorno-Tempini, Maria Luisa; Filippi, Massimo

    2015-10-01

    Crossed aphasia has been reported mainly as post-stroke aphasia resulting from brain damage ipsilateral to the dominant right hand. Here, we described a case of a crossed nonfluent/agrammatic primary progressive aphasia (nfvPPA), who developed a corticobasal syndrome (CBS). We collected clinical, cognitive, and neuroimaging data for four consecutive years from a 55-year-old right-handed lady (JV) presenting with speech disturbances. 18-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) and DaT-scan with (123)I-Ioflupane were obtained. Functional MRI (fMRI) during a verb naming task was acquired to characterize patterns of language lateralization. Diffusion tensor MRI was used to evaluate white matter damage within the language network. At onset, JV presented with prominent speech output impairment and right frontal atrophy. After 3 years, language deficits worsened, with the occurrence of a mild agrammatism. The patient also developed a left-sided mild extrapyramidal bradykinetic-rigid syndrome. The clinical picture was suggestive of nfvPPA with mild left-sided extrapyramidal syndrome. At this time, voxel-wise SPM analyses of (18)F-FDG PET and structural MRI showed right greater than left frontal hypometabolism and damage, which included the Broca's area. DaT-scan showed a reduced uptake in the right striatum. FMRI during naming task demonstrated bilateral language activations, and tractography showed right superior longitudinal fasciculus (SLF) involvement. Over the following year, JV became mute and developed frank left-sided motor signs and symptoms, evolving into a CBS clinical picture. Brain atrophy worsened in frontal areas bilaterally, and extended to temporo-parietal regions, still with a right-sided asymmetry. Tractography showed an extension of damage to the left SLF and right inferior longitudinal fasciculus. We report a case of crossed nfvPPA followed longitudinally and studied with advanced neuroimaging techniques. The results highlight a

  12. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    ERIC Educational Resources Information Center

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  13. Orthographic Effects in the Word Substitutions of Aphasic Patients: An Epidemic of Right Neglect Dyslexia?

    ERIC Educational Resources Information Center

    Berndt, Rita Sloan; Haendiges, Anne N.; Mitchum, Charlotte C.

    2005-01-01

    Aphasic patients with reading impairments frequently substitute incorrect real words for target words when reading aloud. Many of these word substitutions have substantial orthographic overlap with their targets and are classified as ''visual errors'' (i.e., sharing 50% of targets' letters in the same relative position). Fifteen chronic aphasic…

  14. Neural correlates of lexicon and grammar: evidence from the production, reading, and judgment of inflection in aphasia.

    PubMed

    Ullman, Michael T; Pancheva, Roumyana; Love, Tracy; Yee, Eiling; Swinney, David; Hickok, Gregory

    2005-05-01

    Are the linguistic forms that are memorized in the mental lexicon and those that are specified by the rules of grammar subserved by distinct neurocognitive systems or by a single computational system with relatively broad anatomic distribution? On a dual-system view, the productive -ed-suffixation of English regular past tense forms (e.g., look-looked) depends upon the mental grammar, whereas irregular forms (e.g., dig-dug) are retrieved from lexical memory. On a single-mechanism view, the computation of both past tense types depends on associative memory. Neurological double dissociations between regulars and irregulars strengthen the dual-system view. The computation of real and novel, regular and irregular past tense forms was investigated in 20 aphasic subjects. Aphasics with non-fluent agrammatic speech and left frontal lesions were consistently more impaired at the production, reading, and judgment of regular than irregular past tenses. Aphasics with fluent speech and word-finding difficulties, and with left temporal/temporo-parietal lesions, showed the opposite pattern. These patterns held even when measures of frequency, phonological complexity, articulatory difficulty, and other factors were held constant. The data support the view that the memorized words of the mental lexicon are subserved by a brain system involving left temporal/temporo-parietal structures, whereas aspects of the mental grammar, in particular the computation of regular morphological forms, are subserved by a distinct system involving left frontal structures.
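
The dual-system architecture described above can be caricatured in a few lines of code. This is a deliberately toy illustration of the two routes, not a model from the study: irregular past tenses are retrieved from a memorized lexicon, while regulars fall through to a grammatical "-ed" rule. On the dual-system reading of the results, frontal/agrammatic damage would degrade the rule route and temporal lesions would degrade the lexical route.

```python
# Toy two-route past-tense generator illustrating the dual-system view.
# The small irregular lexicon here is invented for the example.

IRREGULAR_LEXICON = {"dig": "dug", "go": "went", "bring": "brought"}

def past_tense(verb):
    if verb in IRREGULAR_LEXICON:
        return IRREGULAR_LEXICON[verb]   # lexical-memory route (temporal)
    return verb + "ed"                   # grammatical rule route (frontal)

print(past_tense("dig"))   # → dug    (retrieved from the lexicon)
print(past_tense("look"))  # → looked (computed by the rule)
```

A single-mechanism account would instead derive both forms from one associative memory; the point of the double dissociation reported above is that the two routes can be damaged independently.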

  15. Neural Correlates of Dutch Verb Second in Speech Production

    ERIC Educational Resources Information Center

    den Ouden, Dirk-Bart; Hoogduin, Hans; Stowe, Laurie A.; Bastiaanse, Roelien

    2008-01-01

    Dutch speakers with agrammatic Broca's aphasia are known to have problems with the production of finite verbs in main clauses. This performance pattern has been accounted for in terms of the specific syntactic complexity of the Dutch main clause structure, which requires an extra syntactic operation (Verb Second), relative to the basic…

  16. Noninvasive brain stimulation for treatment of right- and left-handed poststroke aphasics.

    PubMed

    Heiss, Wolf-Dieter; Hartmann, Alexander; Rubi-Fessen, Ilona; Anglade, Carole; Kracht, Lutz; Kessler, Josef; Weiduschat, Nora; Rommel, Thomas; Thiel, Alexander

    2013-01-01

    Accumulating evidence from single case studies, small case series and randomized controlled trials seems to suggest that inhibitory noninvasive brain stimulation (NIBS) over the contralesional inferior frontal gyrus (IFG) of right-handers in conjunction with speech and language therapy (SLT) improves recovery from poststroke aphasia. Application of inhibitory NIBS to improve recovery in left-handed patients has not yet been reported. A total of 29 right-handed subacute poststroke aphasics were randomized to receive either 10 sessions of SLT following 20 min of inhibitory repetitive transcranial magnetic stimulation (rTMS) over the contralesional IFG or 10 sessions of SLT following sham stimulation; 2 left-handers were treated according to the same protocol with real rTMS. Language activation patterns were assessed with positron emission tomography prior to and after the treatment; 95% confidence intervals for changes in language performance scores and the activated brain volumes in both hemispheres were derived from TMS- and sham-treated right-handed patients and compared to the same parameters in left-handers. Right-handed patients treated with rTMS showed better recovery of language function in global aphasia test scores (t test, p < 0.002) as well as in picture-naming performance (ANOVA, p = 0.03) than sham-treated right-handers. In treated right-handers, a shift of activation to the ipsilesional hemisphere was observed, while sham-treated patients consolidated network activity in the contralesional hemisphere (repeated-measures ANOVA, p = 0.009). Both left-handed patients also improved, with 1 patient within the confidence limits of TMS-treated right-handers (23 points, 15.9-28.9) and the other patient within the limits of sham-treated subjects (8 points, 2.8-14.5). Both patients exhibited only a very small interhemispheric shift, much less than expected in TMS-treated right-handers, and more or less consolidated initially active networks in both hemispheres

  17. Working with Speakers.

    ERIC Educational Resources Information Center

    Pestel, Ann

    1989-01-01

    The author discusses working with speakers from business and industry to present career information at the secondary level. Advice for speakers is presented, as well as tips for program coordinators. (CH)

  18. On the optimization of a mixed speaker array in an enclosed space using the virtual-speaker weighting method

    NASA Astrophysics Data System (ADS)

    Peng, Bo; Zheng, Sifa; Liao, Xiangning; Lian, Xiaomin

    2018-03-01

    In order to achieve sound field reproduction in a wide frequency band, multiple-type speakers are used. The reproduction accuracy is not only affected by the signals sent to the speakers, but also depends on the position and the number of each type of speaker. The method of optimizing a mixed speaker array is investigated in this paper. A virtual-speaker weighting method is proposed to optimize both the position and the number of each type of speaker. In this method, a virtual-speaker model is proposed to quantify the increment of controllability of the speaker array when the speaker number increases. While optimizing a mixed speaker array, the gain of the virtual-speaker transfer function is used to determine the priority orders of the candidate speaker positions, which optimizes the position of each type of speaker. Then the relative gain of the virtual-speaker transfer function is used to determine whether the speakers are redundant, which optimizes the number of each type of speaker. Finally the virtual-speaker weighting method is verified by reproduction experiments of the interior sound field in a passenger car. The results validate that the optimum mixed speaker array can be obtained using the proposed method.
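
The selection logic described above can be sketched as a simple ranking-and-pruning procedure. The gains and positions below are made-up numbers for illustration; in the paper they would come from the virtual-speaker transfer functions of the enclosed sound field.

```python
# Hedged sketch of the virtual-speaker weighting selection step: candidate
# positions are prioritized by virtual-speaker gain, and candidates whose
# gain relative to the best position falls below a cutoff are treated as
# redundant and dropped. Threshold value and data are hypothetical.

def select_speakers(candidates, redundancy_threshold=0.1):
    """candidates: dict mapping position name -> virtual-speaker gain."""
    # Priority order: highest gain first.
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    best_gain = ranked[0][1]
    selected = []
    for position, gain in ranked:
        # A low relative gain means the speaker adds too little
        # controllability to the array and is considered redundant.
        if gain / best_gain >= redundancy_threshold:
            selected.append(position)
    return selected

gains = {"door_left": 1.0, "dash_center": 0.7, "rear_shelf": 0.4,
         "roof": 0.05}  # hypothetical positions and gains
print(select_speakers(gains))  # "roof" falls below the relative-gain cutoff
```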

  19. RTP Speakers Bureau

    EPA Pesticide Factsheets

    The Research Triangle Park Speakers Bureau page is a free resource that schools, universities, and community groups in the Raleigh-Durham-Chapel Hill, N.C. area can use to request speakers and find educational resources.

  20. Clinical and MRI correlates of disease progression in a case of nonfluent/agrammatic variant of primary progressive aphasia due to progranulin (GRN) Cys157LysfsX97 mutation.

    PubMed

    Caso, Francesca; Agosta, Federica; Magnani, Giuseppe; Galantucci, Sebastiano; Spinelli, Edoardo G; Galimberti, Daniela; Falini, Andrea; Comi, Giancarlo; Filippi, Massimo

    2014-07-15

    Little is known about the longitudinal changes of brain damage in patients with the sporadic nonfluent/agrammatic variant of primary progressive aphasia (nfvPPA) and in progranulin (GRN) mutation carriers. This study reports the clinical and MRI longitudinal data of a patient with nfvPPA carrying the GRN Cys157LysfsX97 mutation (GRN+). Voxel-based morphometry, tensor-based morphometry and diffusion tensor MRI were applied to evaluate gray matter (GM) and white matter (WM) changes over three years. The prominent clinical feature was motor speech impairment associated with only mild agrammatism. MRI demonstrated a progressive and severe GM atrophy of inferior fronto-insular-temporo-parietal regions with focal damage to frontotemporal and frontoparietal WM connections. This is the first report of longitudinal MRI data in an nfvPPA-GRN+ patient, and it offers new insights into the pathophysiology of the disease. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. A comparison of aphasic and non-brain-injured adults on a dichotic CV-syllable listening task.

    PubMed

    Shanks, J; Ryan, W

    1976-06-01

    A dichotic CV-syllable listening task was administered to a group of eleven non-brain-injured adults and to a group of eleven adult aphasics. The results of this study may be summarized as follows: 1) The group of non-brain-injured adults showed a slight right ear advantage for dichotically presented CV-syllables. 2) In comparison with the control group, the aphasic group showed a bilateral deficit in response to the dichotic CV-syllables, superimposed on a non-significant right ear advantage. 3) The aphasic group demonstrated a great deal of intersubject variability on the dichotic task, with six aphasics showing a right ear preference for the stimuli. The non-brain-injured subjects performed more homogeneously on the task. 4) The two subgroups of aphasics, a right ear advantage group and a left ear advantage group, performed significantly differently on the dichotic listening task. 5) Single-correct data analysis proved valuable by deleting accuracy of report for an examination of trials in which there was true competition for the single left hemispheric speech processor. These results were analyzed in terms of a functional model of auditory processing. In view of this model, the bilateral deficit in dichotic performance of the aphasic group was accounted for by the presence of a lesion within the dominant left hemisphere, where the speech signals from both ears converge for final processing. The right ear advantage shown by one aphasic subgroup was explained by a lesion interfering with the corpus callosal pathways from the left hemisphere; the left ear advantage observed within the other subgroup was explained by a lesion in the area of the auditory processor of the left hemisphere.

  2. Request a Speaker

    Science.gov Websites

    The U.S. Northern Command Speakers Program works to increase face-to-face contact with our public to help build and sustain public understanding of our command missions and

  3. Reflecting on Native Speaker Privilege

    ERIC Educational Resources Information Center

    Berger, Kathleen

    2014-01-01

    The issues surrounding native speakers (NSs) and nonnative speakers (NNSs) as teachers (NESTs and NNESTs, respectively) in the field of teaching English to speakers of other languages (TESOL) are a current topic of interest. In many contexts, the native speaker of English is viewed as the model teacher, thus putting the NEST into a position of…

  4. EEG Delta Band as a Marker of Brain Damage in Aphasic Patients after Recovery of Language

    ERIC Educational Resources Information Center

    Spironelli, Chiara; Angrilli, Alessandro

    2009-01-01

    In this study spectral delta percentage was used to assess both brain dysfunction/inhibition and functional linguistic impairment during different phases of word processing. To this aim, EEG delta amplitude was measured in 17 chronic non-fluent aphasic patients while engaged in three linguistic tasks: Orthographic, Phonological and Semantic.…

  5. Brief intervention for agrammatism in Primary Progressive Nonfluent Aphasia: A case report

    PubMed Central

    Machado, Thais Helena; Campanha, Aline Carvalho; Caramelli, Paulo; Carthery-Goulart, Maria Teresa

    2014-01-01

    The non-fluent and agrammatic variant of Primary Progressive Aphasia (NFPPA) is characterized by reduced verbal production with deficits in building grammatically correct sentences, involving dysfunctions at the syntactic and morphological levels of language. There is a growing number of studies on non-pharmacological alternatives focusing on the rehabilitation of functional aspects or specific cognitive impairments of each variant of PPA. This study reports a short-term treatment administered to a patient with NFPPA focusing on the production of sentences. The patient showed significantly reduced verbal fluency, reliance on key words, phrasal and grammatical simplification, and anomia. Using the method of errorless learning, six sessions were structured to stimulate the formation of sentences in the present and past with the cloze technique. The patient's improvement was restricted to the trained strategy, with 100% accuracy on the trained phrases and generalization to untrained phrases of similar syntactic structure after training. These results persisted one month after the treatment. PMID:29213916

  6. Variable Solutions to the Same Problem: Aberrant Practice Effects in Object Naming by Three Aphasic Patients

    ERIC Educational Resources Information Center

    Wingfield, Arthur; Brownell, Hiram; Hoyte, Ken J.

    2006-01-01

    Although deficits in confrontation naming are a common consequence of damage to the language areas of the left cerebral hemisphere, some patients with aphasia show relatively good naming ability. We measured effects of repeated practice on naming latencies for a set of pictured objects by three aphasic patients with near-normal naming ability and…

  7. Resumption of gainful employment in aphasics: preliminary findings.

    PubMed

    Carriero, M R; Faglia, L; Vignolo, L A

    1987-12-01

    We report preliminary data on aphasic patients who, in spite of their language problems, have succeeded in finding a reasonably satisfactory occupational resettlement. Patients who (a) still had a moderate to severe aphasia, and (b) had resumed gainful employment requiring interpersonal communication, were recalled for a check-up and assessed with: (1) a comprehensive aphasia test; (2) a semistructured interview including detailed questioning about the type of aphasia and the patient's reaction to it, the type of work before the onset of aphasia, and the type of current work, with particular emphasis on the patients' compensatory mechanisms and emotional reactions. Results comprise 10 cases to date. One case is described in detail. Findings indicate that the ability to resume a gainful occupation is often greater than could be expected on the sole basis of formal language examination. Findings are discussed from a neuropsychological, social and rehabilitation point of view.

  8. When speaker identity is unavoidable: Neural processing of speaker identity cues in natural speech.

    PubMed

    Tuninetti, Alba; Chládková, Kateřina; Peter, Varghese; Schiller, Niels O; Escudero, Paola

    2017-11-01

    Speech sound acoustic properties vary largely across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences, in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word. Here we tested whether the process of normalizing speaker and accent differences, facilitating the recognition of linguistic information, is found at the level of neural processing, and whether it is modulated by the listeners' native language. In a multi-deviant oddball paradigm, native and nonnative speakers of Dutch were exposed to naturally-produced Dutch vowels varying in speaker, sex, accent, and phoneme identity. Unexpectedly, the analysis of mismatch negativity (MMN) amplitudes elicited by each type of change shows a large degree of early perceptual sensitivity to non-linguistic cues. This finding on perception of naturally-produced stimuli contrasts with previous studies examining the perception of synthetic stimuli wherein adult listeners automatically disregard acoustic cues to speaker identity. The present finding bears relevance to speech normalization theories, suggesting that at an unattended level of processing, listeners are indeed sensitive to changes in fundamental frequency in natural speech tokens. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. The Role of Interaction in Native Speaker Comprehension of Nonnative Speaker Speech.

    ERIC Educational Resources Information Center

    Polio, Charlene; Gass, Susan M.

    1998-01-01

    Because interaction gives language learners an opportunity to modify their speech upon a signal of noncomprehension, it should also have a positive effect on native speakers' (NS) comprehension of nonnative speakers (NNS). This study shows that interaction does help NSs comprehend NNSs, in contrast to the claims of an earlier study that found no…

  10. Arctic Visiting Speakers Series (AVS)

    NASA Astrophysics Data System (ADS)

    Fox, S. E.; Griswold, J.

    2011-12-01

    The Arctic Visiting Speakers (AVS) Series funds researchers and other arctic experts to travel and share their knowledge in communities where they might not otherwise connect. Speakers cover a wide range of arctic research topics and can address a variety of audiences, including K-12 students, graduate and undergraduate students, and the general public. Host applications are accepted on an ongoing basis, depending on funding availability, and need to be submitted at least 1 month prior to the expected tour dates. Interested hosts can choose speakers from an online Speakers Bureau or invite a speaker of their choice. Preference is given to individuals and organizations hosting speakers who will reach a broad audience and the general public. AVS tours are encouraged to span several days, allowing ample time for interactions with faculty, students, local media, and community members. Applications for both domestic and international visits will be considered; applications for international visits should involve participation of more than one host organization and must include either a US-based speaker or a US-based organization. This is a small but important program that educates the public about Arctic issues. There have been 27 tours since 2007 that have reached communities across the globe, including Gatineau, Quebec, Canada; St. Petersburg, Russia; Piscataway, New Jersey; Cordova, Alaska; Nuuk, Greenland; Elizabethtown, Pennsylvania; Oslo, Norway; Inari, Finland; Borgarnes, Iceland; San Francisco, California; and Wolcott, Vermont, to name a few. Tours have included lectures to K-12 schools, college and university students, tribal organizations, Boy Scout troops, science center and museum patrons, and the general public. With approximately 300 attendees at each AVS tour, roughly 4,100 people have been reached since 2007. The expectations for each tour are manageable: hosts must submit a schedule of events and a tour summary to be posted online.

  11. Comparison of singer's formant, speaker's ring, and LTA spectrum among classical singers and untrained normal speakers.

    PubMed

    Oliveira Barrichelo, V M; Heuer, R J; Dean, C M; Sataloff, R T

    2001-09-01

    Many studies have described and analyzed the singer's formant. A similar phenomenon produced by trained speakers led some authors to examine the speaker's ring. If we consider these phenomena as resonance effects associated with vocal tract adjustments and training, can we hypothesize that trained singers can carry over their singing formant ability into speech, also obtaining a speaker's ring? Can we find similar differences for energy distribution in continuous speech? Forty classically trained singers and forty untrained normal speakers performed an all-voiced reading task and produced a sample of a sustained spoken vowel /a/. The singers were also requested to perform a sustained sung vowel /a/ at a comfortable pitch. The reading was analyzed by the long-term average spectrum (LTAS) method. The sustained vowels were analyzed through power spectrum analysis. The data suggest that singers show more energy concentration in the singer's formant/speaker's ring region in both sung and spoken vowels. The singers' spoken vowel energy in the speaker's ring area was found to be significantly larger than that of the untrained speakers. The LTAS showed similar findings suggesting that those differences also occur in continuous speech. This finding supports the value of further research on the effect of singing training on the resonance of the speaking voice.
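    The long-term average spectrum (LTAS) analysis used in the reading task can be sketched as follows: the magnitude spectra of successive windowed frames are averaged over the whole signal. This is a minimal illustration on a synthetic tone, not the study's analysis pipeline; the sampling rate, frame length, and hop size are illustrative assumptions.

    ```python
    # Minimal LTAS sketch: average the magnitude spectra of successive
    # windowed frames over a whole signal (numpy only; synthetic input).
    import numpy as np

    def ltas(signal, frame_len=1024, hop=512):
        window = np.hanning(frame_len)
        spectra = [np.abs(np.fft.rfft(signal[i:i + frame_len] * window))
                   for i in range(0, len(signal) - frame_len + 1, hop)]
        return np.mean(spectra, axis=0)  # averaged magnitude per frequency bin

    fs = 16000                              # sampling rate (Hz), illustrative
    t = np.arange(fs) / fs                  # 1 second of samples
    tone = np.sin(2 * np.pi * 3000 * t)     # energy near 3 kHz
    spec = ltas(tone)
    peak_hz = np.argmax(spec) * fs / 1024   # bin index -> frequency
    print(round(peak_hz))                   # ≈ 3000
    ```

    An energy concentration in the 2-4 kHz region of such a spectrum is what the singer's formant/speaker's ring comparisons above are based on.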

  12. Hybrid Speaker Recognition Using Universal Acoustic Model

    NASA Astrophysics Data System (ADS)

    Nishimura, Jun; Kuroda, Tadahiro

    We propose a novel speaker recognition approach using a speaker-independent universal acoustic model (UAM) for sensornet applications. In sensornet applications such as “Business Microscope”, interactions among knowledge workers in an organization can be visualized by sensing face-to-face communication using wearable sensor nodes. In conventional studies, speakers are detected by comparing the energy of input speech signals among the nodes. However, there are often synchronization errors among the nodes which degrade speaker recognition performance. By focusing on properties of the speaker's acoustic channel, the UAM can provide robustness against these synchronization errors. The overall speaker recognition accuracy is improved by combining the UAM with the energy-based approach. For 0.1 s speech inputs and 4 subjects, a speaker recognition accuracy of 94% is achieved for synchronization errors of less than 100 ms.
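    The energy-based baseline that the UAM is combined with can be sketched in a few lines: among the wearable nodes, the node observing the highest speech energy is taken to be worn by the current speaker. Node names and energy values below are illustrative assumptions, not data from the paper.

    ```python
    # Toy sketch of energy-based speaker detection across sensor nodes:
    # the node reporting the highest input speech energy wins.
    def detect_speaker(energies):
        """Return the node id reporting the highest speech energy."""
        return max(energies, key=energies.get)

    # Hypothetical per-frame energy readings from three wearable nodes.
    frame_energy = {"node_a": 0.12, "node_b": 0.87, "node_c": 0.33}
    print(detect_speaker(frame_energy))  # node_b
    ```

    Synchronization error between nodes skews exactly this comparison, which is why the paper adds a channel-based model on top of it.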

  13. Grammatical comprehension deficits in non-fluent/agrammatic primary progressive aphasia.

    PubMed

    Charles, Dorothy; Olm, Christopher; Powers, John; Ash, Sharon; Irwin, David J; McMillan, Corey T; Rascovsky, Katya; Grossman, Murray

    2014-03-01

    Grammatical comprehension difficulty is an essential supporting feature of the non-fluent/agrammatic variant of primary progressive aphasia (naPPA), but well-controlled clinical measures of grammatical comprehension are unavailable. The aims were to develop a measure of grammatical comprehension, to examine it comparatively in PPA variants and behavioural-variant frontotemporal degeneration (bvFTD), and to assess the neuroanatomic basis for these deficits with volumetric grey matter atrophy and whole-brain fractional anisotropy (FA) in white matter tracts. This case-control study at an academic medical centre included 39 patients with variants of PPA (naPPA=12, lvPPA=15 and svPPA=12), 27 bvFTD patients without aphasia and 12 healthy controls; the main outcome measure was grammatical comprehension accuracy. Patients with naPPA had selective difficulty understanding cleft sentence structures, while all PPA variants and patients with bvFTD were impaired with sentences containing a centre-embedded subordinate clause. Patients with bvFTD were also impaired understanding sentences involving short-term memory. Linear regressions related grammatical comprehension difficulty in naPPA to left anterior-superior temporal atrophy and reduced FA in the corpus callosum and inferior frontal-occipital fasciculus. Difficulty with centre-embedded sentences in other PPA variants was related to other brain regions. These findings emphasise a distinct grammatical comprehension deficit in naPPA and associate it with interruption of a frontal-temporal neural network.

  14. Experimental study on GMM-based speaker recognition

    NASA Astrophysics Data System (ADS)

    Ye, Wenxing; Wu, Dapeng; Nucci, Antonio

    2010-04-01

    Speaker recognition plays a very important role in the field of biometric security. In order to improve recognition performance, many pattern recognition techniques have been explored in the literature. Among these techniques, the Gaussian Mixture Model (GMM) has proved to be an effective statistical model for speaker recognition and is used in most state-of-the-art speaker recognition systems. The GMM is used to represent the 'voice print' of a speaker by modeling the spectral characteristics of that speaker's speech signals. In this paper, we implement a speaker recognition system, which consists of preprocessing, Mel-Frequency Cepstrum Coefficients (MFCCs) based feature extraction, and GMM based classification. We test our system with the TIDIGITS data set (325 speakers) and our own recordings of more than 200 speakers; our system achieves a 100% correct recognition rate. Moreover, we also test our system under the scenario that training samples are from one language but test samples are from a different language; our system again achieves a 100% correct recognition rate, which indicates that our system is language independent.
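    The GMM-based recognition scheme described above, one GMM "voice print" per speaker with recognition by maximum average log-likelihood, can be sketched as below. Random vectors stand in for real MFCC frames, and all names and parameters (4 mixture components, 13-dimensional frames) are illustrative assumptions, not the paper's configuration.

    ```python
    # Minimal sketch of GMM-based speaker identification: enroll one GMM per
    # speaker on feature frames, then identify test frames by the model with
    # the highest average log-likelihood. Synthetic features, not real MFCCs.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    def fake_mfcc_frames(center, n_frames=200):
        # Stand-in for 13-dimensional MFCC frames clustered around a
        # speaker-specific mean.
        return center + rng.normal(scale=0.5, size=(n_frames, 13))

    centers = {"alice": rng.normal(size=13), "bob": rng.normal(size=13) + 3.0}

    # Enrollment: fit one GMM per speaker on that speaker's training frames.
    models = {spk: GaussianMixture(n_components=4, random_state=0)
                   .fit(fake_mfcc_frames(c))
              for spk, c in centers.items()}

    def identify(frames):
        # Recognition: the speaker whose model best explains the test frames.
        return max(models, key=lambda spk: models[spk].score(frames))

    print(identify(fake_mfcc_frames(centers["bob"])))  # bob
    ```

    In a real system, `fake_mfcc_frames` would be replaced by MFCC extraction from audio; the enrollment/scoring structure stays the same.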

  15. Accounting for the listener: comparing the production of contrastive intonation in typically-developing speakers and speakers with autism.

    PubMed

    Kaland, Constantijn; Swerts, Marc; Krahmer, Emiel

    2013-09-01

    The present research investigates what drives the prosodic marking of contrastive information. For example, a typically developing speaker of a Germanic language like Dutch generally refers to a pink car as a "PINK car" (accented words in capitals) when a previously mentioned car was red. The main question addressed in this paper is whether contrastive intonation is produced with respect to the speaker's or (also) the listener's perspective on the preceding discourse. Furthermore, this research investigates the production of contrastive intonation by typically developing speakers and speakers with autism. The latter group is investigated because people with autism are argued to have difficulties accounting for another person's mental state and exhibit difficulties in the production and perception of accentuation and pitch range. To this end, utterances with contrastive intonation are elicited from both groups and analyzed in terms of function and form of prosody using production and perception measures. Contrary to expectations, typically developing speakers and speakers with autism produce functionally similar contrastive intonation as both groups account for both their own and their listener's perspective. However, typically developing speakers use a larger pitch range and are perceived as speaking more dynamically than speakers with autism, suggesting differences in their use of prosodic form.

  16. Speaker Identity Supports Phonetic Category Learning

    ERIC Educational Resources Information Center

    Mani, Nivedita; Schneider, Signe

    2013-01-01

    Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech--it also provides a visual indexical cue of the identity of the speaker. The…

  17. Speaker normalization for chinese vowel recognition in cochlear implants.

    PubMed

    Luo, Xin; Fu, Qian-Jie

    2005-07-01

    Because of the limited spectro-temporal resolution associated with cochlear implants, implant patients often have greater difficulty with multitalker speech recognition. The present study investigated whether multitalker speech recognition can be improved by applying speaker normalization techniques to cochlear implant speech processing. Multitalker Chinese vowel recognition was tested with normal-hearing Chinese-speaking subjects listening to a 4-channel cochlear implant simulation, with and without speaker normalization. For each subject, speaker normalization was referenced to the speaker that produced the best recognition performance under conditions without speaker normalization. To match the remaining speakers to this "optimal" output pattern, the overall frequency range of the analysis filter bank was adjusted for each speaker according to the ratio of the mean third formant frequency values between the specific speaker and the reference speaker. Results showed that speaker normalization provided a small but significant improvement in subjects' overall recognition performance. After speaker normalization, subjects' patterns of recognition performance across speakers changed, demonstrating the potential for speaker-dependent effects with the proposed normalization technique.
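    The normalization step described above amounts to a single scale factor applied to the analysis filter bank's frequency range. A minimal sketch follows; the direction of the F3 ratio and the Hz values are illustrative assumptions rather than the paper's exact implementation.

    ```python
    # Sketch of F3-ratio speaker normalization: rescale the analysis
    # filter-bank frequency range by the ratio of mean third-formant (F3)
    # values between the reference speaker and a given speaker.
    def normalization_factor(mean_f3_speaker, mean_f3_reference):
        """Scale factor applied to the analysis filter-bank frequency range."""
        return mean_f3_reference / mean_f3_speaker

    def rescale_filterbank(low_hz, high_hz, factor):
        return low_hz * factor, high_hz * factor

    # Hypothetical mean F3 values (Hz) for one speaker and the reference.
    factor = normalization_factor(mean_f3_speaker=3100.0,
                                  mean_f3_reference=2800.0)
    low, high = rescale_filterbank(200.0, 7000.0, factor)
    print(round(low, 1), round(high, 1))  # ≈ 180.6 6322.6
    ```

    A speaker with a higher mean F3 than the reference thus gets a compressed analysis range, pulling that speaker's output pattern toward the reference pattern.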

  18. Multimodal Speaker Diarization.

    PubMed

    Noulas, A; Englebienne, G; Krose, B J A

    2012-01-01

    We present a novel probabilistic framework that fuses information coming from the audio and video modality to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all of which come from publicly available data sets. The results acquired in speaker diarization are in favor of the proposed multimodal framework, which outperforms the single modality analysis results and improves over the state-of-the-art audio-based speaker diarization.

  19. Using Stimulated Recall to Investigate Native Speaker Perceptions in Native-Nonnative Speaker Interaction

    ERIC Educational Resources Information Center

    Polio, Charlene; Gass, Susan; Chapin, Laura

    2006-01-01

    Implicit negative feedback has been shown to facilitate SLA, and the extent to which such feedback is given is related to a variety of task and interlocutor variables. The background of a native speaker (NS), in terms of amount of experience in interactions with nonnative speakers (NNSs), has been shown to affect the quantity of implicit negative…

  20. English Speakers Attend More Strongly than Spanish Speakers to Manner of Motion when Classifying Novel Objects and Events

    ERIC Educational Resources Information Center

    Kersten, Alan W.; Meissner, Christian A.; Lechuga, Julia; Schwartz, Bennett L.; Albrechtsen, Justin S.; Iglesias, Adam

    2010-01-01

    Three experiments provide evidence that the conceptualization of moving objects and events is influenced by one's native language, consistent with linguistic relativity theory. Monolingual English speakers and bilingual Spanish/English speakers tested in an English-speaking context performed better than monolingual Spanish speakers and bilingual…

  1. Patterns of lung volume use during an extemporaneous speech task in persons with Parkinson disease.

    PubMed

    Bunton, Kate

    2005-01-01

    This study examined patterns of lung volume use in speakers with Parkinson disease (PD) during an extemporaneous speaking task. The performance of a control group was also examined. Behaviors described are based on acoustic, kinematic and linguistic measures. Group differences were found in breath group duration, lung volume initiation, and lung volume termination measures. Speakers in the control group alternated between longer and shorter breath groups, with starting lung volumes higher for the longer breath groups and lower for the shorter ones. Speech production was terminated before reaching tidal end expiratory level (EEL). This pattern was also seen in 4 of 7 speakers with PD. The remaining 3 PD speakers initiated speech at low starting lung volumes and continued speaking below EEL. This subgroup of PD speakers ended breath groups at agrammatical boundaries, whereas control speakers ended at appropriate grammatical boundaries. As a result of participating in this exercise, the reader will (1) be able to describe the patterns of lung volume use in speakers with Parkinson disease and compare them with those employed by control speakers; and (2) obtain information about the influence of speaking task on speech breathing.

  2. The Speaker Gender Gap at Critical Care Conferences.

    PubMed

    Mehta, Sangeeta; Rose, Louise; Cook, Deborah; Herridge, Margaret; Owais, Sawayra; Metaxa, Victoria

    2018-06-01

    To review women's participation as faculty at five critical care conferences over 7 years, we retrospectively analysed five scientific programs to identify the proportion of female speakers and each speaker's profession, based on conference conveners, program documents, or internet research. Three international (European Society of Intensive Care Medicine, International Symposium on Intensive Care and Emergency Medicine, Society of Critical Care Medicine) and two national (Critical Care Canada Forum, U.K. Intensive Care Society State of the Art Meeting) annual critical care conferences held between 2010 and 2016 were included. Male speakers outnumbered female speakers at all five conferences, in all 7 years. Overall, women represented 5-31% of speakers, and female physicians represented 5-26% of speakers. Nursing and allied health professional faculty represented 0-25% of speakers; in general, more than 50% of allied health professionals were women. Over the 7 years, the Society of Critical Care Medicine had the highest representation of female (27% overall) and nursing/allied health professional (16-25%) speakers; notably, male physicians substantially outnumbered female physicians in all years (62-70% vs 10-19%, respectively). Women's representation on conference program committees ranged from 0% to 40%, with the Society of Critical Care Medicine having the highest representation of women (26-40%). The proportions of female speakers, physician speakers, and program committee members increased significantly over time at the Society of Critical Care Medicine and U.K. Intensive Care Society State of the Art Meeting conferences (p < 0.05), but there was no temporal change at the other three conferences. There is a speaker gender gap at critical care conferences, with male faculty outnumbering female faculty. This gap is more marked among physician speakers than among those representing nursing and allied health professionals. Several organizational strategies can

  3. How Do Speakers Avoid Ambiguous Linguistic Expressions?

    ERIC Educational Resources Information Center

    Ferreira, V.S.; Slevc, L.R.; Rogers, E.S.

    2005-01-01

    Three experiments assessed how speakers avoid linguistically and nonlinguistically ambiguous expressions. Speakers described target objects (a flying mammal, bat) in contexts including foil objects that caused linguistic (a baseball bat) and nonlinguistic (a larger flying mammal) ambiguity. Speakers sometimes avoided linguistic-ambiguity, and they…

  4. Primary progressive aphasia and the evolving neurology of the language network

    PubMed Central

    Mesulam, M.-Marsel; Rogalski, Emily J.; Wieneke, Christina; Hurley, Robert S.; Geula, Changiz; Bigio, Eileen H.; Thompson, Cynthia K.; Weintraub, Sandra

    2014-01-01

    Primary progressive aphasia (PPA) is caused by selective neurodegeneration of the language-dominant cerebral hemisphere; a language deficit initially arises as the only consequential impairment and remains predominant throughout most of the course of the disease. Agrammatic, logopenic and semantic subtypes, each reflecting a characteristic pattern of language impairment and corresponding anatomical distribution of cortical atrophy, represent the most frequent presentations of PPA. Such associations between clinical features and the sites of atrophy have provided new insights into the neurology of fluency, grammar, word retrieval, and word comprehension, and have necessitated modification of concepts related to the functions of the anterior temporal lobe and Wernicke’s area. The underlying neuropathology of PPA is, most commonly, frontotemporal lobar degeneration in the agrammatic and semantic forms, and Alzheimer disease (AD) pathology in the logopenic form; the AD pathology often displays atypical and asymmetrical anatomical features consistent with the aphasic phenotype. The PPA syndrome reflects complex interactions between disease-specific neuropathological features and patient-specific vulnerability. A better understanding of these interactions might help us to elucidate the biology of the language network and the principles of selective vulnerability in neurodegenerative diseases. We review these aspects of PPA, focusing on advances in our understanding of the clinical features and neuropathology of PPA and what they have taught us about the neural substrates of the language network. PMID:25179257

  5. Primary progressive aphasia and the evolving neurology of the language network.

    PubMed

    Mesulam, M-Marsel; Rogalski, Emily J; Wieneke, Christina; Hurley, Robert S; Geula, Changiz; Bigio, Eileen H; Thompson, Cynthia K; Weintraub, Sandra

    2014-10-01

    Primary progressive aphasia (PPA) is caused by selective neurodegeneration of the language-dominant cerebral hemisphere; a language deficit initially arises as the only consequential impairment and remains predominant throughout most of the course of the disease. Agrammatic, logopenic and semantic subtypes, each reflecting a characteristic pattern of language impairment and corresponding anatomical distribution of cortical atrophy, represent the most frequent presentations of PPA. Such associations between clinical features and the sites of atrophy have provided new insights into the neurology of fluency, grammar, word retrieval, and word comprehension, and have necessitated modification of concepts related to the functions of the anterior temporal lobe and Wernicke's area. The underlying neuropathology of PPA is, most commonly, frontotemporal lobar degeneration in the agrammatic and semantic forms, and Alzheimer disease (AD) pathology in the logopenic form; the AD pathology often displays atypical and asymmetrical anatomical features consistent with the aphasic phenotype. The PPA syndrome reflects complex interactions between disease-specific neuropathological features and patient-specific vulnerability. A better understanding of these interactions might help us to elucidate the biology of the language network and the principles of selective vulnerability in neurodegenerative diseases. We review these aspects of PPA, focusing on advances in our understanding of the clinical features and neuropathology of PPA and what they have taught us about the neural substrates of the language network.

  6. Speech serial control in healthy speakers and speakers with hypokinetic or ataxic dysarthria: effects of sequence length and practice

    PubMed Central

    Reilly, Kevin J.; Spencer, Kristie A.

    2013-01-01

    The current study investigated the processes responsible for selection of sounds and syllables during production of speech sequences in 10 adults with hypokinetic dysarthria from Parkinson’s disease, five adults with ataxic dysarthria, and 14 healthy control speakers. Speech production data from a choice reaction time task were analyzed to evaluate the effects of sequence length and practice on speech sound sequencing. Speakers produced sequences that were between one and five syllables in length over five experimental runs of 60 trials each. In contrast to the healthy speakers, speakers with hypokinetic dysarthria demonstrated exaggerated sequence length effects for both inter-syllable intervals (ISIs) and speech error rates. Conversely, speakers with ataxic dysarthria failed to demonstrate a sequence length effect on ISIs and were also the only group that did not exhibit practice-related changes in ISIs and speech error rates over the five experimental runs. The exaggerated sequence length effects in the hypokinetic speakers with Parkinson’s disease are consistent with an impairment of action selection during speech sequence production. The absent length effects observed in the speakers with ataxic dysarthria are consistent with previous findings that indicate a limited capacity to buffer speech sequences in advance of their execution. In addition, the lack of practice effects in these speakers suggests that learning-related improvements in the production rate and accuracy of speech sequences involve processing by structures of the cerebellum. Together, the current findings inform models of serial control for speech in healthy speakers and support the notion that sequencing deficits contribute to speech symptoms in speakers with hypokinetic or ataxic dysarthria. In addition, these findings indicate that speech sequencing is differentially impaired in hypokinetic and ataxic dysarthria. PMID:24137121

  7. Audiovisual perceptual learning with multiple speakers.

    PubMed

    Mitchel, Aaron D; Gerfen, Chip; Weiss, Daniel J

    2016-05-01

    One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g. the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers, and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome obstacles they face due to this variability during speech perception. Here we test whether visual contextual cues to speaker identity may facilitate the formation and maintenance of distributional representations for individual speakers, allowing listeners to adjust phoneme boundaries in a speaker-specific manner. We familiarized participants to an audiovisual continuum between /aba/ and /ada/. During familiarization, the "B-face" mouthed /aba/ when an ambiguous token was played, while the "D-face" mouthed /ada/. At test, the same ambiguous token was more likely to be identified as /aba/ when paired with a still image of the "B-face" than with an image of the "D-face." This was not the case in the control condition, in which the two faces were paired equally with the ambiguous token. Together, these results suggest that listeners may form speaker-specific phonemic representations using facial identity cues.

  8. The 2016 NIST Speaker Recognition Evaluation

    DTIC Science & Technology

    2017-08-20

    The 2016 NIST Speaker Recognition Evaluation. Seyed Omid Sadjadi, Timothée Kheyrkhah, Audrey Tong, Craig Greenberg, Douglas Reynolds, Elliot… recent in an ongoing series of speaker recognition evaluations (SRE) to foster research in robust text-independent speaker recognition, as well as… an online evaluation platform, a fixed training data condition, more variability in test segment duration (uniformly distributed between 10 s and 60 s)

  9. Learning Words from Speakers with False Beliefs

    ERIC Educational Resources Information Center

    Papafragou, Anna; Fairchild, Sarah; Cohen, Matthew L.; Friedberg, Carlyn

    2017-01-01

    During communication, hearers try to infer the speaker's intentions to be able to understand what the speaker means. Nevertheless, whether (and how early) preschoolers track their interlocutors' mental states is still a matter of debate. Furthermore, there is disagreement about how children's ability to consult a speaker's belief in communicative…

  10. Embodied Communication: Speakers' Gestures Affect Listeners' Actions

    ERIC Educational Resources Information Center

    Cook, Susan Wagner; Tanenhaus, Michael K.

    2009-01-01

    We explored how speakers and listeners use hand gestures as a source of perceptual-motor information during naturalistic communication. After solving the Tower of Hanoi task either with real objects or on a computer, speakers explained the task to listeners. Speakers' hand gestures, but not their speech, reflected properties of the particular…

  11. Unsupervised real-time speaker identification for daily movies

    NASA Astrophysics Data System (ADS)

    Li, Ying; Kuo, C.-C. Jay

    2002-07-01

    The problem of identifying speakers for movie content analysis is addressed in this paper. While most previous work on speaker identification was carried out in a supervised mode using pure audio data, more robust results can be obtained in real time by integrating knowledge from multiple media sources in an unsupervised mode. In this work, both audio and visual cues are employed and subsequently combined in a probabilistic framework to identify speakers. In particular, audio information is used to identify speakers with a maximum likelihood (ML)-based approach, while visual information is adopted to distinguish speakers by detecting and recognizing their talking faces based on face detection/recognition and mouth tracking techniques. Moreover, to accommodate speakers' acoustic variation over time, we update their models on the fly by adapting them to newly contributed speech data. Encouraging results have been achieved through extensive experiments, which show promise for the proposed audiovisual unsupervised speaker identification system.

  12. Speaker Linking and Applications using Non-Parametric Hashing Methods

    DTIC Science & Technology

    2016-09-08

    …clustering method based on hashing—canopy-clustering. We apply this method to a large corpus of speaker recordings, demonstrate performance tradeoffs… and compare to other hashing methods. Index Terms: speaker recognition, clustering, hashing, locality sensitive hashing. … We assume… speaker in our corpus. Second, given a QBE method, how can we perform speaker clustering—each cluster should be a single speaker, and a cluster should

  13. The effects of enactment on communicative competence in aphasic casual conversation: a functional linguistic perspective.

    PubMed

    Groenewold, Rimke; Armstrong, Elizabeth

    2018-05-14

    Previous research has shown that speakers with aphasia rely on enactment more often than non-brain-damaged language users. Several studies have been conducted to explain this observed increase, demonstrating that spoken language containing enactment is easier to produce and is more engaging to the conversation partner. This paper describes the effects of the occurrence of enactment in casual conversation involving individuals with aphasia on the level of conversational assertiveness, evaluating whether and to what extent the occurrence of enactment in the speech of individuals with aphasia contributes to their conversational assertiveness. Conversations between a speaker with aphasia and his wife (drawn from AphasiaBank) were analysed in several steps. First, the transcripts were divided into moves, and all moves were coded according to the systemic functional linguistics (SFL) framework. Next, all moves were labelled in terms of their level of conversational assertiveness, as defined in the previous literature. Finally, all enactments were identified and their level of conversational assertiveness was compared with that of non-enactments. Throughout the conversations, the non-brain-damaged speaker was more assertive than the speaker with aphasia; however, the speaker with aphasia produced more enactments than the non-brain-damaged speaker, and his moves containing enactment were more assertive than those without enactment. The use of enactment in the conversations under study positively affected the level of conversational assertiveness of the speaker with aphasia, a competence that is important for speakers with aphasia because it contributes to their floor time, their chances of being taken seriously, and their degree of control over the conversation topic.

  14. Compliment Responses: Comparing American Learners of Japanese, Native Japanese Speakers, and American Native English Speakers

    ERIC Educational Resources Information Center

    Tatsumi, Naofumi

    2012-01-01

    Previous research shows that American learners of Japanese (AJs) tend to differ from native Japanese speakers in their compliment responses (CRs). Yokota (1986) and Shimizu (2009) have reported that AJs tend to respond more negatively than native Japanese speakers. It has also been reported that AJs' CRs tend to lack the use of avoidance or…

  15. Speaker Segmentation and Clustering Using Gender Information

    DTIC Science & Technology

    2006-02-01

    AFRL-HE-WP-TP-2006-0026, Air Force Research Laboratory, February 2006 (conference proceedings): "Speaker Segmentation and Clustering Using Gender Information," by Brian M. Ore (General Dynamics). Gender information is used in the first stages of segmentation and in the clustering stage of speaker diarization of news broadcast files.

  16. Magnetic Fluids Deliver Better Speaker Sound Quality

    NASA Technical Reports Server (NTRS)

    2015-01-01

    In the 1960s, Glenn Research Center developed a magnetized fluid to draw rocket fuel into spacecraft engines while in space. Sony has incorporated the technology into its line of slim speakers by using the fluid as a liquid stand-in for the speaker's dampers, which prevent the speaker from blowing out while adding stability. The fluid helps to deliver more volume and hi-fidelity sound while reducing distortion.

  17. Predicting clinical decline in progressive agrammatic aphasia and apraxia of speech.

    PubMed

    Whitwell, Jennifer L; Weigand, Stephen D; Duffy, Joseph R; Clark, Heather M; Strand, Edythe A; Machulda, Mary M; Spychalla, Anthony J; Senjem, Matthew L; Jack, Clifford R; Josephs, Keith A

    2017-11-28

    To determine whether baseline clinical and MRI features predict rate of clinical decline in patients with progressive apraxia of speech (AOS). Thirty-four patients with progressive AOS, with AOS either in isolation or in the presence of agrammatic aphasia, were followed up longitudinally for up to 4 visits, with clinical testing and MRI at each visit. Linear mixed-effects regression models including all visits (n = 94) were used to assess baseline clinical and MRI variables that predict rate of worsening of aphasia, motor speech, parkinsonism, and behavior. Clinical predictors included baseline severity and AOS type. MRI predictors included baseline frontal, premotor, motor, and striatal gray matter volumes. More severe parkinsonism at baseline was associated with faster rate of decline in parkinsonism. Patients with predominant sound distortions (AOS type 1) showed faster rates of decline in aphasia and motor speech, while patients with segmented speech (AOS type 2) showed faster rates of decline in parkinsonism. On MRI, we observed trends for fastest rates of decline in aphasia in patients with relatively small left, but preserved right, Broca area and precentral cortex. Bilateral reductions in lateral premotor cortex were associated with faster rates of decline of behavior. No associations were observed between volumes and decline in motor speech or parkinsonism. Rate of decline of each of the 4 clinical features assessed was associated with different baseline clinical and regional MRI predictors. Our findings could help improve prognostic estimates for these patients. © 2017 American Academy of Neurology.

  18. How Cognitive Load Influences Speakers' Choice of Referring Expressions.

    PubMed

    Vogels, Jorrig; Krahmer, Emiel; Maes, Alfons

    2015-08-01

    We report on two experiments investigating the effect of an increased cognitive load for speakers on the choice of referring expressions. Speakers produced story continuations to addressees, in which they referred to characters that were either salient or non-salient in the discourse. In Experiment 1, referents that were salient for the speaker were non-salient for the addressee, and vice versa. In Experiment 2, all discourse information was shared between speaker and addressee. Cognitive load was manipulated by the presence or absence of a secondary task for the speaker. The results show that speakers under load are more likely to produce pronouns, at least when referring to less salient referents. We take this finding as evidence that speakers under load have more difficulties taking discourse salience into account, resulting in the use of expressions that are more economical for themselves. © 2014 Cognitive Science Society, Inc.

  19. International Student Speaker Programs: "Someone from Another World."

    ERIC Educational Resources Information Center

    Wilson, Angene

    This study surveyed members of the Association of International Educators and community volunteers to find out how international student speaker programs actually work. An international student speaker program provides speakers (from the university foreign student population) for community organizations and schools. The results of the survey (49…

  20. Narrative Language in Traumatic Brain Injury

    ERIC Educational Resources Information Center

    Marini, Andrea; Galetto, Valentina; Zampieri, Elisa; Vorano, Lorenza; Zettin, Marina; Carlomagno, Sergio

    2011-01-01

    Persons with traumatic brain injury (TBI) often show impaired linguistic and/or narrative abilities. The present study aimed to document the features of narrative discourse impairment in a group of adults with TBI. 14 severe TBI non-aphasic speakers (GCS less than 8) in the phase of neurological stability and 14 neurologically intact participants…

  1. A computer-based therapy for the treatment of aphasic subjects with writing disorders.

    PubMed

    Seron, X; Deloche, G; Moulard, G; Rousselle, M

    1980-02-01

    A computer-controlled rehabilitation program for aphasics with writing impairments is presented. Subjects were asked to type words under dictation. Each time a letter was typed in its correct position, it was displayed on a screen; otherwise the error was not displayed, thus avoiding visual reinforcement of false choices. This method of rehabilitation proved efficient for typewriting. More importantly, some learning transfer to handwriting was observed at the completion of experimental training. The results showed a significant reduction in the number of misspelled words as well as in the erroneous choice and serial ordering of letters. The stability of the observed improvement is discussed in relation to variables such as the time elapsed since brain damage and the type of writing difficulty.

  2. Speech Breathing in Speakers Who Use an Electrolarynx

    ERIC Educational Resources Information Center

    Bohnenkamp, Todd A.; Stowell, Talena; Hesse, Joy; Wright, Simon

    2010-01-01

    Speakers who use an electrolarynx following a total laryngectomy no longer require pulmonary support for speech. Subsequently, chest wall movements may be affected; however, chest wall movements in these speakers are not well defined. The purpose of this investigation was to evaluate speech breathing in speakers who use an electrolarynx during…

  3. Temporal information processing as a basis for auditory comprehension: clinical evidence from aphasic patients.

    PubMed

    Oron, Anna; Szymaszek, Aneta; Szelag, Elzbieta

    2015-01-01

    Temporal information processing (TIP) underlies many aspects of cognitive function, such as language, motor control, learning, memory, and attention. Millisecond timing may be assessed through sequencing abilities, e.g. the perception of event order. It can be measured with the auditory temporal-order threshold (TOT), i.e. the minimum time gap separating two successive stimuli that a subject needs in order to report their temporal order correctly, thus the relation 'before-after'. Neuropsychological evidence has indicated elevated TOT values (corresponding to deteriorated time perception) in different clinical groups, such as aphasic patients, dyslexic subjects, and children with specific language impairment. The aim was to test relationships between elevated TOT and declined cognitive functions in brain-injured patients suffering from post-stroke aphasia. We tested 30 aphasic patients (13 male, 17 female), aged between 50 and 81 years. TIP assessment comprised the TOT. Auditory comprehension was assessed with selected language tests, i.e. the Token Test, the Phoneme Discrimination Test (PDT), and the Voice-Onset-Time (VOT) Test, while two aspects of attentional resources (alertness and vigilance) were measured using the Test of Attentional Performance (TAP) battery. Significant correlations were found between elevated TOT values and deteriorated performance on all of the language tests. Moreover, significant correlations were found between elevated TOT and alertness. Finally, positive correlations were found between particular language tests, i.e. (1) the Token Test and the PDT; (2) the Token Test and the VOT Test; and (3) the PDT and the VOT Test, as well as between the PDT and both attentional tasks. These results provide further clinical evidence supporting the thesis that TIP constitutes a core process incorporated in both language and attentional resources. The novel contribution of the present study is the demonstration, for the first time in Slavic language users, of a clear coexistence of the 'timing…

  4. Intonation and gender perception: applications for transgender speakers.

    PubMed

    Hancock, Adrienne; Colton, Lindsey; Douglas, Fiacre

    2014-03-01

    Intonation is commonly addressed in voice and communication feminization therapy, yet empirical evidence of gender differences for intonation is scarce and rarely do studies examine how it relates to gender perception of transgender speakers. This study examined intonation of 12 males, 12 females, six female-to-male, and 14 male-to-female transgender speakers describing a Norman Rockwell image. Several intonation measures were compared between biological gender groups, between perceived gender groups, and between male-to-female (MTF) speakers who were perceived as male, female, or ambiguous gender. Speakers with a larger percentage of utterances with upward intonation and a larger utterance semitone range were perceived as female by listeners, despite no significant differences between the actual intonation of the four gender groups. MTF speakers who do not pass as female appear to use less upward and more downward intonations than female and passing MTF speakers. Intonation has potential for use in transgender communication therapy because it can influence perception to some degree. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
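Two of the intonation measures used above lend themselves to a compact illustration. The sketch below assumes a per-utterance F0 track in Hz and a sign convention for final pitch slopes; the helper names (`semitone_range`, `percent_upward`) and the example values are hypothetical, not drawn from the study.

```python
import math

def semitone_range(f0_values_hz):
    """Utterance pitch range in semitones: 12 * log2(F0_max / F0_min)."""
    voiced = [f for f in f0_values_hz if f and f > 0]   # drop unvoiced (zero) frames
    return 12.0 * math.log2(max(voiced) / min(voiced))

def percent_upward(final_f0_slopes):
    """Share of utterances whose final F0 slope rises (hypothetical sign convention)."""
    rising = sum(1 for s in final_f0_slopes if s > 0)
    return 100.0 * rising / len(final_f0_slopes)

# Example: an F0 track spanning 150-300 Hz covers one octave = 12 semitones
print(semitone_range([150, 180, 0, 220, 300]))   # prints: 12.0
print(percent_upward([1.2, -0.5, 0.8, 2.0]))     # prints: 75.0
```

The octave-based semitone scale is the usual way such pitch ranges are reported, since it normalizes across speakers with different baseline F0.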

  5. Accent Attribution in Speakers with Foreign Accent Syndrome

    ERIC Educational Resources Information Center

    Verhoeven, Jo; De Pauw, Guy; Pettinato, Michele; Hirson, Allen; Van Borsel, John; Marien, Peter

    2013-01-01

    Purpose: The main aim of this experiment was to investigate the perception of Foreign Accent Syndrome in comparison to speakers with an authentic foreign accent. Method: Three groups of listeners attributed accents to conversational speech samples of 5 FAS speakers which were embedded amongst those of 5 speakers with a real foreign accent and 5…

  6. A Jesuit Approach to Campus Speakers

    ERIC Educational Resources Information Center

    Herbeck, Dale A.

    2007-01-01

    In this article, the author examines the newly revised speakers policy in Boston College. The revised policy, defended by administrators as being consistent with past practice, differs in two important respects from the speakers policy it replaced. Lest the scope of this unfortunate policy be exaggerated, it is important to note that the policy…

  7. The speakers' bureau system: a form of peer selling.

    PubMed

    Reid, Lynette; Herder, Matthew

    2013-01-01

    In the speakers' bureau system, physicians are recruited and trained by pharmaceutical, biotechnology, and medical device companies to deliver information about products to other physicians, in exchange for a fee. Using publicly available disclosures, we assessed the thesis that speakers' bureau involvement is not a feature of academic medicine in Canada, by estimating the prevalence of participation in speakers' bureaus among Canadian faculty in one medical specialty, cardiology. We analyzed the relevant features of an actual contract made public by the physician addressee and applied the Canadian Medical Association (CMA) guidelines on physician-industry relations to participation in a speakers' bureau. We argue that speakers' bureau participation constitutes a form of peer selling that should be understood to contravene the prohibition on product endorsement in the CMA Code of Ethics. Academic medical institutions, in conjunction with regulatory colleges, should continue and strengthen their policies to address participation in speakers' bureaus.

  8. Speaker and Observer Perceptions of Physical Tension during Stuttering.

    PubMed

    Tichenor, Seth; Leslie, Paula; Shaiman, Susan; Yaruss, J Scott

    2017-01-01

    Speech-language pathologists routinely assess physical tension during evaluation of those who stutter. If speakers experience tension that is not visible to clinicians, then judgments of severity may be inaccurate. This study addressed this potential discrepancy by comparing judgments of tension by people who stutter and expert clinicians to determine if clinicians could accurately identify the speakers' experience of physical tension. Ten adults who stutter were audio-video recorded in two speaking samples. Two board-certified specialists in fluency evaluated the samples using the Stuttering Severity Instrument-4 and a checklist adapted for this study. Speakers rated their tension using the same forms, and then discussed their experiences in a qualitative interview so that themes related to physical tension could be identified. The degree of tension reported by speakers was higher than that observed by specialists. Tension in parts of the body that were less visible to the observer (chest, abdomen, throat) was reported more by speakers than by specialists. The thematic analysis revealed that speakers' experience of tension changes over time and that these changes may be related to speakers' acceptance of stuttering. The lack of agreement between speaker and specialist perceptions of tension suggests that using self-reports is a necessary component for supporting the accurate diagnosis of tension in stuttering. © 2018 S. Karger AG, Basel.

  9. Statistical Evaluation of Biometric Evidence in Forensic Automatic Speaker Recognition

    NASA Astrophysics Data System (ADS)

    Drygajlo, Andrzej

    Forensic speaker recognition is the process of determining if a specific individual (suspected speaker) is the source of a questioned voice recording (trace). This paper aims at presenting forensic automatic speaker recognition (FASR) methods that provide a coherent way of quantifying and presenting recorded voice as biometric evidence. In such methods, the biometric evidence consists of the quantified degree of similarity between speaker-dependent features extracted from the trace and speaker-dependent features extracted from recorded speech of a suspect. The interpretation of recorded voice as evidence in the forensic context presents particular challenges, including within-speaker (within-source) variability and between-speakers (between-sources) variability. Consequently, FASR methods must provide a statistical evaluation which gives the court an indication of the strength of the evidence given the estimated within-source and between-sources variabilities. This paper reports on the first ENFSI evaluation campaign through a fake case, organized by the Netherlands Forensic Institute (NFI), as an example, where an automatic method using Gaussian mixture models (GMMs) and the Bayesian interpretation (BI) framework was implemented for the forensic speaker recognition task.
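The core quantification step described above — comparing the likelihood of the trace features under a suspect-speaker model versus a background (between-sources) model — can be sketched with a toy diagonal GMM. Everything here (one-dimensional features, the two models' weights, means, and variances) is invented for illustration and is not the NFI casework method.

```python
import numpy as np

def gmm_loglik(x, weights, means, variances):
    """Total log-likelihood of 1-D samples x under a diagonal-covariance GMM."""
    x = np.asarray(x, dtype=float)[:, None]              # shape (N, 1)
    comp = (np.log(weights)
            - 0.5 * np.log(2 * np.pi * variances)
            - 0.5 * (x - means) ** 2 / variances)        # shape (N, K)
    m = comp.max(axis=1, keepdims=True)                  # log-sum-exp over components
    return float((m[:, 0] + np.log(np.exp(comp - m).sum(axis=1))).sum())

# Hypothetical suspect and background models (parameters invented for the example)
suspect = dict(weights=np.array([0.6, 0.4]), means=np.array([0.0, 2.0]),
               variances=np.array([1.0, 1.5]))
background = dict(weights=np.array([0.5, 0.5]), means=np.array([-3.0, 5.0]),
                  variances=np.array([4.0, 4.0]))

trace = np.array([0.1, 1.8, 0.5, 2.2])   # features from the questioned recording
llr = gmm_loglik(trace, **suspect) - gmm_loglik(trace, **background)
# Positive log-likelihood ratios favour the same-source hypothesis
```

In casework the ratio is reported to the court as the strength of the evidence rather than as a hard accept/reject decision.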

  10. Evaluation of Speakers with Foreign-Accented Speech in Japan: The Effect of Accent Produced by English Native Speakers

    ERIC Educational Resources Information Center

    Tsurutani, Chiharu

    2012-01-01

    Foreign-accented speakers are generally regarded as less educated, less reliable and less interesting than native speakers and tend to be associated with cultural stereotypes of their country of origin. This discrimination against foreign accents has, however, been discussed mainly using accented English in English-speaking countries. This study…

  11. Inferring speaker attributes in adductor spasmodic dysphonia: ratings from unfamiliar listeners.

    PubMed

    Isetti, Derek; Xuereb, Linnea; Eadie, Tanya L

    2014-05-01

    To determine whether unfamiliar listeners' perceptions of speakers with adductor spasmodic dysphonia (ADSD) differ from control speakers on the parameters of relative age, confidence, tearfulness, and vocal effort and are related to speaker-rated vocal effort or voice-specific quality of life. Twenty speakers with ADSD (including 6 speakers with ADSD plus tremor) and 20 age- and sex-matched controls provided speech recordings, completed a voice-specific quality-of-life instrument (Voice Handicap Index; Jacobson et al., 1997), and rated their own vocal effort. Twenty listeners evaluated speech samples for relative age, confidence, tearfulness, and vocal effort using rating scales. Listeners judged speakers with ADSD as sounding significantly older, less confident, more tearful, and more effortful than control speakers (p < .01). Increased vocal effort was strongly associated with decreased speaker confidence (rs = .88-.89) and sounding more tearful (rs = .83-.85). Self-rated speaker effort was moderately related (rs = .45-.52) to listener impressions. Listeners' perceptions of confidence and tearfulness were also moderately associated with higher Voice Handicap Index scores (rs = .65-.70). Unfamiliar listeners judge speakers with ADSD more negatively than control speakers, with judgments extending beyond typical clinical measures. The results have implications for counseling and understanding the psychosocial effects of ADSD.

  12. Syndromes dominated by apraxia of speech show distinct characteristics from agrammatic PPA

    PubMed Central

    Duffy, Joseph R.; Strand, Edythe A.; Machulda, Mary M.; Senjem, Matthew L.; Lowe, Val J.; Jack, Clifford R.; Whitwell, Jennifer L.

    2013-01-01

    Objective: We assessed whether clinical and imaging features of subjects with apraxia of speech (AOS) more severe than aphasia (dominant AOS) are more similar to agrammatic primary progressive aphasia (agPPA) or to primary progressive AOS (PPAOS). Methods: Sixty-seven subjects (PPAOS = 18, dominant AOS = 10, agPPA = 9, age-matched controls = 30) who all had volumetric MRI, diffusion tensor imaging, F18-fluorodeoxyglucose and C11-labeled Pittsburgh compound B (PiB)-PET scanning, as well as neurologic and speech and language assessments, were included in this case-control study. AOS was classified as either type 1, predominated by sound distortions and distorted sound substitutions, or type 2, predominated by syllabically segmented prosodic speech patterns. Results: The dominant AOS subjects most often had AOS type 2, similar to PPAOS. In contrast, agPPA subjects most often had type 1 (p = 0.01). Both dominant AOS and PPAOS showed focal imaging abnormalities in premotor cortex, whereas agPPA showed widespread involvement affecting premotor, prefrontal, temporal and parietal lobes, caudate, and insula. Only the dominant AOS and PPAOS groups showed midbrain atrophy compared with controls. No differences were observed in PiB binding across all 3 groups, with the majority being PiB negative. Conclusion: These results suggest that dominant AOS is more similar to PPAOS than agPPA, with dominant AOS and PPAOS exhibiting a clinically distinguishable subtype of progressive AOS compared with agPPA. PMID:23803320

  13. The association between tobacco, alcohol, and drug use, stress, and depression among uninsured free clinic patients: U.S.-born English speakers, non-U.S.-born English speakers, and Spanish speakers.

    PubMed

    Kamimura, Akiko; Ashby, Jeanie; Tabler, Jennifer; Nourian, Maziar M; Trinh, Ha Ngoc; Chen, Jason; Reel, Justine J

    2017-01-01

    The abuse of substances is a significant public health issue. Perceived stress and depression have been found to be related to the abuse of substances. The purpose of this study is to examine the prevalence of substance use (i.e., alcohol problems, smoking, and drug use) and the association between substance use, perceived stress, and depression among free clinic patients. Patients completed a self-administered survey in 2015 (N = 504). The overall prevalence of substance use among free clinic patients was not high compared to the U.S. general population. U.S.-born English speakers reported a higher prevalence rate of tobacco smoking and drug use than did non-U.S.-born English speakers and Spanish speakers. Alcohol problems and smoking were significantly related to higher levels of perceived stress and depression. Substance use prevention and education should be included in general health education programs. U.S.-born English speakers would need additional attention. Mental health intervention would be essential to prevention and intervention.

  14. Optimization of multilayer neural network parameters for speaker recognition

    NASA Astrophysics Data System (ADS)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person within a known set of speakers: the voice of an unknown (wanted) speaker is matched against a group of reference speakers from the voice database. One requirement was to develop a text-independent system, i.e. to classify the wanted person regardless of the content and language of the utterance. A multilayer neural network was used for speaker identification in this research. An artificial neural network (ANN) needs parameters such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by these parameter settings, and different roles require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings; the goal was to find the parameters giving the neural network the highest precision and shortest validation time. The input data of the neural networks are Mel-frequency cepstral coefficients (MFCCs), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The training, testing, and validation data sets were split 70/15/15%. The result of the research described in this article is a parameter setting for the multilayer neural network for four speakers.
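A minimal sketch of the kind of setup the article describes: a one-hidden-layer network over MFCC-like features for four speakers, with the tunable parameters named above (activation function, learning rate, iteration count, hidden-layer size) made explicit, and a 70/15/15 split. The synthetic data, network size, and training values are assumptions for the example, not the article's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "MFCC-like" data: 13 coefficients per frame, 4 speakers
# (a stand-in for real MFCC extraction; cluster layout is invented)
n_speakers, n_mfcc, frames_per_speaker = 4, 13, 40
centers = rng.normal(0.0, 2.0, (n_speakers, n_mfcc))
X = np.vstack([c + 0.3 * rng.normal(size=(frames_per_speaker, n_mfcc))
               for c in centers])
y = np.repeat(np.arange(n_speakers), frames_per_speaker)

# 70/15/15 train/test/validation split, as in the study
idx = rng.permutation(len(X))
n_train, n_test = int(0.70 * len(X)), int(0.15 * len(X))
tr, te, va = np.split(idx, [n_train, n_train + n_test])

# One-hidden-layer MLP; these are the parameters the article tunes
hidden, lr, max_iter = 16, 0.5, 300              # hidden neurons, learning rate, iterations
W1 = rng.normal(0, 0.5, (n_mfcc, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, (hidden, n_speakers)); b2 = np.zeros(n_speakers)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))     # activation function
T = np.eye(n_speakers)[y[tr]]                    # one-hot speaker targets

for _ in range(max_iter):
    H = sigmoid(X[tr] @ W1 + b1)
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)
    dlogits = (P - T) / len(tr)                  # softmax cross-entropy gradient
    dH = dlogits @ W2.T * H * (1 - H)
    W2 -= lr * H.T @ dlogits; b2 -= lr * dlogits.sum(0)
    W1 -= lr * X[tr].T @ dH;  b1 -= lr * dH.sum(0)

def identify(Xs):
    """Return the index of the most likely reference speaker for each frame."""
    return (sigmoid(Xs @ W1 + b1) @ W2 + b2).argmax(1)

val_acc = (identify(X[va]) == y[va]).mean()
```

Sweeping `hidden`, `lr`, and `max_iter` over a grid and timing `identify` on the validation set would reproduce the accuracy-versus-validation-time trade-off the article studies.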

  15. Speaker recognition with temporal cues in acoustic and electric hearing

    NASA Astrophysics Data System (ADS)

    Vongphoe, Michael; Zeng, Fan-Gang

    2005-08-01

    Natural spoken language processing includes not only speech recognition but also identification of the speaker's gender, age, emotional, and social status. Our purpose in this study is to evaluate whether temporal cues are sufficient to support both speech and speaker recognition. Ten cochlear-implant and six normal-hearing subjects were presented with vowel tokens spoken by three men, three women, two boys, and two girls. In one condition, the subject was asked to recognize the vowel. In the other condition, the subject was asked to identify the speaker. Extensive training was provided for the speaker recognition task. Normal-hearing subjects achieved nearly perfect performance in both tasks. Cochlear-implant subjects achieved good performance in vowel recognition but poor performance in speaker recognition. The level of the cochlear implant performance was functionally equivalent to normal performance with eight spectral bands for vowel recognition but only to one band for speaker recognition. These results show a disassociation between speech and speaker recognition with primarily temporal cues, highlighting the limitation of current speech processing strategies in cochlear implants. Several methods, including explicit encoding of fundamental frequency and frequency modulation, are proposed to improve speaker recognition for current cochlear implant users.

  16. A language-familiarity effect for speaker discrimination without comprehension.

    PubMed

    Fleming, David; Giordano, Bruno L; Caldara, Roberto; Belin, Pascal

    2014-09-23

    The influence of language familiarity upon speaker identification is well established, to such an extent that it has been argued that "Human voice recognition depends on language ability" [Perrachione TK, Del Tufo SN, Gabrieli JDE (2011) Science 333(6042):595]. However, 7-mo-old infants discriminate speakers of their mother tongue better than they do foreign speakers [Johnson EK, Westrek E, Nazzi T, Cutler A (2011) Dev Sci 14(5):1002-1011] despite their limited speech comprehension abilities, suggesting that speaker discrimination may rely on familiarity with the sound structure of one's native language rather than the ability to comprehend speech. To test this hypothesis, we asked Chinese and English adult participants to rate speaker dissimilarity in pairs of sentences in English or Mandarin that were first time-reversed to render them unintelligible. Even in these conditions a language-familiarity effect was observed: Both Chinese and English listeners rated pairs of native-language speakers as more dissimilar than foreign-language speakers, despite their inability to understand the material. Our data indicate that the language familiarity effect is not based on comprehension but rather on familiarity with the phonology of one's native language. This effect may stem from a mechanism analogous to the "other-race" effect in face recognition.

  17. The Arctic Visiting Speakers Program

    NASA Astrophysics Data System (ADS)

    Wiggins, H. V.; Fahnestock, J.

    2013-12-01

    The Arctic Visiting Speakers Program (AVS) is a program of the Arctic Research Consortium of the U.S. (ARCUS) and funded by the National Science Foundation. AVS provides small grants to researchers and other Arctic experts to travel and share their knowledge in communities where they might not otherwise connect. The program aims to: initiate and encourage arctic science education in communities with little exposure to arctic research; increase collaboration among the arctic research community; nurture communication between arctic researchers and community residents; and foster arctic science education at the local level. Individuals, community organizations, and academic organizations can apply to host a speaker. Speakers cover a wide range of arctic topics and can address a variety of audiences including K-12 students, graduate and undergraduate students, and the general public. Preference is given to tours that reach broad and varied audiences, especially those targeted to underserved populations. Between October 2000 and July 2013, AVS supported 114 tours spanning 9 different countries, including tours in 23 U.S. states. Tours over the past three and a half years have connected Arctic experts with over 6,600 audience members. Post-tour evaluations show that AVS consistently rates high for broadening interest and understanding of arctic issues. AVS provides a case study for how face-to-face interactions between arctic scientists and general audiences can produce high-impact results. Further information can be found at: http://www.arcus.org/arctic-visiting-speakers.

  18. "The perceptual bases of speaker identity" revisited

    NASA Astrophysics Data System (ADS)

    Voiers, William D.

    2003-10-01

    A series of experiments begun 40 years ago [W. D. Voiers, J. Acoust. Soc. Am. 36, 1065-1073 (1964)] was concerned with identifying the perceived voice traits (PVTs) on which human recognition of voices depends. It culminated with the development of a voice taxonomy based on 20 PVTs and a set of highly reliable rating scales for classifying voices with respect to those PVTs. The development of a perceptual voice taxonomy was motivated by the need for a practical method of evaluating speaker recognizability in voice communication systems. The Diagnostic Speaker Recognition Test (DSRT) evaluates the effects of systems on speaker recognizability as reflected in changes in the inter-listener reliability of voice ratings on the 20 PVTs. The DSRT thus provides a qualitative, as well as quantitative, evaluation of the effects of a system on speaker recognizability. A fringe benefit of this project is PVT rating data for a sample of 680 voices. [Work partially supported by USAFRL.]

  19. Speaker verification using committee neural networks.

    PubMed

    Reddy, Narender P; Buch, Ojas A

    2003-10-01

    Security is a major problem in web-based access or remote access to databases. In the present study, the technique of committee neural networks was developed for speech-based speaker verification. Speech data from the designated speaker and several imposters were obtained. Several parameters were extracted in the time and frequency domains and fed to neural networks. Several neural networks were trained, and the five best-performing networks were recruited into the committee. The committee decision was based on majority voting of the member networks. The committee opinion was evaluated with further testing data. The committee correctly identified the designated speaker in 100% (50 out of 50) of the cases and rejected imposters in 100% (150 out of 150) of the cases. The committee decision was not unanimous in the majority of the cases tested.
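The majority-voting decision rule at the heart of the committee approach is simple enough to sketch directly; the member outputs below are hypothetical, standing in for the five trained networks' accept/reject decisions on one speech sample.

```python
from collections import Counter

def committee_verdict(member_decisions):
    """Majority vote over the accept/reject decisions of the member networks.

    member_decisions: list of 'accept'/'reject' strings, one per network.
    An odd committee size (five networks in the study) avoids ties.
    Returns the winning decision and whether the vote was unanimous.
    """
    votes = Counter(member_decisions)
    decision, count = votes.most_common(1)[0]
    unanimous = count == len(member_decisions)
    return decision, unanimous

# Hypothetical outputs of the five member networks for one sample
members = ['accept', 'accept', 'reject', 'accept', 'accept']
decision, unanimous = committee_verdict(members)
print(decision, unanimous)   # prints: accept False
```

As the abstract notes, the committee can be correct overall even when individual members disagree, which is precisely the robustness the voting scheme buys.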

  20. Speaker Reliability Guides Children's Inductive Inferences about Novel Properties

    ERIC Educational Resources Information Center

    Kim, Sunae; Kalish, Charles W.; Harris, Paul L.

    2012-01-01

    Prior work shows that children can make inductive inferences about objects based on their labels rather than their appearance (Gelman, 2003). A separate line of research shows that children's trust in a speaker's label is selective. Children accept labels from a reliable speaker over an unreliable speaker (e.g., Koenig & Harris, 2005). In the…

  1. Guest Speakers in School-Based Sexuality Education

    ERIC Educational Resources Information Center

    McRee, Annie-Laurie; Madsen, Nikki; Eisenberg, Marla E.

    2014-01-01

    This study, using data from a statewide survey (n = 332), examined teachers' practices regarding the inclusion of guest speakers to cover sexuality content. More than half of teachers (58%) included guest speakers. In multivariate analyses, teachers who taught high school, had professional preparation in health education, or who received…

  2. The Communication of Public Speaking Anxiety: Perceptions of Asian and American Speakers.

    ERIC Educational Resources Information Center

    Martini, Marianne; And Others

    1992-01-01

    Finds that U.S. audiences perceive Asian speakers to have more speech anxiety than U.S. speakers, even though Asian speakers do not self-report higher anxiety levels. Confirms that speech state anxiety is not communicated effectively between speakers and audiences for Asian or U.S. speakers. (SR)

  3. Quality of "Glottal" Stops in Tracheoesophageal Speakers

    ERIC Educational Resources Information Center

    van Rossum, M. A.; van As-Brooks, C. J.; Hilgers, F. J. M.; Roozen, M.

    2009-01-01

    Glottal stops are conveyed by an abrupt constriction at the level of the glottis. Tracheoesophageal (TE) speakers are known to have poor control over the new voice source (neoglottis), and this might influence the production of "glottal" stops. This study investigated how TE speakers realized "glottal" stops in abutting words…

  4. Speakers of Different Languages Process the Visual World Differently

    PubMed Central

    Chabal, Sarah; Marian, Viorica

    2015-01-01

    Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct linguistic input, showing that language is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. PMID:26030171

  5. An Investigation of Syntactic Priming among German Speakers at Varying Proficiency Levels

    ERIC Educational Resources Information Center

    Ruf, Helena T.

    2011-01-01

This dissertation investigates syntactic priming in second language (L2) development among three speaker populations: (1) less proficient L2 speakers; (2) advanced L2 speakers; and (3) L1 speakers. Using confederate scripting, this study examines how German speakers choose certain word orders in locative constructions (e.g., "Auf dem Tisch…

  6. Modeling Speaker Proficiency, Comprehensibility, and Perceived Competence in a Language Use Domain

    ERIC Educational Resources Information Center

    Schmidgall, Jonathan Edgar

    2013-01-01

    Research suggests that listener perceptions of a speaker's oral language use, or a speaker's "comprehensibility," may be influenced by a variety of speaker-, listener-, and context-related factors. Primary speaker factors include aspects of the speaker's proficiency in the target language such as pronunciation and…

  7. Aphasia from the inside: The cognitive world of the aphasic patient.

    PubMed

    Ardila, Alfredo; Rubio-Bruno, Silvia

    2017-05-23

The purpose of this study was to analyze the question: how do people with aphasia experience the world? Three questions are approached: (1) how is behavior controlled in aphasia, given that normal linguistic control is no longer available; (2) what is the pattern of intellectual abilities in aphasia; and (3) what do aphasic patients self-report regarding the experience of living without language? In aphasia, behavior can no longer be controlled through the "second signal system"; only the first signal system remains. Available information suggests that nonverbal abilities may sometimes be affected in aphasia. However, considerable variability is observed: whereas evident nonverbal defects are found in some patients, nonverbal performance abilities are within normal limits in others. Several self-reports of recovered aphasic patients describe the experience of living without language. Given that language is a major instrument of cognition, in aphasia the surrounding information is evidently interpreted in a partially different way and cognitive strategies are reorganized, resulting in an idiosyncratic cognitive world.

  8. Literacy Skill Differences between Adult Native English and Native Spanish Speakers

    ERIC Educational Resources Information Center

    Herman, Julia; Cote, Nicole Gilbert; Reilly, Lenore; Binder, Katherine S.

    2013-01-01

    The goal of this study was to compare the literacy skills of adult native English and native Spanish ABE speakers. Participants were 169 native English speakers and 124 native Spanish speakers recruited from five prior research projects. The results showed that the native Spanish speakers were less skilled on morphology and passage comprehension…

  9. Speaker Clustering for a Mixture of Singing and Reading (Preprint)

    DTIC Science & Technology

    2012-03-01

diarization [2, 3], which answers the question of "who spoke when?", is a combination of speaker segmentation and clustering. Although it is possible to...focuses on speaker clustering, the techniques developed here can be applied to speaker diarization. For the remainder of this paper, the term "speech...and retrieval," Proceedings of the IEEE, vol. 88, 2000. [2] S. Tranter and D. Reynolds, "An overview of automatic speaker diarization systems," IEEE
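The clustering step this record describes can be pictured as grouping per-segment speaker representations until no two groups are similar enough to merge. The sketch below is illustrative only, not the paper's method: the fixed-length speaker embeddings, the centroid-distance merge rule, and the `threshold` parameter are all assumptions for demonstration.

```python
import numpy as np

def cluster_speakers(embeddings, threshold=1.0):
    """Greedy agglomerative clustering of per-segment speaker embeddings.

    Repeatedly merges the two clusters whose centroids are closest
    (Euclidean distance) until no pair is closer than `threshold`.
    Returns one integer cluster label per segment.
    """
    clusters = [[i] for i in range(len(embeddings))]
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                ca = embeddings[clusters[a]].mean(axis=0)
                cb = embeddings[clusters[b]].mean(axis=0)
                d = np.linalg.norm(ca - cb)
                if best is None or d < best[0]:
                    best = (d, a, b)
        if best[0] > threshold:
            break  # remaining clusters are distinct speakers
        _, a, b = best
        clusters[a].extend(clusters[b])
        del clusters[b]
    labels = np.empty(len(embeddings), dtype=int)
    for k, members in enumerate(clusters):
        labels[members] = k
    return labels
```

In a diarization pipeline, these labels would be combined with the segment boundaries from the segmentation stage to answer "who spoke when?".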

  10. Electrophysiology of subject-verb agreement mediated by speakers' gender.

    PubMed

    Hanulíková, Adriana; Carreiras, Manuel

    2015-01-01

An important property of speech is that it explicitly conveys features of a speaker's identity such as age or gender. This event-related potential (ERP) study examined the effects of social information provided by a speaker's gender, i.e., the conceptual representation of gender, on subject-verb agreement. Despite numerous studies on agreement, little is known about syntactic computations generated by speaker characteristics extracted from the acoustic signal. Slovak is well suited to investigate this issue because it is a morphologically rich language in which agreement involves features for number, case, and gender. Grammaticality of a sentence can be evaluated by checking a speaker's gender as conveyed by his/her voice. We examined how conceptual information about speaker gender, which is not syntactic but rather social and pragmatic in nature, is interpreted for the computation of agreement patterns. ERP responses to verbs disagreeing with the speaker's gender (e.g., a sentence including a masculine verbal inflection spoken by a female person: 'the neighbors were upset because I *stole(MASC) plums') elicited a larger early posterior negativity compared to correct sentences. When the agreement was purely syntactic and did not depend on the speaker's gender, a disagreement between a formally marked subject and the verb inflection (e.g., the woman(FEM) *stole(MASC) plums) resulted in a larger P600 preceded by a larger anterior negativity compared to the control sentences. This result is in line with proposals according to which the recruitment of non-syntactic information such as the gender of the speaker results in N400-like effects, while formally marked syntactic features lead to structural integration as reflected in a LAN/P600 complex.

  11. Perception of speaker size and sex of vowel sounds

    NASA Astrophysics Data System (ADS)

    Smith, David R. R.; Patterson, Roy D.

    2005-04-01

Glottal-pulse rate (GPR) and vocal-tract length (VTL) are both related to speaker size and sex-however, it is unclear how they interact to determine our perception of speaker size and sex. Experiments were designed to measure the relative contribution of GPR and VTL to judgements of speaker size and sex. Vowels were scaled to represent people with different GPRs and VTLs, including many well beyond the normal population values. In a single interval, two response rating paradigm, listeners judged the size (using a 7-point scale) and sex/age of the speaker (man, woman, boy, or girl) of these scaled vowels. Results from the size-rating experiments show that VTL has a much greater influence upon judgements of speaker size than GPR. Results from the sex-categorization experiments show that judgements of speaker sex are influenced about equally by GPR and VTL for vowels with normal GPR and VTL values. For abnormal combinations of GPR and VTL, where low GPRs are combined with short VTLs, VTL has more influence than GPR in sex judgements. [Work supported by the UK MRC (G9901257) and the German Volkswagen Foundation (VWF 1/79 783).]

  12. Speakers of different languages process the visual world differently.

    PubMed

    Chabal, Sarah; Marian, Viorica

    2015-06-01

Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. (c) 2015 APA, all rights reserved.

  13. Neural underpinnings for model-oriented therapy of aphasic word production.

    PubMed

    Abel, Stefanie; Weiller, Cornelius; Huber, Walter; Willmes, Klaus

    2014-05-01

Model-oriented therapies of aphasic word production have been shown to be effective, with item-specific therapy effects being larger than generalisation effects for untrained items. However, it remains unclear whether semantic versus phonological therapy leads to differential effects, depending on the type of lexical impairment. Functional imaging studies revealed that mainly left-hemisphere, perisylvian brain areas are involved in successful therapy-induced recovery of aphasic word production. However, the neural underpinnings of model-oriented therapy effects have not received much attention yet. We aimed to identify brain areas indicating (1) general therapy effects, using a naming task measured by functional magnetic resonance imaging (fMRI) in 14 patients before and after a 4-week naming therapy comprising increasing semantic and phonological cueing hierarchies. We also intended to reveal differential effects (2) of training versus generalisation, (3) of therapy methods, and (4) of type of impairment as assessed by the connectionist Dell model. Training effects were stronger than generalisation effects, even though both were significant. Furthermore, significant impairment-specific therapy effects were observed for patients with phonological disorders (P-patients). (1) Left inferior frontal gyrus, pars opercularis (IFGoper), was a positive predictor of therapy gains, while the right caudate was a negative predictor. Moreover, a smaller therapy-induced activation decrease in left-hemisphere temporo-parietal language areas was positively correlated with therapy gains. (2) Naming of trained compared to untrained words yielded less activation decrease due to therapy in left superior temporal gyrus (STG) and precuneus, bilateral thalamus, and right caudate. (3) Differential therapy effects could be detected in the right superior parietal lobule for the semantic method, and in regions involving bilateral anterior and mid cingulate, right precuneus, and left middle

  14. DISRUPTION OF LARGE-SCALE NEURAL NETWORKS IN NON-FLUENT/AGRAMMATIC VARIANT PRIMARY PROGRESSIVE APHASIA ASSOCIATED WITH FRONTOTEMPORAL DEGENERATION PATHOLOGY

    PubMed Central

    Grossman, Murray; Powers, John; Ash, Sherry; McMillan, Corey; Burkholder, Lisa; Irwin, David; Trojanowski, John Q.

    2012-01-01

    Non-fluent/agrammatic primary progressive aphasia (naPPA) is a progressive neurodegenerative condition most prominently associated with slowed, effortful speech. A clinical imaging marker of naPPA is disease centered in the left inferior frontal lobe. We used multimodal imaging to assess large-scale neural networks underlying effortful expression in 15 patients with sporadic naPPA due to frontotemporal lobar degeneration (FTLD) spectrum pathology. Effortful speech in these patients is related in part to impaired grammatical processing, and to phonologic speech errors. Gray matter (GM) imaging shows frontal and anterior-superior temporal atrophy, most prominently in the left hemisphere. Diffusion tensor imaging reveals reduced fractional anisotropy in several white matter (WM) tracts mediating projections between left frontal and other GM regions. Regression analyses suggest disruption of three large-scale GM-WM neural networks in naPPA that support fluent, grammatical expression. These findings emphasize the role of large-scale neural networks in language, and demonstrate associated language deficits in naPPA. PMID:23218686

  15. Speaker Invariance for Phonetic Information: an fMRI Investigation

    PubMed Central

    Salvata, Caden; Blumstein, Sheila E.; Myers, Emily B.

    2012-01-01

    The current study explored how listeners map the variable acoustic input onto a common sound structure representation while being able to retain phonetic detail to distinguish among the identity of talkers. An adaptation paradigm was utilized to examine areas which showed an equal neural response (equal release from adaptation) to phonetic change when spoken by the same speaker and when spoken by two different speakers, and insensitivity (failure to show release from adaptation) when the same phonetic input was spoken by a different speaker. Neural areas which showed speaker invariance were located in the anterior portion of the middle superior temporal gyrus bilaterally. These findings provide support for the view that speaker normalization processes allow for the translation of a variable speech input to a common abstract sound structure. That this process appears to occur early in the processing stream, recruiting temporal structures, suggests that this mapping takes place prelexically, before sound structure input is mapped on to lexical representations. PMID:23264714

  16. Content-specific coordination of listeners' to speakers' EEG during communication.

    PubMed

    Kuhlen, Anna K; Allefeld, Carsten; Haynes, John-Dylan

    2012-01-01

Cognitive neuroscience has recently begun to extend its focus from the isolated individual mind to two or more individuals coordinating with each other. In this study we uncover a coordination of neural activity between the ongoing electroencephalogram (EEG) of two people: a person speaking and a person listening. The EEG of one set of twelve participants ("speakers") was recorded while they were narrating short stories. The EEG of another set of twelve participants ("listeners") was recorded while watching audiovisual recordings of these stories. Specifically, listeners watched the superimposed videos of two speakers simultaneously and were instructed to attend either to one or the other speaker. This allowed us to isolate neural coordination due to processing the communicated content from the effects of sensory input. We find several neural signatures of communication: First, the EEG is more similar among listeners attending to the same speaker than among listeners attending to different speakers, indicating that listeners' EEG reflects content-specific information. Secondly, listeners' EEG activity correlates with the attended speakers' EEG, peaking at a time delay of about 12.5 s. This correlation takes place not only between homologous, but also between non-homologous brain areas in speakers and listeners. A semantic analysis of the stories suggests that listeners coordinate with speakers at the level of complex semantic representations, so-called "situation models". With this study we link a coordination of neural activity between individuals directly to verbally communicated information.

  17. Electroglottogram waveform types of untrained speakers.

    PubMed

    Painter, C

    1990-01-01

    Electroglottography is a useful, non-invasive technique that can assist in the assessment of vocal fold dysfunction. However, if it is to become a useful clinical tool, there is a need for normative studies of the electroglottogram waveform types that characterize different groups of speakers. This report compares the electroglottogram waveform types characterizing one trained professional voice user phonating in 15 experimental sessions under various fundamental frequencies, intensities and voice qualities with those obtained from 52 untrained non-professional speakers.

  18. When pitch Accents Encode Speaker Commitment: Evidence from French Intonation.

    PubMed

    Michelas, Amandine; Portes, Cristel; Champagne-Lavau, Maud

    2016-06-01

Recent studies on a variety of languages have shown that a speaker's commitment to the propositional content of his or her utterance can be encoded, among other strategies, by pitch accent types. Since prior research mainly relied on lexical-stress languages, our understanding of how speakers of a non-lexical-stress language encode speaker commitment is limited. This paper explores the contribution of the last pitch accent of an intonation phrase to convey speaker commitment in French, a language that has stress at the phrasal level as well as a restricted set of pitch accents. In a production experiment, participants had to produce sentences in two pragmatic contexts: unbiased questions (the speaker had no particular belief with respect to the expected answer) and negatively biased questions (the speaker believed the proposition to be false). Results revealed that negatively biased questions consistently exhibited an additional unaccented F0 peak in the preaccentual syllable (an H+!H* pitch accent), while unbiased questions were often realized with a rising pattern across the accented syllable (an H* pitch accent). These results provide evidence that pitch accent types in French can signal the speaker's belief about the certainty of the proposition expressed. These findings also have implications for the phonological model of French intonation.

  19. Neural Systems Involved When Attending to a Speaker

    PubMed Central

    Kamourieh, Salwa; Braga, Rodrigo M.; Leech, Robert; Newbould, Rexford D.; Malhotra, Paresh; Wise, Richard J. S.

    2015-01-01

    Remembering what a speaker said depends on attention. During conversational speech, the emphasis is on working memory, but listening to a lecture encourages episodic memory encoding. With simultaneous interference from background speech, the need for auditory vigilance increases. We recreated these context-dependent demands on auditory attention in 2 ways. The first was to require participants to attend to one speaker in either the absence or presence of a distracting background speaker. The second was to alter the task demand, requiring either an immediate or delayed recall of the content of the attended speech. Across 2 fMRI studies, common activated regions associated with segregating attended from unattended speech were the right anterior insula and adjacent frontal operculum (aI/FOp), the left planum temporale, and the precuneus. In contrast, activity in a ventral right frontoparietal system was dependent on both the task demand and the presence of a competing speaker. Additional multivariate analyses identified other domain-general frontoparietal systems, where activity increased during attentive listening but was modulated little by the need for speech stream segregation in the presence of 2 speakers. These results make predictions about impairments in attentive listening in different communicative contexts following focal or diffuse brain pathology. PMID:25596592

  20. The road to understanding is paved with the speaker's intentions: cues to the speaker's attention and intentions affect pronoun comprehension.

    PubMed

    Nappa, Rebecca; Arnold, Jennifer E

    2014-05-01

    A series of experiments explore the effects of attention-directing cues on pronoun resolution, contrasting four specific hypotheses about the interpretation of ambiguous pronouns he and she: (1) it is driven by grammatical rules, (2) it is primarily a function of social processing of the speaker's intention to communicate, (3) it is modulated by the listener's own egocentric attention, and (4) it is primarily a function of learned probabilistic cues. Experiment 1 demonstrates that pronoun interpretation is guided by the well-known N1 (first-mention) bias, which is also modulated by both the speaker's gaze and pointing gestures. Experiment 2 demonstrates that a low-level visual capture cue has no effect on pronoun interpretation, in contrast with the social cue of pointing. Experiment 3 uses a novel intentional cue: the same attention-capture flash as in Experiment 2, but with instructions that the cue is intentionally created by the speaker. This cue does modulate the N1 bias, demonstrating the importance of information about the speaker's intentions to pronoun resolution. Taken in sum, these findings demonstrate that pronoun resolution is a process best categorized as driven by an appreciation of the speaker's communicative intent, which may be subserved by a sensitivity to predictive cues in the environment. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Speaker normalization and adaptation using second-order connectionist networks.

    PubMed

    Watrous, R L

    1993-01-01

A method for speaker normalization and adaptation using connectionist networks is developed. A speaker-specific linear transformation of observations of the speech signal is computed using second-order network units. Classification is accomplished by a multilayer feedforward network that operates on the normalized speech data. The network is adapted for a new talker by modifying the transformation parameters while leaving the classifier fixed. This is accomplished by backpropagating classification error through the classifier to the second-order transformation units. This method was evaluated for the classification of ten vowels for 76 speakers using the first two formant values of the Peterson-Barney data. The results suggest that rapid speaker adaptation resulting in high classification accuracy can be accomplished by this method.
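The adaptation idea in this record, learning a speaker-specific transform in front of a frozen classifier by backpropagating the classification error through the classifier, can be sketched as follows. This is a hedged illustration, not the paper's implementation: a plain softmax classifier `W` stands in for the multilayer network, the transform is a simple affine map rather than second-order units, and the learning-rate and step-count values are arbitrary.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def adapt_transform(X, y, W, n_steps=2000, lr=0.05):
    """Learn a speaker-specific normalization x -> A @ x + b.

    W (features x classes) is the frozen classifier trained on reference
    speakers. Only A and b are updated, by backpropagating the
    cross-entropy error through the fixed classifier into the transform.
    """
    d = X.shape[1]
    A, b = np.eye(d), np.zeros(d)
    Y = np.eye(W.shape[1])[y]            # one-hot targets
    for _ in range(n_steps):
        Z = X @ A.T + b                  # normalized observations
        P = softmax(Z @ W)               # frozen classifier's output
        G = (P - Y) @ W.T / len(X)       # error backpropagated to Z
        A -= lr * G.T @ X                # gradient step on the transform
        b -= lr * G.sum(axis=0)
    return A, b
```

Because the logits are linear in `A` and `b`, this adaptation problem is convex, which is one intuition for why adapting only the front-end transform can be fast for a new talker.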

  2. Consistency between verbal and non-verbal affective cues: a clue to speaker credibility.

    PubMed

    Gillis, Randall L; Nilsen, Elizabeth S

    2017-06-01

    Listeners are exposed to inconsistencies in communication; for example, when speakers' words (i.e. verbal) are discrepant with their demonstrated emotions (i.e. non-verbal). Such inconsistencies introduce ambiguity, which may render a speaker to be a less credible source of information. Two experiments examined whether children make credibility discriminations based on the consistency of speakers' affect cues. In Experiment 1, school-age children (7- to 8-year-olds) preferred to solicit information from consistent speakers (e.g. those who provided a negative statement with negative affect), over novel speakers, to a greater extent than they preferred to solicit information from inconsistent speakers (e.g. those who provided a negative statement with positive affect) over novel speakers. Preschoolers (4- to 5-year-olds) did not demonstrate this preference. Experiment 2 showed that school-age children's ratings of speakers were influenced by speakers' affect consistency when the attribute being judged was related to information acquisition (speakers' believability, "weird" speech), but not general characteristics (speakers' friendliness, likeability). Together, findings suggest that school-age children are sensitive to, and use, the congruency of affect cues to determine whether individuals are credible sources of information.

  3. Cost-sensitive learning for emotion robust speaker recognition.

    PubMed

    Li, Dongdong; Yang, Yingchun; Dai, Weihui

    2014-01-01

In the field of information security, voice is one of the most important parts of biometrics. Especially with the development of voice communication through the Internet or telephone system, huge voice data resources are accessed. In speaker recognition, the voiceprint can be applied as a unique password for the user to prove his/her identity. However, speech with various emotions can cause an unacceptably high error rate and degrade the performance of a speaker recognition system. This paper deals with this problem by introducing a cost-sensitive learning technology to reweight the probability of test affective utterances at the pitch envelope level, which can effectively enhance robustness in emotion-dependent speaker recognition. Based on this technology, a new architecture of the recognition system, as well as its components, is proposed in this paper. The experiment conducted on the Mandarin Affective Speech Corpus shows that an 8% improvement in identification rate over traditional speaker recognition is achieved.

  4. Cost-Sensitive Learning for Emotion Robust Speaker Recognition

    PubMed Central

    Li, Dongdong; Yang, Yingchun

    2014-01-01

In the field of information security, voice is one of the most important parts of biometrics. Especially with the development of voice communication through the Internet or telephone system, huge voice data resources are accessed. In speaker recognition, the voiceprint can be applied as a unique password for the user to prove his/her identity. However, speech with various emotions can cause an unacceptably high error rate and degrade the performance of a speaker recognition system. This paper deals with this problem by introducing a cost-sensitive learning technology to reweight the probability of test affective utterances at the pitch envelope level, which can effectively enhance robustness in emotion-dependent speaker recognition. Based on this technology, a new architecture of the recognition system, as well as its components, is proposed in this paper. The experiment conducted on the Mandarin Affective Speech Corpus shows that an 8% improvement in identification rate over traditional speaker recognition is achieved. PMID:24999492
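The reweighting idea in the two records above, discounting the contribution of emotionally coloured test utterances to the speaker decision, can be sketched abstractly. This is an illustrative scheme under stated assumptions, not the paper's method: the per-utterance emotion posteriors, the per-emotion cost vector, and the inverse-cost weighting rule are all hypothetical stand-ins for the paper's pitch-envelope-level reweighting.

```python
import numpy as np

def rescore(log_likelihoods, emotion_posteriors, emotion_costs):
    """Cost-sensitive re-scoring of per-utterance speaker log-likelihoods.

    log_likelihoods:    (n_utterances, n_speakers) scores from the recognizer.
    emotion_posteriors: (n_utterances, n_emotions) estimated emotion mix.
    emotion_costs:      (n_emotions,) cost per emotion; a higher cost means
                        the utterance is discounted more, so strongly
                        affective speech sways the decision less.
    """
    weights = 1.0 / (emotion_posteriors @ emotion_costs)
    return log_likelihoods * weights[:, None]

def identify(log_likelihoods, emotion_posteriors, emotion_costs):
    """Pick the speaker with the highest total reweighted score."""
    scores = rescore(log_likelihoods, emotion_posteriors, emotion_costs)
    return int(np.argmax(scores.sum(axis=0)))
```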

  5. Recognition of speaker-dependent continuous speech with KEAL

    NASA Astrophysics Data System (ADS)

    Mercier, G.; Bigorgne, D.; Miclet, L.; Le Guennec, L.; Querre, M.

    1989-04-01

A description of the speaker-dependent continuous speech recognition system KEAL is given. An unknown utterance is recognized by means of the following procedures: acoustic analysis, phonetic segmentation and identification, and word and sentence analysis. The combination of feature-based, speaker-independent coarse phonetic segmentation with speaker-dependent statistical classification techniques is one of the main design features of the acoustic-phonetic decoder. The lexical access component is essentially based on a statistical dynamic programming technique which aims at matching a phonemic lexical entry containing various phonological forms against a phonetic lattice. Sentence recognition is achieved by use of a context-free grammar and a parsing algorithm derived from Earley's parser. A speaker adaptation module allows some of the system parameters to be adjusted by matching known utterances with their acoustical representation. The task to be performed, described by its vocabulary and its grammar, is given as a parameter of the system. Continuously spoken sentences extracted from a 'pseudo-Logo' language are analyzed and results are presented.
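The dynamic-programming lexical access this record mentions can be illustrated in its simplest form: aligning a lexical phoneme string against an observed phoneme sequence and scoring substitutions, insertions, and deletions. This is a minimal sketch, not KEAL itself; the real system matches entries with multiple phonological forms against a full phonetic lattice, and the unit costs here are arbitrary assumptions.

```python
def phoneme_match_cost(lexical, observed, sub=1.0, ins=1.0, dele=1.0):
    """Edit-distance alignment cost between a lexical phoneme string and an
    observed phoneme sequence (a single best path through the lattice, for
    simplicity), the core of DP-based lexical access."""
    m, n = len(lexical), len(observed)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + dele      # lexical phoneme not observed
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + ins       # spurious observed phoneme
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = 0.0 if lexical[i - 1] == observed[j - 1] else sub
            D[i][j] = min(D[i - 1][j - 1] + match,   # match / substitution
                          D[i - 1][j] + dele,        # deletion
                          D[i][j - 1] + ins)         # insertion
    return D[m][n]
```

Lexical access then amounts to picking the entry with the lowest alignment cost, e.g. `min(lexicon, key=lambda w: phoneme_match_cost(w, observed))`.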

  6. Young Children's Sensitivity to Speaker Gender When Learning from Others

    ERIC Educational Resources Information Center

    Ma, Lili; Woolley, Jacqueline D.

    2013-01-01

    This research explores whether young children are sensitive to speaker gender when learning novel information from others. Four- and 6-year-olds ("N" = 144) chose between conflicting statements from a male versus a female speaker (Studies 1 and 3) or decided which speaker (male or female) they would ask (Study 2) when learning about the functions…

  7. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    PubMed

    Kreysa, Helene; Kessler, Luise; Schweinberger, Stefan R

    2016-01-01

A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  8. Investigating Auditory Processing of Syntactic Gaps with L2 Speakers Using Pupillometry

    ERIC Educational Resources Information Center

    Fernandez, Leigh; Höhle, Barbara; Brock, Jon; Nickels, Lyndsey

    2018-01-01

    According to the Shallow Structure Hypothesis (SSH), second language (L2) speakers, unlike native speakers, build shallow syntactic representations during sentence processing. In order to test the SSH, this study investigated the processing of a syntactic movement in both native speakers of English and proficient late L2 speakers of English using…

  9. Speaker Verification Using SVM

    DTIC Science & Technology

    2010-11-01

application the required resources are provided by the phone itself. Speaker recognition can be used in many areas, like: • homeland security: airport security, strengthening the national borders, in travel documents, visas; • enterprise-wide network security infrastructures; • secure electronic

  10. Noise Reduction with Microphone Arrays for Speaker Identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, Z

Reducing acoustic noise in audio recordings is an ongoing problem that plagues many applications. This noise is hard to reduce because of interfering sources and the non-stationary behavior of the overall background noise. Many single-channel noise reduction algorithms exist but are limited in that the more the noise is reduced, the more the signal of interest is distorted, because the signal and noise overlap in frequency. Specifically, acoustic background noise causes problems in the area of speaker identification. Recording a speaker in the presence of acoustic noise ultimately limits the performance and confidence of speaker identification algorithms. In situations where it is impossible to control the environment where the speech sample is taken, noise reduction filtering algorithms need to be developed to clean the recorded speech of background noise. Because single-channel noise reduction algorithms would distort the speech signal, the overall challenge of this project was to see if spatial information provided by microphone arrays could be exploited to aid in speaker identification. The goals are: (1) test the feasibility of using microphone arrays to reduce background noise in speech recordings; (2) characterize and compare different multichannel noise reduction algorithms; (3) provide recommendations for using these multichannel algorithms; and (4) ultimately answer the question: can the use of microphone arrays aid in speaker identification?
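The spatial idea behind this record can be shown with the simplest multichannel technique, a delay-and-sum beamformer: align each microphone channel on the target speaker and average, so the coherent speech adds up while spatially uncorrelated noise averages down. This is a textbook sketch, not the project's algorithms; it assumes the inter-microphone delays are known and restricts them to whole samples.

```python
import numpy as np

def delay_and_sum(signals, delays_samples):
    """Delay-and-sum beamforming over a microphone array.

    signals:        (n_mics, n_samples) array of recorded channels.
    delays_samples: per-channel arrival delay of the target speaker,
                    in whole samples (known or previously estimated).
    Each channel is shifted back into alignment and the channels are
    averaged, attenuating uncorrelated noise by roughly 1/n_mics in power.
    """
    n_mics, n = signals.shape
    out = np.zeros(n)
    for ch, d in zip(signals, delays_samples):
        out += np.roll(ch, -d)   # undo the propagation delay (circularly)
    return out / n_mics
```

With the channels aligned like this, a speaker-identification front end would then extract its features from the enhanced output rather than from any single noisy microphone.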

  11. Understanding speaker attitudes from prosody by adults with Parkinson's disease.

    PubMed

    Monetta, Laura; Cheang, Henry S; Pell, Marc D

    2008-09-01

The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease, with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical 'pseudo-utterances' were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice, and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo-utterances. The results showed that PD patients were significantly less able than healthy control participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).

  12. Articulatory settings of French-English bilingual speakers

    NASA Astrophysics Data System (ADS)

    Wilson, Ian

    2005-04-01

    The idea of a language-specific articulatory setting (AS), an underlying posture of the articulators during speech, has existed for centuries [Laver, Historiogr. Ling. 5 (1978)], but until recently it had eluded direct measurement. In an analysis of x-ray movies of French and English monolingual speakers, Gick et al. [Phonetica (in press)] link AS to inter-speech posture, allowing measurement of AS without interference from segmental targets during speech, and they give quantitative evidence showing AS to be language-specific. In the present study, ultrasound and Optotrak are used to investigate whether bilingual English-French speakers have two ASs, and whether this varies depending on the mode (monolingual or bilingual) these speakers are in. Specifically, for inter-speech posture of the lips, lip aperture and protrusion are measured using Optotrak. For inter-speech posture of the tongue, tongue root retraction, tongue body and tongue tip height are measured using optically-corrected ultrasound. Segmental context is balanced across the two languages ensuring that the sets of sounds before and after an inter-speech posture are consistent across languages. By testing bilingual speakers, vocal tract morphology across languages is controlled for. Results have implications for L2 acquisition, specifically the teaching and acquisition of pronunciation.

  13. On how the brain decodes vocal cues about speaker confidence.

    PubMed

    Jiang, Xiaoming; Pell, Marc D

    2015-05-01

    In speech communication, listeners must accurately decode vocal cues that refer to the speaker's mental state, such as their confidence or 'feeling of knowing'. However, the time course and neural mechanisms associated with online inferences about speaker confidence are unclear. Here, we used event-related potentials (ERPs) to examine the temporal neural dynamics underlying a listener's ability to infer speaker confidence from vocal cues during speech processing. We recorded listeners' real-time brain responses while they evaluated statements wherein the speaker's tone of voice conveyed one of three levels of confidence (confident, close-to-confident, unconfident) or were spoken in a neutral manner. Neural responses time-locked to event onset show that the perceived level of speaker confidence could be differentiated at distinct time points during speech processing: unconfident expressions elicited a weaker P2 than all other expressions of confidence (or neutral-intending utterances), whereas close-to-confident expressions elicited a reduced negative response in the 330-500 msec and 550-740 msec time window. Neutral-intending expressions, which were also perceived as relatively confident, elicited a more delayed, larger sustained positivity than all other expressions in the 980-1270 msec window for this task. These findings provide the first piece of evidence of how quickly the brain responds to vocal cues signifying the extent of a speaker's confidence during online speech comprehension; first, a rough dissociation between unconfident and confident voices occurs as early as 200 msec after speech onset. At a later stage, further differentiation of the exact level of speaker confidence (i.e., close-to-confident, very confident) is evaluated via an inferential system to determine the speaker's meaning under current task settings. 
These findings extend three-stage models of how vocal emotion cues are processed in speech comprehension (e.g., Schirmer & Kotz, 2006) by…

  14. Somatotype and Body Composition of Normal and Dysphonic Adult Speakers.

    PubMed

    Franco, Débora; Fragoso, Isabel; Andrea, Mário; Teles, Júlia; Martins, Fernando

    2017-01-01

Voice quality provides information about the anatomical characteristics of the speaker. The patterns of somatotype and body composition can provide essential knowledge to characterize the individuality of voice quality. The aim of this study was to verify whether there were significant differences in somatotype and body composition between normal and dysphonic speakers. Cross-sectional study. Anthropometric measurements were taken of a sample of 72 adult participants (40 normal speakers and 32 dysphonic speakers) according to International Society for the Advancement of Kinanthropometry standards, which allowed the calculation of the endomorphism, mesomorphism, and ectomorphism components, body density, body mass index, fat mass, percentage fat, and fat-free mass. Perceptual and acoustic evaluations as well as nasoendoscopy were used to assign speakers to the normal or dysphonic group. There were no significant differences between normal and dysphonic speakers in the mean somatotype attitudinal distance and somatotype dispersion distance (in spite of marginally significant differences [P < 0.10] between groups on both measures) or in the mean vector of the somatotype components. Furthermore, no significant differences were found between groups in mean percentage fat, fat mass, fat-free mass, body density, or body mass index after controlling for sex. The findings suggest no significant differences in the somatotype and body composition variables between normal and dysphonic speakers.
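
    As background to the somatotype variables above, the sketch below computes the Heath-Carter endomorphy component (from three skinfolds with a height correction) and the body mass index. The coefficients are the commonly published Heath-Carter values; the study's exact computation may differ, and the input values here are invented:

```python
# Heath-Carter anthropometric somatotype: endomorphy from three skinfolds
# (triceps, subscapular, supraspinale), corrected for stature. Coefficients
# as commonly published for the Heath-Carter method.
def endomorphy(triceps_mm, subscapular_mm, supraspinale_mm, height_cm):
    # Height correction scales the skinfold sum to a 170.18 cm reference stature.
    x = (triceps_mm + subscapular_mm + supraspinale_mm) * 170.18 / height_cm
    return -0.7182 + 0.1451 * x - 0.00068 * x ** 2 + 0.0000014 * x ** 3

def bmi(weight_kg, height_cm):
    return weight_kg / (height_cm / 100) ** 2

# Hypothetical participant: moderate skinfolds, 175 cm, 70 kg.
value = endomorphy(10.0, 12.0, 9.0, 175.0)
assert 2.0 < value < 5.0               # a mid-range endomorphy rating
assert abs(bmi(70.0, 175.0) - 22.86) < 0.01
```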

  15. Content-specific coordination of listeners' to speakers' EEG during communication

    PubMed Central

    Kuhlen, Anna K.; Allefeld, Carsten; Haynes, John-Dylan

    2012-01-01

Cognitive neuroscience has recently begun to extend its focus from the isolated individual mind to two or more individuals coordinating with each other. In this study we uncover a coordination of neural activity between the ongoing electroencephalogram (EEG) of two people—a person speaking and a person listening. The EEG of one set of twelve participants (“speakers”) was recorded while they were narrating short stories. The EEG of another set of twelve participants (“listeners”) was recorded while watching audiovisual recordings of these stories. Specifically, listeners watched the superimposed videos of two speakers simultaneously and were instructed to attend to one or the other speaker. This allowed us to isolate neural coordination due to processing the communicated content from the effects of sensory input. We find several neural signatures of communication: First, the EEG is more similar among listeners attending to the same speaker than among listeners attending to different speakers, indicating that listeners' EEG reflects content-specific information. Second, listeners' EEG activity correlates with the attended speakers' EEG, peaking at a time delay of about 12.5 s. This correlation takes place not only between homologous, but also between non-homologous brain areas in speakers and listeners. A semantic analysis of the stories suggests that listeners coordinate with speakers at the level of complex semantic representations, so-called “situation models”. With this study we link a coordination of neural activity between individuals directly to verbally communicated information. PMID:23060770
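
    The speaker-listener coupling reported above comes down to correlating two time series at candidate delays and locating the peak. A minimal sketch (NumPy, with synthetic traces standing in for EEG features; the lag and sampling choices are invented):

```python
import numpy as np

def lagged_correlation(speaker, listener, lag):
    """Pearson correlation between the speaker trace and the listener trace
    shifted by `lag` samples (positive lag: listener follows speaker)."""
    if lag > 0:
        a, b = speaker[:-lag], listener[lag:]
    elif lag < 0:
        a, b = speaker[-lag:], listener[:lag]
    else:
        a, b = speaker, listener
    return np.corrcoef(a, b)[0, 1]

# Toy demo: the "listener" trace echoes the "speaker" trace 25 samples later
# (standing in for a ~12.5 s delay at a coarse feature rate), plus noise.
rng = np.random.default_rng(1)
speaker = rng.standard_normal(1000)
listener = np.concatenate([rng.standard_normal(25), speaker[:-25]])
listener = listener + 0.3 * rng.standard_normal(1000)

lags = list(range(0, 60, 5))
corrs = [lagged_correlation(speaker, listener, lag) for lag in lags]
best_lag = lags[int(np.argmax(corrs))]
assert best_lag == 25  # the correlation peaks at the built-in delay
```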

  16. English and Thai Speakers' Perception of Mandarin Tones

    ERIC Educational Resources Information Center

    Li, Ying

    2016-01-01

Language learners' language experience is predicted to display a significant effect on their accurate perception of foreign language sounds (Flege, 1995). At the suprasegmental level, there is still a debate regarding whether tone language speakers are better able to perceive foreign lexical tones than non-tone language speakers (i.e., Lee et al.,…

  17. Respiratory Control in Stuttering Speakers: Evidence from Respiratory High-Frequency Oscillations.

    ERIC Educational Resources Information Center

    Denny, Margaret; Smith, Anne

    2000-01-01

    This study examined whether stuttering speakers (N=10) differed from fluent speakers in relations between the neural control systems for speech and life support. It concluded that in some stuttering speakers the relations between respiratory controllers are atypical, but that high participation by the high frequency oscillation-producing circuitry…

  18. Early testimonial learning: monitoring speech acts and speakers.

    PubMed

    Stephens, Elizabeth; Suarez, Sarah; Koenig, Melissa

    2015-01-01

Testimony provides children with a rich source of knowledge about the world and the people in it. However, testimony is not guaranteed to be veridical, and speakers vary greatly in both knowledge and intent. In this chapter, we argue that children encounter two primary types of conflicts when learning from speakers: conflicts of knowledge and conflicts of interest. We review recent research on children's selective trust in testimony and propose two distinct mechanisms supporting early epistemic vigilance in response to the conflicts associated with speakers. The first section of the chapter focuses on the mechanism of coherence checking, which occurs during the process of message comprehension and facilitates children's comparison of information communicated through testimony to their prior knowledge, alerting them to inaccurate, inconsistent, irrational, and implausible messages. The second section focuses on source-monitoring processes. When children lack relevant prior knowledge with which to evaluate testimonial messages, they monitor speakers themselves for evidence of competence and morality, attending to cues such as confidence, consensus, access to information, prosocial and antisocial behavior, and group membership.

  19. Speaking Japanese in Japan: Issues for English Speakers

    ERIC Educational Resources Information Center

    Stephens, Meredith

    2010-01-01

    Due to the global momentum of English as a Lingua Franca (ELF), Anglophones may perceive that there is less urgency for them to learn other languages than for speakers of other languages to learn English. The monolingual expectations of English speakers are evidenced not only in Anglophone countries but also abroad. This study reports on the…

  20. Native-Speakerism and the Complexity of Personal Experience: A Duoethnographic Study

    ERIC Educational Resources Information Center

    Lowe, Robert J.; Kiczkowiak, Marek

    2016-01-01

    This paper presents a duoethnographic study into the effects of native-speakerism on the professional lives of two English language teachers, one "native", and one "non-native speaker" of English. The goal of the study was to build on and extend existing research on the topic of native-speakerism by investigating, through…

  1. Speaker Recognition Through NLP and CWT Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown-VanHoozer, S.A.; Kercel, S.W.; Tucker, R.W.

The objective of this research is to develop a system capable of identifying speakers on wiretaps from a large database (>500 speakers) with a short search time duration (<30 seconds), and with better than 90% accuracy. Much previous research in speaker recognition has led to algorithms that produced encouraging preliminary results, but were overwhelmed when applied to populations of more than a dozen or so different speakers. The authors are investigating a solution to the "large population" problem by seeking two completely different kinds of characterizing features. These features are extracted using the techniques of Neuro-Linguistic Programming (NLP) and the continuous wavelet transform (CWT). NLP extracts precise neurological, verbal and non-verbal information, and assimilates the information into useful patterns. These patterns are based on specific cues demonstrated by each individual, and provide ways of determining congruency between verbal and non-verbal cues. The primary NLP modalities are characterized through word spotting (verbal predicate cues, e.g., see, sound, feel, etc.), while the secondary modalities would be characterized through the speech transcription used by the individual. This has the practical effect of reducing the size of the search space, and greatly speeding up the process of identifying an unknown speaker. The wavelet-based line of investigation concentrates on using vowel phonemes and non-verbal cues, such as tempo. The rationale for concentrating on vowels is that there are a limited number of vowel phonemes, and at least one of them usually appears in even the shortest of speech segments. Using the fast CWT algorithm, the details of both the formant frequency and the glottal excitation characteristics can be easily extracted from voice waveforms. The differences in the glottal excitation waveforms as well as the formant frequency are evident in the CWT output. More significantly, the CWT reveals significant detail of the glottal
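
    As an illustration of the wavelet line of investigation, the sketch below implements a basic Morlet CWT in NumPy (not the authors' fast CWT algorithm) and shows how energy at a glottal fundamental and at a formant-like component separates by scale; the frequencies and scale choices are invented for the demo:

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform with a complex Morlet mother wavelet.
    Returns an (n_scales, n_samples) magnitude scalogram."""
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        m = int(min(10 * s, n))              # truncate wavelet support
        t = np.arange(-m // 2, m // 2)
        wavelet = (np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2)
                   / np.sqrt(s))
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

# Toy "vowel": fundamental at 100 Hz plus a formant-like component at 800 Hz.
fs = 8000
t = np.arange(0, 0.2, 1 / fs)
vowel = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 800 * t)

# One scale per octave-ish band; center frequency f maps to scale w0*fs/(2*pi*f).
freqs = (50, 100, 200, 400, 800, 1600)
scales = np.array([fs * 6.0 / (2 * np.pi * f) for f in freqs])
scalogram = morlet_cwt(vowel, scales)
mean_energy = scalogram.mean(axis=1)

# The 100 Hz (glottal) and 800 Hz (formant-like) rows dominate their neighbors.
assert mean_energy[1] > mean_energy[0] and mean_energy[1] > mean_energy[2]
assert mean_energy[4] > mean_energy[3] and mean_energy[4] > mean_energy[5]
```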

  2. Speaker recognition through NLP and CWT modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown-VanHoozer, A.; Kercel, S. W.; Tucker, R. W.

The objective of this research is to develop a system capable of identifying speakers on wiretaps from a large database (>500 speakers) with a short search time duration (<30 seconds), and with better than 90% accuracy. Much previous research in speaker recognition has led to algorithms that produced encouraging preliminary results, but were overwhelmed when applied to populations of more than a dozen or so different speakers. The authors are investigating a solution to the "huge population" problem by seeking two completely different kinds of characterizing features. These features are extracted using the techniques of Neuro-Linguistic Programming (NLP) and the continuous wavelet transform (CWT). NLP extracts precise neurological, verbal and non-verbal information, and assimilates the information into useful patterns. These patterns are based on specific cues demonstrated by each individual, and provide ways of determining congruency between verbal and non-verbal cues. The primary NLP modalities are characterized through word spotting (verbal predicate cues, e.g., see, sound, feel, etc.), while the secondary modalities would be characterized through the speech transcription used by the individual. This has the practical effect of reducing the size of the search space, and greatly speeding up the process of identifying an unknown speaker. The wavelet-based line of investigation concentrates on using vowel phonemes and non-verbal cues, such as tempo. The rationale for concentrating on vowels is that there are a limited number of vowel phonemes, and at least one of them usually appears in even the shortest of speech segments. Using the fast CWT algorithm, the details of both the formant frequency and the glottal excitation characteristics can be easily extracted from voice waveforms. The differences in the glottal excitation waveforms as well as the formant frequency are evident in the CWT output. More significantly, the CWT reveals significant detail of

  3. Race in Conflict with Heritage: "Black" Heritage Language Speaker of Japanese

    ERIC Educational Resources Information Center

    Doerr, Neriko Musha; Kumagai, Yuri

    2014-01-01

    "Heritage language speaker" is a relatively new term to denote minority language speakers who grew up in a household where the language was used or those who have a family, ancestral, or racial connection to the minority language. In research on heritage language speakers, overlap between these 2 definitions is often assumed--that is,…

  4. Are Cantonese-speakers really descriptivists? Revisiting cross-cultural semantics.

    PubMed

    Lam, Barry

    2010-05-01

In an article in Cognition, Machery, Mallon, Nichols, and Stich [Machery, E., Mallon, R., Nichols, S., & Stich, S. (2004). Semantics cross-cultural style. Cognition, 92, B1-B12] present data that purport to show that East Asian Cantonese-speakers tend to have descriptivist intuitions about the referents of proper names, while Western English-speakers tend to have causal-historical intuitions about proper names. Machery et al. take this finding to support the view that some intuitions, the universality of which they claim is central to philosophical theories, vary according to cultural background. Machery et al. conclude from their findings that the philosophical methodology of consulting intuitions about hypothetical cases is flawed vis-à-vis the goal of determining truths about some philosophical domains like philosophical semantics. In the following study, three new vignettes in English were given to Western native English-speakers, and Cantonese translations were given to native Cantonese-speaking immigrants from a Cantonese community in Southern California. For all three vignettes, questions were given to elicit intuitions about the referent of a proper name and the truth-value of an uttered sentence containing a proper name. The results from this study reveal that East Asian Cantonese-speakers do not differ from Western English-speakers in ways that support Machery et al.'s conclusions. This new data concerning the intuitions of Cantonese-speakers raises questions about whether cross-cultural variation in answers to questions on certain vignettes reveals genuine differences in intuitions, or whether such differences stem from non-intuitional differences, such as differences in linguistic competence.

  5. Factor analysis of auto-associative neural networks with application in speaker verification.

    PubMed

    Garimella, Sri; Hermansky, Hynek

    2013-04-01

Auto-associative neural network (AANN) is a fully connected feed-forward neural network, trained to reconstruct its input at its output through a hidden compression layer, which has fewer nodes than the dimensionality of the input. AANNs are used to model speakers in speaker verification, where a speaker-specific AANN model is obtained by adapting (or retraining) the universal background model (UBM) AANN, an AANN trained on multiple held-out speakers, using the corresponding speaker's data. When the amount of speaker data is limited, this adaptation procedure may lead to overfitting, as all the parameters of the UBM-AANN are adapted. In this paper, we introduce and develop the factor analysis theory of AANNs to alleviate this problem. We hypothesize that only the weight matrix connecting the last nonlinear hidden layer and the output layer is speaker-specific, and further restrict it to a common low-dimensional subspace during adaptation. The subspace is learned using large amounts of development data, and is held fixed during adaptation. Thus, only the coordinates in the subspace, also known as the i-vector, need to be estimated using speaker-specific data. The update equations are derived for learning both the common low-dimensional subspace and the i-vectors corresponding to speakers in the subspace. The resultant i-vector representation is used as a feature for the probabilistic linear discriminant analysis model. The proposed system shows promising results on the NIST-08 speaker recognition evaluation (SRE), and yields a 23% relative improvement in equal error rate over the previously proposed weighted least squares-based subspace AANN system. The experiments on NIST-10 SRE confirm that these improvements are consistent and generalize across datasets.
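
    The core idea, restricting the speaker-specific weights to a low-dimensional subspace so only a few coordinates (the i-vector) are estimated from limited data, can be sketched in a few lines of NumPy. This is a toy linear least-squares version under made-up dimensions, not the paper's update equations:

```python
import numpy as np

rng = np.random.default_rng(2)

D, k = 200, 5          # free parameters in the output weight matrix; subspace rank
T = rng.standard_normal((D, k))    # low-dimensional subspace (learned offline)
w_ubm = rng.standard_normal(D)     # vectorized UBM output-layer weights

# A speaker's weights are assumed to be the UBM weights plus a point in the subspace.
true_ivec = rng.standard_normal(k)
speaker_w = w_ubm + T @ true_ivec

# With limited speaker data we only see a noisy version of the full adaptation;
# projecting onto the fixed subspace estimates just k coordinates (the "i-vector")
# instead of all D free parameters, which is what curbs overfitting.
noisy_delta = (speaker_w - w_ubm) + 0.1 * rng.standard_normal(D)
ivec, *_ = np.linalg.lstsq(T, noisy_delta, rcond=None)

assert ivec.shape == (k,)
assert np.linalg.norm(ivec - true_ivec) < 0.5  # k numbers recover the speaker offset
```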

  6. Analysis of human scream and its impact on text-independent speaker verification.

    PubMed

    Hansen, John H L; Nandwana, Mahesh Kumar; Shokouhi, Navid

    2017-04-01

Screams are defined as sustained, high-energy vocalizations that lack phonological structure. This lack of phonological structure is what distinguishes screams from other forms of loud vocalization, such as "yell." This study investigates the acoustic aspects of screams and addresses those that are known to prevent standard speaker identification systems from recognizing the identity of screaming speakers. It is well established that speaker variability due to changes in vocal effort and the Lombard effect contributes to degraded performance in automatic speech systems (i.e., speech recognition, speaker identification, diarization, etc.). However, previous research in the general area of speaker variability has concentrated on human speech production, whereas less is known about non-speech vocalizations. The UT-NonSpeech corpus is developed here to investigate speaker verification from scream samples. This study considers a detailed analysis in terms of fundamental frequency, spectral peak shift, frame energy distribution, and spectral tilt. It is shown that traditional speaker recognition based on the Gaussian mixture model-universal background model framework is unreliable when evaluated with screams.
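
    The kind of score collapse described can be sketched with a single diagonal Gaussian per model in place of a full GMM-UBM system (a deliberate simplification): a gross shift of the test feature means, a crude stand-in for the spectral peak shift of screaming, wrecks the likelihood under a model enrolled on normal speech. All data here are synthetic:

```python
import numpy as np

def diag_gauss_loglik(X, mean, var):
    """Average per-frame log-likelihood of rows of X under a diagonal Gaussian."""
    return np.mean(-0.5 * (np.log(2 * np.pi * var)
                           + (X - mean) ** 2 / var).sum(axis=1))

rng = np.random.default_rng(3)
dim = 20

# "Enrollment": features from normal speech of the target speaker.
speaker_mean = rng.standard_normal(dim)
enroll = speaker_mean + rng.standard_normal((500, dim))
model_mean, model_var = enroll.mean(axis=0), enroll.var(axis=0)

# Matched test (normal speech) vs. mismatched test ("scream": a crude
# offset of the feature means, mimicking a spectral shift).
test_normal = speaker_mean + rng.standard_normal((200, dim))
test_scream = speaker_mean + 2.0 + rng.standard_normal((200, dim))

ll_normal = diag_gauss_loglik(test_normal, model_mean, model_var)
ll_scream = diag_gauss_loglik(test_scream, model_mean, model_var)
assert ll_normal > ll_scream  # vocal-effort mismatch collapses the score
```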

  7. Do Listeners Store in Memory a Speaker's Habitual Utterance-Final Phonation Type?

    PubMed Central

    Bőhm, Tamás; Shattuck-Hufnagel, Stefanie

    2009-01-01

    Earlier studies report systematic differences across speakers in the occurrence of utterance-final irregular phonation; the work reported here investigated whether human listeners remember this speaker-specific information and can access it when necessary (a prerequisite for using this cue in speaker recognition). Listeners personally familiar with the voices of the speakers were presented with pairs of speech samples: one with the original and the other with transformed final phonation type. Asked to select the member of the pair that was closer to the talker's voice, most listeners tended to choose the unmanipulated token (even though they judged them to sound essentially equally natural). This suggests that utterance-final pitch period irregularity is part of the mental representation of individual speaker voices, although this may depend on the individual speaker and listener to some extent. PMID:19776665

  8. Teaching First Language Speakers to Communicate across Linguistic Difference: Addressing Attitudes, Comprehension, and Strategies

    ERIC Educational Resources Information Center

    Subtirelu, Nicholas Close; Lindemann, Stephanie

    2016-01-01

    While most research in applied linguistics has focused on second language (L2) speakers and their language capabilities, the success of interaction between such speakers and first language (L1) speakers also relies on the positive attitudes and communication skills of the L1 speakers. However, some research has suggested that many L1 speakers lack…

  9. A Study on Metadiscoursive Interaction in the MA Theses of the Native Speakers of English and the Turkish Speakers of English

    ERIC Educational Resources Information Center

    Köroglu, Zehra; Tüm, Gülden

    2017-01-01

    This study has been conducted to evaluate the TM usage in the MA theses written by the native speakers (NSs) of English and the Turkish speakers (TSs) of English. The purpose is to compare the TM usage in the introduction, results and discussion, and conclusion sections by both groups' randomly selected MA theses in the field of ELT between the…

  10. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    NASA Astrophysics Data System (ADS)

    Liu, Yuyu; Sato, Yoichi

The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
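
    The Parzen-window computation of quadratic mutual information has a closed form, because products of Gaussian kernels integrate analytically. A sketch for 1-D features, with toy signals in place of the paper's audiovisual features and a fixed (not adaptive) bandwidth:

```python
import numpy as np

def gauss(d, sigma):
    # Integral of two Gaussian kernels of bandwidth sigma separated by d:
    # a Gaussian of variance 2*sigma^2 evaluated at d.
    return np.exp(-d ** 2 / (4 * sigma ** 2)) / np.sqrt(4 * np.pi * sigma ** 2)

def quadratic_mi(x, y, sigma=1.0):
    """Quadratic mutual information between 1-D samples x and y using Parzen
    (Gaussian kernel) density estimates; all integrals are closed-form."""
    Gx = gauss(x[:, None] - x[None, :], sigma)
    Gy = gauss(y[:, None] - y[None, :], sigma)
    v_joint = np.mean(Gx * Gy)                            # ∫∫ p(x,y)^2
    v_marg = np.mean(Gx) * np.mean(Gy)                    # ∫∫ (p(x)p(y))^2
    v_cross = np.mean(Gx.mean(axis=1) * Gy.mean(axis=1))  # ∫∫ p(x,y)p(x)p(y)
    return v_joint + v_marg - 2 * v_cross

rng = np.random.default_rng(4)
audio = rng.standard_normal(300)
visual_dep = audio + 0.3 * rng.standard_normal(300)  # mouth region tracks audio
visual_indep = rng.standard_normal(300)              # background does not

# The face region should show higher audiovisual QMI than the background.
assert quadratic_mi(audio, visual_dep) > quadratic_mi(audio, visual_indep)
```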

  11. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.
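
    The metamodel idea, rescoring hypotheses through a speaker's phoneme confusion matrix combined with a language-model prior, can be sketched as follows. The confusion probabilities, four-word lexicon, and priors are all invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical P(recognized | intended) for one dysarthric speaker: their /b/
# is often heard as /p/ and their /t/ as /d/ (toy values).
confusion = {
    ("b", "b"): 0.3, ("b", "p"): 0.7,
    ("p", "p"): 0.9, ("p", "b"): 0.1,
    ("a", "a"): 0.9, ("a", "e"): 0.1,
    ("t", "t"): 0.4, ("t", "d"): 0.6,
    ("d", "d"): 0.9, ("d", "t"): 0.1,
}
lexicon = {"bat": "bat", "pat": "pat", "bad": "bad", "pad": "pad"}
lm_prior = {"bat": 0.5, "pat": 0.2, "bad": 0.2, "pad": 0.1}

def decode(recognized, lexicon, confusion, lm_prior):
    """Pick the word whose phonemes best explain the recognized sequence,
    weighting the speaker's confusion model by a language-model prior."""
    best_word, best_score = None, -np.inf
    for word, phones in lexicon.items():
        if len(phones) != len(recognized):
            continue  # toy model: no insertions or deletions
        score = np.log(lm_prior[word])
        for intended, rec in zip(phones, recognized):
            score += np.log(confusion.get((intended, rec), 1e-6))
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# The recognizer output "pad" is re-scored: for this speaker, /b/->/p/ and
# /t/->/d/ confusions are likely, so the corrected hypothesis is "bat".
assert decode("pad", lexicon, confusion, lm_prior) == "bat"
```

    A full metamodel or WFST cascade does the same rescoring over lattices with insertions and deletions; the point here is only the combination of a per-speaker error model with a word-level prior.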

  12. Acquired dyslexia in a Turkish-English speaker.

    PubMed

    Raman, Ilhan; Weekes, Brendan S

    2005-06-01

    The Turkish script is characterised by completely transparent bidirectional mappings between orthography and phonology. To date, there has been no reported evidence of acquired dyslexia in Turkish speakers leading to the naïve view that reading and writing problems in Turkish are probably rare. We examined the extent to which phonological impairment and orthographic transparency influence reading disorders in a native Turkish speaker. BRB is a bilingual Turkish-English speaker with deep dysphasia accompanied by acquired dyslexia in both languages. The main findings are an effect of imageability on reading in Turkish coincident with surface dyslexia in English and preserved nonword reading. BRB's acquired dyslexia suggests that damage to phonological representations might have a consequence for learning to read in Turkish. We argue that BRB's acquired dyslexia has a common locus in chronic underactivation of phonological representations in Turkish and English. Despite a common locus, reading problems manifest themselves differently according to properties of the script and the type of task.

  13. A fundamental residue pitch perception bias for tone language speakers

    NASA Astrophysics Data System (ADS)

    Petitti, Elizabeth

    A complex tone composed of only higher-order harmonics typically elicits a pitch percept equivalent to the tone's missing fundamental frequency (f0). When judging the direction of residue pitch change between two such tones, however, listeners may have completely opposite perceptual experiences depending on whether they are biased to perceive changes based on the overall spectrum or the missing f0 (harmonic spacing). Individual differences in residue pitch change judgments are reliable and have been associated with musical experience and functional neuroanatomy. Tone languages put greater pitch processing demands on their speakers than non-tone languages, and we investigated whether these lifelong differences in linguistic pitch processing affect listeners' bias for residue pitch. We asked native tone language speakers and native English speakers to perform a pitch judgment task for two tones with missing fundamental frequencies. Given tone pairs with ambiguous pitch changes, listeners were asked to judge the direction of pitch change, where the direction of their response indicated whether they attended to the overall spectrum (exhibiting a spectral bias) or the missing f0 (exhibiting a fundamental bias). We found that tone language speakers are significantly more likely to perceive pitch changes based on the missing f0 than English speakers. These results suggest that tone-language speakers' privileged experience with linguistic pitch fundamentally tunes their basic auditory processing.
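
    The ambiguous stimuli such experiments rely on are straightforward to construct: two complex tones whose missing fundamentals move one way while their spectral components move the other. A sketch in NumPy with hypothetical parameter choices (harmonic numbers, f0s, and sample rate are illustrative, not the study's stimuli):

```python
import numpy as np

def residue_tone(f0, harmonics, fs=16000, dur=0.5):
    """Complex tone containing only the listed harmonics of f0 (f0 itself absent)."""
    t = np.arange(int(fs * dur)) / fs
    return sum(np.sin(2 * np.pi * f0 * h * t) for h in harmonics)

fs = 16000
# Tone A: harmonics 4-6 of 200 Hz -> components at 800, 1000, 1200 Hz.
# Tone B: harmonics 3-5 of 250 Hz -> components at 750, 1000, 1250 Hz.
# From A to B the missing f0 rises (200 -> 250 Hz) while the lowest spectral
# component falls (800 -> 750 Hz), so "f0 listeners" and "spectral listeners"
# can report opposite directions of pitch change.
tone_a = residue_tone(200, [4, 5, 6], fs)
tone_b = residue_tone(250, [3, 4, 5], fs)

# Sanity check: neither tone contains energy at its own fundamental.
for tone, f0 in ((tone_a, 200), (tone_b, 250)):
    spec = np.abs(np.fft.rfft(tone))
    freqs = np.fft.rfftfreq(len(tone), 1 / fs)
    energy_at_f0 = spec[np.argmin(np.abs(freqs - f0))]
    assert energy_at_f0 < 0.01 * spec.max()
```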

  14. Revisiting speech rate and utterance length manipulations in stuttering speakers.

    PubMed

    Blomgren, Michael; Goberman, Alexander M

    2008-01-01

The goal of this study was to evaluate stuttering frequency across a multidimensional (2x2) hierarchy of speech performance tasks. Specifically, this study examined the interaction between changes in length of utterance and levels of speech rate stability. Forty-four adult male speakers participated in the study (22 stuttering speakers and 22 non-stuttering speakers). Participants were audio and video recorded while producing a spontaneous speech task and four different experimental speaking tasks. The four experimental speaking tasks involved reading a list of 45 words and a list of 45 phrases twice each. One reading of each list involved speaking at a steady habitual rate (habitual rate tasks) and the other involved producing each list at a variable speaking rate (variable rate tasks). For the variable rate tasks, participants were directed to produce words or phrases at randomly ordered slow, habitual, and fast rates. The stuttering speakers exhibited significantly more stuttering on the variable rate tasks than on the habitual rate tasks. In addition, the stuttering speakers exhibited significantly more stuttering on the first word of the phrase-length tasks compared to the single-word tasks. Overall, the results indicated that varying levels of both utterance length and temporal complexity modulate stuttering frequency in adult stuttering speakers. Discussion focuses on issues of speech performance according to stuttering severity and possible clinical implications. The reader will learn about and be able to: (1) describe the mediating effects of length of utterance and speech rate on the frequency of stuttering in stuttering speakers; (2) understand the rationale behind multidimensional skill performance matrices; and (3) describe possible applications of motor skill performance matrices to stuttering therapy.

  15. Children's Use of Information Quality to Establish Speaker Preferences

    ERIC Educational Resources Information Center

    Gillis, Randall L.; Nilsen, Elizabeth S.

    2013-01-01

    Knowledge transfer is most effective when speakers provide good quality (in addition to accurate) information. Two studies investigated whether preschool- (4-5 years old) and school-age (6-7 years old) children prefer speakers who provide sufficient information over those who provide insufficient (yet accurate) information. Children were provided…

  16. Intonation and Gesture as Bootstrapping Devices in Speaker Uncertainty

    ERIC Educational Resources Information Center

    Hübscher, Iris; Esteve-Gibert, Núria; Igualada, Alfonso; Prieto, Pilar

    2017-01-01

    This study investigates 3- to 5-year-old children's sensitivity to lexical, intonational and gestural information in the comprehension of speaker uncertainty. Most previous studies on children's understanding of speaker certainty and uncertainty across languages have focused on the comprehension of lexical markers, and little is known about the…

  17. Verification of endocrinological functions at a short distance between parametric speakers and the human body.

    PubMed

    Lee, Soomin; Katsuura, Tetsuo; Shimomura, Yoshihiro

    2011-01-01

In recent years, a new type of speaker called the parametric speaker has been used to generate highly directional sound, and these speakers are now commercially available. In our previous study, we verified that the burden of the parametric speaker on endocrine function was lower than that of a general speaker. However, nothing has yet been demonstrated about the effects of distances shorter than 2.6 m between parametric speakers and the human body. Therefore, we investigated the effect of distance on endocrinological function and subjective evaluation. Nine male subjects participated in this study. They completed three consecutive sessions: a 20-min quiet period as a baseline, a 30-min mental task period with general speakers or parametric speakers, and a 20-min recovery period. We measured salivary cortisol and chromogranin A (CgA) concentrations. Furthermore, subjects took the Kwansei-gakuin Sleepiness Scale (KSS) test before and after the task, and a sound quality evaluation test after it. Four experiments, crossing a speaker condition (general speaker vs. parametric speaker) with a distance condition (0.3 m vs. 1.0 m), were conducted at the same time of day on separate days. We used three-way repeated measures ANOVA (speaker factor × distance factor × time factor) to examine the effects of the parametric speaker. We found that endocrinological function did not differ significantly across the speaker and distance conditions. The results also showed that physiological burden increased over time independent of speaker condition and distance condition.

  18. Advancements in robust algorithm formulation for speaker identification of whispered speech

    NASA Astrophysics Data System (ADS)

    Fan, Xing

Whispered speech is an alternative speech production mode to neutral speech, used intentionally by talkers in natural conversational scenarios to protect privacy and to keep certain content from being overheard or made public. Due to the profound differences between whispered and neutral speech in production mechanism, and the absence of whispered adaptation data, the performance of speaker identification systems trained with neutral speech degrades significantly. This dissertation therefore focuses on developing a robust closed-set speaker recognition system for whispered speech using no or limited whispered adaptation data from non-target speakers. The dissertation proposes the concept of "High"/"Low" performance whispered data for the purpose of speaker identification. A variety of acoustic properties are identified that contribute to the quality of whispered data. An acoustic analysis is also conducted to compare the phoneme/speaker dependency of the differences between whispered and neutral data in the feature domain. The observations from these acoustic analyses are new in this area and serve as guidance for developing robust speaker identification systems for whispered speech. The dissertation further proposes two systems for speaker identification of whispered speech. One system focuses on front-end processing: a two-dimensional feature space is proposed to search for "Low"-quality whispered utterances, and separate feature mapping functions are applied to vowels and consonants in order to retain the speaker information shared between whispered and neutral speech. The other system focuses on speech-mode-independent model training: the proposed method generates pseudo-whispered features from neutral features using the statistical information contained in a whispered Universal Background Model (UBM) trained on whispered data collected from non-target speakers. Four modeling methods are proposed.

  19. Intonation contrast in Cantonese speakers with hypokinetic dysarthria associated with Parkinson's disease.

    PubMed

    Ma, Joan K-Y; Whitehill, Tara L; So, Susanne Y-S

    2010-08-01

    Speech produced by individuals with hypokinetic dysarthria associated with Parkinson's disease (PD) is characterized by a number of features including impaired speech prosody. The purpose of this study was to investigate intonation contrasts produced by this group of speakers. Speech materials with a question-statement contrast were collected from 14 Cantonese speakers with PD. Twenty listeners then classified the productions as either questions or statements. Acoustic analyses of F0, duration, and intensity were conducted to determine which acoustic cues distinguished the production of questions from statements, and which cues appeared to be exploited by listeners in identifying intonational contrasts. The results show that listeners identified statements with a high degree of accuracy, but the accuracy of question identification ranged from 0.56% to 96% across the 14 speakers. The speakers with PD used similar acoustic cues as nondysarthric Cantonese speakers to mark the question-statement contrast, although the contrasts were not observed in all speakers. Listeners mainly used F0 cues at the final syllable for intonation identification. These data contribute to the researchers' understanding of intonation marking in speakers with PD, with specific application to the production and perception of intonation in a lexical tone language.

  20. Speaker-Machine Interaction in Automatic Speech Recognition. Technical Report.

    ERIC Educational Resources Information Center

    Makhoul, John I.

    The feasibility and limitations of speaker adaptation in improving the performance of a "fixed" (speaker-independent) automatic speech recognition system were examined. A fixed vocabulary of 55 syllables is used in the recognition system which contains 11 stops and fricatives and five tense vowels. The results of an experiment on speaker…

  1. Single-Word Intelligibility in Speakers with Repaired Cleft Palate

    ERIC Educational Resources Information Center

    Whitehill, Tara; Chau, Cynthia

    2004-01-01

    Many speakers with repaired cleft palate have reduced intelligibility, but there are limitations with current procedures for assessing intelligibility. The aim of this study was to construct a single-word intelligibility test for speakers with cleft palate. The test used a multiple-choice identification format, and was based on phonetic contrasts…

  2. Modeling the Control of Phonological Encoding in Bilingual Speakers

    ERIC Educational Resources Information Center

    Roelofs, Ardi; Verhoef, Kim

    2006-01-01

    Phonological encoding is the process by which speakers retrieve phonemic segments for morphemes from memory and use the segments to assemble phonological representations of words to be spoken. When conversing in one language, bilingual speakers have to resist the temptation of encoding word forms using the phonological rules and representations of…

  3. Does language shape thought? Mandarin and English speakers' conceptions of time.

    PubMed

    Boroditsky, L

    2001-08-01

    Does the language you speak affect how you think about the world? This question is taken up in three experiments. English and Mandarin talk about time differently--English predominantly talks about time as if it were horizontal, while Mandarin also commonly describes time as vertical. This difference between the two languages is reflected in the way their speakers think about time. In one study, Mandarin speakers tended to think about time vertically even when they were thinking for English (Mandarin speakers were faster to confirm that March comes earlier than April if they had just seen a vertical array of objects than if they had just seen a horizontal array, and the reverse was true for English speakers). Another study showed that the extent to which Mandarin-English bilinguals think about time vertically is related to how old they were when they first began to learn English. In another experiment native English speakers were taught to talk about time using vertical spatial terms in a way similar to Mandarin. On a subsequent test, this group of English speakers showed the same bias to think about time vertically as was observed with Mandarin speakers. It is concluded that (1) language is a powerful tool in shaping thought about abstract domains and (2) one's native language plays an important role in shaping habitual thought (e.g., how one tends to think about time) but does not entirely determine one's thinking in the strong Whorfian sense. Copyright 2001 Academic Press.

  4. Listening: The Second Speaker.

    ERIC Educational Resources Information Center

    Erway, Ella Anderson

    1972-01-01

    Scholars agree that listening is an active rather than a passive process. The listening which makes people achieve higher scores on current listening tests is "second speaker" listening or active participation in the encoding of the message. Most of the instructional suggestions in listening curriculum guides are based on this concept. In terms of…

  5. Are Cantonese-Speakers Really Descriptivists? Revisiting Cross-Cultural Semantics

    ERIC Educational Resources Information Center

    Lam, Barry

    2010-01-01

In an article in "Cognition" [Machery, E., Mallon, R., Nichols, S., & Stich, S. (2004). "Semantics cross-cultural style." "Cognition, 92", B1-B12], the authors present data that purport to show that East Asian Cantonese-speakers tend to have descriptivist intuitions about the referents of proper names, while Western English-speakers tend to have…

  6. Long short-term memory for speaker generalization in supervised speech separation

    PubMed Central

    Chen, Jitong; Wang, DeLiang

    2017-01-01

Speech separation can be formulated as learning to estimate a time-frequency mask from acoustic features extracted from noisy speech. For supervised speech separation, generalization to unseen noises and unseen speakers is a critical issue. Although deep neural networks (DNNs) have been successful in noise-independent speech separation, DNNs are limited in modeling a large number of speakers. To improve speaker generalization, a separation model based on long short-term memory (LSTM) is proposed, which naturally accounts for the temporal dynamics of speech. Systematic evaluation shows that the proposed model substantially outperforms a DNN-based model on unseen speakers and unseen noises in terms of objective speech intelligibility. Analyzing LSTM internal representations reveals that LSTM captures long-term speech contexts. The LSTM model is also more advantageous for low-latency speech separation: even without future frames, it performs better than the DNN model with future frames. The proposed model represents an effective approach for speaker- and noise-independent speech separation. PMID:28679261
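    The mask-estimation formulation in this abstract can be made concrete with a minimal sketch: a common training target for such DNN or LSTM separators is the ideal ratio mask, computable directly from clean and noise spectrograms. This NumPy example is illustrative only, not the paper's implementation; the toy array shapes are hypothetical.

    ```python
    import numpy as np

    def ideal_ratio_mask(clean_mag, noise_mag, eps=1e-8):
        """Ideal ratio mask: per time-frequency bin, the square root of
        the fraction of energy belonging to clean speech. A separation
        network is trained to predict this mask from noisy features."""
        clean_pow = clean_mag ** 2
        noise_pow = noise_mag ** 2
        return np.sqrt(clean_pow / (clean_pow + noise_pow + eps))

    def apply_mask(noisy_mag, mask):
        """Enhanced magnitude spectrogram via element-wise masking."""
        return noisy_mag * mask

    # Toy magnitudes: 4 frequency bins x 3 frames.
    rng = np.random.default_rng(0)
    clean = np.abs(rng.normal(size=(4, 3)))
    noise = np.abs(rng.normal(size=(4, 3)))
    mask = ideal_ratio_mask(clean, noise)
    enhanced = apply_mask(clean + noise, mask)
    ```

    By construction the mask lies in [0, 1]: bins dominated by speech are passed nearly unchanged, noise-dominated bins are attenuated.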

  7. Initial Teacher Training Courses and Non-Native Speaker Teachers

    ERIC Educational Resources Information Center

    Anderson, Jason

    2016-01-01

    This article reports on a study contrasting 41 native speakers (NSs) and 38 non-native speakers (NNSs) of English from two short initial teacher training courses, the Cambridge Certificate in English Language Teaching to Adults and the Trinity College London CertTESOL. After a brief history and literature review, I present findings on teachers'…

  8. The Employability of Non-Native-Speaker Teachers of EFL: A UK Survey

    ERIC Educational Resources Information Center

    Clark, Elizabeth; Paran, Amos

    2007-01-01

    The native speaker still has a privileged position in English language teaching, representing both the model speaker and the ideal teacher. Non-native-speaker teachers of English are often perceived as having a lower status than their native-speaking counterparts, and have been shown to face discriminatory attitudes when applying for teaching…

  9. Generic Language and Speaker Confidence Guide Preschoolers' Inferences about Novel Animate Kinds

    ERIC Educational Resources Information Center

    Stock, Hayli R.; Graham, Susan A.; Chambers, Craig G.

    2009-01-01

    We investigated the influence of speaker certainty on 156 four-year-old children's sensitivity to generic and nongeneric statements. An inductive inference task was implemented, in which a speaker described a nonobvious property of a novel creature using either a generic or a nongeneric statement. The speaker appeared to be confident, neutral, or…

  10. Toddlers Use Speech Disfluencies to Predict Speakers' Referential Intentions

    ERIC Educational Resources Information Center

    Kidd, Celeste; White, Katherine S.; Aslin, Richard N.

    2011-01-01

    The ability to infer the referential intentions of speakers is a crucial part of learning a language. Previous research has uncovered various contextual and social cues that children may use to do this. Here we provide the first evidence that children also use speech disfluencies to infer speaker intention. Disfluencies (e.g. filled pauses "uh"…

  11. Making Math Real: Effective Qualities of Guest Speaker Presentations and the Impact of Speakers on Student Attitude and Achievement in the Algebra Classroom

    ERIC Educational Resources Information Center

    McKain, Danielle R.

    2012-01-01

    The term real world is often used in mathematics education, yet the definition of real-world problems and how to incorporate them in the classroom remains ambiguous. One way real-world connections can be made is through guest speakers. Guest speakers can offer different perspectives and share knowledge about various subject areas, yet the impact…

  12. Modern Greek Language: Acquisition of Morphology and Syntax by Non-Native Speakers

    ERIC Educational Resources Information Center

    Andreou, Georgia; Karapetsas, Anargyros; Galantomos, Ioannis

    2008-01-01

This study investigated the performance of native and non-native speakers of the Modern Greek language on morphology and syntax tasks. Non-native speakers of Greek whose native language was English, a language with strict word order and simple morphology, made more errors and answered more slowly than native speakers on morphology but not…

  13. Speaker diarization system on the 2007 NIST rich transcription meeting recognition evaluation

    NASA Astrophysics Data System (ADS)

    Sun, Hanwu; Nwe, Tin Lay; Koh, Eugene Chin Wei; Bin, Ma; Li, Haizhou

    2007-09-01

This paper presents a speaker diarization system developed at the Institute for Infocomm Research (I2R) for the NIST Rich Transcription 2007 (RT-07) evaluation task. We describe in detail our primary approaches to speaker diarization under the Multiple Distant Microphones (MDM) condition in the conference room scenario. Our proposed system consists of six modules: (1) a normalized least-mean-square (NLMS) adaptive filter for speaker direction estimation via Time Difference of Arrival (TDOA); (2) an initial speaker clustering via a two-stage TDOA histogram distribution quantization approach; (3) multiple-microphone speaker data alignment via GCC-PHAT Time Delay Estimation (TDE) among all the distant microphone channel signals; (4) a speaker clustering algorithm based on a GMM modeling approach; (5) non-speech removal via a speech/non-speech verification mechanism; and (6) silence removal via a "Double-Layer Windowing" (DLW) method. We achieve an error rate of 31.02% on the 2006 Spring (RT-06s) MDM evaluation task and a competitive overall error rate of 15.32% on the NIST Rich Transcription 2007 (RT-07) MDM evaluation task.
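    The TDOA alignment step above relies on GCC-PHAT time delay estimation, a standard technique that can be sketched in a few lines of NumPy. This is an illustrative reimplementation under common assumptions (single dominant source, two channels), not the authors' code; all names are hypothetical.

    ```python
    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None):
        """Estimate the delay (seconds) of `sig` relative to `ref` via
        GCC-PHAT: cross-correlation with phase-transform weighting,
        which whitens the spectrum and sharpens the correlation peak."""
        n = len(sig) + len(ref)
        SIG = np.fft.rfft(sig, n=n)
        REF = np.fft.rfft(ref, n=n)
        cross = SIG * np.conj(REF)
        cross /= np.abs(cross) + 1e-15          # PHAT weighting
        cc = np.fft.irfft(cross, n=n)
        max_shift = n // 2
        if max_tau is not None:
            max_shift = min(int(fs * max_tau), max_shift)
        # Reorder so lags run from -max_shift to +max_shift.
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        shift = np.argmax(np.abs(cc)) - max_shift
        return shift / fs

    fs = 16000
    rng = np.random.default_rng(1)
    x = rng.normal(size=4096)
    delay = 40                                   # samples (2.5 ms)
    y = np.roll(x, delay)                        # delayed copy of x
    tau = gcc_phat(y, x, fs)
    ```

    With two distant microphones, the sign and magnitude of `tau` indicate the speaker's direction relative to the microphone pair.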

  14. Language systems in normal and aphasic human subjects: functional imaging studies and inferences from animal studies.

    PubMed

    Wise, Richard J S

    2003-01-01

    The old neurological model of language, based on the writings of Broca, Wernicke and Lichtheim in the 19th century, is now undergoing major modifications. Observations on the anatomy and physiology of auditory processing in non-human primates are giving strong indicators as to how speech perception is organised in the human brain. In the light of this knowledge, functional activation studies with positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) are achieving a new level of precision in the investigation of language organisation in the human brain, in a manner not possible with observations on patients with aphasic stroke. Although the use of functional imaging to inform methods of improving aphasia rehabilitation remains underdeveloped, there are strong indicators that this methodology will provide the means to research a very imperfectly developed area of therapy.

  15. Shhh… I Need Quiet! Children's Understanding of American, British, and Japanese-accented English Speakers.

    PubMed

    Bent, Tessa; Holt, Rachael Frush

    2018-02-01

Children's ability to understand speakers with a wide range of dialects and accents is essential for efficient language development and communication in a global society. Here, the impact of regional dialect and foreign-accent variability on children's speech understanding was evaluated in both quiet and noisy conditions. Five- to seven-year-old children (n = 90) and adults (n = 96) repeated sentences produced by three speakers with different accents (American English, British English, and Japanese-accented English) in quiet or noisy conditions. Adults had no difficulty understanding any speaker in quiet conditions. Their performance declined for the nonnative speaker with a moderate amount of noise; their performance only substantially declined for the British English speaker (i.e., below 93% correct) when their understanding of the American English speaker was also impeded. In contrast, although children showed accurate word recognition for the American and British English speakers in quiet conditions, they had difficulty understanding the nonnative speaker even under ideal listening conditions. With a moderate amount of noise, their perception of British English speech declined substantially and their ability to understand the nonnative speaker was particularly poor. These results suggest that although school-aged children can understand unfamiliar native dialects under ideal listening conditions, their ability to recognize words in these dialects may be highly susceptible to the influence of environmental degradation. Fully adult-like word identification for speakers with unfamiliar accents and dialects may exhibit a protracted developmental trajectory.

  16. Open-set speaker identification with diverse-duration speech data

    NASA Astrophysics Data System (ADS)

    Karadaghi, Rawande; Hertlein, Heinz; Ariyaeeinia, Aladdin

    2015-05-01

The concern of this paper is an important category of applications of open-set speaker identification in criminal investigation, which involves operating with speech of short and varied duration. The study presents investigations into the adverse effects of such operating conditions on the accuracy of open-set speaker identification, based on both GMM-UBM and i-vector approaches. The experiments are conducted using a protocol developed for the identification task, based on the NIST speaker recognition evaluation corpus of 2008. In order to closely cover the real-world operating conditions in the considered application area, the study includes experiments with various combinations of training and testing data durations. The paper details the characteristics of the experimental investigations conducted and provides a thorough analysis of the results obtained.
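    The GMM-UBM approach named above scores a test utterance against each enrolled speaker model relative to a universal background model, and makes the identification open-set by rejecting utterances whose best likelihood ratio falls below a threshold. A toy sketch with scikit-learn follows; the synthetic features, speaker labels, and threshold are hypothetical stand-ins (the study used the NIST 2008 corpus with full acoustic front-end processing).

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)

    # Synthetic 4-dim features: two enrolled speakers with offset
    # distributions, plus pooled background data for the UBM.
    spk_data = {"A": rng.normal(0.0, 1.0, size=(500, 4)),
                "B": rng.normal(3.0, 1.0, size=(500, 4))}
    background = rng.normal(1.5, 2.0, size=(1000, 4))

    ubm = GaussianMixture(n_components=4, random_state=0).fit(background)
    models = {s: GaussianMixture(n_components=4, random_state=0).fit(d)
              for s, d in spk_data.items()}

    def identify(test_feats, threshold=0.5):
        """Open-set decision: pick the speaker with the highest average
        log-likelihood ratio against the UBM; reject as 'unknown' when
        even the best score is below the threshold."""
        scores = {s: m.score(test_feats) - ubm.score(test_feats)
                  for s, m in models.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > threshold else "unknown"
    ```

    Short test utterances mean fewer feature frames per `score` call, which raises the variance of the likelihood-ratio estimate; this is one way the short-duration conditions studied in the paper degrade accuracy.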

  17. Google Home: smart speaker as environmental control unit.

    PubMed

    Noda, Kenichiro

    2017-08-23

Environmental Control Units (ECU) are devices or systems that allow a person to control appliances in their home or work environment. Such a system can be utilized by clients with physical and/or functional disability to enhance their ability to control their environment, to promote independence, and to improve their quality of life. Over the last several years, several inexpensive, commercially available, voice-activated smart speakers, such as Google Home and Amazon Echo, have emerged on the market. These smart speakers are equipped with far-field microphones that support voice recognition and allow completely hands-free operation for various purposes, including playing music, retrieving information, and, most importantly, environmental control. Clients with disability could utilize these features to turn the unit into a simple ECU that is completely voice activated and wirelessly connected to appliances. Smart speakers, with their ease of setup, low cost, and versatility, may be a more affordable and accessible alternative to the traditional ECU. Implications for Rehabilitation: Environmental Control Units (ECU) enable independence for physically and functionally disabled clients, and reduce the burden and frequency of demands on carers. Traditional ECU can be costly and may require clients to learn specialized skills to use. Smart speakers have the potential to be used as a new-age ECU by overcoming these barriers, and can be used by a wider range of clients.

  18. Co-Construction of Nonnative Speaker Identity in Cross-Cultural Interaction

    ERIC Educational Resources Information Center

    Park, Jae-Eun

    2007-01-01

    Informed by Conversation Analysis, this paper examines discursive practices through which nonnative speaker (NNS) identity is constituted in relation to native speaker (NS) identity in naturally occurring English conversations. Drawing on studies of social interaction that view identity as intrinsically a social, dialogic, negotiable entity, I…

  19. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements

    PubMed Central

    Kessler, Luise; Schweinberger, Stefan R.

    2016-01-01

A speaker’s gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a priori whether they were true or not (e.g., “sniffer dogs cannot smell the difference between identical twins”). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze. PMID:27643789

  20. Voice Handicap Index in Persian Speakers with Various Severities of Hearing Loss.

    PubMed

    Aghadoost, Ozra; Moradi, Negin; Dabirmoghaddam, Payman; Aghadoost, Alireza; Naderifar, Ehsan; Dehbokri, Siavash Mohammadi

    2016-01-01

The purpose of this study was to assess and compare the total score and subscale scores of the Voice Handicap Index (VHI) in speakers with and without hearing loss. A further aim was to determine whether severity of hearing loss correlates with total and subscale VHI scores. In this cross-sectional, descriptive analytical study, 100 participants, divided into two groups of participants with and without hearing loss, were studied. Background information was gathered by interview, and VHI questionnaires were filled in by all participants. For all variables, including mean total score and VHI subscale scores, there was a considerable difference between speakers with and without hearing loss (p < 0.05). The correlation between severity of hearing loss and both total and subscale VHI scores was significant. Speakers with hearing loss were found to have higher mean VHI scores than speakers with normal hearing, indicating a high voice-related handicap in speakers with hearing loss. In addition, increased severity of hearing loss leads to more severe voice handicap. This finding emphasizes the need for a multilateral assessment and treatment of voice disorders in speakers with hearing loss. © 2017 S. Karger AG, Basel.

  1. Postaccess processes in the open vs. closed class distinction.

    PubMed

    Matthei, E H; Kean, M L

    1989-02-01

We present the results of two auditory lexical decision experiments in which we attempted to replicate findings originally presented in Bradley (1978, Computational distinctions of vocabulary type, Ph.D. dissertation, MIT). The results obtained by Bradley were used as evidence for a processing distinction between the open and the closed class vocabularies; this distinction was then used as part of an explanation for agrammatism in the comprehension and production of Broca's aphasics. In our first experiment we failed to replicate Bradley's result of frequency insensitivity in the closed class. Our second experiment, however, replicates Bradley's finding that closed-class-based nonwords (e.g., thanage) fail to induce interference effects in nonword decisions. We argue that our results, together with the various other reported failures to replicate Bradley's frequency insensitivity result, indicate that the open and closed classes may play distinct roles in postaccess phenomena involving the processing of morphological information, but that such studies cannot address the question of whether the open vs. closed class distinction plays a role in syntactic processing.

  2. Phase Asymmetries in Normophonic Speakers: Visual Judgments and Objective Findings

    ERIC Educational Resources Information Center

    Bonilha, Heather Shaw; Deliyski, Dimitar D.; Gerlach, Terri Treman

    2008-01-01

    Purpose: To ascertain the amount of phase asymmetry of the vocal fold vibration in normophonic speakers via visualization techniques and compare findings for habitual and pressed phonations. Method: Fifty-two normophonic speakers underwent stroboscopy and high-speed videoendoscopy (HSV). The HSV images were further processed into 4 visual…

  3. Dysprosody and Stimulus Effects in Cantonese Speakers with Parkinson's Disease

    ERIC Educational Resources Information Center

    Ma, Joan K.-Y.; Whitehill, Tara; Cheung, Katherine S.-K.

    2010-01-01

    Background: Dysprosody is a common feature in speakers with hypokinetic dysarthria. However, speech prosody varies across different types of speech materials. This raises the question of what is the most appropriate speech material for the evaluation of dysprosody. Aims: To characterize the prosodic impairment in Cantonese speakers with…

  4. Profiles of an Acquisition Generation: Nontraditional Heritage Speakers of Spanish

    ERIC Educational Resources Information Center

    DeFeo, Dayna Jean

    2018-01-01

    Though definitions vary, the literature on heritage speakers of Spanish identifies two primary attributes: a linguistic and cultural connection to the language. This article profiles four Anglo college students who grew up in bilingual or Spanish-dominant communities in the Southwest who self-identified as Spanish heritage speakers, citing…

  5. The Denial of Ideology in Perceptions of "Nonnative Speaker" Teachers

    ERIC Educational Resources Information Center

    Holliday, Adrian; Aboshiha, Pamela

    2009-01-01

    There is now general acceptance that the traditional "nonnative speaker" label for teachers of English is problematic on sociolinguistic grounds and can be the source of employment discrimination. However, there continues to be disagreement regarding how far there is a prejudice against "nonnative speaker" teachers which is deep and sustained and…

  6. Nonnative Speakers Do Not Take Competing Alternative Expressions into Account the Way Native Speakers Do

    ERIC Educational Resources Information Center

    Robenalt, Clarice; Goldberg, Adele E.

    2016-01-01

    When native speakers judge the acceptability of novel sentences, they appear to implicitly take competing formulations into account, judging novel sentences with a readily available alternative formulation to be less acceptable than novel sentences with no competing alternative. Moreover, novel sentences with a competing alternative are more…

  7. Simultaneous Talk--From the Perspective of Floor Management of English and Japanese Speakers.

    ERIC Educational Resources Information Center

    Hayashi, Reiko

    1988-01-01

    Investigates simultaneous talk in face-to-face conversation using the analytic framework of "floor" proposed by Edelsky (1981). Analysis of taped conversation among speakers of Japanese and among speakers of English shows that, while both groups use simultaneous talk, it is used more frequently by Japanese speakers. A reference list…

  8. Defining "Native Speaker" in Multilingual Settings: English as a Native Language in Asia

    ERIC Educational Resources Information Center

    Hansen Edwards, Jette G.

    2017-01-01

    The current study examines how and why speakers of English from multilingual contexts in Asia are identifying as native speakers of English. Eighteen participants from different contexts in Asia, including Singapore, Malaysia, India, Taiwan, and The Philippines, who self-identified as native speakers of English participated in hour-long interviews…

  9. Using Avatars for Improving Speaker Identification in Captioning

    NASA Astrophysics Data System (ADS)

    Vy, Quoc V.; Fels, Deborah I.

Captioning is the main method for accessing television and film content by people who are deaf or hard-of-hearing. One major difficulty consistently identified by the community is knowing who is speaking, particularly for an off-screen narrator. A captioning system was created using a participatory design method to improve speaker identification. The final prototype contained avatars and a coloured border for identifying specific speakers. Evaluation results were very positive; however, participants also wanted to customize various components such as caption and avatar location.

  10. Vowel reduction across tasks for male speakers of American English.

    PubMed

    Kuo, Christina; Weismer, Gary

    2016-07-01

    This study examined acoustic variation of vowels within speakers across speech tasks. The overarching goal of the study was to understand within-speaker variation as one index of the range of normal speech motor behavior for American English vowels. Ten male speakers of American English performed four speech tasks including citation form sentence reading with a clear-speech style (clear-speech), citation form sentence reading (citation), passage reading (reading), and conversational speech (conversation). Eight monophthong vowels in a variety of consonant contexts were studied. Clear-speech was operationally defined as the reference point for describing variation. Acoustic measures associated with the conventions of vowel targets were obtained and examined. These included temporal midpoint formant frequencies for the first three formants (F1, F2, and F3) and the derived Euclidean distances in the F1-F2 and F2-F3 planes. Results indicated that reduction toward the center of the F1-F2 and F2-F3 planes increased in magnitude across the tasks in the order of clear-speech, citation, reading, and conversation. The cross-task variation was comparable for all speakers despite fine-grained individual differences. The characteristics of systematic within-speaker acoustic variation across tasks have potential implications for the understanding of the mechanisms of speech motor control and motor speech disorders.
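
    The derived Euclidean-distance measure described above is straightforward to compute. A minimal Python sketch, using invented formant values purely for illustration (not the study's data): distance from the center of the F1-F2 plane shrinks as a vowel reduces.

```python
import math

def euclidean_distance(f1_a, f2_a, f1_b, f2_b):
    """Distance between two points in the F1-F2 plane (Hz)."""
    return math.hypot(f1_a - f1_b, f2_a - f2_b)

# Hypothetical temporal-midpoint formants (Hz) for one vowel produced in
# two tasks, and a hypothetical center of the speaker's F1-F2 vowel space.
center_f1, center_f2 = 500.0, 1500.0
clear_speech = (750.0, 1750.0)   # clear-speech token
conversation = (650.0, 1600.0)   # conversational token

d_clear = euclidean_distance(*clear_speech, center_f1, center_f2)
d_conv = euclidean_distance(*conversation, center_f1, center_f2)

# A smaller distance from the center indicates greater vowel reduction,
# the pattern the study reports for conversational relative to clear speech.
assert d_conv < d_clear
```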

  11. What's Learned Together Stays Together: Speakers' Choice of Referring Expression Reflects Shared Experience

    ERIC Educational Resources Information Center

    Gorman, Kristen S.; Gegg-Harrison, Whitney; Marsh, Chelsea R.; Tanenhaus, Michael K.

    2013-01-01

    When referring to named objects, speakers can choose either a name ("mbira") or a description ("that gourd-like instrument with metal strips"); whether the name provides useful information depends on whether the speaker's knowledge of the name is shared with the addressee. But, how do speakers determine what is shared? In 2…

  12. Early Language Experience Facilitates the Processing of Gender Agreement in Spanish Heritage Speakers

    ERIC Educational Resources Information Center

    Montrul, Silvina; Davidson, Justin; De La Fuente, Israel; Foote, Rebecca

    2014-01-01

    We examined how age of acquisition in Spanish heritage speakers and L2 learners interacts with implicitness vs. explicitness of tasks in gender processing of canonical and non-canonical ending nouns. Twenty-three Spanish native speakers, 29 heritage speakers, and 33 proficiency-matched L2 learners completed three on-line spoken word recognition…

  13. Rationales for Indirect Speech: The Theory of the Strategic Speaker

    ERIC Educational Resources Information Center

    Lee, James J.; Pinker, Steven

    2010-01-01

    Speakers often do not state requests directly but employ innuendos such as "Would you like to see my etchings?" Though such indirectness seems puzzlingly inefficient, it can be explained by a theory of the "strategic speaker", who seeks plausible deniability when he or she is uncertain of whether the hearer is cooperative or…

  14. The Status of Native Speaker Intuitions in a Polylectal Grammar.

    ERIC Educational Resources Information Center

    Debose, Charles E.

    A study of one speaker's intuitions about and performance in Black English is presented with relation to Saussure's "langue-parole" dichotomy. Native speakers of a language have intuitions about the static synchronic entities although the data of their speaking is variable and panchronic. These entities are in a diglossic relationship to each…

  15. Taiwanese University Students' Attitudes to Non-Native Speakers English Teachers

    ERIC Educational Resources Information Center

    Chang, Feng-Ru

    2016-01-01

    Numerous studies have been conducted to explore issues surrounding non-native speakers (NNS) English teachers and native speaker (NS) teachers which concern, among others, the comparison between the two, the self-perceptions of NNS English teachers and the effectiveness of their teaching, and the students' opinions on and attitudes towards them.…

  16. An Email Exchange Project between Non-Native Speakers of English.

    ERIC Educational Resources Information Center

    Fedderholdt, Karen

    2001-01-01

    Describes a recent email writing project between nonnative speakers of English. The project was carried out by a group of Japanese university students, and a group of Danish students preparing for university entrance examinations. Explains the reasons for choosing to use email in writing classes and why nonnative speakers were chosen. (Author/VWL)

  17. Speaker Introductions at Internal Medicine Grand Rounds: Forms of Address Reveal Gender Bias.

    PubMed

    Files, Julia A; Mayer, Anita P; Ko, Marcia G; Friedrich, Patricia; Jenkins, Marjorie; Bryan, Michael J; Vegunta, Suneela; Wittich, Christopher M; Lyle, Melissa A; Melikian, Ryan; Duston, Trevor; Chang, Yu-Hui H; Hayes, Sharonne N

    2017-05-01

    Gender bias has been identified as one of the drivers of gender disparity in academic medicine. Bias may be reinforced by gender-subordinating language or differential use of formality in forms of address. Professional titles may influence the perceived expertise and authority of the referenced individual. The objective of this study is to examine how professional titles were used in same- and mixed-gender speaker introductions at Internal Medicine Grand Rounds (IMGR). A retrospective observational study of video-archived speaker introductions at consecutive IMGR was conducted at two different locations (Arizona, Minnesota) of an academic medical center. Introducers and speakers at IMGR were physician and scientist peers holding MD, PhD, or MD/PhD degrees. The primary outcome was whether or not a speaker's professional title was used in the first form of address during speaker introductions at IMGR. As secondary outcomes, we evaluated whether or not the speaker's professional title was used in any form of address during the introduction. Three hundred twenty-one forms of address were analyzed. Female introducers were more likely to use professional titles when introducing any speaker during the first form of address compared with male introducers (96.2% [102/106] vs. 65.6% [141/215]; p < 0.001). Female dyads utilized formal titles during the first form of address 97.8% (45/46) of the time, compared with male dyads, who utilized a formal title 72.4% (110/152) of the time (p = 0.007). In mixed-gender dyads where the introducer was female and the speaker male, formal titles were used 95.0% (57/60) of the time. Male introducers of female speakers utilized professional titles 49.2% (31/63) of the time (p < 0.001). In this study, women introduced by men at IMGR were less likely to be addressed by professional title than were men introduced by men. Differential formality in speaker introductions may amplify isolation, marginalization, and professional discomfiture.

  18. Speaker and Accent Variation Are Handled Differently: Evidence in Native and Non-Native Listeners

    PubMed Central

    Kriengwatana, Buddhamas; Terry, Josephine; Chládková, Kateřina; Escudero, Paola

    2016-01-01

    Listeners are able to cope with between-speaker variability in speech that stems from anatomical sources (i.e. individual and sex differences in vocal tract size) and sociolinguistic sources (i.e. accents). We hypothesized that listeners adapt to these two types of variation differently because prior work indicates that adapting to speaker/sex variability may occur pre-lexically while adapting to accent variability may require learning from attention to explicit cues (i.e. feedback). In Experiment 1, we tested our hypothesis by training native Dutch listeners and Australian-English (AusE) listeners without any experience with Dutch or Flemish to discriminate between the Dutch vowels /I/ and /ε/ from a single speaker. We then tested their ability to classify /I/ and /ε/ vowels of a novel Dutch speaker (i.e. speaker or sex change only), or vowels of a novel Flemish speaker (i.e. speaker or sex change plus accent change). We found that both Dutch and AusE listeners could successfully categorize vowels if the change involved a speaker/sex change, but not if the change involved an accent change. When AusE listeners were given feedback on their categorization responses to the novel speaker in Experiment 2, they were able to successfully categorize vowels involving an accent change. These results suggest that adapting to accents may be a two-step process, whereby the first step involves adapting to speaker differences at a pre-lexical level, and the second step involves adapting to accent differences at a contextual level, where listeners have access to word meaning or are given feedback that allows them to appropriately adjust their perceptual category boundaries. PMID:27309889

  19. Coffee Can Speakers: Amazing Energy Transformers--Fifth-Grade Students Learn the Science behind Speakers

    ERIC Educational Resources Information Center

    Wise, Kevin; Haake, Monica

    2007-01-01

    In this article, the authors describe steps on how to develop a high-impact activity in which students build, test, and improve their own "coffee can" speakers to observe firsthand how loudspeakers work to convert electrical energy to sound. The activity is appropriate for students in grades three to six and lends itself best to students…

  20. Prosodic Marking of Information Structure by Malaysian Speakers of English

    ERIC Educational Resources Information Center

    Gut, Ulrike; Pillai, Stefanie

    2014-01-01

    Various researchers have shown that second language (L2) speakers have difficulties with marking information structure in English prosodically: They deviate from native speakers not only in terms of pitch accent placement (Grosser, 1997; Gut, 2009; Ramírez Verdugo, 2002) and the type of pitch accent they produce (Wennerstrom, 1994, 1998) but also…

  1. Revisiting Speech Rate and Utterance Length Manipulations in Stuttering Speakers

    ERIC Educational Resources Information Center

    Blomgren, Michael; Goberman, Alexander M.

    2008-01-01

    The goal of this study was to evaluate stuttering frequency across a multidimensional (2 x 2) hierarchy of speech performance tasks. Specifically, this study examined the interaction between changes in length of utterance and levels of speech rate stability. Forty-four adult male speakers participated in the study (22 stuttering speakers and 22…

  2. Anxiety in speakers who persist and recover from stuttering.

    PubMed

    Davis, Stephen; Shisca, Daniella; Howell, Peter

    2007-01-01

    The study was designed to see whether young children and adolescents who persist in their stutter (N=18) show differences in trait and/or state anxiety compared with people who recover from their stutter (N=17) and fluent control speakers (N=19). A fluent control group, a group of speakers documented as stuttering in the past but not stuttering now, and a group of speakers (also with a documented history of stuttering) who persist in their stuttering participated, all aged 10-17 years. The State-Trait Anxiety Inventory for Children was administered. There were no differences between the persistent, recovered, and control groups with regard to trait anxiety. The persistent group had higher state anxiety than the controls and the recovered group for three out of four speaking situations. The findings are interpreted as showing that anxiety levels in certain affective states appear to be associated with the speaking problem. A reader should be able to: appreciate the difference between state and trait anxiety; understand how views about the role anxiety plays in stuttering have changed over time; appreciate different views about how anxiety affects speakers who persist in and recover from stuttering; and see why longitudinal work is needed to study these issues.

  3. Children's Understanding That Utterances Emanate from Minds: Using Speaker Belief To Aid Interpretation.

    ERIC Educational Resources Information Center

    Mitchell, Peter; Robinson, Elizabeth J.; Thompson, Doreen E.

    1999-01-01

    Three experiments examined 3- to 6-year olds' ability to use a speaker's utterance based on false belief to identify which of several referents was intended. Found that many 4- to 5-year olds performed correctly only when it was unnecessary to consider the speaker's belief. When the speaker gave an ambiguous utterance, many 3- to 6-year olds…

  4. Identification and tracking of particular speaker in noisy environment

    NASA Astrophysics Data System (ADS)

    Sawada, Hideyuki; Ohkado, Minoru

    2004-10-01

    Humans are able to exchange information smoothly by voice in a variety of situations, such as noisy environments, crowds, and conversations with several speakers. We can detect the position of a sound source in 3D space, extract a particular sound from a mixture, and recognize who is talking. Realizing this mechanism with a computer enables new applications: recording sound at high quality by reducing noise, presenting a clarified sound, and microphone-free speech recognition by extracting a particular voice. This paper introduces real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the speaker's location and individual voice characteristics. The study will be applied to develop an adaptive auditory system for a mobile robot that collaborates with a factory worker.

  5. Robust speaker's location detection in a vehicle environment using GMM models.

    PubMed

    Hu, Jwu-Sheng; Cheng, Chieh-Cheng; Liu, Wei-Han

    2006-04-01

    Human-computer interaction (HCI) using speech communication is becoming increasingly important, especially in driving, where safety is the primary concern. Knowing the speaker's location (i.e., speaker localization) not only improves the enhancement of a corrupted signal but also assists speaker identification. Since conventional speech localization algorithms suffer from the uncertainties of environmental complexity and noise, as well as from the microphone mismatch problem, they are frequently not robust in practice. Without high reliability, speech-based HCI will never gain acceptance. This work presents a novel speaker-location detection method and demonstrates high accuracy within a vehicle cabin using a single linear microphone array. The proposed approach utilizes Gaussian mixture models (GMMs) to model the distributions of the phase differences among the microphones caused by the complex characteristics of room acoustics and microphone mismatch. The model can be applied in both near-field and far-field situations in a noisy environment. Each Gaussian component of a GMM represents a general location-dependent but content- and speaker-independent phase-difference distribution. Moreover, the scheme performs well not only in non-line-of-sight cases but also when speakers are aligned toward the microphone array at different distances from it. This strong performance is achieved by exploiting the fact that the phase-difference distributions at different locations are distinguishable in the environment of a car. The experimental results also show that the proposed method outperforms the conventional multiple signal classification (MUSIC) technique at various SNRs.
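
    The core idea of classifying a speaker's location from inter-microphone phase differences can be sketched with a drastically simplified stand-in: one Gaussian per location instead of a full GMM, one-dimensional phase-difference features, and invented training data. This is not the paper's implementation, only an illustration of maximum-likelihood location detection.

```python
import math
from statistics import mean, stdev

def gaussian_pdf(x, mu, sigma):
    """Likelihood of observation x under a Gaussian with mean mu, std sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical training data: inter-microphone phase differences (radians)
# observed while a speaker talks from each seat of the car.
training = {
    "driver": [0.9, 1.0, 1.1, 0.95, 1.05],
    "passenger": [-0.8, -0.7, -0.75, -0.85, -0.7],
}

# Fit one Gaussian per location (a single-component stand-in for a GMM).
models = {loc: (mean(xs), stdev(xs)) for loc, xs in training.items()}

def classify(phase_diff):
    """Pick the location whose model assigns the observation the highest likelihood."""
    return max(models, key=lambda loc: gaussian_pdf(phase_diff, *models[loc]))

assert classify(1.02) == "driver"
assert classify(-0.77) == "passenger"
```

    A real system would fit multi-component GMMs over phase-difference vectors from all microphone pairs, but the decision rule (choose the location model with maximum likelihood) is the same.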

  6. Mechanisms of Verbal Morphology Processing in Heritage Speakers of Russian

    ERIC Educational Resources Information Center

    Romanova, Natalia

    2008-01-01

    The goal of the study is to analyze the morphological processing of real and novel verb forms by heritage speakers of Russian in order to determine whether it differs from that of native (L1) speakers and second language (L2) learners; if so, how it is different; and which factors may guide the acquisition process. The experiment involved three…

  7. Speaker variability augments phonological processing in early word learning

    PubMed Central

    Rost, Gwyneth C.; McMurray, Bob

    2010-01-01

    Infants in the early stages of word learning have difficulty learning lexical neighbors (i.e., word pairs that differ by a single phoneme), despite the ability to discriminate the same contrast in a purely auditory task. While prior work has focused on top-down explanations for this failure (e.g. task demands, lexical competition), none has examined if bottom-up acoustic-phonetic factors play a role. We hypothesized that lexical neighbor learning could be improved by incorporating greater acoustic variability in the words being learned, as this may buttress still developing phonetic categories, and help infants identify the relevant contrastive dimension. Infants were exposed to pictures accompanied by labels spoken by either a single or multiple speakers. At test, infants in the single-speaker condition failed to recognize the difference between the two words, while infants who heard multiple speakers discriminated between them. PMID:19143806

  8. Using speakers' referential intentions to model early cross-situational word learning.

    PubMed

    Frank, Michael C; Goodman, Noah D; Tenenbaum, Joshua B

    2009-05-01

    Word learning is a "chicken and egg" problem. If a child could understand speakers' utterances, it would be easy to learn the meanings of individual words, and once a child knows what many words mean, it is easy to infer speakers' intended meanings. To the beginning learner, however, both individual word meanings and speakers' intentions are unknown. We describe a computational model of word learning that solves these two inference problems in parallel, rather than relying exclusively on either the inferred meanings of utterances or cross-situational word-meaning associations. We tested our model using annotated corpus data and found that it inferred pairings between words and object concepts with higher precision than comparison models. Moreover, as the result of making probabilistic inferences about speakers' intentions, our model explains a variety of behavioral phenomena described in the word-learning literature. These phenomena include mutual exclusivity, one-trial learning, cross-situational learning, the role of words in object individuation, and the use of inferred intentions to disambiguate reference.
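
    The cross-situational word-meaning associations that the model above is compared against can be illustrated with a minimal co-occurrence counter. The paper's actual model performs joint probabilistic inference over speakers' intentions; this sketch, with invented scenes, shows only the simpler associative baseline.

```python
from collections import Counter

# Hypothetical cross-situational data: each scene pairs the words of an
# utterance with the objects present when it was spoken.
scenes = [
    ({"look", "ball"}, {"BALL", "DOG"}),
    ({"nice", "ball"}, {"BALL", "CUP"}),
    ({"look", "dog"},  {"DOG", "CUP"}),
]

# Count word-object co-occurrences across scenes; a purely associative
# learner maps each word to its most frequent co-occurring object.
cooc = Counter()
for words, objects in scenes:
    for w in words:
        for o in objects:
            cooc[(w, o)] += 1

best_for_ball = max({"BALL", "DOG", "CUP"}, key=lambda o: cooc[("ball", o)])
assert best_for_ball == "BALL"   # "ball" co-occurs with BALL in 2 of 3 scenes
```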

  9. Coronal View Ultrasound Imaging of Movement in Different Segments of the Tongue during Paced Recital: Findings from Four Normal Speakers and a Speaker with Partial Glossectomy

    ERIC Educational Resources Information Center

    Bressmann, Tim; Flowers, Heather; Wong, Willy; Irish, Jonathan C.

    2010-01-01

    The goal of this study was to quantitatively describe aspects of coronal tongue movement in different anatomical regions of the tongue. Four normal speakers and a speaker with partial glossectomy read four repetitions of a metronome-paced poem. Their tongue movement was recorded in four coronal planes using two-dimensional B-mode ultrasound…

  10. Evaluation of speaker de-identification based on voice gender and age conversion

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Matoušek, Jindřich

    2018-03-01

    Two basic tasks are covered in this paper. The first consists in the design and practical testing of a new method for voice de-identification that changes the apparent age and/or gender of a speaker by multi-segmental frequency scale transformation combined with prosody modification. The second task is aimed at verifying the applicability of a classifier based on Gaussian mixture models (GMM) to detect the original Czech and Slovak speakers after the applied voice de-identification. The experiments confirm the functionality of the developed gender and age conversion for all selected types of de-identification, which can be objectively evaluated by the GMM-based open-set classifier. Original-speaker detection accuracy was also compared for sentences uttered by German and English speakers, showing the language independence of the proposed method.

  11. The processing and comprehension of wh-questions among L2 German speakers

    PubMed Central

    Jackson, Carrie N.; Bobb, Susan C.

    2009-01-01

    Using the self-paced-reading paradigm, the present study examines whether highly proficient second language (L2) speakers of German (English L1) use case-marking information during the on-line comprehension of unambiguous wh-extractions, even when task demands do not draw explicit attention to this morphosyntactic feature in German. Results support previous findings, in that both the native and the L2 German speakers exhibited an immediate subject-preference in the matrix clause, suggesting they were sensitive to case-marking information. However, only among the native speakers did this subject-preference carry over to reading times in the complement clause. The results from the present study are discussed in light of current debates regarding the ability of L2 speakers to attain native-like processing strategies in their L2. PMID:20161006

  12. Computer-Mediated Assessment of Intelligibility in Aphasia and Apraxia of Speech

    PubMed Central

    Haley, Katarina L.; Roth, Heidi; Grindstaff, Enetta; Jacks, Adam

    2011-01-01

    Background Previous work indicates that single word intelligibility tests developed for dysarthria are sensitive to segmental production errors in aphasic individuals with and without apraxia of speech. However, potential listener learning effects and difficulties adapting elicitation procedures to coexisting language impairments limit their applicability to left hemisphere stroke survivors. Aims The main purpose of this study was to examine basic psychometric properties for a new monosyllabic intelligibility test developed for individuals with aphasia and/or AOS. A related purpose was to examine clinical feasibility and potential to standardize a computer-mediated administration approach. Methods & Procedures A 600-item monosyllabic single word intelligibility test was constructed by assembling sets of phonetically similar words. Custom software was used to select 50 target words from this test in a pseudo-random fashion and to elicit and record production of these words by 23 speakers with aphasia and 20 neurologically healthy participants. To evaluate test-retest reliability, two identical sets of 50-word lists were elicited by requesting repetition after a live speaker model. To examine the effect of a different word set and auditory model, an additional set of 50 different words was elicited with a pre-recorded model. The recorded words were presented to normal-hearing listeners for identification via orthographic and multiple-choice response formats. To examine construct validity, production accuracy for each speaker was estimated via phonetic transcription and rating of overall articulation. Outcomes & Results Recording and listening tasks were completed in less than six minutes for all speakers and listeners. Aphasic speakers were significantly less intelligible than neurologically healthy speakers and displayed a wide range of intelligibility scores. Test-retest and inter-listener reliability estimates were strong. No significant difference was found in

  13. Fifty years of progress in speech and speaker recognition

    NASA Astrophysics Data System (ADS)

    Furui, Sadaoki

    2004-10-01

    Speech and speaker recognition technology has made very significant progress in the past 50 years. The progress can be summarized by the following changes: (1) from template matching to corpus-based statistical modeling, e.g., HMMs and n-grams, (2) from filter bank/spectral resonance to cepstral features (cepstrum + Δcepstrum + ΔΔcepstrum), (3) from heuristic time-normalization to DTW/DP matching, (4) from "distance"-based to likelihood-based methods, (5) from maximum likelihood to discriminative approaches, e.g., MCE/GPD and MMI, (6) from isolated-word to continuous speech recognition, (7) from small-vocabulary to large-vocabulary recognition, (8) from context-independent units to context-dependent units for recognition, (9) from clean speech to noisy/telephone speech recognition, (10) from single-speaker to speaker-independent/adaptive recognition, (11) from monologue to dialogue/conversation recognition, (12) from read speech to spontaneous speech recognition, (13) from recognition to understanding, (14) from single-modality (audio signal only) to multi-modal (audio/visual) speech recognition, (15) from hardware recognizers to software recognizers, and (16) from no commercial applications to many practical commercial applications. Most of these advances have taken place in both speech recognition and speaker recognition. The majority of technological changes have been directed toward increasing the robustness of recognition, including many other important techniques not noted above.

  14. Perception of English palatal codas by Korean speakers of English

    NASA Astrophysics Data System (ADS)

    Yeon, Sang-Hee

    2003-04-01

    This study aimed at looking at perception of English palatal codas by Korean speakers of English to determine if perception problems are the source of production problems. In particular, first, this study looked at the possible first language effect on the perception of English palatal codas. Second, a possible perceptual source of vowel epenthesis after English palatal codas was investigated. In addition, individual factors, such as length of residence, TOEFL score, gender and academic status, were compared to determine if those affected the varying degree of the perception accuracy. Eleven adult Korean speakers of English as well as three native speakers of English participated in the study. Three sets of a perception test including identification of minimally different English pseudo- or real words were carried out. The results showed that, first, the Korean speakers perceived the English codas significantly worse than the Americans. Second, the study supported the idea that Koreans perceived an extra /i/ after the final affricates due to final release. Finally, none of the individual factors explained the varying degree of the perceptional accuracy. In particular, TOEFL scores and the perception test scores did not have any statistically significant association.

  15. Evaluating acoustic speaker normalization algorithms: evidence from longitudinal child data.

    PubMed

    Kohn, Mary Elizabeth; Farrington, Charlie

    2012-03-01

    Speaker vowel formant normalization, a technique that controls for variation introduced by physical differences between speakers, is necessary in variationist studies to compare speakers of different ages, genders, and physiological makeup in order to understand non-physiological variation patterns within populations. Many algorithms have been established to reduce variation introduced into vocalic data from physiological sources. The lack of real-time studies tracking the effectiveness of these normalization algorithms from childhood through adolescence inhibits exploration of child participation in vowel shifts. This analysis compares normalization techniques applied to data collected from ten African American children across five time points. Linear regressions compare the reduction in variation attributable to age and gender for each speaker for the vowels BEET, BAT, BOT, BUT, and BOAR. A normalization technique is successful if it maintains variation attributable to a reference sociolinguistic variable, while reducing variation attributable to age. Results indicate that normalization techniques which rely on both a measure of central tendency and range of the vowel space perform best at reducing variation attributable to age, although some variation attributable to age persists after normalization for some sections of the vowel space. © 2012 Acoustical Society of America
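
    A widely used normalization of the kind evaluated in studies like this is Lobanov's z-score method, which expresses each formant value relative to the speaker's own mean and standard deviation, removing physiological scale differences between speakers. A minimal sketch with invented formant values (not this study's data or code):

```python
from statistics import mean, stdev

def lobanov_normalize(formants):
    """Z-score (Lobanov) normalization of one speaker's formant values."""
    mu, sigma = mean(formants), stdev(formants)
    return [(f - mu) / sigma for f in formants]

# Hypothetical F1 values (Hz) for the same four vowels from a child and an
# adult; the child's raw values are uniformly higher due to a shorter
# vocal tract, not a different accent.
child_f1 = [400.0, 600.0, 800.0, 1000.0]
adult_f1 = [300.0, 450.0, 600.0, 750.0]

# After normalization the two vowel spaces line up, so remaining
# differences can be attributed to sociolinguistic rather than
# physiological variation.
for c, a in zip(lobanov_normalize(child_f1), lobanov_normalize(adult_f1)):
    assert abs(c - a) < 1e-9
```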

  16. Do children go for the nice guys? The influence of speaker benevolence and certainty on selective word learning.

    PubMed

    Bergstra, Myrthe; De Mulder, Hannah N. M.; Coopmans, Peter

    2018-04-06

    This study investigated how speaker certainty (a rational cue) and speaker benevolence (an emotional cue) influence children's willingness to learn words in a selective learning paradigm. In two experiments four- to six-year-olds learnt novel labels from two speakers and, after a week, their memory for these labels was reassessed. Results demonstrated that children retained the label-object pairings for at least a week. Furthermore, children preferred to learn from certain over uncertain speakers, but they had no significant preference for nice over nasty speakers. When the cues were combined, children followed certain speakers, even if they were nasty. However, children did prefer to learn from nice and certain speakers over nasty and certain speakers. These results suggest that rational cues regarding a speaker's linguistic competence trump emotional cues regarding a speaker's affective status in word learning. However, emotional cues were found to have a subtle influence on this process.

  17. Effects of Language Background on Gaze Behavior: A Crosslinguistic Comparison Between Korean and German Speakers

    PubMed Central

    Goller, Florian; Lee, Donghoon; Ansorge, Ulrich; Choi, Soonja

    2017-01-01

    Languages differ in how they categorize spatial relations: While German differentiates between containment (in) and support (auf) with distinct spatial words—(a) den Kuli IN die Kappe stecken (“put pen in cap”); (b) die Kappe AUF den Kuli stecken (“put cap on pen”)—Korean uses a single spatial word (kkita) collapsing (a) and (b) into one semantic category, particularly when the spatial enclosure is tight-fit. Korean uses a different word (i.e., netha) for loose-fits (e.g., apple in bowl). We tested whether these differences influence the attention of the speaker. In a crosslinguistic study, we compared native German speakers with native Korean speakers. Participants rated the similarity of two successive video clips of several scenes where two objects were joined or nested (either in a tight or loose manner). The rating data show that Korean speakers base their rating of similarity more on tight- versus loose-fit, whereas German speakers base their rating more on containment versus support (in vs. auf). Throughout the experiment, we also measured the participants’ eye movements. Korean speakers looked equally long at the moving Figure object and at the stationary Ground object, whereas German speakers were more biased to look at the Ground object. Additionally, Korean speakers also looked more at the region where the two objects touched than did German speakers. We discuss our data in the light of crosslinguistic semantics and the extent of their influence on spatial cognition and perception. PMID:29362644

  18. The Sound of Voice: Voice-Based Categorization of Speakers' Sexual Orientation within and across Languages.

    PubMed

    Sulpizio, Simone; Fasoli, Fabio; Maass, Anne; Paladino, Maria Paola; Vespignani, Francesco; Eyssel, Friederike; Bentler, Dominik

    2015-01-01

    Empirical research had initially shown that English listeners are able to identify the speakers' sexual orientation based on voice cues alone. However, the accuracy of this voice-based categorization, as well as its generalizability to other languages (language-dependency) and to non-native speakers (language-specificity), has been questioned recently. Consequently, we address these open issues in 5 experiments: First, we tested whether Italian and German listeners are able to correctly identify sexual orientation of same-language male speakers. Then, participants of both nationalities listened to voice samples and rated the sexual orientation of both Italian and German male speakers. We found that listeners were unable to identify the speakers' sexual orientation correctly. However, speakers were consistently categorized as either heterosexual or gay on the basis of how they sounded. Moreover, a similar pattern of results emerged when listeners judged the sexual orientation of speakers of their own and of the foreign language. Overall, this research suggests that voice-based categorization of sexual orientation reflects the listeners' expectations of how gay voices sound rather than being an accurate detector of the speakers' actual sexual identity. Results are discussed with regard to accuracy, acoustic features of voices, language dependency and language specificity.

  19. The “Virtual” Panel: A Computerized Model for LGBT Speaker Panels

    PubMed Central

    Beasley, Christopher; Torres-Harding, Susan; Pedersen, Paula J.

    2012-01-01

    Recent societal trends indicate more tolerance for homosexuality, but prejudice remains on college campuses. Speaker panels are commonly used in classrooms as a way to educate students about sexual diversity and decrease negative attitudes toward sexual diversity. The advent of computer-delivered instruction presents a unique opportunity to broaden the impact of traditional speaker panels. The current investigation examined the influence of an interactive “virtual” gay and lesbian speaker panel on cognitive, affective, and behavioral homonegativity. Findings suggest the computer-administered panel lowers homonegativity, particularly affective experiential homonegativity. The implications of these findings for research and practice are discussed. PMID:23646036

  20. Proficiency in English sentence stress production by Cantonese speakers who speak English as a second language (ESL).

    PubMed

    Ng, Manwa L; Chen, Yang

    2011-12-01

    The present study examined English sentence stress produced by native Cantonese speakers who were speaking English as a second language (ESL). Cantonese ESL speakers' proficiency in English stress production as perceived by English-speaking listeners was also studied. Acoustical parameters associated with sentence stress, including fundamental frequency (F0), vowel duration, and intensity, were measured from the English sentences produced by 40 Cantonese ESL speakers. Data were compared with those obtained from 40 native speakers of American English. The speech samples were also judged for placement, degree, and naturalness of stress by eight listeners who were native speakers of American English. Results showed that Cantonese ESL speakers were able to use F0, vowel duration, and intensity to differentiate sentence stress patterns. Yet, both female and male Cantonese ESL speakers exhibited consistently higher F0 in stressed words than English speakers. Overall, Cantonese ESL speakers were found to be proficient in using duration and intensity to signal sentence stress, in a way comparable with English speakers. In addition, F0 and intensity were found to correlate closely with perceptual judgement, and the degree of stress with the naturalness of stress.

  1. Improving the Effectiveness of Speaker Verification Domain Adaptation With Inadequate In-Domain Data

    DTIC Science & Technology

    2017-08-20

    Improving the Effectiveness of Speaker Verification Domain Adaptation With Inadequate In-Domain Data. Bengt J. Borgström, Elliot Singer, Douglas... (dar@ll.mit.edu, es@ll.mit.edu, omid.sadjadi@nist.gov). Abstract: This paper addresses speaker verification domain adaptation with... contain speakers with low channel diversity. Existing domain adaptation methods are reviewed, and their shortcomings are discussed. We derive an

  2. Speaker identification for the improvement of the security communication between law enforcement units

    NASA Astrophysics Data System (ADS)

    Tovarek, Jaromir; Partila, Pavol

    2017-05-01

    This article discusses speaker identification for the improvement of secure communication between law enforcement units. The main task of this research was to develop a text-independent speaker identification system which can be used for real-time recognition. This system is designed for identification in the open set, meaning that the unknown speaker can be anyone. Communication itself is secured, but we have to check the authorization of the communicating parties: we have to decide whether the unknown speaker is authorized for the given action. The calls are recorded by an IP telephony server and then these recordings are evaluated using classification. If the system determines that the speaker is not authorized, it sends a warning message to the administrator. This message can indicate, for example, a stolen phone or another unusual situation. The administrator then performs the appropriate actions. Our proposed system uses a multilayer neural network for classification, consisting of three layers (input layer, hidden layer, and output layer). The number of neurons in the input layer corresponds to the length of the speech feature vector; the output layer represents the classified speakers. The artificial neural network classifies the speech signal frame by frame, but the final decision is made over the complete record. This rule substantially increases the accuracy of the classification. Input data for the neural network are thirteen mel-frequency cepstral coefficients (MFCCs), which describe the behavior of the vocal tract; these parameters are the most widely used for speaker recognition. Parameters for training, testing, and validation were extracted from recordings of authorized users. Recording conditions for the training data correspond to the real traffic of the system (sampling frequency, bit rate). The main benefit of the research is a system for text-independent speaker identification applied to secure communication between law enforcement units.
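The record-level decision rule described above (classify frame by frame, then decide over the complete record) can be sketched as a majority vote. The stand-in frame classifier and toy feature values below are purely illustrative, not the trained multilayer network from the article:

```python
from collections import Counter

def classify_record(frame_features, classify_frame):
    """Aggregate per-frame speaker decisions into one record-level label.

    frame_features: list of 13-dimensional MFCC vectors, one per frame.
    classify_frame: callable mapping one feature vector to a speaker label
        (a stand-in for the trained input/hidden/output-layer network).
    """
    votes = Counter(classify_frame(f) for f in frame_features)
    label, count = votes.most_common(1)[0]
    return label, count / len(frame_features)

# Toy stand-in classifier: decide by the sign of the first coefficient.
toy = lambda f: "speaker_A" if f[0] > 0 else "speaker_B"
frames = [[0.3] + [0.0] * 12] * 7 + [[-0.2] + [0.0] * 12] * 3
label, conf = classify_record(frames, toy)
```

Even with 3 of 10 frames voting for the wrong speaker, the record-level vote still returns speaker_A, which is why deciding over the complete record rather than per frame substantially increases accuracy.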

  3. Articulatory Movements during Vowels in Speakers with Dysarthria and Healthy Controls

    ERIC Educational Resources Information Center

    Yunusova, Yana; Weismer, Gary; Westbury, John R.; Lindstrom, Mary J.

    2008-01-01

    Purpose: This study compared movement characteristics of markers attached to the jaw, lower lip, tongue blade, and dorsum during production of selected English vowels by normal speakers and speakers with dysarthria due to amyotrophic lateral sclerosis (ALS) or Parkinson disease (PD). The study asked the following questions: (a) Are movement…

  4. A Comparison of Coverbal Gesture Use in Oral Discourse Among Speakers With Fluent and Nonfluent Aphasia

    PubMed Central

    Law, Sam-Po; Chak, Gigi Wan-Chi

    2017-01-01

    Purpose Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Method Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Results Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. Conclusions The current results supported the sketch model of language–gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed. PMID:28609510

  5. A Comparison of Coverbal Gesture Use in Oral Discourse Among Speakers With Fluent and Nonfluent Aphasia.

    PubMed

    Kong, Anthony Pak-Hin; Law, Sam-Po; Chak, Gigi Wan-Chi

    2017-07-12

    Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. The current results supported the sketch model of language-gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed.

  6. Effect of delayed auditory feedback on normal speakers at two speech rates

    NASA Astrophysics Data System (ADS)

    Stuart, Andrew; Kalinowski, Joseph; Rastatter, Michael P.; Lynch, Kerry

    2002-05-01

    This study investigated the effect of short and long auditory feedback delays at two speech rates with normal speakers. Seventeen participants spoke under delayed auditory feedback (DAF) at 0, 25, 50, and 200 ms at normal and fast rates of speech. Significantly two to three times more dysfluencies were displayed at 200 ms (p<0.05) relative to no delay or the shorter delays. There were significantly more dysfluencies observed at the fast rate of speech (p=0.028). These findings implicate the peripheral feedback system(s) of fluent speakers for the disruptive effects of DAF on normal speech production at long auditory feedback delays. Considering the contrast in fluency/dysfluency exhibited between normal speakers and those who stutter at short and long delays, it appears that speech disruption of normal speakers under DAF is a poor analog of stuttering.

  7. What a speaker's choice of frame reveals: reference points, frame selection, and framing effects.

    PubMed

    McKenzie, Craig R M; Nelson, Jonathan D

    2003-09-01

    Framing effects are well established: Listeners' preferences depend on how outcomes are described to them, or framed. Less well understood is what determines how speakers choose frames. Two experiments revealed that reference points systematically influenced speakers' choices between logically equivalent frames. For example, speakers tended to describe a 4-ounce cup filled to the 2-ounce line as half full if it was previously empty but described it as half empty if it was previously full. Similar results were found when speakers could describe the outcome of a medical treatment in terms of either mortality or survival (e.g., 25% die vs. 75% survive). Two additional experiments showed that listeners made accurate inferences about speakers' reference points on the basis of the selected frame (e.g., if a speaker described a cup as half empty, listeners inferred that the cup used to be full). Taken together, the data suggest that frames reliably convey implicit information in addition to their explicit content, which helps explain why framing effects are so robust.

  8. Congenital amusia in speakers of a tone language: association with lexical tone agnosia.

    PubMed

    Nan, Yun; Sun, Yanan; Peretz, Isabelle

    2010-09-01

    Congenital amusia is a neurogenetic disorder that affects the processing of musical pitch in speakers of non-tonal languages like English and French. We assessed whether this musical disorder exists among speakers of Mandarin Chinese who use pitch to alter the meaning of words. Using the Montreal Battery of Evaluation of Amusia, we tested 117 healthy young Mandarin speakers with no self-declared musical problems and 22 individuals who reported musical difficulties and scored two standard deviations below the mean obtained by the Mandarin speakers without amusia. These 22 amusic individuals showed a similar pattern of musical impairment as did amusic speakers of non-tonal languages, by exhibiting a more pronounced deficit in melody than in rhythm processing. Furthermore, nearly half the tested amusics had impairments in the discrimination and identification of Mandarin lexical tones. Six showed marked impairments, displaying what could be called lexical tone agnosia, but had normal tone production. Our results show that speakers of tone languages such as Mandarin may experience musical pitch disorder despite early exposure to speech-relevant pitch contrasts. The observed association between the musical disorder and lexical tone difficulty indicates that the pitch disorder as defining congenital amusia is not specific to music or culture but is rather general in nature.
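The screening criterion used here (scoring two standard deviations below the mean of the non-amusic control group) is easy to make concrete. The scores below are hypothetical, not data from the study:

```python
import statistics

def amusia_cutoff(control_scores, k=2.0):
    """Cutoff k standard deviations below the control-group mean,
    the criterion used to classify participants as amusic."""
    return statistics.mean(control_scores) - k * statistics.stdev(control_scores)

controls = [26, 27, 28, 25, 29, 27, 28, 26, 27, 28]  # hypothetical subtest scores
cut = amusia_cutoff(controls)
flagged = [s for s in (20, 24, 27) if s < cut]  # scores below the cutoff
```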

  9. Use of the BAT with a Cantonese-Putonghua Speaker with Aphasia

    ERIC Educational Resources Information Center

    Kong, Anthony Pak-Hin; Weekes, Brendan Stuart

    2011-01-01

    The aim of this article is to illustrate the use of the Bilingual Aphasia Test (BAT) with a Cantonese-Putonghua speaker. We describe G, who is a relatively young Chinese bilingual speaker with aphasia. G's communication abilities in his L2, Putonghua, were impaired following brain damage. This impairment caused specific difficulties in…

  10. Processing Lexical and Speaker Information in Repetition and Semantic/Associative Priming

    ERIC Educational Resources Information Center

    Lee, Chao-Yang; Zhang, Yu

    2018-01-01

    The purpose of this study is to investigate the interaction between processing lexical and speaker-specific information in spoken word recognition. The specific question is whether repetition and semantic/associative priming is reduced when the prime and target are produced by different speakers. In Experiment 1, the prime and target were repeated…

  11. Speaker Reliability in Preschoolers' Inferences about the Meanings of Novel Words

    ERIC Educational Resources Information Center

    Sobel, David M.; Sedivy, Julie; Buchanan, David W.; Hennessy, Rachel

    2012-01-01

    Preschoolers participated in a modified version of the disambiguation task, designed to test whether the pragmatic environment generated by a reliable or unreliable speaker affected how children interpreted novel labels. Two objects were visible to children, while a third was only visible to the speaker (a fact known by the child). Manipulating…

  12. Dissociations and Associations of Performance in Syntactic Comprehension in Aphasia and their Implications for the Nature of Aphasic Deficits

    PubMed Central

    Caplan, David; Michaud, Jennifer; Hufford, Rebecca

    2013-01-01

    Sixty-one persons with aphasia (PWA) were tested on syntactic comprehension in three tasks: sentence-picture matching, sentence-picture matching with auditory moving window presentation, and object manipulation. There were significant correlations of performances on sentences across tasks. First factors, on which all sentence types loaded, accounted for most of the variance in unrotated factor analyses in each task. Dissociations in performance between sentence types that differed minimally in their syntactic structures were not consistent across tasks. These results replicate previous results with smaller samples and provide important validation of basic aspects of aphasic performance in this area of language processing. They point to the role of a reduction in processing resources and of the interaction of task demands and parsing and interpretive abilities in the genesis of patient performance. PMID:24061104

  13. Dissociations and associations of performance in syntactic comprehension in aphasia and their implications for the nature of aphasic deficits.

    PubMed

    Caplan, David; Michaud, Jennifer; Hufford, Rebecca

    2013-10-01

    Sixty-one persons with aphasia (PWA) were tested on syntactic comprehension in three tasks: sentence-picture matching, sentence-picture matching with auditory moving window presentation, and object manipulation. There were significant correlations of performances on sentences across tasks. First factors on which all sentence types loaded in unrotated factor analyses accounted for most of the variance in each task. Dissociations in performance between sentence types that differed minimally in their syntactic structures were not consistent across tasks. These results replicate previous results with smaller samples and provide important validation of basic aspects of aphasic performance in this area of language processing. They point to the role of a reduction in processing resources and of the interaction of task demands and parsing and interpretive abilities in the genesis of patient performance. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Gender parity trends for invited speakers at four prominent virology conference series.

    PubMed

    Kalejta, Robert F; Palmenberg, Ann C

    2017-06-07

    Scientific conferences are most beneficial to participants when they showcase significant new experimental developments, accurately summarize the current state of the field, and provide strong opportunities for collaborative networking. A top-notch slate of invited speakers, assembled by conference organizers or committees, is key to achieving these goals. The perceived underrepresentation of female speakers at prominent scientific meetings is currently a popular topic for discussion, but one that often lacks supportive data. We compiled the full rosters of invited speakers over the last 35 years for four prominent international virology conferences, the American Society for Virology Annual Meeting (ASV), the International Herpesvirus Workshop (IHW), the Positive-Strand RNA Virus Symposium (PSR), and the Gordon Research Conference on Viruses & Cells (GRC). The rosters were cross-indexed by unique names, gender, year, and repeat invitations. When plotted as gender-dependent trends over time, all four conferences showed a clear proclivity for male-dominated invited speaker lists. Encouragingly, shifts toward parity are emerging within all units, but at different rates. Not surprisingly, both selection of a larger percentage of first-time participants and the presence of a woman on the speaker selection committee correlated with improved parity. Session chair information was also collected for the IHW and GRC. These visible positions also displayed a strong male dominance over time that is eroding slowly. We offer our personal interpretation of these data to help future organizers achieve improved equity among the limited number of available positions for session moderators and invited speakers. IMPORTANCE Politicians and media members have a tendency to cite anecdotes as conclusions without any supporting data. This happens so frequently now that a name for it has emerged: fake news. Good science proceeds otherwise. The underrepresentation of women as invited

  15. Gender Parity Trends for Invited Speakers at Four Prominent Virology Conference Series

    PubMed Central

    Palmenberg, Ann C.

    2017-01-01

    ABSTRACT Scientific conferences are most beneficial to participants when they showcase significant new experimental developments, accurately summarize the current state of the field, and provide strong opportunities for collaborative networking. A top-notch slate of invited speakers, assembled by conference organizers or committees, is key to achieving these goals. The perceived underrepresentation of female speakers at prominent scientific meetings is currently a popular topic for discussion, but one that often lacks supportive data. We compiled the full rosters of invited speakers over the last 35 years for four prominent international virology conferences, the American Society for Virology Annual Meeting (ASV), the International Herpesvirus Workshop (IHW), the Positive-Strand RNA Virus Symposium (PSR), and the Gordon Research Conference on Viruses & Cells (GRC). The rosters were cross-indexed by unique names, gender, year, and repeat invitations. When plotted as gender-dependent trends over time, all four conferences showed a clear proclivity for male-dominated invited speaker lists. Encouragingly, shifts toward parity are emerging within all units, but at different rates. Not surprisingly, both selection of a larger percentage of first-time participants and the presence of a woman on the speaker selection committee correlated with improved parity. Session chair information was also collected for the IHW and GRC. These visible positions also displayed a strong male dominance over time that is eroding slowly. We offer our personal interpretation of these data to help future organizers achieve improved equity among the limited number of available positions for session moderators and invited speakers. IMPORTANCE Politicians and media members have a tendency to cite anecdotes as conclusions without any supporting data. This happens so frequently now that a name for it has emerged: fake news. Good science proceeds otherwise. The underrepresentation of women as

  16. Development of a speaker discrimination test for cochlear implant users based on the Oldenburg Logatome corpus.

    PubMed

    Mühler, Roland; Ziese, Michael; Rostalski, Dorothea

    2009-01-01

    The purpose of the study was to develop a speaker discrimination test for cochlear implant (CI) users. The speech material was drawn from the Oldenburg Logatome (OLLO) corpus, which contains 150 different logatomes read by 40 German and 10 French native speakers. The prototype test battery included 120 logatome pairs spoken by 5 male and 5 female speakers with balanced representations of the conditions 'same speaker' and 'different speaker'. Ten adult normal-hearing listeners and 12 adult postlingually deafened CI users were included in a study to evaluate the suitability of the test. The mean speaker discrimination score for the CI users was 67.3% correct and for the normal-hearing listeners 92.2% correct. A significant influence of voice gender and fundamental frequency difference on the speaker discrimination score was found in CI users as well as in normal-hearing listeners. Since the test results of the CI users were significantly above chance level and no ceiling effect was observed, we conclude that subsets of the OLLO corpus are very well suited to speaker discrimination experiments in CI users. Copyright 2008 S. Karger AG, Basel.
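The claim that the CI users' 67.3% score was significantly above chance can be checked with an exact one-sided binomial test; on 120 same/different trials, chance is 50%. The trial counts below follow the abstract, but the test itself is our illustration, not necessarily the statistic the authors used:

```python
from math import comb

def p_above_chance(n_correct, n_trials, p_chance=0.5):
    """Exact one-sided binomial test: probability of getting at least
    n_correct of n_trials correct by guessing alone."""
    return sum(comb(n_trials, k) * p_chance ** k * (1 - p_chance) ** (n_trials - k)
               for k in range(n_correct, n_trials + 1))

# 67.3% of the 120 logatome pairs is about 81 correct answers.
p = p_above_chance(81, 120)
```

The resulting p-value is far below 0.05, consistent with the reported above-chance performance.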

  17. Bystander capability to activate speaker function for continuous dispatcher assisted CPR in case of suspected cardiac arrest.

    PubMed

    Steensberg, Alvilda T; Eriksen, Mette M; Andersen, Lars B; Hendriksen, Ole M; Larsen, Heinrich D; Laier, Gunnar H; Thougaard, Thomas

    2017-06-01

    The European Resuscitation Council Guidelines 2015 recommend that bystanders activate their mobile phone's speaker function, if possible, in case of suspected cardiac arrest. This is to facilitate continuous dialogue with the dispatcher, including (if required) cardiopulmonary resuscitation instructions. The aim of this study was to measure the bystander capability to activate the speaker function in case of suspected cardiac arrest. Over 87 days, a systematic prospective registration of bystander capability to activate the speaker function, when cardiac arrest was suspected, was performed. For those asked, "Can you activate your mobile phone's speaker function?", audio recordings were examined and categorized into groups according to the bystanders' capability to activate the speaker function on their own initiative, without instructions, or with instructions from the emergency medical dispatcher. Time delay was measured, in seconds, for the bystanders without a pre-activated speaker function. In total, 42.0% (58) were able to activate the speaker function without instructions, 2.9% (4) with instructions, and 18.1% (25) on their own initiative; 37.0% (51) were unable to activate the speaker function. The median time to activate the speaker function was 19 s with instructions and 8 s without. Dispatcher-assisted cardiopulmonary resuscitation with an activated speaker function, in cases of suspected cardiac arrest, allows for continuous dialogue between the emergency medical dispatcher and the bystander. In this study, we found a 63.0% success rate of activating the speaker function in such situations. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Artificially intelligent recognition of Arabic speaker using voice print-based local features

    NASA Astrophysics Data System (ADS)

    Mahmood, Awais; Alsulaiman, Mansour; Muhammad, Ghulam; Akram, Sheeraz

    2016-11-01

    Local features for any pattern recognition system are based on information extracted locally. In this paper, a local feature extraction technique was developed. The feature was extracted in the time-frequency plane by taking the moving average along the diagonal directions of the time-frequency plane. This feature captured the time-frequency events, producing a unique pattern for each speaker that can be viewed as a voice print of the speaker. Hence, we referred to this technique as the voice print-based local feature. The proposed feature was compared to other features, including the mel-frequency cepstral coefficient (MFCC), for speaker recognition using two different databases. One of the databases used in the comparison is a subset of an LDC database consisting of two short sentences uttered by 182 speakers. The proposed feature attained a 98.35% recognition rate, compared with 96.7% for MFCC, on the LDC subset.
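A minimal sketch of the averaging idea: each output cell averages cells of a time-frequency matrix that step one frequency bin and one time frame at once. This toy version handles only one diagonal direction and plain nested lists; the published feature also covers the opposite diagonal and operates on real spectrogram data:

```python
def diagonal_moving_average(tf, win=3):
    """Moving average along the main-diagonal direction of a
    time-frequency matrix tf[freq][time], with window length win."""
    n_freq, n_time = len(tf), len(tf[0])
    out = []
    for f in range(n_freq - win + 1):
        row = []
        for t in range(n_time - win + 1):
            # Average win cells stepping +1 in frequency and +1 in time.
            row.append(sum(tf[f + i][t + i] for i in range(win)) / win)
        out.append(row)
    return out

tf = [[1, 2, 3, 4],
      [2, 3, 4, 5],
      [3, 4, 5, 6],
      [4, 5, 6, 7]]
feat = diagonal_moving_average(tf)  # 2x2 map of diagonal averages
```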

  19. The Use of Native Speaker Norms in Critical Period Hypothesis Research

    ERIC Educational Resources Information Center

    Andringa, Sible

    2014-01-01

    In critical period hypothesis (CPH) research, native speaker (NS) norm groups have often been used to determine whether nonnative speakers (NNSs) were able to score within the NS range of scores. One goal of this article is to investigate what NS samples were used in previous CPH research. The literature review shows that NS control groups tend to…

  20. (Non)Native Speakering: The (Dis)Invention of (Non)Native Speakered Subjectivities in a Graduate Teacher Education Program

    ERIC Educational Resources Information Center

    Aneja, Geeta A.

    2017-01-01

    Despite its imprecision, the native-nonnative dichotomy has become the dominant paradigm for categorizing language users, learners, and educators. The "NNEST Movement" has been instrumental in documenting the privilege of native speakers, the marginalization of their nonnative counterparts, and why an individual may be perceived as one…

  1. Speaker verification system using acoustic data and non-acoustic data

    DOEpatents

    Gable, Todd J [Walnut Creek, CA; Ng, Lawrence C [Danville, CA; Holzrichter, John F [Berkeley, CA; Burnett, Greg C [Livermore, CA

    2006-03-21

    A method and system for speech characterization. One embodiment includes a method for speaker verification which includes collecting data from a speaker, wherein the data comprises acoustic data and non-acoustic data. The data is used to generate a template that includes a first set of "template" parameters. The method further includes receiving a real-time identity claim from a claimant, and using acoustic data and non-acoustic data from the identity claim to generate a second set of parameters. The method further includes comparing the first set of parameters to the second set of parameters to determine whether the claimant is the speaker. The first set of parameters and the second set of parameters include at least one purely non-acoustic parameter, including a non-acoustic glottal shape parameter derived from averaging multiple glottal cycle waveforms.
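The enrollment-then-verification flow in the claim can be sketched as a distance comparison between the enrolled template parameters and the claimant's parameters. The Euclidean distance and the threshold below are illustrative stand-ins; the patent does not commit to this particular comparison rule, and the parameter values are hypothetical:

```python
from math import dist  # Euclidean distance, available since Python 3.8

def verify(template, claim, threshold):
    """Accept the identity claim when the claimant's parameter vector
    lies within `threshold` of the enrolled template. The vectors are
    meant to mix acoustic parameters with at least one non-acoustic
    parameter, such as a glottal shape parameter."""
    return dist(template, claim) <= threshold

template = [1.0, 0.5, -0.2, 0.8]   # enrolled parameters (hypothetical)
genuine  = [1.1, 0.5, -0.1, 0.8]   # close to the template -> accepted
impostor = [0.2, -0.9, 0.7, -0.5]  # far from the template -> rejected
```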

  2. Objective eye-gaze behaviour during face-to-face communication with proficient alaryngeal speakers: a preliminary study.

    PubMed

    Evitts, Paul; Gallop, Robert

    2011-01-01

    There is a large body of research demonstrating the impact of visual information on speaker intelligibility in both normal and disordered speaker populations. However, there is minimal information on which specific visual features listeners find salient during conversational discourse. To investigate listeners' eye-gaze behaviour during face-to-face conversation with normal, laryngeal and proficient alaryngeal speakers. Sixty participants individually participated in a 10-min conversation with one of four speakers (typical laryngeal, tracheoesophageal, oesophageal, electrolaryngeal; 15 participants randomly assigned to one mode of speech). All speakers were > 85% intelligible and were judged to be 'proficient' by two certified speech-language pathologists. Participants were fitted with a head-mounted eye-gaze tracking device (Mobile Eye, ASL) that calculated the region of interest and mean duration of eye-gaze. Self-reported gaze behaviour was also obtained following the conversation using a 10 cm visual analogue scale. While listening, participants viewed the lower facial region of the oesophageal speaker more than the normal or tracheoesophageal speaker. Results of non-hierarchical cluster analyses showed that while listening, the pattern of eye-gaze was predominantly directed at the lower face of the oesophageal and electrolaryngeal speaker and more evenly dispersed among the background, lower face, and eyes of the normal and tracheoesophageal speakers. Finally, results show a low correlation between self-reported eye-gaze behaviour and objective regions of interest data. Overall, results suggest similar eye-gaze behaviour when healthy controls converse with normal and tracheoesophageal speakers and that participants had significantly different eye-gaze patterns when conversing with an oesophageal speaker. Results are discussed in terms of existing eye-gaze data and its potential implications on auditory-visual speech perception. © 2011 Royal College of Speech

  3. The main concept analysis in Cantonese aphasic oral discourse: external validation and monitoring chronic aphasia.

    PubMed

    Kong, Anthony Pak-Hin

    2011-02-01

    The 1st aim of this study was to further establish the external validity of the main concept (MC) analysis by examining its relationship with the Cantonese Linguistic Communication Measure (CLCM; Kong, 2006; Kong & Law, 2004), an established quantitative system for narrative production, and the Cantonese version of the Western Aphasia Battery (CAB; Yiu, 1992). The 2nd purpose of the study was to evaluate how well the MC analysis reflects the stability of discourse production among chronic Cantonese speakers with aphasia. Sixteen participants with aphasia were evaluated on the MC analysis, CAB, and CLCM in the summer of 2008 and were subsequently reassessed in the summer of 2009. They encompassed a range of aphasia severity (with an Aphasia Quotient ranging between 30.2/100 and 94.8/100 at the time of the 1st evaluation). Significant associations were found between the MC measures and the corresponding CLCM indices and CAB performance scores that were relevant to the presence, accuracy, and completeness of content in oral narratives. Moreover, the MC analysis was found to yield comparable scores for chronic speakers on 2 occasions 1 year apart. The present study has further established the external validity of MC analysis in Cantonese. Future investigations involving more speakers with aphasia will allow adequate description of its psychometric properties.

  4. Emergence of neural encoding of auditory objects while listening to competing speakers

    PubMed Central

    Ding, Nai; Simon, Jonathan Z.

    2012-01-01

    A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Here, we address this question by recording from subjects selectively listening to one of two competing speakers, either of different or the same sex, using magnetoencephalography. Individual neural representations are seen for the speech of the two speakers, with each being selectively phase locked to the rhythm of the corresponding speech stream and from which can be exclusively reconstructed the temporal envelope of that speech stream. The neural representation of the attended speech dominates responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensity of the attended and background speakers is separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker but not to the intensity of the background speaker, suggesting an object-level intensity gain control. In summary, these results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation. PMID:22753470

  5. The Acquisition of English Focus Marking by Non-Native Speakers

    NASA Astrophysics Data System (ADS)

    Baker, Rachel Elizabeth

    This dissertation examines Mandarin and Korean speakers' acquisition of English focus marking, which is realized by accenting particular words within a focused constituent. It is important for non-native speakers to learn how accent placement relates to focus in English because appropriate accent placement and realization makes a learner's English more native-like and easier to understand. Such knowledge may also improve their English comprehension skills. In this study, 20 native English speakers, 20 native Mandarin speakers, and 20 native Korean speakers participated in four experiments: (1) a production experiment, in which they were recorded reading the answers to questions, (2) a perception experiment, in which they were asked to determine which word in a recording was the last prominent word, (3) an understanding experiment, in which they were asked whether the answers in recorded question-answer pairs had context-appropriate prosody, and (4) an accent placement experiment, in which they were asked which word they would make prominent in a particular context. Finally, a new group of native English speakers listened to utterances produced in the production experiment and determined whether the prosody of each utterance was appropriate for its context. The results of the five experiments support a novel predictive model for second language prosodic focus marking acquisition. This model holds that both transfer of linguistic features from a learner's native language (L1) and features of their second language (L2) affect learners' acquisition of prosodic focus marking. As a result, the model includes two complementary components: the Transfer Component and the L2 Challenge Component. The Transfer Component predicts that prosodic structures in the L2 will be more easily acquired by learners whose L1 has similar structures than by those whose L1 does not, even if there are differences between the L1 and L2 in how the structures are realized. The L2

  6. Speaker transfer in children's peer conversation: completing communication-aid-mediated contributions.

    PubMed

    Clarke, Michael; Bloch, Steven; Wilkinson, Ray

    2013-03-01

    Managing the exchange of speakers from one person to another effectively is a key issue for participants in everyday conversational interaction. Speakers use a range of resources to indicate, in advance, when their turn will come to an end, and listeners attend to such signals in order to know when they might legitimately speak. Using the principles and findings of conversation analysis, this paper examines features of speaker transfer in a conversation between a boy with cerebral palsy who has been provided with a voice-output communication aid (VOCA) and a peer without physical or communication difficulties. Specifically, the analysis focuses on turn exchange where a VOCA-mediated contribution approaches completion and the child without communication needs is due to speak next.

  7. Comparing headphone and speaker effects on simulated driving.

    PubMed

    Nelson, T M; Nilsson, T H

    1990-12-01

    Twelve persons drove for three hours in an automobile simulator while listening to music at a sound level of 63 dB over stereo headphones during one session and from a dashboard speaker during another session. They were required to steer a mountain highway, maintain a certain indicated speed, shift gears, and respond to occasional hazards. Steering and speed control were dependent on visual cues. The need to shift and the hazards were indicated by sound and vibration effects. With the headphones, the driver's average reaction time for the most complex task presented--shifting gears--was about one-third of a second longer than with the speaker. The use of headphones did not delay the development of subjective fatigue.

  8. Selectivity of lexical-semantic disorders in Polish-speaking patients with aphasia: evidence from single-word comprehension.

    PubMed

    Jodzio, Krzysztof; Biechowska, Daria; Leszniewska-Jodzio, Barbara

    2008-09-01

    Several neuropsychological studies have shown that patients with brain damage may demonstrate selective category-specific deficits of auditory comprehension. The present paper reports on an investigation of aphasic patients' preserved ability to perform a semantic task on spoken words despite severe impairment in auditory comprehension, as shown by failure in matching spoken words to pictured objects. Twenty-six aphasic patients (11 women and 15 men) with impaired speech comprehension due to a left-hemisphere ischaemic stroke were examined; all were right-handed and native speakers of Polish. Six narrowly defined semantic categories for which dissociations have been reported are colors, body parts, animals, food, objects (mostly tools), and means of transportation. An analysis using one-way ANOVA with repeated measures in conjunction with Wilks' Lambda test revealed significant discrepancies among these categories in aphasic patients, who had much more difficulty comprehending names of colors than they did comprehending names of other objects (F(5,21)=13.15; p<.001). Animals were most often the easiest category to understand. The possibility of a simple explanation in terms of word frequency and/or visual complexity was ruled out. Evidence from the present study supports the position that so-called "global" aphasia is an imprecise term and should be redefined. These results are discussed within the connectionist and modular perspectives on category-specific deficits in aphasia.

  9. Speaker information affects false recognition of unstudied lexical-semantic associates.

    PubMed

    Luthra, Sahil; Fox, Neal P; Blumstein, Sheila E

    2018-05-01

    Recognition of and memory for a spoken word can be facilitated by a prior presentation of that word spoken by the same talker. However, it is less clear whether this speaker congruency advantage generalizes to facilitate recognition of unheard related words. The present investigation employed a false memory paradigm to examine whether information about a speaker's identity in items heard by listeners could influence the recognition of novel items (critical intruders) phonologically or semantically related to the studied items. In Experiment 1, false recognition of semantically associated critical intruders was sensitive to speaker information, though only when subjects attended to talker identity during encoding. Results from Experiment 2 also provide some evidence that talker information affects the false recognition of critical intruders. Taken together, the present findings indicate that indexical information is able to contact the lexical-semantic network to affect the processing of unheard words.

  10. Perception of intelligibility and qualities of non-native accented speakers.

    PubMed

    Fuse, Akiko; Navichkova, Yuliya; Alloggio, Krysteena

    To provide effective treatment to clients, speech-language pathologists must be understood, and be perceived to demonstrate the personal qualities necessary for therapeutic practice (e.g., resourcefulness and empathy). One factor that could interfere with the listener's perception of non-native speech is the speaker's accent. The current study explored the relationship between how accurately listeners could understand non-native speech and their perceptions of personal attributes of the speaker. Additionally, this study investigated how listeners' familiarity and experience with other languages may influence their perceptions of non-native accented speech. Through an online survey, native monolingual and bilingual English listeners rated four non-native accents (i.e., Spanish, Chinese, Russian, and Indian) on perceived intelligibility and perceived personal qualities (i.e., professionalism, intelligence, resourcefulness, empathy, and patience) necessary for speech-language pathologists. The results indicated significant relationships between the perception of intelligibility and the perception of personal qualities (i.e., professionalism, intelligence, and resourcefulness) attributed to non-native speakers. However, these findings were not supported for the Chinese accent. Bilingual listeners judged the non-native speech as more intelligible than monolingual listeners did. No significant differences were found in the ratings between bilingual listeners who shared the same language background as the speaker and other bilingual listeners. Based on the current findings, greater perceived intelligibility was the key to promoting a positive perception of personal qualities such as professionalism, intelligence, and resourcefulness, which are important for speech-language pathologists. The current study found evidence to support the claim that bilinguals have a greater ability than monolingual listeners to understand non-native accented speech. The results

  11. Formant trajectory characteristics in speakers with dysarthria and homogeneous speech intelligibility scores: Further data

    NASA Astrophysics Data System (ADS)

    Kim, Yunjung; Weismer, Gary; Kent, Ray D.

    2005-09-01

    In previous work [J. Acoust. Soc. Am. 117, 2605 (2005)], we reported on formant trajectory characteristics of a relatively large number of speakers with dysarthria and near-normal speech intelligibility. The purpose of that analysis was to begin a documentation of the variability, within relatively homogeneous speech-severity groups, of acoustic measures commonly used to predict across-speaker variation in speech intelligibility. In that study we found that even with near-normal speech intelligibility (90%-100%), many speakers had reduced formant slopes for some words and distributional characteristics of acoustic measures that were different than values obtained from normal speakers. In the current report we extend those findings to a group of speakers with dysarthria with somewhat poorer speech intelligibility than the original group. Results are discussed in terms of the utility of certain acoustic measures as indices of speech intelligibility, and as explanatory data for theories of dysarthria. [Work supported by NIH Award R01 DC00319.]

  12. Designing, Modeling, Constructing, and Testing a Flat Panel Speaker and Sound Diffuser for a Simulator

    NASA Technical Reports Server (NTRS)

    Dillon, Christina

    2013-01-01

    The goal of this project was to design, model, build, and test a flat panel speaker and frame for a spherical dome structure being made into a simulator. The simulator will be a test bed for evaluating an immersive environment for human interfaces. This project focused on the loudspeakers and a sound diffuser for the dome. The rest of the team worked on an Ambisonics 3D sound system, video projection system, and multi-direction treadmill to create the most realistic scene possible. The main programs utilized in this project were Pro-E and COMSOL. Pro-E was used for creating detailed figures for the fabrication of a frame that held a flat panel loudspeaker. The loudspeaker was made from a thin sheet of Plexiglas and 4 acoustic exciters. COMSOL, a multiphysics finite element analysis simulator, was used to model and evaluate all stages of the loudspeaker, frame, and sound diffuser. Acoustical testing measurements were used to create polar plots from the working prototype, which were then compared to the COMSOL simulations to select the optimal design for the dome. The final goal of the project was to install the flat panel loudspeaker design, in addition to a sound diffuser, onto the wall of the dome. After running tests in COMSOL on various speaker configurations, including a warped Plexiglas version, the optimal speaker design included a flat piece of Plexiglas with a rounded frame to match the curvature of the dome. Eight of these loudspeakers will be mounted into an inch and a half of high-performance acoustic insulation, or Thinsulate, that will cover the inside of the dome. The following technical paper discusses these projects and explains the engineering processes used, knowledge gained, and the projected future goals of this project.

  13. Prosodic Disambiguation of Syntactic Structure: For the Speaker or for the Addressee?

    ERIC Educational Resources Information Center

    Kraljic, Tanya; Brennan, Susan E.

    2005-01-01

    Evidence has been mixed on whether speakers spontaneously and reliably produce prosodic cues that resolve syntactic ambiguities. And when speakers do produce such cues, it is unclear whether they do so ''for'' their addressees (the "audience design" hypothesis) or ''for'' themselves, as a by-product of planning and articulating utterances. Three…

  14. During Threaded Discussions Are Non-Native English Speakers Always at a Disadvantage?

    ERIC Educational Resources Information Center

    Shafer Willner, Lynn

    2014-01-01

    When participating in threaded discussions, under what conditions might non-native speakers of English (NNSE) be at a comparative disadvantage to their classmates who are native speakers of English (NSE)? This study compares the threaded discussion perspectives of closely-matched NNSE and NSE adult students having different levels of threaded…

  15. Grammatical versus Pragmatic Error: Employer Perceptions of Nonnative and Native English Speakers

    ERIC Educational Resources Information Center

    Wolfe, Joanna; Shanmugaraj, Nisha; Sipe, Jaclyn

    2016-01-01

    Many communication instructors make allowances for grammatical error in nonnative English speakers' writing, but do businesspeople do the same? We asked 169 businesspeople to comment on three versions of an email with different types of errors. We found that businesspeople do make allowances for errors made by nonnative English speakers,…

  16. Discriminative analysis of lip motion features for speaker identification and speech-reading.

    PubMed

    Cetingül, H Ertan; Yemez, Yücel; Erzin, Engin; Tekalp, A Murat

    2006-10-01

    There have been several studies that jointly use audio, lip intensity, and lip geometry information for speaker identification and speech-reading applications. This paper proposes using explicit lip motion information, instead of or in addition to lip intensity and/or geometry information, for speaker identification and speech-reading within a unified feature selection and discrimination analysis framework, and addresses two important questions: 1) Is using explicit lip motion information useful, and 2) if so, what are the best lip motion features for these two applications? The best lip motion features for speaker identification are considered to be those that result in the highest discrimination of individual speakers in a population, whereas for speech-reading, the best features are those providing the highest phoneme/word/phrase recognition rate. Several lip motion feature candidates have been considered, including dense motion features within a bounding box about the lip, lip contour motion features, and combinations of these with lip shape features. Furthermore, a novel two-stage spatial and temporal discrimination analysis is introduced to select the best lip motion features for speaker identification and speech-reading applications. Experimental results using a hidden-Markov-model-based recognition system indicate that using explicit lip motion information provides additional performance gains in both applications, and lip motion features prove more valuable in the speech-reading application.

  17. Analysis of Acoustic Features in Speakers with Cognitive Disorders and Speech Impairments

    NASA Astrophysics Data System (ADS)

    Saz, Oscar; Simón, Javier; Rodríguez, W. Ricardo; Lleida, Eduardo; Vaquero, Carlos

    2009-12-01

    This work presents the results of an analysis of the acoustic features (formants and the three suprasegmental features: tone, intensity, and duration) of the vowel production of a group of 14 young speakers suffering from different kinds of speech impairments due to physical and cognitive disorders. A corpus of unimpaired children's speech is used to determine the reference values for these features in speakers without any kind of speech impairment within the same domain as the impaired speakers, namely 57 isolated words. The signal processing to extract the formant and pitch values is based on a Linear Prediction Coefficients (LPC) analysis of the segments considered as vowels in a Hidden Markov Model (HMM) based Viterbi forced alignment. Intensity and duration are also based on the outcome of the automated segmentation. As the main conclusion of the work, it is shown that the intelligibility of the vowel production is lowered in impaired speakers even when the vowel is perceived as correct by human labelers. The decrease in intelligibility is due to a 30% increase in confusability in the formant map, a 50% reduction in the discriminative power of energy between stressed and unstressed vowels, and a 50% increase in the standard deviation of the length of the vowels. On the other hand, impaired speakers keep good control of tone in the production of stressed and unstressed vowels.

  18. Intoxicated Speech Detection: A Fusion Framework with Speaker-Normalized Hierarchical Functionals and GMM Supervectors

    PubMed Central

    Bone, Daniel; Li, Ming; Black, Matthew P.; Narayanan, Shrikanth S.

    2013-01-01

    Segmental and suprasegmental speech signal modulations offer information about paralinguistic content such as affect, age and gender, pathology, and speaker state. Speaker state encompasses medium-term, temporary physiological phenomena influenced by internal or external biochemical actions (e.g., sleepiness, alcohol intoxication). Perceptual and computational research indicates that detecting speaker state from speech is a challenging task. In this paper, we present a system constructed with multiple representations of prosodic and spectral features that provided the best result at the Intoxication Subchallenge of Interspeech 2011 on the Alcohol Language Corpus. We discuss the details of each classifier and show that fusion improves performance. We additionally address the question of how best to construct a speaker state detection system in terms of robust and practical marginalization of associated variability, such as through modeling speakers, utterance type, gender, and utterance length. As is the case in human perception, speaker normalization provides significant improvements to our system. We show that a held-out set of baseline (sober) data can be used to achieve gains comparable to other speaker normalization techniques. Our fused frame-level statistic-functional systems, fused GMM systems, and final combined system achieve unweighted average recalls (UARs) of 69.7%, 65.1%, and 68.8%, respectively, on the test set. Numbers more consistent with the development-set results are obtained with matched-prompt training, where the UARs are 70.4%, 66.2%, and 71.4%, respectively. The combined system improves over the Challenge baseline by 5.5% absolute (8.4% relative), also improving upon our previous best result. PMID:24376305
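    Since the abstract reports unweighted average recall (UAR) rather than plain accuracy, a minimal sketch of the metric may help: UAR averages per-class recalls so that a majority class cannot dominate the score. The two-class labels and counts below are illustrative only, not data from the paper.

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """Average of per-class recalls, ignoring class sizes (the UAR metric)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred in zip(y_true, y_pred):
        totals[truth] += 1
        if pred == truth:
            hits[truth] += 1
    return sum(hits[c] / totals[c] for c in totals) / len(totals)

# Illustrative imbalanced test set: 8 sober vs. 2 intoxicated utterances.
y_true = ["sober"] * 8 + ["intoxicated"] * 2
y_pred = ["sober"] * 6 + ["intoxicated"] * 2 + ["intoxicated", "sober"]
# Plain accuracy is 7/10 = 0.7, but per-class recalls are 6/8 and 1/2,
# so UAR = (0.75 + 0.5) / 2 = 0.625.
uar = unweighted_average_recall(y_true, y_pred)
```

This is why, on a corpus with unequal sober/intoxicated proportions, a classifier that mostly predicts the majority class can score well on accuracy yet poorly on UAR.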

  19. Effect of Intensive Voice Treatment on Tone-Language Speakers with Parkinson's Disease

    ERIC Educational Resources Information Center

    Whitehill, Tara L.; Wong, Lina L. -N.

    2007-01-01

    The aim of this study was to investigate the effect of intensive voice therapy on Cantonese speakers with Parkinson's disease. The effect of the treatment on lexical tone was of particular interest. Four Cantonese speakers with idiopathic Parkinson's disease received treatment based on the principles of Lee Silverman Voice Treatment (LSVT).…

  20. 7 CFR 247.13 - Provisions for non-English or limited-English speakers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    § 247.13 Provisions for non-English or limited-English speakers. (a) What must State and local agencies do to ensure that non-English or limited-English speaking persons are aware of their rights and...

  1. Discrimination of speaker sex and size when glottal-pulse rate and vocal-tract length are controlled.

    PubMed

    Smith, David R R; Walters, Thomas C; Patterson, Roy D

    2007-12-01

    A recent study [Smith and Patterson, J. Acoust. Soc. Am. 118, 3177-3186 (2005)] demonstrated that both the glottal-pulse rate (GPR) and the vocal-tract length (VTL) of vowel sounds have a large effect on the perceived sex and age (or size) of a speaker. The vowels for all of the "different" speakers in that study were synthesized from recordings of the sustained vowels of one adult male speaker. This paper presents a follow-up study in which a range of vowels were synthesized from recordings of four different speakers--an adult man, an adult woman, a young boy, and a young girl--to determine whether the sex and age of the original speaker would have an effect upon listeners' judgments of whether a vowel was spoken by a man, woman, boy, or girl, after they were equated for GPR and VTL. The sustained vowels of the four speakers were scaled to produce the same combinations of GPR and VTL, which covered the entire range normally encountered in everyday life. The results show that listeners readily distinguish children from adults based on their sustained vowels but that they struggle to distinguish the sex of the speaker.

  2. Can you hear my age? Influences of speech rate and speech spontaneity on estimation of speaker age

    PubMed Central

    Skoog Waller, Sara; Eriksson, Mårten; Sörqvist, Patrik

    2015-01-01

    Cognitive hearing science is mainly about the study of how cognitive factors contribute to speech comprehension, but cognitive factors also partake in speech processing to infer non-linguistic information from speech signals, such as the intentions of the talker and the speaker’s age. Here, we report two experiments on age estimation by “naïve” listeners. The aim was to study how speech rate influences estimation of speaker age by comparing the speakers’ natural speech rate with increased or decreased speech rate. In Experiment 1, listeners were presented with audio samples of read speech from three different speaker age groups (young, middle aged, and old adults). They estimated the speakers as younger when speech rate was faster than normal and as older when speech rate was slower than normal. This speech rate effect was slightly greater in magnitude for older (60–65 years) speakers in comparison with younger (20–25 years) speakers, suggesting that speech rate may gain greater importance as a perceptual age cue with increased speaker age. This pattern was more pronounced in Experiment 2, in which listeners estimated age from spontaneous speech. Faster speech rate was associated with lower age estimates, but only for older and middle aged (40–45 years) speakers. Taken together, speakers of all age groups were estimated as older when speech rate decreased, except for the youngest speakers in Experiment 2. The absence of a linear speech rate effect in estimates of younger speakers, for spontaneous speech, implies that listeners use different age estimation strategies or cues (possibly vocabulary) depending on the age of the speaker and the spontaneity of the speech. Potential implications for forensic investigations and other applied domains are discussed. PMID:26236259

  3. Children Increase Their Sensitivity to a Speaker's Nonlinguistic Cues Following a Communicative Breakdown

    ERIC Educational Resources Information Center

    Yow, W. Quin; Markman, Ellen M.

    2016-01-01

    Bilingual children regularly face communicative challenges when speakers switch languages. To cope with such challenges, children may attempt to discern a speaker's communicative intent, thereby heightening their sensitivity to nonverbal communicative cues. Two studies examined whether such communication breakdowns increase sensitivity to…

  4. The Acquisition of Clitic Pronouns in the Spanish Interlanguage of Peruvian Quechua Speakers.

    ERIC Educational Resources Information Center

    Klee, Carol A.

    1989-01-01

    Analysis of four adult Quechua speakers' acquisition of clitic pronouns in Spanish revealed that educational attainment and amount of contact with monolingual Spanish speakers were positively related to native-like norms of competence in the use of object pronouns in Spanish. (CB)

  5. INTERPOL survey of the use of speaker identification by law enforcement agencies.

    PubMed

    Morrison, Geoffrey Stewart; Sahito, Farhan Hyder; Jardine, Gaëlle; Djokic, Djordje; Clavet, Sophie; Berghs, Sabine; Goemans Dorny, Caroline

    2016-06-01

    A survey was conducted of the use of speaker identification by law enforcement agencies around the world. A questionnaire was circulated to law enforcement agencies in the 190 member countries of INTERPOL. 91 responses were received from 69 countries. 44 respondents reported that they had speaker identification capabilities in house or via external laboratories. Half of these came from Europe. 28 respondents reported that they had databases of audio recordings of speakers. The clearest pattern in the responses was that of diversity. A variety of different approaches to speaker identification were used: The human-supervised-automatic approach was the most popular in North America, the auditory-acoustic-phonetic approach was the most popular in Europe, and the spectrographic/auditory-spectrographic approach was the most popular in Africa, Asia, the Middle East, and South and Central America. Globally, and in Europe, the most popular framework for reporting conclusions was identification/exclusion/inconclusive. In Europe, the second most popular framework was the use of verbal likelihood ratio scales.

  6. "I May Be a Native Speaker but I'm Not Monolingual": Reimagining "All" Teachers' Linguistic Identities in TESOL

    ERIC Educational Resources Information Center

    Ellis, Elizabeth M.

    2016-01-01

    Teacher linguistic identity has so far mainly been researched in terms of whether a teacher identifies (or is identified by others) as a native speaker (NEST) or nonnative speaker (NNEST) (Moussu & Llurda, 2008; Reis, 2011). Native speakers are presumed to be monolingual, and nonnative speakers, although by definition bilingual, tend to be…

  7. The effect of speakers' sex on voice onset time in Mandarin stops

    PubMed Central

    Li, Fangfang

    2013-01-01

    The goal of the present study is to examine the effect of speakers' gender on voice onset time in Mandarin speakers' stop productions. Word-initial lingual stops were elicited from 10 male and 10 female Mandarin speakers using a word-repetition task. The results revealed differentiated voice onset time (VOT) patterns between the two genders for all four lingual stops on raw VOT values. After factoring out speech rate variation, gender-related differences remained for voiced stops only, with females' VOTs being shorter than males'. The results, together with previous findings from other languages, suggest a sociolinguistic/stylistic account of the relation between gender and VOT that varies in a language-specific manner. PMID:23363195

  8. New and Not so New Horizons: Brief Encounters between UK Undergraduate Native-Speaker and Non-Native-Speaker Englishes

    ERIC Educational Resources Information Center

    Henderson, Juliet

    2011-01-01

    This paper explores the apparent contradiction between the valuing and promoting of diverse literacies in most UK HEIs, and the discursive construction of spoken native-speaker English as the medium of good grades and prestige academic knowledge. During group interviews on their experiences of university internationalisation, 38 undergraduate…

  9. The needs of aphasic patients for verbal communication as the element of life quality.

    PubMed

    Kulik, Teresa Bernadetta; Koc-Kozłowiec, Barbara; Wrońska, Irena; Rudnicka-Drozak, Ewa

    2003-01-01

    The fact that humans use language reflects the specific properties of the human brain. This skill cannot be learned without contact with a speaking, human environment. The ability to communicate linguistically with others allows people to gain knowledge about the surrounding world and, on the other hand, enables them to express their thoughts, feelings, and needs. Therefore, people with serious speech disorders, i.e. aphasic patients, suffer not only from problems connected with communication but mainly from the deterioration of their social status, which consequently changes their quality of life. Generally, they cannot cope with the tasks of either their personal or professional lives. Speech is defined as a process of communication: an act in which a transmitter sends a verbally structured message (statement) and a receiver perceives this message and understands its contents. The present paper describes an 8-week speech re-education programme carried out with 10 patients with motor aphasia and 10 patients with sensory aphasia. The examination of speech was performed on the basis of clinical-experimental tests developed by A. Luria. Diagnosis in these tests focuses on the qualitative analysis of the structure of the disorders.

  10. The trouble with nouns and verbs in Greek fluent aphasia.

    PubMed

    Kambanaros, Maria

    2008-01-01

    In the past, verb retrieval problems were associated primarily with agrammatism, and noun retrieval difficulties with fluent aphasia. With regard to fluent aphasia, three distinct patterns of verb/noun dissociation have so far been described in the literature for individuals with fluent anomic aphasia in languages with different underlying forms: better verb retrieval, poorer verb retrieval, and equal retrieval difficulty for verbs and nouns. Verbs and nouns in Greek are considered to be of similar morphological complexity; thus, it was predicted that anomic aphasic individuals would show a non-dissociated impairment of verbs and nouns. Problems with verbs and/or nouns may arise at any stage in the process of lexical retrieval, i.e. the lexical-semantic, lemma, lexeme, or articulation stage. The aim of this research was to investigate verb and noun retrieval using a picture-naming task to explore any possible selective noun and/or verb comprehension or retrieval deficits in Greek individuals with anomic aphasia. The results revealed a significant verb/noun dichotomy, with verbs significantly more difficult to retrieve than nouns. These findings lend support to the growing body of evidence showing a specific verb impairment in fluent anomic individuals as well as Broca's patients. Given the prevailing view that anomic patients experience difficulty retrieving the morpho-phonological form of the target word, the results show that specific information about grammatical category is also important during word form retrieval. LEARNER OUTCOMES: The reader will become familiar with (i) studies investigating grammatical word class breakdown in individuals with aphasia who speak different languages, (ii) the application of the serial model to word production breakdown in aphasia and (iii) the characteristics of verbs and nouns in Greek. It will be concluded that successful verb retrieval for fluent aphasic individuals who speak Greek is dependent on the retrieval of the morpho

  11. Bridging Gaps in Common Ground: Speakers Design Their Gestures for Their Listeners

    ERIC Educational Resources Information Center

    Hilliard, Caitlin; Cook, Susan Wagner

    2016-01-01

    Communication is shaped both by what we are trying to say and by whom we are saying it to. We examined whether and how shared information influences the gestures speakers produce along with their speech. Unlike prior work examining effects of common ground on speech and gesture, we examined a situation in which some speakers have the same amount…

  12. A Comparison of Coverbal Gesture Use in Oral Discourse among Speakers with Fluent and Nonfluent Aphasia

    ERIC Educational Resources Information Center

    Kong, Anthony Pak-Hin; Law, Sam-Po; Chak, Gigi Wan-Chi

    2017-01-01

    Purpose: Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Method: Multimedia data of…

  13. Encoding, rehearsal, and recall in signers and speakers: shared network but differential engagement.

    PubMed

    Bavelier, D; Newman, A J; Mukherjee, M; Hauser, P; Kemeny, S; Braun, A; Boutla, M

    2008-10-01

    Short-term memory (STM), or the ability to hold verbal information in mind for a few seconds, is known to rely on the integrity of a frontoparietal network of areas. Here, we used functional magnetic resonance imaging to ask whether a similar network is engaged when verbal information is conveyed through a visuospatial language, American Sign Language, rather than speech. Deaf native signers and hearing native English speakers performed a verbal recall task, where they had to first encode a list of letters in memory, maintain it for a few seconds, and finally recall it in the order presented. The frontoparietal network described to mediate STM in speakers was also observed in signers, with its recruitment appearing independent of the modality of the language. This finding supports the view that signed and spoken STM rely on similar mechanisms. However, deaf signers and hearing speakers differentially engaged key structures of the frontoparietal network as the stages of STM unfold. In particular, deaf signers relied to a greater extent than hearing speakers on passive memory storage areas during encoding and maintenance, but on executive process areas during recall. This work opens new avenues for understanding similarities and differences in STM performance in signers and speakers.

  15. Optimizing Vowel Formant Measurements in Four Acoustic Analysis Systems for Diverse Speaker Groups

    PubMed Central

    Derdemezis, Ekaterini; Kent, Ray D.; Fourakis, Marios; Reinicke, Emily L.; Bolt, Daniel M.

    2016-01-01

    Purpose This study systematically assessed the effects of select linear predictive coding (LPC) analysis parameter manipulations on vowel formant measurements for diverse speaker groups using 4 trademarked Speech Acoustic Analysis Software Packages (SAASPs): CSL, Praat, TF32, and WaveSurfer. Method Productions of 4 words containing the corner vowels were recorded from 4 speaker groups with typical development (male and female adults and male and female children) and 4 speaker groups with Down syndrome (male and female adults and male and female children). Formant frequencies were determined from manual measurements using a consensus analysis procedure to establish formant reference values, and from the 4 SAASPs (using both the default analysis parameters and with adjustments or manipulations to select parameters). Smaller differences between values obtained from the SAASPs and the consensus analysis implied more optimal analysis parameter settings. Results Manipulations of default analysis parameters in CSL, Praat, and TF32 yielded more accurate formant measurements, though the benefit was not uniform across speaker groups and formants. In WaveSurfer, manipulations did not improve formant measurements. Conclusions The effects of analysis parameter manipulations on accuracy of formant-frequency measurements varied by SAASP, speaker group, and formant. The information from this study helps to guide clinical and research applications of SAASPs. PMID:26501214
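
    The study above tunes LPC analysis parameters inside commercial packages; the underlying technique can be illustrated with a minimal, generic formant estimator (autocorrelation-method LPC followed by root-solving). This is a textbook sketch, not the algorithm of CSL, Praat, TF32, or WaveSurfer; the function names, model order, 90 Hz cutoff, and the synthetic test signal are all invented for illustration.

```python
import numpy as np

def lpc_coeffs(x, order):
    """LPC polynomial A(z) for signal x via the autocorrelation (Yule-Walker) method."""
    x = x * np.hamming(len(x))                        # taper to reduce edge effects
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])            # predictor coefficients
    return np.concatenate(([1.0], -a))                # A(z) = 1 - sum_k a_k z^-k

def formants(x, fs, order=10):
    """Rough formant estimates (Hz): angles of the upper-half-plane roots of A(z)."""
    roots = [rt for rt in np.roots(lpc_coeffs(x, order)) if rt.imag > 0]
    freqs = sorted(np.angle(rt) * fs / (2 * np.pi) for rt in roots)
    return [f for f in freqs if f > 90]               # drop near-DC roots

# Synthetic check: a damped resonance at ~700 Hz should be recovered
fs = 8000
n = np.arange(400)
x = (0.995 ** n) * np.sin(2 * np.pi * 700 * n / fs)
```

    Changing `order` here plays the same role as the parameter manipulations the study evaluates: too low an order merges neighboring formants, too high an order introduces spurious roots.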

  16. Nonnative Speakers' Perceptions of English "Nonlexical" Intonation Signals.

    ERIC Educational Resources Information Center

    Luthy, Melvin J.

    1983-01-01

    Native English speakers' and foreign students' perceptions of 14 English intonation signals, recorded free of verbal context, show foreign students may be misinterpreting or missing much information communicated with nonlexical signals. (Author/MSE)

  17. A Respirometric Technique to Evaluate Velopharyngeal Function in Speakers with Cleft Palate, with and without Prostheses.

    ERIC Educational Resources Information Center

    Gilbert, Harvey R.; Ferrand, Carole T.

    1987-01-01

    Respirometric quotients (RQ), the ratio of oral air volume expended to total volume expended, were obtained from the productions of oral and nasal airflow of 10 speakers with cleft palate, with and without their prosthetic appliances, and 10 normal speakers. Cleft palate speakers without their appliances exhibited the lowest RQ values. (Author/DB)

  18. Identifying the nonlinear mechanical behaviour of micro-speakers from their quasi-linear electrical response

    NASA Astrophysics Data System (ADS)

    Zilletti, Michele; Marker, Arthur; Elliott, Stephen John; Holland, Keith

    2017-05-01

    In this study model identification of the nonlinear dynamics of a micro-speaker is carried out by purely electrical measurements, avoiding any explicit vibration measurements. It is shown that a dynamic model of the micro-speaker, which takes into account the nonlinear damping characteristic of the device, can be identified by measuring the response between the voltage input and the current flowing into the coil. An analytical formulation of the quasi-linear model of the micro-speaker is first derived and an optimisation method is then used to identify a polynomial function which describes the mechanical damping behaviour of the micro-speaker. The analytical results of the quasi-linear model are compared with numerical results. This study potentially opens up the possibility of efficiently implementing nonlinear echo cancellers.
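
    The identification step the abstract describes — fitting a polynomial description of the damping from electrical measurements — can be sketched in miniature. This is a hedged illustration, not the paper's optimisation method: the damping law c(v) = c0 + c2·v², the coefficient values, and the noise level are all assumptions, and the (velocity, force) pairs stand in for quantities that would in practice be inferred from the voltage/current response.

```python
import numpy as np

# Hypothetical model: damping coefficient even in coil velocity, c(v) = c0 + c2*v**2,
# so the damping force is F(v) = c(v) * v = c0*v + c2*v**3 -- linear in (c0, c2).
rng = np.random.default_rng(0)
c0_true, c2_true = 0.8, 120.0                      # assumed "true" values
v = np.linspace(-0.05, 0.05, 200)                  # coil velocity (m/s)
f_meas = (c0_true + c2_true * v**2) * v + rng.normal(0.0, 1e-5, v.size)

# Least-squares fit of the polynomial damping coefficients
A = np.column_stack([v, v**3])
(c0_hat, c2_hat), *_ = np.linalg.lstsq(A, f_meas, rcond=None)
```

    Because the force is linear in the unknown coefficients, ordinary least squares recovers them directly; a higher-degree polynomial would simply add columns to `A`.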

  19. Promoting Communities of Practice among Non-Native Speakers of English in Online Discussions

    ERIC Educational Resources Information Center

    Kim, Hoe Kyeung

    2011-01-01

    An online discussion involving text-based computer-mediated communication has great potential for promoting equal participation among non-native speakers of English. Several studies claimed that online discussions could enhance the academic participation of non-native speakers of English. However, there is little research around participation…

  20. Learning foreign labels from a foreign speaker: the role of (limited) exposure to a second language.

    PubMed

    Akhtar, Nameera; Menjivar, Jennifer; Hoicka, Elena; Sabbagh, Mark A

    2012-11-01

    Three- and four-year-olds (N = 144) were introduced to novel labels by an English speaker and a foreign speaker (of Nordish, a made-up language), and were asked to endorse one of the speaker's labels. Monolingual English-speaking children were compared to bilingual children and English-speaking children who were regularly exposed to a language other than English. All children tended to endorse the English speaker's labels when asked 'What do you call this?', but when asked 'What do you call this in Nordish?', children with exposure to a second language were more likely to endorse the foreign label than monolingual and bilingual children. The findings suggest that, at this age, exposure to, but not necessarily immersion in, more than one language may promote the ability to learn foreign words from a foreign speaker.

  1. Is the superior verbal memory span of Mandarin speakers due to faster rehearsal?

    PubMed

    Mattys, Sven L; Baddeley, Alan; Trenkic, Danijela

    2018-04-01

    It is well established that digit span in native Chinese speakers is atypically high. This is commonly attributed to a capacity for more rapid subvocal rehearsal for that group. We explored this hypothesis by testing a group of English-speaking native Mandarin speakers on digit span and word span in both Mandarin and English, together with a measure of speed of articulation for each. When compared to the performance of native English speakers, the Mandarin group proved to be superior on both digit and word spans while predictably having lower spans in English. This suggests that the Mandarin advantage is not limited to digits. Speed of rehearsal correlated with span performance across materials. However, this correlation was more pronounced for English speakers than for any of the Chinese measures. Further analysis suggested that speed of rehearsal did not provide an adequate account of differences between Mandarin and English spans or for the advantage of digits over words. Possible alternative explanations are discussed.

  2. Connective Choice between Native and Japanese Speakers.

    ERIC Educational Resources Information Center

    Tabuki, Masatoshi; Shimatani, Hiroshi

    A study investigated the use of connectors in English conversation between native Japanese-speakers and teachers outside the classroom. Data were drawn from six videotaped conversations between pairs of Japanese students, all learning beginning-level English, with conversational support provided by English teachers. The functions of four…

  3. Emblematic Gestures among Hebrew Speakers in Israel.

    ERIC Educational Resources Information Center

    Safadi, Michaela; Valentine, Carol Ann

    A field study conducted in Israel sought to identify emblematic gestures (body movements that convey specific messages) that are recognized and used by Hebrew speakers. Twenty-six gestures commonly used in classroom interaction were selected for testing, using Schneller's form, "Investigations of Interpersonal Communication in Israel."…

  4. An Analysis of Speech Disfluencies of Turkish Speakers Based on Age Variable

    ERIC Educational Resources Information Center

    Altiparmak, Ayse; Kuruoglu, Gülmira

    2018-01-01

    The focus of this research is to verify the influence of the age variable on fluent Turkish native speakers' production of the various types of speech disfluencies. To accomplish this, four groups of native speakers of Turkish were constructed: ages 4-8, 18-23, and 33-50 years, and those over 50 years old. A total of 84 participants…

  5. Native and Non-Native Speakers' Brain Responses to Filled Indirect Object Gaps

    ERIC Educational Resources Information Center

    Jessen, Anna; Festman, Julia; Boxell, Oliver; Felser, Claudia

    2017-01-01

    We examined native and non-native English speakers' processing of indirect object "wh"-dependencies using a filled-gap paradigm while recording event-related potentials (ERPs). The non-native group was comprised of native German-speaking, proficient non-native speakers of English. Both participant groups showed evidence of linking…

  6. Dispelling Myths and Examining Strategies in Teaching Non-Standard Dialect Speakers to Read.

    ERIC Educational Resources Information Center

    Zimet, Sara Goodman

    To dispel the myths of linguistic deficiency among nonstandard English dialect speakers, evidence that repudiates these myths should be examined. These myths include suggestions that nonstandard dialects are ungrammatical and cannot be used to form concepts, and that speakers of such dialects receive little verbal stimulation as children. The…

  7. Psychophysical Boundary for Categorization of Voiced-Voiceless Stop Consonants in Native Japanese Speakers

    ERIC Educational Resources Information Center

    Tamura, Shunsuke; Ito, Kazuhito; Hirose, Nobuyuki; Mori, Shuji

    2018-01-01

    Purpose: The purpose of this study was to investigate the psychophysical boundary used for categorization of voiced-voiceless stop consonants in native Japanese speakers. Method: Twelve native Japanese speakers participated in the experiment. The stimuli were synthetic stop consonant-vowel stimuli varying in voice onset time (VOT) with…

  8. Children's comprehension of an unfamiliar speaker accent: a review.

    PubMed

    Harte, Jennifer; Oliveira, Ana; Frizelle, Pauline; Gibbon, Fiona

    2016-05-01

    The effect of speaker accent on listeners' comprehension has become a key focus of research given the increasing cultural diversity of society and the increased likelihood of an individual encountering a clinician with an unfamiliar accent. The aim was to review the studies exploring the effect of an unfamiliar accent on language comprehension in typically developing (TD) children and in children with speech and language difficulties. This review provides a methodological analysis of the relevant studies by exploring the challenges facing this field of research and highlighting the current gaps in the literature. A total of nine studies were identified using a systematic search and organized under studies investigating the effect of speaker accent on language comprehension in (1) TD children and (2) children with speech and/or language difficulties. This review synthesizes the evidence that an unfamiliar speaker accent may lead to a breakdown in language comprehension in TD children and in children with speech difficulties. Moreover, it exposes the inconsistencies found in this field of research and highlights the lack of studies investigating the effect of speaker accent in children with language deficits. Overall, research points towards a developmental trend in children's ability to comprehend accent-related variations in speech. Vocabulary size, language exposure, exposure to different accents and adequate processing resources (e.g. attention) seem to play a key role in children's ability to understand unfamiliar accents. This review uncovered some inconsistencies in the literature that highlight the methodological issues that must be considered when conducting research in this field. It explores how such issues may be controlled in order to increase the validity and reliability of future research. Key clinical implications are also discussed. © 2016 Royal College of Speech and Language Therapists.

  9. Rationales for indirect speech: the theory of the strategic speaker.

    PubMed

    Lee, James J; Pinker, Steven

    2010-07-01

    Speakers often do not state requests directly but employ innuendos such as Would you like to see my etchings? Though such indirectness seems puzzlingly inefficient, it can be explained by a theory of the strategic speaker, who seeks plausible deniability when he or she is uncertain of whether the hearer is cooperative or antagonistic. A paradigm case is bribing a policeman who may be corrupt or honest: A veiled bribe may be accepted by the former and ignored by the latter. Everyday social interactions can have a similar payoff structure (with emotional rather than legal penalties) whenever a request is implicitly forbidden by the relational model holding between speaker and hearer (e.g., bribing an honest maitre d', where the reciprocity of the bribe clashes with his authority). Even when a hearer's willingness is known, indirect speech offers higher-order plausible deniability by preempting certainty, gossip, and common knowledge of the request. In supporting experiments, participants judged the intentions and reactions of characters in scenarios that involved fraught requests varying in politeness and directness. (c) 2010 APA, all rights reserved.

  10. An Acoustic Study of Vowels Produced by Alaryngeal Speakers in Taiwan.

    PubMed

    Liao, Jia-Shiou

    2016-11-01

    This study investigated the acoustic properties of 6 Taiwan Southern Min vowels produced by 10 laryngeal speakers (LA), 10 speakers with a pneumatic artificial larynx (PA), and 8 esophageal speakers (ES). Each of the 6 monophthongs of Taiwan Southern Min (/i, e, a, ɔ, u, ə/) was represented by a Taiwan Southern Min character and appeared randomly on a list 3 times (6 Taiwan Southern Min characters × 3 repetitions = 18 tokens). Each Taiwan Southern Min character in this study has the same syllable structure, /V/, and all were read with tone 1 (high and level). Acoustic measurements of the 1st formant, 2nd formant, and 3rd formant were taken for each vowel. Then, vowel space areas (VSAs) enclosed by /i, a, u/ were calculated for each group of speakers. The Euclidean distance between vowels in the pairs /i, a/, /i, u/, and /a, u/ was also calculated and compared across the groups. PA and ES have higher 1st or 2nd formant values than LA for each vowel. The distance is significantly shorter between vowels in the corner vowel pairs /i, a/ and /i, u/. PA and ES have a significantly smaller VSA compared with LA. In accordance with previous studies, alaryngeal speakers have higher formant frequency values than LA because they have a shortened vocal tract as a result of their total laryngectomy. Furthermore, the resonance frequencies are inversely related to the length of the vocal tract (on the basis of the assumption of the source filter theory). PA and ES have a smaller VSA and shorter distances between corner vowels compared with LA, which may be related to speech intelligibility. This hypothesis needs further support from future study.
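
    The two metrics compared across the LA, PA, and ES groups — vowel space area enclosed by the corner vowels and between-vowel Euclidean distance — are simple geometric computations on (F1, F2) coordinates. A minimal sketch follows; the formant values are hypothetical, chosen only to illustrate the calculation:

```python
import math

def euclidean(v1, v2):
    """Distance between two vowels in (F1, F2) space, in Hz."""
    return math.hypot(v1[0] - v2[0], v1[1] - v2[1])

def vowel_space_area(i, a, u):
    """Area of the triangle enclosed by corner vowels /i, a, u/ (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = i, a, u
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

# Hypothetical (F1, F2) values in Hz, for illustration only
i_v, a_v, u_v = (300.0, 2300.0), (800.0, 1300.0), (350.0, 800.0)
vsa = vowel_space_area(i_v, a_v, u_v)       # Hz^2; smaller area -> compressed space
d_ia = euclidean(i_v, a_v)                  # distance for the pair /i, a/
```

    A smaller VSA and shorter corner-vowel distances, as reported for the PA and ES groups, would show up directly as smaller values of `vsa` and `d_ia`.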

  11. Classifications of Vocalic Segments from Articulatory Kinematics: Healthy Controls and Speakers with Dysarthria

    ERIC Educational Resources Information Center

    Yunusova, Yana; Weismer, Gary G.; Lindstrom, Mary J.

    2011-01-01

    Purpose: In this study, the authors classified vocalic segments produced by control speakers (C) and speakers with dysarthria due to amyotrophic lateral sclerosis (ALS) or Parkinson's disease (PD); classification was based on movement measures. The researchers asked the following questions: (a) Can vowels be classified on the basis of selected…

  12. Is the sagittal postural alignment different in normal and dysphonic adult speakers?

    PubMed

    Franco, Débora; Martins, Fernando; Andrea, Mário; Fragoso, Isabel; Carrão, Luís; Teles, Júlia

    2014-07-01

    Clinical research in the field of voice disorders, in particular functional dysphonia, has suggested abnormal laryngeal posture due to muscle adaptive changes, although specific evidence regarding body posture has been lacking. The aim of our study was to verify if there were significant differences in sagittal spine alignment between normal (41 subjects) and dysphonic speakers (33 subjects). Cross-sectional study. Seventy-four adults, 35 males and 39 females, were submitted to sagittal plane photographs so that spine alignment could be analyzed through the Digimizer-MedCalc Software Ltd program. Perceptual and acoustic evaluation and nasoendoscopy were used for dysphonic judgments: normal and dysphonic speakers. For thoracic length curvature (TL) and for the kyphosis index (KI), a significant effect of dysphonia was observed with mean TL and KI significantly higher for the dysphonic speakers than for the normal speakers. Concerning the TL variable, a significant effect of sex was found, in which the mean of the TL was higher for males than females. The interaction between dysphonia and sex did not have a significant effect on TL and KI variables. For the lumbar length curvature variable, a significant main effect of sex was demonstrated; there was no significant main effect of dysphonia or significant sex×dysphonia interaction. Findings indicated significant differences in some sagittal spine posture measures between normal and dysphonic speakers. Postural measures can add useful information to voice assessment protocols and should be taken into account when considering particular treatment strategies. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  13. Limited English Speakers and the Miranda Rights

    ERIC Educational Resources Information Center

    Briere, Eugene J.

    1978-01-01

    This paper describes the process used to determine whether a Thai individual of limited English-speaking background seemed to have enough proficiency in English to understand the "Miranda Rights," designed to inform him of his right to an attorney. These Rights should be simplified for limited English speakers. (CFM)

  14. Navigating Native-Speaker Ideologies as FSL Teacher

    ERIC Educational Resources Information Center

    Wernicke, Meike

    2017-01-01

    Although a well-established domain of research in English language teaching, native-speaker ideologies have received little attention in French language education. This article reports on a study that examined the salience of "authentic French" in the identity construction of French as a second language (FSL) teachers in English-speaking…

  15. Visuomotor Tracking Ability of Young Adult Speakers.

    ERIC Educational Resources Information Center

    Moon, Jerald B.; And Others

    1993-01-01

    Twenty-five normal young adult speakers tracked sinusoidal and unpredictable target signals using lower lip and jaw movement and fundamental frequency modulation. Tracking accuracy varied as a function of target frequency and articulator used to track. Results show the potential of visuomotor tracking tasks in the assessment of speech articulatory…

  16. Acquired Dyslexia in a Turkish-English Speaker

    ERIC Educational Resources Information Center

    Raman, Ilhan; Weekes, Brendan S.

    2005-01-01

    The Turkish script is characterised by completely transparent bidirectional mappings between orthography and phonology. To date, there has been no reported evidence of acquired dyslexia in Turkish speakers leading to the naive view that reading and writing problems in Turkish are probably rare. We examined the extent to which phonological…

  17. Speaker's voice as a memory cue.

    PubMed

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2015-02-01

    Speaker's voice occupies a central role as the cornerstone of auditory social interaction. Here, we review the evidence suggesting that speaker's voice constitutes an integral context cue in auditory memory. Investigation into the nature of voice representation as a memory cue is essential to understanding auditory memory and the neural correlates which underlie it. Evidence from behavioral and electrophysiological studies suggest that while specific voice reinstatement (i.e., same speaker) often appears to facilitate word memory even without attention to voice at study, the presence of a partial benefit of similar voices between study and test is less clear. In terms of explicit memory experiments utilizing unfamiliar voices, encoding methods appear to play a pivotal role. Voice congruency effects have been found when voice is specifically attended at study (i.e., when relatively shallow, perceptual encoding takes place). These behavioral findings coincide with neural indices of memory performance such as the parietal old/new recollection effect and the late right frontal effect. The former distinguishes between correctly identified old words and correctly identified new words, and reflects voice congruency only when voice is attended at study. Characterization of the latter likely depends upon voice memory, rather than word memory. There is also evidence to suggest that voice effects can be found in implicit memory paradigms. However, the presence of voice effects appears to depend greatly on the task employed. Using a word identification task, perceptual similarity between study and test conditions is, like for explicit memory tests, crucial. In addition, the type of noise employed appears to have a differential effect. While voice effects have been observed when white noise is used at both study and test, using multi-talker babble does not confer the same results. In terms of neuroimaging research modulations, characterization of an implicit memory effect

  18. Infants' understanding of false labeling events: the referential roles of words and the speakers who use them.

    PubMed

    Koenig, Melissa A; Echols, Catharine H

    2003-04-01

    The four studies reported here examine whether 16-month-old infants' responses to true and false utterances interact with their knowledge of human agents. In Study 1, infants heard repeated instances either of true or false labeling of common objects; labels came from an active human speaker seated next to the infant. In Study 2, infants experienced the same stimuli and procedure; however, we replaced the human speaker of Study 1 with an audio speaker in the same location. In Study 3, labels came from a hidden audio speaker. In Study 4, a human speaker labeled the objects while facing away from them. In Study 1, infants looked significantly longer to the human agent when she falsely labeled than when she truthfully labeled the objects. Infants did not show a similar pattern of attention for the audio speaker of Study 2, the silent human of Study 3 or the facing-backward speaker of Study 4. In fact, infants who experienced truthful labeling looked significantly longer to the facing-backward labeler of Study 4 than to true labelers of the other three contexts. Additionally, infants were more likely to correct false labels when produced by the human labeler of Study 1 than in any of the other contexts. These findings suggest, first, that infants are developing a critical conception of other human speakers as truthful communicators, and second, that infants understand that human speakers may provide uniquely useful information when a word fails to match its referent. These findings are consistent with the view that infants can recognize differences in knowledge and that such differences can be based on differences in the availability of perceptual experience.

  19. Beyond semantic accuracy: preschoolers evaluate a speaker's reasons.

    PubMed

    Koenig, Melissa A

    2012-01-01

    Children's sensitivity to the quality of epistemic reasons and their selective trust in the more reasonable of 2 informants was investigated in 2 experiments. Three-, 4-, and 5-year-old children (N = 90) were presented with speakers who stated different kinds of evidence for what they believed. Experiment 1 showed that children of all age groups appropriately judged looking, reliable testimony, and inference as better reasons for belief than pretense, guessing, and desiring. Experiment 2 showed that 3- and 4-year-old children preferred to seek and accept new information from a speaker who was previously judged to use the "best" way of thinking. The findings demonstrate that children distinguish certain good from bad reasons and prefer to learn from those who showcased good reasoning in the past. © 2012 The Author. Child Development © 2012 Society for Research in Child Development, Inc.

  20. Combining Behavioral and ERP Methodologies to Investigate the Differences Between McGurk Effects Demonstrated by Cantonese and Mandarin Speakers.

    PubMed

    Zhang, Juan; Meng, Yaxuan; McBride, Catherine; Fan, Xitao; Yuan, Zhen

    2018-01-01

    The present study investigated the impact of Chinese dialects on McGurk effect using behavioral and event-related potential (ERP) methodologies. Specifically, intra-language comparison of McGurk effect was conducted between Mandarin and Cantonese speakers. The behavioral results showed that Cantonese speakers exhibited a stronger McGurk effect in audiovisual speech perception compared to Mandarin speakers, although both groups performed equally in the auditory and visual conditions. ERP results revealed that Cantonese speakers were more sensitive to visual cues than Mandarin speakers, though this was not the case for the auditory cues. Taken together, the current findings suggest that the McGurk effect generated by Chinese speakers is mainly influenced by segmental phonology during audiovisual speech integration.

  1. The interaction of glottal-pulse rate and vocal-tract length in judgements of speaker size, sex, and age

    NASA Astrophysics Data System (ADS)

    Smith, David R. R.; Patterson, Roy D.

    2005-11-01

    Glottal-pulse rate (GPR) and vocal-tract length (VTL) are related to the size, sex, and age of the speaker but it is not clear how the two factors combine to influence our perception of speaker size, sex, and age. This paper describes experiments designed to measure the effect of the interaction of GPR and VTL upon judgements of speaker size, sex, and age. Vowels were scaled to represent people with a wide range of GPRs and VTLs, including many well beyond the normal range of the population, and listeners were asked to judge the size and sex/age of the speaker. The judgements of speaker size show that VTL has a strong influence upon perceived speaker size. The results for the sex and age categorization (man, woman, boy, or girl) show that, for vowels with GPR and VTL values in the normal range, judgements of speaker sex and age are influenced about equally by GPR and VTL. For vowels with abnormal combinations of low GPRs and short VTLs, the VTL information appears to decide the sex/age judgement.

  2. Continuing Medical Education Speakers with High Evaluation Scores Use more Image-based Slides.

    PubMed

    Ferguson, Ian; Phillips, Andrew W; Lin, Michelle

    2017-01-01

    Although continuing medical education (CME) presentations are common across health professions, it is unknown whether slide design is independently associated with audience evaluations of the speaker. Based on the conceptual framework of Mayer's theory of multimedia learning, this study aimed to determine whether image use and text density in presentation slides are associated with overall speaker evaluations. This retrospective analysis of six sequential CME conferences (two annual emergency medicine conferences over a three-year period) used a mixed linear regression model to assess whether post-conference speaker evaluations were associated with image fraction (percentage of image-based slides per presentation) and text density (number of words per slide). A total of 105 unique lectures were given by 49 faculty members, and 1,222 evaluations (70.1% response rate) were available for analysis. On average, 47.4% (SD=25.36) of slides had at least one educationally-relevant image (image fraction). Image fraction significantly predicted overall higher evaluation scores [F(1, 100.676)=6.158, p=0.015] in the mixed linear regression model. The mean (SD) text density was 25.61 (8.14) words/slide but was not a significant predictor [F(1, 86.293)=0.55, p=0.815]. Of note, the individual speaker [χ²(1)=2.952, p=0.003] and speaker seniority [F(3, 59.713)=4.083, p=0.011] significantly predicted higher scores. This is the first published study to date assessing the linkage between slide design and CME speaker evaluations by an audience of practicing clinicians. The incorporation of images was associated with higher evaluation scores, in alignment with Mayer's theory of multimedia learning. Contrary to this theory, however, text density showed no significant association, suggesting that these scores may be multifactorial. Professional development efforts should focus on teaching best practices in both slide design and presentation skills.

  3. Speaker-Sex Discrimination for Voiced and Whispered Vowels at Short Durations.

    PubMed

    Smith, David R R

    2016-01-01

    Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure which in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar at very short durations but that, as stimulus duration increases, voiced-vowel performance would improve relative to whispered-vowel performance as pitch information becomes available. This pattern of results was shown for women's but not for men's voices. A whispered vowel needs to be three times longer than a voiced vowel before listeners can reliably tell whether it is spoken by a man or a woman (∼30 ms vs. ∼10 ms). Listeners were half as sensitive to information about speaker sex when it was carried by whispered rather than voiced vowels.

  4. Speaker-Sex Discrimination for Voiced and Whispered Vowels at Short Durations

    PubMed Central

    2016-01-01

    Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure which in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar at very short durations but that, as stimulus duration increases, voiced-vowel performance would improve relative to whispered-vowel performance as pitch information becomes available. This pattern of results was shown for women’s but not for men’s voices. A whispered vowel needs to be three times longer than a voiced vowel before listeners can reliably tell whether it is spoken by a man or a woman (∼30 ms vs. ∼10 ms). Listeners were half as sensitive to information about speaker sex when it was carried by whispered rather than voiced vowels. PMID:27757218

  5. Infant sensitivity to speaker and language in learning a second label.

    PubMed

    Bhagwat, Jui; Casasola, Marianella

    2014-02-01

    Two experiments examined when monolingual, English-learning 19-month-old infants learn a second object label. Two experimenters sat together. One labeled a novel object with one novel label, whereas the other labeled the same object with a different label in either the same or a different language. Infants were tested on their comprehension of each label immediately following its presentation. Infants mapped the first label at above chance levels, but they did so with the second label only when requested by the speaker who provided it (Experiment 1) or when the second experimenter labeled the object in a different language (Experiment 2). These results show that 19-month-olds learn second object labels but do not readily generalize them across speakers of the same language. The results highlight how speaker and language spoken guide infants' acceptance of second labels, supporting sociopragmatic views of word learning. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Congenital Amusia in Speakers of a Tone Language: Association with Lexical Tone Agnosia

    ERIC Educational Resources Information Center

    Nan, Yun; Sun, Yanan; Peretz, Isabelle

    2010-01-01

    Congenital amusia is a neurogenetic disorder that affects the processing of musical pitch in speakers of non-tonal languages like English and French. We assessed whether this musical disorder exists among speakers of Mandarin Chinese who use pitch to alter the meaning of words. Using the Montreal Battery of Evaluation of Amusia, we tested 117…

  7. A Study of Non-Native English Speakers' Academic Performance at Santa Ana College.

    ERIC Educational Resources Information Center

    Slark, Julie; Bateman, Harold

    A study was conducted in 1980-81 at Santa Ana College (SAC) to collect data on the English communication skills of non-native English speakers and to determine if a relationship existed between these skills and students' educational success. A sample of 22 classes, with an enrollment of at least 50% non-native English speakers and representing a…

  8. Phoneme Error Pattern by Heritage Speakers of Spanish on an English Word Recognition Test.

    PubMed

    Shi, Lu-Feng

    2017-04-01

    Heritage speakers acquire their native language from home use in their early childhood. As the native language is typically a minority language in the society, these individuals receive their formal education in the majority language and eventually develop greater competency with the majority language than with their native language. To date, there have not been specific research attempts to understand word recognition by heritage speakers. It is not clear whether, and to what degree, findings from bilingual listeners in general can be applied to them. This preliminary study investigated how heritage speakers of Spanish perform on an English word recognition test and analyzed their phoneme errors. A prospective, cross-sectional, observational design was employed. Twelve normal-hearing adult Spanish heritage speakers (four men, eight women, 20-38 yr old) participated in the study. Their language background was obtained through the Language Experience and Proficiency Questionnaire. Nine English monolingual listeners (three men, six women, 20-41 yr old) were also included for comparison purposes. Listeners were presented with 200 Northwestern University Auditory Test No. 6 words in quiet. They repeated each word orally and in writing. Their responses were scored by word, word-initial consonant, vowel, and word-final consonant. Performance was compared between groups with Student's t test or analysis of variance. Group-specific error patterns were primarily descriptive, but intergroup comparisons were made using 95% or 99% confidence intervals for proportional data. The two groups of listeners yielded comparable scores when their responses were examined by word, vowel, and final consonant. However, heritage speakers of Spanish misidentified significantly more word-initial consonants and had significantly more difficulty with initial /p, b, h/ than their monolingual peers. The two groups yielded similar patterns for vowel and word-final consonants, but heritage speakers made significantly
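    The intergroup comparisons above relied on confidence intervals for proportions. The generic normal-approximation (Wald) interval can be sketched as follows (an illustration of the standard formula, not necessarily the authors' exact procedure; the error counts are hypothetical):

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion.
    z=1.96 gives an ~95% interval; z=2.576 gives ~99%."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical example: 30 word-initial consonant errors out of 200 items.
lo, hi = proportion_ci(30, 200)
print(lo, hi)
```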

  9. Combining Behavioral and ERP Methodologies to Investigate the Differences Between McGurk Effects Demonstrated by Cantonese and Mandarin Speakers

    PubMed Central

    Zhang, Juan; Meng, Yaxuan; McBride, Catherine; Fan, Xitao; Yuan, Zhen

    2018-01-01

    The present study investigated the impact of Chinese dialects on McGurk effect using behavioral and event-related potential (ERP) methodologies. Specifically, intra-language comparison of McGurk effect was conducted between Mandarin and Cantonese speakers. The behavioral results showed that Cantonese speakers exhibited a stronger McGurk effect in audiovisual speech perception compared to Mandarin speakers, although both groups performed equally in the auditory and visual conditions. ERP results revealed that Cantonese speakers were more sensitive to visual cues than Mandarin speakers, though this was not the case for the auditory cues. Taken together, the current findings suggest that the McGurk effect generated by Chinese speakers is mainly influenced by segmental phonology during audiovisual speech integration. PMID:29780312

  10. Communication-related affective, behavioral, and cognitive reactions in speakers with spasmodic dysphonia.

    PubMed

    Watts, Christopher R; Vanryckeghem, Martine

    2017-12-01

    To investigate the self-perceived affective, behavioral, and cognitive reactions associated with communication of speakers with spasmodic dysphonia as a function of employment status. Prospective cross-sectional investigation. 148 participants with spasmodic dysphonia (SD) completed an adapted version of the Behavior Assessment Battery (BAB-Voice), a multidimensional assessment of self-perceived reactions to communication. The BAB-Voice consisted of four subtests: the Speech Situation Checklist for A) Emotional Reaction (SSC-ER) and B) Speech Disruption (SSC-SD), C) the Behavior Checklist (BCL), and D) the Communication Attitude Test for Adults (BigCAT). Participants were assigned to groups based on employment status (working versus retired). Descriptive comparison of the BAB-Voice in speakers with SD to previously published non-dysphonic speaker data revealed substantially higher scores associated with SD across all four subtests. Multivariate Analysis of Variance (MANOVA) revealed no significantly different BAB-Voice subtest scores as a function of SD group status (working vs. retired). BAB-Voice scores revealed that speakers with SD experienced substantial impact of their voice disorder on communication attitude, coping behaviors, and affective reactions in speaking situations as reflected in their high BAB scores. These impacts do not appear to be influenced by work status, as speakers with SD who were employed or retired experienced similar levels of affective and behavioral reactions in various speaking situations and cognitive responses. These findings are consistent with previously published pilot data. The specificity of items assessed by means of the BAB-Voice may inform the clinician of valid patient-centered treatment goals that target the impairment extending beyond the physiological dimension. Level of evidence: 2b.

  11. Communication‐related affective, behavioral, and cognitive reactions in speakers with spasmodic dysphonia

    PubMed Central

    Vanryckeghem, Martine

    2017-01-01

    Objectives To investigate the self‐perceived affective, behavioral, and cognitive reactions associated with communication of speakers with spasmodic dysphonia as a function of employment status. Study Design Prospective cross‐sectional investigation Methods 148 participants with spasmodic dysphonia (SD) completed an adapted version of the Behavior Assessment Battery (BAB‐Voice), a multidimensional assessment of self‐perceived reactions to communication. The BAB‐Voice consisted of four subtests: the Speech Situation Checklist for A) Emotional Reaction (SSC‐ER) and B) Speech Disruption (SSC‐SD), C) the Behavior Checklist (BCL), and D) the Communication Attitude Test for Adults (BigCAT). Participants were assigned to groups based on employment status (working versus retired). Results Descriptive comparison of the BAB‐Voice in speakers with SD to previously published non‐dysphonic speaker data revealed substantially higher scores associated with SD across all four subtests. Multivariate Analysis of Variance (MANOVA) revealed no significantly different BAB‐Voice subtest scores as a function of SD group status (working vs. retired). Conclusions BAB‐Voice scores revealed that speakers with SD experienced substantial impact of their voice disorder on communication attitude, coping behaviors, and affective reactions in speaking situations as reflected in their high BAB scores. These impacts do not appear to be influenced by work status, as speakers with SD who were employed or retired experienced similar levels of affective and behavioral reactions in various speaking situations and cognitive responses. These findings are consistent with previously published pilot data. The specificity of items assessed by means of the BAB‐Voice may inform the clinician of valid patient‐centered treatment goals that target the impairment extending beyond the physiological dimension. Level of Evidence 2b PMID:29299525

  12. Speech rate reduction and "nasality" in normal speakers.

    PubMed

    Brancewicz, T M; Reich, A R

    1989-12-01

    This study explored the effects of reduced speech rate on nasal/voice accelerometric measures and nasality ratings. Nasal/voice accelerometric measures were obtained from normal adults for various speech stimuli and speaking rates. Stimuli included three sentences (one obstruent-loaded, one semivowel-loaded, and one containing a single nasal), and /pv/ syllable trains. Speakers read the stimuli at their normal rate, half their normal rate, and as slowly as possible. In addition, a computer program paced each speaker at rates of 1, 2, and 3 syllables per second. The nasal/voice accelerometric values revealed significant stimulus effects but no rate effects. The nasality ratings of experienced listeners, evaluated as a function of stimulus and speaking rate, were compared to the accelerometric measures. The nasality scale values demonstrated small, but statistically significant, stimulus and rate effects. However, the nasality percepts were poorly correlated with the nasal/voice accelerometric measures.

  13. Speaker Recognition Using Real vs. Synthetic Parallel Data for DNN Channel Compensation

    DTIC Science & Technology

    2016-08-18

    Speaker Recognition Using Real vs Synthetic Parallel Data for DNN Channel Compensation Fred Richardson, Michael Brandstein, Jennifer Melot and...denoising DNNs has been demonstrated for several speech technologies such as ASR and speaker recognition. This paper compares the use of real ...AVG and POOL min DCFs). In all cases, the telephone channel performance on SRE10 is improved by the denoising DNNs with the real Mixer 1 and 2

  14. Speaker Recognition Using Real vs Synthetic Parallel Data for DNN Channel Compensation

    DTIC Science & Technology

    2016-09-08

    Speaker Recognition Using Real vs Synthetic Parallel Data for DNN Channel Compensation Fred Richardson, Michael Brandstein, Jennifer Melot and...denoising DNNs has been demonstrated for several speech technologies such as ASR and speaker recognition. This paper compares the use of real ...AVG and POOL min DCFs). In all cases, the telephone channel performance on SRE10 is improved by the denoising DNNs with the real Mixer 1 and 2

  15. On the same wavelength: predictable language enhances speaker-listener brain-to-brain synchrony in posterior superior temporal gyrus.

    PubMed

    Dikker, Suzanne; Silbert, Lauren J; Hasson, Uri; Zevin, Jason D

    2014-04-30

    Recent research has shown that the degree to which speakers and listeners exhibit similar brain activity patterns during human linguistic interaction is correlated with communicative success. Here, we used an intersubject correlation approach in fMRI to test the hypothesis that a listener's ability to predict a speaker's utterance increases such neural coupling between speakers and listeners. Nine subjects listened to recordings of a speaker describing visual scenes that varied in the degree to which they permitted specific linguistic predictions. In line with our hypothesis, the temporal profile of listeners' brain activity was significantly more synchronous with the speaker's brain activity for highly predictive contexts in left posterior superior temporal gyrus (pSTG), an area previously associated with predictive auditory language processing. In this region, predictability differentially affected the temporal profiles of brain responses in the speaker and listeners respectively, in turn affecting correlated activity between the two: whereas pSTG activation increased with predictability in the speaker, listeners' pSTG activity instead decreased for more predictable sentences. Listeners additionally showed stronger BOLD responses for predictive images before sentence onset, suggesting that highly predictable contexts lead comprehenders to preactivate predicted words.
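    The intersubject correlation measure underlying this design is, at its core, a Pearson correlation between the speaker's and each listener's regional time courses. A minimal numpy sketch on synthetic data (our own illustration, not the study's actual fMRI pipeline):

```python
import numpy as np

def intersubject_correlation(speaker_ts, listener_ts_list):
    """Pearson correlation between a speaker's time course and each
    listener's time course from the same region (e.g. pSTG)."""
    return [float(np.corrcoef(speaker_ts, ts)[0, 1]) for ts in listener_ts_list]

rng = np.random.default_rng(0)
speaker = rng.standard_normal(200)                        # synthetic BOLD signal
coupled = 0.7 * speaker + 0.3 * rng.standard_normal(200)  # listener tracking the speaker
uncoupled = rng.standard_normal(200)                      # unrelated listener
r_coupled, r_uncoupled = intersubject_correlation(speaker, [coupled, uncoupled])
print(r_coupled > r_uncoupled)  # True: the coupled listener shows higher synchrony
```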

  16. Perceptual prothesis in native Spanish speakers

    NASA Astrophysics Data System (ADS)

    Theodore, Rachel M.; Schmidt, Anna M.

    2003-04-01

    Previous research suggests a perceptual bias exists for native phonotactics [D. Massaro and M. Cohen, Percept. Psychophys. 34, 338-348 (1983)] such that listeners report nonexistent segments when listening to stimuli that violate native phonotactics [E. Dupoux, K. Kakehi, Y. Hirose, C. Pallier, and J. Mehler, J. Exp. Psychol.: Human Percept. Perform. 25, 1568-1578 (1999)]. This study investigated how native-language experience affects second language processing, focusing on how native Spanish speakers perceive the English clusters /st/, /sp/, and /sk/, which represent phonotactically illegal forms in Spanish. To preserve native phonotactics, Spanish speakers often produce prothetic vowels before English words beginning with /s/ clusters. Is the influence of native phonotactics also present in the perception of illegal clusters? A stimulus continuum ranging from no vowel (e.g., ``sku'') to a full vowel (e.g., ``esku'') before the cluster was used. Four final vowel contexts were used for each cluster, resulting in 12 sCV and 12 VsCV nonword endpoints. English and Spanish listeners were asked to discriminate between pairs differing in vowel duration and to identify the presence or absence of a vowel before the cluster. Results will be discussed in terms of implications for theories of second language speech perception.

  17. The Space-Time Topography of English Speakers

    ERIC Educational Resources Information Center

    Duman, Steve

    2016-01-01

    English speakers talk and think about Time in terms of physical space. The past is behind us, and the future is in front of us. In this way, we "map" space onto Time. This dissertation addresses the specificity of this physical space, or its topography. Inspired by languages like Yupno (Nunez, et al., 2012) and Bamileke-Dschang (Hyman,…

  18. Parallel deterioration to language processing in a bilingual speaker.

    PubMed

    Druks, Judit; Weekes, Brendan Stuart

    2013-01-01

    The convergence hypothesis [Green, D. W. (2003). The neural basis of the lexicon and the grammar in L2 acquisition: The convergence hypothesis. In R. van Hout, A. Hulk, F. Kuiken, & R. Towell (Eds.), The interface between syntax and the lexicon in second language acquisition (pp. 197-218). Amsterdam: John Benjamins] assumes that the neural substrates of language representations are shared between the languages of a bilingual speaker. One prediction of this hypothesis is that neurodegenerative disease should produce parallel deterioration to lexical and grammatical processing in bilingual aphasia. We tested this prediction with a late bilingual Hungarian (first language, L1)-English (second language, L2) speaker J.B. who had nonfluent progressive aphasia (NFPA). J.B. had acquired L2 in adolescence but was premorbidly proficient and used English as his dominant language throughout adult life. Our investigations showed comparable deterioration to lexical and grammatical knowledge in both languages during a one-year period. Parallel deterioration to language processing in a bilingual speaker with NFPA challenges the assumption that L1 and L2 rely on different brain mechanisms as assumed in some theories of bilingual language processing [Ullman, M. T. (2001). The neural basis of lexicon and grammar in first and second language: The declarative/procedural model. Bilingualism: Language and Cognition, 4(1), 105-122].

  19. Does dynamic information about the speaker's face contribute to semantic speech processing? ERP evidence.

    PubMed

    Hernández-Gutiérrez, David; Abdel Rahman, Rasha; Martín-Loeches, Manuel; Muñoz, Francisco; Schacht, Annekathrin; Sommer, Werner

    2018-07-01

    Face-to-face interactions characterize communication in social contexts. These situations are typically multimodal, requiring the integration of linguistic auditory input with facial information from the speaker. In particular, eye gaze and visual speech provide the listener with social and linguistic information, respectively. Despite the importance of this context for an ecological study of language, research on audiovisual integration has mainly focused on the phonological level, leaving aside effects on semantic comprehension. Here we used event-related potentials (ERPs) to investigate the influence of facial dynamic information on semantic processing of connected speech. Participants were presented with either a video or a still picture of the speaker, concomitant to auditory sentences. Along three experiments, we manipulated the presence or absence of the speaker's dynamic facial features (mouth and eyes) and compared the amplitudes of the semantic N400 elicited by unexpected words. Contrary to our predictions, the N400 was not modulated by dynamic facial information; therefore, semantic processing seems to be unaffected by the speaker's gaze and visual speech. Nevertheless, during the processing of expected words, dynamic faces elicited a long-lasting late posterior positivity compared to the static condition. This effect was significantly reduced when the mouth of the speaker was covered. Our findings may indicate an increase in attentional processing in richer communicative contexts. The present findings also demonstrate that in natural communicative face-to-face encounters, perceiving the face of a speaker in motion provides supplementary information that is taken into account by the listener, especially when auditory comprehension is non-demanding. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Infants Selectively Pay Attention to the Information They Receive from a Native Speaker of Their Language.

    PubMed

    Marno, Hanna; Guellai, Bahia; Vidal, Yamil; Franzoi, Julia; Nespor, Marina; Mehler, Jacques

    2016-01-01

    From the first moments of their life, infants show a preference for their native language, as well as for speakers with whom they share the same language. This preference appears to have broad consequences in various domains later on, supporting group affiliations and collaborative actions in children. Here, we propose that infants' preference for native speakers of their language also serves a further purpose, specifically allowing them to efficiently acquire culture specific knowledge via social learning. By selectively attending to informants who are native speakers of their language and who probably also share the same cultural background with the infant, young learners can maximize the possibility to acquire cultural knowledge. To test whether infants would preferentially attend to the information they receive from a speaker of their native language, we familiarized 12-month-old infants with a native and a foreign speaker, and then presented them with movies where each of the speakers silently gazed toward unfamiliar objects. At test, infants' looking behavior to the two objects alone was measured. Results revealed that infants preferred to look longer at the object presented by the native speaker. Strikingly, the effect was replicated also with 5-month-old infants, indicating an early development of such preference. These findings provide evidence that young infants pay more attention to the information presented by a person with whom they share the same language. This selectivity can serve as a basis for efficient social learning by influencing how infants allocate attention between potential sources of information in their environment.

  1. Speaker Recognition by Combining MFCC and Phase Information in Noisy Conditions

    NASA Astrophysics Data System (ADS)

    Wang, Longbiao; Minami, Kazue; Yamamoto, Kazumasa; Nakagawa, Seiichi

    In this paper, we investigate the effectiveness of phase information for speaker recognition in noisy conditions and combine the phase information with mel-frequency cepstral coefficients (MFCCs). To date, almost all speaker recognition methods have been based on MFCCs, even in noisy conditions. MFCCs dominantly capture vocal tract information: only the magnitude of the Fourier transform of time-domain speech frames is used, and the phase information is ignored. The phase information is expected to complement MFCCs well because it includes rich voice source information. Furthermore, some studies have reported that phase-based features are robust to noise. In our previous study, a phase information extraction method was proposed that normalizes the variation in phase caused by the clipping position of the input speech, and the performance of the combination of the phase information and MFCCs was remarkably better than that of MFCCs alone. In this paper, we evaluate the robustness of the proposed phase information for speaker identification in noisy conditions. Spectral subtraction, a method that skips frames with low energy/signal-to-noise (SN) ratio, and noisy-speech training models are used to analyze the effect of the phase information and MFCCs in noisy conditions. The NTT database and the JNAS (Japanese Newspaper Article Sentences) database, with stationary/non-stationary noise added, were used to evaluate our proposed method. MFCCs outperformed the phase information for clean speech. On the other hand, the degradation of the phase information was significantly smaller than that of MFCCs for noisy speech. The individual result of the phase information was even better than that of MFCCs in many cases with clean-speech training models. By deleting unreliable frames (frames having low energy/SN), the speaker identification performance was improved significantly. By integrating the phase information with MFCCs, the speaker identification error reduction
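    The kind of clipping-position normalization described above can be illustrated with a toy example: a circular shift of the analysis frame by d samples adds a phase of -2πkd/N to FFT bin k, so re-referencing each bin's phase to a base bin cancels the offset. The numpy sketch below is our own illustration of this general idea, not the authors' exact method:

```python
import numpy as np

def normalized_phase_features(frame, ref_bin=1):
    """Phase features that are invariant to a circular shift of the frame.
    A shift by d samples adds -2*pi*k*d/N to the phase of bin k; subtracting
    k times the phase of bin 1 cancels this offset (modulo 2*pi).  Features
    are returned as (cos, sin) pairs so that 2*pi wrap-around is harmless."""
    spec = np.fft.rfft(frame)
    theta = np.angle(spec)
    k = np.arange(len(spec))
    psi = theta - (k / ref_bin) * theta[ref_bin]  # ref_bin=1 keeps k/ref_bin integral
    return np.concatenate([np.cos(psi), np.sin(psi)])

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
f1 = normalized_phase_features(x)
f2 = normalized_phase_features(np.roll(x, 5))  # same frame, shifted clipping position
print(np.allclose(f1, f2))  # True: features unchanged by the shift
```

    Features built this way can then be modeled alongside MFCCs (e.g., with separate classifiers whose scores are combined), which is the spirit of the combination the abstract evaluates.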

  2. Perceptual and acoustic analysis of lexical stress in Greek speakers with dysarthria.

    PubMed

    Papakyritsis, Ioannis; Müller, Nicole

    2014-01-01

    The study reported in this paper investigated the abilities of Greek speakers with dysarthria to signal lexical stress at the single word level. Three speakers with dysarthria and two unimpaired control participants were recorded completing a repetition task of a list of words consisting of minimal pairs of Greek disyllabic words contrasted by lexical stress location only. Fourteen listeners were asked to determine the attempted stress location for each word pair. Acoustic analyses of duration and intensity ratios, both within and across words, were undertaken to identify possible acoustic correlates of the listeners' judgments concerning stress location. Acoustic and perceptual data indicate that while each participant with dysarthria in this study had some difficulty in signaling stress unambiguously, the pattern of difficulty was different for each speaker. Further, it was found that the relationship between the listeners' judgments of stress location and the acoustic data was not conclusive.

  3. Right anterior superior temporal activation predicts auditory sentence comprehension following aphasic stroke.

    PubMed

    Crinion, Jenny; Price, Cathy J

    2005-12-01

    Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second level analyses were then used to investigate how story activation changed with the patients' auditory sentence comprehension skills and surprise story recognition memory tests post-scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. The implication of this double dissociation in the effects of auditory sentence comprehension and story recognition memory is that left frontal and left temporal activations are dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex where the response was dissociated from that in the left posterior temporal lobe.

  4. Switches to English during French Service Encounters: Relationships with L2 French Speakers' Willingness to Communicate and Motivation

    ERIC Educational Resources Information Center

    McNaughton, Stephanie; McDonough, Kim

    2015-01-01

    This exploratory study investigated second language (L2) French speakers' service encounters in the multilingual setting of Montreal, specifically whether switches to English during French service encounters were related to L2 speakers' willingness to communicate or motivation. Over a two-week period, 17 French L2 speakers in Montreal submitted…

  5. The object of my desire: Five-year-olds rapidly reason about a speaker's desire during referential communication.

    PubMed

    San Juan, Valerie; Chambers, Craig G; Berman, Jared; Humphry, Chelsea; Graham, Susan A

    2017-10-01

    Two experiments examined whether 5-year-olds draw inferences about desire outcomes that constrain their online interpretation of an utterance. Children were informed of a speaker's positive (Experiment 1) or negative (Experiment 2) desire to receive a specific toy as a gift before hearing a referentially ambiguous statement ("That's my present") spoken with either a happy or sad voice. After hearing the speaker express a positive desire, children (N=24) showed an implicit (i.e., eye gaze) and explicit ability to predict reference to the desired object when the speaker sounded happy, but they showed only implicit consideration of the alternate object when the speaker sounded sad. After hearing the speaker express a negative desire, children (N=24) used only happy prosodic cues to predict the intended referent of the statement. Taken together, the findings indicate that the efficiency with which 5-year-olds integrate desire reasoning with language processing depends on the emotional valence of the speaker's voice but not on the type of desire representations (i.e., positive vs. negative) that children must reason about online. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers.

    PubMed

    Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari

    2017-01-01

    Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues since in Finnish, vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers, either in the auditory brainstem response or in behavioral tasks; however, musically sophisticated speakers do show enhanced pitch discrimination compared to Finnish speakers with less musical experience, as well as greater duration modulation in a complex task. These results are consistent with a ceiling effect, set by the phonology of the native language, for certain sound features, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real world musical situation. These results have implications for research into the specificity of plasticity in the auditory system as well as to the effects of interaction of specific language features with musical experiences.

  7. Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers

    PubMed Central

    Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari

    2017-01-01

    Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers, as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues, since in Finnish vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers, either in the auditory brainstem response or in behavioral tasks. Musically sophisticated speakers do, however, show enhanced pitch discrimination compared to Finnish speakers with less musical experience, as well as greater duration modulation in a complex task. These results are consistent with a ceiling effect, set by the phonology of the native language, for certain sound features, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real-world musical situation. These results have implications for research into the specificity of plasticity in the auditory system, as well as into the effects of the interaction of specific language features with musical experiences. PMID:28450829

  8. Use of the BAT with a Cantonese-Putonghua speaker with aphasia.

    PubMed

    Kong, Anthony Pak-Hin; Weekes, Brendan Stuart

    2011-06-01

    The aim of this article is to illustrate the use of the Bilingual Aphasia Test (BAT) with a Cantonese-Putonghua speaker. We describe G, who is a relatively young Chinese bilingual speaker with aphasia. G's communication abilities in his L2, Putonghua, were impaired following brain damage. This impairment caused specific difficulties in communication with his wife, a native Putonghua speaker, and was thus a priority for investigation. Given a paucity of standardised tests of aphasia in Putonghua, our goal was to use the BAT to assess G's impairments in his L2. Results showed that G's performance on the BAT subtests measuring word and sentence comprehension and production was impaired. His pattern of performance on the BAT allowed us to generate hypotheses about his higher-level language impairments in Putonghua, which were subsequently found to be impaired. We argue that the BAT is able to capture the primary language impairments in Chinese-speaking patients with aphasia when Putonghua is the second language. We also suggest some modifications to the BAT for testing Chinese-speaking patients with bilingual aphasia.

  9. Challenging stereotypes and changing attitudes: Improving quality of care for people with hepatitis C through Positive Speakers programs.

    PubMed

    Brener, Loren; Wilson, Hannah; Rose, Grenville; Mackenzie, Althea; de Wit, John

    2013-01-01

    Positive Speakers programs consist of people who are trained to speak publicly about their illness. The focus of these programs, especially with stigmatised illnesses such as hepatitis C (HCV), is to inform others of the speakers' experiences, thereby humanising the illness and reducing ignorance associated with the disease. This qualitative research aimed to understand the perceived impact of Positive Speakers programs on changing audience members' attitudes towards people with HCV. Interviews were conducted with nine Positive Speakers and 16 of their audience members to assess the way in which these sessions were perceived by both speakers and the audience to challenge stereotypes and stigma associated with HCV and promote positive attitude change amongst the audience. Data were analysed using Intergroup Contact Theory to frame the analysis, with a focus on whether the program met the optimal conditions to promote attitude change. Findings suggest that there are a number of vital components to this Positive Speakers program which ensure that the program meets the requirements for successful and equitable intergroup contact. This Positive Speakers program thereby helps to deconstruct stereotypes about people with HCV, while simultaneously increasing positive attitudes among audience members, with the ultimate aim of improving quality of health care and treatment for people with HCV.

  10. Teaching Mandarin to Speakers of Other Dialects.

    ERIC Educational Resources Information Center

    Munro, Stanley R.

    Despite a common attitude that it is very difficult, and possibly unwise, to try to teach Mandarin Chinese to speakers of other dialects, there is a social and academic need for this kind of course, and it is possible to teach it successfully to most students. In the University of Alberta's program the likely candidates are the large group of…

  11. Neural decoding of attentional selection in multi-speaker environments without access to clean sources

    NASA Astrophysics Data System (ADS)

    O'Sullivan, James; Chen, Zhuo; Herrero, Jose; McKhann, Guy M.; Sheth, Sameer A.; Mehta, Ashesh D.; Mesgarani, Nima

    2017-10-01

    Objective. People who suffer from hearing impairments can find it difficult to follow a conversation in a multi-speaker environment. Current hearing aids can suppress background noise; however, there is little that can be done to help a user attend to a single conversation amongst many without knowing which speaker the user is attending to. Cognitively controlled hearing aids that use auditory attention decoding (AAD) methods are the next step in offering help. Translating the successes in AAD research to real-world applications poses a number of challenges, including the lack of access to the clean sound sources in the environment against which to compare the neural signals. We propose a novel framework that combines single-channel speech separation algorithms with AAD. Approach. We present an end-to-end system that (1) receives a single audio channel containing a mixture of speakers that is heard by a listener along with the listener’s neural signals, (2) automatically separates the individual speakers in the mixture, (3) determines the attended speaker, and (4) amplifies the attended speaker’s voice to assist the listener. Main results. Using invasive electrophysiology recordings, we identified the regions of the auditory cortex that contribute to AAD. Given appropriate electrode locations, our system is able to decode the attention of subjects and amplify the attended speaker using only the mixed audio. Our quality assessment of the modified audio demonstrates a significant improvement in both subjective and objective speech quality measures. Significance. Our novel framework for AAD bridges the gap between the most recent advancements in speech processing technologies and speech prosthesis research and moves us closer to the development of cognitively controlled hearable devices for the hearing impaired.
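    The decode-and-amplify loop in steps (3) and (4) can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the function names, the Pearson-correlation decoder, and the 9 dB gain are assumptions; the actual system uses trained stimulus-reconstruction decoders on invasive neural recordings and a deep-learning single-channel separator.

    ```python
    import numpy as np

    def decode_attention(reconstructed_env, speaker_envelopes):
        """Pick the speaker whose speech envelope best matches the envelope
        reconstructed from the listener's neural signals (Pearson r)."""
        corrs = [np.corrcoef(reconstructed_env, env)[0, 1]
                 for env in speaker_envelopes]
        return int(np.argmax(corrs)), corrs

    def remix(separated_streams, attended_idx, gain_db=9.0):
        """Re-mix the separated streams, boosting the attended speaker."""
        gain = 10 ** (gain_db / 20)
        out = sum(gain * s if i == attended_idx else s
                  for i, s in enumerate(separated_streams))
        return out / np.max(np.abs(out))  # normalize to avoid clipping

    # Toy demo: the envelope decoded from neural data tracks speaker 1.
    rng = np.random.default_rng(0)
    env0, env1 = rng.random(1000), rng.random(1000)
    neural_env = env1 + 0.3 * rng.random(1000)
    idx, corrs = decode_attention(neural_env, [env0, env1])
    print(idx)  # 1: speaker 1 is decoded as attended
    ```

    In the real system the envelopes would come from the automatic speech separator, and the decoder would be re-applied over short sliding windows so the device can track switches of attention.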

  12. Age differences in vocal emotion perception: on the role of speaker age and listener sex.

    PubMed

    Sen, Antarika; Isaacowitz, Derek; Schirmer, Annett

    2017-10-24

    Older adults have greater difficulty than younger adults perceiving vocal emotions. To better characterise this effect, we explored its relation to age differences in sensory, cognitive and emotional functioning. Additionally, we examined the role of speaker age and listener sex. Participants (N = 163) aged 19-34 years and 60-85 years categorised neutral sentences spoken by ten younger and ten older speakers with a happy, neutral, sad, or angry voice. Acoustic analyses indicated that expressions from younger and older speakers denoted the intended emotion with similar accuracy. As expected, younger participants outperformed older participants, and this effect was statistically mediated by an age-related decline in both optimism and working memory. Additionally, age differences in emotion perception were larger for younger than for older speakers, and the advantage for perceiving younger over older speakers was more pronounced in younger than in older participants. Lastly, a female perception benefit was less pervasive in the older than in the younger group. Together, these findings suggest that the role of age for emotion perception is multi-faceted. It is linked to emotional and cognitive change, to processing biases that benefit young and own-age expressions, and to the different aptitudes of women and men.

  13. Interactive voice technology: Variations in the vocal utterances of speakers performing a stress-inducing task

    NASA Astrophysics Data System (ADS)

    Mosko, J. D.; Stevens, K. N.; Griffin, G. R.

    1983-08-01

    Acoustical analyses were conducted of words produced by four speakers in a motion stress-inducing situation. The aim of the analyses was to document the kinds of changes that occur in the vocal utterances of speakers who are exposed to motion stress and to comment on the implications of these results for the design and development of voice interactive systems. The speakers differed markedly in the types and magnitudes of the changes that occurred in their speech. For some speakers, the stress-inducing experimental condition caused an increase in fundamental frequency, changes in the pattern of vocal fold vibration, shifts in vowel production and changes in the relative amplitudes of sounds containing turbulence noise. All speakers showed greater variability in the experimental condition than in a more relaxed control situation. The variability was manifested in the acoustical characteristics of individual phonetic elements, particularly in unstressed syllables. The kinds of changes and variability observed serve to emphasize the limitations of speech recognition systems based on template matching of patterns that are stored in the system during a training phase. There is a need for a better understanding of these phonetic modifications and for developing ways of incorporating knowledge about these changes within a speech recognition system.

  14. Physiological Indices of Bilingualism: Oral-Motor Coordination and Speech Rate in Bengali-English Speakers

    ERIC Educational Resources Information Center

    Chakraborty, Rahul; Goffman, Lisa; Smith, Anne

    2008-01-01

    Purpose: To examine how age of immersion and proficiency in a 2nd language influence speech movement variability and speaking rate in both a 1st language and a 2nd language. Method: A group of 21 Bengali-English bilingual speakers participated. Lip and jaw movements were recorded. For all 21 speakers, lip movement variability was assessed based on…

  15. Processing ser and estar to locate objects and events: An ERP study with L2 speakers of Spanish.

    PubMed

    Dussias, Paola E; Contemori, Carla; Román, Patricia

    2014-01-01

    In Spanish locative constructions, a different form of the copula is selected in relation to the semantic properties of the grammatical subject: sentences that locate objects require estar while those that locate events require ser (both translated in English as 'to be'). In an ERP study, we examined whether second language (L2) speakers of Spanish are sensitive to the selectional restrictions that the different types of subjects impose on the choice of the two copulas. Twenty-four native speakers of Spanish and two groups of L2 Spanish speakers (24 beginners and 18 advanced speakers) were recruited to investigate the processing of 'object/event + estar/ser' permutations. Participants provided grammaticality judgments on correct (object + estar; event + ser) and incorrect (object + ser; event + estar) sentences while their brain activity was recorded. In line with previous studies (Leone-Fernández, Molinaro, Carreiras, & Barber, 2012; Sera, Gathje, & Pintado, 1999), the results of the grammaticality judgment for the native speakers showed that participants correctly accepted object + estar and event + ser constructions. In addition, while 'object + ser' constructions were considered grossly ungrammatical, 'event + estar' combinations were perceived as unacceptable to a lesser degree. For these same participants, ERP recording time-locked to the onset of the critical word 'en' showed a larger P600 for the ser predicates when the subject was an object than when it was an event (*La silla es en la cocina vs. La fiesta es en la cocina). This P600 effect is consistent with syntactic repair of the defining predicate when it does not fit with the adequate semantic properties of the subject. For estar predicates (La silla está en la cocina vs. *La fiesta está en la cocina), the findings showed a central-frontal negativity between 500-700 ms. Grammaticality judgment data for the L2 speakers of Spanish showed that beginners were significantly less accurate than

  16. The Effect of Noise on Relationships Between Speech Intelligibility and Self-Reported Communication Measures in Tracheoesophageal Speakers.

    PubMed

    Eadie, Tanya L; Otero, Devon Sawin; Bolt, Susan; Kapsner-Smith, Mara; Sullivan, Jessica R

    2016-08-01

    The purpose of this study was to examine how sentence intelligibility relates to self-reported communication in tracheoesophageal speakers when speech intelligibility is measured in quiet and noise. Twenty-four tracheoesophageal speakers who were at least 1 year postlaryngectomy provided audio recordings of 5 sentences from the Sentence Intelligibility Test. Speakers also completed self-reported measures of communication-the Voice Handicap Index-10 and the Communicative Participation Item Bank short form. Speech recordings were presented to 2 groups of inexperienced listeners who heard sentences in quiet or noise. Listeners transcribed the sentences to yield speech intelligibility scores. Very weak relationships were found between intelligibility in quiet and measures of voice handicap and communicative participation. Slightly stronger, but still weak and nonsignificant, relationships were observed between measures of intelligibility in noise and both self-reported measures. However, 12 speakers who were more than 65% intelligible in noise showed strong and statistically significant relationships with both self-reported measures (R2 = .76-.79). Speech intelligibility in quiet is a weak predictor of self-reported communication measures in tracheoesophageal speakers. Speech intelligibility in noise may be a better metric of self-reported communicative function for speakers who demonstrate higher speech intelligibility in noise.

  17. The influence of visual speech information on the intelligibility of English consonants produced by non-native speakers.

    PubMed

    Kawase, Saya; Hannah, Beverly; Wang, Yue

    2014-09-01

    This study examines how visual speech information affects native judgments of the intelligibility of speech sounds produced by non-native (L2) speakers. Native Canadian English perceivers as judges perceived three English phonemic contrasts (/b-v, θ-s, l-ɹ/) produced by native Japanese speakers as well as native Canadian English speakers as controls. These stimuli were presented under audio-visual (AV, with speaker voice and face), audio-only (AO), and visual-only (VO) conditions. The results showed that, across conditions, the overall intelligibility of Japanese productions of the native (Japanese)-like phonemes (/b, s, l/) was significantly higher than the non-Japanese phonemes (/v, θ, ɹ/). In terms of visual effects, the more visually salient non-Japanese phonemes /v, θ/ were perceived as significantly more intelligible when presented in the AV compared to the AO condition, indicating enhanced intelligibility when visual speech information is available. However, the non-Japanese phoneme /ɹ/ was perceived as less intelligible in the AV compared to the AO condition. Further analysis revealed that, unlike the native English productions, the Japanese speakers produced /ɹ/ without visible lip-rounding, indicating that non-native speakers' incorrect articulatory configurations may decrease the degree of intelligibility. These results suggest that visual speech information may either positively or negatively affect L2 speech intelligibility.

  18. Communication Boot Camp: Discover the Speaker in You!

    ERIC Educational Resources Information Center

    Binti Ali, Zuraidah; Binti Nor Azmi, Noor Hafiza; Phillip, Alicia; bin Mokhtar, Mohd Zin

    2013-01-01

    Learning can take place almost anywhere, and this is especially true for our undergraduates who wish to become public speakers. Besides university courses and public speaking workshops on campus grounds, undergraduates are now looking for a different learning environment--communication boot camps! This study presents a compilation of learners'…

  19. A Statistical Method of Evaluating the Pronunciation Proficiency/Intelligibility of English Presentations by Japanese Speakers

    ERIC Educational Resources Information Center

    Kibishi, Hiroshi; Hirabayashi, Kuniaki; Nakagawa, Seiichi

    2015-01-01

    In this paper, we propose a statistical evaluation method of pronunciation proficiency and intelligibility for presentations made in English by native Japanese speakers. We statistically analyzed the actual utterances of speakers to find combinations of acoustic and linguistic features with high correlation between the scores estimated by the…

  20. Effects of syllable structure in aphasic errors: implications for a new model of speech production.

    PubMed

    Romani, Cristina; Galluzzi, Claudia; Bureca, Ivana; Olson, Andrew

    2011-03-01

    Current models of word production assume that words are stored as linear sequences of phonemes which are structured into syllables only at the moment of production. This is because syllable structure is always recoverable from the sequence of phonemes. In contrast, we present theoretical and empirical evidence that syllable structure is lexically represented. Storing syllable structure would have the advantage of making representations more stable and resistant to damage. On the other hand, re-syllabifications affect only a minimal part of phonological representations and occur only in some languages and speech registers. Evidence for these claims comes from analyses of aphasic errors which not only respect phonotactic constraints, but also avoid transformations which move the syllabic structure of the word further away from the original structure, even when equating for segmental complexity. This is true across tasks, types of errors, and, crucially, types of patients. The same syllabic effects are shown by apraxic patients and by phonological patients who have more central difficulties in retrieving phonological representations. If syllable structure were only computed after phoneme retrieval, it would have no way to influence the errors of phonological patients. Our results have implications for psycholinguistic and computational models of language as well as for clinical and educational practices. Copyright © 2010 Elsevier Inc. All rights reserved.

  1. An Investigation into the Tense/Aspect Preferences of Turkish Speakers of English and Native English Speakers in Their Oral Narration

    ERIC Educational Resources Information Center

    Bada, Erdogan; Genc, Bilal

    2007-01-01

    The study of SLA began in the early 1970s with the emergence of both theoretical and empirical studies. Undoubtedly, the acquisition of tense/aspect, besides other topics, has attracted much interest from researchers. This study investigated the use of telic and atelic verb forms in the oral production of Turkish speakers of English…

  2. Psycholinguistic Approaches to Language Processing in Heritage Speakers

    ERIC Educational Resources Information Center

    Bolger, Patrick A.; Zapata, Gabriela C.

    2011-01-01

    This paper focuses on the dearth of language-processing research addressing Spanish heritage speakers in assimilationist communities. First, we review key offline work on this population, and we then summarize the few psycholinguistic (online) studies that have already been carried out. In an attempt to encourage more such research, in the next…

  3. Facial Expression Generation from Speaker's Emotional States in Daily Conversation

    NASA Astrophysics Data System (ADS)

    Mori, Hiroki; Ohshima, Koh

    A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former is represented by vectors with psychologically-defined abstract dimensions, and the latter is coded by the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method is verified by a subjective evaluation test. As a result, the Mean Opinion Score with respect to the suitability of generated facial expression was 3.86 for the speaker, which was close to that of hand-made facial expressions.
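    The learned mapping can be sketched with a toy network. Everything concrete below is an assumption for illustration: the two psychological dimensions (a valence-like and an arousal-like axis), the three action units, the synthetic training targets, and the network size are invented; the study trained on rated utterances of one female speaker and coded expressions with the full Facial Action Coding System.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy training data: (valence, arousal) -> intensities of a few
    # hypothetical FACS action units (AU6 cheek raiser, AU12 lip corner
    # puller, AU15 lip corner depressor). Real targets would come from
    # human-coded facial expressions of rated utterances.
    X = rng.uniform(-1, 1, (200, 2))
    Y = np.clip(np.stack([0.5 + 0.5 * X[:, 0],            # AU6 rises with valence
                          0.5 + 0.4 * X[:, 0],            # AU12 rises with valence
                          0.5 - 0.5 * X[:, 0]], 1), 0, 1)  # AU15 falls with valence

    # One-hidden-layer network trained by plain gradient descent on MSE.
    W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 3)); b2 = np.zeros(3)
    for _ in range(2000):
        H = np.tanh(X @ W1 + b1)          # hidden activations
        P = H @ W2 + b2                   # predicted AU intensities
        G = 2 * (P - Y) / len(X)          # gradient of MSE w.r.t. P
        W2 -= 0.1 * H.T @ G; b2 -= 0.1 * G.sum(0)
        GH = (G @ W2.T) * (1 - H ** 2)    # backprop through tanh
        W1 -= 0.1 * X.T @ GH; b1 -= 0.1 * GH.sum(0)

    # A positive-valence state should activate the smile-related units.
    happy = np.tanh(np.array([0.9, 0.3]) @ W1 + b1) @ W2 + b2
    print(np.round(happy, 2))  # AU6/AU12 high, AU15 low for a happy state
    ```

    The design choice mirrors the abstract: a continuous emotion vector goes in, action-unit intensities come out, and any renderer that consumes FACS codes can then draw the face.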

  4. Brief Report: Relations between Prosodic Performance and Communication and Socialization Ratings in High Functioning Speakers with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Paul, Rhea; Shriberg, Lawrence D.; McSweeny, Jane; Cicchetti, Domenic; Klin, Ami; Volkmar, Fred

    2005-01-01

    Shriberg "et al." [Shriberg, L. "et al." (2001). "Journal of Speech, Language and Hearing Research, 44," 1097-1115] described prosody-voice features of 30 high functioning speakers with autistic spectrum disorder (ASD) compared to age-matched control speakers. The present study reports additional information on the speakers with ASD, including…

  5. White Native English Speakers Needed: The Rhetorical Construction of Privilege in Online Teacher Recruitment Spaces

    ERIC Educational Resources Information Center

    Ruecker, Todd; Ives, Lindsey

    2015-01-01

    Over the past few decades, scholars have paid increasing attention to the role of native speakerism in the field of TESOL. Several recent studies have exposed instances of native speakerism in TESOL recruitment discourses published through a variety of media, but none have focused specifically on professional websites advertising programs in…

  6. Subglottal resonances of adult male and female native speakers of American English.

    PubMed

    Lulich, Steven M; Morton, John R; Arsikere, Harish; Sommers, Mitchell S; Leung, Gary K F; Alwan, Abeer

    2012-10-01

    This paper presents a large-scale study of subglottal resonances (SGRs) (the resonant frequencies of the tracheo-bronchial tree) and their relations to various acoustical and physiological characteristics of speakers. The paper presents data from a corpus of simultaneous microphone and accelerometer recordings of consonant-vowel-consonant (CVC) words embedded in a carrier phrase spoken by 25 male and 25 female native speakers of American English ranging in age from 18 to 24 yr. The corpus contains 17,500 utterances of 14 American English monophthongs, diphthongs, and the rhotic approximant [ɹ].

  7. Extending Situated Language Comprehension (Accounts) with Speaker and Comprehender Characteristics: Toward Socially Situated Interpretation.

    PubMed

    Münster, Katja; Knoeferle, Pia

    2017-01-01

    More and more findings suggest a tight temporal coupling between (non-linguistic) socially interpreted context and language processing. Still, real-time language processing accounts remain largely elusive with respect to the influence of biological (e.g., age) and experiential (e.g., world and moral knowledge) comprehender characteristics and the influence of the 'socially interpreted' context, as for instance provided by the speaker. This context could include actions, facial expressions, a speaker's voice or gaze, and gestures among others. We review findings from social psychology, sociolinguistics and psycholinguistics to highlight the relevance of (the interplay between) the socially interpreted context and comprehender characteristics for language processing. The review informs the extension of an extant real-time processing account (already featuring a coordinated interplay between language comprehension and the non-linguistic visual context) with a variable ('ProCom') that captures characteristics of the language user and with a first approximation of the comprehender's speaker representation. Extending the CIA to the sCIA (social Coordinated Interplay Account) is the first step toward a real-time language comprehension account which might eventually accommodate the socially situated communicative interplay between comprehenders and speakers.

  8. Validity of Single-Item Screening for Limited Health Literacy in English and Spanish Speakers.

    PubMed

    Bishop, Wendy Pechero; Craddock Lee, Simon J; Skinner, Celette Sugg; Jones, Tiffany M; McCallister, Katharine; Tiro, Jasmin A

    2016-05-01

    To evaluate 3 single-item screening measures for limited health literacy in a community-based population of English and Spanish speakers. We recruited 324 English and 314 Spanish speakers from a community research registry in Dallas, Texas, enrolled between 2009 and 2012. We used 3 screening measures: (1) How would you rate your ability to read?; (2) How confident are you filling out medical forms by yourself?; and (3) How often do you have someone help you read hospital materials? In analyses stratified by language, we used area under the receiver operating characteristic (AUROC) curves to compare each item with the validated 40-item Short Test of Functional Health Literacy in Adults. For English speakers, no difference was seen among the items. For Spanish speakers, "ability to read" identified inadequate literacy better than "help reading hospital materials" (AUROC curve = 0.76 vs 0.65; P = .019). The "ability to read" item performed the best, supporting use as a screening tool in safety-net systems caring for diverse populations. Future studies should investigate how to implement brief measures in safety-net settings and whether highlighting health literacy level influences providers' communication practices and patient outcomes.
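    The AUROC comparison behind the "ability to read" result can be illustrated with the rank-sum identity. The numbers below are invented toy responses, not the study's data; they only show how one screening item can out-rank another against the S-TOFHLA reference standard.

    ```python
    import numpy as np

    def auroc(scores, labels):
        """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
        the probability that a randomly chosen positive case scores higher
        than a randomly chosen negative one (ties count half)."""
        scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
        pos, neg = scores[labels], scores[~labels]
        wins = ((pos[:, None] > neg[None, :]).sum()
                + 0.5 * (pos[:, None] == neg[None, :]).sum())
        return wins / (len(pos) * len(neg))

    # Toy data: 1 = inadequate literacy on the (reference) S-TOFHLA.
    inadequate      = np.array([1, 1, 1, 0, 0, 0, 0, 0])
    ability_to_read = np.array([4, 5, 3, 2, 1, 2, 1, 3])  # higher = worse self-rating
    help_reading    = np.array([3, 2, 4, 2, 3, 1, 2, 3])
    print(auroc(ability_to_read, inadequate),
          auroc(help_reading, inadequate))  # "ability to read" ranks better
    ```

    Comparing two such AUROC values on the same participants, as the study did for Spanish speakers, needs a paired test (e.g. DeLong's method) rather than a bare difference, which is presumably what produced the reported P = .019.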

  9. Euclidean Distances as measures of speaker similarity including identical twin pairs: A forensic investigation using source and filter voice characteristics.

    PubMed

    San Segundo, Eugenia; Tsanas, Athanasios; Gómez-Vilda, Pedro

    2017-01-01

    There is a growing consensus that hybrid approaches are necessary for successful speaker characterization in Forensic Speaker Comparison (FSC); hence this study explores the forensic potential of voice features combining source and filter characteristics. The former relate to the action of the vocal folds while the latter reflect the geometry of the speaker's vocal tract. This set of features has been extracted from pause fillers, which are long enough for robust feature estimation while spontaneous enough to be extracted from voice samples in real forensic casework. Speaker similarity was measured using standardized Euclidean Distances (ED) between pairs of speakers: 54 different-speaker (DS) comparisons, 54 same-speaker (SS) comparisons and 12 comparisons between monozygotic twins (MZ). Results revealed that the differences between DS and SS comparisons were significant in both high quality and telephone-filtered recordings, with no false rejections and limited false acceptances; this finding suggests that this set of voice features is highly speaker-dependent and therefore forensically useful. Mean ED for MZ pairs lies between the average ED for SS comparisons and DS comparisons, as expected according to the literature on twin voices. Specific cases of MZ speakers with very high ED (i.e. strong dissimilarity) are discussed in the context of sociophonetic and twin studies. A preliminary simplification of the Vocal Profile Analysis (VPA) Scheme is proposed, which enables the quantification of voice quality features in the perceptual assessment of speaker similarity, and allows for the calculation of perceptual-acoustic correlations. The adequacy of z-score normalization for this study is also discussed, as well as the relevance of heat maps for detecting the so-called phantoms in recent approaches to the biometric menagerie. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
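    The standardized Euclidean distance used for the pairwise comparisons can be sketched as follows. The feature names and all numbers are hypothetical; the study's actual vectors combine source (vocal-fold) and filter (vocal-tract) measures extracted from pause fillers, with per-feature standard deviations estimated over the whole speaker set.

    ```python
    import numpy as np

    def standardized_euclidean(a, b, sigma):
        """Euclidean distance after scaling each feature difference by that
        feature's standard deviation, so no single acoustic measure
        dominates (equivalent to z-scoring before an ordinary distance)."""
        a, b, sigma = map(np.asarray, (a, b, sigma))
        return float(np.sqrt(np.sum(((a - b) / sigma) ** 2)))

    # Hypothetical per-recording feature vectors (e.g. jitter, shimmer,
    # harmonics-to-noise ratio, first formant of a pause filler) and the
    # per-feature SDs estimated across all speakers in the corpus.
    sigma   = np.array([0.4, 1.1, 2.5, 60.0])
    same_a  = np.array([1.0, 3.2, 18.0, 510.0])  # speaker X, session 1
    same_b  = np.array([1.1, 3.0, 17.5, 520.0])  # speaker X, session 2
    diff    = np.array([1.8, 5.5, 12.0, 640.0])  # a different speaker
    print(standardized_euclidean(same_a, same_b, sigma) <
          standardized_euclidean(same_a, diff, sigma))  # True
    ```

    Collecting these distances for all SS, DS and MZ pairs and comparing their distributions is what supports the abstract's ordering: SS distances smallest, MZ in between, DS largest.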

  10. Noun Countability Judgments by Arabic Speakers of English

    ERIC Educational Resources Information Center

    Alenizi, Aied

    2017-01-01

    In an attempt to better understand the relationship between use of the English indefinite article and L1 transfer in L2 countability judgments by speakers of non-classifier languages, the current study investigates how Saudi EFL learners judge noun countability in English. The study aims to find: (1) whether countability judgments…

  11. Reducing Homophobia through Gay and Lesbian Speaker Panels.

    ERIC Educational Resources Information Center

    Reinhardt, Brian

    This study investigated the effects of attendance at a panel featuring gay and lesbian speakers on self-reported measures of individual homophobia. A total of 200 female and 120 male college students enrolled in human sexuality classes were randomly assigned to pretest or no pretest conditions and completed surveys before, after, and one month…

  12. Auditory Training for Experienced and Inexperienced Second-Language Learners: Native French Speakers Learning English Vowels

    ERIC Educational Resources Information Center

    Iverson, Paul; Pinet, Melanie; Evans, Bronwen G.

    2012-01-01

    This study examined whether high-variability auditory training on natural speech can benefit experienced second-language English speakers who already are exposed to natural variability in their daily use of English. The subjects were native French speakers who had learned English in school; experienced listeners were tested in England and the less…

  13. Left hemisphere lateralization for lexical and acoustic pitch processing in Cantonese speakers as revealed by mismatch negativity.

    PubMed

    Gu, Feng; Zhang, Caicai; Hu, Axu; Zhao, Guoping

    2013-12-01

    For nontonal language speakers, speech processing is lateralized to the left hemisphere and musical processing is lateralized to the right hemisphere (i.e., function-dependent brain asymmetry). On the other hand, acoustic temporal processing is lateralized to the left hemisphere and spectral/pitch processing is lateralized to the right hemisphere (i.e., acoustic-dependent brain asymmetry). In this study, we examine whether the hemispheric lateralization of lexical pitch and acoustic pitch processing in tonal language speakers is consistent with the patterns of function- and acoustic-dependent brain asymmetry in nontonal language speakers. Pitch contrast in both speech stimuli (syllable /ji/ in Experiment 1) and nonspeech stimuli (harmonic tone in Experiment 1; pure tone in Experiment 2) was presented to native Cantonese speakers in passive oddball paradigms. We found that the mismatch negativity (MMN) elicited by lexical pitch contrast was lateralized to the left hemisphere, which is consistent with the pattern of function-dependent brain asymmetry (i.e., left hemisphere lateralization for speech processing) in nontonal language speakers. However, the MMN elicited by acoustic pitch contrast was also left hemisphere lateralized (harmonic tone in Experiment 1) or showed a tendency for left hemisphere lateralization (pure tone in Experiment 2), which is inconsistent with the pattern of acoustic-dependent brain asymmetry (i.e., right hemisphere lateralization for acoustic pitch processing) in nontonal language speakers. The consistent pattern of function-dependent brain asymmetry and the inconsistent pattern of acoustic-dependent brain asymmetry between tonal and nontonal language speakers can be explained by the hypothesis that the acoustic-dependent brain asymmetry is the consequence of a carryover effect from function-dependent brain asymmetry. Potential evolutionary implications of this hypothesis are discussed. © 2013.

  14. Disrupted behaviour in grammatical morphology in French speakers with autism spectrum disorders.

    PubMed

    Le Normand, Marie-Thérèse; Blanc, Romuald; Caldani, Simona; Bonnet-Brilhault, Frédérique

    2018-01-18

    Mixed and inconsistent findings have been reported across languages concerning grammatical morphology in speakers with Autism Spectrum Disorders (ASD). Some researchers argue for a selective sparing of grammar, whereas others claim to have identified grammatical deficits. The present study aimed to investigate this issue in 26 participants with ASD speaking European French who were matched on age, gender and SES to 26 participants with typical development (TD). The groups were compared regarding their productivity and accuracy of syntactic and agreement categories using the French MOR part-of-speech tagger available from CHILDES. The groups significantly differed in productivity with respect to nouns, adjectives, determiners, prepositions and gender markers. Error analysis revealed that ASD speakers exhibited disrupted behaviour in grammatical morphology: they made gender, tense and preposition errors, and they omitted determiners and pronouns in nominal and verbal contexts. ASD speakers may have a reduced sensitivity to perceiving and processing the distributional structure of syntactic categories when producing grammatical morphemes and agreement categories. The theoretical and cross-linguistic implications of these findings are discussed.

  15. Effects of speaker emotional facial expression and listener age on incremental sentence processing.

    PubMed

    Carminati, Maria Nella; Knoeferle, Pia

    2013-01-01

    We report two visual-world eye-tracking experiments that investigated how and with which time course emotional information from a speaker's face affects younger (N = 32, Mean age = 23) and older (N = 32, Mean age = 64) listeners' visual attention and language comprehension as they processed emotional sentences in a visual context. The age manipulation tested predictions by socio-emotional selectivity theory of a positivity effect in older adults. After viewing the emotional face of a speaker (happy or sad) on a computer display, participants were presented simultaneously with two pictures depicting opposite-valence events (positive and negative; IAPS database) while they listened to a sentence referring to one of the events. Participants' eye fixations on the pictures while processing the sentence were increased when the speaker's face was (vs. wasn't) emotionally congruent with the sentence. The enhancement occurred from the early stages of referential disambiguation and was modulated by age. For the older adults it was more pronounced with positive faces, and for the younger ones with negative faces. These findings demonstrate for the first time that emotional facial expressions, similarly to previously-studied speaker cues such as eye gaze and gestures, are rapidly integrated into sentence processing. They also provide new evidence for positivity effects in older adults during situated sentence processing.

  16. Musical experience facilitates lexical tone processing among Mandarin speakers: Behavioral and neural evidence.

    PubMed

    Tang, Wei; Xiong, Wen; Zhang, Yu-Xuan; Dong, Qi; Nan, Yun

    2016-10-01

    Music and speech share many sound attributes. Pitch, as the percept of fundamental frequency, often occupies the center of researchers' attention in studies on the relationship between music and speech. One widely held assumption is that music experience may confer an advantage in speech tone processing. The cross-domain effects of musical training on non-tonal language speakers' linguistic pitch processing have been relatively well established. However, it remains unclear whether musical experience improves the processing of lexical tone for native tone language speakers who actually use lexical tones in their daily communication. Using a passive oddball paradigm, the present study revealed that among Mandarin speakers, musicians demonstrated enlarged electrical responses to lexical tone changes as reflected by the increased mismatch negativity (MMN) amplitudes, as well as faster behavioral discrimination performance compared with age- and IQ-matched nonmusicians. The current results suggest that in spite of the preexisting long-term experience with lexical tones in both musicians and nonmusicians, musical experience can still modulate the cortical plasticity of linguistic tone processing and is associated with enhanced neural processing of speech tones. Our current results thus provide the first electrophysiological evidence supporting the notion that pitch expertise in the music domain may indeed be transferable to the speech domain even for native tone language speakers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Native and Nonnative Speakers' Pragmatic Interpretations of English Texts.

    ERIC Educational Resources Information Center

    Hinkel, Eli

    1994-01-01

    Considering the complicating effect of cultural differences in writing conventions, this study examines discourse tradition as influenced by Confucian/Taoist precepts and those of U.S. academic environments, the latter requiring rational argumentation, justification, and proof. Pedagogical implications of native-speaker and nonnative-speaker…

  18. Beyond Semantic Accuracy: Preschoolers Evaluate a Speaker's Reasons

    ERIC Educational Resources Information Center

    Koenig, Melissa A.

    2012-01-01

    Children's sensitivity to the quality of epistemic reasons and their selective trust in the more reasonable of 2 informants was investigated in 2 experiments. Three-, 4-, and 5-year-old children (N = 90) were presented with speakers who stated different kinds of evidence for what they believed. Experiment 1 showed that children of all age groups…

  19. Selected Bibliography of Spanish for Native Speaker Sources.

    ERIC Educational Resources Information Center

    Rodriguez Pino, Cecilia, Comp.

    This bibliography was prepared for middle school and high school teachers participating in a conference at New Mexico State University (July 14-18, 1993), to assist in research and pedagogical endeavors in the teaching of Spanish to native speakers. It is presented in two parts. The first is a bibliography edited by Francisco J. Ronquillo, which…

  20. NASA Ambassadors: A Speaker Outreach Program

    NASA Technical Reports Server (NTRS)

    McDonald, Malcolm W.

    1998-01-01

    The work done on this project this summer has been geared toward setting up the necessary infrastructure and planning to support the operation of an effective speaker outreach program. The program has been given the name, NASA AMBASSADORS. Also, individuals who become participants in the program will be known as "NASA AMBASSADORS". This summer project has been conducted by the joint efforts of this author and those of Professor George Lebo who will be issuing a separate report. The description in this report will indicate that the NASA AMBASSADOR program operates largely on the contributions of volunteers, with the assistance of persons at the Marshall Space Flight Center (MSFC). The volunteers include participants in the various summer programs hosted by MSFC as well as members of the NASA Alumni League. The MSFC summer participation programs include: the Summer Faculty Fellowship Program for college and university professors, the Science Teacher Enrichment Program for middle- and high-school teachers, and the NASA ACADEMY program for college and university students. The NASA Alumni League members are retired NASA employees, scientists, and engineers. The MSFC offices which will have roles in the operation of the NASA AMBASSADORS include the Educational Programs Office and the Public Affairs Office. It is possible that still other MSFC offices may become integrated into the operation of the program. The remainder of this report will establish the operational procedures which will be necessary to sustain the NASA AMBASSADOR speaker outreach program.

  1. Pragmatic Instruction May Not Be Necessary among Heritage Speakers of Spanish: A Study on Requests

    ERIC Educational Resources Information Center

    Barros García, María J.; Bachelor, Jeremy W.

    2018-01-01

    This paper studies the pragmatic competence of U.S. heritage speakers of Spanish in an attempt to determine (a) the degree of pragmatic transfer from English to Spanish experienced by heritage speakers when producing different types of requests in Spanish; and (b) how to best teach pragmatics to students of Spanish as a Heritage Language (SHL).…

  2. TESOL Teachers' Engagement with the Native Speaker Model: How Does Teacher Education Impact on Their Beliefs?

    ERIC Educational Resources Information Center

    Nguyen, Mai Xuan Nhat Chi

    2017-01-01

    This research investigates non-native English teachers' engagement with the native speaker model, i.e. whether they agree/disagree with measuring English teaching and learning performance against native speaker standards. More importantly, it aims to unearth the impact of teacher education on teachers' attitudes and beliefs about…

  3. Low-voltage Driven Graphene Foam Thermoacoustic Speaker.

    PubMed

    Fei, Wenwen; Zhou, Jianxin; Guo, Wanlin

    2015-05-20

    A low-voltage driven thermoacoustic speaker is fabricated based on three-dimensional graphene foams synthesized by a nickel-template assisted chemical vapor deposition method. The corresponding thermoacoustic performances are found to be related to its microstructure. Graphene foams exhibit low heat-leakage to substrates and feasible tunability in structures and thermoacoustic performances, having great promise for applications in flexible or ultrasonic acoustic devices. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Within the School and the Community--A Speaker's Bureau.

    ERIC Educational Resources Information Center

    McClintock, Joy H.

    Student interest prompted the formation of a Speaker's Bureau in Seminole Senior High School, Seminole, Florida. First, students compiled a list of community contacts, including civic clubs, churches, retirement villages, newspaper offices, and the County School Administration media center. A letter of introduction was composed and speaking…

  5. Sensing of Particular Speakers for the Construction of Voice Interface Utilized in Noisy Environment

    NASA Astrophysics Data System (ADS)

    Sawada, Hideyuki; Ohkado, Minoru

    Humans are able to exchange information smoothly using voice in various situations, such as noisy environments in a crowd or in the presence of multiple speakers. We are able to detect the position of a sound source in 3D space, extract a particular sound from mixed sounds, and recognize who is talking. By realizing this mechanism with a computer, new applications can be developed for recording sound with high quality by reducing noise, presenting a clarified sound, and realizing microphone-free speech recognition by extracting a particular sound. This paper introduces real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the location of the speaker and the individual's voice characteristics. The study will be applied to develop an adaptive auditory system for a mobile robot that collaborates with a factory worker.
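    A common building block for locating a speaker with a microphone array is estimating the time difference of arrival (TDOA) between two microphones from the peak of their cross-correlation. The sketch below illustrates that general idea only; it is not the algorithm described in this record, and the function name `estimate_tdoa` is an illustrative choice.

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Estimate the time difference of arrival (in seconds) between two
    microphone signals from the peak of their cross-correlation.
    A positive result means sig_a lags sig_b."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    # Index len(sig_b) - 1 corresponds to zero lag in 'full' mode.
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag / fs

# Illustrative check: an impulse arriving 5 samples later at one microphone.
fs = 8000
b = np.zeros(100); b[10] = 1.0
a = np.zeros(100); a[15] = 1.0  # same impulse, 5 samples later
print(estimate_tdoa(a, b, fs))  # → 0.000625
```

    With three or more microphones, pairwise TDOAs jointly constrain the source position in 3D space, which is one way a system of this kind can track where a speaker is.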

  6. Spanish Native-Speaker Perception of Accentedness in Learner Speech

    ERIC Educational Resources Information Center

    Moranski, Kara

    2012-01-01

    Building upon current research in native-speaker (NS) perception of L2 learner phonology (Zielinski, 2008; Derwing & Munro, 2009), the present investigation analyzed multiple dimensions of NS speech perception in order to achieve a more complete understanding of the specific linguistic elements and attitudinal variables that contribute to…

  7. A virtual speaker in noisy classroom conditions: supporting or disrupting children's listening comprehension?

    PubMed

    Nirme, Jens; Haake, Magnus; Lyberg Åhlander, Viveka; Brännström, Jonas; Sahlén, Birgitta

    2018-04-05

    Seeing a speaker's face facilitates speech recognition, particularly under noisy conditions. Evidence for how it might affect comprehension of the content of the speech is sparser. We investigated how children's listening comprehension is affected by multi-talker babble noise, with or without presentation of a digitally animated virtual speaker, and whether successful comprehension is related to performance on a test of executive functioning. We performed a mixed-design experiment with 55 (34 female) participants (8- to 9-year-olds), recruited from Swedish elementary schools. The children were presented with four different narratives, each in one of four conditions: audio-only presentation in a quiet setting, audio-only presentation in a noisy setting, audio-visual presentation in a quiet setting, and audio-visual presentation in a noisy setting. After each narrative, the children answered questions on the content and rated their perceived listening effort. Finally, they performed a test of executive functioning. We found significantly fewer correct answers to explicit content questions after listening in noise. This negative effect was only mitigated to a marginally significant degree by audio-visual presentation. Strong executive function only predicted more correct answers in quiet settings. Altogether, our results are inconclusive regarding how seeing a virtual speaker affects listening comprehension. We discuss how methodological adjustments, including modifications to our virtual speaker, can be used to discriminate between possible explanations of our results and contribute to understanding the listening conditions children face in a typical classroom.

  8. Bihemispheric tDCS enhances language recovery but does not alter BDNF levels in chronic aphasic patients.

    PubMed

    Marangolo, Paola; Fiori, Valentina; Gelfo, Francesca; Shofany, Jacob; Razzano, Carmelina; Caltagirone, Carlo; Angelucci, Francesco

    2014-01-01

    Several studies have shown that transcranial direct current stimulation (tDCS) is a useful tool to enhance language recovery in aphasia. It has also been suggested that modulation of the neurotrophin brain-derived neurotrophic factor (BDNF) might be part of the mechanisms involved in tDCS effects on synaptic connectivity. However, all previous language studies have investigated these effects using unihemispheric stimulation. The purpose of the present study was to investigate the role of bihemispheric tDCS in language recovery and BDNF serum levels. Seven aphasic persons underwent intensive language therapy in two different conditions: real bihemispheric stimulation over the left and right Broca's areas, and a sham condition. After the stimulation, patients exhibited a significant recovery in three language tasks (picture description, noun and verb naming) compared to the sham condition, which persisted in the follow-up session. No significant differences were found in BDNF serum levels after tDCS stimulation or in the follow-up session. However, a significant positive correlation was present in the real stimulation condition between percent changes in BDNF levels and in the verb naming task. The data suggest that this novel approach may potentiate the recovery of language in chronic aphasia. They also emphasize the importance of further investigating the role of possible biomarkers associated with tDCS treatment response in language recovery.

  9. On compensation of mismatched recording conditions in the Bayesian approach for forensic automatic speaker recognition.

    PubMed

    Botti, F; Alexander, A; Drygajlo, A

    2004-12-02

    This paper deals with a procedure to compensate for mismatched recording conditions in forensic speaker recognition, using a statistical score normalization. Bayesian interpretation of the evidence in forensic automatic speaker recognition depends on three sets of recordings in order to perform forensic casework: reference (R) and control (C) recordings of the suspect, and a potential population database (P), as well as a questioned recording (QR). The requirement of similar recording conditions between the suspect control database (C) and the questioned recording (QR) is often not satisfied in real forensic cases. The aim of this paper is to investigate a score normalization procedure, based on an adaptation of the Test-normalization (T-norm) [2] technique used in the speaker verification domain, to compensate for the mismatch. The Polyphone IPSC-02 database and ASPIC (an automatic speaker recognition system developed by EPFL and IPS-UNIL in Lausanne, Switzerland) were used to test the normalization procedure. Experimental results for three different recording condition scenarios are presented using Tippett plots, and the effect of the compensation on the evaluation of the strength of the evidence is discussed.
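    As commonly defined in the speaker verification literature, T-norm rescales a raw score by the mean and standard deviation of the scores that the same test recording obtains against a cohort of impostor models. A minimal sketch of that standard formula (function name and scores are illustrative, not taken from the paper):

```python
import statistics

def t_norm(raw_score, cohort_scores):
    """Test-normalization (T-norm): standardize a raw verification score
    using the mean and standard deviation of the scores the same test
    utterance obtains against a cohort of impostor speaker models."""
    mu = statistics.mean(cohort_scores)
    sigma = statistics.stdev(cohort_scores)
    return (raw_score - mu) / sigma

# Hypothetical scores: the suspect model scores 2.5; the impostor cohort
# scores have mean 1.0 and standard deviation 0.5.
print(t_norm(2.5, [0.5, 1.0, 1.5]))  # → 3.0
```

    Because the normalization uses only the questioned recording's own cohort scores, it can absorb condition effects that shift all scores for that recording, which is the kind of mismatch addressed here.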

  10. Cortical encoding and neurophysiological tracking of intensity and pitch cues signaling English stress patterns in native and nonnative speakers.

    PubMed

    Chung, Wei-Lun; Bidelman, Gavin M

    2016-01-01

    We examined cross-language differences in neural encoding and tracking of intensity and pitch cues signaling English stress patterns. Auditory mismatch negativities (MMNs) were recorded in English and Mandarin listeners in response to contrastive English pseudowords whose primary stress occurred either on the first or second syllable (i.e., "nocTICity" vs. "NOCticity"). The contrastive syllable stress elicited two consecutive MMNs in both language groups, but English speakers demonstrated larger responses to stress patterns than Mandarin speakers. Correlations between the amplitude of ERPs and continuous changes in the running intensity and pitch of speech assessed how well each language group's brain activity tracked these salient acoustic features of lexical stress. We found that English speakers' neural responses tracked intensity changes in speech more closely than Mandarin speakers (higher brain-acoustic correlation). Findings demonstrate more robust and precise processing of English stress (intensity) patterns in early auditory cortical responses of native relative to nonnative speakers. Copyright © 2016 Elsevier Inc. All rights reserved.
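    In its simplest form, the brain-acoustic correlation described above is a Pearson correlation between the ERP waveform and the stimulus's running intensity (or pitch) contour sampled on a common time axis. A minimal sketch under that assumption (names are illustrative; the study's actual analysis pipeline may differ):

```python
import numpy as np

def brain_acoustic_correlation(erp, envelope):
    """Pearson correlation between an ERP waveform and the running
    intensity (or pitch) contour of the stimulus, both resampled onto
    a common time axis. Values near 1 indicate close neural tracking."""
    return float(np.corrcoef(erp, envelope)[0, 1])

# Toy example: a response that perfectly tracks the envelope.
envelope = np.array([0.2, 0.5, 0.9, 0.4])
erp = 2.0 * envelope + 0.1  # linearly related response
print(round(brain_acoustic_correlation(erp, envelope), 6))  # → 1.0
```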

  11. Agreement Reflexes of Emerging Optionality in Heritage Speaker Spanish

    ERIC Educational Resources Information Center

    Pascual Cabo, Diego

    2013-01-01

    This study contributes to current trends of heritage speaker (HS) acquisition research by examining the syntax of psych-predicates in HS Spanish. Broadly defined, psych-predicates communicate states of emotions (e.g., to love) and have traditionally been categorized as belonging to one of three classes: class I--"temere" "to…

  12. Gesturing by Speakers with Aphasia: How Does It Compare?

    ERIC Educational Resources Information Center

    Mol, Lisette; Krahmer, Emiel; van de Sandt-Koenderman, Mieke

    2013-01-01

    Purpose: To study the independence of gesture and verbal language production. The authors assessed whether gesture can be semantically compensatory in cases of verbal language impairment and whether speakers with aphasia and control participants use similar depiction techniques in gesture. Method: The informativeness of gesture was assessed in 3…

  13. Espanol para el hispanolhablante (Spanish for the Spanish Speaker).

    ERIC Educational Resources Information Center

    Blanco, George M.

    This guide provides Texas teachers and administrators with guidelines, goals, instructional strategies, and activities for teaching Spanish to secondary level native speakers. It is based on the principle that the Spanish speaking student is the strongest linguistic and cultural resource to Texas teachers of languages other than English, and one…

  14. A study on (K, Na) NbO3 based multilayer piezoelectric ceramics micro speaker

    NASA Astrophysics Data System (ADS)

    Gao, Renlong; Chu, Xiangcheng; Huan, Yu; Sun, Yiming; Liu, Jiayi; Wang, Xiaohui; Li, Longtu

    2014-10-01

    A flat panel micro speaker was fabricated from (K, Na) NbO3 (KNN)-based multilayer piezoelectric ceramics by a tape casting and cofiring process using Ag-Pd alloys as an inner electrode. The interface between ceramic and electrode was investigated by scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The acoustic response was characterized by a standard audio test system. We found that the micro speaker, with dimensions of 23 × 27 × 0.6 mm³ and using three layers of 30-μm-thick KNN-based ceramic, has a high average sound pressure level (SPL) of 87 dB between 100 Hz and 20 kHz under a 5 V driving voltage. This result was even better than that of lead zirconate titanate (PZT)-based ceramics under the same conditions. The experimental results show that KNN-based multilayer ceramics could be used as lead-free piezoelectric micro speakers.

  15. Accent, Intelligibility, and the Role of the Listener: Perceptions of English-Accented German by Native German Speakers

    ERIC Educational Resources Information Center

    Hayes-Harb, Rachel; Watzinger-Tharp, Johanna

    2012-01-01

    We explore the relationship between accentedness and intelligibility, and investigate how listeners' beliefs about nonnative speech interact with their accentedness and intelligibility judgments. Native German speakers and native English learners of German produced German sentences, which were presented to 12 native German speakers in accentedness…

  16. Does the speaker matter? Online processing of semantic and pragmatic information in L2 speech comprehension.

    PubMed

    Foucart, Alice; Garcia, Xavier; Ayguasanosa, Meritxell; Thierry, Guillaume; Martin, Clara; Costa, Albert

    2015-08-01

    The present study investigated how pragmatic information is integrated during L2 sentence comprehension. We propose that the differences often observed between L1 and L2 sentence processing may reflect differences in how various types of information are used to process a sentence, and not necessarily differences between native and non-native linguistic systems. Based on the idea that when a cue is missing or distorted, one relies more on the other cues available, we hypothesised that late bilinguals favour the cues that they master during sentence processing. To verify this hypothesis, we investigated whether late bilinguals take the speaker's identity (inferred from the voice) into account when incrementally processing speech and whether this affects their online interpretation of the sentence. To do so, we adapted the study by Van Berkum, Van den Brink, Tesink, Kos, and Hagoort (2008, J. Cogn. Neurosci. 20(4), 580-591), in which sentences with either semantic violations or pragmatic inconsistencies were presented. While both the native and the non-native groups showed a similar response to semantic violations (N400), their responses to speaker inconsistencies diverged slightly: late bilinguals showed a positivity (LPP) much earlier than native speakers. These results suggest that, like native speakers, late bilinguals process semantic and pragmatic information incrementally; however, what seems to differ between L1 and L2 processing is the time course of the different processes. We propose that this difference may originate from late bilinguals' sensitivity to pragmatic information and/or their ability to efficiently make use of the information provided by the sentence context to generate expectations about pragmatic information during L2 sentence comprehension. In other words, late bilinguals may rely more on speaker identity than native speakers do when they face semantic integration difficulties. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. When Alphabets Collide: Alphabetic First-Language Speakers' Approach to Speech Production in an Alphabetic Second Language

    ERIC Educational Resources Information Center

    Vokic, Gabriela

    2011-01-01

    This study analysed the extent to which literate native speakers of a language with a phonemic alphabetic orthography rely on their first language (L1) orthography during second language (L2) speech production of a language that has a morphophonemic alphabetic orthography. The production of the English flapping rule by 15 adult native speakers of…

  18. A Personal Statement about a Four-Year Curriculum for Heritage and Native Speakers of Spanish Programs.

    ERIC Educational Resources Information Center

    Stering, Edward

    This document shares a vision for a 4-year curriculum for Heritage Speakers of Spanish (HSS)/Spanish for Native Speakers (SNS), describing a course developed for SNS students within Mercy High School in San Francisco, California. The vision foresees an ever-increasing number of HSS and SNS students completing college level degree programs then…

  19. Politeness Strategies among Native and Romanian Speakers of English

    ERIC Educational Resources Information Center

    Ambrose, Dominic

    1995-01-01

    Background: Politeness strategies vary from language to language and within each society. At times the wrong strategies can have disastrous effects. This can occur when languages are used by non-native speakers or when they are used outside of their own home linguistic context. Purpose: This study of spoken language compares the politeness…

  20. Social Cues in Multimedia Learning: Role of Speaker's Voice.

    ERIC Educational Resources Information Center

    Mayer, Richard E.; Sobko, Kristina; Mautone, Patricia D.

    2003-01-01

    In 2 experiments, learners who were seated at a computer workstation received a narrated animation about lightning formation. Then, they took a retention test, a transfer test, and rated the speaker. The results are consistent with social agency theory, which posits that social cues in multimedia messages can encourage learners to interpret…

  1. How do speakers resist distraction? Evidence from a taboo picture-word interference task.

    PubMed

    Dhooge, Elisah; Hartsuiker, Robert J

    2011-07-01

    Even in the presence of irrelevant stimuli, word production is a highly accurate and fluent process. But how do speakers prevent themselves from naming the wrong things? One possibility is that an attentional system inhibits task-irrelevant representations. Alternatively, a verbal self-monitoring system might check speech for accuracy and remove errors stemming from irrelevant information. Because self-monitoring is sensitive to social appropriateness, taboo errors should be intercepted more than neutral errors are. To prevent embarrassment, speakers might also speak more slowly when confronted with taboo distractors. Our results from two experiments are consistent with the self-monitoring account: Examining picture-naming speed (Experiment 1) and accuracy (Experiment 2), we found fewer naming errors but longer picture-naming latencies for pictures presented with taboo distractors than for pictures presented with neutral distractors. These results suggest that when intrusions of irrelevant words are highly undesirable, speakers do not simply inhibit these words: Rather, the language-production system adjusts itself to the context and filters out the undesirable words.

  2. 24. AIRCONDITIONING DUCT, WINCH CONTROL BOX, AND SPEAKER AT STATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    24. AIR-CONDITIONING DUCT, WINCH CONTROL BOX, AND SPEAKER AT STATION 85.5 OF MST. FOLDED-UP PLATFORM ON RIGHT OF PHOTO. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 East, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  3. Beyond the language given: the neural correlates of inferring speaker meaning.

    PubMed

    Bašnáková, Jana; Weber, Kirsten; Petersson, Karl Magnus; van Berkum, Jos; Hagoort, Peter

    2014-10-01

    Even though language allows us to say exactly what we mean, we often use language to say things indirectly, in a way that depends on the specific communicative context. For example, we can use an apparently straightforward sentence like "It is hard to give a good presentation" to convey deeper meanings, like "Your talk was a mess!" One of the big puzzles in language science is how listeners work out what speakers really mean, which is a skill absolutely central to communication. However, most neuroimaging studies of language comprehension have focused on the arguably much simpler, context-independent process of understanding direct utterances. To examine the neural systems involved in getting at contextually constrained indirect meaning, we used functional magnetic resonance imaging as people listened to indirect replies in spoken dialog. Relative to direct control utterances, indirect replies engaged dorsomedial prefrontal cortex, right temporo-parietal junction and insula, as well as bilateral inferior frontal gyrus and right medial temporal gyrus. This suggests that listeners take the speaker's perspective on both cognitive (theory of mind) and affective (empathy-like) levels. In line with classic pragmatic theories, our results also indicate that currently popular "simulationist" accounts of language comprehension fail to explain how listeners understand the speaker's intended message. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Linguistically Directed Attention to the Temporal Aspect of Action Events in Monolingual English Speakers and Chinese-English Bilingual Speakers with Varying English Proficiency

    ERIC Educational Resources Information Center

    Chen, Jenn-Yeu; Su, Jui-Ju; Lee, Chao-Yang; O'Seaghdha, Padraig G.

    2012-01-01

    Chinese and English speakers seem to hold different conceptions of time which may be related to the different codings of time in the two languages. Employing a sentence-picture matching task, we have investigated this linguistic relativity in Chinese-English bilinguals varying in English proficiency and found that those with high proficiency…

  5. Revisiting the role of language in spatial cognition: Categorical perception of spatial relations in English and Korean speakers.

    PubMed

    Holmes, Kevin J; Moty, Kelsey; Regier, Terry

    2017-12-01

    The spatial relation of support has been regarded as universally privileged in nonlinguistic cognition and immune to the influence of language. English, but not Korean, obligatorily distinguishes support from nonsupport via basic spatial terms. Despite this linguistic difference, previous research suggests that English and Korean speakers show comparable nonlinguistic sensitivity to the support/nonsupport distinction. Here, using a paradigm previously found to elicit cross-language differences in color discrimination, we provide evidence for a difference in sensitivity to support/nonsupport between native English speakers and native Korean speakers who were late English learners and tested in a context that privileged Korean. Whereas the former group showed categorical perception (CP) when discriminating spatial scenes capturing the support/nonsupport distinction, the latter did not. An additional group of native Korean speakers-relatively early English learners tested in an English-salient context-patterned with the native English speakers in showing CP for support/nonsupport. These findings suggest that obligatory marking of support/nonsupport in one's native language can affect nonlinguistic sensitivity to this distinction, contra earlier findings, but that such sensitivity may also depend on aspects of language background and the immediate linguistic context.

  6. An Investigation of Cultural Differences in Perceived Speaker Effectiveness.

    ERIC Educational Resources Information Center

    Masterson, John T.; Watson, Norman H.

    Culture is a powerful force that may affect the reception and acceptance of communication. To determine whether culture has an effect on perceived speaker effectiveness, students in an introductory speech communication class at a Florida university and in a variety of courses at a university in the Bahamas were given an 80-item questionnaire that was…

  7. Do English and Mandarin Speakers Think about Time Differently?

    ERIC Educational Resources Information Center

    Boroditsky, Lera; Fuhrman, Orly; McCormick, Kelly

    2011-01-01

    Time is a fundamental domain of experience. In this paper we ask whether aspects of language and culture affect how people think about this domain. Specifically, we consider whether English and Mandarin speakers think about time differently. We review all of the available evidence both for and against this hypothesis, and report new data that…

  8. Robust Speech Processing & Recognition: Speaker ID, Language ID, Speech Recognition/Keyword Spotting, Diarization/Co-Channel/Environmental Characterization, Speaker State Assessment

    DTIC Science & Technology

    2015-10-01

    Scoring, Gaussian Backend, etc.) as shown in Fig. 39. The methods in this domain also emphasized the ability to perform data purification for both… investigation using the same infrastructure was undertaken to explore Lombard effect “flavor” detection for improved speaker ID. The study… The presence of… dimension selection and compared to a common N-gram frequency based selection. 2.1.2: Exploration on NN/DBN backend: Since Deep Neural Networks (DNN) have…

  9. CNN: a speaker recognition system using a cascaded neural network.

    PubMed

    Zaki, M; Ghalwash, A; Elkouny, A A

    1996-05-01

    The main emphasis of this paper is to present an approach that combines supervised and unsupervised neural network models for speaker recognition. To enhance overall recognition performance, the proposed strategy integrates the two techniques into one global model called the cascaded model. We first present a simple conventional technique based on the distance between a test vector and a reference vector for each speaker in the population. This distance metric weights down the components along those directions in which the intraspeaker variance is large. This method is presented to clarify the performance gap between the conventional and neural network approaches. We then introduce an unsupervised learning technique, the winner-take-all model, as a means of recognition. Based on several tests, and to improve this model's handling of noisy patterns, we precede it with a supervised learning model--the pattern association model--which acts as a filtration stage. This work includes the design and implementation of both conventional and neural network approaches to recognizing the speakers' templates, which are introduced to the system via a voice master card and preprocessed before the features used in recognition are extracted. The conclusion indicates that the neural network system outperforms the conventional one, degrading smoothly on noisy patterns and achieving higher performance on noise-free patterns.
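
The variance-weighted distance described in this abstract can be sketched in a few lines. The helper names below (`variance_weighted_distance`, `identify`) are illustrative, not from the paper; the sketch assumes each enrolled speaker is represented by a single reference template and that per-dimension intra-speaker variances are known.

```python
import numpy as np

def variance_weighted_distance(test_vec, ref_vec, intra_var):
    # Down-weight components along directions of large intra-speaker
    # variance, as described in the abstract above.
    diff = test_vec - ref_vec
    return float(np.sqrt(np.sum(diff * diff / intra_var)))

def identify(test_vec, references, intra_var):
    # Conventional matching stage: pick the enrolled speaker whose
    # reference template lies nearest to the test vector.
    return min(references,
               key=lambda spk: variance_weighted_distance(
                   test_vec, references[spk], intra_var))
```

In the cascaded model proper, this conventional stage serves as the baseline against which the pattern-association/winner-take-all network is compared.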

  10. Age of acquisition and naming performance in Frisian-Dutch bilingual speakers with dementia.

    PubMed

    Veenstra, Wencke S; Huisman, Mark; Miller, Nick

    2014-01-01

    Age of acquisition (AoA) of words is a recognised variable affecting language processing in speakers with and without language disorders. For bi- and multilingual speakers their languages can be differentially affected in neurological illness. Study of language loss in bilingual speakers with dementia has been relatively neglected. We investigated whether AoA of words was associated with level of naming impairment in bilingual speakers with probable Alzheimer's dementia within and across their languages. Twenty-six Frisian-Dutch bilinguals with mild to moderate dementia named 90 pictures in each language, employing items with rated AoA and other word variable measures matched across languages. Quantitative (totals correct) and qualitative (error types and (in)appropriate switching) aspects were measured. Impaired retrieval occurred in Frisian (Language 1) and Dutch (Language 2), with a significant effect of AoA on naming in both languages. Earlier acquired words were better preserved and retrieved. Performance was identical across languages, but better in Dutch when controlling for covariates. However, participants demonstrated more inappropriate code switching within the Frisian test setting. On qualitative analysis, no differences in overall error distribution were found between languages for early or late acquired words. There existed a significantly higher percentage of semantically than visually-related errors. These findings have implications for understanding problems in lexical retrieval among bilingual individuals with dementia and its relation to decline in other cognitive functions which may play a role in inappropriate code switching. We discuss the findings in the light of the close relationship between Frisian and Dutch and the pattern of usage across the life-span.

  11. Effects of various electrode configurations on music perception, intonation and speaker gender identification.

    PubMed

    Landwehr, Markus; Fürstenberg, Dirk; Walger, Martin; von Wedel, Hasso; Meister, Hartmut

    2014-01-01

    Advances in speech coding strategies and electrode array designs for cochlear implants (CIs) predominantly aim at improving speech perception. Current efforts are also directed at transmitting appropriate cues of the fundamental frequency (F0) to the auditory nerve with respect to speech quality, prosody, and music perception. The aim of this study was to examine the effects of various electrode configurations and coding strategies on speech intonation identification, speaker gender identification, and music quality rating. In six MED-EL CI users electrodes were selectively deactivated in order to simulate different insertion depths and inter-electrode distances when using the high definition continuous interleaved sampling (HDCIS) and fine structure processing (FSP) speech coding strategies. Identification of intonation and speaker gender was determined and music quality rating was assessed. For intonation identification HDCIS was robust against the different electrode configurations, whereas fine structure processing showed significantly worse results when a short electrode depth was simulated. In contrast, speaker gender recognition was not affected by electrode configuration or speech coding strategy. Music quality rating was sensitive to electrode configuration. In conclusion, the three experiments revealed different outcomes, even though they all addressed the reception of F0 cues. Rapid changes in F0, as seen with intonation, were the most sensitive to electrode configurations and coding strategies. In contrast, electrode configurations and coding strategies did not show large effects when F0 information was available over a longer time period, as seen with speaker gender. Music quality relies on additional spectral cues other than F0, and was poorest when a shallow insertion was simulated.

  12. Perception of Intonation in Native and Non-Native Speakers of English.

    ERIC Educational Resources Information Center

    Berkovits, Rochele

    1980-01-01

    Indicates that native and nonnative speakers alike can make use of intonation if they explicitly listen for it, although prosodic features are generally ignored when other cues (semantic and pragmatic) are available. (Author/RL)

  13. Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech

    PubMed Central

    Cao, Houwei; Verma, Ragini; Nenkova, Ani

    2014-01-01

    We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotions and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of individual methods. Furthermore, on the spontaneous data, the ranking and standard classification are complementary and we obtain marked improvement when we combine the two classifiers by late-stage fusion. PMID:25422534
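
The query-wise treatment at the heart of this ranking approach can be illustrated with a minimal numpy sketch. Everything here is hypothetical scaffolding (the function names, and the use of simple within-speaker ranks in place of trained ranking SVMs); it shows only the intuition of treating each speaker as a separate query so that the speaker's general expressivity cancels out of the decision.

```python
import numpy as np

def rank_within_speaker(scores):
    # scores: (utterances x emotions) raw scorer outputs for ONE speaker.
    # Converting scores to within-speaker ranks removes the speaker's
    # overall expressivity level from the decision.
    ranks = np.argsort(np.argsort(scores, axis=0), axis=0)
    return ranks / max(len(scores) - 1, 1)

def predict_emotions(scores):
    # Multi-class prediction: the dominant emotion of each utterance is
    # the one ranked highest for it, combining all per-emotion scorers.
    return np.argmax(rank_within_speaker(scores), axis=1)
```

With real per-emotion rankers, the per-column scores would come from the trained models; the combination step stays the same.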

  15. Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces

    PubMed Central

    Bocquelet, Florent; Hueber, Thomas; Girin, Laurent; Savariaux, Christophe; Yvert, Blaise

    2016-01-01

    Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer producing intelligible speech in real-time with a reasonable number of control parameters. We present here an articulatory-based speech synthesizer that can be controlled in real-time for future BCI applications. This synthesizer converts movements of the main speech articulators (tongue, jaw, velum, and lips) into intelligible speech. The articulatory-to-acoustic mapping is performed using a deep neural network (DNN) trained on electromagnetic articulography (EMA) data recorded on a reference speaker synchronously with the produced speech signal. This DNN is then used in both offline and online modes to map the position of sensors glued on different speech articulators into acoustic parameters that are further converted into an audio signal using a vocoder. In offline mode, highly intelligible speech could be obtained as assessed by perceptual evaluation performed by 12 listeners. Then, to anticipate future BCI applications, we further assessed the real-time control of the synthesizer by both the reference speaker and new speakers, in a closed-loop paradigm using EMA data recorded in real time. A short calibration period was used to compensate for differences in sensor positions and articulatory differences between new speakers and the reference speaker. We found that real-time synthesis of vowels and consonants was possible with good intelligibility. In conclusion, these results open the way to future speech BCI applications using such an articulatory-based speech synthesizer. PMID:27880768
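
The articulatory-to-acoustic mapping at the core of this synthesizer is a regression problem: sensor coordinates in, acoustic parameters out. The toy sketch below trains a single-hidden-layer network on synthetic data with plain gradient descent; the dimensions (12 articulatory inputs, 3 acoustic outputs) and all data are invented for illustration, and the real system uses a deeper DNN, EMA recordings, and a vocoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: 12 articulator coordinates -> 3 acoustic parameters.
X = rng.normal(size=(256, 12))
Y = np.tanh(X @ rng.normal(size=(12, 3)))  # synthetic ground-truth mapping

# One hidden layer stands in for the paper's deeper DNN.
W1 = 0.1 * rng.normal(size=(12, 32))
W2 = 0.1 * rng.normal(size=(32, 3))
lr = 0.05
for _ in range(500):
    H = np.tanh(X @ W1)          # hidden activations
    err = H @ W2 - Y             # prediction error
    g2 = H.T @ err / len(X)      # gradient w.r.t. output weights
    g1 = X.T @ ((err @ W2.T) * (1 - H * H)) / len(X)
    W2 -= lr * g2
    W1 -= lr * g1

mse = float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))
```

After training, `np.tanh(x @ W1) @ W2` maps a new articulatory frame to acoustic parameters; the closed-loop BCI setting additionally requires this forward pass to run faster than the audio frame rate.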

  16. The Perception and Representation of Segmental and Prosodic Mandarin Contrasts in Native Speakers of Cantonese

    PubMed Central

    Zhang, Xujin; Samuel, Arthur G.; Liu, Siyun

    2011-01-01

    Previous research has found that a speaker’s native phonological system has a great influence on perception of another language. In three experiments, we tested the perception and representation of Mandarin phonological contrasts by Guangzhou Cantonese speakers, and compared their performance to that of native Mandarin speakers. Despite their rich experience using Mandarin Chinese, the Cantonese speakers had problems distinguishing specific Mandarin segmental and tonal contrasts that do not exist in Guangzhou Cantonese. However, we found evidence that the subtle differences between two members of a contrast were nonetheless represented in the lexicon. We also found different processing patterns for non-native segmental versus non-native tonal contrasts. The results provide substantial new information about the representation and processing of segmental and prosodic information by individuals listening to a closely-related, very well-learned, but still non-native language. PMID:22707849

  17. The Native Speaker, the Student, and Woody Allen: Examining Traditional Roles in the Foreign Language Classroom.

    ERIC Educational Resources Information Center

    Finger, Anke

    This paper uses a language classroom role-playing scene from a Woody Allen movie to examine the language student who has traditionally been asked to emulate and copy the native speaker and to discuss roles that teachers ask students to play. It also presents the changing paradigm of the native speaker and his or her role inside and outside the…

  18. Do Children with Autism Use the Speaker's Direction of Gaze Strategy to Crack the Code of Language?

    ERIC Educational Resources Information Center

    Baron-Cohen, Simon; And Others

    1997-01-01

    Two studies of toddlers and children with autism, mentally handicapped children, and normal toddlers examined whether autistic toddlers used Speaker's Direction of Gaze (SDG) strategy or less powerful Listener's Direction of Gaze (LDG) strategy to learn a word for a novel object. Results suggest autistic toddlers are insensitive to speaker's gaze…

  19. Infants' Understanding of False Labeling Events: The Referential Roles of Words and the Speakers Who Use Them.

    ERIC Educational Resources Information Center

    Koenig, Melissa A.; Echols, Catharine H.

    2003-01-01

    Four studies examined whether 16-month-olds' responses to true/false utterances interacted with their knowledge of human agents. Findings suggested that infants are developing a critical conception of human speakers as truthful communicators and that infants understand that human speakers may provide uniquely useful information when a word fails…

  20. Teaching the Native English Speaker How to Teach English

    ERIC Educational Resources Information Center

    Odhuu, Kelli

    2014-01-01

    This article speaks to teachers who have been paired with native speakers (NSs) who have never taught before, and the feelings of frustration, discouragement, and nervousness on the teacher's behalf that can occur as a result. In order to effectively tackle this situation, teachers need to work together with the NSs. Teachers in this scenario…

  1. Vibration and sound radiation of an electrostatic speaker based on circular diaphragm.

    PubMed

    Chiang, Hsin-Yuan; Huang, Yu-Hsi

    2015-04-01

    This study investigated the lumped parameter method (LPM) and distributed parameter method (DPM) in the measurement of vibration and prediction of sound pressure levels (SPLs) produced by an electrostatic speaker with a circular diaphragm. An electrostatic speaker with a push-pull configuration was achieved by suspending the circular diaphragm (60 mm diameter) between two transparent conductive plates. The transparent plates included a two-dimensional array of holes to enable the visualization of vibrations and avoid acoustic distortion. LPM was used to measure the displacement amplitude at the center of the diaphragm using a scanning vibrometer with the aim of predicting symmetric modes using Helmholtz equations and SPLs using Rayleigh integral equations. DPM was used to measure the amplitude of displacement across the entire surface of the speaker and predict SPL curves. LPM results show that the prediction of SPL associated with the first three symmetric resonant modes is in good agreement with the results of DPM and acoustic measurement. Below the breakup frequency of 375 Hz, the SPLs predicted by LPM and DPM are identical to the results of acoustic measurement. This study provides a rapid, accurate method with which to measure the SPL associated with the first three symmetric modes using semi-analytic LPM.
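
The Rayleigh-integral SPL prediction used with the LPM can be sketched numerically: treat the baffled diaphragm as a grid of point sources and sum their contributions at a field point. The sketch below assumes a uniform 1 mm/s piston velocity on the 60 mm diaphragm and an on-axis listening point at 1 m; the real prediction uses the vibrometer-measured (non-uniform) velocity distribution of each mode.

```python
import numpy as np

def rayleigh_pressure(field_pt, xs, ys, v, freq, dS, rho=1.21, c=343.0):
    # Discrete Rayleigh integral for a baffled planar source in z = 0:
    #   p = j*omega*rho/(2*pi) * sum_i v_i * exp(-j*k*r_i)/r_i * dS
    k = 2.0 * np.pi * freq / c
    omega = 2.0 * np.pi * freq
    src = np.stack([xs, ys, np.zeros_like(xs)], axis=1)
    r = np.linalg.norm(field_pt - src, axis=1)
    return 1j * omega * rho / (2.0 * np.pi) * np.sum(v * np.exp(-1j * k * r) / r) * dS

# Uniform-velocity piston on the 60 mm diaphragm, evaluated at 375 Hz.
n = 41
g = np.linspace(-0.03, 0.03, n)
gx, gy = np.meshgrid(g, g)
mask = gx**2 + gy**2 <= 0.03**2
xs, ys = gx[mask], gy[mask]
dS = (g[1] - g[0]) ** 2
v = np.full(xs.size, 1e-3)                # 1 mm/s normal velocity
p = rayleigh_pressure(np.array([0.0, 0.0, 1.0]), xs, ys, v, 375.0, dS)
spl = 20.0 * np.log10(np.abs(p) / 20e-6)  # dB re 20 uPa
```

At 375 Hz the diaphragm is acoustically compact (k·a ≈ 0.2), so the discrete sum agrees closely with the compact-source approximation |p| ≈ ωρvS/(2πr).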

  2. Collaborative Dialogue in Learner-Learner and Learner-Native Speaker Interaction

    ERIC Educational Resources Information Center

    Dobao, Ana Fernandez

    2012-01-01

    This study analyses intermediate and advanced learner-learner and learner-native speaker (NS) interaction looking for collaborative dialogue. It investigates how the presence of a NS interlocutor affects the frequency and nature of lexical language-related episodes (LREs) spontaneously generated during task-based interaction. Twenty-four learners…

  3. Developing Communication in the Workplace for Non-Native English Speakers.

    ERIC Educational Resources Information Center

    Nichols, Pat; Watkins, Lisa

    This curriculum module contains materials for conducting a course designed to build oral and written English skills for nonnative speakers. The course focuses on increasing vocabulary, improving listening/speaking skills, extracting information from various written texts (such as memos, notes, business forms, manuals, letters), and developing…

  4. Children's Understanding of Speaker Reliability between Lexical and Syntactic Knowledge

    ERIC Educational Resources Information Center

    Sobel, David M.; Macris, Deanna M.

    2013-01-01

    Many studies suggest that preschoolers rely on individuals' histories of generating accurate lexical information when learning novel lexical information from them. The present study examined whether children used a speaker's accuracy about one kind of linguistic knowledge to make inferences about another kind of linguistic knowledge, focusing…

  5. Phraseology and Frequency of Occurrence on the Web: Native Speakers' Perceptions of Google-Informed Second Language Writing

    ERIC Educational Resources Information Center

    Geluso, Joe

    2013-01-01

    Usage-based theories of language learning suggest that native speakers of a language are acutely aware of formulaic language due in large part to frequency effects. Corpora and data-driven learning can offer useful insights into frequent patterns of naturally occurring language to second/foreign language learners who, unlike native speakers, are…

  6. The Locus Equation as an Index of Coarticulation in Syllables Produced by Speakers with Profound Hearing Loss

    ERIC Educational Resources Information Center

    McCaffrey Morrison, Helen

    2008-01-01

    Locus equations (LEs) were derived from consonant-vowel-consonant (CVC) syllables produced by four speakers with profound hearing loss. Group data indicated that LE functions obtained for the separate CVC productions initiated by /b/, /d/, and /g/ were less well-separated in acoustic space than those obtained from speakers with normal hearing. A…

  7. The Time-Course of Lexical Activation During Sentence Comprehension in People With Aphasia

    PubMed Central

    Ferrill, Michelle; Love, Tracy; Walenski, Matthew; Shapiro, Lewis P.

    2012-01-01

    Purpose To investigate the time-course of processing of lexical items in auditorily presented canonical (subject–verb–object) constructions in young, neurologically unimpaired control participants and participants with left-hemisphere damage and agrammatic aphasia. Method A cross modal picture priming (CMPP) paradigm was used to test 114 control participants and 8 participants with agrammatic aphasia for priming of a lexical item (direct object noun) immediately after it is initially encountered in the ongoing auditory stream and at 3 additional time points at 400-ms intervals. Results The control participants demonstrated immediate activation of the lexical item, followed by a rapid loss (decay). The participants with aphasia demonstrated delayed activation of the lexical item. Conclusion This evidence supports the hypothesis of a delay in lexical activation in people with agrammatic aphasia. The delay in lexical activation feeds syntactic processing too slowly, contributing to comprehension deficits in people with agrammatic aphasia. PMID:22355007

  8. Ordered short-term memory differs in signers and speakers: Implications for models of short-term memory

    PubMed Central

    Bavelier, Daphne; Newport, Elissa L.; Hall, Matt; Supalla, Ted; Boutla, Mrim

    2008-01-01

    Capacity limits in linguistic short-term memory (STM) are typically measured with forward span tasks in which participants are asked to recall lists of words in the order presented. Using such tasks, native signers of American Sign Language (ASL) exhibit smaller spans than native speakers (Boutla, Supalla, Newport, & Bavelier, 2004). Here, we test the hypothesis that this population difference reflects differences in the way speakers and signers maintain temporal order information in short-term memory. We show that native signers differ from speakers on measures of short-term memory that require maintenance of temporal order of the tested materials, but not on those in which temporal order is not required. In addition, we show that, in a recall task with free order, bilingual subjects are more likely to recall in temporal order when using English than ASL. We conclude that speakers and signers do share common short-term memory processes. However, whereas short-term memory for spoken English is predominantly organized in terms of temporal order, we argue that this dimension does not play as great a role in signers’ short-term memory. Other factors that may affect STM processes in signers are discussed. PMID:18083155

  9. Morphosyntactic Production and Verbal Working Memory: Evidence From Greek Aphasia and Healthy Aging.

    PubMed

    Fyndanis, Valantis; Arcara, Giorgio; Christidou, Paraskevi; Caplan, David

    2018-05-17

    The present work investigated whether verbal working memory (WM) affects morphosyntactic production in configurations that do not involve or favor similarity-based interference and whether WM interacts with verb-related morphosyntactic categories and/or cue-target distance (locality). It also explored whether the findings related to the questions above lend support to a recent account of agrammatic morphosyntactic production: Interpretable Features' Impairment Hypothesis (Fyndanis, Varlokosta, & Tsapkini, 2012). A sentence completion task testing production of subject-verb agreement, tense/time reference, and aspect in local and nonlocal conditions and two verbal WM tasks were administered to 8 Greek-speaking persons with agrammatic aphasia (PWA) and 103 healthy participants. The 3 morphosyntactic categories dissociated in both groups (agreement > tense > aspect). A significant interaction emerged in both groups between the 3 morphosyntactic categories and WM. There was no main effect of locality in either of the 2 groups. At the individual level, all 8 PWA exhibited dissociations between agreement, tense, and aspect, and effects of locality were contradictory. Results suggest that individuals with WM limitations (both PWA and healthy older speakers) show dissociations between the production of verb-related morphosyntactic categories. WM affects performance shaping the pattern of morphosyntactic production (in Greek: subject-verb agreement > tense > aspect). The absence of an effect of locality suggests that executive capacities tapped by WM tasks are involved in morphosyntactic processing of demanding categories even when the cue is adjacent to the target. Results are consistent with the Interpretable Features' Impairment Hypothesis (Fyndanis et al., 2012). https://doi.org/10.23641/asha.6024428.

  10. Sensitivity to Speaker Control in the Online Comprehension of Conditional Tips and Promises: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Stewart, Andrew J.; Haigh, Matthew; Ferguson, Heather J.

    2013-01-01

    Statements of the form if… then… can be used to communicate conditional speech acts such as tips and promises. Conditional promises require the speaker to have perceived control over the outcome event, whereas conditional tips do not. In an eye-tracking study, we examined whether readers are sensitive to information about perceived speaker control…

  11. The "Virtual" Panel: A Computerized Model for LGBT Speaker Panels

    ERIC Educational Resources Information Center

    Beasley, Christopher; Torres-Harding, Susan; Pedersen, Paula J.

    2012-01-01

    Recent societal trends indicate more tolerance for homosexuality, but prejudice remains on college campuses. Speaker panels are commonly used in classrooms as a way to educate students about sexual diversity and decrease negative attitudes toward sexual diversity. The advent of computer-delivered instruction presents a unique opportunity to…

  12. Voice Recognition Software Accuracy with Second Language Speakers of English.

    ERIC Educational Resources Information Center

    Coniam, D.

    1999-01-01

    Explores the potential of the use of voice-recognition technology with second-language speakers of English. Involves the analysis of the output produced by a small group of very competent second-language subjects reading a text into the voice recognition software Dragon Systems "Dragon NaturallySpeaking." (Author/VWL)

  13. Evidential Uses in the Spanish of Quechua Speakers in Peru.

    ERIC Educational Resources Information Center

    Escobar, Anna Maria

    1994-01-01

    Analysis of recordings of spontaneous speech of native speakers of Quechua speaking Spanish as a second language reveals that, using verbal morphological resources of Spanish, they have grammaticalized an epistemic marking system resembling that of Quechua. Sources of this process in both Quechua and Spanish are analyzed. (MSE)

  14. Age of acquisition and naming performance in Frisian-Dutch bilingual speakers with dementia

    PubMed Central

    Veenstra, Wencke S.; Huisman, Mark; Miller, Nick

    2014-01-01

    Age of acquisition (AoA) of words is a recognised variable affecting language processing in speakers with and without language disorders. For bi- and multilingual speakers their languages can be differentially affected in neurological illness. Study of language loss in bilingual speakers with dementia has been relatively neglected. Objective We investigated whether AoA of words was associated with level of naming impairment in bilingual speakers with probable Alzheimer's dementia within and across their languages. Methods Twenty-six Frisian-Dutch bilinguals with mild to moderate dementia named 90 pictures in each language, employing items with rated AoA and other word variable measures matched across languages. Quantitative (totals correct) and qualitative (error types and (in)appropriate switching) aspects were measured. Results Impaired retrieval occurred in Frisian (Language 1) and Dutch (Language 2), with a significant effect of AoA on naming in both languages. Earlier acquired words were better preserved and retrieved. Performance was identical across languages, but better in Dutch when controlling for covariates. However, participants demonstrated more inappropriate code switching within the Frisian test setting. On qualitative analysis, no differences in overall error distribution were found between languages for early or late acquired words. There existed a significantly higher percentage of semantically than visually-related errors. Conclusion These findings have implications for understanding problems in lexical retrieval among bilingual individuals with dementia and its relation to decline in other cognitive functions which may play a role in inappropriate code switching. We discuss the findings in the light of the close relationship between Frisian and Dutch and the pattern of usage across the life-span. PMID:29213911

  15. Effects of a metronome on the filled pauses of fluent speakers.

    PubMed

    Christenfeld, N

    1996-12-01

    Filled pauses (the "ums" and "uhs" that litter spontaneous speech) seem to be a product of the speaker paying deliberate attention to the normally automatic act of talking. This is the same sort of explanation that has been offered for stuttering. In this paper we explore whether a manipulation that has long been known to decrease stuttering, synchronizing speech to the beats of a metronome, will then also decrease filled pauses. Two experiments indicate that a metronome has a dramatic effect on the production of filled pauses. This effect is not due to any simplification or slowing of the speech and supports the view that a metronome causes speakers to attend more to how they are talking and less to what they are saying. It also lends support to the connection between stutters and filled pauses.

  16. Investigating Chinese Speakers' Acquisition of Telicity in English

    ERIC Educational Resources Information Center

    Yin, Bin

    2012-01-01

    This dissertation is concerned with Chinese speakers' acquisition of telicity in L2 English. Telicity is a semantic notion having to do with whether an event has an inherent endpoint or not. Most existing work on L2 telicity is conceptualized within an L1-transfer framework and examines learning situations where L1 and L2 differ on whether…

  17. The effects of gated speech on the fluency of speakers who stutter.

    PubMed

    Howell, Peter

    2007-01-01

    It is known that the speech of people who stutter improves when the speaker's own vocalization is changed while the participant is speaking. One explanation of these effects is the disruptive rhythm hypothesis (DRH). The DRH maintains that the manipulated sound only needs to disturb timing to affect speech control. The experiment investigated whether speech that was gated on and off (interrupted) affected the speech control of speakers who stutter. Eight children who stutter read a passage when they heard their voice normally and when the speech was gated. Fluency was enhanced (fewer errors were made and time to read a set passage was reduced) when speech was interrupted in this way. The results support the DRH. Copyright 2007 S. Karger AG, Basel.

  18. Representational deficit or processing effect? An electrophysiological study of noun-noun compound processing by very advanced L2 speakers of English

    PubMed Central

    De Cat, Cecile; Klepousniotou, Ekaterini; Baayen, R. Harald

    2015-01-01

    The processing of English noun-noun compounds (NNCs) was investigated to identify the extent and nature of differences between the performance of native speakers of English and advanced Spanish and German non-native speakers of English. The study sought to establish whether the word order of the equivalent structure in the non-native speakers' mother tongue (L1) influenced their processing of NNCs in their second language (L2), and whether this influence was due to differences in grammatical representation (i.e., incomplete acquisition of the relevant structure) or to processing effects. Two masked-priming lexical decision experiments were conducted in which compounds were presented with their constituent nouns in licit vs. reversed order. The first experiment used a speeded lexical decision task with reaction time registration, and the second a delayed lexical decision task with EEG registration. There were no significant group differences in accuracy in the licit word order condition, suggesting that the grammatical representation had been fully acquired by the non-native speakers. However, the Spanish speakers made slightly more errors with the reversed order and had longer response times, suggesting an L1 interference effect (as the reversed order matches the licit word order in Spanish). The EEG data, analyzed with generalized additive mixed models, further supported this hypothesis. The EEG waveform of the non-native speakers was characterized by a slightly later onset N400 in the violation condition (reversed constituent order). Compound frequency predicted the amplitude of the EEG signal for the licit word order for native speakers, but for the reversed constituent order for Spanish speakers, the licit order in their L1, supporting the hypothesis that Spanish speakers are affected by interference from their L1. The pattern of results for the German speakers in the violation condition suggested a strong conflict arising due to licit constituents being

  19. "Cool" English: Stylized Native-Speaker English in Japanese Television Shows

    ERIC Educational Resources Information Center

    Furukawa, Gavin

    2015-01-01

    This article analyzes stylized pronunciations of English by Japanese speakers on televised variety shows in Japan. Research on style and mocking has done much to reveal how linguistic forms are utilized in interaction as resources of identity construction that can oftentimes subvert hegemonic discourse (Chun 2004). Within this research area,…

  20. Perception and Production of English Lexical Stress by Thai Speakers

    ERIC Educational Resources Information Center

    Jangjamras, Jirapat

    2011-01-01

    This study investigated the effects of first language prosodic transfer on the perception and production of English lexical stress and the relation between stress perception and production by second language learners. To test the effect of Thai tonal distribution rules and stress patterns on native Thai speakers' perception and production of…