Sample records for adult spoken corpora

  1. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    PubMed Central

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we consider all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980
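The observed-versus-expected comparison that these token counts rely on can be sketched in a few lines. The function and the toy token sample below are illustrative assumptions, not the authors' actual analysis:

```python
from collections import Counter

def cv_ratios(tokens):
    """Observed/expected ratios for consonant-vowel (CV) pairs.

    tokens: list of (consonant_class, vowel_class) pairs, one per CV
    syllable token. A ratio > 1 means the pair co-occurs more often
    than the independent C and V token frequencies would predict.
    """
    n = len(tokens)
    pair_counts = Counter(tokens)
    c_counts = Counter(c for c, _ in tokens)
    v_counts = Counter(v for _, v in tokens)
    ratios = {}
    for (c, v), observed in pair_counts.items():
        expected = c_counts[c] * v_counts[v] / n  # chance co-occurrence
        ratios[(c, v)] = observed / expected
    return ratios

# Toy sample in which labials pair with central vowels above chance.
sample = ([("labial", "central")] * 6 + [("labial", "front")] * 2
          + [("coronal", "front")] * 6 + [("coronal", "central")] * 2)
print(cv_ratios(sample)[("labial", "central")])  # → 1.5
```

Running the same computation over dictionary entries instead of corpus tokens is what distinguishes the "type" counts criticized above from the "token" counts used in babbling studies.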

  2. Spoken Grammar Awareness Raising: Does It Affect the Listening Ability of Iranian EFL Learners?

    ERIC Educational Resources Information Center

    Rashtchi, Mojgan; Afzali, Mahnaz

    2011-01-01

    Advances in spoken corpora analysis have brought about new insights into language pedagogy and have led to an awareness of the characteristics of spoken language. Current findings have shown that grammar of spoken language is different from written language. However, most listening and speaking materials are concocted based on written grammar and…

  3. Spoken Language Production in Young Adults: Examining Syntactic Complexity

    ERIC Educational Resources Information Center

    Nippold, Marilyn A.; Frantz-Kaspar, Megan W.; Vigeland, Laura M.

    2017-01-01

    Purpose: In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language…

  4. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    PubMed

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Integrating Learner Corpora and Natural Language Processing: A Crucial Step towards Reconciling Technological Sophistication and Pedagogical Effectiveness

    ERIC Educational Resources Information Center

    Granger, Sylviane; Kraif, Olivier; Ponton, Claude; Antoniadis, Georges; Zampa, Virginie

    2007-01-01

    Learner corpora, electronic collections of spoken or written data from foreign language learners, offer unparalleled access to many hitherto uncovered aspects of learner language, particularly in their error-tagged format. This article aims to demonstrate the role that the learner corpus can play in CALL, particularly when used in conjunction with…

  6. A Corpus-Based Study on Turkish Spoken Productions of Bilingual Adults

    ERIC Educational Resources Information Center

    Agçam, Reyhan; Bulut, Adem

    2016-01-01

    The current study investigated whether monolingual adult speakers of Turkish and bilingual adult speakers of Arabic and Turkish significantly differ regarding their spoken productions in Turkish. Accordingly, two groups of undergraduate students studying Turkish Language and Literature at a state university in Turkey were presented two videos on a…

  7. SyllabO+: A new tool to study sublexical phenomena in spoken Quebec French.

    PubMed

    Bédard, Pascale; Audet, Anne-Marie; Drouin, Patrick; Roy, Johanna-Pascale; Rivard, Julie; Tremblay, Pascale

    2017-10-01

    Sublexical phonotactic regularities in language have a major impact on language development, as well as on speech processing and production throughout the entire lifespan. To understand the impact of phonotactic regularities on speech and language functions at the behavioral and neural levels, it is essential to have access to oral language corpora to study these complex phenomena in different languages. Yet, probably because of their complexity, oral language corpora remain less common than written language corpora. This article presents the first corpus and database of spoken Quebec French syllables and phones: SyllabO+. This corpus contains phonetic transcriptions of over 300,000 syllables (over 690,000 phones) extracted from recordings of 184 healthy adult native Quebec French speakers, ranging in age from 20 to 97 years. To ensure the representativeness of the corpus, these recordings were made in both formal and familiar communication contexts. Phonotactic distributional statistics (e.g., syllable and co-occurrence frequencies, percentages, percentile ranks, transition probabilities, and pointwise mutual information) were computed from the corpus. An open-access online application to search the database was developed, and is available at www.speechneurolab.ca/syllabo. In this article, we present a brief overview of the corpus, as well as the syllable and phone databases, and we discuss their practical applications in various fields of research, including cognitive neuroscience, psycholinguistics, neurolinguistics, experimental psychology, phonetics, and phonology. Nonacademic practical applications are also discussed, including uses in speech-language pathology.
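Pointwise mutual information of the sort this database reports can be computed from phone-level transcriptions roughly as follows. This is a minimal sketch with made-up phone sequences, not the SyllabO+ implementation:

```python
import math
from collections import Counter

def phone_pmi(syllables):
    """Pointwise mutual information for adjacent phone pairs.

    syllables: list of phone sequences (lists of phone strings).
    PMI(x, y) = log2(p(x, y) / (p(x) * p(y))), where p(x, y) is the
    bigram probability and p(x), p(y) are unigram probabilities.
    """
    bigrams = Counter()
    unigrams = Counter()
    for syl in syllables:
        unigrams.update(syl)
        bigrams.update(zip(syl, syl[1:]))
    n_uni = sum(unigrams.values())
    n_bi = sum(bigrams.values())
    pmi = {}
    for (x, y), c in bigrams.items():
        p_xy = c / n_bi
        pmi[(x, y)] = math.log2(p_xy / ((unigrams[x] / n_uni) * (unigrams[y] / n_uni)))
    return pmi

# Toy transcriptions: /ta/ occurs more often than /to/.
syllables = [["t", "a"], ["t", "a"], ["k", "o"], ["t", "o"]]
scores = phone_pmi(syllables)
```

High-PMI pairs are those that co-occur more than their individual frequencies predict, which is the same observed-versus-expected logic used for phonotactic regularities generally.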

  8. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    ERIC Educational Resources Information Center

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Halle, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and…

  9. Proposed Framework for the Evaluation of Standalone Corpora Processing Systems: An Application to Arabic Corpora

    PubMed Central

    Al-Thubaity, Abdulmohsen; Alqifari, Reem

    2014-01-01

    Despite the accessibility of numerous online corpora, students and researchers engaged in the fields of Natural Language Processing (NLP), corpus linguistics, and language learning and teaching may encounter situations in which they need to develop their own corpora. Several commercial and free standalone corpora processing systems are available to process such corpora. In this study, we first propose a framework for the evaluation of standalone corpora processing systems and then use it to evaluate seven freely available systems. The proposed framework considers the usability, functionality, and performance of the evaluated systems while taking into consideration their suitability for Arabic corpora. While the results show that most of the evaluated systems exhibited comparable usability scores, the scores for functionality and performance were substantially different with respect to support for the Arabic language and N-grams profile generation. The results of our evaluation will help potential users of the evaluated systems to choose the system that best meets their needs. More importantly, the results will help the developers of the evaluated systems to enhance their systems and developers of new corpora processing systems by providing them with a reference framework. PMID:25610910

  10. Proposed framework for the evaluation of standalone corpora processing systems: an application to Arabic corpora.

    PubMed

    Al-Thubaity, Abdulmohsen; Al-Khalifa, Hend; Alqifari, Reem; Almazrua, Manal

    2014-01-01

    Despite the accessibility of numerous online corpora, students and researchers engaged in the fields of Natural Language Processing (NLP), corpus linguistics, and language learning and teaching may encounter situations in which they need to develop their own corpora. Several commercial and free standalone corpora processing systems are available to process such corpora. In this study, we first propose a framework for the evaluation of standalone corpora processing systems and then use it to evaluate seven freely available systems. The proposed framework considers the usability, functionality, and performance of the evaluated systems while taking into consideration their suitability for Arabic corpora. While the results show that most of the evaluated systems exhibited comparable usability scores, the scores for functionality and performance were substantially different with respect to support for the Arabic language and N-grams profile generation. The results of our evaluation will help potential users of the evaluated systems to choose the system that best meets their needs. More importantly, the results will help the developers of the evaluated systems to enhance their systems and developers of new corpora processing systems by providing them with a reference framework.
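One of the features the evaluation weighs, N-gram profile generation, amounts to counting and ranking word sequences. A minimal sketch follows; the function name and output format are illustrative, not those of any of the evaluated systems:

```python
from collections import Counter

def ngram_profile(text, n=2, top=10):
    """Frequency profile of word n-grams: (ngram, count, relative
    frequency) tuples, most frequent first."""
    words = text.split()
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return [(g, c, c / total) for g, c in counts.most_common(top)]

profile = ngram_profile("the cat sat on the cat mat")
print(profile[0])  # → ('the cat', 2, 0.3333333333333333)
```

A real corpora processing system would add tokenization appropriate to the language (a key issue for Arabic, as the evaluation notes), but the counting core is the same.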

  11. Novel Spoken Word Learning in Adults with Developmental Dyslexia

    ERIC Educational Resources Information Center

    Conner, Peggy S.

    2013-01-01

    A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…

  12. The Hebrew CHILDES corpus: transcription and morphological analysis

    PubMed Central

    Albert, Aviad; MacWhinney, Brian; Nir, Bracha

    2014-01-01

    We present a corpus of transcribed spoken Hebrew that reflects spoken interactions between children and adults. The corpus is an integral part of the CHILDES database, which distributes similar corpora for over 25 languages. We introduce a dedicated transcription scheme for the spoken Hebrew data that is sensitive to both the phonology and the standard orthography of the language. We also introduce a morphological analyzer that was specifically developed for this corpus. The analyzer adequately covers the entire corpus, producing detailed correct analyses for all tokens. Evaluation on a new corpus reveals high coverage as well. Finally, we describe a morphological disambiguation module that selects the correct analysis of each token in context. The result is a high-quality morphologically-annotated CHILDES corpus of Hebrew, along with a set of tools that can be applied to new corpora. PMID:25419199

  13. Conventions for sign and speech transcription of child bimodal bilingual corpora in ELAN.

    PubMed

    Chen Pichler, Deborah; Hochgesang, Julie A; Lillo-Martin, Diane; de Quadros, Ronice Müller

    2010-01-01

    This article extends current methodologies for the linguistic analysis of sign language acquisition to cases of bimodal bilingual acquisition. Using ELAN, we are transcribing longitudinal spontaneous production data from hearing children of Deaf parents who are learning either American Sign Language (ASL) and American English (AE), or Brazilian Sign Language (Libras, also referred to as Língua de Sinais Brasileira/LSB in some texts) and Brazilian Portuguese (BP). Our goal is to construct corpora that can be mined for a wide range of investigations on various topics in acquisition. Thus, it is important that we maintain consistency in transcription for both signed and spoken languages. This article documents our transcription conventions, including the principles behind our approach. Using this document, other researchers can choose to follow similar conventions or develop new ones using our suggestions as a starting point.

  14. Conventions for sign and speech transcription of child bimodal bilingual corpora in ELAN

    PubMed Central

    Chen Pichler, Deborah; Hochgesang, Julie A.; Lillo-Martin, Diane; de Quadros, Ronice Müller

    2011-01-01

    This article extends current methodologies for the linguistic analysis of sign language acquisition to cases of bimodal bilingual acquisition. Using ELAN, we are transcribing longitudinal spontaneous production data from hearing children of Deaf parents who are learning either American Sign Language (ASL) and American English (AE), or Brazilian Sign Language (Libras, also referred to as Língua de Sinais Brasileira/LSB in some texts) and Brazilian Portuguese (BP). Our goal is to construct corpora that can be mined for a wide range of investigations on various topics in acquisition. Thus, it is important that we maintain consistency in transcription for both signed and spoken languages. This article documents our transcription conventions, including the principles behind our approach. Using this document, other researchers can choose to follow similar conventions or develop new ones using our suggestions as a starting point. PMID:21625371

  15. Relationships among vocabulary size, nonverbal cognition, and spoken word recognition in adults with cochlear implants

    NASA Astrophysics Data System (ADS)

    Collison, Elizabeth A.; Munson, Benjamin; Carney, Arlene E.

    2002-05-01

    Recent research has attempted to identify the factors that predict speech perception performance among users of cochlear implants (CIs). Studies have found that approximately 20%-60% of the variance in speech perception scores can be accounted for by factors including duration of deafness, etiology, type of device, and length of implant use, leaving approximately 50% of the variance unaccounted for. The current study examines the extent to which vocabulary size and nonverbal cognitive ability predict CI listeners' spoken word recognition. Fifteen postlingually deafened adults with nucleus or clarion CIs were given standardized assessments of nonverbal cognitive ability and expressive vocabulary size: the Expressive Vocabulary Test, the Test of Nonverbal Intelligence-III, and the Woodcock-Johnson-III Test of Cognitive Ability, Verbal Comprehension subtest. Two spoken word recognition tasks were administered. In the first, listeners identified isophonemic CVC words. In the second, listeners identified gated words varying in lexical frequency and neighborhood density. Analyses will examine the influence of lexical frequency and neighborhood density on the uniqueness point in the gating task, as well as relationships among nonverbal cognitive ability, vocabulary size, and the two spoken word recognition measures. [Work supported by NIH Grant P01 DC00110 and by the Lions 3M Hearing Foundation.]

  16. Automatic Dictionary Expansion Using Non-parallel Corpora

    NASA Astrophysics Data System (ADS)

    Rapp, Reinhard; Zock, Michael

    Automatically generating bilingual dictionaries from parallel, manually translated texts is a well established technique that works well in practice. However, parallel texts are a scarce resource. Therefore, it is desirable also to be able to generate dictionaries from pairs of comparable monolingual corpora. For most languages, such corpora are much easier to acquire, and often in considerably larger quantities. In this paper we present the implementation of an algorithm which exploits such corpora with good success. Based on the assumption that the co-occurrence patterns between different languages are related, it expands a small base lexicon. For improved performance, it also realizes a novel interlingua approach. That is, if corpora of more than two languages are available, the translations from one language to another can be determined not only directly, but also indirectly via a pivot language.
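The core idea, that a word's co-occurrence vector, mapped through a small seed lexicon, should resemble its translation's co-occurrence vector in the other language, can be sketched as follows. The data and the similarity scoring are toy assumptions; Rapp and Zock's actual algorithm is considerably more elaborate:

```python
import math
from collections import defaultdict

def cooc(sents):
    """Within-sentence co-occurrence counts for each word."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sent in sents:
        for w in sent:
            for c in sent:
                if c != w:
                    vecs[w][c] += 1
    return vecs

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def translate(word, src_sents, tgt_sents, seed):
    """Rank target words by similarity of seed-mapped co-occurrence vectors."""
    src, tgt = cooc(src_sents), cooc(tgt_sents)
    # Map the source word's context dimensions into the target
    # language via the seed lexicon; unmapped contexts are dropped.
    mapped = {seed[c]: n for c, n in src[word].items() if c in seed}
    return max(tgt, key=lambda t: cosine(mapped, tgt[t]))

src_sents = [["dog", "eats", "meat"], ["cat", "eats", "bread"]]
tgt_sents = [["hund", "frisst", "fleisch"], ["katze", "frisst", "brot"]]
seed = {"dog": "hund", "cat": "katze", "eats": "frisst"}
print(translate("meat", src_sents, tgt_sents, seed))  # → fleisch
```

The interlingua idea mentioned above would add a third language as a pivot, so that a candidate translation can be confirmed indirectly when no direct seed coverage exists.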

  17. Wide coverage biomedical event extraction using multiple partially overlapping corpora

    PubMed Central

    2013-01-01

    Background Biomedical events are key to understanding physiological processes and disease, and wide coverage extraction is required for comprehensive automatic analysis of statements describing biomedical systems in the literature. In turn, the training and evaluation of extraction methods requires manually annotated corpora. However, as manual annotation is time-consuming and expensive, any single event-annotated corpus can only cover a limited number of semantic types. Although combined use of several such corpora could potentially allow an extraction system to achieve broad semantic coverage, there has been little research into learning from multiple corpora with partially overlapping semantic annotation scopes. Results We propose a method for learning from multiple corpora with partial semantic annotation overlap, and implement this method to improve our existing event extraction system, EventMine. An evaluation using seven event annotated corpora, including 65 event types in total, shows that learning from overlapping corpora can produce a single, corpus-independent, wide coverage extraction system that outperforms systems trained on single corpora and exceeds previously reported results on two established event extraction tasks from the BioNLP Shared Task 2011. Conclusions The proposed method allows the training of a wide-coverage, state-of-the-art event extraction system from multiple corpora with partial semantic annotation overlap. The resulting single model makes broad-coverage extraction straightforward in practice by removing the need to either select a subset of compatible corpora or semantic types, or to merge results from several models trained on different individual corpora. Multi-corpus learning also allows annotation efforts to focus on covering additional semantic types, rather than aiming for exhaustive coverage in any single annotation effort, or extending the coverage of semantic types annotated in existing corpora. PMID:23731785

  18. A Spoken Word Count.

    ERIC Educational Resources Information Center

    Jones, Lyle V.; Wepman, Joseph M.

    This word count is a composite listing of the different words spoken by a selected sample of 54 English-speaking adults and the frequency with which each of the different words was used in a particular test. The stimulus situation was identical for each subject and consisted of 20 cards of the Thematic Apperception Test. Although most word counts…

  19. Working with Corpora in the Translation Classroom

    ERIC Educational Resources Information Center

    Krüger, Ralph

    2012-01-01

    This article sets out to illustrate possible applications of electronic corpora in the translation classroom. Starting with a survey of corpus use within corpus-based translation studies, the didactic value of corpora in the translation classroom and their epistemic value in translation teaching and practice will be elaborated. A typology of…

  20. Using Monolingual and Bilingual Corpora in Lexicography

    ERIC Educational Resources Information Center

    Miangah, Tayebeh Mosavi

    2009-01-01

    Constructing and exploiting different types of corpora are among computer applications exposed to the researchers in different branches of science including lexicography. In lexicography, different types of corpora may be of great help in finding the most appropriate uses of words and expressions by referring to numerous examples and citations.…

  21. "Visual" Cortex Responds to Spoken Language in Blind Children.

    PubMed

    Bedny, Marina; Richardson, Hilary; Saxe, Rebecca

    2015-08-19

    Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. 

  22. Learner Corpora: The Missing Link in EAP Pedagogy

    ERIC Educational Resources Information Center

    Gilquin, Gaetanelle; Granger, Sylviane; Paquot, Magali

    2007-01-01

    This article deals with the place of learner corpora, i.e. corpora containing authentic language data produced by learners of a foreign/second language, in English for academic purposes (EAP) pedagogy and sets out to demonstrate that they have a valuable contribution to make to the field. Following an initial brief introduction to corpus-based…

  23. Eye movements during spoken word recognition in Russian children.

    PubMed

    Sekerina, Irina A; Brooks, Patricia J

    2007-09-01

    This study explores incremental processing in spoken word recognition in Russian 5- and 6-year-olds and adults using free-viewing eye-tracking. Participants viewed scenes containing pictures of four familiar objects and clicked on a target embedded in a spoken instruction. In the cohort condition, two object names shared identical three-phoneme onsets. In the noncohort condition, all object names had unique onsets. Coarse-grain analyses of eye movements indicated that adults produced looks to the competitor on significantly more cohort trials than on noncohort trials, whereas children surprisingly failed to demonstrate cohort competition due to widespread exploratory eye movements across conditions. Fine-grain analyses, in contrast, showed a similar time course of eye movements across children and adults, but with cohort competition lingering more than 1s longer in children. The dissociation between coarse-grain and fine-grain eye movements indicates a need to consider multiple behavioral measures in making developmental comparisons in language processing.

  24. Corpora and Language Assessment: The State of the Art

    ERIC Educational Resources Information Center

    Park, Kwanghyun

    2014-01-01

    This article outlines the current state of and recent developments in the use of corpora for language assessment and considers future directions with a special focus on computational methodology. Because corpora began to make inroads into language assessment in the 1990s, test developers have increasingly used them as a reference resource to…

  25. From Pedagogically Relevant Corpora to Authentic Language Learning Contents

    ERIC Educational Resources Information Center

    Braun, Sabine

    2005-01-01

    The potential of corpora for language learning and teaching has been widely acknowledged and their ready availability on the Web has facilitated access for a broad range of users, including language teachers and learners. However, the integration of corpora into general language learning and teaching practice has so far been disappointing. In this…

  26. The acceleration of spoken-word processing in children's native-language acquisition: an ERP cohort study.

    PubMed

    Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hagiwara, Hiroko

    2011-04-01

    Healthy adults can identify spoken words at a remarkable speed, by incrementally analyzing word-onset information. It is currently unknown how this adult-level speed of spoken-word processing emerges during children's native-language acquisition. In a picture-word mismatch paradigm, we manipulated the semantic congruency between picture contexts and spoken words, and recorded event-related potential (ERP) responses to the words. Previous similar studies focused on the N400 response, but we focused instead on the onsets of semantic congruency effects (N200 or Phonological Mismatch Negativity), which contain critical information for incremental spoken-word processing. We analyzed ERPs obtained longitudinally from two age cohorts of 40 primary-school children (total n=80) in a 3-year period. Children first tested at 7 years of age showed earlier onsets of congruency effects (by approximately 70ms) when tested 2 years later (i.e., at age 9). Children first tested at 9 years of age did not show such shortening of onset latencies 2 years later (i.e., at age 11). Overall, children's onset latencies at age 9 appeared similar to those of adults. These data challenge the previous hypothesis that word processing is well established at age 7. Instead they support the view that the acceleration of spoken-word processing continues beyond age 7. Copyright © 2011 Elsevier Ltd. All rights reserved.

  27. Linguistic Corpora and Lexicography.

    ERIC Educational Resources Information Center

    Meijs, Willem

    1996-01-01

    Overviews the development of corpus linguistics, reviews the use of corpora in modern lexicography, and presents central issues in ongoing work aimed at broadening the scope of lexicographical use of corpus data. Focuses on how the field has developed in relation to the production of new monolingual English dictionaries by major British…

  28. Metaphor Identification in Large Texts Corpora

    PubMed Central

    Neuman, Yair; Assaf, Dan; Cohen, Yohai; Last, Mark; Argamon, Shlomo; Howard, Newton; Frieder, Ophir

    2013-01-01

    Identifying metaphorical language-use (e.g., sweet child) is one of the challenges facing natural language processing. This paper describes three novel algorithms for automatic metaphor identification. The algorithms are variations of the same core algorithm. We evaluate the algorithms on two corpora of Reuters and the New York Times articles. The paper presents the most comprehensive study of metaphor identification in terms of scope of metaphorical phrases and annotated corpora size. Algorithms’ performance in identifying linguistic phrases as metaphorical or literal has been compared to human judgment. Overall, the algorithms outperform the state-of-the-art algorithm with 71% precision and 27% averaged improvement in prediction over the base-rate of metaphors in the corpus. PMID:23658625

  29. "A Unified Poet Alliance": The Personal and Social Outcomes of Youth Spoken Word Poetry Programming

    ERIC Educational Resources Information Center

    Weinstein, Susan

    2010-01-01

    This article places youth spoken word (YSW) poetry programming within the larger framework of arts education. Drawing primarily on transcripts of interviews with teen poets and adult teaching artists and program administrators, the article identifies specific benefits that participants ascribe to youth spoken word, including the development of…

  30. Automatic translation among spoken languages

    NASA Technical Reports Server (NTRS)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  31. Mice null for Frizzled4 (Fzd4-/-) are infertile and exhibit impaired corpora lutea formation and function.

    PubMed

    Hsieh, Minnie; Boerboom, Derek; Shimada, Masayuki; Lo, Yuet; Parlow, Albert F; Luhmann, Ulrich F O; Berger, Wolfgang; Richards, JoAnne S

    2005-12-01

    Previous studies showed that transcripts encoding specific Wnt ligands and Frizzled receptors including Wnt4, Frizzled1 (Fzd1), and Frizzled4 (Fzd4) were expressed in a cell-specific manner in the adult mouse ovary. Overlapping expression of Wnt4 and Fzd4 mRNA in small follicles and corpora lutea led us to hypothesize that the infertility of mice null for Fzd4 (Fzd4-/-) might involve impaired follicular growth or corpus luteum formation. Analyses at defined stages of reproductive function indicate that immature Fzd4-/- mouse ovaries contain follicles at many stages of development and respond to exogenous hormone treatments in a manner similar to their wild-type littermates, indicating that the processes controlling follicular development and follicular cell responses to gonadotropins are intact. Adult Fzd4-/- mice also exhibit normal mating behavior and ovulate, indicating that endocrine events controlling these processes occur. However, Fzd4-/- mice fail to become pregnant and do not produce offspring. Histological and functional analyses of ovaries from timed mating pairs at Days 1.5-7.5 postcoitus (p.c.) indicate that the corpora lutea of the Fzd4-/- mice do not develop normally. Expression of luteal cell-specific mRNAs (Lhcgr, Prlr, Cyp11a1 and Sfrp4) is reduced, luteal cell morphology is altered, and markers of angiogenesis and vascular formation (Efnb1, Efnb2, Ephb4, Vegfa, Vegfc) are low in the Fzd4-/- mice. Although a recently identified, high-affinity FZD4 ligand Norrin (Norrie disease pseudoglioma homolog) is expressed in the ovary, adult Ndph-/- mice contain functional corpora lutea and do not phenocopy Fzd4-/- mice. Thus, Fzd4 appears to impact the formation of the corpus luteum by mechanisms that more closely phenocopy Prlr null mice.

  12. The Importance of Corpora in Translation Studies: A Practical Case

    ERIC Educational Resources Information Center

    Bermúdez Bausela, Montserrat

    2016-01-01

    This paper deals with the use of corpora in Translation Studies, particularly with the so-called "'ad hoc' corpus" or "translator's corpus" as a working tool both in the classroom and for the professional translator. We believe that corpora are an inestimable source not only for terminology and phraseology extraction (cf. Maia,…

  13. Progestogen treatments for cycle management in a sheep model of assisted conception affect the growth patterns, the expression of luteinizing hormone receptors, and the progesterone secretion of induced corpora lutea.

    PubMed

    Letelier, Claudia; García-Fernández, Rosa Ana; Contreras-Solis, Ignacio; Sanchez, María Angeles; Garcia-Palencia, Pilar; Sanchez, Belen; Gonzalez-Bulnes, Antonio; Flores, Juana María

    2010-03-01

To determine, in a sheep model, the effect of a short-term progestative treatment on growth dynamics and functionality of induced corpora lutea. Observational, model study. Public university. Sixty adult female sheep. Synchronization and induction of ovulation with progestogens and prostaglandin analogues; ovarian ultrasonography, blood sampling, and ovariectomy. Determination of pituitary function and morphologic characteristics, expression of luteinizing hormone (LH) receptors, and progesterone secretion of corpora lutea. The use of progestative pretreatments for assisted conception affects the growth patterns, the expression of LH receptors, and the progesterone secretion of induced corpora lutea. The current study indicates, in a sheep model, the existence of deleterious effects from progestogens on functionality of induced corpora lutea. Copyright 2010 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  14. Corpora of Vietnamese texts: lexical effects of intended audience and publication place.

    PubMed

    Pham, Giang; Kohnert, Kathryn; Carney, Edward

    2008-02-01

    This article has two primary aims. The first is to introduce a new Vietnamese text-based corpus. The Corpora of Vietnamese Texts (CVT; Tang, 2006a) consists of approximately 1 million words drawn from newspapers and children's literature, and is available online at www.vnspeechtherapy.com/vi/CVT. The second aim is to investigate potential differences in lexical frequency and distributional characteristics in the CVT on the basis of place of publication (Vietnam or Western countries) and intended audience: adult-directed texts (newspapers) or child-directed texts (children's literature). We found clear differences between adult- and child-directed texts, particularly in the distributional frequencies of pronouns or kinship terms, which were more frequent in children's literature. Within child- and adult-directed texts, lexical characteristics did not differ on the basis of place of publication. Implications of these findings for future research are discussed.

  15. A Positivity Bias in Written and Spoken English and Its Moderation by Personality and Gender.

    PubMed

    Augustine, Adam A; Mehl, Matthias R; Larsen, Randy J

    2011-09-01

    The human tendency to use positive words ("adorable") more often than negative words ("dreadful") is called the linguistic positivity bias. We find evidence for this bias in two studies of word use, one based on written corpora and another based on naturalistic speech samples. In addition, we demonstrate that the positivity bias applies to nouns and verbs as well as adjectives. We also show that it is found to the same degree in written as well as spoken English. Moreover, personality traits and gender moderate the effect, such that persons high on extraversion and agreeableness and women display a larger positivity bias in naturalistic speech. Results are discussed in terms of how the linguistic positivity bias may serve as a mechanism for social facilitation. People, in general, and some people more than others, tend to talk about the brighter side of life.
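The token-counting approach described above can be illustrated with a back-of-the-envelope sketch; the tiny valence lexicon and sample sentence below are invented for illustration (analyses like this one rely on validated affective word norms, not a hand-picked list):

```python
import re
from collections import Counter

# Toy valence lexicon; a real analysis would use validated norms.
POSITIVE = {"adorable", "happy", "great", "good"}
NEGATIVE = {"dreadful", "sad", "terrible", "bad"}

def positivity_ratio(text: str) -> float:
    """Ratio of positive to negative word tokens; values above 1.0
    indicate a positivity bias in the sample."""
    tokens = Counter(re.findall(r"[a-z']+", text.lower()))
    pos = sum(n for word, n in tokens.items() if word in POSITIVE)
    neg = sum(n for word, n in tokens.items() if word in NEGATIVE)
    return pos / neg if neg else float("inf")

sample = ("What a great, adorable dog. The weather was dreadful, "
          "but we had a good, happy day.")
print(positivity_ratio(sample))  # 4 positive tokens vs 1 negative -> 4.0
```

The same ratio could be computed separately per speaker to examine moderation by personality or gender, as in the study.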

  16. Research on Spoken Dialogue Systems

    NASA Technical Reports Server (NTRS)

    Aist, Gregory; Hieronymus, James; Dowding, John; Hockey, Beth Ann; Rayner, Manny; Chatzichrisafis, Nikos; Farrell, Kim; Renders, Jean-Michel

    2010-01-01

    Research in the field of spoken dialogue systems has been performed with the goal of making such systems more robust and easier to use in demanding situations. The term "spoken dialogue systems" signifies unified software systems containing speech-recognition, speech-synthesis, dialogue management, and ancillary components that enable human users to communicate, using natural spoken language or nearly natural prescribed spoken language, with other software systems that provide information and/or services.

  17. Building gold standard corpora for medical natural language processing tasks.

    PubMed

    Deleger, Louise; Li, Qi; Lingren, Todd; Kaiser, Megan; Molnar, Katalin; Stoutenborough, Laura; Kouril, Michal; Marsolo, Keith; Solti, Imre

    2012-01-01

We present the construction of three annotated corpora to serve as gold standards for medical natural language processing (NLP) tasks. Clinical notes from the medical record, clinical trial announcements, and FDA drug labels are annotated. We report high inter-annotator agreements (overall F-measures between 0.8467 and 0.9176) for the annotation of Personal Health Information (PHI) elements for a de-identification task and of medications, diseases/disorders, and signs/symptoms for an information extraction (IE) task. The annotated corpora of clinical trials and FDA labels will be publicly released; to facilitate translational NLP tasks that require cross-corpora interoperability (e.g., clinical trial eligibility screening), their annotation schemas are aligned with a large-scale, NIH-funded clinical text annotation project.

  18. Feature-level sentiment analysis by using comparative domain corpora

    NASA Astrophysics Data System (ADS)

    Quan, Changqin; Ren, Fuji

    2016-06-01

    Feature-level sentiment analysis (SA) is able to provide more fine-grained SA on certain opinion targets and has a wider range of applications on E-business. This study proposes an approach based on comparative domain corpora for feature-level SA. The proposed approach makes use of word associations for domain-specific feature extraction. First, we assign a similarity score for each candidate feature to denote its similarity extent to a domain. Then we identify domain features based on their similarity scores on different comparative domain corpora. After that, dependency grammar and a general sentiment lexicon are applied to extract and expand feature-oriented opinion words. Lastly, the semantic orientation of a domain-specific feature is determined based on the feature-oriented opinion lexicons. In evaluation, we compare the proposed method with several state-of-the-art methods (including unsupervised and semi-supervised) using a standard product review test collection. The experimental results demonstrate the effectiveness of using comparative domain corpora.
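The comparative-corpora step can be reduced to a frequency contrast: a candidate term counts as a domain-specific feature when it is markedly more frequent in the target-domain corpus than in a contrast corpus. The scoring rule below is a deliberate simplification of the paper's similarity measure, and the toy "review" corpora are invented:

```python
from collections import Counter

def domain_features(target: list[str], contrast: list[str],
                    ratio: float = 2.0, min_count: int = 2) -> set[str]:
    """Terms whose relative frequency in the target corpus exceeds the
    contrast corpus by a factor of `ratio` (simplified comparative scoring)."""
    t_counts, c_counts = Counter(target), Counter(contrast)
    t_total, c_total = sum(t_counts.values()), sum(c_counts.values())
    features = set()
    for word, n in t_counts.items():
        if n < min_count:
            continue  # too rare to score reliably
        t_rf = n / t_total
        c_rf = c_counts.get(word, 0.5) / c_total  # smoothed contrast frequency
        if t_rf / c_rf >= ratio:
            features.add(word)
    return features

camera = "lens battery zoom lens battery screen great zoom lens".split()
hotel = "room staff breakfast room great staff location room great".split()
print(sorted(domain_features(camera, hotel)))  # ['battery', 'lens', 'zoom']
```

Note that the shared evaluative word "great" is correctly rejected, since it is equally frequent in both corpora; the full method then attaches opinion words to the surviving features via dependency relations.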

  19. Speech perception and spoken word recognition: past and present.

    PubMed

Jusczyk, Peter W; Luce, Paul A

    2002-02-01

The scientific study of the perception of spoken language has been an exciting, prolific, and productive area of research for more than 50 years. We have learned much about infants' and adults' remarkable capacities for perceiving and understanding the sounds of their language, as evidenced by our increasingly sophisticated theories of acquisition, process, and representation. We present a selective but, we hope, representative review of the past half century of research on speech perception, paying particular attention to the historical and theoretical contexts within which this research was conducted. Our foci in this review fall on three principal topics: early work on the discrimination and categorization of speech sounds, more recent efforts to understand the processes and representations that subserve spoken word recognition, and research on how infants acquire the capacity to perceive their native language. Our intent is to provide the reader with a sense of the progress our field has experienced over the last half century in understanding the human's extraordinary capacity for the perception of spoken language.

  20. Juvenile Hormone Biosynthesis Gene Expression in the corpora allata of Honey Bee (Apis mellifera L.) Female Castes

    PubMed Central

    Rosa, Gustavo Conrado Couto; Moda, Livia Maria; Martins, Juliana Ramos; Bitondi, Márcia Maria Gentile; Hartfelder, Klaus; Simões, Zilá Luz Paulino

    2014-01-01

Juvenile hormone (JH) controls key events in the honey bee life cycle, viz. caste development and age polyethism. We quantified transcript abundance of 24 genes involved in the JH biosynthetic pathway in the corpora allata-corpora cardiaca (CA-CC) complex. The expression of six of these genes showing relatively high transcript abundance was contrasted with CA size, hemolymph JH titer, as well as JH degradation rates and JH esterase (jhe) transcript levels. Gene expression did not match the contrasting JH titers in queen and worker fourth instar larvae, but jhe transcript abundance and JH degradation rates were significantly lower in queen larvae. Consequently, transcriptional control of JHE is of importance in regulating larval JH titers and caste development. In contrast, the same analyses applied to adult worker bees allowed us to infer that the high JH levels in foragers are due to increased JH synthesis. Upon RNAi-mediated silencing of the methyl farnesoate epoxidase gene (mfe) encoding the enzyme that catalyzes methyl farnesoate-to-JH conversion, the JH titer was decreased, thus corroborating that JH titer regulation in adult honey bees depends on this final JH biosynthesis step. The molecular pathway differences underlying JH titer regulation in larval caste development versus adult age polyethism lead us to propose that mfe and jhe genes be assayed when addressing questions on the role(s) of JH in social evolution. PMID:24489805

  1. A Linguistic Inquiry and Word Count Analysis of the Adult Attachment Interview in Two Large Corpora.

    PubMed

    Waters, Theodore E A; Steele, Ryan D; Roisman, Glenn I; Haydon, Katherine C; Booth-LaForce, Cathryn

    2016-01-01

An emerging literature suggests that variation in Adult Attachment Interview (AAI; George, Kaplan, & Main, 1985) states of mind about childhood experiences with primary caregivers is reflected in specific linguistic features captured by the Linguistic Inquiry and Word Count automated text analysis program (LIWC; Pennebaker, Booth, & Francis, 2007). The current report addressed limitations of prior studies in this literature by using two large AAI corpora (Ns = 826 and 857) and a broader range of linguistic variables, as well as examining associations of LIWC-derived AAI dimensions with key developmental antecedents. First, regression analyses revealed that dismissing states of mind were associated with transcripts that were more truncated and deemphasized discussion of the attachment relationship, whereas preoccupied states of mind were associated with longer, more conflicted, and angry narratives. Second, in aggregate, LIWC variables accounted for over a third of the variation in AAI dismissing and preoccupied states of mind, with regression weights cross-validating across samples. Third, LIWC-derived dismissing and preoccupied state of mind dimensions were associated with direct observations of maternal and paternal sensitivity as well as infant attachment security in childhood, replicating the pattern of results reported in Haydon, Roisman, Owen, Booth-LaForce, and Cox (2014) using coder-derived dismissing and preoccupation scores in the same sample.

  2. Mining Quality Phrases from Massive Text Corpora

    PubMed Central

    Liu, Jialu; Shang, Jingbo; Wang, Chi; Ren, Xiang; Han, Jiawei

    2015-01-01

    Text data are ubiquitous and play an essential role in big data applications. However, text data are mostly unstructured. Transforming unstructured text into structured units (e.g., semantically meaningful phrases) will substantially reduce semantic ambiguity and enhance the power and efficiency at manipulating such data using database technology. Thus mining quality phrases is a critical research problem in the field of databases. In this paper, we propose a new framework that extracts quality phrases from text corpora integrated with phrasal segmentation. The framework requires only limited training but the quality of phrases so generated is close to human judgment. Moreover, the method is scalable: both computation time and required space grow linearly as corpus size increases. Our experiments on large text corpora demonstrate the quality and efficiency of the new method. PMID:26705375
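A crude stand-in for the candidate-generation step of quality-phrase mining is frequency thresholding over contiguous bigrams; the full framework additionally performs phrasal segmentation and quality estimation, which this sketch omits, and the sample text is invented:

```python
from collections import Counter

def candidate_phrases(tokens: list[str], min_count: int = 2) -> dict[str, int]:
    """Contiguous bigrams occurring at least `min_count` times, proposed
    as candidate phrases (frequency thresholding only)."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {" ".join(bg): n for bg, n in bigrams.items() if n >= min_count}

text = "big data applications rely on big data platforms for big data".split()
print(candidate_phrases(text))  # {'big data': 3}
```

Because the work stresses linear scalability, note that this pass is O(n) in corpus length, a property the paper's full pipeline preserves.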

  3. Spoken Records. Third Edition.

    ERIC Educational Resources Information Center

    Roach, Helen

    Surveying 75 years of accomplishment in the field of spoken recording, this reference work critically evaluates commercially available recordings selected for excellence of execution, literary or historical merit, interest, and entertainment value. Some types of spoken records included are early recording, documentaries, lectures, interviews,…

  4. Effects of Aging and Noise on Real-Time Spoken Word Recognition: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Ben-David, Boaz M.; Chambers, Craig G.; Daneman, Meredyth; Pichora-Fuller, M. Kathleen; Reingold, Eyal M.; Schneider, Bruce A.

    2011-01-01

    Purpose: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. Method: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted…

  5. Corpora in Language Teaching and Learning

    ERIC Educational Resources Information Center

    Boulton, Alex

    2017-01-01

    This timeline looks at explicit uses of corpora in foreign or second language (L2) teaching and learning, i.e. what happens when end-users explore corpus data, whether directly via concordancers or integrated into CALL programs, or indirectly with prepared printed materials. The underlying rationale is that such contact provides the massive…

  6. Recognizing Spoken Words: The Neighborhood Activation Model

    PubMed Central

    Luce, Paul A.; Pisoni, David B.

    2012-01-01

    Objective A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition. Design Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming. Results The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in
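The similarity-neighborhood computation can be sketched with the common one-phoneme rule, under which a neighbor differs from the target by a single substitution, insertion, or deletion. Orthographic strings and invented frequencies stand in here for the study's phonetic transcriptions and frequency counts over its 20,000-word lexicon:

```python
def one_edit_apart(a: str, b: str) -> bool:
    """True iff b differs from a by exactly one substitution, insertion, or deletion."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if len(a) > len(b):
        a, b = b, a  # ensure b is the longer string
    return any(b[:i] + b[i + 1:] == a for i in range(len(b)))

# Toy lexicon mapping word -> frequency of occurrence (invented values).
LEXICON = {"cat": 42, "bat": 7, "cab": 3, "cart": 5, "dog": 9}

def neighborhood(word: str) -> dict[str, int]:
    """Similarity neighborhood: lexicon entries one edit away, with frequencies."""
    return {w: f for w, f in LEXICON.items() if w != word and one_edit_apart(word, w)}

print(neighborhood("cat"))  # {'bat': 7, 'cab': 3, 'cart': 5}
```

The neighborhood's size and the frequencies of its members are exactly the quantities the neighborhood probability rule combines with target intelligibility to predict identification performance.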

  7. Cognitive aging and hearing acuity: modeling spoken language comprehension.

    PubMed

    Wingfield, Arthur; Amichetti, Nicole M; Lash, Amanda

    2015-01-01

    The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model; where it is strong and where there are gaps to be filled.

  8. Evaluating gold standard corpora against gene/protein tagging solutions and lexical resources

    PubMed Central

    2013-01-01

Motivation The identification of protein and gene names (PGNs) from the scientific literature requires semantic resources: Terminological and lexical resources deliver the term candidates into PGN tagging solutions, and the gold standard corpora (GSC) train them to identify term parameters and contextual features. Ideally, all three resources, i.e. corpora, lexica and taggers, cover the same domain knowledge and thus support identification of the same types of PGNs and cover all of them. Unfortunately, none of the three serves as a predominant standard, and for this reason it is worth exploring how these three resources comply with each other. We systematically compare different PGN taggers against publicly available corpora and analyze the impact of the included lexical resource on their performance. In particular, we determine the performance gains through false positive filtering, which contributes to the disambiguation of identified PGNs. Results In general, machine learning approaches (ML-Tag) for PGN tagging show higher F1-measure performance against the BioCreative-II and Jnlpba GSCs (exact matching), whereas the lexicon-based approaches (LexTag) in combination with disambiguation methods show better results on FsuPrge and PennBio. The ML-Tag solutions balance precision and recall, whereas the LexTag solutions have different precision and recall profiles at the same F1-measure across all corpora. Higher recall is achieved with larger lexical resources, which also introduce more noise (false positive results). The ML-Tag solutions certainly perform best if the test corpus is from the same GSC as the training corpus. As expected, the false negative errors characterize the test corpora, while the profiles of the false positive mistakes characterize the tagging solutions. LexTag solutions that are based on a large terminological resource in combination with false positive filtering produce better results, which, in addition, provide
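Exact-match evaluation of a tagger against a gold standard corpus, as used above, reduces to set comparison over annotated spans. A minimal sketch; the span tuples are invented for illustration:

```python
def prf1(gold: set, predicted: set) -> tuple[float, float, float]:
    """Exact-match precision, recall, and F1 over (start, end, type) spans."""
    tp = len(gold & predicted)  # spans matching boundaries and type exactly
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Invented character-offset spans: the tagger finds two of three gold
# gene mentions and produces two spurious ones (false positives).
gold = {(0, 4, "GENE"), (10, 15, "GENE"), (20, 26, "GENE")}
pred = {(0, 4, "GENE"), (10, 15, "GENE"), (30, 35, "GENE"), (40, 44, "GENE")}
p, r, f = prf1(gold, pred)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.5 0.667 0.571
```

Removing the two spurious spans (the false positive filtering discussed above) would raise precision to 1.0 while leaving recall unchanged, which is exactly the trade-off the comparison investigates.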

  9. A Corpus-Based EAP Course for NNS Doctoral Students: Moving from Available Specialized Corpora to Self-Compiled Corpora

    ERIC Educational Resources Information Center

    Lee, David; Swales, John

    2006-01-01

    This paper presents a discussion of an experimental, innovative course in corpus-informed EAP for doctoral students. Participants were given access to specialized corpora of academic writing and speaking, instructed in the tools of the trade (web- and PC-based concordancers) and gradually inducted into the skills needed to best exploit the data…

  10. Automatic Construction of English/Chinese Parallel Corpora.

    ERIC Educational Resources Information Center

    Yang, Christopher C.; Li, Kar Wing

    2003-01-01

    Discussion of multilingual corpora and cross-lingual information retrieval focuses on research that constructed English/Chinese parallel corpus automatically from the World Wide Web. Presents an alignment method which is based on dynamic programming to identify one-to-one Chinese and English title pairs and discusses results of experiments…

  11. Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension

    PubMed Central

    Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.

    2016-01-01

The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility for studying retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects, when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974

  12. Gonadotropin binding sites in human ovarian follicles and corpora lutea during the menstrual cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shima, K.; Kitayama, S.; Nakano, R.

Gonadotropin binding sites were localized by autoradiography after incubation of human ovarian sections with ¹²⁵I-labeled gonadotropins. The binding sites for ¹²⁵I-labeled human follicle-stimulating hormone (¹²⁵I-hFSH) were identified in the granulosa cells and in the newly formed corpora lutea. The ¹²⁵I-labeled human luteinizing hormone (¹²⁵I-hLH) binding to the thecal cells increased during follicular maturation, and a dramatic increase was preferentially observed in the granulosa cells of the large preovulatory follicle. In the corpora lutea, the binding of ¹²⁵I-hLH increased from the early luteal phase and decreased toward the late luteal phase. The changes in 3-beta-hydroxysteroid dehydrogenase activity in the corpora lutea corresponded to the ¹²⁵I-hLH binding. Thus, the changes in gonadotropin binding sites in the follicles and corpora lutea during the menstrual cycle may help in some important way to regulate human ovarian function.

  13. Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items

    ERIC Educational Resources Information Center

    Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.

    2013-01-01

    An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…

  14. Morphometric Correlates of the Ovary and Ovulatory Corpora in the Bowhead Whale, Balaena mysticetus.

    PubMed

    Tarpley, Raymond J; Hillmann, Daniel J; George, John C; Zeh, Judith E; Suydam, Robert S

    2016-06-01

Gross morphology and morphometry of the bowhead whale ovary, including ovulatory corpora, were investigated in 50 whales from the Chukchi and Beaufort seas off the coast of Alaska. Using the presence of ovarian corpora to define sexual maturity, 23 sexually immature whales (7.6-14.2 m total body length) and 27 sexually mature whales (14.2-17.7 m total body length) were identified. Ovary pair weights ranged from 0.38 to 2.45 kg and 2.92 to 12.02 kg for sexually immature and sexually mature whales, respectively. In sexually mature whales, corpora lutea (CLs) and/or large corpora albicantia (CAs) projected beyond ovary surfaces. CAs became increasingly less interruptive of the surface contour as they regressed, while remaining identifiable within transverse sections of the ovarian cortex. CLs formed large globular bodies, often with a central lumen, featuring golden parenchymas enfolded within radiating fibrous cords. CAs, sometimes vesicular, featured a dense fibrous core with outward fibrous projections through the former luteal tissue. CLs (never more than one per ovary pair) ranged from 6.7 to 15.0 cm in diameter in 13 whales. Fetuses were confirmed in nine of the 13 whales, with the associated CLs ranging from 8.3 to 15.0 cm in diameter. CLs from four whales where a fetus was not detected ranged from 6.7 to 10.6 cm in diameter. CA totals ranged from 0 to 22 for any single ovary, and from 1 to 41 for an ovary pair. CAs measured from 0.3 to 6.3 cm in diameter, and smaller corpora were more numerous, suggesting an accumulating record of ovulation. Neither the left nor the right ovary dominated in the production of corpora. Anat Rec, 299:769-797, 2016. © 2016 Wiley Periodicals, Inc.

  15. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    ERIC Educational Resources Information Center

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  16. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements

    PubMed Central

    Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.

    2016-01-01

    In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424

  17. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences

    ERIC Educational Resources Information Center

    Koeritzer, Margaret A.; Rogers, Chad S.; Van Engen, Kristin J.; Peelle, Jonathan E.

    2018-01-01

    Purpose: The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. Method: We tested 30 young adults and 30 older adults. Participants heard lists of sentences in…

  18. The Academic Spoken Word List

    ERIC Educational Resources Information Center

    Dang, Thi Ngoc Yen; Coxhead, Averil; Webb, Stuart

    2017-01-01

    The linguistic features of academic spoken English are different from those of academic written English. Therefore, for this study, an Academic Spoken Word List (ASWL) was developed and validated to help second language (L2) learners enhance their comprehension of academic speech in English-medium universities. The ASWL contains 1,741 word…

  19. 31 CFR 358.6 - What is the procedure for converting bearer corpora and detached bearer coupons to book-entry?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... bearer corpora and detached bearer coupons to book-entry? 358.6 Section 358.6 Money and Finance: Treasury... PUBLIC DEBT REGULATIONS GOVERNING BOOK-ENTRY CONVERSION OF BEARER CORPORA AND DETACHED BEARER COUPONS § 358.6 What is the procedure for converting bearer corpora and detached bearer coupons to book-entry...

  20. Spoken Narrative Assessment: A Supplementary Measure of Children's Creativity

    ERIC Educational Resources Information Center

    Wong, Miranda Kit-Yi; So, Wing Chee

    2016-01-01

    This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…

  1. Lexicogrammar in the International Construction Industry: A Corpus-Based Case Study of Japanese-Hong-Kongese On-Site Interactions in English

    ERIC Educational Resources Information Center

    Handford, Michael; Matous, Petr

    2011-01-01

    The purpose of this research is to identify and interpret statistically significant lexicogrammatical items that are used in on-site spoken communication in the international construction industry, initially through comparisons with reference corpora of everyday spoken and business language. Several data sources, including audio and video…

  2. Talker familiarity and spoken word recognition in school-age children*

    PubMed Central

    Levi, Susannah V.

    2014-01-01

    Research with adults has shown that spoken language processing is improved when listeners are familiar with talkers’ voices, known as the familiar talker advantage. The current study explored whether this ability extends to school-age children, who are still acquiring language. Children were familiarized with the voices of three German–English bilingual talkers and were tested on the speech of six bilinguals, three of whom were familiar. Results revealed that children do show improved spoken language processing when they are familiar with the talkers, but this improvement was limited to highly familiar lexical items. This restriction of the familiar talker advantage is attributed to differences in the representation of highly familiar and less familiar lexical items. In addition, children did not exhibit accent-general learning; despite having been exposed to German-accented talkers during training, there was no improvement for novel German-accented talkers. PMID:25159173

  3. Influences of spoken word planning on speech recognition.

    PubMed

    Roelofs, Ardi; Ozdemir, Rebecca; Levelt, Willem J M

    2007-09-01

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.

  4. Using Teacher-Developed Corpora in the CBI Classroom

    ERIC Educational Resources Information Center

    Salsbury, Tom; Crummer, Crista

    2008-01-01

    This article argues for the use of teacher-generated corpora in content-based courses. Using a content course for engineering and architecture students as an example, the article explains how a corpus consisting of texts from textbooks and journal articles helped students learn grammar, vocabulary, and writing. The article explains how the corpus…

  5. Use of English Corpora as a Primary Resource to Teach English to the Bengali Learners

    ERIC Educational Resources Information Center

    Dash, Niladri Sekhar

    2011-01-01

    In this paper we argue in favour of teaching English as a second language to the Bengali learners with direct utilisation of English corpora. The proposed strategy is meant to be assisted with computer and is based on data, information, and examples retrieved from the present-day English corpora developed with various text samples composed by…

  6. The time course of spoken word learning and recognition: studies with artificial lexicons.

    PubMed

    Magnuson, James S; Tanenhaus, Michael K; Aslin, Richard N; Dahan, Delphine

    2003-06-01

    The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.

  7. How Do Raters Judge Spoken Vocabulary?

    ERIC Educational Resources Information Center

    Li, Hui

    2016-01-01

    The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight into future methodological considerations for spoken data. Twenty trainee raters assessed five Chinese…

  8. How Can We Use Corpus Wordlists for Language Learning? Interfaces between Computer Corpora and Expert Intervention

    ERIC Educational Resources Information Center

    Chen, Yu-Hua; Bruncak, Radovan

    2015-01-01

    With the advances in technology, wordlists retrieved from computer corpora have become increasingly popular in recent years. The lexical items in those wordlists are usually selected, according to a set of robust frequency and dispersion criteria, from large corpora of authentic and naturally occurring language. Corpus wordlists are of great value…

  9. Exploratory analysis of real personal emergency response call conversations: considerations for personal emergency response spoken dialogue systems.

    PubMed

    Young, Victoria; Rochon, Elizabeth; Mihailidis, Alex

    2016-11-14

    The purpose of this study was to derive data from real, recorded, personal emergency response call conversations to help improve the artificial intelligence and decision making capability of a spoken dialogue system in a smart personal emergency response system. The main study objectives were to: develop a model of personal emergency response; determine categories for the model's features; identify and calculate measures from call conversations (verbal ability, conversational structure, timing); and examine conversational patterns and relationships between measures and model features applicable for improving the system's ability to automatically identify call model categories and predict a target response. This study was exploratory and used mixed methods. Personal emergency response calls were pre-classified according to call model categories identified qualitatively from response call transcripts. The relationships between six verbal ability measures, three conversational structure measures, two timing measures and three independent factors: caller type, risk level, and speaker type, were examined statistically. Emergency medical response services were the preferred response for the majority of medium and high risk calls for both caller types. Older adult callers mainly requested non-emergency medical service responders during medium risk situations. By measuring the number of spoken words-per-minute and turn-length-in-words for the first spoken utterance of a call, older adult and care provider callers could be identified with moderate accuracy. Average call taker response time was calculated using the number-of-speaker-turns and time-in-seconds measures. Care providers and older adults used different conversational strategies when responding to call takers. The words 'ambulance' and 'paramedic' may hold different latent connotations for different callers. The data derived from the real personal emergency response recordings may help a spoken dialogue system
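
    Two of the conversational measures named in this record, words-per-minute and turn-length-in-words, are straightforward to compute from a transcribed utterance and its duration. The sketch below is illustrative only (the utterance and timing are made up; the function names are not from the study):

```python
# Illustrative versions of two conversational measures: speaking rate
# (words per minute) and turn length (words per speaker turn).
def turn_length_in_words(utterance: str) -> int:
    """Number of word tokens in one speaker turn."""
    return len(utterance.split())

def words_per_minute(n_words: int, duration_seconds: float) -> float:
    """Speaking rate, normalized to a one-minute window."""
    return n_words / (duration_seconds / 60.0)

first_utterance = "Hello I need some help please"   # hypothetical first caller turn
n_words = turn_length_in_words(first_utterance)     # 6 words
rate = words_per_minute(n_words, 4.0)               # 6 words in 4 s -> 90 wpm
```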

  10. Integration of Partial Information within and across Modalities: Contributions to Spoken and Written Sentence Recognition

    ERIC Educational Resources Information Center

    Smith, Kimberly G.; Fogerty, Daniel

    2015-01-01

    Purpose: This study evaluated the extent to which partial spoken or written information facilitates sentence recognition under degraded unimodal and multimodal conditions. Method: Twenty young adults with typical hearing completed sentence recognition tasks in unimodal and multimodal conditions across 3 proportions of preservation. In the unimodal…

  11. Famous talker effects in spoken word recognition.

    PubMed

    Maibauer, Alisa M; Markis, Teresa A; Newell, Jessica; McLennan, Conor T

    2014-01-01

    Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers.

  12. Annotation of Korean Learner Corpora for Particle Error Detection

    ERIC Educational Resources Information Center

    Lee, Sun-Hee; Jang, Seok Bae; Seo, Sang-Kyu

    2009-01-01

    In this study, we focus on particle errors and discuss an annotation scheme for Korean learner corpora that can be used to extract heuristic patterns of particle errors efficiently. We investigate different properties of particle errors so that they can be later used to identify learner errors automatically, and we provide resourceful annotation…

  13. Some Benefits of Corpora as a Language Learning Tool

    ERIC Educational Resources Information Center

    Marjanovic, Tatjana

    2012-01-01

    What this paper is meant to do is share illustrations and insights into how English learners and teachers alike can benefit from using corpora in their work. Arguments are made for their multifaceted possibilities as grammatical, lexical and discourse pools suitable for discovering ways of the language, be they regularities or idiosyncrasies. The…

  14. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  15. Subarashii: Encounters in Japanese Spoken Language Education.

    ERIC Educational Resources Information Center

    Bernstein, Jared; Najmi, Amir; Ehsani, Farzad

    1999-01-01

    Describes Subarashii, an experimental computer-based interactive spoken-language education system designed to understand what a student is saying in Japanese and respond in a meaningful way in spoken Japanese. Implementation of a preprototype version of the Subarashii system identified strengths and limitations of continuous speech recognition…

  16. A linear-RBF multikernel SVM to classify big text corpora.

    PubMed

    Romero, R; Iglesias, E L; Borrajo, L

    2015-01-01

    Support vector machine (SVM) is a powerful technique for classification. However, SVM is not suitable for classification of large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on the SVM and other kernel methods emphasize the need to consider multiple kernels or parameterizations of kernels because they provide greater flexibility. This paper presents a multikernel SVM that manages highly dimensional data, providing an automatic parameterization with low computational cost and improving results over SVMs parameterized by a brute-force search. The model consists of spreading the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while training is significantly faster than for several other SVM classifiers.
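
    The general idea of a linear-RBF multikernel, combining a linear and an RBF kernel into one Gram matrix, can be sketched with scikit-learn's precomputed-kernel support. This is a minimal illustration of the combined-kernel form, not the authors' slice-based parameterization; the mixing weight, gamma, and toy data are all assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

def multikernel(A, B, weight=0.5, gamma=0.1):
    """Weighted sum of a linear and an RBF kernel (one common multikernel form)."""
    return weight * linear_kernel(A, B) + (1.0 - weight) * rbf_kernel(A, B, gamma=gamma)

# Toy feature matrix standing in for a vectorized text corpus (hypothetical data).
rng = np.random.default_rng(0)
X_train = rng.random((40, 10))
y_train = np.array([0] * 20 + [1] * 20)
X_test = rng.random((10, 10))

clf = SVC(kernel="precomputed")
clf.fit(multikernel(X_train, X_train), y_train)          # 40x40 Gram matrix
pred = clf.predict(multikernel(X_test, X_train))          # 10x40: test rows vs training columns
```

    With `kernel="precomputed"`, any positive-combination of valid kernels remains a valid kernel, which is what makes this weighted-sum construction legitimate.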

  17. Primary diffuse large B-cell lymphoma of the corpora cavernosa presented as a perineal mass

    PubMed Central

    Carlos, González-Satué; Ivanna, Valverde Vilamala; Gustavo, Tapia Melendo; Joan, Areal Calama; Javier, Sanchez Macias; Luis, Ibarz Servio

    2012-01-01

    Primary male genital lymphomas may appear rarely in the testis, and exceptionally in the penis and prostate, but there is no previous evidence of a lymphoma arising from the corpora cavernosa. We report the first case in the literature of a primary diffuse large B-cell lymphoma of the corpora cavernosa presenting with lower urinary tract symptoms, perineal pain and a palpable mass. Diagnosis was based on Tru-Cut biopsy, histopathological studies and computed tomographic images. PMID:22919138

  18. Building Spoken Language in the First Plane

    ERIC Educational Resources Information Center

    Bettmann, Joen

    2016-01-01

    Through a strong Montessori orientation to the parameters of spoken language, Joen Bettmann makes the case for "materializing" spoken knowledge using the stimulation of real objects and real situations that promote mature discussion around the sensorial aspect of the prepared environment. She lists specific materials in the classroom…

  19. The Use of General and Specialized Corpora as Reference Sources for Academic English Writing: A Case Study

    ERIC Educational Resources Information Center

    Chang, Ji-Yeon

    2014-01-01

    Corpora have been suggested as valuable sources for teaching English for academic purposes (EAP). Since previous studies have mainly focused on corpus use in classroom settings, more research is needed to reveal how students react to using corpora on their own and what should be provided to help them become autonomous corpus users, considering…

  20. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    PubMed Central

    Li, Yu; Zhang, Linjun; Xia, Zhichao; Yang, Jie; Shu, Hua; Li, Ping

    2017-01-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading. PMID:28690507

  2. Proficiency Level--A Fuzzy Variable in Computer Learner Corpora

    ERIC Educational Resources Information Center

    Carlsen, Cecilie

    2012-01-01

    This article focuses on the proficiency level of texts in Computer Learner Corpora (CLCs). A claim is made that proficiency levels are often poorly defined in CLC design, and that the methods used for level assignment of corpus texts are not always adequate. Proficiency level can therefore, best be described as a fuzzy variable in CLCs,…

  3. A Platform for Multilingual Research in Spoken Dialogue Systems

    DTIC Science & Technology

    2000-08-01

    UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP010384 TITLE: A Platform for Multilingual Research in Spoken Dialogue Systems. Ronald A. Cole, Ben Serridge, John-Paul Hosom, Andrew Cronk, and Ed Kaiser; Center for Spoken Language Understanding, University of Colorado, Boulder; Boulder, CO 80309, USA; Universidad de las Américas, 72820 Santa Catarina Mártir, Puebla, Mexico; Center for Spoken Language Understanding (CSLU

  4. Contribution of Implicit Sequence Learning to Spoken Language Processing: Some Preliminary Findings with Hearing Adults

    ERIC Educational Resources Information Center

    Conway, Christopher M.; Karpicke, Jennifer; Pisoni, David B.

    2007-01-01

    Spoken language consists of a complex, sequentially arrayed signal that contains patterns that can be described in terms of statistical relations among language units. Previous research has suggested that a domain-general ability to learn structured sequential patterns may underlie language acquisition. To test this prediction, we examined the…

  5. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    PubMed

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters. Standardized patients evaluate spoken English proficiency and interpersonal skills in every encounter. Generalizability theory was used to estimate the consistency of spoken English ratings. Validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared by various candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high. The magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.

  6. Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.

    PubMed

    Hunter, Cynthia R; Pisoni, David B

    Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low

  7. Development of brain networks involved in spoken word processing of Mandarin Chinese.

    PubMed

    Cao, Fan; Khalid, Kainat; Lee, Rebecca; Brennan, Christine; Yang, Yanhui; Li, Kuncheng; Bolger, Donald J; Booth, James R

    2011-08-01

    Developmental differences in phonological and orthographic processing of Chinese spoken words were examined in 9-year-olds, 11-year-olds and adults using functional magnetic resonance imaging (fMRI). Rhyming and spelling judgments were made to two-character words presented sequentially in the auditory modality. Developmental comparisons between adults and both groups of children combined showed that age-related changes in activation in visuo-orthographic regions depended on the task. There were developmental increases in the left inferior temporal gyrus and the right inferior occipital gyrus in the spelling task, suggesting more extensive visuo-orthographic processing in a task that required access to these representations. Conversely, there were developmental decreases in activation in the left fusiform gyrus and left middle occipital gyrus in the rhyming task, suggesting that the development of reading is marked by reduced involvement of orthography in a spoken language task that does not require access to these orthographic representations. Developmental decreases may arise from the existence of extensive homophony (auditory words that have multiple spellings) in Chinese. In addition, we found that 11-year-olds and adults showed similar activation in the left superior temporal gyrus across tasks, with both groups showing greater activation than 9-year-olds. This pattern suggests early development of perceptual representations of phonology. In contrast, 11-year-olds and 9-year-olds showed similar activation in the left inferior frontal gyrus across tasks, with both groups showing weaker activation than adults. This pattern suggests late development of controlled retrieval and selection of lexical representations. Altogether, this study suggests differential effects of character acquisition on development of components of the language network in Chinese as compared to previous reports on alphabetic languages.

  8. Syntax and reading comprehension: a meta-analysis of different spoken-syntax assessments.

    PubMed

    Brimo, Danielle; Lund, Emily; Sapp, Alysha

    2018-05-01

    Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments report inconsistent results. To determine whether differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments. Studies that included a group comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusionary criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax construct measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. Those with average and below-average reading comprehension scored significantly differently on spoken-syntax assessments when norm-referenced and researcher-created assessments were compared. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly differently on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax. The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported that there were no significant differences between children with average and below

  9. A Word by Any Other Intonation: FMRI Evidence for Implicit Memory Traces for Pitch Contours of Spoken Words in Adult Brains

    PubMed Central

    Inspector, Michael; Manor, David; Amir, Noam; Kushnir, Tamar; Karni, Avi

    2013-01-01

    Objectives Intonation may serve as a cue for facilitated recognition and processing of spoken words and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. Experimental design Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) All words presented in a set flat monotonous pitch contour (ii) Each word had an arbitrary pitch contour that was set throughout the three repetitions. (iii) Each word had a different arbitrary pitch contour in each of its repetition. Principal findings The repeated presentations of words with a set pitch contour, resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21 22) bilaterally and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. Conclusions Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language associated areas. Thus, the results lend support for the notion that prosody and specifically pitch contour is strongly associated with the memory representation of spoken words. PMID:24391713

  10. Modeling open-set spoken word recognition in postlingually deafened adults after cochlear implantation: some preliminary results with the neighborhood activation model.

    PubMed

    Meyer, Ted A; Frisch, Stefan A; Pisoni, David B; Miyamoto, Richard T; Svirsky, Mario A

    2003-07-01

    Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener's lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener's closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly
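The frequency-weighted decision rule this abstract describes can be sketched in a few lines. The confusion probabilities, toy lexicon, and frequency counts below are illustrative placeholders, not data from the study:

```python
# Hedged sketch of the Neighborhood Activation Model's decision rule:
# a candidate word's activation is its phoneme-match probability
# (from confusion matrices) weighted by its frequency of occurrence,
# normalized over all similar-sounding neighbors. All numbers here
# are toy values, not the study's data.

def phoneme_match_prob(stimulus, candidate, confusion):
    """Product of per-phoneme confusion probabilities: the chance
    that `candidate`'s phonemes are perceived given `stimulus`."""
    p = 1.0
    for s, c in zip(stimulus, candidate):
        p *= confusion.get((s, c), 0.0)
    return p

def nam_response_probs(stimulus, lexicon, freqs, confusion):
    """Frequency-weighted neighborhood probability for each candidate."""
    activation = {
        w: freqs[w] * phoneme_match_prob(stimulus, w, confusion)
        for w in lexicon if len(w) == len(stimulus)
    }
    total = sum(activation.values()) or 1.0
    return {w: a / total for w, a in activation.items()}

# Toy CVC lexicon as phoneme tuples with made-up frequency counts;
# correct phonemes are heard with p = .8, confusions with p = .1.
confusion = {(a, b): (0.8 if a == b else 0.1)
             for a in "kbtdaeiou" for b in "kbtdaeiou"}
lexicon = [("k", "a", "t"), ("b", "a", "t"), ("k", "a", "d")]
freqs = {("k", "a", "t"): 50, ("b", "a", "t"): 10, ("k", "a", "d"): 5}

probs = nam_response_probs(("k", "a", "t"), lexicon, freqs, confusion)
best = max(probs, key=probs.get)
```

The toy run reproduces the abstract's qualitative claim: a common word that matches the stimulus well dominates the response distribution over rarer or poorer-matching neighbors.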

  11. Modeling Open-Set Spoken Word Recognition in Postlingually Deafened Adults after Cochlear Implantation: Some Preliminary Results with the Neighborhood Activation Model

    PubMed Central

    Meyer, Ted A.; Frisch, Stefan A.; Pisoni, David B.; Miyamoto, Richard T.; Svirsky, Mario A.

    2012-01-01

    Hypotheses: Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? Background: The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener’s lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener’s closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Methods: Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. Results: The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test

  12. Circumferential Peyronie's disease involving both the corpora cavernosa.

    PubMed

    Narita, T; Kudo, H; Matsumoto, K

    1995-05-01

    An extraordinary form of Peyronie's disease is reported. The patient was a 52-year-old male who died of a malignant thymoma with multiple bone metastases, extensive pleural carcinomatosis of the left lung and some metastatic nodules in the liver and the mesenterium. At autopsy, the proximal and middle portions of the penis were very hard. Macroscopically, the entire tunica albuginea of both corpora cavernosa was markedly thickened (2-4 mm) and calcified. Microscopically, the tunica albuginea showed extensive hyaline degeneration, calcification and ossifying foci with osteoblasts and osteoclasts. Inflammatory cells were frequently found beneath the thickened tunica albuginea. In the corpus cavernosum, the cavernous arteries showed marked intimal thickening and medial muscular degeneration with a few inflammatory cells. Smooth muscle of the stroma was extensively atrophic and degenerative, and some of it was infiltrated by a few inflammatory cells. In the corpus spongiosum, the tunica albuginea was not thickened, but the smooth muscle in the stroma was atrophic and degenerative and a few inflammatory cells were also found. Surprisingly, there was no Littré's gland around the urethra. In Peyronie's disease, the dorsal part of the penis is usually involved, with lateral or ventral sites involved less frequently. As far as the authors know, circumferential involvement of both corpora cavernosa has not been reported until now.

  13. Direction Asymmetries in Spoken and Signed Language Interpreting

    ERIC Educational Resources Information Center

    Nicodemus, Brenda; Emmorey, Karen

    2013-01-01

    Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…

  14. On-Line Orthographic Influences on Spoken Language in a Semantic Task

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Perre, Laetitia; Dufau, Stephane; Ziegler, Johannes C.

    2009-01-01

    Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to meta-phonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a…

  15. Presentation video retrieval using automatically recovered slide and spoken text

    NASA Astrophysics Data System (ADS)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.

  16. Using Corpora in EFL Classrooms: The Case Study of IELTS Preparation

    ERIC Educational Resources Information Center

    Smirnova, Elizaveta A.

    2017-01-01

    This article describes the gathered experience in using corpora in an IELTS preparation course. The practice demonstrates an attempt to reduce negative washback effects occurring when preparation courses just concentrate on the test format neglecting the importance of development of learners' language skills and general study skills. Some…

  17. Fast mapping semantic features: performance of adults with normal language, history of disorders of spoken and written language, and attention deficit hyperactivity disorder on a word-learning task.

    PubMed

    Alt, Mary; Gutmann, Michelle L

    2009-01-01

    This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Sixty-eight adults were required to associate a novel object with a novel label, and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time). The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group. Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles like typical peers, adults with hDSWL may use different processing strategies than their peers. Readers will be able to: (1) recognize the influence of a dual disability (hDSWL and ADHD) on word learning outcomes; (2) identify factors that may contribute to word learning in adults in terms of (a) the nature of the words to be learned and (b) the language processing of the learner.

  18. Instructional Benefits of Spoken Words: A Review of Cognitive Load Factors

    ERIC Educational Resources Information Center

    Kalyuga, Slava

    2012-01-01

    Spoken words have always been an important component of traditional instruction. With the development of modern educational technology tools, spoken text more often replaces or supplements written or on-screen textual representations. However, there could be a cognitive load cost involved in this trend, as spoken words can have both benefits and…

  19. Setting the Tone: An ERP Investigation of the Influences of Phonological Similarity on Spoken Word Recognition in Mandarin Chinese

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2012-01-01

    We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event related potentials were recorded while adult native speakers of Mandarin ("N" = 19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following…

  20. Modeling Spoken Word Recognition Performance by Pediatric Cochlear Implant Users using Feature Identification

    PubMed Central

    Frisch, Stefan A.; Pisoni, David B.

    2012-01-01

    Objective: Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design: A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results: Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Conclusions: Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with

  1. Orthographic effects in spoken word recognition: Evidence from Chinese.

    PubMed

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  2. A word by any other intonation: fMRI evidence for implicit memory traces for pitch contours of spoken words in adult brains.

    PubMed

    Inspector, Michael; Manor, David; Amir, Noam; Kushnir, Tamar; Karni, Avi

    2013-01-01

    Intonation may serve as a cue for facilitated recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented with the same flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant throughout the three repetitions; (iii) each word had a different arbitrary pitch contour in each of its repetitions. The repeated presentation of words with a set pitch contour resulted in robust behavioral priming effects as well as significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21, 22) bilaterally and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. Intonation patterns are thus retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. The results therefore lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

  3. Is the time course of lexical activation and competition in spoken word recognition affected by adult aging? An event-related potential (ERP) study.

    PubMed

    Hunter, Cynthia R

    2016-10-01

    Adult aging is associated with decreased accuracy for recognizing speech, particularly in noisy backgrounds and for high neighborhood density words, which sound similar to many other words. In the current study, the time course of neighborhood density effects in young and older adults was compared using event-related potentials (ERP) and behavioral responses in a lexical decision task for spoken words and nonwords presented either in quiet or in noise. Target items sounded similar either to many or to few other words (neighborhood density) but were balanced for the frequency of their component sounds (phonotactic probability). Behavioral effects of density were similar across age groups, but the event-related potential effects of density differed as a function of age group. For young adults, density modulated the amplitude of both the N400 and the later P300 or late positive complex (LPC). For older adults, density modulated only the amplitude of the P300/LPC. Thus, spreading activation to the semantics of lexical neighbors, indexed by the N400 density effect, appears to be reduced or delayed in adult aging. In contrast, effects of density on P300/LPC amplitude were present in both age groups, perhaps reflecting attentional allocation to items that resemble few words in the mental lexicon. The results constitute the first evidence that ERP effects of neighborhood density are affected by adult aging. The age difference may reflect either a unitary density effect that is delayed by approximately 150ms in older adults, or multiple processes that are differentially affected by aging. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Redundancy in electronic health record corpora: analysis, impact on text mining performance and mitigation strategies.

    PubMed

    Cohen, Raphael; Elhadad, Michael; Elhadad, Noémie

    2013-01-16

    The increasing availability of Electronic Health Record (EHR) data and specifically free-text patient notes presents opportunities for phenotype extraction. Text-mining methods in particular can help disease modeling by mapping named-entity mentions to terminologies and clustering semantically related terms. EHR corpora, however, exhibit specific statistical and linguistic characteristics when compared with corpora in the biomedical literature domain. We focus on copy-and-paste redundancy: clinicians typically copy and paste information from previous notes when documenting a current patient encounter. Thus, within a longitudinal patient record, one expects to observe heavy redundancy. In this paper, we ask three research questions: (i) How can redundancy be quantified in large-scale text corpora? (ii) Conventional wisdom is that larger corpora yield better results in text mining. But how does the observed EHR redundancy affect text mining? Does such redundancy introduce a bias that distorts learned models? Or does the redundancy introduce benefits by highlighting stable and important subsets of the corpus? (iii) How can one mitigate the impact of redundancy on text mining? We analyze a large-scale EHR corpus and quantify redundancy both in terms of word and semantic concept repetition. We observe redundancy levels of about 30% and non-standard distribution of both words and concepts. We measure the impact of redundancy on two standard text-mining applications: collocation identification and topic modeling. We compare the results of these methods on synthetic data with controlled levels of redundancy and observe significant performance variation. Finally, we compare two mitigation strategies to avoid redundancy-induced bias: (i) a baseline strategy, keeping only the last note for each patient in the corpus; (ii) removing redundant notes with an efficient fingerprinting-based algorithm. For text mining, preprocessing the EHR corpus with fingerprinting yields
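The fingerprinting-based mitigation the abstract mentions can be illustrated with a toy note-deduplication pass. The shingle size, threshold, and notes below are invented for illustration; this is a sketch of the general technique, not the paper's algorithm or data:

```python
# Illustrative sketch of fingerprint-based redundancy detection
# between clinical notes: hash overlapping word n-grams ("shingles")
# and drop a note when most of its fingerprints already appeared
# earlier in the patient's record (i.e., likely copy-and-paste).
# The n-gram size and threshold are assumptions for this toy example.

def fingerprints(text, n=3):
    """Set of hashed word n-grams for a note."""
    words = text.lower().split()
    return {hash(tuple(words[i:i + n])) for i in range(len(words) - n + 1)}

def filter_redundant(notes, n=3, threshold=0.6):
    """Keep notes in chronological order, dropping those whose
    fingerprints mostly repeat earlier notes."""
    seen, kept = set(), []
    for note in notes:
        fps = fingerprints(note, n)
        if fps and len(fps & seen) / len(fps) >= threshold:
            continue  # mostly duplicated from earlier notes
        kept.append(note)
        seen |= fps
    return kept

notes = [
    "patient reports chest pain on exertion since monday",
    "patient reports chest pain on exertion since monday now with dyspnea",
    "followup visit for diabetes management labs reviewed",
]
kept = filter_redundant(notes)  # drops the copy-pasted second note
```

The second note is mostly a verbatim copy of the first, so its fingerprint overlap exceeds the threshold and it is filtered out, leaving the first and third notes.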

  5. Exploring Learner Language through Corpora: Comparing and Interpreting Corpus Frequency Information

    ERIC Educational Resources Information Center

    Gablasova, Dana; Brezina, Vaclav; McEnery, Tony

    2017-01-01

    This article contributes to the debate about the appropriate use of corpus data in language learning research. It focuses on frequencies of linguistic features in language use and their comparison across corpora. The majority of corpus-based second language acquisition studies employ a comparative design in which either one or more second language…

  6. Spoken Language and Mathematics.

    ERIC Educational Resources Information Center

    Raiker, Andrea

    2002-01-01

    States teachers/learners use spoken language in a three part mathematics lesson advocated by the British National Numeracy Strategy. Recognizes language's importance by emphasizing correct use of mathematical vocabulary in raising standards. Finds pupils and teachers appear to ascribe different meanings to scientific words because of their…

  7. Teaching the Spoken Language.

    ERIC Educational Resources Information Center

    Brown, Gillian

    1981-01-01

    Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…

  8. Combining Language Corpora with Experimental and Computational Approaches for Language Acquisition Research

    ERIC Educational Resources Information Center

    Monaghan, Padraic; Rowland, Caroline F.

    2017-01-01

    Historically, first language acquisition research was a painstaking process of observation, requiring the laborious hand coding of children's linguistic productions, followed by the generation of abstract theoretical proposals for how the developmental process unfolds. Recently, the ability to collect large-scale corpora of children's language…

  9. Spoken language development in children following cochlear implantation.

    PubMed

    Niparko, John K; Tobey, Emily A; Thal, Donna J; Eisenberg, Laurie S; Wang, Nae-Yuh; Quittner, Alexandra L; Fink, Nancy E

    2010-04-21

    Cochlear implantation is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe to profound sensorineural hearing loss (SNHL). To prospectively assess spoken language acquisition following cochlear implantation in young children. Prospective, longitudinal, and multidimensional assessment of spoken language development over a 3-year period in children who underwent cochlear implantation before 5 years of age (n = 188) from 6 US centers and hearing children of similar ages (n = 97) from 2 preschools recruited between November 2002 and December 2004. Follow-up completed between November 2005 and May 2008. Performance on measures of spoken language comprehension and expression (Reynell Developmental Language Scales). Children undergoing cochlear implantation showed greater improvement in spoken language performance (10.4; 95% confidence interval [CI], 9.6-11.2 points per year in comprehension; 8.4; 95% CI, 7.8-9.0 in expression) than would be predicted by their preimplantation baseline scores (5.4; 95% CI, 4.1-6.7, comprehension; 5.8; 95% CI, 4.6-7.0, expression), although mean scores were not restored to age-appropriate levels after 3 years. Younger age at cochlear implantation was associated with significantly steeper rate increases in comprehension (1.1; 95% CI, 0.5-1.7 points per year younger) and expression (1.0; 95% CI, 0.6-1.5 points per year younger). Similarly, each 1-year shorter history of hearing deficit was associated with steeper rate increases in comprehension (0.8; 95% CI, 0.2-1.2 points per year shorter) and expression (0.6; 95% CI, 0.2-1.0 points per year shorter). In multivariable analyses, greater residual hearing prior to cochlear implantation, higher ratings of parent-child interactions, and higher socioeconomic status were associated with greater rates of improvement in comprehension and expression. The use of cochlear implants in young children was

  10. Spoken Language Development in Children Following Cochlear Implantation

    PubMed Central

    Niparko, John K.; Tobey, Emily A.; Thal, Donna J.; Eisenberg, Laurie S.; Wang, Nae-Yuh; Quittner, Alexandra L.; Fink, Nancy E.

    2010-01-01

    Context: Cochlear implantation (CI) is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe-to-profound sensorineural hearing loss (SNHL). Objective: To prospectively assess spoken language acquisition following CI in young children with adjustment of co-variates. Design, Setting, and Participants: Prospective, longitudinal, and multidimensional assessment of spoken language growth over a 3-year period following CI. Prospective cohort study of children who underwent CI before 5 years of age (n=188) from 6 US centers and hearing children of similar ages (n=97) from 2 preschools recruited between November 2002 and December 2004. Follow-up completed between November 2005 and May 2008. Main Outcome Measures: Performance on measures of spoken language comprehension and expression. Results: Children undergoing CI showed greater growth in spoken language performance (10.4; [95% confidence interval: 9.6–11.2] points/year in comprehension; 8.4; [7.8–9.0] in expression) than would be predicted by their pre-CI baseline scores (5.4; [4.1–6.7] comprehension; 5.8; [4.6–7.0] expression). Although mean scores were not restored to age-appropriate levels after 3 years, significantly greater annual rates of language acquisition were observed in children who were younger at CI (1.1; [0.5–1.7] points in comprehension per year younger; 1.0; [0.6–1.5] in expression), and in children with shorter histories of hearing deficit (0.8; [0.2–1.2] points in comprehension per year shorter; 0.6; [0.2–1.0] for expression). In multivariable analyses, greater residual hearing prior to CI, higher ratings of parent-child interactions, and higher SES were associated with greater rates of growth in comprehension and expression. Conclusions: The use of cochlear implants in young children was associated with better spoken language learning than would be predicted from their pre-implantation scores. However

  11. [Single and combining effects of Calculus Bovis and zolpidem on inhibitive neurotransmitter of rat striatum corpora].

    PubMed

    Liu, Ping; He, Xinrong; Guo, Mei

    2010-04-01

    To investigate the correlation between single or combined administration of Calculus Bovis or zolpidem and changes in inhibitive neurotransmitters in rat striatum corpora. Sampling from rat striatum corpora was carried out through microdialysis. The content of two inhibitive neurotransmitters in rat corpus striatum, glycine (Gly) and gamma-aminobutyric acid (GABA), was determined by HPLC, which involved pre-column derivatization with o-phthalaldehyde, reversed-phase gradient elution and fluorescence detection. GABA content of rat striatum corpora in the Calculus Bovis group was significantly increased compared with the saline group (P < 0.01). GABA content of the zolpidem group and the Calculus Bovis plus zolpidem group was also markedly increased compared with the saline group (P < 0.05). GABA content of the Calculus Bovis group was higher than that of the combination group (P < 0.05). GABA content of the zolpidem group was not significantly different from that of the combination group. Gly content of the Calculus Bovis and zolpidem groups was markedly increased compared with the saline and combination groups (P < 0.05). Contents of the two inhibitive neurotransmitters in rat striatum corpora were all significantly increased in the Calculus Bovis, zolpidem and combination groups. The magnitude of increase was lower in the combination group than in the Calculus Bovis and zolpidem groups, suggesting that Calculus Bovis promotes cerebral inhibition more powerfully than zolpidem. The increase in the two inhibitive neurotransmitters did not show a reinforcing effect in the combination group, suggesting that Calculus Bovis and zolpidem may compete for the same receptors. Therefore, combining Calculus Bovis-containing drugs with zolpidem has no clinical significance, and Calculus Bovis should not be used as an aperture-opening drug for resuscitation therapy.

  12. Co-occurrence frequency evaluated with large language corpora boosts semantic priming effects.

    PubMed

    Brunellière, Angèle; Perre, Laetitia; Tran, ThiMai; Bonnotte, Isabelle

    2017-09-01

    In recent decades, many computational techniques have been developed to analyse the contextual usage of words in large language corpora. The present study examined whether the co-occurrence frequency obtained from large language corpora might boost purely semantic priming effects. Two experiments were conducted: one with conscious semantic priming, the other with subliminal semantic priming. Both experiments contrasted three semantic priming contexts: an unrelated priming context and two related priming contexts with word pairs that are semantically related and that co-occur either frequently or infrequently. In the conscious priming presentation (166-ms stimulus-onset asynchrony, SOA), a semantic priming effect was recorded in both related priming contexts, which was greater with higher co-occurrence frequency. In the subliminal priming presentation (66-ms SOA), no significant priming effect was shown, regardless of the related priming context. These results show that co-occurrence frequency boosts pure semantic priming effects and are discussed with reference to models of semantic network.
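Co-occurrence frequency of the kind the abstract evaluates is typically tallied from a corpus with a sliding context window. A minimal sketch follows; the three-sentence corpus and window size are assumptions for illustration, not the study's materials:

```python
# Toy co-occurrence counter: every unordered word pair appearing
# within a fixed-size window of the same sentence is tallied once.
from collections import Counter

def cooccurrence_counts(sentences, window=5):
    """Count unordered word pairs within `window` tokens of each other."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.lower().split()
        for i, w in enumerate(tokens):
            for v in tokens[i + 1:i + window]:
                counts[tuple(sorted((w, v)))] += 1
    return counts

corpus = [
    "the doctor examined the patient",
    "the patient thanked the doctor",
    "a nurse helped the doctor",
]
counts = cooccurrence_counts(corpus)
# ("doctor", "patient") co-occurs more often than ("nurse", "patient")
```

Normalizing such counts by corpus size (or converting them to association scores such as pointwise mutual information) yields the kind of co-occurrence frequency measure that can be contrasted with pure semantic relatedness.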

  13. The influence of orthographic experience on the development of phonological preparation in spoken word production.

    PubMed

    Li, Chuchu; Wang, Min

    2017-08-01

    Three sets of experiments using the picture naming tasks with the form preparation paradigm investigated the influence of orthographic experience on the development of phonological preparation unit in spoken word production in native Mandarin-speaking children. Participants included kindergarten children who have not received formal literacy instruction, Grade 1 children who are comparatively more exposed to the alphabetic pinyin system and have very limited Chinese character knowledge, Grades 2 and 4 children who have better character knowledge and more exposure to characters, and skilled adult readers who have the most advanced character knowledge and most exposure to characters. Only Grade 1 children showed the form preparation effect in the same initial consonant condition (i.e., when a list of target words shared the initial consonant). Both Grade 4 children and adults showed the preparation effect when the initial syllable (but not tone) among target words was shared. Kindergartners and Grade 2 children only showed the preparation effect when the initial syllable including tonal information was shared. These developmental changes in phonological preparation could be interpreted as a joint function of the modification of phonological representation and attentional shift. Extensive pinyin experience encourages speakers to attend to and select onset phoneme in phonological preparation, whereas extensive character experience encourages speakers to prepare spoken words in syllables.

  14. Deleterious effects of progestagen treatment in VEGF expression in corpora lutea of pregnant ewes.

    PubMed

    Letelier, C A; Sanchez, M A; Garcia-Fernandez, R A; Sanchez, B; Garcia-Palencia, P; Gonzalez-Bulnes, A; Flores, J M

    2011-06-01

    The aim of the current study was to determine the possible effects of progestagen oestrous synchronization on vascular endothelial growth factor (VEGF) expression during sheep luteogenesis and the peri-implantation period, and its relationship with luteal function. At days 9, 11, 13, 15, 17 and 21 of pregnancy, the ovaries from 30 progestagen-treated ewes and 30 ewes cycling after cloprostenol injection were evaluated by ultrasonography and, thereafter, collected and processed for immunohistochemical evaluation of VEGF; blood samples were drawn for evaluating plasma progesterone. The progestagen-treated group showed smaller corpora lutea and lower progesterone secretion than the cloprostenol-treated group. The expression of VEGF in the luteal cells increased with time in the cloprostenol group, but not in the progestagen-treated group, which even showed a decrease between days 11 and 13. In progestagen-treated sheep, VEGF expression in granulosa-derived parenchymal lobule capillaries was correlated with the size of the luteal tissue: larger corpora lutea had higher expression and tended to have higher progesterone secretion. In conclusion, the current study indicates the existence of deleterious effects of exogenous progestagen treatment on progesterone secretion from induced corpora lutea, which correlate with alterations in the expression of VEGF in the luteal tissue and, presumably, in the processes of neoangiogenesis and luteogenesis. © 2010 Blackwell Verlag GmbH.

  15. TNF-alpha infusion impairs corpora cavernosa reactivity.

    PubMed

    Carneiro, Fernando S; Zemse, Saiprazad; Giachini, Fernanda R C; Carneiro, Zidonia N; Lima, Victor V; Webb, R Clinton; Tostes, Rita C

    2009-03-01

    Erectile dysfunction (ED), as well as cardiovascular diseases (CVDs), is associated with endothelial dysfunction and increased levels of proinflammatory cytokines, such as tumor necrosis factor-alpha (TNF-alpha). We hypothesized that increased TNF-alpha levels impair cavernosal function. In vitro organ bath studies were used to measure cavernosal reactivity in mice infused with vehicle or TNF-alpha (220 ng/kg/min) for 14 days. Gene expression of nitric oxide synthase isoforms was evaluated by real-time polymerase chain reaction. Corpora cavernosa from TNF-alpha-infused mice exhibited decreased nitric oxide (NO)-dependent relaxation, which was associated with decreased endothelial nitric oxide synthase (eNOS) and neuronal nitric oxide synthase (nNOS) cavernosal expression. Cavernosal strips from the TNF-alpha-infused mice displayed decreased nonadrenergic-noncholinergic (NANC)-induced relaxation (59.4 +/- 6.2 vs. control: 76.2 +/- 4.7; 16 Hz) compared with the control animals. These responses were associated with decreased gene expression of eNOS and nNOS (P < 0.05). Sympathetic-mediated, as well as phenylephrine (PE)-induced, contractile responses (PE-induced contraction; 1.32 +/- 0.06 vs. control: 0.9 +/- 0.09, mN) were increased in cavernosal strips from TNF-alpha-infused mice. Additionally, infusion of TNF-alpha increased cavernosal responses to endothelin-1 and endothelin receptor A subtype (ET(A)) receptor expression (P < 0.05) and slightly decreased tumor necrosis factor-alpha receptor 1 (TNFR1) expression (P = 0.063). Corpora cavernosa from TNF-alpha-infused mice display increased contractile responses and decreased NANC nerve-mediated relaxation associated with decreased eNOS and nNOS gene expression. These changes may trigger ED and indicate that TNF-alpha plays a detrimental role in erectile function. Blockade of TNF-alpha actions may represent an alternative therapeutic approach for ED, especially in pathologic conditions associated with increased levels of TNF-alpha.

  16. Spoken words can make the invisible visible-Testing the involvement of low-level visual representations in spoken word processing.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-03-01

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" → picture of a bottle) versus incongruent ("bottle" → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Attentional Capture of Objects Referred to by Spoken Language

    ERIC Educational Resources Information Center

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  18. Enhancing the Performance of Female Students in Spoken English

    ERIC Educational Resources Information Center

    Inegbeboh, Bridget O.

    2009-01-01

    Female students have been discriminated against right from birth in their various cultures and this affects the way they perform in Spoken English class, and how they rate themselves. They have been conditioned to believe that the male gender is superior to the female gender, so they leave the male students to excel in spoken English, while they…

  19. Automatic processing of spoken dialogue in the home hemodialysis domain.

    PubMed

    Lacson, Ronilda; Barzilay, Regina

    2005-01-01

    Spoken medical dialogue is a valuable source of information, and it forms a foundation for diagnosis, prevention and therapeutic management. However, understanding even a perfect transcript of spoken dialogue is challenging for humans because of the lack of structure and the verbosity of dialogues. This work presents a first step towards automatic analysis of spoken medical dialogue. The backbone of our approach is an abstraction of a dialogue into a sequence of semantic categories. This abstraction uncovers structure in informal, verbose conversation between a caregiver and a patient, thereby facilitating automatic processing of dialogue content. Our method induces this structure based on a range of linguistic and contextual features that are integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). This work demonstrates the feasibility of automatically processing spoken medical dialogue.
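
    For context, the 33% majority baseline quoted above is simply the accuracy obtained by always predicting the most frequent semantic category. A minimal sketch (the category labels and counts below are hypothetical, not taken from the paper):

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy obtained by always predicting the most frequent class."""
    top_count = Counter(labels).most_common(1)[0][1]
    return top_count / len(labels)

# Hypothetical semantic-category labels for twelve dialogue turns
turns = ["clinical"] * 4 + ["technical"] * 3 + ["scheduling"] * 3 + ["social"] * 2
baseline = majority_baseline_accuracy(turns)
```

    Any supervised classifier must beat this number to demonstrate that it has learned structure beyond the label distribution.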

  20. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  1. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences.

    PubMed

    Koeritzer, Margaret A; Rogers, Chad S; Van Engen, Kristin J; Peelle, Jonathan E

    2018-03-15

    The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. https://doi.org/10.23641/asha.5848059.
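
    For readers unfamiliar with d', the recognition-memory index used above comes from signal detection theory: d' is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch (the trial counts are invented, and the log-linear correction shown is one common convention, not necessarily the one used in this study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction keeps rates away from 0 and 1,
    # where the inverse normal CDF would be infinite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: 45 hits, 5 misses, 10 false alarms, 40 correct rejections
sensitivity = d_prime(45, 5, 10, 40)
```

    Higher d' means better discrimination of previously heard sentences from new ones, independent of response bias.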

  2. Application of Learner Corpora to Second Language Learning and Teaching: An Overview

    ERIC Educational Resources Information Center

    Xu, Qi

    2016-01-01

    The paper gives an overview of learner corpora and their application to second language learning and teaching. It is proposed that there are four core components in learner corpus research, namely, corpus linguistics expertise, a good background in linguistic theory, knowledge of SLA theory, and a good understanding of foreign language teaching…

  3. "Now We Have Spoken."

    ERIC Educational Resources Information Center

    Zimmer, Patricia Moore

    2001-01-01

    Describes the author's experiences directing a play translated and acted in Korean. Notes that she had to get familiar with the sound of the language spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…

  4. Training L2 Writers to Reference Corpora as a Self-Correction Tool

    ERIC Educational Resources Information Center

    Quinn, Cynthia

    2015-01-01

    Corpora have the potential to support the L2 writing process at the discourse level in contrast to the isolated dictionary entries that many intermediate writers rely on. To take advantage of this resource, learners need to be trained, which involves practising corpus research and referencing skills as well as learning to make data-based…

  5. Using the Longman Mini-concordancer on Tagged and Parsed Corpora, with Special Reference to Their Use as an Aid to Grammar Learning.

    ERIC Educational Resources Information Center

    Qiao, Hong Liang; Sussex, Roland

    1996-01-01

    Presents methods for using the Longman Mini-Concordancer on tagged and parsed corpora rather than plain text corpora. The article discusses several aspects with models to be applied in the classroom as an aid to grammar learning. This paper suggests exercises suitable for teaching English to both native and nonnative speakers. (13 references)…

  6. Luteinizing hormone receptors in human ovarian follicles and corpora lutea during the menstrual cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamoto, M.; Nakano, R.; Iwasaki, M.

    The binding of ¹²⁵I-labeled human luteinizing hormone (hLH) to the 2000-g fraction of human ovarian follicles and corpora lutea during the entire menstrual cycle was examined. Specific high-affinity, low-capacity receptors for hLH were demonstrated in the 2000-g fraction of both follicles and corpora lutea. Specific binding of ¹²⁵I-labeled hLH to follicular tissue increased from the early follicular phase to the ovulatory phase. Specific binding of ¹²⁵I-labeled hLH to luteal tissue increased from the early luteal phase to the midluteal phase and decreased towards the late luteal phase. The results of the present study indicate that the increase and decrease in receptors for hLH during the menstrual cycle might play an important role in the regulation of the ovarian cycle.

  7. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults

    PubMed Central

    Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task.

  8. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults.

    PubMed

    Bernstein, Lynne E; Eberhardt, Silvio P; Auer, Edward T

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task.

  9. Children's Use of Evaluative Devices in Spoken and Written Narratives

    ERIC Educational Resources Information Center

    Drijbooms, Elise; Groen, Margriet A.; Verhoeven, Ludo

    2017-01-01

    This study investigated the development of evaluation in narratives from middle to late childhood, within the context of differentiating between spoken and written modalities. Two parallel forms of a picture story were used to elicit spoken and written narratives from fourth- and sixth-graders. It was expected that, in addition to an increase of…

  10. Inducing Multilingual Text Analysis Tools via Robust Projection across Aligned Corpora

    DTIC Science & Technology

    2001-01-01

    monolingual dictionary-derived list of canonical roots would resolve ambiguity regarding which is the appropriate target. Many of the errors are...system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity...corpora has tended to focus on their use in translation model training for MT rather than on monolingual applications. One exception is bilingual parsing

  11. Spoken Grammar: Where Are We and Where Are We Going?

    ERIC Educational Resources Information Center

    Carter, Ronald; McCarthy, Michael

    2017-01-01

    This article synthesises progress made in the description of spoken (especially conversational) grammar over the 20 years since the authors published a paper in this journal arguing for a re-thinking of grammatical description and pedagogy based on spoken corpus evidence. We begin with a glance back at the 16th century and the teaching of Latin…

  12. Terminating Devices in Spoken French.

    ERIC Educational Resources Information Center

    Andrews, Barry J.

    1989-01-01

    A study examines the way in which one group of discourse connectors, terminators, function in contemporary spoken French. Three types of terminators, elements used at the end of an utterance or section to indicate its completion, are investigated, including utterance terminators, interrogative tags, and terminal tags. (Author/MSE)

  13. Spoken Language Processing Model: Bridging Auditory and Language Processing to Guide Assessment and Intervention

    ERIC Educational Resources Information Center

    Medwetsky, Larry

    2011-01-01

    Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…

  14. Spoken language outcomes after hemispherectomy: factoring in etiology.

    PubMed

    Curtiss, S; de Bode, S; Mathern, G W

    2001-12-01

    We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p =.0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p =.0006); right-sided resections led to higher SLRs only for the acquired group (p =.0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p =.0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology. Copyright 2001 Elsevier Science.

  15. Scaling laws and model of words organization in spoken and written language

    NASA Astrophysics Data System (ADS)

    Bian, Chunhua; Lin, Ruokuang; Zhang, Xiaoyu; Ma, Qianli D. Y.; Ivanov, Plamen Ch.

    2016-01-01

    A broad range of complex physical and biological systems exhibits scaling laws. The human language is a complex system of words organization. Studies of written texts have revealed intriguing scaling laws that characterize the frequency of words occurrence, rank of words, and growth in the number of distinct words with text length. While studies have predominantly focused on the language system in its written form, such as books, little attention is given to the structure of spoken language. Here we investigate a database of spoken language transcripts and written texts, and we uncover that words organization in both spoken language and written texts exhibits scaling laws, although with different crossover regimes and scaling exponents. We propose a model that provides insight into words organization in spoken language and written texts, and successfully accounts for all scaling laws empirically observed in both language forms.
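
    The rank-frequency (Zipf) and vocabulary-growth (Heaps) regularities described above can be estimated from any transcript in a few lines of code. This sketch fits the Zipf exponent by least squares on log-log data; it is a simplification that ignores the crossover regimes analyzed in the paper:

```python
from collections import Counter
import math

def zipf_heaps_stats(tokens):
    """Estimate the Zipf (rank-frequency) exponent and trace Heaps-style
    vocabulary growth for a list of word tokens."""
    counts = Counter(tokens)
    freqs = sorted(counts.values(), reverse=True)
    # Zipf's law predicts log(freq) falls roughly linearly in log(rank);
    # the least-squares slope approximates the (negative) exponent.
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    # Heaps' law: the number of distinct words V(N) grows as ~K * N**beta
    growth, seen = [], set()
    for i, tok in enumerate(tokens, 1):
        seen.add(tok)
        growth.append((i, len(seen)))
    return slope, growth

slope, growth = zipf_heaps_stats("the cat sat on the mat and the cat ran".split())
```

    Running the same estimate on a spoken transcript and a written text of comparable length is the kind of comparison the study reports, with the two forms yielding different exponents.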

  16. The Application of Corpora in Teaching Grammar: The Case of English Relative Clause

    ERIC Educational Resources Information Center

    Sahragard, Rahman; Kushki, Ali; Ansaripour, Ehsan

    2013-01-01

    The study was conducted to see if the provision of implementing corpora on English relative clauses would prove useful for Iranian EFL learners or not. Two writing classes were held for the participants of intermediate level. A record of 15 writing samples produced by each participant was kept in the form of a portfolio. Participants' portfolios…

  17. Rapid modulation of spoken word recognition by visual primes.

    PubMed

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  18. Rapid modulation of spoken word recognition by visual primes

    PubMed Central

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.

    2015-01-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics. PMID:26516296

  19. The Listening and Spoken Language Data Repository: Design and Project Overview

    ERIC Educational Resources Information Center

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  20. Infant perceptual development for faces and spoken words: An integrated approach

    PubMed Central

    Watson, Tamara L; Robbins, Rachel A; Best, Catherine T

    2014-01-01

    There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626

  1. Automatic extraction of property norm-like data from large text corpora.

    PubMed

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties.
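
    The reweighting step described above, a linear combination of frequency and statistical metrics over candidate triples, can be sketched as follows. The metric names, weights, and example triples here are illustrative inventions, not values from the paper:

```python
def rescore_triples(triples, weights):
    """Re-rank candidate (concept, relation, feature) triples by a weighted
    linear combination of per-triple scores (e.g., frequency, association)."""
    scored = []
    for triple, metrics in triples.items():
        score = sum(weights[name] * value for name, value in metrics.items())
        scored.append((score, triple))
    return [t for _, t in sorted(scored, reverse=True)]

# Hypothetical candidates with normalized frequency and PMI-style scores
candidates = {
    ("car", "require", "petrol"): {"freq": 0.9, "pmi": 0.7},
    ("car", "be", "fast"):        {"freq": 0.8, "pmi": 0.5},
    ("car", "cause", "banana"):   {"freq": 0.1, "pmi": 0.2},
}
ranking = rescore_triples(candidates, {"freq": 0.6, "pmi": 0.4})
```

    The point of the linear combination is that raw corpus frequency alone over-rewards common but uninformative co-occurrences; association metrics pull plausible property-like triples to the top.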

  2. The Frequency and Functions of "Just" in British Academic Spoken English

    ERIC Educational Resources Information Center

    Grant, Lynn E.

    2011-01-01

    This study investigates the frequency and functions of "just" in British academic spoken English. It adopts the meanings of "just" established by Lindemann and Mauranen, 2001, taken from the occurrences of "just" across five speech events in the Michigan Corpus of Academic Spoken English (MICASE) to see if they also apply to occurrences of "just"…

  3. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  4. Attempting Arts Integration: Secondary Teachers' Experiences with Spoken Word Poetry

    ERIC Educational Resources Information Center

    Williams, Wendy R.

    2018-01-01

    Spoken word poetry is an art form that involves poetry writing and performance. Past research on spoken word has described the benefits for poets and looked at its use in pre-service teacher education; however, research is needed to understand how to assist in-service teachers in using this art form. During the 2016-2017 school year, 15 teachers…

  5. Evaluating Corpus Literacy Training for Pre-Service Language Teachers: Six Case Studies

    ERIC Educational Resources Information Center

    Heather, Julian; Helt, Marie

    2012-01-01

    Corpus literacy is the ability to use corpora--large, principled databases of spoken and written language--for language analysis and instruction. While linguists have emphasized the importance of corpus training in teacher preparation programs, few studies have investigated the process of initiating teachers into corpus literacy with the result…

  6. BAAL/CUP Seminars 2009

    ERIC Educational Resources Information Center

    Cutting, Joan; Murphy, Brona

    2010-01-01

    The seminar, organised by Joan Cutting and Brona Murphy, aimed: (1) to bring together researchers involved in both emergent and established academic corpora (written and spoken) as well as linguists, lecturers and teachers researching in education, be it language teaching, language-teacher training or continuing professional development in…

  7. Spoken Oral Language and Adult Struggling Readers

    ERIC Educational Resources Information Center

    Bakhtiari, Dariush; Greenberg, Daphne; Patton-Terry, Nicole; Nightingale, Elena

    2015-01-01

    Oral language is a critical component to the development of reading acquisition. Much of the research concerning the relationship between oral language and reading ability is focused on children, while there is a paucity of research focusing on this relationship for adults who struggle with their reading. Oral language as defined in this paper…

  8. Corpora Processing and Computational Scaffolding for a Web-Based English Learning Environment: The CANDLE Project

    ERIC Educational Resources Information Center

    Liou, Hsien-Chin; Chang, Jason S; Chen, Hao-Jan; Lin, Chih-Cheng; Liaw, Meei-Ling; Gao, Zhao-Ming; Jang, Jyh-Shing Roger; Yeh, Yuli; Chuang, Thomas C.; You, Geeng-Neng

    2006-01-01

    This paper describes the development of an innovative web-based environment for English language learning with advanced data-driven and statistical approaches. The project uses various corpora, including a Chinese-English parallel corpus ("Sinorama") and various natural language processing (NLP) tools to construct effective English…

  9. Ontogenetic Profile of the Expression of Thyroid Hormone Receptors in Rat and Human Corpora Cavernosa of the Penis

    PubMed Central

    Carosa, Eleonora; Di Sante, Stefania; Rossi, Simona; Castri, Alessandra; D'Adamo, Fabio; Gravina, Giovanni Luca; Ronchi, Piero; Kostrouch, Zdenek; Dolci, Susanna; Lenzi, Andrea; Jannini, Emmanuele A

    2010-01-01

    Introduction: In the last few years, various studies have underlined a correlation between thyroid function and male sexual function, hypothesizing a direct action of thyroid hormones on the penis. Aim: To study the spatiotemporal distribution of mRNA for the thyroid hormone nuclear receptors (TR) α1, α2 and β in the penis and smooth muscle cells (SMCs) of the corpora cavernosa of rats and humans during development. Methods: We used several molecular biology techniques to study the TR expression in whole tissues or primary cultures from human and rodent penile tissues of different ages. Main Outcome Measure: We measured our data by semi-quantitative reverse transcription polymerase chain reaction (RT-PCR) amplification, Northern blot and immunohistochemistry. Results: We found that TRα1 and TRα2 are both expressed in the penis and in SMCs during ontogenesis without development-dependent changes. However, in the rodent model, TRβ shows an increase from 3 to 6 days post natum (dpn) to 20 dpn, remaining high in adulthood. The same expression profile was observed in humans. While the expression of TRβ is strictly regulated by development, TRα1 is the principal isoform present in corpora cavernosa, suggesting its importance in SMC function. These results have been confirmed by immunohistochemistry localization in SMCs and endothelial cells of the corpora cavernosa. Conclusions: The presence of TRs in the penis provides the biological basis for the direct action of thyroid hormones on this organ. Given this evidence, physicians would be advised to investigate sexual function in men with thyroid disorders. Carosa E, Di Sante S, Rossi S, Castri A, D'Adamo F, Gravina GL, Ronchi P, Kostrouch Z, Dolci S, Lenzi A, and Jannini EA. Ontogenetic profile of the expression of thyroid hormone receptors in rat and human corpora cavernosa of the penis. J Sex Med 2010;7:1381–1390. PMID:20141582

  10. Mapping Students' Spoken Conceptions of Equality

    ERIC Educational Resources Information Center

    Anakin, Megan

    2013-01-01

    This study expands contemporary theorising about students' conceptions of equality. A nationally representative sample of New Zealand students was asked to provide a spoken numerical response and an explanation as they solved an arithmetic additive missing number problem. Students' responses were conceptualised as acts of communication and…

  11. Combining MEDLINE and publisher data to create parallel corpora for the automatic translation of biomedical text

    PubMed Central

    2013-01-01

    Background: Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. Results: We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. Conclusions: We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts. PMID:23631733
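The core corpus-building step this record describes is a join: pair the English text of a MEDLINE record with its foreign-language counterpart using a shared article identifier. A minimal sketch of that alignment step is below; the record layout, sample titles, and PMIDs are illustrative assumptions, not the paper's actual MEDLINE schema or data.

```python
# Sketch: build a sentence-aligned parallel corpus by joining records
# on a shared article ID (here, a hypothetical PMID key).

english = {
    "100": "Thyroid hormone receptors in the penis",
    "101": "Machine translation of biomedical text",
}
french = {
    "100": "Récepteurs des hormones thyroïdiennes dans le pénis",
    "102": "Analyse du corpus parlé",
}

def align(src, tgt):
    """Pair source/target titles that share an ID; skip unmatched records."""
    return [(pmid, src[pmid], tgt[pmid]) for pmid in sorted(src.keys() & tgt.keys())]

for pmid, en, fr in align(english, french):
    # Tab-separated source/target pairs are a common input format
    # for statistical MT training pipelines such as Moses.
    print(f"{pmid}\t{en}\t{fr}")
```

Only IDs present in both collections survive, which is why unmatched records (101, 102 above) are silently dropped rather than paired incorrectly.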

  12. In a Manner of Speaking: Assessing Frequent Spoken Figurative Idioms to Assist ESL/EFL Teachers

    ERIC Educational Resources Information Center

    Grant, Lynn E.

    2007-01-01

    This article outlines criteria to define a figurative idiom, and then compares the frequent figurative idioms identified in two sources of spoken American English (academic and contemporary) to their frequency in spoken British English. This is done by searching the spoken part of the British National Corpus (BNC), to see whether they are frequent…

  13. The Development of Spoken Language in Deaf Children: Explaining the Unexplained Variance.

    ERIC Educational Resources Information Center

    Musselman, Carol; Kircaali-Iftar, Gonul

    1996-01-01

    This study compared 20 young deaf children with either exceptionally good or exceptionally poor spoken language for their hearing loss, age, and intelligence. Factors associated with high performance included earlier use of binaural ear-level aids, better educated mothers, auditory/verbal or auditory/oral instruction, reliance on spoken language…

  14. Learning in Parallel: Using Parallel Corpora to Enhance Written Language Acquisition at the Beginning Level

    ERIC Educational Resources Information Center

    Bluemel, Brody

    2014-01-01

    This article illustrates the pedagogical value of incorporating parallel corpora in foreign language education. It explores the development of a Chinese/English parallel corpus designed specifically for pedagogical application. The corpus tool was created to aid language learners in reading comprehension and writing development by making foreign…

  15. Prenatal development of the normal human vertebral corpora in different segments of the spine.

    PubMed

    Nolting, D; Hansen, B F; Keeling, J; Kjaer, I

    1998-11-01

    Vertebral columns from 13 normal human fetuses (10-24 weeks of gestation) that had been spontaneously aborted were investigated as part of the legal autopsy procedure. The investigation included spinal cord analysis. The aim was to analyze the formation of the normal human vertebral corpora along the spine, including the early location and disappearance of the notochord. Reference material on the development of the normal human vertebral corpora is needed for interpretation of published observations on prenatal malformations in the spine, which include observations of various types of malformation (anencephaly, spina bifida) and various genotypes (trisomy 18, 21 and 13, as well as triploidy). The vertebral columns were studied by using radiography (Faxitron X-ray apparatus, Faxitron Model 43,855, Hewlett Packard) in lateral, frontal, and axial views and histology (decalcification, followed by toluidine blue and alcian blue staining) in an axial view. Immunohistochemical marking with Keratin Wide Spectrum was also performed. Notochordal tissue (positive on marking with Keratin Wide Spectrum [DAKO, Denmark]) was located anterior to the cartilaginous body center in the youngest fetuses. The process of disintegration of the notochord and the morphology of the osseous vertebral corpora in the lumbosacral, thoracic, and cervical segments are described. Marked differences appeared in axial views, which were verified on horizontal histologic sections. The increase in size also differed among segments, being most pronounced in the thoracic and upper lumbar bodies. The lower thoracic bodies were the first to ossify. The morphologic changes observed by radiography were verified histologically. In this study, normal prenatal standards were established for the early development of the vertebral column. These standards can be used in the future for evaluation of pathologic deviations in the human vertebral column in the second trimester.

  16. Are written and spoken recall of text equivalent?

    PubMed

    Kellogg, Ronald T

    2007-01-01

    Writing is less practiced than speaking, graphemic codes are activated only in writing, and the retrieved representations of the text must be maintained in working memory longer because handwritten output is slower than speech. These extra demands on working memory could result in less effort being given to retrieval during written compared with spoken text recall. To test this hypothesis, college students read or heard Bartlett's "War of the Ghosts" and then recalled the text in writing or speech. Spoken recall produced more accurately recalled propositions and more major distortions (e.g., inferences) than written recall. The results suggest that writing reduces the retrieval effort given to reconstructing the propositions of a text.

  17. Handbook for Spoken Mathematics: (Larry's Speakeasy).

    ERIC Educational Resources Information Center

    Chang, Lawrence A.; And Others

    This handbook is directed toward those who have to deal with spoken mathematics, yet have insufficient background to know the correct verbal expression for the written symbolic one. It compiles consistent and well-defined ways of uttering mathematical expressions so listeners will receive clear, unambiguous, and well-pronounced representations…

  18. Prosodic Awareness and Punctuation Ability in Adult Readers

    ERIC Educational Resources Information Center

    Heggie, Lindsay; Wade-Woolley, Lesly

    2018-01-01

    We examined the relationship between two metalinguistic tasks: prosodic awareness and punctuation ability. Specifically, we investigated whether adults' ability to punctuate was related to the degree to which they are aware of and able to manipulate prosody in spoken language. English-speaking adult readers (n = 115) were administered a receptive…

  19. SPOKEN COCHABAMBA QUECHUA, UNITS 13-24.

    ERIC Educational Resources Information Center

    LASTRA, YOLANDA; SOLA, DONALD F.

    UNITS 13-24 OF THE SPOKEN COCHABAMBA QUECHUA COURSE FOLLOW THE GENERAL FORMAT OF THE FIRST VOLUME (UNITS 1-12). THIS SECOND VOLUME IS INTENDED FOR USE IN AN INTERMEDIATE OR ADVANCED COURSE AND INCLUDES MORE COMPLEX DIALOGS, CONVERSATIONS, "LISTENING-INS," AND DICTATIONS, AS WELL AS GRAMMAR AND EXERCISE SECTIONS COVERING ADDITIONAL…

  20. SPOKEN CUZCO QUECHUA, UNITS 7-12.

    ERIC Educational Resources Information Center

    SOLA, DONALD F.; AND OTHERS

    THIS SECOND VOLUME OF AN INTRODUCTORY COURSE IN SPOKEN CUZCO QUECHUA ALSO COMPRISES ENOUGH MATERIAL FOR ONE INTENSIVE SUMMER SESSION COURSE OR ONE SEMESTER OF SEMI-INTENSIVE INSTRUCTION (120 CLASS HOURS). THE METHOD OF PRESENTATION IS ESSENTIALLY THE SAME AS IN THE FIRST VOLUME WITH FURTHER CONTRASTIVE, LINGUISTIC ANALYSIS OF ENGLISH-QUECHUA…

  1. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  2. "Poetry Does Really Educate": An Interview with Spoken Word Poet Luka Lesson

    ERIC Educational Resources Information Center

    Xerri, Daniel

    2016-01-01

    Spoken word poetry is a means of engaging young people with a genre that has often been much maligned in classrooms all over the world. This interview with the Australian spoken word poet Luka Lesson explores issues that are of pressing concern to poetry education. These include the idea that engagement with poetry in schools can be enhanced by…

  3. A Shared Platform for Studying Second Language Acquisition

    ERIC Educational Resources Information Center

    MacWhinney, Brian

    2017-01-01

    The study of second language acquisition (SLA) can benefit from the same process of data sharing that has proven effective in areas such as first language acquisition and aphasiology. Researchers can work together to construct a shared platform that combines data from spoken and written corpora, online tutors, and Web-based experimentation. Many of…

  4. SPOKEN AYACUCHO QUECHUA, UNITS 11-20.

    ERIC Educational Resources Information Center

    PARKER, GARY J.; SOLA, DONALD F.

    THE ESSENTIALS OF AYACUCHO GRAMMAR WERE PRESENTED IN THE FIRST VOLUME OF THIS SERIES, SPOKEN AYACUCHO QUECHUA, UNITS 1-10. THE 10 UNITS IN THIS VOLUME (11-20) ARE INTENDED FOR USE IN AN INTERMEDIATE OR ADVANCED COURSE, AND PRESENT THE STUDENT WITH LENGTHIER AND MORE COMPLEX DIALOGS, CONVERSATIONS, "LISTENING-INS," AND DICTATIONS AS WELL…

  5. Spoken Sentence Production in College Students with Dyslexia: Working Memory and Vocabulary Effects

    ERIC Educational Resources Information Center

    Wiseheart, Rebecca; Altmann, Lori J. P.

    2018-01-01

    Background: Individuals with dyslexia demonstrate syntactic difficulties on tasks of language comprehension, yet little is known about spoken language production in this population. Aims: To investigate whether spoken sentence production in college students with dyslexia is less proficient than in typical readers, and to determine whether group…

  6. Using machine learning to disentangle homonyms in large text corpora.

    PubMed

    Roll, Uri; Correia, Ricardo A; Berger-Tal, Oded

    2018-06-01

    Systematic reviews are an increasingly popular decision-making tool that provides an unbiased summary of evidence to support conservation action. These reviews bridge the gap between researchers and managers by presenting a comprehensive overview of all studies relating to a particular topic and identify specifically where and under which conditions an effect is present. However, several technical challenges can severely hinder the feasibility and applicability of systematic reviews, for example, homonyms (terms that share spelling but differ in meaning). Homonyms add noise to search results and cannot be easily identified or removed. We developed a semiautomated approach that can aid in the classification of homonyms among narratives. We used a combination of automated content analysis and artificial neural networks to quickly and accurately sift through large corpora of academic texts and classify them into distinct topics. As an example, we explored the use of the word reintroduction in academic texts. Reintroduction is used within the conservation context to indicate the release of organisms into their former native habitat; however, a Web of Science search for this word returned thousands of publications in which the term has other meanings and contexts. Using our method, we automatically classified a sample of 3000 of these publications with over 99% accuracy, relative to a manual classification. Our approach can be used easily with other homonyms and can greatly facilitate systematic reviews or similar work in which homonyms hinder the harnessing of large text corpora. Beyond homonyms, we see great promise in combining automated content analysis and machine-learning methods to handle and screen big data for relevant information in conservation science. © 2017 Society for Conservation Biology.
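The technique this record describes, turning texts into content features and classifying them by topic to separate senses of a homonym, can be illustrated with a minimal sketch. The toy sentences, the bag-of-words features, and the nearest-centroid classifier below are illustrative assumptions standing in for the authors' content-analysis and neural-network pipeline.

```python
from collections import Counter
import math

def bow(text):
    """Lowercased bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

# Toy labeled texts: the conservation sense of "reintroduction" vs. others.
train = [
    ("reintroduction of wolves to their former native habitat", "conservation"),
    ("species reintroduction programs restore wild populations", "conservation"),
    ("reintroduction of the drug after a washout period", "other"),
    ("gradual reintroduction of solid food following surgery", "other"),
]

# One centroid (summed bag-of-words) per topic label.
centroids = {}
for text, label in train:
    centroids.setdefault(label, Counter()).update(bow(text))

def classify(text):
    """Assign the label whose centroid is most similar to the text."""
    v = bow(text)
    return max(centroids, key=lambda lab: cosine(v, centroids[lab]))

print(classify("reintroduction of captive-bred birds into native habitat"))  # → conservation
```

A real pipeline would train on thousands of documents and a richer feature space, but the sense-separation logic (vectorize, learn per-topic representations, assign by similarity) is the same.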

  7. Blood transfusion and resuscitation using penile corpora: an experimental study.

    PubMed

    Abolyosr, Ahmad; Sayed, M A; Elanany, Fathy; Smeika, M A; Shaker, S E

    2005-10-01

    To test the feasibility of using the penile corpora cavernosa for blood transfusion and resuscitation purposes. Three male donkeys were used for autologous blood transfusion into the corpus cavernosum during three sessions with a 1-week interval between each. Two blood units (450 mL each) were transfused per session to each donkey. Moreover, three dogs were bled until a state of shock was induced. The mean arterial blood pressure decreased to 60 mm Hg. The withdrawn blood (mean volume 396.3 mL) was transfused back into their corpora cavernosa under 150 mm Hg pressure. Different transfusion parameters were assessed. The Assiut Faculty of Medicine ethics committee approved the study before its initiation. For the donkey model, the mean time of blood collection was 12 minutes. The mean time needed to establish corporal access was 22 seconds. The mean time of blood transfusion was 14.2 minutes. The mean rate of blood transfusion was 31.7 mL/min. Mild penile elongation with or without mild penile tumescence was observed on four occasions. All penile shafts returned spontaneously to their pretransfusion state within a maximum of 5 minutes after cessation of blood transfusion. No extravasation, hematoma formation, or color changes occurred. Regarding the dog model, the mean rate of transfusion was 35.2 mL/min. All dogs were resuscitated at the end of the transfusion. The corpus cavernosum is a feasible, simple, rapid, and effective alternative route for blood transfusion and venous access. It can be resorted to whenever necessary. It is a reliable means for volume replacement and resuscitation in males.

  8. Seeing the Talker's Face Improves Free Recall of Speech for Young Adults with Normal Hearing but Not Older Adults with Hearing Loss

    ERIC Educational Resources Information Center

    Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker

    2016-01-01

    Purpose: Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Method: Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13…

  9. A Grammar of Spoken Brazilian Portuguese.

    ERIC Educational Resources Information Center

    Thomas, Earl W.

    This is a first-year text of Portuguese grammar based on the Portuguese of moderately educated Brazilians from the area around Rio de Janeiro. Spoken idiomatic usage is emphasized. An important innovation is found in the presentation of verb tenses; they are presented in the order in which the native speaker learns them. The text is intended to…

  10. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    PubMed

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

    This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLD.

  11. The Temporal Structure of Spoken Language Understanding.

    ERIC Educational Resources Information Center

    Marslen-Wilson, William; Tyler, Lorraine Komisarjevsky

    1980-01-01

    An investigation of word-by-word time-course of spoken language understanding focused on word recognition and structural and interpretative processes. Results supported an online interactive language processing theory, in which lexical, structural, and interpretative knowledge sources communicate and interact during processing efficiently and…

  12. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    ERIC Educational Resources Information Center

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  13. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    PubMed

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing potentially attributable to participants' literacy skills. Against this background, the current study took a look at the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, like in previous studies to date, were successfully able to anticipate upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Incremental comprehension of spoken quantifier sentences: Evidence from brain potentials.

    PubMed

    Freunberger, Dominik; Nieuwland, Mante S

    2016-09-01

    Do people incrementally incorporate the meaning of quantifier expressions to understand an unfolding sentence? Most previous studies concluded that quantifiers do not immediately influence how a sentence is understood based on the observation that online N400-effects differed from offline plausibility judgments. Those studies, however, used serial visual presentation (SVP), which involves unnatural reading. In the current ERP-experiment, we presented spoken positive and negative quantifier sentences ("Practically all/practically no postmen prefer delivering mail, when the weather is good/bad during the day"). Different from results obtained in a previously reported SVP-study (Nieuwland, 2016) sentence truth-value N400 effects occurred in positive and negative quantifier sentences alike, reflecting fully incremental quantifier comprehension. This suggests that the prosodic information available during spoken language comprehension supports the generation of online predictions for upcoming words and that, at least for quantifier sentences, comprehension of spoken language may proceed more incrementally than comprehension during SVP reading. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.

    PubMed

    Douglas, Michael

    2016-02-01

    To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of predevice fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals--Preschool Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved significantly higher scores for total language on the CELF-P and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. The preliminary results of this study support further investigation regarding DDI to investigate whether this method can consistently…

  16. SPOKEN AYACUCHO QUECHUA. UNITS 1-10.

    ERIC Educational Resources Information Center

    PARKER, GARY J.; SOLA, DONALD F.

    THIS BEGINNING COURSE IN AYACUCHO QUECHUA, SPOKEN BY ABOUT A MILLION PEOPLE IN SOUTH-CENTRAL PERU, WAS PREPARED TO INTRODUCE THE PHONOLOGY AND GRAMMAR OF THIS DIALECT TO SPEAKERS OF ENGLISH. THE FIRST OF TWO VOLUMES, IT SERVES AS A TEXT FOR A 6-WEEK INTENSIVE COURSE OF 20 CLASS HOURS A WEEK. THE AUTHORS COMPARE AND CONTRAST SIGNIFICANT FEATURES OF…

  17. English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…

  18. Talker and accent variability effects on spoken word recognition

    NASA Astrophysics Data System (ADS)

    Nyang, Edna E.; Rogers, Catherine L.; Nishi, Kanae

    2003-04-01

    A number of studies have shown that words in a list are recognized less accurately in noise and with longer response latencies when they are spoken by multiple talkers, rather than a single talker. These results have been interpreted as support for an exemplar-based model of speech perception, in which it is assumed that detailed information regarding the speaker's voice is preserved in memory and used in recognition, rather than being eliminated via normalization. In the present study, the effects of varying both accent and talker are investigated using lists of words spoken by (a) a single native English speaker, (b) six native English speakers, (c) three native English speakers and three Japanese-accented English speakers. Twelve /hVd/ words were mixed with multi-speaker babble at three signal-to-noise ratios (+10, +5, and 0 dB) to create the word lists. Native English-speaking listeners' percent-correct recognition for words produced by native English speakers across the three talker conditions (single talker native, multi-talker native, and multi-talker mixed native and non-native) and three signal-to-noise ratios will be compared to determine whether sources of speaker variability other than voice alone add to the processing demands imposed by simple (i.e., single accent) speaker variability in spoken word recognition.

  19. Orthography and Modality Influence Speech Production in Adults and Children

    ERIC Educational Resources Information Center

    Saletta, Meredith; Goffman, Lisa; Hogan, Tiffany P.

    2016-01-01

    Purpose: The acquisition of literacy skills influences the perception and production of spoken language. We examined if orthography influences implicit processing in speech production in child readers and in adult readers with low and high reading proficiency. Method: Children (n = 17), adults with typical reading skills (n = 17), and adults…

  20. Working memory predicts semantic comprehension in dichotic listening in older adults.

    PubMed

    James, Philip J; Krishnan, Saloni; Aydelott, Jennifer

    2014-10-01

    Older adults have difficulty understanding spoken language in the presence of competing voices. Everyday social situations involving multiple simultaneous talkers may become increasingly challenging in later life due to changes in the ability to focus attention. This study examined whether individual differences in cognitive function predict older adults' ability to access sentence-level meanings in competing speech using a dichotic priming paradigm. Older listeners showed faster responses to words that matched the meaning of spoken sentences presented to the left or right ear, relative to a neutral baseline. However, older adults were more vulnerable than younger adults to interference from competing speech when the competing signal was presented to the right ear. This pattern of performance was strongly correlated with a non-auditory working memory measure, suggesting that cognitive factors play a key role in semantic comprehension in competing speech in healthy aging. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. "Jaja" in Spoken German: Managing Knowledge Expectations

    ERIC Educational Resources Information Center

    Taleghani-Nikazm, Carmen; Golato, Andrea

    2016-01-01

    In line with the other contributions to this issue on teaching pragmatics, this paper provides teachers of German with a two-day lesson plan for integrating authentic spoken language and its associated cultural background into their teaching. Specifically, the paper discusses how "jaja" and its phonetic variants are systematically used…

  2. Some Steps To Improve Your Spoken English.

    ERIC Educational Resources Information Center

    Mellor, Jeff

    The guide contains suggestions for University of Tennessee, Knoxville foreign teaching assistants to improve their command of spoken English. The first section offers four speech improvement suggestions requiring little or no extra time, including speaking English as much and as often as possible, imitating native speakers, asking for the correct…

  3. Does Emotional Disclosure About Stress Improve Health in Rheumatoid Arthritis? Randomized, Controlled Trials of Written and Spoken Disclosure

    PubMed Central

    Lumley, Mark A.; Leisen, James C.C.; Partridge, R. Ty; Meyer, Tina M.; Radcliffe, Alison M.; Macklem, Debra J.; Naoum, Linda A.; Cohen, Jay L.; Lasichak, Lydia M.; Lubetsky, Michael R.; Mosley-Williams, Angelia D.; Granda, Jose L.

    2011-01-01

    Studies of the effects of disclosing stressful experiences among patients with rheumatoid arthritis (RA) have yielded inconsistent findings, perhaps due to different disclosure methods (writing or speaking) and various methodological limitations. We randomized adults with RA to a writing (n = 88) or speaking (to a recorder) sample (n = 93), and within each sample, to either disclosure or one of two control groups (positive or neutral events); all groups completed four 20-minute at-home sessions. Follow-up evaluations at 1, 3, and 6 months included self-reported, behavioral, physiological, and blinded physician-assessed outcomes. In both writing and speaking samples, the disclosure and control groups were comparably credible, and the linguistic content differed as expected. Covariance analyses at each follow-up point indicated that written disclosure had minimal effects compared to combined controls: only pain was reduced at 1 and 6 months, with no improvement in other outcomes. Spoken disclosure led to faster walking speed at 3 months, and reduced pain, swollen joints, and physician-rated disease activity at 6 months, but there were no effects on other outcomes. Latent growth curve modeling examined differences in the trajectory of change over follow-ups. Written disclosure improved affective pain and walking speed; spoken disclosure showed only a marginal benefit on sensory pain. In both analyses, disclosure's few benefits occurred relative to both positive and neutral control groups. We conclude that both written and spoken disclosure have modest benefits for patients with RA, particularly at 6 months, but these effects are limited in scope and consistency. PMID:21315515

  4. "We communicated that way for a reason": language practices and language ideologies among hearing adults whose parents are deaf.

    PubMed

    Pizer, Ginger; Walters, Keith; Meier, Richard P

    2013-01-01

    Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing communities. Despite this variation, the interviewees' moral judgments of their own and others' communicative behavior suggest that these adults share a language ideology concerning the obligation of all family members to expend effort to overcome potential communication barriers. To our knowledge, such a language ideology is not similarly pervasive among spoken-language bilingual families, raising the question of whether there is something unique about family bimodal bilingualism that imposes different rights and responsibilities on family members than spoken-language family bilingualism does. This ideology unites an otherwise diverse group of interviewees, where each one preemptively denied being a "typical CODA [child of deaf adults]."

  5. Learning and Consolidation of Novel Spoken Words

    ERIC Educational Resources Information Center

    Davis, Matthew H.; Di Betta, Anna Maria; Macdonald, Mark J. E.; Gaskell, Gareth

    2009-01-01

    Two experiments explored the neural mechanisms underlying the learning and consolidation of novel spoken words. In Experiment 1, participants learned two sets of novel words on successive days. A subsequent recognition test revealed high levels of familiarity for both sets. However, a lexical decision task showed that only novel words learned on…

  6. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    PubMed Central

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774

  7. Finding abbreviations in biomedical literature: three BioC-compatible modules and four BioC-formatted corpora.

    PubMed

    Islamaj Doğan, Rezarta; Comeau, Donald C; Yeganova, Lana; Wilbur, W John

    2014-01-01

    BioC is a recently created XML format to share text data and annotations, and an accompanying input/output library to promote interoperability of data and tools for natural language processing of biomedical text. This article reports the use of BioC to address a common challenge in processing biomedical text: the frequent abbreviation of entity names. We selected three different abbreviation definition identification modules and used the publicly available BioC code to convert these independent modules into BioC-compatible components that interact seamlessly with BioC-formatted data and other BioC-compatible modules. In addition, we consider four manually annotated corpora of abbreviations in biomedical text: the Ab3P corpus of 1250 PubMed abstracts, the BIOADI corpus of 1201 PubMed abstracts, the older MEDSTRACT corpus of 199 PubMed citations, and the Schwartz and Hearst corpus of 1000 PubMed abstracts. Annotations in these corpora have been re-evaluated by four annotators, and their consistency and quality have been improved. We converted the corpora to BioC format and describe the representation of the annotations. These corpora are used to evaluate the three abbreviation-finding algorithms, and the results are reported. The BioC-compatible modules, compared with their original forms, show no difference in efficiency, running time, or other comparable aspects. They can conveniently be used as a common pre-processing step for larger multi-layered text-mining endeavors. Database URL: Code and data are available for download at the BioC site: http://bioc.sourceforge.net.

  8. An event-related potential study of memory for words spoken aloud or heard.

    PubMed

    Wilding, E L; Rugg, M D

    1997-09-01

    Subjects made old/new recognition judgements to visually presented words, half of which had been encountered in a prior study phase. For each word judged old, subjects made a subsequent source judgement, indicating whether they had pronounced the word aloud at study (spoken words), or whether they had heard the word spoken to them (heard words). Event-related potentials (ERPs) were compared for three classes of test item; words correctly judged to be new (correct rejections), and spoken and heard words that were correctly assigned to source (spoken hit/hit and heard hit/hit response categories). Consistent with previous findings (Wilding, E. L. and Rugg, M. D., Brain, 1996, 119, 889-905), two temporally and topographically dissociable components, with parietal and frontal maxima respectively, differentiated the ERPs to the hit/hit and correct rejection response categories. In addition, there was some evidence that the frontally distributed component could be decomposed into two distinct components, only one of which differentiated the two classes of hit/hit ERPs. The findings suggest that at least three functionally and neurologically dissociable processes can contribute to successful recovery of source information.

  9. Elementary School Students' Spoken Activities and Their Responses in Math Learning by Peer-Tutoring

    ERIC Educational Resources Information Center

    Baiduri

    2017-01-01

    Students' activities in the learning process are very important to indicate the quality of learning process. One of which is spoken activity. This study was intended to analyze the elementary school students' spoken activities and their responses in joining Math learning process by peer-tutoring. Descriptive qualitative design was piloted by means…

  10. The Cortical Organization of Lexical Knowledge: A Dual Lexicon Model of Spoken Language Processing

    ERIC Educational Resources Information Center

    Gow, David W., Jr.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood.…

  11. Phonological Competition within the Word: Evidence from the Phoneme Similarity Effect in Spoken Production

    ERIC Educational Resources Information Center

    Cohen-Goldberg, Ariel M.

    2012-01-01

    Theories of spoken production have not specifically addressed whether the phonemes of a word compete with each other for selection during phonological encoding (e.g., whether /t/ competes with /k/ in cat). Spoken production theories were evaluated and found to fall into three classes, theories positing (1) no competition, (2) competition among…

  12. A Mother Tongue Spoken Mainly by Fathers.

    ERIC Educational Resources Information Center

    Corsetti, Renato

    1996-01-01

    Reviews what is known about Esperanto as a home language and first language. Recorded cases of Esperanto-speaking families are known since 1919, and in nearly all of the approximately 350 families documented, the language is spoken to the children by the father. The data suggests that this "artificial bilingualism" can be as successful…

  13. Spoken Word Recognition in Toddlers Who Use Cochlear Implants

    PubMed Central

    Grieco-Calub, Tina M.; Saffran, Jenny R.; Litovsky, Ruth Y.

    2010-01-01

    Purpose The purpose of this study was to assess the time course of spoken word recognition in 2-year-old children who use cochlear implants (CIs) in quiet and in the presence of speech competitors. Method Children who use CIs and age-matched peers with normal acoustic hearing listened to familiar auditory labels, in quiet or in the presence of speech competitors, while their eye movements to target objects were digitally recorded. Word recognition performance was quantified by measuring each child’s reaction time (i.e., the latency between the spoken auditory label and the first look at the target object) and accuracy (i.e., the amount of time that children looked at target objects within 367 ms to 2,000 ms after the label onset). Results Children with CIs were less accurate and took longer to fixate target objects than did age-matched children without hearing loss. Both groups of children showed reduced performance in the presence of the speech competitors, although many children continued to recognize labels at above-chance levels. Conclusion The results suggest that the unique auditory experience of young CI users slows the time course of spoken word recognition abilities. In addition, real-world listening environments may slow language processing in young language learners, regardless of their hearing status. PMID:19951921

  14. Well Spoken: Teaching Speaking to All Students

    ERIC Educational Resources Information Center

    Palmer, Erik

    2011-01-01

    All teachers at all grade levels in all subjects have speaking assignments for students, but many teachers believe they don't know how to teach speaking, and many even fear public speaking themselves. In his new book, "Well Spoken", veteran teacher and education consultant Erik Palmer shares the art of teaching speaking in any classroom. Teachers…

  15. Epistemic Stance in Spoken L2 English: The Effect of Task and Speaker Style

    ERIC Educational Resources Information Center

    Gablasova, Dana; Brezina, Vaclav; Mcenery, Tony; Boyd, Elaine

    2017-01-01

    The article discusses epistemic stance in spoken L2 production. Using a subset of the Trinity Lancaster Corpus of spoken L2 production, we analysed the speech of 132 advanced L2 speakers from different L1 and cultural backgrounds taking part in four speaking tasks: one largely monologic presentation task and three interactive tasks. The study…

  16. Stretched Verb Collocations with "Give": Their Use and Translation into Spanish Using the BNC and CREA Corpora

    ERIC Educational Resources Information Center

    Molina-Plaza, Silvia; de Gregorio-Godeo, Eduardo

    2010-01-01

    Within the context of on-going research, this paper explores the pedagogical implications of contrastive analyses of multiword units in English and Spanish based on electronic corpora as a CALL resource. The main tenets of collocations from a contrastive perspective--and the points of contact and departure between both languages--are discussed…

  17. Acquisition of graphic communication by a young girl without comprehension of spoken language.

    PubMed

    von Tetzchner, S; Øvreeide, K D; Jørgensen, K K; Ormhaug, B M; Oxholm, B; Warme, R

    To describe a graphic-mode communication intervention involving a girl with intellectual impairment and autism who did not develop comprehension of spoken language. The aim was to teach graphic-mode vocabulary that reflected her interests, preferences, and the activities and routines of her daily life, by providing sufficient cues to the meanings of the graphic representations so that she would not need to comprehend spoken instructions. An individual case study design was selected, including the use of written records, participant observation, and registration of the girl's graphic vocabulary and use of graphic signs and other communicative expressions. While the girl's comprehension (and hence use) of spoken language remained lacking over a 3-year period, she acquired an active use of over 80 photographs and pictograms. The girl was able to cope better with the cognitive and attentional requirements of graphic communication than those of spoken language and manual signs, which had been focused in earlier interventions. Her achievements demonstrate that it is possible for communication-impaired children to learn to use an augmentative and alternative communication system without speech comprehension, provided the intervention utilizes functional strategies and non-language cues to the meaning of the graphic representations that are taught.

  18. Lexical Competition in Non-Native Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Weber, Andrea; Cutler, Anne

    2004-01-01

    Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name ("pencil," given target "panda") than on less confusable distractors…

  19. Overlapping Networks Engaged during Spoken Language Production and Its Cognitive Control

    PubMed Central

    Geranmayeh, Fatemeh; Wise, Richard J. S.; Mehta, Amrish; Leech, Robert

    2014-01-01

    Spoken language production is a complex brain function that relies on large-scale networks. These include domain-specific networks that mediate language-specific processes, as well as domain-general networks mediating top-down and bottom-up attentional control. Language control is thought to involve a left-lateralized fronto-temporal-parietal (FTP) system. However, these regions do not always activate for language tasks and similar regions have been implicated in nonlinguistic cognitive processes. These inconsistent findings suggest that either the left FTP is involved in multidomain cognitive control or that there are multiple spatially overlapping FTP systems. We present evidence from an fMRI study using multivariate analysis to identify spatiotemporal networks involved in spoken language production in humans. We compared spoken language production (Speech) with multiple baselines, counting (Count), nonverbal decision (Decision), and “rest,” to pull apart the multiple partially overlapping networks that are involved in speech production. A left-lateralized FTP network was activated during Speech and deactivated during Count and nonverbal Decision trials, implicating it in cognitive control specific to sentential spoken language production. A mirror right-lateralized FTP network was activated in the Count and Decision trials, but not Speech. Importantly, a second overlapping left FTP network showed relative deactivation in Speech. These three networks, with distinct time courses, overlapped in the left parietal lobe. Contrary to the standard model of the left FTP as being dominant for speech, we revealed a more complex pattern within the left FTP, including at least two left FTP networks with competing functional roles, only one of which was activated in speech production. PMID:24966373

  1. Reading Spoken Words: Orthographic Effects in Auditory Priming

    ERIC Educational Resources Information Center

    Chereau, Celine; Gaskell, M. Gareth; Dumay, Nicolas

    2007-01-01

    Three experiments examined the involvement of orthography in spoken word processing using a task--unimodal auditory priming with offset overlap--taken to reflect activation of prelexical representations. Two types of prime-target relationship were compared; both involved phonological overlap, but only one had a strong orthographic overlap (e.g.,…

  2. Spoken Idiom Recognition: Meaning Retrieval and Word Expectancy

    ERIC Educational Resources Information Center

    Tabossi, Patrizia; Fanari, Rachele; Wolf, Kinou

    2005-01-01

    This study investigates recognition of spoken idioms occurring in neutral contexts. Experiment 1 showed that both predictable and non-predictable idiom meanings are available at string offset. Yet, only predictable idiom meanings are active halfway through a string and remain active after the string's literal conclusion. Experiment 2 showed that…

  3. Inferring Speaker Affect in Spoken Natural Language Communication

    ERIC Educational Resources Information Center

    Pon-Barry, Heather Roberta

    2013-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…

  4. Citation Matching in Sanskrit Corpora Using Local Alignment

    NASA Astrophysics Data System (ADS)

    Prasad, Abhinandan S.; Rao, Shrisha

    Citation matching is the problem of finding which citation occurs in a given textual corpus. Most existing citation matching work is done on scientific literature. The goal of this paper is to present methods for performing citation matching on Sanskrit texts. Exact matching and approximate matching are the two methods for performing citation matching. The exact matching method checks for exact occurrence of the citation with respect to the textual corpus. Approximate matching is a fuzzy string-matching method which computes a similarity score between an individual line of the textual corpus and the citation. The Smith-Waterman-Gotoh algorithm for local alignment, which is generally used in bioinformatics, is used here for calculating the similarity score. This similarity score is a measure of the closeness between the text and the citation. The exact- and approximate-matching methods are evaluated and compared. The methods presented can be easily applied to corpora in other Indic languages like Kannada, Tamil, etc. The approximate-matching method can in particular be used in the compilation of critical editions and plagiarism detection in a literary work.
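    The approximate-matching idea can be illustrated with plain Smith-Waterman local alignment; the Gotoh refinement used in the paper adds affine gap penalties (a cheaper cost for extending a gap than for opening one). The scoring values and the self-score normalization below are assumptions chosen for illustration, not the paper's parameters.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score between strings a and b (linear gap cost)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # H[i][j]: best score of an alignment ending at a[:i], b[:j]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                       # local alignment may restart anywhere
                          H[i - 1][j - 1] + sub,   # match or substitution
                          H[i - 1][j] + gap,       # gap in b
                          H[i][j - 1] + gap)       # gap in a
            best = max(best, H[i][j])
    return best

def similarity(line, citation):
    """Score in [0, 1]: alignment score normalized by the citation's self-score."""
    self_score = smith_waterman(citation, citation)
    return smith_waterman(line, citation) / self_score if self_score else 0.0
```

    A line of the corpus that contains the citation verbatim scores 1.0, unrelated text scores near 0, and a threshold on this value flags candidate matches.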

  5. Spoken English. "Educational Review" Occasional Publications Number Two.

    ERIC Educational Resources Information Center

    Wilkinson, Andrew; And Others

    Modifications of current assumptions both about the nature of the spoken language and about its functions in relation to personality development are suggested in this book. The discussion covers an explanation of "oracy" (the oral skills of speaking and listening); the contributions of linguistics to the teaching of English in Britain; the…

  6. Development of Lexical-Semantic Language System: N400 Priming Effect for Spoken Words in 18- and 24-Month Old Children

    ERIC Educational Resources Information Center

    Rama, Pia; Sirri, Louah; Serres, Josette

    2013-01-01

    Our aim was to investigate whether developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French learning children. Spoken word pairs were either semantically related…

  7. The time course of morphological processing during spoken word recognition in Chinese.

    PubMed

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-12-01

    We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and morphological processing (i.e., semantic access to the first constituent) that occurs at an early processing stage before access to the representation of the whole word in Chinese.

  8. Spoken word recognition by Latino children learning Spanish as their first language*

    PubMed Central

    HURTADO, NEREYDA; MARCHMAN, VIRGINIA A.; FERNALD, ANNE

    2010-01-01

    Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157

  9. Individual differences in online spoken word recognition: Implications for SLI

    PubMed Central

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2012-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels; vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014

  10. Closed-Caption Television and Adult Students of English as a Second Language.

    ERIC Educational Resources Information Center

    Smith, Jennifer J.

    The use of closed-caption television (CCTV) to help teach English as a Second Language (ESL) to adults was studied with a group of adult students in the Arlington, Virginia, Education and Employment Program. Although CCTV is designed for the hearing impaired, its combination of written with spoken English in the visual context of television makes…

  11. Accessory corpora lutea formation in pregnant Hokkaido sika deer (Cervus nippon yesoensis) investigated by examination of ovarian dynamics and steroid hormone concentrations.

    PubMed

    Yanagawa, Yojiro; Matsuura, Yukiko; Suzuki, Masatsugu; Saga, Shin-Ichi; Okuyama, Hideto; Fukui, Daisuke; Bando, Gen; Nagano, Masashi; Katagiri, Seiji; Takahashi, Yoshiyuki; Tsubota, Toshio

    2015-01-01

    Generally, sika deer conceive a single fetus, but approximately 80% of pregnant females have two corpora lutea (CLs). The function of the accessory CL (ACL) is unknown; moreover, the process of ACL formation is unclear, and understanding this process is necessary to clarify its role. To elucidate the process of ACL formation, the ovarian dynamics of six adult Hokkaido sika deer females were examined ultrasonographically together with peripheral estradiol-17β and progesterone concentrations. ACLs formed in three females that conceived at the first estrus of the breeding season, but not in those females that conceived at the second estrus. After copulation, postconception ovulation of the dominant follicle of the first wave is induced by an increase in estradiol-17β, which leads to formation of an ACL. A relatively low concentration of progesterone after the first estrus of the breeding season is considered to be responsible for the increase in estradiol-17β after copulation.

  13. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  14. Pair Counting to Improve Grammar and Spoken Fluency

    ERIC Educational Resources Information Center

    Hanson, Stephanie

    2017-01-01

    English language learners are often more grammatically accurate in writing than in speaking. As students focus on meaning while speaking, their spoken fluency comes at a cost: their grammatical accuracy decreases. The author wanted to find a way to help her students improve their oral grammar; that is, she wanted them to focus on grammar while…

  15. The employment of a spoken language computer applied to an air traffic control task.

    NASA Technical Reports Server (NTRS)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve the controller performance.

  16. L2 Performance in Text-Chat and Spoken Discourse

    ERIC Educational Resources Information Center

    Sauro, Shannon

    2012-01-01

    The present study builds upon research in the CAF (complexity, accuracy, fluency) framework for examining learner performance to compare the lexical and syntactic complexity of learner output in spoken discourse and synchronous computer-mediated communication (SCMC) during completion of narrative tasks. Data were generated from transcripts and…

  17. Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension

    ERIC Educational Resources Information Center

    Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica

    2015-01-01

    When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…

  18. The gender congruency effect during bilingual spoken-word recognition

    PubMed Central

    Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa

    2016-01-01

    We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132

  19. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    PubMed

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.

  20. Speech-Language Pathologists: Vital Listening and Spoken Language Professionals

    ERIC Educational Resources Information Center

    Houston, K. Todd; Perigoe, Christina B.

    2010-01-01

    Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…

  1. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    PubMed

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
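
    The phi-square statistic mentioned above is, in essence, a chi-square measure of association normalized by the number of observations. As a rough illustration of the idea (the response-count vectors below are invented; this is only a generic sketch of the statistic, not the authors' exact confusability computation):

```python
def phi_square(counts_a, counts_b):
    """Phi-square dissimilarity between two response-count
    distributions over the same K response categories:
    phi^2 = chi^2 / N, computed on the 2 x K table [a; b].
    0 = proportionally identical responses (maximally confusable);
    1 = completely disjoint responses."""
    table = [list(counts_a), list(counts_b)]
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n
            if exp > 0:
                chi2 += (obs - exp) ** 2 / exp
    return chi2 / n

# Proportionally identical responses -> 0.0; disjoint responses -> 1.0
print(phi_square([10, 5, 5], [20, 10, 10]))  # prints 0.0
print(phi_square([10, 0], [0, 10]))          # prints 1.0
```

    Two words whose confusion-response distributions are proportionally identical score 0 (highly confusable competitors); fully disjoint distributions score 1.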

  2. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    PubMed

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues) and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to…

  3. A preliminary study of subjective frequency estimates of words spoken in Cantonese.

    PubMed

    Yip, M C

    2001-06-01

    A database is presented of subjective frequency estimates for a set of 30 Chinese homophones. The estimates are based on an analysis of responses from a simple listening task completed by 120 university students, who were asked to report the first meaning that came to mind upon hearing a Chinese homophone by writing down the corresponding Chinese characters. There was a correlation of .66 between the frequencies of spoken and written words, suggesting that distributional information about lexical representations is generally independent of modality. These subjective frequency counts should be useful in constructing material sets for research on word recognition in spoken Chinese (Cantonese).
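
    The .66 figure is a plain Pearson correlation between the two sets of frequency counts. A minimal sketch (the counts below are invented for illustration, not Yip's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between paired frequency counts."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical spoken (subjective) vs. written frequency counts
spoken = [12, 30, 45, 9, 60]
written = [10, 25, 50, 15, 55]
print(round(pearson_r(spoken, written), 2))  # prints 0.97
```

    A value near 1 would indicate that spoken and written counts rank words almost identically; .66, as reported, leaves substantial modality-specific variation.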

  4. Conversational Entrainment of Vocal Fry in Young Adult Female American English Speakers.

    PubMed

    Borrie, Stephanie A; Delfino, Christine R

    2017-07-01

    Conversational entrainment, the natural tendency for people to modify their behaviors to more closely match their communication partner, is examined as one possible mechanism modulating the prevalence of vocal fry in the speech of young American women engaged in spoken dialogue. Twenty young adult female American English speakers engaged in two spoken dialogue tasks: one with a young adult female American English conversational partner who exhibited substantial vocal fry, and one with a young adult female American English conversational partner who exhibited quantifiably less vocal fry. Dialogues were analyzed for proportion of vocal fry, by speaker, and two measures of communicative success (efficiency and enjoyment). Participants employed significantly more vocal fry when conversing with the partner who exhibited substantial vocal fry than when conversing with the partner who exhibited quantifiably less vocal fry. Further, greater similarity between communication partners in their use of vocal fry tracked with higher scores of communicative efficiency and communicative enjoyment. Conversational entrainment offers a mechanistic framework that may be used to explain, to some degree, the frequency with which vocal fry is employed by young American women engaged in spoken dialogue. Further, young American women who modulated their vocal patterns during dialogue to match those of their conversational partner gained more efficiency and enjoyment from their interactions, demonstrating the cognitive and social benefits of entrainment. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  5. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    PubMed

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Context and Spoken Word Recognition in a Novel Lexicon

    ERIC Educational Resources Information Center

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2008-01-01

    Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments,…

  7. Visual Speech Primes Open-Set Recognition of Spoken Words

    ERIC Educational Resources Information Center

    Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.

    2009-01-01

    Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…

  8. Oral and Literate Strategies in Spoken and Written Narratives.

    ERIC Educational Resources Information Center

    Tannen, Deborah

    1982-01-01

    Discusses comparative analysis of spoken and written versions of a narrative to demonstrate that features which have been identified as characterizing oral discourse are also found in written discourse and that the written short story combines syntactic complexity expected in writing with features which create involvement expected in speaking.…

  9. Are Young Children with Cochlear Implants Sensitive to the Statistics of Words in the Ambient Spoken Language?

    ERIC Educational Resources Information Center

    Guo, Ling-Yu; McGregor, Karla K.; Spencer, Linda J.

    2015-01-01

    Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation.…

  10. Dyslexia in adults: Evidence for deficits in non-word reading and in the phonological representation of lexical items.

    PubMed

    Elbro, C; Nielsen, I; Petersen, D K

    1994-01-01

    Difficulties in reading and language skills which persist from childhood into adult life are the concerns of this article. The aims were twofold: (1) to find measures of adult reading processes that validate adults' retrospective reports of difficulties in learning to read during the school years, and (2) to search for indications of basic deficits in phonological processing that may point toward underlying causes of reading difficulties. Adults who reported a history of difficulties in learning to read (n=102) were distinctly disabled in phonological coding in reading, compared to adults without similar histories (n=56). They were less disabled in the comprehension of written passages, and the comprehension disability was explained by the phonological difficulties. A number of indications were found that adults with poor phonological coding skills in reading (i.e., dyslexia) have basic deficits in phonological representations of spoken words, even when semantic word knowledge, phonemic awareness, educational level, and daily reading habits are taken into account. It is suggested that dyslexics possess less distinct phonological representations of spoken words.

  11. Spoken Word Recognition of Chinese Words in Continuous Speech

    ERIC Educational Resources Information Center

    Yip, Michael C. W.

    2015-01-01

    The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending positions of Cantonese syllables than others, these kinds of probabilistic information about syllables may cue the locations…

  12. Semantic information mediates visual attention during spoken word recognition in Chinese: Evidence from the printed-word version of the visual-world paradigm.

    PubMed

    Shen, Wei; Qu, Qingqing; Li, Xingshan

    2016-07-01

    In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.

  13. Primary phonological planning units in spoken word production are language-specific: Evidence from an ERP study.

    PubMed

    Wang, Jie; Wong, Andus Wing-Kuen; Wang, Suiping; Chen, Hsuan-Chih

    2017-07-19

    It is widely acknowledged in Germanic languages that segments are the primary planning units at the phonological encoding stage of spoken word production. Mixed results, however, have been found in Chinese, and it is still unclear what roles syllables and segments play in planning Chinese spoken word production. In the current study, participants were asked to first prepare and later produce disyllabic Mandarin words upon picture prompts and a response cue while electroencephalogram (EEG) signals were recorded. Each two consecutive pictures implicitly formed a pair of prime and target, whose names shared the same word-initial atonal syllable or the same word-initial segments, or were unrelated in the control conditions. Only syllable repetition induced significant effects on event-related brain potentials (ERPs) after target onset: a widely distributed positivity in the 200- to 400-ms interval and an anterior positivity in the 400- to 600-ms interval. We interpret these to reflect syllable-size representations at the phonological encoding and phonetic encoding stages. Our results provide the first electrophysiological evidence for the distinct role of syllables in producing Mandarin spoken words, supporting a language specificity hypothesis about the primary phonological units in spoken word production.

  14. The Effect of the Integration of Corpora in Reading Comprehension Classrooms on English as a Foreign Language Learners' Vocabulary Development

    ERIC Educational Resources Information Center

    Gordani, Yahya

    2013-01-01

    This study used a randomized pretest-posttest control group design to examine the effect of the integration of corpora in general English courses on the students' vocabulary development. To enhance the learners' lexical repertoire and thereby improve their reading comprehension, an online corpus-based approach was integrated into 42 hours of…

  15. A Uniform Identity: Schoolgirl Snapshots and the Spoken Visual

    ERIC Educational Resources Information Center

    Spencer, Stephanie

    2007-01-01

    This article discusses the possibility for expanding our understanding of the visual to include the "spoken visual" within oral history analysis. It suggests that adding a further reading, that of the visualized body, to the voice-centred relational method we can consider the meaning of the uniformed body for the individual. It uses as a…

  16. Taming Big Data: An Information Extraction Strategy for Large Clinical Text Corpora.

    PubMed

    Gundlapalli, Adi V; Divita, Guy; Carter, Marjorie E; Redd, Andrew; Samore, Matthew H; Gupta, Kalpana; Trautner, Barbara

    2015-01-01

    Concepts of interest for clinical and research purposes are not uniformly distributed in clinical text available in electronic medical records. The purpose of our study was to identify filtering techniques to select 'high yield' documents for increased efficacy and throughput. Using two large corpora of clinical text, we demonstrate the identification of 'high yield' document sets in two unrelated domains: homelessness and indwelling urinary catheters. For homelessness, the high yield set includes homeless program and social work notes. For urinary catheters, concepts were more prevalent in notes from hospitalized patients; nursing notes accounted for a majority of the high yield set. This filtering will enable customization and refining of information extraction pipelines to facilitate extraction of relevant concepts for clinical decision support and other uses.
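
    The "high yield" filtering idea reduces to ranking document types by how often target concepts appear in them and keeping the types above a threshold. A toy sketch (note types, texts, terms, and the 0.5 threshold are all invented for illustration; this is not the authors' pipeline):

```python
from collections import Counter

# Invented mini-corpus of (note_type, text) records and target terms
notes = [
    ("social_work", "Patient reports being homeless since May."),
    ("nursing", "Indwelling urinary catheter in place."),
    ("cardiology", "ECG unremarkable."),
    ("social_work", "Homeless shelter placement arranged."),
]
terms = {"homeless", "shelter"}

# Count how many notes of each type mention at least one target term
hits, totals = Counter(), Counter()
for note_type, text in notes:
    totals[note_type] += 1
    if any(t in text.lower() for t in terms):
        hits[note_type] += 1

# Rank note types by concept yield; keep those above a threshold
yield_by_type = {t: hits[t] / totals[t] for t in totals}
high_yield = sorted(t for t, y in yield_by_type.items() if y > 0.5)
print(high_yield)  # prints ['social_work']
```

    Downstream extraction then runs only over the high-yield document set, which is where the throughput gain comes from.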

  17. BALDEY: A database of auditory lexical decisions.

    PubMed

    Ernestus, Mirjam; Cutler, Anne

    2015-01-01

    In an auditory lexical decision experiment, 5541 spoken content words and pseudowords were presented to 20 native speakers of Dutch. The words vary in phonological make-up and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudowords were matched in these respects to the real words. The BALDEY (“biggest auditory lexical decision experiment yet”) data file includes response times and accuracy rates, with, for each item, morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbours and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles, and frequency ratings by 75 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
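
    One common way to operationalize "the point at which a word has no further neighbours" is a uniqueness-point measure: the length of the shortest prefix of the word's phoneme string that no other lexical entry shares. A toy sketch (the pseudo-phoneme strings are invented, not BALDEY data, and BALDEY compares several related measures; this shows only the simplest):

```python
def uniqueness_point(word, lexicon):
    """1-based position at which `word` diverges from every other
    lexicon entry; returns len(word) + 1 if some other entry still
    shares the whole word as a prefix."""
    others = [w for w in lexicon if w != word]
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if not any(o.startswith(prefix) for o in others):
            return i
    return len(word) + 1

# Toy lexicon of pseudo-phoneme strings
lexicon = ["kat", "kano", "kanon", "kamer"]
print(uniqueness_point("kamer", lexicon))  # prints 3 ('kam' is unique)
print(uniqueness_point("kano", lexicon))   # prints 5 ('kanon' shares all of it)
```

    Words with earlier uniqueness points can in principle be recognized from less of the acoustic signal, which is why such measures are tested as predictors of lexical decision response times.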

  18. Spoken language skills and educational placement in Finnish children with cochlear implants.

    PubMed

    Lonka, Eila; Hasan, Marja; Komulainen, Erkki

    2011-01-01

    This study reports the demographics, and the auditory and spoken language development as well as educational settings, for a total of 164 Finnish children with cochlear implants. Two questionnaires were employed: the first, concerning day care and educational placement, was filled in by professionals for rehabilitation guidance, and the second, evaluating language development (categories of auditory performance, spoken language skills, and main mode of communication), by speech and language therapists in audiology departments. Nearly half of the children were enrolled in normal kindergartens and 43% of school-aged children in mainstream schools. Categories of auditory performance were observed to grow in relation to age at cochlear implantation (p < 0.001) as well as in relation to proportional hearing age (p < 0.001). The composite scores for language development moved to more diversified ones in relation to increasing age at cochlear implantation and proportional hearing age (p < 0.001). Children without additional disorders outperformed those with additional disorders. The results indicate that the most favorable age for cochlear implantation could be earlier than 2. Compared to other children, spoken language evaluation scores of those with additional disabilities were significantly lower; however, these children showed gradual improvements in their auditory perception and language scores. Copyright © 2011 S. Karger AG, Basel.

  19. The time-course of the cross-modal semantic modulation of visual picture processing by naturalistic sounds and spoken words.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2013-01-01

    The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.

  20. Use of spoken and written Japanese did not protect Japanese-American men from cognitive decline in late life.

    PubMed

    Crane, Paul K; Gruhl, Jonathan C; Erosheva, Elena A; Gibbons, Laura E; McCurry, Susan M; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon

    2010-11-01

    Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900-1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve.

  1. Use of Spoken and Written Japanese Did Not Protect Japanese-American Men From Cognitive Decline in Late Life

    PubMed Central

    Gruhl, Jonathan C.; Erosheva, Elena A.; Gibbons, Laura E.; McCurry, Susan M.; Rhoads, Kristoffer; Nguyen, Viet; Arani, Keerthi; Masaki, Kamal; White, Lon

    2010-01-01

    Objectives. Spoken bilingualism may be associated with cognitive reserve. Mastering a complicated written language may be associated with additional reserve. We sought to determine if midlife use of spoken and written Japanese was associated with lower rates of late life cognitive decline. Methods. Participants were second-generation Japanese-American men from the Hawaiian island of Oahu, born 1900–1919, free of dementia in 1991, and categorized based on midlife self-reported use of spoken and written Japanese (total n included in primary analysis = 2,520). Cognitive functioning was measured with the Cognitive Abilities Screening Instrument scored using item response theory. We used mixed effects models, controlling for age, income, education, smoking status, apolipoprotein E e4 alleles, and number of study visits. Results. Rates of cognitive decline were not related to use of spoken or written Japanese. This finding was consistent across numerous sensitivity analyses. Discussion. We did not find evidence to support the hypothesis that multilingualism is associated with cognitive reserve. PMID:20639282

  2. Error Awareness and Recovery in Conversational Spoken Language Interfaces

    DTIC Science & Technology

    2007-05-01

    portant step towards constructing autonomously self-improving systems. Furthermore, we developed a scalable, data-driven approach that allows a system...problems in spoken dialog (as well as other interactive systems) and constitutes an important step towards building autonomously self-improving...implicitly-supervised learning approach is applicable to other problems, and represents an important step towards developing autonomous, self

  3. A Comparison between Written and Spoken Narratives in Aphasia

    ERIC Educational Resources Information Center

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  4. Project ASPIRE: Spoken Language Intervention Curriculum for Parents of Low-socioeconomic Status and Their Deaf and Hard-of-Hearing Children.

    PubMed

    Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen

    2016-02-01

Objective: to investigate the impact of a spoken language intervention curriculum aimed at improving the language environments that caregivers of low socioeconomic status (SES) provide for their D/HH children with CIs and HAs, in support of the children's spoken language development. Design: quasi-experimental. Setting: tertiary. Participants: 32 caregiver-child dyads of low SES (defined by caregiver education ≤ MA/MS and the income proxies Medicaid or WIC/LINK), with children aged < 4.5 years, a hearing loss of ≥ 30 dB between 500 and 4000 Hz, and at least one amplification device with adequate amplification (hearing aid, cochlear implant, or osseo-integrated device). Intervention: behavioral; a caregiver-directed educational curriculum designed to improve D/HH children's early language environments. Main outcome measures: changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], Conversational Turn Count [CTC]). Results: significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group; no significant changes in LENA outcomes. Conclusions: results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.

  5. Hemispheric Differences in Indexical Specificity Effects in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Gonzalez, Julio; McLennan, Conor T.

    2007-01-01

    Variability in talker identity, one type of indexical variation, has demonstrable effects on the speed and accuracy of spoken word recognition. Furthermore, neuropsychological evidence suggests that indexical and linguistic information may be represented and processed differently in the 2 cerebral hemispheres, and is consistent with findings from…

  6. Listening comprehension across the adult lifespan

    PubMed Central

    Sommers, Mitchell S.; Hale, Sandra; Myerson, Joel; Rose, Nathan; Tye-Murray, Nancy; Spehar, Brent

    2011-01-01

Short Summary The current study provides the first systematic assessment of listening comprehension across the adult lifespan. A total of 433 participants ranging in age from 20 to 90 years listened to spoken passages and answered comprehension questions following each passage. In addition, measures of auditory sensitivity were obtained from all participants to determine if hearing loss and listening comprehension changed similarly across the adult lifespan. As expected, auditory sensitivity declined from age 20 to age 90. In contrast, listening comprehension remained relatively unchanged until approximately age 65-70, with declines evident only for the oldest participants. PMID:21716112

  7. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    NASA Astrophysics Data System (ADS)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

An automatic speech-to-text transformation system suited to an unlimited vocabulary is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a preceding stage of isolated word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for the input sequence. Pronunciation differences among regions of Brazil are considered, but only those that cause differences in phonological transcription, because differences at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all possible written words are analyzed from an orthographic and grammatical point of view to eliminate the incorrect ones.
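
    The two-stage pipeline described above (filter out non-PLB phones, then rank graphemic candidates probabilistically) can be sketched in a few lines. The phone inventory, the lexicon entries, and the probabilities below are invented toy values, not data from the system:

    ```python
    # Hedged sketch of the transcoding pipeline: stage 1 drops phones outside
    # the language's inventory; stage 2 looks up graphemic candidates in a
    # pronunciation lexicon and orders them by a probabilistic criterion.
    # Inventory, lexicon, and probabilities are illustrative assumptions.

    PLB_PHONES = {"k", "a", "z", "s", "u"}

    # Maps a phoneme string to possible written words with corpus probabilities.
    LEXICON = {
        "kaza": [("casa", 0.9), ("caza", 0.1)],   # homophone spellings
        "kazas": [("casas", 1.0)],
    }

    def transcode(phones):
        filtered = [p for p in phones if p in PLB_PHONES]  # stage 1: drop non-PLB phones
        key = "".join(filtered)
        candidates = LEXICON.get(key, [])
        # stage 2: order graphemic possibilities by decreasing probability
        return [word for word, _ in sorted(candidates, key=lambda wp: -wp[1])]

    print(transcode(["k", "a", "?", "z", "a"]))   # → ['casa', 'caza']
    ```

    A later orthographic/grammatical filter, as in the final stage described above, would then prune the ranked candidates.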

  8. Selective auditory attention in adults: effects of rhythmic structure of the competing language.

    PubMed

    Reel, Leigh Ann; Hicks, Candace Bourland

    2012-02-01

    The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Reception thresholds for English sentences were measured for 50 monolingual English-speaking adults in conditions with 2-talker (male-female) competing speech spoken in a stress-based (English, German), syllable-based (Spanish, French), or mora-based (Japanese) language. Two different masking signals were created for each language (i.e., 2 different 2-talker pairs). All subjects were tested in 10 competing conditions (2 conditions for each of the 5 languages). A significant difference was noted between the 2 masking signals within each language. Across languages, significantly greater listening difficulty was observed in conditions where competing speech was spoken in English, German, or Japanese, as compared with Spanish or French. Results suggest that (a) for a particular language, masking effectiveness can vary between different male-female 2-talker maskers and (b) for stress-based vs. syllable-based languages, competing speech is more difficult to ignore when spoken in a language from the native rhythmic class as compared with a nonnative rhythmic class, regardless of whether the language is familiar or unfamiliar to the listener.

  9. How Hierarchical Topics Evolve in Large Text Corpora.

    PubMed

    Cui, Weiwei; Liu, Shixia; Wu, Zhuofeng; Wei, Hao

    2014-12-01

    Using a sequence of topic trees to organize documents is a popular way to represent hierarchical and evolving topics in text corpora. However, following evolving topics in the context of topic trees remains difficult for users. To address this issue, we present an interactive visual text analysis approach to allow users to progressively explore and analyze the complex evolutionary patterns of hierarchical topics. The key idea behind our approach is to exploit a tree cut to approximate each tree and allow users to interactively modify the tree cuts based on their interests. In particular, we propose an incremental evolutionary tree cut algorithm with the goal of balancing 1) the fitness of each tree cut and the smoothness between adjacent tree cuts; 2) the historical and new information related to user interests. A time-based visualization is designed to illustrate the evolving topics over time. To preserve the mental map, we develop a stable layout algorithm. As a result, our approach can quickly guide users to progressively gain profound insights into evolving hierarchical topics. We evaluate the effectiveness of the proposed method on Amazon's Mechanical Turk and real-world news data. The results show that users are able to successfully analyze evolving topics in text data.
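
    The tree-cut idea above can be illustrated with a minimal sketch. A cut selects, for each branch, either a topic node or its descendants, so that the whole tree is summarized at a chosen granularity. The `TopicNode` structure, the weights, and the simple threshold rule below are illustrative assumptions standing in for the paper's fitness/smoothness objective:

    ```python
    # Minimal threshold-based tree cut: descend into a topic only while it is
    # 'large' (weight above threshold); otherwise keep the subtree whole.

    class TopicNode:
        def __init__(self, name, weight, children=()):
            self.name = name          # topic label
            self.weight = weight      # e.g., fraction of documents under this topic
            self.children = list(children)

    def tree_cut(node, threshold):
        """Return the topic labels forming a cut of the tree."""
        if not node.children or node.weight < threshold:
            return [node.name]
        cut = []
        for child in node.children:
            cut.extend(tree_cut(child, threshold))
        return cut

    root = TopicNode("news", 1.0, [
        TopicNode("politics", 0.6, [TopicNode("elections", 0.4),
                                    TopicNode("policy", 0.2)]),
        TopicNode("sports", 0.4),
    ])

    print(tree_cut(root, 0.5))   # politics is expanded, sports is kept whole
    ```

    The incremental evolutionary algorithm in the paper additionally balances each cut against the cuts of adjacent trees in the sequence; this sketch shows only the static notion of a cut.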

  10. Interference of spoken word recognition through phonological priming from visual objects and printed words.

    PubMed

    McQueen, James M; Huettig, Falk

    2014-01-01

Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and even though strategic naming would interfere with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

  11. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    ERIC Educational Resources Information Center

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  12. Individual Differences in Online Spoken Word Recognition: Implications for SLI

    ERIC Educational Resources Information Center

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2010-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…

  13. AI-Based Chatterbots and Spoken English Teaching: A Critical Analysis

    ERIC Educational Resources Information Center

    Sha, Guoquan

    2009-01-01

    The aim of various approaches implemented, whether the classical "three Ps" (presentation, practice, and production) or communicative language teaching (CLT), is to achieve communicative competence. Although a lot of software developed for teaching spoken English is dressed up to raise interaction, its methodology is largely rooted in tradition.…

  14. Teaching ESL to Competencies: A Departure from a Traditional Curriculum for Adult Learners with Specific Needs. Adult Education Series #12. Refugee Education Guide.

    ERIC Educational Resources Information Center

    Center for Applied Linguistics, Washington, DC. Language and Orientation Resource Center.

    Teaching English as a second language (ESL) to competencies requires that the instructional focus be on functional competencies and life-coping skills while developing the spoken and/or written English structures necessary to perform these skills. A step-by-step approach to develop and implement a competency-based approach to ESL for adults is…

  15. Dyslexia in Adults: Evidence for Deficits in Non-Word Reading and in the Phonological Representation of Lexical Items.

    ERIC Educational Resources Information Center

    Elbro, Carsten; And Others

    1994-01-01

    Compared to controls, adults (n=102) who reported a history of difficulties in learning to read were disabled in phonological coding, but less disabled in reading comprehension. Adults with poor phonological coding skills had basic deficits in phonological representations of spoken words, even when semantic word knowledge, phonemic awareness,…

  16. Reader for Advanced Spoken Tamil, Parts 1 and 2.

    ERIC Educational Resources Information Center

    Schiffman, Harold F.

    Part 1 of this reader consists of transcriptions of five Tamil radio plays, with exercises, notes, and discussion. Part 2 is a synopsis grammar and a glossary. Both are intended for advanced students of Tamil who have had at least two years of instruction in the spoken language at the college level. The materials have been tested in classroom use…

  17. Pedagogy for Liberation: Spoken Word Poetry in Urban Schools

    ERIC Educational Resources Information Center

    Fiore, Mia

    2015-01-01

    The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…

  18. Feature Statistics Modulate the Activation of Meaning during Spoken Word Processing

    ERIC Educational Resources Information Center

    Devereux, Barry J.; Taylor, Kirsten I.; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K.

    2016-01-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in ("distinctiveness/sharedness") and likelihood of co-occurrence ("correlational…

  19. Adult-specific insulin-producing neurons in Drosophila melanogaster.

    PubMed

    Ohhara, Yuya; Kobayashi, Satoru; Yamakawa-Kobayashi, Kimiko; Yamanaka, Naoki

    2018-06-01

Holometabolous insects undergo metamorphosis to reorganize their behavioral and morphological features into adult-specific ones. In the central nervous system (CNS), some larval neurons undergo programmed cell death, whereas others go through remodeling of axonal and dendritic arbors to support functions of re-established adult organs. Although there are multiple neuropeptides that have stage-specific roles in holometabolous insects, the reorganization pattern of the entire neuropeptidergic system through metamorphosis still remains largely unclear. In this study, we conducted a mapping and lineage tracing of peptidergic neurons in the larval and adult CNS by using Drosophila genetic tools. We found that Diuretic hormone 44-producing median neurosecretory cells start expressing Insulin-like peptide 2 in the pharate adult stage. This neuronal cluster projects to the corpora cardiaca and dorsal vessel in both larval and adult stages, and also innervates an adult-specific structure in the digestive tract, the crop. We propose that the adult-specific insulin-producing cells may regulate functions of the digestive system in a stage-specific manner. Our study provides a neuroanatomical basis for understanding remodeling of the neuropeptidergic system during insect development and evolution.

  20. User-Centred Design for Chinese-Oriented Spoken English Learning System

    ERIC Educational Resources Information Center

    Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting

    2016-01-01

    Oral production is an important part in English learning. Lack of a language environment with efficient instruction and feedback is a big issue for non-native speakers' English spoken skill improvement. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…

  1. Orthographic Facilitation Effects on Spoken Word Production: Evidence from Chinese

    ERIC Educational Resources Information Center

    Zhang, Qingfang; Weekes, Brendan Stuart

    2009-01-01

    The aim of this experiment was to investigate the time course of orthographic facilitation on picture naming in Chinese. We used a picture-word paradigm to investigate orthographic and phonological facilitation on monosyllabic spoken word production in native Mandarin speakers. Both the stimulus-onset asynchrony (SOA) and the picture-word…

  2. Immediate effects of anticipatory coarticulation in spoken-word recognition

    PubMed Central

    Salverda, Anne Pier; Kleinschmidt, Dave; Tanenhaus, Michael K.

    2014-01-01

Two visual-world experiments examined listeners’ use of pre-word-onset anticipatory coarticulation in spoken-word recognition. Experiment 1 established the shortest lag with which information in the speech signal influences eye-movement control, using stimuli such as “The … ladder is the target”. With a neutral token of the definite article preceding the target word, saccades to the referent were not more likely than saccades to an unrelated distractor until 200–240 ms after the onset of the target word. In Experiment 2, utterances contained definite articles carrying natural anticipatory coarticulation pertaining to the onset of the target word (“The ladder … is the target”). A simple Gaussian classifier was able to predict the initial sound of the upcoming target word from formant information in the first few pitch periods of the article’s vowel. With these stimuli, effects of speech on eye-movement control began about 70 ms earlier than in Experiment 1, suggesting rapid use of anticipatory coarticulation. The results are interpreted as support for “data explanation” approaches to spoken-word recognition. Methodological implications for visual-world studies are also discussed. PMID:24511179
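
    The "simple Gaussian classifier" mentioned above can be sketched as follows: fit a Gaussian per class to formant measurements and classify a new token by maximum likelihood. The F1/F2 values, the class labels, and the diagonal-covariance choice are assumptions for illustration, not the study's actual data or model:

    ```python
    import numpy as np

    def fit_gaussians(X, y):
        """Per-class mean and (diagonal) variance of the formant features."""
        return {c: (X[y == c].mean(axis=0), X[y == c].var(axis=0) + 1e-6)
                for c in np.unique(y)}

    def log_likelihood(x, mean, var):
        # Diagonal-Gaussian log density (up to the usual constant structure).
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

    def classify(x, model):
        return max(model, key=lambda c: log_likelihood(x, *model[c]))

    # Toy F1/F2 values (Hz) measured from the article's vowel, labelled by the
    # following consonant's place of articulation (hypothetical data).
    X = np.array([[350.0, 1800.0], [360.0, 1750.0],   # before coronal
                  [380.0, 1100.0], [390.0, 1050.0]])  # before labial
    y = np.array(["coronal", "coronal", "labial", "labial"])

    model = fit_gaussians(X, y)
    print(classify(np.array([355.0, 1780.0]), model))  # → coronal
    ```

    The point of the classifier in the study was diagnostic: if the upcoming consonant is predictable from the vowel's formants, the coarticulatory information is in principle available to listeners.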

  3. Grammatical Deviations in the Spoken and Written Language of Hebrew-Speaking Children With Hearing Impairments.

    PubMed

    Tur-Kaspa, Hana; Dromi, Esther

    2001-04-01

    The present study reports a detailed analysis of written and spoken language samples of Hebrew-speaking children aged 11-13 years who are deaf. It focuses on the description of various grammatical deviations in the two modalities. Participants were 13 students with hearing impairments (HI) attending special classrooms integrated into two elementary schools in Tel Aviv, Israel, and 9 students with normal hearing (NH) in regular classes in these same schools. Spoken and written language samples were collected from all participants using the same five preplanned elicitation probes. Students with HI were found to display significantly more grammatical deviations than their NH peers in both their spoken and written language samples. Most importantly, between-modality differences were noted. The participants with HI exhibited significantly more grammatical deviations in their written language samples than in their spoken samples. However, the distribution of grammatical deviations across categories was similar in the two modalities. The most common grammatical deviations in order of their frequency were failure to supply obligatory morphological markers, failure to mark grammatical agreement, and the omission of a major syntactic constituent in a sentence. Word order violations were rarely recorded in the Hebrew samples. Performance differences in the two modalities encourage clinicians and teachers to facilitate target linguistic forms in diverse communication contexts. Furthermore, the identification of linguistic targets for intervention must be based on the unique grammatical structure of the target language.

  4. AG Bell Academy Certification Program for Listening and Spoken Language Specialists: Meeting a World-Wide Need for Qualified Professionals

    ERIC Educational Resources Information Center

    Goldberg, Donald M.; Dickson, Cheryl L.; Flexer, Carol

    2010-01-01

    This article discusses the AG Bell Academy for Listening and Spoken Language--an organization designed to build capacity of certified Listening and Spoken Language Specialists (LSLS) by defining and maintaining a set of professional standards for LSLS professionals and thereby addressing the global deficit of qualified LSLS. Definitions and…

  5. Functions of Japanese Exemplifying Particles in Spoken and Written Discourse

    ERIC Educational Resources Information Center

    Taylor, Yuki Io

    2010-01-01

    This dissertation examines how the Japanese particles "nado", "toka", and "tari" which all may be translated as "such as", "etc.", or "like" behave differently in written and spoken discourse. According to traditional analyses (e.g. Martin, 1987), these particles are assumed to be Exemplifying Particles (EP) used to provide concrete examples to…

  6. On-Line Syntax: Thoughts on the Temporality of Spoken Language

    ERIC Educational Resources Information Center

    Auer, Peter

    2009-01-01

    One fundamental difference between spoken and written language has to do with the "linearity" of speaking in time, in that the temporal structure of speaking is inherently the outcome of an interactive process between speaker and listener. But despite the status of "linearity" as one of Saussure's fundamental principles, in practice little more…

  7. The Impact of Orthographic Consistency on German Spoken Word Identification

    ERIC Educational Resources Information Center

    Beyermann, Sandra; Penke, Martina

    2014-01-01

    An auditory lexical decision experiment was conducted to find out whether sound-to-spelling consistency has an impact on German spoken word processing, and whether such an impact is different at different stages of reading development. Four groups of readers (school children in the second, third and fifth grades, and university students)…

  8. A Spoken Word Count (Children--Ages 5, 6 and 7).

    ERIC Educational Resources Information Center

    Wepman, Joseph M.; Hass, Wilbur

    Relatively little research has been done on the quantitative characteristics of children's word usage. This spoken count was undertaken to investigate those aspects of word usage and frequency which could cast light on lexical processes in grammar and verbal development in children. Three groups of 30 children each (boys and girls) from…

  9. L2 Gender Facilitation and Inhibition in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Behney, Jennifer N.

    2011-01-01

    This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…

  10. How long-term memory and accentuation interact during spoken language comprehension.

    PubMed

    Li, Xiaoqing; Yang, Yufang

    2013-04-01

Spoken language comprehension requires immediate integration of different information types, such as semantics, syntax, and prosody. Meanwhile, both the information derived from speech signals and the information retrieved from long-term memory exert their influence on language comprehension immediately. Using EEG (electroencephalogram), the present study investigated how the information retrieved from long-term memory interacts with accentuation during spoken language comprehension. Mini Chinese discourses were used as stimuli, with an interrogative or assertive context sentence preceding the target sentence. The target sentence included one critical word conveying new information. The critical word was either highly expected or lowly expected given the information retrieved from long-term memory. Moreover, the critical word was either consistently accented or inconsistently de-accented. The results revealed that for lowly expected new information, inconsistently de-accented words elicited a larger N400 and larger theta power increases (4-6 Hz) than consistently accented words. In contrast, for the highly expected new information, consistently accented words elicited a larger N400 and larger alpha power decreases (8-14 Hz) than inconsistently de-accented words. The results suggest that, during spoken language comprehension, the effect of accentuation interacted with the information retrieved from long-term memory immediately. Moreover, our results also have important consequences for our understanding of the processing nature of the N400. The N400 amplitude is not only enhanced for incorrect information (new and de-accented word) but also enhanced for correct information (new and accented words).

  11. Physical stature of adult Tsimane' Amerindians, Bolivian Amazon in the 20th century.

    PubMed

    Godoy, Ricardo A; Leonard, William R; Reyes-García, Victoria; Goodman, Elizabeth; McDade, Thomas; Huanca, Tomás; Tanner, Susan; Vadez, Vincent

    2006-06-01

We examine the association between exposure to the market and Western society on the height of adult Tsimane', a foraging-farming society in the Bolivian Amazon. As with other contemporary native peoples, we find little evidence of a significant secular change in height during 1920-1980. Female height bore a positive association with own schooling and fluency in spoken Spanish and with maternal modern human capital (schooling, writing ability, and fluency in spoken Spanish), but male height bore no association with parental height or with modern human capital. The absence of a secular change likely reflects the persistence of traditional forms of social organization and production that protect health.

  12. Attitudes towards Literary Tamil and Standard Spoken Tamil in Singapore

    ERIC Educational Resources Information Center

    Saravanan, Vanithamani

    2007-01-01

    This is the first empirical study that focused on attitudes towards two varieties of Tamil, Literary Tamil (LT) and Standard Spoken Tamil (SST), with the multilingual state of Singapore as the backdrop. The attitudes of 46 Singapore Tamil teachers towards speakers of LT and SST were investigated using the matched-guise approach along with…

  13. Animated and Static Concept Maps Enhance Learning from Spoken Narration

    ERIC Educational Resources Information Center

    Adesope, Olusola O.; Nesbit, John C.

    2013-01-01

    An animated concept map represents verbal information in a node-link diagram that changes over time. The goals of the experiment were to evaluate the instructional effects of presenting an animated concept map concurrently with semantically equivalent spoken narration. The study used a 2 x 2 factorial design in which an animation factor (animated…

  14. Call and Responsibility: Critical Questions for Youth Spoken Word Poetry

    ERIC Educational Resources Information Center

    Weinstein, Susan; West, Anna

    2012-01-01

    In this article, Susan Weinstein and Anna West embark on a critical analysis of the maturing field of youth spoken word poetry (YSW). Through a blend of firsthand experience, analysis of YSW-related films and television, and interview data from six years of research, the authors identify specific dynamics that challenge young poets as they…

  15. Automated Scoring of L2 Spoken English with Random Forests

    ERIC Educational Resources Information Center

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
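
    The approach described above, classifying learners into discrete proficiency levels from objectively measurable features, can be sketched with a tiny bootstrap ensemble of decision stumps, a deliberately simplified stand-in for a full random forest. The features ("word types per minute", "mean length of utterance"), the levels, and the fixed bootstrap subsets are invented for illustration:

    ```python
    import numpy as np

    def fit_stump(X, y):
        """Best single-feature threshold split, scored by training accuracy."""
        best = None
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                pred = (X[:, f] > t).astype(int)
                acc = (pred == y).mean()
                if best is None or acc > best[0]:
                    best = (acc, f, t)
        return best[1], best[2]

    def predict_stump(stump, X):
        f, t = stump
        return (X[:, f] > t).astype(int)

    # Toy features per learner: [word types per minute, mean length of utterance];
    # 0 = lower proficiency level, 1 = higher level.
    X = np.array([[20, 4.0], [25, 4.5], [30, 5.0], [60, 8.0], [65, 8.5], [70, 9.0]])
    y = np.array([0, 0, 0, 1, 1, 1])

    # Fixed 'bootstrap' subsets keep the sketch deterministic; a real random
    # forest would resample at random and also subsample features per split.
    bootstraps = [[0, 1, 3, 4], [1, 2, 4, 5], [0, 2, 3, 5]]
    forest = [fit_stump(X[idx], y[idx]) for idx in bootstraps]

    votes = np.mean([predict_stump(s, X) for s in forest], axis=0)
    print((votes > 0.5).astype(int))   # majority vote: predicted proficiency levels
    ```

    With more levels, trees, and features, the same vote-averaging idea scales to the multi-level automated scoring task the abstract describes.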

  16. Professional Training in Listening and Spoken Language--A Canadian Perspective

    ERIC Educational Resources Information Center

    Fitzpatrick, Elizabeth

    2010-01-01

    Several factors undoubtedly influenced the development of listening and spoken language options for children with hearing loss in Canada. The concept of providing auditory-based rehabilitation was popularized in Canada in the 1960s through the work of Drs. Daniel Ling and Agnes Ling in Montreal. The Lings founded the McGill University Project for…

  17. Spoken Persuasive Discourse Abilities of Adolescents with Acquired Brain Injury

    ERIC Educational Resources Information Center

    Moran, Catherine; Kirk, Cecilia; Powell, Emma

    2012-01-01

    Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…

  18. The Activation of Embedded Words in Spoken Word Identification Is Robust but Constrained: Evidence from the Picture-Word Interference Paradigm

    ERIC Educational Resources Information Center

    Bowers, Jeffrey S.; Davis, Colin J.; Mattys, Sven L.; Damian, Markus F.; Hanley, Derek

    2009-01-01

    Three picture-word interference (PWI) experiments assessed the extent to which embedded subset words are activated during the identification of spoken superset words (e.g., "bone" in "trombone"). Participants named aloud pictures (e.g., "brain") while spoken distractors were presented. In the critical condition,…

  19. Deep bottleneck features for spoken language identification.

    PubMed

    Jiang, Bing; Song, Yan; Wei, Si; Liu, Jun-Hua; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short-duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, the specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances, respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed.
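
    The Equal Error Rate (EER) reported above (e.g. 1.08% on 30 s utterances) is the operating point where the false-accept and false-reject rates of a detector are equal. A minimal sketch of computing it from detection scores, with toy scores and labels:

    ```python
    import numpy as np

    def eer(scores, labels):
        """labels: 1 = target language, 0 = non-target; higher score = more target-like."""
        order = np.argsort(-scores)               # sweep thresholds from high to low
        labels = labels[order]
        n_tar, n_non = labels.sum(), (1 - labels).sum()
        fr = 1.0 - np.cumsum(labels) / n_tar      # false-reject rate at each threshold
        fa = np.cumsum(1 - labels) / n_non        # false-accept rate at each threshold
        i = np.argmin(np.abs(fr - fa))            # point where the two rates cross
        return (fr[i] + fa[i]) / 2

    scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1])
    labels = np.array([1,   1,   0,   1,   1,   0,   0,   0])
    print(eer(scores, labels))                    # → 0.25
    ```

    Production evaluations interpolate the crossing point on the ROC/DET curve rather than taking the nearest sampled threshold, but the midpoint approximation above suffices for a sketch.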

  20. The Influence of Linguistic Proficiency on Masked Text Recognition Performance in Adults With and Without Congenital Hearing Impairment.

    PubMed

    Huysmans, Elke; Bolk, Elske; Zekveld, Adriana A; Festen, Joost M; de Groot, Annette M B; Goverts, S Theo

    2016-01-01

    The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing. In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups. Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, while no differences were observed between the AHI and NH group. This outcome pattern sustained when comparisons were restricted to subgroups of AHI and CHI adults, matched for current auditory speech reception abilities. The data yielded no differences between groups in performance in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Also, no difference between groups was found in the sensitivity to morphosyntactic distortions when processing short masked sentences, presented visually. These data showed that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities. However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded

  1. Building an ontology of pulmonary diseases with natural language processing tools using textual corpora.

    PubMed

    Baneyx, Audrey; Charlet, Jean; Jaulent, Marie-Christine

    2007-01-01

    Pathologies and acts are classified in thesauri to help physicians code their activity. In practice, the use of thesauri is not sufficient to reduce variability in coding, and thesauri are not suitable for computer processing. We think the automation of the coding task requires a conceptual model of medical items: an ontology. Our task is to help lung specialists code acts and diagnoses with software that represents the medical knowledge of the specialty as an ontology. The objective of the reported work was to build an ontology of pulmonary diseases dedicated to the coding process. To carry out this objective, we developed a precise methodological process for the knowledge engineer to follow when building various types of medical ontologies. This process is based on the need to express precisely, in natural language, the meaning of each concept using differential semantics principles. A differential ontology is a hierarchy of concepts and relationships organized according to their similarities and differences. Our main research hypothesis is that natural language processing tools applied to corpora can produce the resources needed to build the ontology. We consider two corpora, one composed of patient discharge summaries and the other a teaching book. We propose to combine two approaches to enrich ontology building: (i) building terminological resources through distributional analysis and (ii) observing corpus sequences in order to reveal semantic relationships. Our ontology currently includes 1550 concepts, and the software implementing the coding process is still under development. Results show that the proposed approach is operational and indicate that the combination of these methods, together with the comparison of the resulting terminological structures, gives a knowledge engineer useful clues for building an ontology.
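    The distributional analysis mentioned in this abstract groups terms by the contexts they share in a corpus. As a toy illustration of that idea only (the mini-corpus, terms, and window size below are invented, not taken from the paper), one can compare context-count vectors with cosine similarity:

    ```python
    from collections import Counter
    from math import sqrt

    def context_vector(term: str, sentences: list[str], window: int = 2) -> Counter:
        """Count words co-occurring with `term` within +/- `window` positions."""
        vec = Counter()
        for s in sentences:
            words = s.lower().split()
            for i, w in enumerate(words):
                if w == term:
                    lo, hi = max(0, i - window), i + window + 1
                    vec.update(x for x in words[lo:hi] if x != term)
        return vec

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two sparse count vectors."""
        dot = sum(a[k] * b[k] for k in a)
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    corpus = [
        "chronic bronchitis causes persistent cough",
        "acute bronchitis causes productive cough",
        "asthma causes wheezing and cough",
    ]
    # Terms with similar contexts are candidates for nearby ontology concepts.
    sim = cosine(context_vector("bronchitis", corpus), context_vector("asthma", corpus))
    print(round(sim, 2))  # → 0.5
    ```

    Real distributional analysis operates over much larger corpora and richer context features, but the principle is the same: similarity of contexts suggests proximity in the concept hierarchy.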

  2. Spoken Language Development in Oral Preschool Children with Permanent Childhood Deafness

    ERIC Educational Resources Information Center

    Sarant, Julia Z.; Holt, Colleen M.; Dowell, Richard C.; Rickards, Field W.

    2009-01-01

    This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were…

  3. Individual Differences in Inhibitory Control Relate to Bilingual Spoken Word Processing

    ERIC Educational Resources Information Center

    Mercier, Julie; Pivneva, Irina; Titone, Debra

    2014-01-01

    We investigated whether individual differences in inhibitory control relate to bilingual spoken word recognition. While their eye movements were monitored, native English and native French English-French bilinguals listened to English words (e.g., "field") and looked at pictures corresponding to the target, a within-language competitor…

  4. "Context and Spoken Word Recognition in a Novel Lexicon": Correction

    ERIC Educational Resources Information Center

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2009-01-01

    Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…

  5. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    PubMed

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  6. Applicability of the Spoken Knowledge in Low Literacy Patients with Diabetes in Brazilian elderly.

    PubMed

    Souza, Jonas Gordilho; Apolinario, Daniel; Farfel, José Marcelo; Jaluul, Omar; Magaldi, Regina Miksian; Busse, Alexandre Leopold; Campora, Flávia; Jacob-Filho, Wilson

    2016-01-01

    To translate, adapt and evaluate the properties of a Brazilian Portuguese version of the Spoken Knowledge in Low Literacy Patients with Diabetes, a questionnaire that evaluates diabetes knowledge. A cross-sectional study with type 2 diabetes patients aged ≥60 years, seen at a public healthcare organization in the city of São Paulo (SP). After developing the Portuguese version, we evaluated its psychometric properties and its association with sociodemographic and clinical variables. The regression models were adjusted for sociodemographic data, functional health literacy, duration of disease, use of insulin, and glycemic control. We evaluated 129 type 2 diabetic patients, with mean age of 75.9 (±6.2) years, mean schooling of 5.2 (±4.4) years, mean glycosylated hemoglobin of 7.2% (±1.4), and mean score on the Spoken Knowledge in Low Literacy Patients with Diabetes of 42.1% (±25.8). In the regression model, the variables independently associated with the Spoken Knowledge in Low Literacy Patients with Diabetes score were schooling (B=0.193; p=0.003), use of insulin (B=1.326; p=0.004), duration of diabetes (B=0.053; p=0.022) and health literacy (B=0.108; p=0.021). The coefficient of determination was 0.273. Cronbach's α was 0.75, demonstrating appropriate internal consistency. This translated version of the Spoken Knowledge in Low Literacy Patients with Diabetes proved adequate for evaluating diabetes knowledge in elderly patients with low schooling levels. It presented normal distribution and adequate internal consistency, with no ceiling or floor effect. The tool is easy to use, can be applied quickly, and does not depend on reading skills.
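    The internal-consistency statistic reported in this abstract (Cronbach's α = 0.75) is a standard computation over any respondents-by-items score matrix. A minimal sketch, using invented 0/1 responses rather than the study's actual data:

    ```python
    from statistics import variance

    def cronbach_alpha(items: list[list[float]]) -> float:
        """Cronbach's alpha for a respondents-by-items score matrix."""
        k = len(items[0])                                   # number of items
        item_vars = [variance(col) for col in zip(*items)]  # per-item sample variance
        total_var = variance([sum(row) for row in items])   # variance of summed scores
        return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

    # Hypothetical 0/1 responses: 6 respondents, 4 knowledge items.
    scores = [
        [1, 1, 1, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [1, 1, 1, 1],
        [1, 0, 1, 0],
        [0, 0, 1, 0],
    ]
    print(round(cronbach_alpha(scores), 2))  # → 0.67
    ```

    Values near or above 0.7, as in the study, are conventionally read as acceptable internal consistency for a short knowledge scale.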

  7. The Association Between Positive Relationships with Adults and Suicide-Attempt Resilience in American Indian Youth in New Mexico.

    PubMed

    FitzGerald, Courtney A; Fullerton, Lynne; Green, Dan; Hall, Meryn; Peñaloza, Linda J

    2017-01-01

    This study examined the 2013 New Mexico Youth Risk and Resiliency Survey (NM-YRRS) to determine whether cultural connectedness and positive relationships with adults protected against suicide attempts among American Indian and Alaska Native (AI/AN) youth and whether these relationships differed by gender. The sample included 2,794 AI/AN students in grades 9 to 12 who answered the question about past-year suicide attempts. Protective factor variables tested included relationships with adults at home, school, and the community. The language spoken at home was used as a proxy measure for cultural connectedness. Positive relationships with adults were negatively associated with the prevalence of past-year suicide attempts in bivariate analysis. However, language spoken at home was not associated with the prevalence of suicide attempts. Multivariate analysis showed that among girls, relationships with adults at home, at school, and in the community were independently associated with lower suicide-attempt prevalence. Among boys, only relationships with adults at home showed such an association. These results have important implications for the direction of future research about protective factors associated with AI/AN youth suicide risk as well as in the design of suicide intervention and prevention programs.

  8. Adult Speakers' Tongue-Palate Contact Patterns for Bilabial Stops within Complex Clusters

    ERIC Educational Resources Information Center

    Zharkova, Natalia; Schaeffler, Sonja; Gibbon, Fiona E.

    2009-01-01

    Previous studies using Electropalatography (EPG) have shown that individuals with speech disorders sometimes produce articulation errors that affect bilabial targets, but currently there is limited normative data available. In this study, EPG and acoustic data were recorded during complex word-final /sps/ clusters spoken by 20 normal adults. A total…

  9. Learning and Consolidation of New Spoken Words in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Henderson, Lisa; Powell, Anna; Gaskell, M. Gareth; Norbury, Courtenay

    2014-01-01

    Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary knowledge and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words…

  10. Teaching Spoken Discourse Markers Explicitly: A Comparison of III and PPP

    ERIC Educational Resources Information Center

    Jones, Christian; Carter, Ronald

    2014-01-01

    This article reports on mixed methods classroom research carried out at a British university. The study investigates the effectiveness of two different explicit teaching frameworks, Illustration--Interaction--Induction (III) and Present--Practice--Produce (PPP) used to teach the same spoken discourse markers (DMs) to two different groups of…

  11. Why Dose Frequency Affects Spoken Vocabulary in Preschoolers with Down Syndrome

    ERIC Educational Resources Information Center

    Yoder, Paul J.; Woynaroski, Tiffany; Fey, Marc E.; Warren, Steven F.; Gardner, Elizabeth

    2015-01-01

    In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only…

  12. Parent Telegraphic Speech Use and Spoken Language in Preschoolers with ASD

    ERIC Educational Resources Information Center

    Venker, Courtney E.; Bolt, Daniel M.; Meyer, Allison; Sindberg, Heidi; Weismer, Susan Ellis; Tager-Flusberg, Helen

    2015-01-01

    Purpose: There is considerable controversy regarding whether to use telegraphic or grammatical input when speaking to young children with language delays, including children with autism spectrum disorder (ASD). This study examined telegraphic speech use in parents of preschoolers with ASD and associations with children's spoken language 1 year…

  13. Authentic ESL Spoken Materials: Soap Opera and Sitcom versus Natural Conversation

    ERIC Educational Resources Information Center

    Al-Surmi, Mansoor Ali

    2012-01-01

    TV shows, especially soap operas and sitcoms, are usually considered by ESL practitioners as a source of authentic spoken conversational materials presumably because they reflect the linguistic features of natural conversation. However, practitioners might be faced with the dilemma of how to evaluate whether such conversational materials reflect…

  14. English Listeners Use Suprasegmental Cues to Lexical Stress Early During Spoken-Word Recognition

    PubMed Central

    Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., “Click on the word admiral”). Displays contained a critical pair of words (e.g., ˈadmiral–ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress. PMID:28056135

  15. Demonstration of a Spoken Dialogue Interface for Planning Activities of a Semi-autonomous Robot

    NASA Technical Reports Server (NTRS)

    Dowding, John; Frank, Jeremy; Hockey, Beth Ann; Jonsson, Ari; Aist, Gregory

    2002-01-01

    Planning and scheduling in the face of uncertainty and change pushes the capabilities of both planning and dialogue technologies by requiring complex negotiation to arrive at a workable plan. Planning for the use of semi-autonomous robots involves negotiation among multiple participants with competing scientific and engineering goals to co-construct a complex plan. In NASA applications this plan construction is done under severe time pressure, so a dialogue interface to the plan construction tools can aid rapid completion of the process. However, this puts significant demands on spoken dialogue technology, particularly in the areas of dialogue management and generation. The dialogue interface will need to handle the complex dialogue strategies that occur in negotiation dialogues, including hypotheticals and revisions, and the generation component will require an ability to summarize complex plans. This demonstration describes work in progress towards building a spoken dialogue interface to the EUROPA planner for the purposes of planning and scheduling the activities of a semi-autonomous robot. A prototype interface has been built for planning the schedule of the Personal Satellite Assistant (PSA), a mobile robot designed for micro-gravity environments that is intended for use on the Space Shuttle and International Space Station. The spoken dialogue interface gives the user the capability to ask for a description of the plan, ask specific questions about the plan, and update or modify the plan. We anticipate that a spoken dialogue interface to the planner will provide a natural augmentation or alternative to the visualization interface in situations in which the user needs very targeted information about the plan, in which natural language can express complex ideas more concisely than GUI actions, or in which a graphical user interface is not appropriate.

  16. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic.

    PubMed

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2018-01-01

    This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children's phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children's early morphological awareness in SpA explained variance in children's gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts.

  17. Development and Relationships Between Phonological Awareness, Morphological Awareness and Word Reading in Spoken and Standard Arabic

    PubMed Central

    Schiff, Rachel; Saiegh-Haddad, Elinor

    2018-01-01

    This study addressed the development of and the relationship between foundational metalinguistic skills and word reading skills in Arabic. It compared Arabic-speaking children’s phonological awareness (PA), morphological awareness, and voweled and unvoweled word reading skills in spoken and standard language varieties separately in children across five grade levels from childhood to adolescence. Second, it investigated whether skills developed in the spoken variety of Arabic predict reading in the standard variety. Results indicate that although individual differences between students in PA are eliminated toward the end of elementary school in both spoken and standard language varieties, gaps in morphological awareness and in reading skills persisted through junior and high school years. The results also show that the gap in reading accuracy and fluency between Spoken Arabic (SpA) and Standard Arabic (StA) was evident in both voweled and unvoweled words. Finally, regression analyses showed that morphological awareness in SpA contributed to reading fluency in StA, i.e., children’s early morphological awareness in SpA explained variance in children’s gains in reading fluency in StA. These findings have important theoretical and practical contributions for Arabic reading theory in general and they extend the previous work regarding the cross-linguistic relevance of foundational metalinguistic skills in the first acquired language to reading in a second language, as in societal bilingualism contexts, or a second language variety, as in diglossic contexts. PMID:29686633

  18. Spoken Grammar Practice and Feedback in an ASR-Based CALL System

    ERIC Educational Resources Information Center

    de Vries, Bart Penning; Cucchiarini, Catia; Bodnar, Stephen; Strik, Helmer; van Hout, Roeland

    2015-01-01

    Speaking practice is important for learners of a second language. Computer assisted language learning (CALL) systems can provide attractive opportunities for speaking practice when combined with automatic speech recognition (ASR) technology. In this paper, we present a CALL system that offers spoken practice of word order, an important aspect of…

  19. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    ERIC Educational Resources Information Center

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  20. Monitoring the Performance of Human and Automated Scores for Spoken Responses

    ERIC Educational Resources Information Center

    Wang, Zhen; Zechner, Klaus; Sun, Yu

    2018-01-01

    As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…

  1. Using Language Sample Analysis to Assess Spoken Language Production in Adolescents

    ERIC Educational Resources Information Center

    Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann

    2016-01-01

    Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…

  2. Corrective Feedback, Spoken Accuracy and Fluency, and the Trade-Off Hypothesis

    ERIC Educational Resources Information Center

    Chehr Azad, Mohammad Hassan; Farrokhi, Farahman; Zohrabi, Mohammad

    2018-01-01

    The current study was an attempt to investigate the effects of different corrective feedback (CF) conditions on Iranian EFL learners' spoken accuracy and fluency (AF) and the trade-off between them. Consequently, four pre-intermediate intact classes were randomly selected as the control, delayed explicit metalinguistic CF, extensive recast, and…

  3. Research on Spoken Language Processing. Progress Report No. 21 (1996-1997).

    ERIC Educational Resources Information Center

    Pisoni, David B.

    This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…

  4. Targeted Help for Spoken Dialogue Systems: Intelligent Feedback Improves Naive Users' Performance

    NASA Technical Reports Server (NTRS)

    Hockey, Beth Ann; Lemon, Oliver; Campana, Ellen; Hiatt, Laura; Aist, Gregory; Hieronymous, Jim; Gruenstein, Alexander; Dowding, John

    2003-01-01

    We present experimental evidence that providing naive users of a spoken dialogue system with immediate help messages related to their out-of-coverage utterances improves their success in using the system. A grammar-based recognizer and a Statistical Language Model (SLM) recognizer are run simultaneously. If the grammar-based recognizer succeeds, the less accurate SLM recognizer hypothesis is not used. When the grammar-based recognizer fails and the SLM recognizer produces a recognition hypothesis, this result is used by the Targeted Help agent to give the user feedback on what was recognized, a diagnosis of what was problematic about the utterance, and a related in-coverage example. The in-coverage example is intended to encourage alignment between user inputs and the language model of the system. We report on controlled experiments on a spoken dialogue system for command and control of a simulated robotic helicopter.

  5. WARCProcessor: An Integrative Tool for Building and Management of Web Spam Corpora.

    PubMed

    Callón, Miguel; Fdez-Glez, Jorge; Ruano-Ordás, David; Laza, Rosalía; Pavón, Reyes; Fdez-Riverola, Florentino; Méndez, Jose Ramón

    2017-12-22

    In this work we present the design and implementation of WARCProcessor, a novel multiplatform integrative tool aimed at building scientific datasets that facilitate experimentation in web spam research. The application allows the user to specify multiple criteria that change the way in which new corpora are generated, while reducing the number of repetitive and error-prone tasks involved in maintaining existing corpora. To this end, WARCProcessor supports up to six data sources commonly used in web spam research and can store the output corpus in standard WARC format together with complementary metadata files. Additionally, the application facilitates the automatic and concurrent download of web sites from the Internet, allowing configuration of the depth of the links to be followed as well as the behaviour when redirected URLs appear. WARCProcessor provides both an interactive GUI and a command-line utility for execution in the background.

  6. WARCProcessor: An Integrative Tool for Building and Management of Web Spam Corpora

    PubMed Central

    Callón, Miguel; Fdez-Glez, Jorge; Ruano-Ordás, David; Laza, Rosalía; Pavón, Reyes; Méndez, Jose Ramón

    2017-01-01

    In this work we present the design and implementation of WARCProcessor, a novel multiplatform integrative tool aimed at building scientific datasets that facilitate experimentation in web spam research. The application allows the user to specify multiple criteria that change the way in which new corpora are generated, while reducing the number of repetitive and error-prone tasks involved in maintaining existing corpora. To this end, WARCProcessor supports up to six data sources commonly used in web spam research and can store the output corpus in standard WARC format together with complementary metadata files. Additionally, the application facilitates the automatic and concurrent download of web sites from the Internet, allowing configuration of the depth of the links to be followed as well as the behaviour when redirected URLs appear. WARCProcessor provides both an interactive GUI and a command-line utility for execution in the background. PMID:29271913

  7. Italians Use Abstract Knowledge about Lexical Stress during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Sulpizio, Simone; McQueen, James M.

    2012-01-01

    In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress.…

  8. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    ERIC Educational Resources Information Center

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  9. Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2011-10-01

    We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effects of naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? Second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that sufficient processing time is needed for the auditory stimulus to access its associated meaning and modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
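    The sensitivity and response-criterion estimates mentioned in this abstract follow standard signal detection theory: d′ and criterion c are computed from z-transformed hit and false-alarm rates. A generic sketch (the 0.9 and 0.3 rates below are invented for illustration, not the study's data):

    ```python
    from statistics import NormalDist

    def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
        """Return (d', c) from hit and false-alarm rates, via the inverse normal CDF."""
        z = NormalDist().inv_cdf
        d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
        return d_prime, criterion

    # Hypothetical rates: 90% hits, 30% false alarms.
    d, c = sdt_measures(0.9, 0.3)
    print(round(d, 2), round(c, 2))  # → 1.81 -0.38
    ```

    A d′ near zero would indicate chance-level detection; the framework's priming effects show up as changes in d′ rather than in the criterion c.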

  10. I Feel You: The Design and Evaluation of a Domotic Affect-Sensitive Spoken Conversational Agent

    PubMed Central

    Lutfi, Syaheerah Lebai; Fernández-Martínez, Fernando; Lorenzo-Trueba, Jaime; Barra-Chicote, Roberto; Montero, Juan Manuel

    2013-01-01

    We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified, as is done in most existing dialog systems, in order to be adaptive. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely, frustration and contentment, from dialog features, a non-conventional source, in the attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study, in which both versions of the agent (non-adaptive and emotionally adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion to a spoken conversational agent, especially in mitigating users' frustration and, ultimately, improving their satisfaction. PMID:23945740

  11. Effort Not Speed Characterizes Comprehension of Spoken Sentences by Older Adults with Mild Hearing Impairment

    PubMed Central

    Ayasse, Nicole D.; Lash, Amanda; Wingfield, Arthur

    2017-01-01

    In spite of the rapidity of everyday speech, older adults tend to keep up relatively well in day-to-day listening. In laboratory settings older adults do not respond as quickly as younger adults in off-line tests of sentence comprehension, but the question is whether comprehension itself is actually slower. Two unique features of the human eye were used to address this question. First, we tracked eye-movements as 20 young adults and 20 healthy older adults listened to sentences that referred to one of four objects pictured on a computer screen. Although the older adults took longer to indicate the referenced object with a cursor-pointing response, their gaze moved to the correct object as rapidly as that of the younger adults. Second, we concurrently measured dilation of the pupil of the eye as a physiological index of effort. This measure revealed that although poorer hearing acuity did not slow processing, success came at the cost of greater processing effort. PMID:28119598

  12. Effort Not Speed Characterizes Comprehension of Spoken Sentences by Older Adults with Mild Hearing Impairment.

    PubMed

    Ayasse, Nicole D; Lash, Amanda; Wingfield, Arthur

    2016-01-01

    In spite of the rapidity of everyday speech, older adults tend to keep up relatively well in day-to-day listening. In laboratory settings older adults do not respond as quickly as younger adults in off-line tests of sentence comprehension, but the question is whether comprehension itself is actually slower. Two unique features of the human eye were used to address this question. First, we tracked eye-movements as 20 young adults and 20 healthy older adults listened to sentences that referred to one of four objects pictured on a computer screen. Although the older adults took longer to indicate the referenced object with a cursor-pointing response, their gaze moved to the correct object as rapidly as that of the younger adults. Second, we concurrently measured dilation of the pupil of the eye as a physiological index of effort. This measure revealed that although poorer hearing acuity did not slow processing, success came at the cost of greater processing effort.

  13. Using Unscripted Spoken Texts in the Teaching of Second Language Listening

    ERIC Educational Resources Information Center

    Wagner, Elvis

    2014-01-01

    Most spoken texts that are used in second language (L2) listening classroom activities are scripted texts, where the text is written, revised, polished, and then read aloud with artificially clear enunciation and slow rate of speech. This article explores the field's overreliance on these scripted texts, at the expense of including unscripted…

  14. The Contribution of the Inferior Parietal Cortex to Spoken Language Production

    ERIC Educational Resources Information Center

    Geranmayeh, Fatemeh; Brownsett, Sonia L. E.; Leech, Robert; Beckmann, Christian F.; Woodhead, Zoe; Wise, Richard J. S.

    2012-01-01

    This functional MRI study investigated the involvement of the left inferior parietal cortex (IPC) in spoken language production (Speech). Its role has been apparent in some studies but not others, and is not convincingly supported by clinical studies as they rarely include cases with lesions confined to the parietal lobe. We compared Speech with…

  15. Phonological Neighborhood Effects in Spoken Word Production: An fMRI Study

    ERIC Educational Resources Information Center

    Peramunage, Dasun; Blumstein, Sheila E.; Myers, Emily B.; Goldrick, Matthew; Baese-Berk, Melissa

    2011-01-01

    The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words, which either had a voiced minimal pair (MP) neighbor (e.g., cape) or lacked a minimal pair (NMP) neighbor (e.g., cake). The voiced neighbor never appeared in the…

  16. Investigating L2 Spoken English through the Role Play Learner Corpus

    ERIC Educational Resources Information Center

    Nava, Andrea; Pedrazzini, Luciana

    2011-01-01

    We describe an exploratory study carried out within the University of Milan, Department of English, the aim of which was to analyse features of the spoken English of first-year Modern Languages undergraduates. We compiled a learner corpus, the "Role Play" corpus, which consisted of 69 role-play interactions in English carried out by…

  17. Do Phonological Constraints on the Spoken Word Affect Visual Lexical Decision?

    ERIC Educational Resources Information Center

    Lee, Yang; Moreno, Miguel A.; Carello, Claudia; Turvey, M. T.

    2013-01-01

    Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language's writing system and the assimilation of phonemes according to the language's constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes…

  18. Learning and consolidation of new spoken words in autism spectrum disorder.

    PubMed

    Henderson, Lisa; Powell, Anna; Gareth Gaskell, M; Norbury, Courtenay

    2014-11-01

    Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary knowledge and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words and/or integrating them with existing knowledge. Nineteen boys with ASD and 19 typically developing (TD) boys matched on age and vocabulary knowledge showed similar improvements in recognition and recall of novel words (e.g. 'biscal') 24 hours after training, suggesting an intact ability to consolidate explicit knowledge of new spoken word forms. TD children showed competition effects for existing neighbors (e.g. 'biscuit') after 24 hours, suggesting that the new words had been integrated with existing knowledge over time. In contrast, children with ASD showed immediate competition effects that were not significant after 24 hours, suggesting a qualitative difference in the time course of lexical integration. These results are considered from the perspective of the dual-memory systems framework. © 2014 John Wiley & Sons Ltd.

  19. Role of Working Memory in Children's Understanding Spoken Narrative: A Preliminary Investigation

    ERIC Educational Resources Information Center

    Montgomery, James W.; Polunenko, Anzhela; Marinellie, Sally A.

    2009-01-01

    The role of phonological short-term memory (PSTM), attentional resource capacity/allocation, and processing speed on children's spoken narrative comprehension was investigated. Sixty-seven children (6-11 years) completed a digit span task (PSTM), concurrent verbal processing and storage (CPS) task (resource capacity/allocation), auditory-visual…

  20. A Transactional Model of Spoken Vocabulary Variation in Toddlers with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Woynaroski, Tiffany; Yoder, Paul J.; Fey, Marc E.; Warren, Steven F.

    2014-01-01

    Purpose: The authors examined (a) whether dose frequency of milieu communication teaching (MCT) affects children's canonical syllabic communication and (b) whether the relation between early canonical syllabic communication and later spoken vocabulary is mediated by parental linguistic mapping in children with intellectual disabilities (ID).…

  1. Differential Processing of Thematic and Categorical Conceptual Relations in Spoken Word Production

    ERIC Educational Resources Information Center

    de Zubicaray, Greig I.; Hansen, Samuel; McMahon, Katie L.

    2013-01-01

    Studies of semantic context effects in spoken word production have typically distinguished between categorical (or taxonomic) and associative relations. However, associates tend to confound semantic features or morphological representations, such as whole-part relations and compounds (e.g., BOAT-anchor, BEE-hive). Using a picture-word interference…

  2. Speech perception in older adults: the importance of speech-specific cognitive abilities.

    PubMed

    Sommers, M S

    1997-05-01

    To provide a critical evaluation of studies examining the contribution of changes in language-specific cognitive abilities to the speech perception difficulties of older adults. A review of the literature on aging and speech perception. The research considered in the present review suggests that age-related changes in absolute sensitivity are the principal factor affecting older listeners' speech perception in quiet. However, under less favorable listening conditions, changes in a number of speech-specific cognitive abilities can also affect spoken language processing in older people. Clinically, these findings suggest that hearing aids, which have been the traditional treatment for improving speech perception in older adults, are likely to offer considerable benefit in quiet listening situations because the amplification they provide can serve to compensate for age-related hearing losses. However, such devices may be less beneficial in more natural environments (e.g., noisy backgrounds, multiple talkers, reverberant rooms) because they are less effective for improving speech perception difficulties that result from age-related cognitive declines. It is suggested that an integrative approach to designing test batteries that can assess both sensory and cognitive abilities needed for processing spoken language offers the most promising approach for developing therapeutic interventions to improve speech perception in older adults.

  3. A dual contribution to the involuntary semantic processing of unexpected spoken words.

    PubMed

    Parmentier, Fabrice B R; Turner, Jacqueline; Perez, Laura

    2014-02-01

    Sounds are a major cause of distraction. Unexpected to-be-ignored auditory stimuli presented in the context of an otherwise repetitive acoustic background ineluctably break through selective attention and distract people from an unrelated visual task (deviance distraction). This involuntary capture of attention by deviant sounds has been hypothesized to trigger their semantic appraisal and, in some circumstances, interfere with ongoing performance, but it remains unclear how such processing compares with the automatic processing of distractors in classic interference tasks (e.g., Stroop, flanker, Simon tasks). Using a cross-modal oddball task, we assessed the involuntary semantic processing of deviant sounds in the presence and absence of deviance distraction. The results revealed that some involuntary semantic analysis of spoken distractors occurs in the absence of deviance distraction but that this processing is significantly greater in its presence. We conclude that the automatic processing of spoken distractors reflects 2 contributions, one that is contingent upon deviance distraction and one that is independent from it.

  4. Probabilistic Phonotactics as a Cue for Recognizing Spoken Cantonese Words in Speech

    ERIC Educational Resources Information Center

    Yip, Michael C. W.

    2017-01-01

    Previous experimental psycholinguistic studies suggested that probabilistic phonotactic information may signal the locations of word boundaries in continuous speech, offering a possible solution to the empirical question of how we recognize/segment individual spoken words in speech. We investigated this issue by using…

  5. Comparing Spoken Language Treatments for Minimally Verbal Preschoolers with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna

    2013-01-01

    Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in…

  6. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    ERIC Educational Resources Information Center

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…

  7. How Struggling Adult Readers Use Contextual Information When Comprehending Speech: Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Ng, Shukhan; Payne, Brennan R.; Stine-Morrow, Elizabeth A. L.; Federmeier, Kara D.

    2018-01-01

    We investigated how struggling adult readers make use of sentence context to facilitate word processing when comprehending spoken language, conditions under which print decoding is not a barrier to comprehension. Stimuli were strongly and weakly constraining sentences (as measured by cloze probability), which ended with the most expected word…

  8. Conducting spoken word recognition research online: Validation and a new timing method.

    PubMed

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.
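
The validation logic this abstract describes, checking whether online (AMT) scores track in-lab scores for the same items, amounts to a correlation across items. A minimal sketch follows; the per-word accuracy values are invented for illustration and are not the study's data.

```python
# Hedged sketch (not the authors' code): correlating hypothetical per-word
# identification accuracies collected in the lab and online (AMT), the kind
# of validation the abstract describes. All numbers are invented.
import math

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Made-up (lab, online) accuracy pairs for five words; online is a bit
# lower overall but tracks the same item-level variation:
lab    = [0.95, 0.88, 0.75, 0.60, 0.52]
online = [0.90, 0.80, 0.70, 0.55, 0.48]
print(round(pearson(lab, online), 3))  # → 0.997
```

A high r of this kind would mirror the paper's finding: online participants are less accurate in absolute terms, yet the two settings rank the items almost identically.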

  9. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    PubMed Central

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2011-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319

  10. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    PubMed

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and in the use of micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6- to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single-word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices correlated most strongly with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  11. From spoken narratives to domain knowledge: mining linguistic data for medical image understanding.

    PubMed

    Guo, Xuan; Yu, Qi; Alm, Cecilia Ovesdotter; Calvelli, Cara; Pelz, Jeff B; Shi, Pengcheng; Haake, Anne R

    2014-10-01

    Extracting useful visual clues from medical images allowing accurate diagnoses requires physicians' domain knowledge acquired through years of systematic study and clinical training. This is especially true in the dermatology domain, a medical specialty that requires physicians to have image inspection experience. Automating or at least aiding such efforts requires understanding physicians' reasoning processes and their use of domain knowledge. Mining physicians' references to medical concepts in narratives during image-based diagnosis of a disease is an interesting research topic that can help reveal experts' reasoning processes. It can also be a useful resource to assist with design of information technologies for image use and for image case-based medical education systems. We collected data for analyzing physicians' diagnostic reasoning processes by conducting an experiment that recorded their spoken descriptions during inspection of dermatology images. In this paper we focus on the benefit of physicians' spoken descriptions and provide a general workflow for mining medical domain knowledge based on linguistic data from these narratives. The challenge of a medical image case can influence the accuracy of the diagnosis as well as how physicians pursue the diagnostic process. Accordingly, we define two lexical metrics for physicians' narratives--lexical consensus score and top N relatedness score--and evaluate their usefulness by assessing the diagnostic challenge levels of corresponding medical images. We also report on clustering medical images based on anchor concepts obtained from physicians' medical term usage. These analyses are based on physicians' spoken narratives that have been preprocessed by incorporating the Unified Medical Language System for detecting medical concepts. The image rankings based on lexical consensus score and on top 1 relatedness score are well correlated with those based on challenge levels (Spearman correlation>0.5 and Kendall
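
The rank-correlation check reported in this abstract (Spearman and Kendall agreement between metric-based and challenge-based image rankings) can be sketched in plain Python. The metric names come from the abstract, but the scores and challenge levels below are invented; `spearman` is Pearson computed on ranks, and `kendall_tau` is the simple tau-a without tie correction.

```python
# Hedged sketch (invented data, not the study's): rank-correlating an image
# ordering by lexical consensus score with an ordering by challenge level.

def ranks(xs):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def kendall_tau(x, y):
    """Kendall tau-a: (concordant - discordant) pairs over all pairs."""
    conc = disc = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

# Hypothetical consensus scores and expert challenge levels for five images:
consensus = [0.9, 0.7, 0.6, 0.4, 0.2]
challenge = [5, 4, 4, 2, 1]
print(round(spearman(consensus, challenge), 3))    # → 0.975
print(round(kendall_tau(consensus, challenge), 3)) # → 0.9
```

Values above the paper's reported Spearman > 0.5 threshold would indicate, as in the study, that the lexical metrics order the images much as the challenge-level ratings do.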

  12. Examining the Time Course of Indexical Specificity Effects in Spoken Word Recognition

    ERIC Educational Resources Information Center

    McLennan, Conor T.; Luce, Paul A.

    2005-01-01

    Variability in talker identity and speaking rate, commonly referred to as indexical variation, has demonstrable effects on the speed and accuracy of spoken word recognition. The present study examines the time course of indexical specificity effects to evaluate the hypothesis that such effects occur relatively late in the perceptual processing of…

  13. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    ERIC Educational Resources Information Center

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  14. Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2011-01-01

    We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when…

  15. Enduring Advantages of Early Cochlear Implantation for Spoken Language Development

    PubMed Central

    Geers, Ann E.; Nicholas, Johanna G.

    2013-01-01

    Purpose To determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12–38 months of age. Relative advantages of receiving a bilateral CI after age 4.5, better pre-CI aided hearing, and longer CI experience were also examined. Method Sixty children participated in a prospective longitudinal study of outcomes at 4.5 and 10.5 years of age. Twenty-nine children received a sequential second CI. Test scores were compared to normative samples of hearing age-mates and predictors of outcomes identified. Results Standard scores on language tests at 10.5 years of age remained significantly correlated with age of first cochlear implantation. Scores were not associated with receipt of a second, sequentially-acquired CI. Significantly higher scores were achieved for vocabulary as compared with overall language, a finding not evident when the children were tested at younger ages. Conclusion Age-appropriate spoken language skills continued to be more likely with younger AOI, even after an average of 8.6 years of additional CI use. Receipt of a second implant between ages 4–10 years and longer duration of device use did not provide significant added benefit. PMID:23275406

  16. Online Lexical Competition during Spoken Word Recognition and Word Learning in Children and Adults

    ERIC Educational Resources Information Center

    Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth

    2013-01-01

    Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children…

  17. The Influence of Recent Scene Events on Spoken Comprehension: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Knoeferle, Pia; Crocker, Matthew W.

    2007-01-01

    Evidence from recent experiments that monitored attention in clipart scenes during spoken comprehension suggests that people preferably rely on non-stereotypical depicted events over stereotypical thematic knowledge for incremental interpretation. "The Coordinated Interplay Account [Knoeferle, P., & Crocker, M. W. (2006). "The coordinated…

  18. Using Spoken Language Benchmarks to Characterize the Expressive Language Skills of Young Children With Autism Spectrum Disorders

    PubMed Central

    Weismer, Susan Ellis

    2015-01-01

    Purpose Spoken language benchmarks proposed by Tager-Flusberg et al. (2009) were used to characterize communication profiles of toddlers with autism spectrum disorders and to investigate if there were differences in variables hypothesized to influence language development at different benchmark levels. Method The communication abilities of a large sample of toddlers with autism spectrum disorders (N = 105) were characterized in terms of spoken language benchmarks. The toddlers were grouped according to these benchmarks to investigate whether there were differences in selected variables across benchmark groups at a mean age of 2.5 years. Results The majority of children in the sample presented with uneven communication profiles with relative strengths in phonology and significant weaknesses in pragmatics. When children were grouped according to one expressive language domain, across-group differences were observed in response to joint attention and gestures but not cognition or restricted and repetitive behaviors. Conclusion The spoken language benchmarks are useful for characterizing early communication profiles and investigating features that influence expressive language growth. PMID:26254475

  19. Dose-Volume Parameters of the Corpora Cavernosa Do Not Correlate With Erectile Dysfunction After External Beam Radiotherapy for Prostate Cancer: Results From a Dose-Escalation Trial

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wielen, Gerard J. van der; Hoogeman, Mischa S.; Dohle, Gert R.

    2008-07-01

    Purpose: To analyze the correlation between dose-volume parameters of the corpora cavernosa and erectile dysfunction (ED) after external beam radiotherapy (EBRT) for prostate cancer. Methods and Materials: Between June 1997 and February 2003, a randomized dose-escalation trial comparing 68 Gy and 78 Gy was conducted. Patients at our institute were asked to participate in an additional part of the trial evaluating sexual function. After exclusion of patients with less than 2 years of follow-up, ED at baseline, or treatment with hormonal therapy, 96 patients were eligible. The proximal corpora cavernosa (crura), the superiormost 1-cm segment of the crura, and the penile bulb were contoured on the planning computed tomography scan and dose-volume parameters were calculated. Results: Two years after EBRT, 35 of the 96 patients had developed ED. No statistically significant correlations between ED 2 years after EBRT and dose-volume parameters of the crura, the superiormost 1-cm segment of the crura, or the penile bulb were found. The few patients using potency aids typically indicated that they had ED. Conclusion: No correlation was found between ED after EBRT for prostate cancer and radiation dose to the crura or penile bulb. The present study is the largest study evaluating the correlation between ED and radiation dose to the corpora cavernosa after EBRT for prostate cancer. Until there is clear evidence that sparing the penile bulb or crura will reduce ED after EBRT, we advise caution in sparing these structures, especially when this involves reducing treatment margins.
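
The dose-volume parameters the abstract refers to are summary statistics of the dose received by a contoured structure. A minimal sketch, with entirely invented per-voxel doses, shows two common ones: the mean dose and Vx, the percentage of the structure's volume receiving at least x Gy. Correlating such parameters with an outcome like ED is then a separate statistical step.

```python
# Hedged, illustrative sketch (not the trial's analysis code): dose-volume
# parameters for a contoured structure, from hypothetical per-voxel doses
# in Gy. All numbers are invented.

def mean_dose(doses):
    """Mean dose over the structure's voxels, in Gy."""
    return sum(doses) / len(doses)

def v_x(doses, threshold_gy):
    """Vx: percent of the structure's volume receiving >= threshold_gy."""
    return 100.0 * sum(1 for d in doses if d >= threshold_gy) / len(doses)

# Made-up voxel doses for a small structure such as the penile bulb:
bulb = [12.0, 25.5, 40.0, 55.0, 61.0, 70.0, 33.0, 18.0]
print(round(mean_dose(bulb), 1))  # → 39.3
print(round(v_x(bulb, 50.0), 1))  # → 37.5
```

In practice these are read off a dose-volume histogram computed by the planning system over the full 3-D contour; the list of voxel doses here just stands in for that.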

  20. Cross-modal representation of spoken and written word meaning in left pars triangularis.

    PubMed

    Liuzzi, Antonietta Gabriella; Bruffaerts, Rose; Peeters, Ronald; Adamczuk, Katarzyna; Keuleers, Emmanuel; De Deyne, Simon; Storms, Gerrit; Dupont, Patrick; Vandenberghe, Rik

    2017-04-15

    The correspondence in meaning extracted from written versus spoken input remains to be fully understood neurobiologically. Here, in a total of 38 subjects, the functional anatomy of cross-modal semantic similarity for concrete words was determined based on a dual criterion: First, a voxelwise univariate analysis had to show significant activation during a semantic task (property verification) performed with written and spoken concrete words compared to the perceptually matched control condition. Second, in an independent dataset, in these clusters, the similarity in fMRI response pattern to two distinct entities, one presented as a written and the other as a spoken word, had to correlate with the similarity in meaning between these entities. The left ventral occipitotemporal transition zone and ventromedial temporal cortex, retrosplenial cortex, pars orbitalis bilaterally, and the left pars triangularis were all activated in the univariate contrast. Only the left pars triangularis showed a cross-modal semantic similarity effect. There was no effect of phonological or orthographic similarity in this region. The cross-modal semantic similarity effect was confirmed by a secondary analysis in the cytoarchitectonically defined BA45. A semantic similarity effect was also present in the ventral occipital regions but only within the visual modality, and in the anterior superior temporal cortex only within the auditory modality. This study provides direct evidence for the coding of word meaning in BA45 and positions its contribution to semantic processing at the confluence of input-modality specific pathways that code for meaning within the respective input modalities. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Interference Effects on the Recall of Pictures, Printed Words, and Spoken Words.

    ERIC Educational Resources Information Center

    Burton, John K.; Bruning, Roger H.

    1982-01-01

    Nouns were presented in triads as pictures, printed words, or spoken words and followed by various types of interference. Measures of short- and long-term memory were obtained. In short-term memory, pictorial superiority occurred with acoustic, and visual and acoustic, but not visual interference. Long-term memory showed superior recall for…

  2. Context Effects and Spoken Word Recognition of Chinese: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Yip, Michael C. W.; Zhai, Mingjun

    2018-01-01

    This study examined the time-course of context effects on spoken word recognition during Chinese sentence processing. We recruited 60 native Mandarin listeners to participate in an eye-tracking experiment. In this eye-tracking experiment, listeners were told to listen to a sentence carefully, which ended with a Chinese homophone, and look at…

  3. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    ERIC Educational Resources Information Center

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  4. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment

    PubMed Central

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

    Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070

  5. Spoken and Written Narratives in Swedish Children and Adolescents with Hearing Impairment

    ERIC Educational Resources Information Center

    Asker-Arnason, Lena; Akerlund, Viktoria; Skoglund, Cecilia; Ek-Lagergren, Ingela; Wengelin, Asa; Sahlen, Birgitta

    2012-01-01

    Twenty 10- to 18-year-old children and adolescents with varying degrees of hearing impairment (HI) and hearing aids (HA), ranging from mild-moderate to severe, produced picture-elicited narratives in a spoken and written version. Their performance was compared to that of 63 normally hearing (NH) peers within the same age span. The participants…

  6. Eye Movements to Pictures Reveal Transient Semantic Activation during Spoken Word Recognition

    ERIC Educational Resources Information Center

    Yee, Eiling; Sedivy, Julie C.

    2006-01-01

    Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an…

  7. Effects of Tasks on Spoken Interaction and Motivation in English Language Learners

    ERIC Educational Resources Information Center

    Carrero Pérez, Nubia Patricia

    2016-01-01

    Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL). It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction…

  8. Experimentally-induced Increases in Early Gesture Lead to Increases in Spoken Vocabulary

    PubMed Central

    LeBarton, Eve Sauer; Goldin-Meadow, Susan; Raudenbush, Stephen

    2014-01-01

    Differences in vocabulary that children bring with them to school can be traced back to the gestures they produce at 1;2, which, in turn, can be traced back to the gestures their parents produce at the same age (Rowe & Goldin-Meadow, 2009b). We ask here whether child gesture can be experimentally increased and, if so, whether the increases lead to increases in spoken vocabulary. Fifteen children aged 1;5 participated in an 8-week at-home intervention study (6 weekly training sessions plus follow-up 2 weeks later) in which all were exposed to object words, but only some were told to point at the named objects. Before each training session and at follow-up, children interacted naturally with caregivers to establish a baseline against which changes in communication were measured. Children who were told to gesture increased the number of gesture meanings they conveyed, not only during training but also during interactions with caregivers. These experimentally-induced increases in gesture led to larger spoken repertoires at follow-up. PMID:26120283

  9. Differential processing of thematic and categorical conceptual relations in spoken word production.

    PubMed

    de Zubicaray, Greig I; Hansen, Samuel; McMahon, Katie L

    2013-02-01

    Studies of semantic context effects in spoken word production have typically distinguished between categorical (or taxonomic) and associative relations. However, associates tend to confound semantic features or morphological representations, such as whole-part relations and compounds (e.g., BOAT-anchor, BEE-hive). Using a picture-word interference paradigm and functional magnetic resonance imaging (fMRI), we manipulated categorical (COW-rat) and thematic (COW-pasture) TARGET-distractor relations in a balanced design, finding interference and facilitation effects on naming latencies, respectively, as well as differential patterns of brain activation compared with an unrelated distractor condition. While both types of distractor relation activated the middle portion of the left middle temporal gyrus (MTG) consistent with retrieval of conceptual or lexical representations, categorical relations involved additional activation of posterior left MTG, consistent with retrieval of a lexical cohort. Thematic relations involved additional activation of the left angular gyrus. These results converge with recent lesion evidence implicating the left inferior parietal lobe in processing thematic relations and may indicate a potential role for this region during spoken word production. 2013 APA, all rights reserved

  10. Receipt and use of spoken and written over-the-counter medicine information: insights into Australian and UK consumers' experiences.

    PubMed

    Tong, Vivien; Raynor, David K; Aslani, Parisa

    2018-04-01

    To explore Australian and UK consumers' receipt and use of spoken and written medicine information and examine the role of leaflets for consumers of over-the-counter (OTC) medicines. Semistructured interviews were conducted with 37 Australian and 39 UK consumers to explore information received with their most recent OTC medicine purchase, and how information was used at different times post-purchase. Interviews were audio-recorded, transcribed verbatim and thematically analysed. Similarities were evident between the key themes identified from Australian and UK consumers' experiences. Consumers infrequently sought spoken information and reported that pharmacy staff provided minimal spoken information for OTC medicines. Leaflets were not always received or wanted and had a less salient role as an information source for repeat OTC purchases. Consumers tended not to read OTC labels or leaflets. Product familiarity led to consumers tending not to seek information on labels or leaflets. When labels were consulted, directions for use were commonly read. However, OTC medicine information in general was infrequently revisited. As familiarity is not an infallible proxy for safe and effective medication use, strategies to promote the value and use of these OTC medicine information sources are important and needed. Minimal spoken information provision coupled with limited written information use may adversely impact medication safety in self-management. © 2017 Royal Pharmaceutical Society.

  11. An Investigation of Spoken Brazilian Portuguese: Part I, Technical Report. Final Report.

    ERIC Educational Resources Information Center

    Hutchins, John A.

    This final report of a study which developed a working corpus of spoken and written Portuguese from which syntactical studies could be conducted includes computer-processed data on which the findings and analysis are based. A data base, obtained by taping some 487 conversations between Brazil and the United States, serves as the corpus from which…

  12. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    ERIC Educational Resources Information Center

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied…

  13. Children's Verbal Working Memory: Role of Processing Complexity in Predicting Spoken Sentence Comprehension

    ERIC Educational Resources Information Center

    Magimairaj, Beula M.; Montgomery, James W.

    2012-01-01

    Purpose: This study investigated the role of processing complexity of verbal working memory tasks in predicting spoken sentence comprehension in typically developing children. Of interest was whether simple and more complex working memory tasks have similar or different power in predicting sentence comprehension. Method: Sixty-five children (6- to…

  14. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language.

    PubMed

    Williams, Joshua T; Newman, Sharlene D

    2017-02-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and activation of the targets was compared in sparse and dense neighborhoods. Neighborhood density effects in the auditory-primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, increased inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.

  15. An investigation of phonology and orthography in spoken-word recognition.

    PubMed

    Slowiaczek, Louisa M; Soltano, Emily G; Wieting, Shani J; Bishop, Karyn L

    2003-02-01

    The possible influence of initial phonological and/or orthographic information on spoken-word processing was examined in six experiments modelled after and extending the work of Jakimik, Cole, and Rudnicky (1985). Following Jakimik et al., Experiment 1 used polysyllabic primes with monosyllabic targets (e.g., BUCKLE-BUCK; MYSTERY-MISS). Experiments 2, 3, and 4 used polysyllabic primes and polysyllabic targets whose initial syllables shared phonological information (e.g., NUISANCE-NOODLE), orthographic information (e.g., RATIO-RATIFY), both (e.g., FUNNEL-FUNNY), or were unrelated (e.g., SERMON-NOODLE). Participants engaged in a lexical decision (Experiments 1, 3, and 4) or a shadowing (Experiment 2) task with a single-trial (Experiments 2 and 3) or subsequent-trial (Experiments 1 and 4) priming procedure. Experiment 5 tested primes and targets that varied in the number of shared graphemes while holding shared phonemes constant at one. Experiment 6 used the procedures of Experiment 2 but a low proportion of related trials. Results revealed that response times were facilitated for prime-target pairs that shared initial phonological and orthographic information. These results were confirmed under conditions in which strategic processing was greatly reduced, suggesting that phonological and orthographic information is automatically activated during spoken-word processing.

  16. Semantic fluency in deaf children who use spoken and signed language in comparison with hearing peers

    PubMed Central

    Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2017-01-01

    Abstract Background Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language delays. Aims We compared deaf and hearing children's performance on a semantic fluency task. Optimal performance on this task requires a systematic search of the mental lexicon, the retrieval of words within a subcategory and, when that subcategory is exhausted, switching to a new subcategory. We compared retrieval patterns between groups, and also compared the responses of deaf children who used British Sign Language (BSL) with those who used spoken English. We investigated how semantic fluency performance related to children's expressive vocabulary and executive function skills, and also retested semantic fluency in the majority of the children nearly 2 years later, in order to investigate how much progress they had made in that time. Methods & Procedures Participants were deaf children aged 6–11 years (N = 106, comprising 69 users of spoken English, 29 users of BSL and eight users of Sign Supported English—SSE) compared with hearing children (N = 120) of the same age who used spoken English. Semantic fluency was tested for the category ‘animals’. We coded for errors, clusters (e.g., ‘pets’, ‘farm animals’) and switches. Participants also completed the Expressive One‐Word Picture Vocabulary Test and a battery of six non‐verbal executive function tasks. In addition, we collected follow‐up semantic fluency data for 70 deaf and 74 hearing children, nearly 2 years after they were first tested. Outcomes & Results Deaf children, whether using spoken or signed language, produced fewer items in the semantic fluency task than hearing children, but they showed similar patterns of responses for items

  17. Developing a corpus of spoken language variability

    NASA Astrophysics Data System (ADS)

    Carmichael, Lesley; Wright, Richard; Wassink, Alicia Beckford

    2003-10-01

    We are developing a novel, searchable corpus as a research tool for investigating phonetic and phonological phenomena across various speech styles. Five speech styles have been well studied independently in previous work: reduced (casual), careful (hyperarticulated), citation (reading), Lombard effect (speech in noise), and "motherese" (child-directed speech). Few studies to date have collected a wide range of styles from a single set of speakers, and fewer yet have provided publicly available corpora. The pilot corpus includes recordings of (1) a set of speakers participating in a variety of tasks designed to elicit the five speech styles, and (2) casual peer conversations and wordlists to illustrate regional vowels. The data include high-quality recordings and time-aligned transcriptions linked to text files that can be queried. Initial measures drawn from the database provide comparison across speech styles along the following acoustic dimensions: MLU (changes in unit duration); relative intra-speaker intensity changes (mean and dynamic range); and intra-speaker pitch values (minimum, maximum, mean, range). The corpus design will allow for a variety of analyses requiring control of demographic and style factors, including hyperarticulation variety, disfluencies, intonation, discourse analysis, and detailed spectral measures.
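    The intra-speaker measures listed above (pitch minimum, maximum, mean, and range; mean intensity and its dynamic range) amount to simple aggregations over a speaker's measurement frames. A minimal sketch, assuming an F0 track that marks unvoiced frames with `None`; the function names and data layout are illustrative, not part of the corpus tools themselves:

```python
def pitch_summary(f0_hz):
    """Summarize intra-speaker pitch (F0) values, skipping unvoiced frames (None)."""
    voiced = [f for f in f0_hz if f is not None]
    lo, hi = min(voiced), max(voiced)
    return {"min": lo, "max": hi, "mean": sum(voiced) / len(voiced), "range": hi - lo}

def intensity_summary(db):
    """Mean and dynamic range of intensity measurements, in dB."""
    return {"mean": sum(db) / len(db), "dyn_range": max(db) - min(db)}

# Example: an F0 track with unvoiced gaps, and an intensity track
f0 = [210.0, 215.0, None, 190.0, None, 230.0]
print(pitch_summary(f0))                      # mean 211.25, range 40.0
print(intensity_summary([62.0, 70.0, 66.0]))  # mean 66.0, dyn_range 8.0
```

    Comparing these per-speaker summaries across the five elicited styles is what allows, for instance, the Lombard-effect recordings to be contrasted with casual speech along the same dimensions.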

  18. Recombinative generalization of within-syllable units in nonreading adults with mental retardation.

    PubMed

    Saunders, Kathryn J; O'Donnell, Jennifer; Vaidya, Manish; Williams, Dean C

    2003-01-01

    Two adults with mental retardation demonstrated the recombination of within-syllable units (onsets and rimes) using a spoken-to-printed-word matching-to-sample (MTS) procedure. Further testing with 1 participant showed comprehension of the printed words. Printed-word naming was minimal before, but greater after, comprehension tests. The findings suggest that these procedures hold promise for further basic and applied analyses of word-attack skills.

  19. Strategies to Reduce the Negative Effects of Spoken Explanatory Text on Integrated Tasks

    ERIC Educational Resources Information Center

    Singh, Anne-Marie; Marcus, Nadine; Ayres, Paul

    2017-01-01

    Two experiments involving 125 grade-10 students learning about commerce investigated strategies to overcome the transient information effect caused by explanatory spoken text. The transient information effect occurs when learning is reduced as a result of information disappearing before the learner has time to adequately process it, or link it…

  20. Improving the Grammatical Accuracy of the Spoken English of Indonesian International Kindergarten Students

    ERIC Educational Resources Information Center

    Gozali, Imelda; Harjanto, Ignatius

    2014-01-01

    The need to improve the spoken English of kindergarten students in an international preschool in Surabaya prompted this Classroom Action Research (CAR). It involved the implementation of Form-Focused Instruction (FFI) strategy coupled with Corrective Feedback (CF) in Grammar lessons. Four grammar topics were selected, namely Regular Plural form,…

  1. Development of lexical-semantic language system: N400 priming effect for spoken words in 18- and 24-month old children.

    PubMed

    Rämä, Pia; Sirri, Louah; Serres, Josette

    2013-04-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related (e.g., train-bike) or unrelated (e.g., chicken-bike). The results showed that the N400-like priming effect occurred in 24-month-olds over the right parietal-occipital recording sites. In 18-month-olds the effect was observed similarly to 24-month-olds only in those children with higher word production ability. The results suggest that words are categorically organized in the mental lexicon of children at the age of 2 years, and even earlier in children with a high vocabulary. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing

    PubMed Central

    Gow, David W.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing. PMID:22498237

  3. Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans

    NASA Astrophysics Data System (ADS)

    Pei, Xiaomei; Barbour, Dennis L.; Leuthardt, Eric C.; Schalk, Gerwin

    2011-08-01

    Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility to 'read the mind', i.e. to interpret internally-generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain (electrocorticography) to discriminate the vowels and consonants embedded in spoken and in imagined words, and we defined the cortical areas that held the most information about discrimination of vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.

  4. The socially weighted encoding of spoken words: a dual-route approach to speech perception.

    PubMed

    Sumner, Meghan; Kim, Seung Kyung; King, Ed; McGowan, Kevin B

    2013-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially weighted, resulting in sparse, but high-resolution clusters of socially idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information.

  5. Recognition memory for Braille or spoken words: an fMRI study in early blind.

    PubMed

    Burton, Harold; Sinclair, Robert J; Agato, Alvin

    2012-02-15

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Presentation mode was in Braille or spoken. Responses were larger for identified "new" words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words, and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted participants noted larger responses for "new" words studied in association with pictures, which created a distinctiveness heuristic source factor that enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustively recollecting the sensory properties of "old" words in modality-appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Spoken Language Processing in the Clarissa Procedure Browser

    NASA Technical Reports Server (NTRS)

    Rayner, M.; Hockey, B. A.; Renders, J.-M.; Chatzichrisafis, N.; Farrell, K.

    2005-01-01

    Clarissa, an experimental voice-enabled procedure browser that has recently been deployed on the International Space Station, is as far as we know the first spoken dialog system in space. We describe the objectives of the Clarissa project and the system's architecture. In particular, we focus on three key problems: grammar-based speech recognition using the Regulus toolkit; methods for open-mic speech recognition; and robust side-effect free dialogue management for handling undos, corrections and confirmations. We first describe the grammar-based recogniser we have built using Regulus, and report experiments where we compare it against a class N-gram recogniser trained on the same 3297-utterance dataset. We obtained a 15% relative improvement in WER and a 37% improvement in semantic error rate. The grammar-based recogniser moreover outperforms the class N-gram version for utterances of all lengths from 1 to 9 words inclusive. The central problem in building an open-mic speech recognition system is being able to distinguish between commands directed at the system and other material (cross-talk), which should be rejected. Most spoken dialogue systems make the accept/reject decision by applying a threshold to the recognition confidence score. We show how a simple and general method, based on standard approaches to document classification using Support Vector Machines, can give substantially better performance, and report experiments showing a relative reduction in the task-level error rate of about 25% compared to the baseline confidence-threshold method. Finally, we describe a general side-effect free dialogue management architecture that we have implemented in Clarissa, which extends the "update semantics" framework by including task as well as dialogue information in the information state. We show that this enables elegant treatments of several dialogue management problems, including corrections, confirmations, querying of the environment, and regression
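    The open-mic accept/reject idea above, classifying each recognized utterance as a command versus cross-talk rather than thresholding a confidence score, can be illustrated with a toy bag-of-words linear classifier. This sketch substitutes a simple perceptron for the Support Vector Machine the abstract describes, to keep it self-contained, and the training utterances are invented:

```python
from collections import defaultdict

def featurize(utterance):
    """Bag-of-words features, as in simple document classification."""
    feats = defaultdict(int)
    for tok in utterance.lower().split():
        feats[tok] += 1
    return feats

def train_perceptron(examples, epochs=10):
    """examples: list of (utterance, label), label +1 = command, -1 = cross-talk."""
    w = defaultdict(float)
    for _ in range(epochs):
        for text, label in examples:
            feats = featurize(text)
            score = sum(w[f] * v for f, v in feats.items())
            if label * score <= 0:  # misclassified: nudge weights toward the label
                for f, v in feats.items():
                    w[f] += label * v
    return w

def accept(w, utterance):
    """Accept (route to the dialogue manager) iff classified as a command."""
    feats = featurize(utterance)
    return sum(w[f] * v for f, v in feats.items()) > 0

# Invented training data: procedure-browser commands vs. astronaut cross-talk
train = [
    ("next step", 1), ("go to step three", 1), ("undo that", 1),
    ("set challenge verify mode on", 1),
    ("what time is lunch", -1), ("did you see the game", -1),
    ("we should check with houston later", -1),
]
w = train_perceptron(train)
print(accept(w, "go to the next step"))  # True: command-like
print(accept(w, "see you at lunch"))     # False: cross-talk, rejected
```

    A real system would also fold in recognizer confidence and acoustic features, but the classifier framing is what replaces the single confidence threshold.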

  7. N400 brain responses to spoken phrases paired with photographs of scenes: implications for visual scene displays in AAC systems.

    PubMed

    Wilkinson, Krista M; Stutzman, Allyson; Seisler, Andrea

    2015-03-01

    Augmentative and alternative communication (AAC) systems are often implemented for individuals whose speech cannot meet their full communication needs. One type of aided display is called a Visual Scene Display (VSD). VSDs consist of integrated scenes (such as photographs) in which language concepts are embedded. Often, the representations of concepts on VSDs are perceptually similar to their referents. Given this physical resemblance, one may ask how well VSDs support development of symbolic functioning. We used brain imaging techniques to examine whether matches and mismatches between the content of spoken messages and photographic images of scenes evoke neural activity similar to activity that occurs to spoken or written words. Electroencephalography (EEG) was recorded from 15 college students who were shown photographs paired with spoken phrases that were either matched or mismatched to the concepts embedded within each photograph. Of interest was the N400 component, a negative deflecting wave 400 ms post-stimulus that is considered to be an index of semantic functioning. An N400 response in the mismatched condition (but not the matched) would replicate brain responses to traditional linguistic symbols. An N400 was found, exclusively in the mismatched condition, suggesting that mismatches between spoken messages and VSD-type representations set the stage for the N400 in ways similar to traditional linguistic symbols.

  8. Test Anxiety Analysis of Chinese College Students in Computer-Based Spoken English Test

    ERIC Educational Resources Information Center

    Yanxia, Yang

    2017-01-01

    Test anxiety was a commonly known or assumed factor that could greatly influence performance of test takers. With the employment of designed questionnaires and computer-based spoken English test, this paper explored test anxiety manifestation of Chinese college students from both macro and micro aspects, and found out that the major anxiety in…

  9. Contribution to terminology internationalization by word alignment in parallel corpora.

    PubMed

    Deléger, Louise; Merkel, Magnus; Zweigenbaum, Pierre

    2006-01-01

    Background and objectives: Creating a complete translation of a large vocabulary is a time-consuming task, which requires skilled and knowledgeable medical translators. Our goal is to examine to which extent such a task can be alleviated by a specific natural language processing technique, word alignment in parallel corpora. We experiment with translation from English to French. Methods: Build a large corpus of parallel, English-French documents, and automatically align it at the document, sentence and word levels using state-of-the-art alignment methods and tools. Then project English terms from existing controlled vocabularies to the aligned word pairs, and examine the number and quality of the putative French translations obtained thereby. We considered three American vocabularies present in the UMLS with three different translation statuses: the MeSH, SNOMED CT, and the MedlinePlus Health Topics. Results: We obtained several thousand new translations of our input terms, this number being closely linked to the number of terms in the input vocabularies. Conclusion: Our study shows that alignment methods can extract a number of new term translations from large bodies of text with a moderate human reviewing effort, and thus contribute to help a human translator obtain better translation coverage of an input vocabulary. Short-term perspectives include their application to a corpus 20 times larger than that used here, together with more focused methods for term extraction.
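    The projection step described above, harvesting candidate French translations for English vocabulary terms from word-aligned text and ranking them for human review, can be sketched in a few lines. The alignment links and terms here are invented examples; a real pipeline would consume links emitted by a word aligner rather than a hard-coded list:

```python
from collections import Counter, defaultdict

def project_terms(aligned_pairs, english_terms):
    """Collect candidate translations for controlled-vocabulary terms from
    word-aligned parallel text.  aligned_pairs: iterable of (en_word, fr_word)
    links from a word aligner; english_terms: vocabulary terms to project."""
    candidates = defaultdict(Counter)
    terms = {t.lower() for t in english_terms}
    for en, fr in aligned_pairs:
        if en.lower() in terms:
            candidates[en.lower()][fr.lower()] += 1
    # Rank candidate translations by alignment frequency for human review
    return {t: c.most_common() for t, c in candidates.items()}

# Toy alignment links (invented) from an English-French parallel corpus
links = [("fever", "fièvre"), ("fever", "fièvre"), ("fever", "température"),
         ("headache", "céphalée"), ("liver", "foie")]
print(project_terms(links, ["Fever", "Headache"]))
# fever -> fièvre (2), température (1); headache -> céphalée (1)
```

    The frequency ranking is what keeps the human reviewing effort moderate: a translator checks a short, ordered candidate list per term instead of searching the corpus.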

  11. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    PubMed Central

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  12. The influence of talker and foreign-accent variability on spoken word identification.

    PubMed

    Bent, Tessa; Holt, Rachael Frush

    2013-03-01

    In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.

  13. Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language

    ERIC Educational Resources Information Center

    Nicholas, Johanna G.; Geers, Ann E.

    2008-01-01

    Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed…

  14. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    PubMed

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity

  15. Deaf Children With Cochlear Implants Do Not Appear to Use Sentence Context to Help Recognize Spoken Words

    PubMed Central

    Conway, Christopher M.; Deocampo, Joanne A.; Walk, Anne M.; Anaya, Esperanza M.; Pisoni, David B.

    2015-01-01

    Purpose The authors investigated the ability of deaf children with cochlear implants (CIs) to use sentence context to facilitate the perception of spoken words. Method Deaf children with CIs (n = 24) and an age-matched group of children with normal hearing (n = 31) were presented with lexically controlled sentences and were asked to repeat each sentence in its entirety. Performance was analyzed at each of 3 word positions of each sentence (first, second, and third key word). Results Whereas the children with normal hearing showed robust effects of contextual facilitation—improved speech perception for the final words in a sentence—the deaf children with CIs on average showed no such facilitation. Regression analyses indicated that for the deaf children with CIs, Forward Digit Span scores significantly predicted accuracy scores for all 3 positions, whereas performance on the Stroop Color and Word Test, Children’s Version (Golden, Freshwater, & Golden, 2003) predicted how much contextual facilitation was observed at the final word. Conclusions The pattern of results suggests that some deaf children with CIs do not use sentence context to improve spoken word recognition. The inability to use sentence context may be due to possible interactions between language experience and cognitive factors that affect the ability to successfully integrate temporal–sequential information in spoken language. PMID:25029170

  16. The Debate over Literary Tamil versus Standard Spoken Tamil: What Do Teachers Say?

    ERIC Educational Resources Information Center

    Saravanan, Vanithamani; Lakshmi, Seetha; Caleon, Imelda S.

    2009-01-01

    This study aims to determine the attitudes toward Standard Spoken Tamil (SST) and Literary Tamil (LT) of 46 Tamil teachers in Singapore. The teachers' attitudes were used as an indicator of the acceptance or nonacceptance of SST as a viable option in the teaching of Tamil in the classroom, in which the focus has been largely on LT. The…

  17. Listening comprehension across the adult lifespan.

    PubMed

    Sommers, Mitchell S; Hale, Sandra; Myerson, Joel; Rose, Nathan; Tye-Murray, Nancy; Spehar, Brent

    2011-01-01

Although age-related declines in perceiving spoken language are well established, the primary focus of research has been on perception of phonemes, words, and sentences. In contrast, relatively few investigations have been directed at establishing the effects of age on the comprehension of extended spoken passages. Moreover, most previous work has used extreme-group designs in which the performance of a group of young adults is contrasted with that of a group of older adults and little if any information is available regarding changes in listening comprehension across the adult lifespan. Accordingly, the goals of the current investigation were to determine whether there are age differences in listening comprehension across the adult lifespan and, if so, whether similar trajectories are observed for age-related changes in auditory sensitivity and listening comprehension. This study used a cross-sectional lifespan design in which approximately 60 individuals in each of 7 decades, from age 20 to 89 yr (a total of 433 participants), were tested on three different measures of listening comprehension. In addition, we obtained measures of auditory sensitivity from all participants. Changes in auditory sensitivity across the adult lifespan exhibited the progressive high-frequency loss typical of age-related hearing impairment. Performance on the listening comprehension measures, however, demonstrated a very different pattern, with scores on all measures remaining relatively stable until age 65 to 70 yr, after which significant declines were observed. Follow-up analyses indicated that this same general pattern was observed across three different types of passages (lectures, interviews, and narratives) and three different question types (information, integration, and inference). Multiple regression analyses indicated that low-frequency pure-tone average was the single largest contributor to age-related variance in listening comprehension for individuals older than 65 yr, but…

  18. Intentional and Reactive Inhibition during Spoken-Word Stroop Task Performance in People with Aphasia

    ERIC Educational Resources Information Center

    Pompon, Rebecca Hunting; McNeil, Malcolm R.; Spencer, Kristie A.; Kendall, Diane L.

    2015-01-01

    Purpose: The integrity of selective attention in people with aphasia (PWA) is currently unknown. Selective attention is essential for everyday communication, and inhibition is an important part of selective attention. This study explored components of inhibition--both intentional and reactive inhibition--during spoken-word production in PWA and in…

  19. Perception and Lateralization of Spoken Emotion by Youths with High-Functioning Forms of Autism

    ERIC Educational Resources Information Center

    Baker, Kimberly F.; Montgomery, Allen A.; Abramson, Ruth

    2010-01-01

    The perception and the cerebral lateralization of spoken emotions were investigated in children and adolescents with high-functioning forms of autism (HFFA), and age-matched typically developing controls (TDC). A dichotic listening task using nonsense passages was used to investigate the recognition of four emotions: happiness, sadness, anger, and…

  20. FacetGist: Collective Extraction of Document Facets in Large Technical Corpora

    PubMed Central

    Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei

    2017-01-01

    Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets (e.g., application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept to facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes. PMID:28210517

  1. FacetGist: Collective Extraction of Document Facets in Large Technical Corpora.

    PubMed

    Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei

    2016-10-01

Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets (e.g., application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept to facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes.
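The core algorithmic idea named in this abstract, graph-based label propagation, can be sketched on a toy graph: a few seed mentions carry known facet labels, and unlabeled mentions repeatedly average their neighbors' label scores. The graph, weights, and labels below are invented for illustration; this is the generic propagation scheme, not FacetGist's actual joint optimization:

```python
# Toy mention graph: nodes are concept mentions, weighted edges connect
# co-occurring mentions; seeds carry known facet labels
# (0 = "technique", 1 = "dataset"). All values are hypothetical.
edges = {
    0: {1: 1.0, 2: 1.0},
    1: {0: 1.0},
    2: {0: 1.0, 3: 2.0},
    3: {2: 2.0},
}
seeds = {0: 0, 3: 1}  # node -> known facet label

def propagate(edges, seeds, n_labels, iters=50):
    """Iteratively spread label scores from seed nodes over the graph."""
    n = len(edges)
    F = [[0.0] * n_labels for _ in range(n)]
    for node, lab in seeds.items():
        F[node][lab] = 1.0
    for _ in range(iters):
        new_F = []
        for i in range(n):
            total = sum(edges[i].values())
            # Each node takes the degree-normalized average of its
            # neighbors' current label scores.
            new_F.append([sum(w * F[j][k] for j, w in edges[i].items()) / total
                          for k in range(n_labels)])
        F = new_F
        for node, lab in seeds.items():  # clamp seed labels each round
            F[node] = [0.0] * n_labels
            F[node][lab] = 1.0
    return [max(range(n_labels), key=lambda k: row[k]) for row in F]

print(propagate(edges, seeds, 2))  # predicted facet label per node
```

Node 2 sits between both seeds but is more strongly tied to the "dataset" seed, so it inherits that label; node 1 follows its only neighbor.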

  2. Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind

    PubMed Central

    Burton, Harold; Sinclair, Robert J.; Agato, Alvin

    2012-01-01

We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for “new” words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustively recollecting the sensory properties of “old” words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836

  3. "We Communicated That Way for a Reason": Language Practices and Language Ideologies among Hearing Adults Whose Parents Are Deaf

    ERIC Educational Resources Information Center

    Pizer, Ginger; Walters, Keith; Meier, Richard P.

    2013-01-01

    Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing…

  4. Beyond Rhyme or Reason: ERPs Reveal Task-Specific Activation of Orthography on Spoken Language

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Perre, Laetitia; Ziegler, Johannes C.

    2011-01-01

    Metaphonological tasks, such as rhyme judgment, have been the primary tool for the investigation of the effects of orthographic knowledge on spoken language. However, it has been recently argued that the orthography effect in rhyme judgment does not reflect the automatic activation of orthographic codes but rather stems from sophisticated response…

  5. Neural Correlates of Priming Effects in Children during Spoken Word Processing with Orthographic Demands

    ERIC Educational Resources Information Center

    Cao, Fan; Khalid, Kainat; Zaveri, Rishi; Bolger, Donald J.; Bitan, Tali; Booth, James R.

    2010-01-01

    Priming effects were examined in 40 children (9-15 years old) using functional magnetic resonance imaging (fMRI). An orthographic judgment task required participants to determine if two sequentially presented spoken words had the same spelling for the rime. Four lexical conditions were designed: similar orthography and phonology (O[superscript…

  6. A randomized comparison of the effect of two prelinguistic communication interventions on the acquisition of spoken communication in preschoolers with ASD.

    PubMed

    Yoder, Paul; Stone, Wendy L

    2006-08-01

    This randomized group experiment compared the efficacy of 2 communication interventions (Responsive Education and Prelinguistic Milieu Teaching [RPMT] and the Picture Exchange Communication System [PECS]) on spoken communication in 36 preschoolers with autism spectrum disorders (ASD). Each treatment was delivered to children for a maximum total of 24 hr over a 6-month period. Spoken communication was assessed in a rigorous test of generalization at pretreatment, posttreatment, and 6-month follow-up periods. PECS was more successful than RPMT in increasing the number of nonimitative spoken communication acts and the number of different nonimitative words used at the posttreatment period. Considering growth over all 3 measurement periods, an exploratory analysis showed that growth rate of the number of different nonimitative words was faster in the PECS group than in the RPMT group for children who began treatment with relatively high object exploration. In contrast, analogous slopes were steeper in the RPMT group than in the PECS group for children who began treatment with relatively low object exploration.

  7. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  8. A Task-Dependent Causal Role for Low-Level Visual Processes in Spoken Word Comprehension

    ERIC Educational Resources Information Center

    Ostarek, Markus; Huettig, Falk

    2017-01-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual…

  9. Encoding lexical tones in jTRACE: a simulation of monosyllabic spoken word recognition in Mandarin Chinese.

    PubMed

    Shuai, Lan; Malins, Jeffrey G

    2017-02-01

Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we drew on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then subsequently simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
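Why encoding tone as its own unit matters can be shown with a toy lexicon: if Mandarin monosyllables are represented as (onset, rime, tone) and candidates are scored by weighted feature overlap, a tonal-contrast pair (same segments, different tone) emerges as the strongest competitor. This is a deliberately simplified illustration of the competitor structure, not the interactive-activation dynamics of jTRACE itself; the lexicon and weights are invented:

```python
# Toy Mandarin monosyllable lexicon: word -> (onset, rime, tone).
# Entries and feature weights are hypothetical.
lexicon = {
    "ma1": ("m", "a", 1),
    "ma2": ("m", "a", 2),
    "mi1": ("m", "i", 1),
    "pa1": ("p", "a", 1),
}

def activation(word, target, weights=(1.0, 1.0, 0.8)):
    """Score a lexical candidate by weighted overlap with the target."""
    return sum(w for w, a, b in zip(weights, lexicon[word], lexicon[target])
               if a == b)

def ranked_competitors(target):
    """Competitors of `target`, strongest first."""
    others = [w for w in lexicon if w != target]
    return sorted(others, key=lambda w: -activation(w, target))

print(ranked_competitors("ma1"))
```

For input "ma1", the tonal-contrast pair "ma2" (shared onset and rime) outranks candidates that share only a segment plus the tone, mirroring the late resolution of tonal contrasts the study simulates.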

  10. Brain-to-text: decoding spoken phrases from phone representations in the brain.

    PubMed

    Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja

    2015-01-01

It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.

  11. Brain-to-text: decoding spoken phrases from phone representations in the brain

    PubMed Central

    Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja

    2015-01-01

It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech. PMID:26124702
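A core ASR technique a phone-based decoder like this builds on is Viterbi search: given frame-by-frame phone scores (here standing in for scores derived from neural activity) and transition scores, find the most likely phone sequence. The numbers below are invented toy values; this sketches the generic HMM-style decoding step, not the Brain-To-Text pipeline:

```python
import math

phones = ["b", "a", "t"]
# Per-frame phone likelihoods (rows: time frames, cols: phones) and simple
# left-to-right transition probabilities. All values are hypothetical.
lik = [
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
]
trans = [
    [0.5, 0.4, 0.1],
    [0.1, 0.5, 0.4],
    [0.1, 0.1, 0.8],
]

def viterbi(lik, trans):
    """Most likely phone index sequence under an HMM-style model."""
    n = len(trans)
    score = [math.log(p) for p in lik[0]]  # log-space to avoid underflow
    back = []                              # backpointers per frame
    for frame in lik[1:]:
        prev = score[:]
        back.append([])
        score = []
        for j in range(n):
            best_i = max(range(n), key=lambda i: prev[i] + math.log(trans[i][j]))
            back[-1].append(best_i)
            score.append(prev[best_i] + math.log(trans[best_i][j])
                         + math.log(frame[j]))
    path = [max(range(n), key=lambda j: score[j])]
    for bp in reversed(back):              # trace the best path backwards
        path.append(bp[path[-1]])
    return path[::-1]

decoded = [phones[i] for i in viterbi(lik, trans)]
print(decoded)  # best phone per frame, e.g. a "b-b-a-t" style path
```

In a full system, frame-level phone paths like this are further constrained by a pronunciation dictionary and language model to yield words.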

  12. Transnationalism and Rights in the Age of Empire: Spoken Word, Music, and Digital Culture in the Borderlands

    ERIC Educational Resources Information Center

    Hicks, Emily D.

    2004-01-01

This article documents cultural activities, including the performance of music and spoken word, in the San Diego-Tijuana region. The cultural activities described emerged from rhizomatic, transnational points of contact.

  13. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    PubMed

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in full attention condition. Attention manipulation could reduce priming magnitude in both experiments in L2. Moreover, L2 word retrieval increases the reaction times and reduces accuracy on the simultaneous secondary task to protect its own accuracy and speed.

  14. Brain basis of phonological awareness for spoken language in children and its disruption in dyslexia.

    PubMed

    Kovelman, Ioulia; Norton, Elizabeth S; Christodoulou, Joanna A; Gaab, Nadine; Lieberman, Daniel A; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-04-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7-13) and a younger group of kindergarteners (ages 5-6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia.

  15. Decreased Sensitivity to Phonemic Mismatch in Spoken Word Processing in Adult Developmental Dyslexia

    ERIC Educational Resources Information Center

    Janse, Esther; de Bree, Elise; Brouwer, Susanne

    2010-01-01

    Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as "procodile for crocodile") for the atypical population of dyslexic adults to see to what…

  16. TEES 2.2: Biomedical Event Extraction for Diverse Corpora

    PubMed Central

    2015-01-01

Background The Turku Event Extraction System (TEES) is a text mining program developed for the extraction of events, complex biomedical relationships, from scientific literature. Based on a graph-generation approach, the system detects events with the use of a rich feature set built via dependency parsing. The TEES system has achieved record performance in several of the shared tasks of its domain, and continues to be used in a variety of biomedical text mining tasks. Results The TEES system was quickly adapted to the BioNLP'13 Shared Task in order to provide a public baseline for derived systems. An automated approach was developed for learning the underlying annotation rules of event type, allowing immediate adaptation to the various subtasks, and leading to a first place in four out of eight tasks. The system for the automated learning of annotation rules is further enhanced in this paper to the point of requiring no manual adaptation to any of the BioNLP'13 tasks. Further, the scikit-learn machine learning library is integrated into the system, bringing a wide variety of machine learning methods usable with TEES in addition to the default SVM. A scikit-learn ensemble method is also used to analyze the importances of the features in the TEES feature sets. Conclusions The TEES system was introduced for the BioNLP'09 Shared Task and has since then demonstrated good performance in several other shared tasks. By applying the current TEES 2.2 system to multiple corpora from these past shared tasks an overarching analysis of the most promising methods and possible pitfalls in the evolving field of biomedical event extraction is presented. PMID:26551925

  17. TEES 2.2: Biomedical Event Extraction for Diverse Corpora.

    PubMed

    Björne, Jari; Salakoski, Tapio

    2015-01-01

    The Turku Event Extraction System (TEES) is a text mining program developed for the extraction of events, complex biomedical relationships, from scientific literature. Based on a graph-generation approach, the system detects events with the use of a rich feature set built via dependency parsing. The TEES system has achieved record performance in several of the shared tasks of its domain, and continues to be used in a variety of biomedical text mining tasks. The TEES system was quickly adapted to the BioNLP'13 Shared Task in order to provide a public baseline for derived systems. An automated approach was developed for learning the underlying annotation rules of event type, allowing immediate adaptation to the various subtasks, and leading to a first place in four out of eight tasks. The system for the automated learning of annotation rules is further enhanced in this paper to the point of requiring no manual adaptation to any of the BioNLP'13 tasks. Further, the scikit-learn machine learning library is integrated into the system, bringing a wide variety of machine learning methods usable with TEES in addition to the default SVM. A scikit-learn ensemble method is also used to analyze the importances of the features in the TEES feature sets. The TEES system was introduced for the BioNLP'09 Shared Task and has since then demonstrated good performance in several other shared tasks. By applying the current TEES 2.2 system to multiple corpora from these past shared tasks, an overarching analysis of the most promising methods and possible pitfalls in the evolving field of biomedical event extraction is presented.
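
    A rough sketch of the feature-importance analysis the abstract mentions, using a scikit-learn ensemble. The data below is a synthetic stand-in, not the TEES feature set, and the code illustrates the general technique rather than the authors' pipeline.

```python
# Hedged sketch: ranking features by importance with a scikit-learn
# ensemble, analogous to the analysis described for TEES 2.2.
# The data is synthetic; TEES uses rich dependency-parse features.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

# Synthetic stand-in for a binarized event-extraction feature matrix.
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# feature_importances_ sums to 1.0; higher values mark features the
# ensemble found more discriminative.
ranked = sorted(enumerate(clf.feature_importances_),
                key=lambda p: p[1], reverse=True)
for idx, imp in ranked[:3]:
    print(f"feature_{idx}: {imp:.3f}")
```

    On real TEES feature sets the same ranking would expose which dependency-parse features drive event detection.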

  18. Reconsidering the role of temporal order in spoken word recognition.

    PubMed

    Toscano, Joseph C; Anderson, Nathaniel D; McMurray, Bob

    2013-10-01

    Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.

  19. Effects of early auditory experience on the spoken language of deaf children at 3 years of age.

    PubMed

    Nicholas, Johanna Grant; Geers, Ann E

    2006-06-01

    By age 3, typically developing children have achieved extensive vocabulary and syntax skills that facilitate both cognitive and social development. Substantial delays in spoken language acquisition have been documented for children with severe to profound deafness, even those with auditory oral training and early hearing aid use. This study documents the spoken language skills achieved by orally educated 3-yr-olds whose profound hearing loss was identified and hearing aids fitted between 1 and 30 mo of age and who received a cochlear implant between 12 and 38 mo of age. The purpose of the analysis was to examine the effects of age, duration, and type of early auditory experience on spoken language competence at age 3.5 yr. The spoken language skills of 76 children who had used a cochlear implant for at least 7 mo were evaluated via standardized 30-minute language sample analysis, a parent-completed vocabulary checklist, and a teacher language-rating scale. The children were recruited from and enrolled in oral education programs or therapy practices across the United States. Inclusion criteria included presumed deaf since birth, English the primary language of the home, no other known conditions that interfere with speech/language development, enrolled in programs using oral education methods, and no known problems with the cochlear implant lasting more than 30 days. Strong correlations were obtained among all language measures. Therefore, principal components analysis was used to derive a single Language Factor score for each child. 
A number of possible predictors of language outcome were examined, including age at identification and intervention with a hearing aid, duration of use of a hearing aid, pre-implant pure-tone average (PTA) threshold with a hearing aid, PTA threshold with a cochlear implant, and duration of use of a cochlear implant/age at implantation (the last two variables were practically identical because all children were tested between 40 and 44

  20. Scope of Lexical Access in Spoken Sentence Production: Implications for the Conceptual-Syntactic Interface

    ERIC Educational Resources Information Center

    Allum, Paul H.; Wheeldon, L.

    2009-01-01

    Building on P. H. Allum and L. Wheeldon (2007), the authors conducted 5 experiments to investigate the scope of lexical access during spoken sentence production in Japanese and English. Speakers described pairs of pictured objects, and on critical trials, 1 object was previewed. In Japanese, sentence onset is speeded by the preview of each of the…

  1. Spoken and Gestural Production in a Naming Task by Young Children with Down Syndrome

    ERIC Educational Resources Information Center

    Stefanini, Silvia; Caselli, Maria Cristina; Volterra, Virginia

    2007-01-01

    Lexical production in children with Down syndrome (DS) was investigated by examining spoken naming accuracy and the use of spontaneous gestures in a picture naming task. Fifteen children with DS (range 3.8-8.3 years) were compared to typically developing children (TD), matched for chronological age and developmental age (range 2.6-4.3 years).…

  2. Using Key Part-of-Speech Analysis to Examine Spoken Discourse by Taiwanese EFL Learners

    ERIC Educational Resources Information Center

    Lin, Yen-Liang

    2015-01-01

    This study reports on a corpus analysis of samples of spoken discourse between a group of British and Taiwanese adolescents, with the aim of exploring the statistically significant differences in the use of grammatical categories between the two groups of participants. The key word method extended to a part-of-speech level using the web-based…

  3. Contribution of spoken language and socio-economic background to adolescents' educational achievement at age 16 years.

    PubMed

    Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert

    2017-03-01

    Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and socio-economic background during adolescence, particularly in populations without language impairment. To investigate: (1) whether adolescents with higher educational outcomes overall had higher language abilities; and (2) associations between adolescent language ability, socio-economic background and educational outcomes, specifically in relation to Mathematics, English Language and English Literature GCSE grade. A total of 151 participants completed five standardized language assessments measuring vocabulary, comprehension of sentences and spoken paragraphs, and narrative skills and one nonverbal assessment when between 13 and 14 years old. These data were compared with the participants' educational achievement obtained upon leaving secondary education (16 years old). Univariate logistic regressions were employed to identify those language assessments and demographic factors that were associated with achieving a targeted A*-C grade in English Language, English Literature and Mathematics General Certificate of Secondary Education (GCSE) at 16 years. Further logistic regressions were then conducted to examine further the contribution of socio-economic background and spoken language skills in the multivariate models. Vocabulary, comprehension of sentences and spoken paragraphs, and mean length of utterance in a narrative task along with socio-economic background contributed to whether participants achieved an A*-C grade in GCSE Mathematics, English Language and English Literature. Nonverbal ability contributed to English Language and Mathematics. The results of multivariate logistic regressions then found that vocabulary skills
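
    The univariate logistic-regression step described here can be sketched as follows. All data and variable names below are synthetic placeholders, not the study's materials.

```python
# Hedged sketch of a univariate logistic regression: does a single
# language measure predict achieving an A*-C GCSE grade? The data
# is simulated with a built-in positive relationship.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
vocab = rng.normal(100, 15, size=150)        # standardized vocabulary scores
# Synthetic outcome: higher vocabulary -> higher odds of an A*-C grade.
p = 1 / (1 + np.exp(-(vocab - 100) / 10))
grade_ac = rng.binomial(1, p)                # 1 = achieved A*-C

model = LogisticRegression().fit(vocab.reshape(-1, 1), grade_ac)
odds_ratio = np.exp(model.coef_[0][0])       # odds ratio per score point
print(f"odds ratio per vocabulary point: {odds_ratio:.3f}")
```

    An odds ratio above 1 indicates the measure is positively associated with the grade outcome; the study fits one such model per assessment before moving to multivariate models.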

  4. "Poetry Is Not a Special Club": How Has an Introduction to the Secondary Discourse of Spoken Word Made Poetry a Memorable Learning Experience for Young People?

    ERIC Educational Resources Information Center

    Dymoke, Sue

    2017-01-01

    This paper explores the impact of a Spoken Word Education Programme (SWEP hereafter) on young people's engagement with poetry in a group of schools in London, UK. It does so with reference to the secondary Discourses of school-based learning and the Spoken Word community, an artistic "community of practice" into which they were being…

  5. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    PubMed

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  6. Spectrotemporal processing drives fast access to memory traces for spoken words.

    PubMed

    Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C

    2012-05-01

    The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems.

  7. Positive Emotional Language in the Final Words Spoken Directly Before Execution

    PubMed Central

    Hirschmüller, Sarah; Egloff, Boris

    2016-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one’s own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality. PMID:26793135
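
    The computerized quantitative text analysis described above reduces, at its core, to counting dictionary-matched emotion words. A toy sketch follows; the two word lists are illustrative placeholders, far smaller than a validated dictionary such as LIWC's.

```python
# Toy sketch of dictionary-based emotion word counting, the general
# technique behind the study's positivity measure. Word lists here
# are illustrative only, not a validated lexicon.
POSITIVE = {"love", "peace", "thank", "joy", "hope"}
NEGATIVE = {"fear", "hate", "pain", "sorry", "death"}

def emotion_proportions(text: str) -> tuple[float, float]:
    """Return (positive, negative) emotion words as a share of all tokens."""
    tokens = [w.strip(".,!?").lower() for w in text.split()]
    if not tokens:
        return 0.0, 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return pos / len(tokens), neg / len(tokens)

pos, neg = emotion_proportions("I love you all and thank everyone. No fear.")
print(pos > neg)  # True: more positive than negative emotion words
```

    The study's comparison of final statements against base rates amounts to computing such proportions over each corpus and testing the difference.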

  8. The Role of Oral Communicative Tasks (OCT) in Developing the Spoken Proficiency of Engineering Students

    ERIC Educational Resources Information Center

    Shantha, S.; Mekala, S.

    2017-01-01

    The mastery of speaking skills in English has become a major requisite in engineering industry. Engineers are expected to possess speaking skills for executing their routine activities and career prospects. The article focuses on the experimental study conducted to improve English spoken proficiency of Indian engineering students using task-based…

  9. The Roles of Tonal and Segmental Information in Mandarin Spoken Word Recognition: An Eyetracking Study

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2010-01-01

    We used eyetracking to examine how tonal versus segmental information influences spoken word recognition in Mandarin Chinese. Participants heard an auditory word and were required to identify its corresponding picture from an array that included the target item (chuang2, "bed"), a phonological competitor (segmental: chuang1 "window"; cohort:…

  10. Orthography and Modality Influence Speech Production in Adults and Children.

    PubMed

    Saletta, Meredith; Goffman, Lisa; Hogan, Tiffany P

    2016-12-01

    The acquisition of literacy skills influences the perception and production of spoken language. We examined if orthography influences implicit processing in speech production in child readers and in adult readers with low and high reading proficiency. Children (n = 17), adults with typical reading skills (n = 17), and adults demonstrating low reading proficiency (n = 18) repeated or read aloud nonwords varying in orthographic transparency. Analyses of implicit linguistic processing (segmental accuracy and speech movement stability) were conducted. The accuracy and articulatory stability of productions of the nonwords were assessed before and after repetition or reading. Segmental accuracy results indicate that all 3 groups demonstrated greater learning when they were able to read, rather than just hear, the nonwords. Speech movement results indicate that, for adults with poor reading skills, exposure to the nonwords in a transparent spelling reduces the articulatory variability of speech production. Reading skill was correlated with speech movement stability in the groups of adults. In children and adults, orthography interacts with speech production; all participants integrate orthography into their lexical representations. Adults with poor reading skills do not use the same reading or speaking strategies as children with typical reading skills.

  11. Orthography and Modality Influence Speech Production in Adults and Children

    PubMed Central

    Goffman, Lisa; Hogan, Tiffany P.

    2016-01-01

    Purpose The acquisition of literacy skills influences the perception and production of spoken language. We examined if orthography influences implicit processing in speech production in child readers and in adult readers with low and high reading proficiency. Method Children (n = 17), adults with typical reading skills (n = 17), and adults demonstrating low reading proficiency (n = 18) repeated or read aloud nonwords varying in orthographic transparency. Analyses of implicit linguistic processing (segmental accuracy and speech movement stability) were conducted. The accuracy and articulatory stability of productions of the nonwords were assessed before and after repetition or reading. Results Segmental accuracy results indicate that all 3 groups demonstrated greater learning when they were able to read, rather than just hear, the nonwords. Speech movement results indicate that, for adults with poor reading skills, exposure to the nonwords in a transparent spelling reduces the articulatory variability of speech production. Reading skill was correlated with speech movement stability in the groups of adults. Conclusions In children and adults, orthography interacts with speech production; all participants integrate orthography into their lexical representations. Adults with poor reading skills do not use the same reading or speaking strategies as children with typical reading skills. PMID:27942710

  12. Established and Evolving Spoken Research Process Genres: Plenary Lecture and Poster Session Discussions at Academic Conferences.

    ERIC Educational Resources Information Center

    Shalom, Celia

    1993-01-01

    Two types of main hall discussions at an ecology conference were studied: plenary lecture discussion as an accepted, established spoken research process genre with its own conventions; and poster session discussion as a fragile, embryonic research process genre struggling for definition. (14 references) (Author/LB)

  13. Defining Spoken Language Benchmarks and Selecting Measures of Expressive Language Development for Young Children With Autism Spectrum Disorders

    PubMed Central

    Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul

    2010-01-01

    Purpose The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken language ability in the expressive modality and to set benchmarks for determining a child’s language level in order to establish a framework for comparing outcomes across intervention studies. Method The National Institute on Deafness and Other Communication Disorders assembled a group of researchers with interests and experience in the study of language development and disorders in young children with autism spectrum disorders. The group worked for 18 months through a series of conference calls and correspondence, culminating in a meeting held in December 2007 to achieve consensus on these aims. Results The authors recommend moving away from using the term functional speech, replacing it with a developmental framework. Rather, they recommend multiple sources of information to define language phases, including natural language samples, parent report, and standardized measures. They also provide guidelines and objective criteria for defining children’s spoken language expression in three major phases that correspond to developmental levels between 12 and 48 months of age. PMID:19380608

  14. Mental Imagery as Revealed by Eye Movements and Spoken Predicates: A Test of Neurolinguistic Programming.

    ERIC Educational Resources Information Center

    Elich, Matthew; And Others

    1985-01-01

    Tested Bandler and Grinder's proposal that eye movement direction and spoken predicates are indicative of sensory modality of imagery. Subjects reported images in the three modes, but no relation between imagery and eye movements or predicates was found. Visual images were most vivid and often reported. Most subjects rated themselves as visual,…

  15. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm

    ERIC Educational Resources Information Center

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  16. Radioisotope penile plethysmography: A technique for evaluating corpora cavernosal blood flow during early tumescence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz, A.N.; Graham, M.M.; Ferency, G.F.

    1989-04-01

    Radioisotope penile plethysmography is a nuclear medicine technique which assists in the evaluation of patients with erectile dysfunction. This technique attempts to noninvasively quantitate penile corpora cavernosal blood flow during early penile tumescence using technetium-99m-labeled red blood cells. Penile images and counts were acquired in a steady-state blood-pool phase prior to and after the administration of intracorporal papaverine. Penile counts, images, and time-activity curves were computer analyzed in order to determine peak corporal flow and volume changes. Peak corporal flow rates were compared to arterial integrity (determined by angiography) and venosinusoidal corporal leak (determined by cavernosometry). Peak corporal flow correlated well with arterial integrity (r = 0.91) but did not correlate with venosinusoidal leak parameters (r = 0.01). This report focuses on the methodology and the assumptions which form the foundation of this technique. The strong correlation of peak corporal flow and angiography suggests that radioisotope penile plethysmography could prove useful in the evaluation of arterial inflow disorders in patients with erectile dysfunction.

  17. Block-suffix shifting: fast, simultaneous medical concept set identification in large medical record corpora.

    PubMed

    Liu, Ying; Lita, Lucian Vlad; Niculescu, Radu Stefan; Mitra, Prasenjit; Giles, C Lee

    2008-11-06

    Owing to new advances in computer hardware, large text databases have become more prevalent than ever. Automatically mining information from these databases proves to be a challenge due to slow pattern/string matching techniques. In this paper we present a new, fast multi-string pattern matching method based on the well-known Aho-Corasick algorithm. Advantages of our algorithm include: the ability to exploit the natural structure of text, the ability to perform significant character shifting, avoiding backtracking jumps that are not useful, efficiency in terms of matching time, and avoiding the typical "sub-string" false positive errors. Our algorithm is applicable to many fields with free text, such as the health care domain and the scientific document field. In this paper, we apply the block-suffix shifting (BSS) algorithm to health care data and mine hundreds of thousands of medical concepts from large Electronic Medical Record (EMR) corpora simultaneously and efficiently. Experimental results show the superiority of our algorithm when compared with top-of-the-line multi-string matching algorithms.
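
    The abstract describes BSS only at a high level; for reference, a minimal version of the well-known Aho-Corasick automaton that BSS builds on can be sketched as follows. This is a plain textbook implementation, not the authors' BSS method.

```python
# Minimal Aho-Corasick multi-string matcher: build a trie over the
# patterns, add BFS failure links, then scan the text once, reporting
# every (start_index, pattern) match.
from collections import deque

def build_automaton(patterns):
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                      # build the trie
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(pat)
    queue = deque(goto[0].values())           # BFS to set failure links
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]            # inherit matches via suffix
    return goto, fail, out

def find_all(text, patterns):
    goto, fail, out = build_automaton(patterns)
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]               # fall back on mismatch
        state = goto[state].get(ch, 0)
        for pat in out[state]:
            hits.append((i - len(pat) + 1, pat))
    return hits

print(find_all("acute myocardial infarction",
               ["myocardial", "cardi", "infarct"]))
```

    All patterns are found in a single left-to-right pass over the text, which is what makes the approach attractive for matching hundreds of thousands of medical concepts simultaneously; BSS adds shift heuristics on top of this baseline.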

  18. Specification of Drosophila Corpora Cardiaca Neuroendocrine Cells from Mesoderm Is Regulated by Notch Signaling

    PubMed Central

    Park, Sangbin; Bustamante, Erika L.; Antonova, Julie; McLean, Graeme W.; Kim, Seung K.

    2011-01-01

    Drosophila neuroendocrine cells comprising the corpora cardiaca (CC) are essential for systemic glucose regulation and represent functional orthologues of vertebrate pancreatic α-cells. Although Drosophila CC cells have been regarded as developmental orthologues of pituitary gland, the genetic regulation of CC development is poorly understood. From a genetic screen, we identified multiple novel regulators of CC development, including Notch signaling factors. Our studies demonstrate that the disruption of Notch signaling can lead to the expansion of CC cells. Live imaging demonstrates localized emergence of extra precursor cells as the basis of CC expansion in Notch mutants. Contrary to a recent report, we unexpectedly found that CC cells originate from head mesoderm. We show that Tinman expression in head mesoderm is regulated by Notch signaling and that the combination of Daughterless and Tinman is sufficient for ectopic CC specification in mesoderm. Understanding the cellular, genetic, signaling, and transcriptional basis of CC cell specification and expansion should accelerate discovery of molecular mechanisms regulating ontogeny of organs that control metabolism. PMID:21901108

  19. La mort d'une langue: le judeo-espagnol (The Death of a Language: The Spanish Spoken by Jews)

    ERIC Educational Resources Information Center

    Renard, Raymond

    1971-01-01

    Describes the Sephardic culture which flourished in the Balkans, Ottoman Empire, and North Africa during the Middle Ages. Suggests the use of "Ladino," the language of medieval Spain spoken by the expelled Jews. (DS)

  20. Comprehension of spoken language in non-speaking children with severe cerebral palsy: an explorative study on associations with motor type and disabilities.

    PubMed

    Geytenbeek, Joke J M; Vermeulen, R Jeroen; Becher, Jules G; Oostrom, Kim J

    2015-03-01

    To assess spoken language comprehension in non-speaking children with severe cerebral palsy (CP) and to explore possible associations with motor type and disability. Eighty-seven non-speaking children (44 males, 43 females, mean age 6y 8mo, SD 2y 1mo) with spastic (54%) or dyskinetic (46%) CP (Gross Motor Function Classification System [GMFCS] levels IV [39%] and V [61%]) underwent spoken language comprehension assessment with the computer-based instrument for low motor language testing (C-BiLLT), a new and validated diagnostic instrument. A multiple linear regression model was used to investigate which variables explained the variation in C-BiLLT scores. Associations between spoken language comprehension abilities (expressed in z-score or age-equivalent score) and motor type of CP, GMFCS and Manual Ability Classification System (MACS) levels, gestational age, and epilepsy were analysed with Fisher's exact test. A p-value < 0.05 was considered statistically significant. Chronological age, motor type, and GMFCS classification explained 33% (R = 0.577, R² = 0.33) of the variance in spoken language comprehension. Of the children aged younger than 6 years 6 months, 52.4% of the children with dyskinetic CP attained comprehension scores within the average range (z-score ≥ -1.6) as opposed to none of the children with spastic CP. Of the children aged older than 6 years 6 months, 32% of the children with dyskinetic CP reached the highest achievable age-equivalent score compared to 4% of the children with spastic CP. No significant difference in disability was found between CP-related variables (MACS levels, gestational age, epilepsy), with the exception of GMFCS which showed a significant difference in children aged younger than 6 years 6 months (p = 0.043). Despite communication disabilities in children with severe CP, particularly in dyskinetic CP, spoken language comprehension may show no or only moderate delay. These findings emphasize the importance of introducing

  1. Words Spoken with Insistence: "Wak'as" and the Limits of the Bolivian Multi-Institutional Democracy

    ERIC Educational Resources Information Center

    Cuelenaere, Laurence Janine

    2009-01-01

    Building on 18 months of fieldwork in the Bolivian highlands, this dissertation examines how traversing landscapes, through the mediation of spatial practices and spoken words, is embedded in systems of belief. By focusing on "wak'as" (i.e. sacred objects) and on how the inhabitants of the Altiplano relate to the Andean deities known as…

  2. How Are Pronunciation Variants of Spoken Words Recognized? A Test of Generalization to Newly Learned Words

    ERIC Educational Resources Information Center

    Pitt, Mark A.

    2009-01-01

    One account of how pronunciation variants of spoken words (center → "senner" or "sennah") are recognized is that sublexical processes use information about variation in the same phonological environments to recover the intended segments [Gaskell, G., & Marslen-Wilson, W. D. (1998). Mechanisms of phonological inference in speech perception.…

  3. Cross-modal metaphorical mapping of spoken emotion words onto vertical space.

    PubMed

    Montoro, Pedro R; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando

    2015-01-01

    From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a 'positive-up/negative-down' embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis.

  4. Cross-modal metaphorical mapping of spoken emotion words onto vertical space

    PubMed Central

    Montoro, Pedro R.; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando

    2015-01-01

    From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a ‘positive-up/negative-down’ embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis. PMID:26322007

  5. Effects of lexical competition on immediate memory span for spoken words.

    PubMed

    Goh, Winston D; Pisoni, David B

    2003-08-01

    Current theories and models of the structural organization of verbal short-term memory are primarily based on evidence obtained from manipulations of features inherent in the short-term traces of the presented stimuli, such as phonological similarity. In the present study, we investigated whether properties of the stimuli that are not inherent in the short-term traces of spoken words would affect performance in an immediate memory span task. We studied the lexical neighbourhood properties of the stimulus items, which are based on the structure and organization of words in the mental lexicon. The experiments manipulated lexical competition by varying the phonological neighbourhood structure (i.e., neighbourhood density and neighbourhood frequency) of the words on a test list while controlling for word frequency and intra-set phonological similarity (family size). Immediate memory span for spoken words was measured under repeated and nonrepeated sampling procedures. The results demonstrated that lexical competition only emerged when a nonrepeated sampling procedure was used and the participants had to access new words from their lexicons. These findings were not dependent on individual differences in short-term memory capacity. Additional results showed that the lexical competition effects did not interact with proactive interference. Analyses of error patterns indicated that item-type errors, but not positional errors, were influenced by the lexical attributes of the stimulus items. These results complement and extend previous findings that have argued for separate contributions of long-term knowledge and short-term memory rehearsal processes in immediate verbal serial recall tasks.

  6. Intelligibility of emotional speech in younger and older adults.

    PubMed

    Dupuis, Kate; Pichora-Fuller, M Kathleen

    2014-01-01

    Little is known about the influence of vocal emotions on speech understanding. Word recognition accuracy for stimuli spoken to portray seven emotions (anger, disgust, fear, sadness, neutral, happiness, and pleasant surprise) was tested in younger and older listeners. Emotions were presented in either mixed (heterogeneous emotions mixed in a list) or blocked (homogeneous emotion blocked in a list) conditions. Three main hypotheses were tested. First, vocal emotion affects word recognition accuracy; specifically, portrayals of fear enhance word recognition accuracy because listeners orient to threatening information and/or distinctive acoustical cues such as high pitch mean and variation. Second, older listeners recognize words less accurately than younger listeners, but the effects of different emotions on intelligibility are similar across age groups. Third, blocking emotions in a list results in better word recognition accuracy, especially for older listeners, and reduces the effect of emotion on intelligibility because as listeners develop expectations about vocal emotion, the allocation of processing resources can shift from emotional to lexical processing. Emotion was the within-subjects variable: all participants heard speech stimuli consisting of a carrier phrase followed by a target word spoken by either a younger or an older talker, with an equal number of stimuli portraying each of seven vocal emotions. The speech was presented in multi-talker babble at signal to noise ratios adjusted for each talker and each listener age group. Listener age (younger, older), condition (mixed, blocked), and talker (younger, older) were the main between-subjects variables. Fifty-six students (Mage = 18.3 years) were recruited from an undergraduate psychology course; 56 older adults (Mage = 72.3 years) were recruited from a volunteer pool. All participants had clinically normal pure-tone audiometric thresholds at frequencies ≤3000 Hz. There were significant main effects of

  7. Effects of ageing and streptozotocin-induced diabetes on connexin43 and P2 purinoceptor expression in the rat corpora cavernosa and urinary bladder.

    PubMed

    Suadicani, Sylvia O; Urban-Maldonado, Marcia; Tar, Moses T; Melman, Arnold; Spray, David C

    2009-06-01

    To investigate whether ageing and diabetes alter the expression of the gap junction protein connexin43 (Cx43) and of particular purinoceptor (P2R) subtypes in the corpus cavernosum and urinary bladder, and determine whether changes in expression of these proteins correlate with development of erectile and bladder dysfunction in diabetic and ageing rats. Erectile and bladder function of streptozotocin (STZ)-induced diabetic, insulin-treated and age-matched control Fischer-344 rats were evaluated 2, 4 and 8 months after diabetes induction by in vivo cystometry and cavernosometry. Corporal and bladder tissue were then isolated at each of these sample times and protein expression levels of Cx43 and of various P2R subtypes were determined by Western blotting. In the corpora of control rats ageing was accompanied by a significant decrease in Cx43 and P2X(1)R, and increase in P2X(7)R expression. There was decreased Cx43 and increased P2Y(4)R expression in the ageing control rat bladder. There was a significant negative correlation between erectile capacity and P2X(1)R expression levels, and a positive correlation between bladder spontaneous activity and P2Y(4)R expression levels. There was already development of erectile dysfunction and bladder overactivity at 2 months after inducing diabetes, the earliest sample measured in the study. The development of these urogenital complications was accompanied by significant decreases in Cx43, P2Y(2)R, P2X(4)R and increase in P2X(1)R expression in the corpora, and by a doubling in Cx43 and P2Y(2)R, and significant increase in P2Y(4)R expression in the bladder. Changes in Cx43 and P2R expression were largely prevented by insulin therapy. Ageing and diabetes mellitus markedly altered the expression of the gap junction protein Cx43 and of particular P2R subtypes in the rat penile corpora and urinary bladder. These changes in Cx43 and P2R expression provide the molecular substrate for altered gap junction and purinergic signalling in

  8. Effects of ageing and streptozotocin-induced diabetes on connexin43 and P2 purinoceptor expression in the rat corpora cavernosa and urinary bladder

    PubMed Central

    Suadicani, Sylvia O.; Urban-Maldonado, Marcia; Tar, Moses T.; Melman, Arnold; Spray, David C.

    2012-01-01

    OBJECTIVE To investigate whether ageing and diabetes alter the expression of the gap junction protein connexin43 (Cx43) and of particular purinoceptor (P2R) subtypes in the corpus cavernosum and urinary bladder, and determine whether changes in expression of these proteins correlate with development of erectile and bladder dysfunction in diabetic and ageing rats. MATERIALS AND METHODS Erectile and bladder function of streptozotocin (STZ)-induced diabetic, insulin-treated and age-matched control Fischer-344 rats were evaluated 2, 4 and 8 months after diabetes induction by in vivo cystometry and cavernosometry. Corporal and bladder tissue were then isolated at each of these sample times and protein expression levels of Cx43 and of various P2R subtypes were determined by Western blotting. RESULTS In the corpora of control rats ageing was accompanied by a significant decrease in Cx43 and P2X1R, and increase in P2X7R expression. There was decreased Cx43 and increased P2Y4R expression in the ageing control rat bladder. There was a significant negative correlation between erectile capacity and P2X1R expression levels, and a positive correlation between bladder spontaneous activity and P2Y4R expression levels. There was already development of erectile dysfunction and bladder overactivity at 2 months after inducing diabetes, the earliest sample measured in the study. The development of these urogenital complications was accompanied by significant decreases in Cx43, P2Y2R, P2X4R and increase in P2X1R expression in the corpora, and by a doubling in Cx43 and P2Y2R, and significant increase in P2Y4R expression in the bladder. Changes in Cx43 and P2R expression were largely prevented by insulin therapy. CONCLUSION Ageing and diabetes mellitus markedly altered the expression of the gap junction protein Cx43 and of particular P2R subtypes in the rat penile corpora and urinary bladder. These changes in Cx43 and P2R expression provide the molecular substrate for altered gap junction

  9. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2012-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an event-related potential (ERP) norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a…

  10. Are Phonological Representations of Printed and Spoken Language Isomorphic? Evidence from the Restrictions on Unattested Onsets

    ERIC Educational Resources Information Center

    Berent, Iris

    2008-01-01

    Are the phonological representations of printed and spoken words isomorphic? This question is addressed by investigating the restrictions on onsets. Cross-linguistic research suggests that onsets of rising sonority are preferred to sonority plateaus, which, in turn, are preferred to sonority falls (e.g., bnif, bdif, lbif). Of interest is whether…

  11. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    PubMed

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system called SJM (system językowo-migowy) preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, with the fMRI method, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Does it really matter whether students' contributions are spoken versus typed in an intelligent tutoring system with natural language?

    PubMed

    D'Mello, Sidney K; Dowell, Nia; Graesser, Arthur

    2011-03-01

    There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The speech facilitation hypothesis predicts that spoken input will increase learning, whereas the text facilitation hypothesis predicts typed input will be superior. The modality equivalence hypothesis claims that learning gains will be equivalent. Previous experiments that tested these hypotheses were confounded by automated speech recognition systems with substantial error rates that were detected by learners. We addressed this concern in two experiments via a Wizard of Oz procedure, where a human intercepted the learner's speech and transcribed the utterances before submitting them to the tutor. The overall pattern of the results supported the following conclusions: (1) learning gains associated with spoken and typed input were on par and quantitatively higher than a no-intervention control, (2) participants' evaluations of the session were not influenced by modality, and (3) there were no modality effects associated with differences in prior knowledge and typing proficiency. Although the results generally support the modality equivalence hypothesis, highly motivated learners reported lower cognitive load and demonstrated increased learning when typing compared with speaking. We discuss the implications of our findings for intelligent tutoring systems that can support typed and spoken input.

  13. Inefficient conjunction search made efficient by concurrent spoken delivery of target identity.

    PubMed

    Reali, Florencia; Spivey, Michael J; Tyler, Melinda J; Terranova, Joseph

    2006-08-01

    Visual search based on a conjunction of two features typically elicits reaction times that increase linearly as a function of the number of distractors, whereas search based on a single feature is essentially unaffected by set size. These and related findings have often been interpreted as evidence of a serial search stage that follows a parallel search stage. However, a wide range of studies has been showing a form of blending of these two processes. For example, when a spoken instruction identifies the conjunction target concurrently with the visual display, the effect of set size is significantly reduced, suggesting that incremental linguistic processing of the first feature adjective and then the second feature adjective may facilitate something approximating a parallel extraction of objects during search for the target. Here, we extend these results to a variety of experimental designs. First, we replicate the result with a mixed-trials design (ruling out potential strategies associated with the blocked design of the original study). Second, in a mixed-trials experiment, the order of adjective types in the spoken query varies randomly across conditions. In a third experiment, we extend the effect to a triple-conjunction search task. A fourth (control) experiment demonstrates that these effects are not due to an efficient odd-one-out search that ignores the linguistic input. This series of experiments, along with attractor-network simulations of the phenomena, provide further evidence toward understanding linguistically mediated influences in real-time visual search processing.

  14. The Interaction of Lexical Semantics and Cohort Competition in Spoken Word Recognition: An fMRI Study

    ERIC Educational Resources Information Center

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A.; Marslen-Wilson, William D.; Tyler, Lorraine K.

    2011-01-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning…

  15. Understanding the Relationship between Latino Students' Preferred Learning Styles and Their Language Spoken at Home

    ERIC Educational Resources Information Center

    Maldonado Torres, Sonia Enid

    2016-01-01

    The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…

  16. Causal Influence of Articulatory Motor Cortex on Comprehending Single Spoken Words: TMS Evidence.

    PubMed

    Schomers, Malte R; Kirilina, Evgeniya; Weigand, Anne; Bajbouj, Malek; Pulvermüller, Friedemann

    2015-10-01

    Classic wisdom had been that motor and premotor cortex contribute to motor execution but not to higher cognition and language comprehension. In contrast, mounting evidence from neuroimaging, patient research, and transcranial magnetic stimulation (TMS) suggests sensorimotor interaction and, specifically, that the articulatory motor cortex is important for classifying meaningless speech sounds into phonemic categories. However, whether these findings speak to the comprehension issue is unclear, because language comprehension does not require explicit phonemic classification and previous results may therefore relate to factors alien to semantic understanding. We here used the standard psycholinguistic test of spoken word comprehension, the word-to-picture-matching task, and concordant TMS to articulatory motor cortex. TMS pulses were applied to primary motor cortex controlling either the lips or the tongue as subjects heard critical word stimuli starting with bilabial lip-related or alveolar tongue-related stop consonants (e.g., "pool" or "tool"). A significant cross-over interaction showed that articulatory motor cortex stimulation delayed comprehension responses for phonologically incongruent words relative to congruous ones (i.e., lip area TMS delayed "tool" relative to "pool" responses). As local TMS to articulatory motor areas differentially delays the comprehension of phonologically incongruous spoken words, we conclude that motor systems can take a causal role in semantic comprehension and, hence, higher cognition. © The Author 2014. Published by Oxford University Press.

  17. Interference from related actions in spoken word production: Behavioural and fMRI evidence.

    PubMed

    de Zubicaray, Greig; Fraser, Douglas; Ramajoo, Kori; McMahon, Katie

    2017-02-01

    Few investigations of lexical access in spoken word production have investigated the cognitive and neural mechanisms involved in action naming. These are likely to be more complex than the mechanisms involved in object naming, due to the ways in which conceptual features of action words are represented. The present study employed a blocked cyclic naming paradigm to examine whether related action contexts elicit a semantic interference effect akin to that observed with categorically related objects. Participants named pictures of intransitive actions to avoid a confound with object processing. In Experiment 1, body-part related actions (e.g., running, walking, skating, hopping) were named significantly slower compared to unrelated actions (e.g., laughing, running, waving, hiding). Experiment 2 employed perfusion functional Magnetic Resonance Imaging (fMRI) to investigate the neural mechanisms involved in this semantic interference effect. Compared to unrelated actions, naming related actions elicited significant perfusion signal increases in frontotemporal cortex, including bilateral inferior frontal gyrus (IFG) and hippocampus, and decreases in bilateral posterior temporal, occipital and parietal cortices, including intraparietal sulcus (IPS). The findings demonstrate a role for temporoparietal cortex in conceptual-lexical processing of intransitive action knowledge during spoken word production, and support the proposed involvement of interference resolution and incremental learning mechanisms in the blocked cyclic naming paradigm. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Causal Influence of Articulatory Motor Cortex on Comprehending Single Spoken Words: TMS Evidence

    PubMed Central

    Schomers, Malte R.; Kirilina, Evgeniya; Weigand, Anne; Bajbouj, Malek; Pulvermüller, Friedemann

    2015-01-01

    Classic wisdom had been that motor and premotor cortex contribute to motor execution but not to higher cognition and language comprehension. In contrast, mounting evidence from neuroimaging, patient research, and transcranial magnetic stimulation (TMS) suggests sensorimotor interaction and, specifically, that the articulatory motor cortex is important for classifying meaningless speech sounds into phonemic categories. However, whether these findings speak to the comprehension issue is unclear, because language comprehension does not require explicit phonemic classification and previous results may therefore relate to factors alien to semantic understanding. We here used the standard psycholinguistic test of spoken word comprehension, the word-to-picture-matching task, and concordant TMS to articulatory motor cortex. TMS pulses were applied to primary motor cortex controlling either the lips or the tongue as subjects heard critical word stimuli starting with bilabial lip-related or alveolar tongue-related stop consonants (e.g., “pool” or “tool”). A significant cross-over interaction showed that articulatory motor cortex stimulation delayed comprehension responses for phonologically incongruent words relative to congruous ones (i.e., lip area TMS delayed “tool” relative to “pool” responses). As local TMS to articulatory motor areas differentially delays the comprehension of phonologically incongruous spoken words, we conclude that motor systems can take a causal role in semantic comprehension and, hence, higher cognition. PMID:25452575

  19. Neural organization of linguistic short-term memory is sensory modality-dependent: evidence from signed and spoken language.

    PubMed

    Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory

    2008-12-01

    Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.

  20. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    ERIC Educational Resources Information Center

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  1. Gesture in Multiparty Interaction: A Study of Embodied Discourse in Spoken English and American Sign Language

    ERIC Educational Resources Information Center

    Shaw, Emily P.

    2013-01-01

    This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…

  2. The missing foundation in teacher education: Knowledge of the structure of spoken and written language.

    PubMed

    Moats, L C

    1994-01-01

    Reading research supports the necessity for directly teaching concepts about linguistic structure to beginning readers and to students with reading and spelling difficulties. In this study, experienced teachers of reading, language arts, and special education were tested to determine if they have the requisite awareness of language elements (e.g., phonemes, morphemes) and of how these elements are represented in writing (e.g., knowledge of sound-symbol correspondences). The results were surprisingly poor, indicating that even motivated and experienced teachers typically understand too little about spoken and written language structure to be able to provide sufficient instruction in these areas. The utility of language structure knowledge for instructional planning, for assessment of student progress, and for remediation of literacy problems is discussed. The teachers participating in the study subsequently took a course focusing on phonemic awareness training, spoken-written language relationships, and careful analysis of spelling and reading behavior in children. At the end of the course, the teachers judged this information to be essential for teaching and advised that it become a prerequisite for certification. Recommendations for requirements and content of teacher education programs are presented.

  3. Natural language processing of spoken diet records (SDRs).

    PubMed

    Lacson, Ronilda; Long, William

    2006-01-01

    Dietary assessment is a fundamental aspect of nutritional evaluation that is essential for management of obesity as well as for assessing dietary impact on chronic diseases. Various methods have been used for dietary assessment including written records, 24-hour recalls, and food frequency questionnaires. The use of mobile phones to provide real-time dietary records provides potential advantages for accessibility, ease of use and automated documentation. However, understanding even a perfect transcript of spoken dietary records (SDRs) is challenging for people. This work presents a first step towards automatic analysis of SDRs. Our approach consists of four steps: identification of food items, identification of food quantifiers, classification of food quantifiers and temporal annotation. Our method enables automatic extraction of dietary information from SDRs, which in turn allows automated mapping to a Diet History Questionnaire dietary database. Our model has an accuracy of 90%. This work demonstrates the feasibility of automatically processing SDRs.
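    The four-step pipeline described in this record (food items → quantifiers → quantifier classification → temporal annotation) can be illustrated with a minimal rule-based sketch. The lexicons and the left-context heuristic below are invented for illustration and are not the authors' actual method, which maps extracted items to a Diet History Questionnaire database.

```python
import re

# Toy lexicons standing in for the system's food and quantifier
# vocabularies (hypothetical entries, for illustration only).
FOOD_ITEMS = {"oatmeal", "coffee", "apple", "rice", "chicken"}
UNIT_QUANTIFIERS = {"cup", "cups", "bowl", "slice", "glass"}   # serving units
COUNT_QUANTIFIERS = {"one", "two", "three", "a", "an"}         # bare counts
TIME_MARKERS = {"breakfast", "lunch", "dinner", "morning", "evening"}

def parse_sdr(utterance):
    """Extract (food, quantifier, quantifier_class, meal_time) tuples from a
    spoken-diet-record transcript, following the four steps: food items ->
    quantifiers -> quantifier classification -> temporal annotation."""
    tokens = re.findall(r"[a-z]+", utterance.lower())
    meal = next((t for t in tokens if t in TIME_MARKERS), None)    # step 4
    records = []
    for i, tok in enumerate(tokens):
        if tok in FOOD_ITEMS:                                       # step 1
            window = tokens[max(0, i - 3):i]   # look left for a quantifier
            quant = next((w for w in reversed(window)
                          if w in UNIT_QUANTIFIERS
                          or w in COUNT_QUANTIFIERS), None)         # step 2
            qclass = ("unit" if quant in UNIT_QUANTIFIERS
                      else "count" if quant in COUNT_QUANTIFIERS
                      else None)                                    # step 3
            records.append((tok, quant, qclass, meal))
    return records
```

    For example, `parse_sdr("I had two cups of coffee and a bowl of oatmeal for breakfast")` yields one tuple per food item, each paired with its nearest preceding quantifier and the meal-time marker.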

  4. Cultural Identity of Young Deaf Adults with Cochlear Implants in Comparison to Deaf without Cochlear Implants and Hard-of-Hearing Young Adults.

    PubMed

    Goldblat, Ester; Most, Tova

    2018-07-01

    This study examined the relationships between cultural identity, severity of hearing loss (HL), and the use of a cochlear implant (CI). One hundred and forty-one adolescents and young adults divided into three groups (deaf with CI, deaf without CI, and hard-of-hearing (HH)) and 134 parents participated. Adolescents and young adults completed questionnaires on cultural identity (hearing, Deaf, marginal, bicultural-hearing, and bicultural-deaf) and communication proficiencies (hearing, spoken language, and sign language). Parents completed a speech quality questionnaire. Deaf participants without CI and those with CI differed in all identities except marginal identity. CI users and HH participants had similar identities except for a stronger bicultural-deaf identity among CI users. Three clusters of participants evolved: participants with a dominant bicultural-deaf identity, participants with a dominant bicultural-hearing identity and participants without a formed cultural identity. Adolescents and young adults who were proficient in one of the modes of communication developed well-established bicultural identities. Adolescents and young adults who were not proficient in one of the modes of communication did not develop a distinguished cultural identity. These results suggest that communication proficiencies are crucial for developing defined identities.

  5. Semantic Fluency in Deaf Children Who Use Spoken and Signed Language in Comparison with Hearing Peers

    ERIC Educational Resources Information Center

    Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2018-01-01

    Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…

  6. Some considerations in evaluating spoken word recognition by normal-hearing, noise-masked normal-hearing, and cochlear implant listeners. I: The effects of response format.

    PubMed

    Sommers, M S; Kirk, K I; Pisoni, D B

    1997-04-01

    The purpose of the present studies was to assess the validity of using closed-set response formats to measure two cognitive processes essential for recognizing spoken words: perceptual normalization (the ability to accommodate acoustic-phonetic variability) and lexical discrimination (the ability to isolate words in the mental lexicon). In addition, the experiments were designed to examine the effects of response format on evaluation of these two abilities in normal-hearing (NH), noise-masked normal-hearing (NMNH), and cochlear implant (CI) subject populations. The speech recognition performance of NH, NMNH, and CI listeners was measured using both open- and closed-set response formats under a number of experimental conditions. To assess talker normalization abilities, identification scores for words produced by a single talker were compared with recognition performance for items produced by multiple talkers. To examine lexical discrimination, performance for words that are phonetically similar to many other words (hard words) was compared with scores for items with few phonetically similar competitors (easy words). Open-set word identification for all subjects was significantly poorer when stimuli were produced in lists with multiple talkers compared with conditions in which all of the words were spoken by a single talker. Open-set word recognition also was better for lexically easy compared with lexically hard words. Closed-set tests, in contrast, failed to reveal the effects of either talker variability or lexical difficulty even when the response alternatives provided were systematically selected to maximize confusability with target items. These findings suggest that, although closed-set tests may provide important information for clinical assessment of speech perception, they may not adequately evaluate a number of cognitive processes that are necessary for recognizing spoken words. The parallel results obtained across all subject groups indicate that NH

  7. Polling the effective neighborhoods of spoken words with the verbal transformation effect.

    PubMed

    Bashford, James A; Warren, Richard M; Lenz, Peter W

    2006-04-01

    Studies of the effects of lexical neighbors upon the recognition of spoken words have generally assumed that the most salient competitors differ by a single phoneme. The present study employs a procedure that induces the listeners to perceive and call out the salient competitors. By presenting a recording of a monosyllable repeated over and over, perceptual adaptation is produced, and perception of the stimulus is replaced by perception of a competitor. Reports from groups of subjects were obtained for monosyllables that vary in their frequency-weighted neighborhood density. The findings are compared with predictions based upon the neighborhood activation model.
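    The frequency-weighted neighborhood density invoked in this record (and in the memory-span study above) can be sketched minimally: a neighbor is a word differing by a single segment, and density is the summed frequency of all such neighbors. The sketch operates on orthographic strings as a stand-in for phoneme sequences, and the toy lexicon frequencies are invented for illustration.

```python
def one_edit_apart(a, b):
    """True if b can be formed from a by a single segment
    substitution, deletion, or insertion."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    longer, shorter = (a, b) if len(a) > len(b) else (b, a)
    # try deleting each segment of the longer string
    return any(longer[:i] + longer[i + 1:] == shorter
               for i in range(len(longer)))

def fw_density(word, freq_lexicon):
    """Frequency-weighted neighborhood density: summed frequency
    of every one-segment neighbor of `word` in the lexicon."""
    return sum(f for w, f in freq_lexicon.items()
               if w != word and one_edit_apart(w, word))
```

    With a toy lexicon `{"cat": 10, "bat": 5, "cart": 2, "at": 1, "dog": 7}`, the neighbors of "cat" are "bat" (substitution), "cart" (insertion), and "at" (deletion), so its frequency-weighted density is 5 + 2 + 1 = 8.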

  8. Development of Mandarin spoken language after pediatric cochlear implantation.

    PubMed

    Li, Bei; Soli, Sigfrid D; Zheng, Yun; Li, Gang; Meng, Zhaoli

    2014-07-01

    The purpose of this study was to evaluate early spoken language development in young Mandarin-speaking children during the first 24 months after cochlear implantation, as measured by receptive and expressive vocabulary growth rates. Growth rates were compared with those of normally hearing children and with growth rates for English-speaking children with cochlear implants. Receptive and expressive vocabularies were measured with the simplified short form (SSF) version of the Mandarin Communicative Development Inventory (MCDI) in a sample of 112 pediatric implant recipients at baseline, 3, 6, 12, and 24 months after implantation. Implant ages ranged from 1 to 5 years. Scores were expressed in terms of normal equivalent ages, allowing normalized vocabulary growth rates to be determined. Scores for English-speaking children were re-expressed in these terms, allowing direct comparisons of Mandarin and English early spoken language development. Vocabulary growth rates during the first 12 months after implantation were similar to those for normally hearing children less than 16 months of age. Comparisons with growth rates for normally hearing children 16-30 months of age showed that the youngest implant age group (1-2 years) had an average growth rate of 0.68 that of normally hearing children; while the middle implant age group (2-3 years) had an average growth rate of 0.65; and the oldest implant age group (>3 years) had an average growth rate of 0.56, significantly less than the other two rates. Growth rates for English-speaking children with cochlear implants were 0.68 in the youngest group, 0.54 in the middle group, and 0.57 in the oldest group. Growth rates in the middle implant age groups for the two languages differed significantly. The SSF version of the MCDI is suitable for assessment of Mandarin language development during the first 24 months after cochlear implantation. Effects of implant age and duration of implantation can be compared directly across
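    The normalized growth rates reported above follow directly from the normal-equivalent-age transformation: a child whose normal-equivalent age advances at the same pace as normally hearing peers scores 1.0. A one-function sketch of that computation (the function name and inputs are illustrative, not from the study):

    ```python
    def normalized_growth_rate(nea_start, nea_end, months_elapsed):
        """Normalized vocabulary growth rate: change in normal-equivalent
        age (months) per month of chronological time elapsed.
        A value of 1.0 matches normally hearing development."""
        return (nea_end - nea_start) / months_elapsed
    ```

    For example, a child whose normal-equivalent age rises from 10 to 18.2 months over 12 chronological months grows at a rate of about 0.68, the value reported for the youngest implant age group.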

  9. Unkindest Cuts? Some Effects of Picture Editing on Recall of Television News Information.

    ERIC Educational Resources Information Center

    Davies, Maire Messenger; And Others

    1985-01-01

    This study examined the effects of pictorial changes on recall of spoken text in a television news broadcast viewed by adolescents and adults. Adult recall of spoken text was impaired by mid-sentence picture editing while adolescents' recall was enhanced so long as accompanying pictures were relevant to verbal text. (MBR)

  10. Spoken Language Comprehension of Phrases, Simple and Compound-Active Sentences in Non-Speaking Children with Severe Cerebral Palsy

    ERIC Educational Resources Information Center

    Geytenbeek, Joke J. M.; Heim, Margriet J. M.; Knol, Dirk L.; Vermeulen, R. Jeroen; Oostrom, Kim J.

    2015-01-01

    Background: Children with severe cerebral palsy (CP) (i.e. "non-speaking children with severely limited mobility") are restricted in many domains that are important to the acquisition of language. Aims: To investigate comprehension of spoken language on sentence type level in non-speaking children with severe CP. Methods & Procedures: …

  11. Examining the Concept of Subordination in Spoken L1 and L2 English: The Case of "If"-Clauses

    ERIC Educational Resources Information Center

    Basterrechea, María; Weinert, Regina

    2017-01-01

    This article explores how research on native spoken language can be applied to second language learning, focusing on the concept of subordination. Second language (L2) learners' ability to integrate subordinate clauses is considered an indication of higher proficiency (e.g., Ellis & Barkhuizen, 2005; Tarone & Swierzbin, 2009). However, the notion…

  12. Reliability and validity of the C-BiLLT: a new instrument to assess comprehension of spoken language in young children with cerebral palsy and complex communication needs.

    PubMed

    Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J

    2014-09-01

    In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests have been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development, and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.

  13. Text exposure predicts spoken production of complex sentences in eight and twelve year old children and adults

    PubMed Central

    Montag, Jessica L.; MacDonald, Maryellen C.

    2015-01-01

    There is still much debate about the nature of the experiential and maturational changes that take place during childhood to bring about the sophisticated language abilities of an adult. The present study investigated text exposure as a possible source of linguistic experience that plays a role in the development of adult-like language abilities. Corpus analyses of object and passive relative clauses (Object: The book that the woman carried; Passive: The book that was carried by the woman) established the frequencies of these sentence types in child-directed speech and children's literature. We found that relative clauses of either type were more frequent in the written corpus, and that the ratio of passive to object relatives was much higher in the written corpus as well. This analysis suggests that passive relative clauses are much more frequent in a child's linguistic environment if they have high rates of text exposure. We then elicited object and passive relative clauses using a picture-description production task with eight and twelve year old children and adults. Both group and individual differences were consistent with the corpus analyses, such that older individuals and individuals with more text exposure produced more passive relative clauses. These findings suggest that the qualitatively different patterns of text versus speech may be an important source of linguistic experience for the development of adult-like language behavior. PMID:25844625
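    The two corpus statistics behind this comparison, the rate of each clause type and the passive-to-object ratio, are simple tallies once clauses have been coded by type. A minimal sketch under that assumption (function and label names are illustrative, not from the study):

    ```python
    from collections import Counter

    def clause_stats(clause_labels, n_sentences):
        """Rate of each relative-clause type per 1,000 sentences in a
        corpus, plus the ratio of passive to object relatives."""
        counts = Counter(clause_labels)
        rates = {t: 1000 * c / n_sentences for t, c in counts.items()}
        ratio = (counts["passive"] / counts["object"]
                 if counts["object"] else float("inf"))
        return rates, ratio
    ```

    Applied separately to the child-directed-speech and children's-literature corpora, such tallies would yield both findings reported above: higher overall relative-clause rates in text, and a higher passive-to-object ratio there as well.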

  14. Exposure to bisphenol A in young adult mice does not alter ovulation but does alter the fertilization ability of oocytes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore-Ambriz, Teresita Rocio; Acuña-Hernández, Deyanira Guadalupe; Ramos-Robles, Brenda

    Follicle growth culminates in ovulation, which allows for the expulsion of fertilizable oocytes and the formation of corpora lutea. Bisphenol A (BPA) is present in many consumer products, and it has been suggested that BPA impairs ovulation; however, the underlying mechanisms are unknown. Therefore, this study first evaluated whether BPA alters ovulation by affecting folliculogenesis, the number of corpora lutea or eggs shed to the oviduct, ovarian gonadotropin responsiveness, hormone levels, and estrous cyclicity. Because it has been suggested (but not directly confirmed) that BPA exerts toxic effects on the fertilization ability of oocytes, a second aim was to evaluate whether BPA impacts the oocyte fertilization rate using an in vitro fertilization assay and mating. The possible effects on early zygote development were also examined. Young adult female C57BL/6J mice (39 days old) were orally dosed with corn oil (vehicle) or 50 μg/kg bw/day BPA for a period encompassing the first three reproductive cycles (12–15 days). BPA exposure did not alter any parameters related to ovulation. Moreover, BPA exposure reduced the percentage of fertilized oocytes after either in vitro fertilization or mating, but it did not alter the zygotic stages. The data indicate that exposure to the reference dose of BPA does not impact ovulation but that it does influence the oocyte quality in terms of its fertilization ability. - Highlights: • Bisphenol A targets the fertilization ability of oocytes. • Bisphenol A does not alter ovulation. • Young adult females may be susceptible to the effects of bisphenol A on fertilization.

  15. Does It Really Matter whether Students' Contributions Are Spoken versus Typed in an Intelligent Tutoring System with Natural Language?

    ERIC Educational Resources Information Center

    D'Mello, Sidney K.; Dowell, Nia; Graesser, Arthur

    2011-01-01

    There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The "speech facilitation" hypothesis predicts that spoken input will "increase" learning,…

  16. Linguistic Context Versus Semantic Competition in Word Recognition by Younger and Older Adults With Cochlear Implants.

    PubMed

    Amichetti, Nicole M; Atagi, Eriko; Kong, Ying-Yee; Wingfield, Arthur

    The increasing number of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on effectiveness of word recognition. Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated from the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. This sensitivity, however, also resulted in a
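    Response entropy as described, a function of the number of different responses and their probability distribution, is ordinarily the Shannon entropy of the norming distribution. A minimal sketch (the function name is illustrative, not from the study):

    ```python
    import math

    def response_entropy(counts):
        """Shannon entropy (bits) of the distribution of completions
        suggested by a sentence context. `counts` maps each distinct
        response word to how many norming participants produced it."""
        total = sum(counts.values())
        probs = [c / total for c in counts.values()]
        return -sum(p * math.log2(p) for p in probs)
    ```

    A highly constraining context where 95 of 100 norming respondents give the same completion has entropy near 0.29 bits; four equally likely completions give 2.0 bits, i.e. far more response competition.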

  17. "Brief report: increase in production of spoken words in some children with autism after PECS teaching to Phase III".

    PubMed

    Carr, Deborah; Felce, Janet

    2007-04-01

    The context for this work was an evaluation study [Carr, D., & Felce, J. A. (in press)] of the early phases of the Picture Exchange Communication System (PECS) [Frost, L. A., & Bondy, A. S. (1994). The picture exchange communication system training manual. Cherry Hill, NJ: Pyramid Educational Consultants, Inc.; Frost, L. A., & Bondy, A. S. (2004). The picture exchange communication system training manual, 2nd edn. Newark, DE: Pyramid Educational Consultants, Inc.]. This paper reports that five of 24 children who received 15 h of PECS teaching towards Phase III over a period of 4-5 weeks showed concomitant increases in speech production, either in initiating communication with staff or in responding, or both. No children in the PECS group demonstrated a decrease in spoken words after receiving PECS teaching. In the control group, only one of 17 children demonstrated a minimal increase and four of 17 children demonstrated a decrease in use of spoken words after a similar period without PECS teaching.

  18. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    PubMed

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

    Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.

  19. Neural Signatures of Language Co-activation and Control in Bilingual Spoken Word Comprehension

    PubMed Central

    Chen, Peiyao; Bobb, Susan C.; Hoshino, Noriko; Marian, Viorica

    2017-01-01

    To examine the neural signatures of language co-activation and control during bilingual spoken word comprehension, Korean-English bilinguals and English monolinguals were asked to make overt or covert semantic relatedness judgments on auditorily-presented English word pairs. In two critical conditions, participants heard word pairs consisting of an English-Korean interlingual homophone (e.g., the sound /mu:n/ means “moon” in English and “door” in Korean) as the prime and an English word as the target. In the homophone-related condition, the target (e.g., “lock”) was related to the homophone’s Korean meaning, but not related to the homophone’s English meaning. In the homophone-unrelated condition, the target was unrelated to either the homophone’s Korean meaning or the homophone’s English meaning. In overtly responded situations, ERP results revealed that the reduced N400 effect in bilinguals for homophone-related word pairs correlated positively with the amount of their daily exposure to Korean. In covertly responded situations, ERP results showed a reduced late positive component for homophone-related word pairs in the right hemisphere, and this late positive effect was related to the neural efficiency of suppressing interference in a non-linguistic task. Together, these findings suggest 1) that the degree of language co-activation in bilingual spoken word comprehension is modulated by the amount of daily exposure to the non-target language; and 2) that bilinguals who are less influenced by cross-language activation may also have greater efficiency in suppressing interference in a non-linguistic task. PMID:28372943

  20. Neural signatures of language co-activation and control in bilingual spoken word comprehension.

    PubMed

    Chen, Peiyao; Bobb, Susan C; Hoshino, Noriko; Marian, Viorica

    2017-06-15

    To examine the neural signatures of language co-activation and control during bilingual spoken word comprehension, Korean-English bilinguals and English monolinguals were asked to make overt or covert semantic relatedness judgments on auditorily-presented English word pairs. In two critical conditions, participants heard word pairs consisting of an English-Korean interlingual homophone (e.g., the sound /mu:n/ means "moon" in English and "door" in Korean) as the prime and an English word as the target. In the homophone-related condition, the target (e.g., "lock") was related to the homophone's Korean meaning, but not related to the homophone's English meaning. In the homophone-unrelated condition, the target was unrelated to either the homophone's Korean meaning or the homophone's English meaning. In overtly responded situations, ERP results revealed that the reduced N400 effect in bilinguals for homophone-related word pairs correlated positively with the amount of their daily exposure to Korean. In covertly responded situations, ERP results showed a reduced late positive component for homophone-related word pairs in the right hemisphere, and this late positive effect was related to the neural efficiency of suppressing interference in a non-linguistic task. Together, these findings suggest 1) that the degree of language co-activation in bilingual spoken word comprehension is modulated by the amount of daily exposure to the non-target language; and 2) that bilinguals who are less influenced by cross-language activation may also have greater efficiency in suppressing interference in a non-linguistic task. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Oscillatory brain responses in spoken word production reflect lexical frequency and sentential constraint.

    PubMed

    Piai, Vitória; Roelofs, Ardi; Maris, Eric

    2014-01-01

    Two fundamental factors affecting the speed of spoken word production are lexical frequency and sentential constraint, but little is known about their timing and electrophysiological basis. In the present study, we investigated event-related potentials (ERPs) and oscillatory brain responses induced by these factors, using a task in which participants named pictures after reading sentences. Sentence contexts were either constraining or nonconstraining towards the final word, which was presented as a picture. Picture names varied in their frequency of occurrence in the language. Naming latencies and electrophysiological responses were examined as a function of context and lexical frequency. Lexical frequency is an index of our cumulative learning experience with words, so lexical-frequency effects most likely reflect access to memory representations for words. Pictures were named faster with constraining than nonconstraining contexts. Associated with this effect, starting around 400 ms pre-picture presentation, oscillatory power between 8 and 30 Hz was lower for constraining relative to nonconstraining contexts. Furthermore, pictures were named faster with high-frequency than low-frequency names, but only for nonconstraining contexts, suggesting differential ease of memory access as a function of sentential context. Associated with the lexical-frequency effect, starting around 500 ms pre-picture presentation, oscillatory power between 4 and 10 Hz was higher for high-frequency than for low-frequency names, but only for constraining contexts. Our results characterise electrophysiological responses associated with lexical frequency and sentential constraint in spoken word production, and point to new avenues for studying these fundamental factors in language production. © 2013 Published by Elsevier Ltd.

  2. Predictors of Spoken Language Learning

    ERIC Educational Resources Information Center

    Wong, Patrick C. M.; Ettlinger, Marc

    2011-01-01

    We report two sets of experiments showing that the large individual variability in language learning success in adults can be attributed to neurophysiological, neuroanatomical, cognitive, and perceptual factors. In the first set of experiments, native English-speaking adults learned to incorporate lexically meaningful pitch patterns in words. We…

  3. The Attitudes and Motivation of Children towards Learning Rarely Spoken Foreign Languages: A Case Study from Saudi Arabia

    ERIC Educational Resources Information Center

    Al-Nofaie, Haifa

    2018-01-01

    This article discusses the attitudes and motivations of two Saudi children learning Japanese as a foreign language (hence JFL), a language which is rarely spoken in the country. Studies regarding children's motivation for learning foreign languages that are not widely spread in their contexts in informal settings are scarce. The aim of the study…

  4. Sentence comprehension following moderate closed head injury in adults.

    PubMed

    Leikin, Mark; Ibrahim, Raphiq; Aharon-Peretz, Judith

    2012-09-01

    The current study explores sentence comprehension impairments among adults following moderate closed head injury. It was hypothesized that if the factor of syntactic complexity significantly affects sentence comprehension in these patients, it would testify to the existence of syntactic processing deficit along with working-memory problems. Thirty-six adults (18 closed head injury patients and 18 healthy controls matched in age, gender, and IQ) participated in the study. A picture-sentence matching task together with various tests for memory, language, and reading abilities were used to explore whether sentence comprehension impairments exist as a result of a deficit in syntactic processing or of working-memory dysfunction. Results indicate significant impairment in sentence comprehension among adults with closed head injury compared with their non-head-injured peers. Results also reveal that closed head injury patients demonstrate considerable decline in working memory, short-term memory, and semantic knowledge. Analysis of the results shows that memory impairment and syntactic complexity contribute significantly to sentence comprehension difficulties in closed head injury patients. At the same time, the presentation mode (spoken or written language) was found to have no effect on comprehension among adults with closed head injury, and their reading abilities appear to be relatively intact.

  5. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    PubMed

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-05-01

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at the full-phonological overlap; in Experiment 2, phonological information was manipulated at the partial-phonological overlap; and in Experiment 3, the phonological competitors were manipulated to share either full overlap or partial overlap with targets directly. Results of the three experiments showed that the phonological competitor effects were observed at both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  6. Social inclusion for children with hearing loss in listening and spoken Language early intervention: an exploratory study.

    PubMed

    Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony

    2017-03-14

    Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social roles'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; 37% who responded). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.

  7. Modeling Longitudinal Changes in Older Adults’ Memory for Spoken Discourse: Findings from the ACTIVE Cohort

    PubMed Central

    Payne, Brennan R.; Gross, Alden L.; Parisi, Jeanine M.; Sisco, Shannon M.; Stine-Morrow, Elizabeth A. L.; Marsiske, Michael; Rebok, George W.

    2014-01-01

    Episodic memory shows substantial declines with advancing age, but research on longitudinal trajectories of spoken discourse memory (SDM) in older adulthood is limited. Using parallel process latent growth curve models, we examined 10 years of longitudinal data from the no-contact control group (N = 698) of the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) randomized controlled trial in order to test (a) the degree to which SDM declines with advancing age, (b) predictors of these age-related declines, and (c) the within-person relationship between longitudinal changes in SDM and longitudinal changes in fluid reasoning and verbal ability over 10 years, independent of age. Individuals who were younger, White, had more years of formal education, were male, and had better global cognitive function and episodic memory performance at baseline demonstrated greater levels of SDM on average. However, only age at baseline uniquely predicted longitudinal changes in SDM, such that declines accelerated with greater age. Independent of age, within-person decline in reasoning ability over the 10-year study period was substantially correlated with decline in SDM (r = .87). An analogous association with SDM did not hold for verbal ability. The findings suggest that longitudinal declines in fluid cognition are associated with reduced spoken language comprehension. Unlike findings from memory for written prose, preserved verbal ability may not protect against developmental declines in memory for speech. PMID:24304364

  8. Scaling-up NLP Pipelines to Process Large Corpora of Clinical Notes.

    PubMed

    Divita, G; Carter, M; Redd, A; Zeng, Q; Gupta, K; Trautner, B; Samore, M; Gundlapalli, A

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". This paper describes the scale-up efforts at the VA Salt Lake City Health Care System to address processing large corpora of clinical notes through a natural language processing (NLP) pipeline. The use case described is a current project focused on detecting the presence of an indwelling urinary catheter in hospitalized patients and subsequent catheter-associated urinary tract infections. An NLP algorithm using v3NLP was developed to detect the presence of an indwelling urinary catheter in hospitalized patients. The algorithm was tested on a small corpus of notes on patients for whom the presence or absence of a catheter was already known (reference standard). In planning for a scale-up, we estimated that the original algorithm would have taken 2.4 days to run on a larger corpus of notes for this project (550,000 notes), and 27 days for a corpus of 6 million records representative of a national sample of notes. We approached scaling-up NLP pipelines through three techniques: pipeline replication via multi-threading, intra-annotator threading for tasks that can be further decomposed, and remote annotator services which enable annotator scale-out. The scale-up reduced the average time to process a record from 206 milliseconds to 17 milliseconds, a 12-fold increase in performance when applied to a corpus of 550,000 notes. Purposely simplistic in nature, these scale-up efforts are the straightforward evolution from small-scale NLP processing to larger-scale extraction without incurring the complexities inherited from the underlying UIMA framework. These efforts represent generalizable and widely applicable techniques that will aid other computationally complex NLP pipelines that need to be scaled out for processing and analyzing big data.
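    Of the three techniques, pipeline replication via multi-threading is the most broadly applicable: each worker runs an independent copy of the annotator over its share of the corpus. A minimal sketch, with a trivial keyword matcher standing in for the v3NLP catheter-detection algorithm (function names and keywords are illustrative, not from the project):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def annotate(note):
        """Stand-in for one pass of the NLP pipeline over a single note:
        here, a simple keyword check for an indwelling urinary catheter."""
        keywords = ("foley", "indwelling catheter", "urinary catheter")
        return any(k in note.lower() for k in keywords)

    def process_corpus(notes, n_workers=8):
        """Pipeline replication via multi-threading: the executor maps
        the annotator over the corpus across n_workers threads, and
        results come back in the original note order."""
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            return list(pool.map(annotate, notes))
    ```

    Threads pay off here only to the extent that the real annotator releases the interpreter lock (e.g., I/O or native code); a CPU-bound pure-Python annotator would instead be replicated across processes.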

  9. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    PubMed

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success
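    The two graph measures used above have simple definitions on a binary connectivity graph: a node's degree is its number of connections, and its local efficiency is the mean inverse shortest-path length among its neighbors, computed in the subgraph those neighbors induce. A self-contained sketch of both, assuming the thresholded connectivity graph is given as a dict of neighbor sets (names are illustrative, not from the study):

    ```python
    from collections import deque

    def degree(adj, v):
        """Number of edges attached to node v."""
        return len(adj[v])

    def shortest_path_len(adj, src, dst, allowed):
        """BFS shortest-path length from src to dst, visiting only
        nodes in `allowed`; inf if dst is unreachable."""
        if src == dst:
            return 0
        seen, frontier = {src}, deque([(src, 0)])
        while frontier:
            node, d = frontier.popleft()
            for nb in adj[node]:
                if nb in allowed and nb not in seen:
                    if nb == dst:
                        return d + 1
                    seen.add(nb)
                    frontier.append((nb, d + 1))
        return float("inf")

    def local_efficiency(adj, v):
        """Mean inverse shortest-path length over all pairs of v's
        neighbors, restricted to the neighbor-induced subgraph."""
        nbrs = list(adj[v])
        if len(nbrs) < 2:
            return 0.0
        total, pairs = 0.0, 0
        for i, a in enumerate(nbrs):
            for b in nbrs[i + 1:]:
                d = shortest_path_len(adj, a, b, set(nbrs))
                total += 1.0 / d if d != float("inf") else 0.0
                pairs += 1
        return total / pairs
    ```

    In a triangle the neighbors of any node are directly connected, so local efficiency is 1.0; in a star the hub's neighbors can only reach each other through the hub itself, which lies outside their induced subgraph, so the hub's local efficiency is 0.0.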

  10. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    PubMed Central

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success.
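A brief aside on the regional measure used above: ALFF is simply the mean spectral amplitude of a voxel's BOLD time series within a low-frequency window. The following NumPy sketch assumes a conventional 2 s TR, 240 volumes, and the standard 0.01-0.08 Hz band; none of these parameters are reported in the abstract itself.

```python
import numpy as np

def alff(ts, tr=2.0, band=(0.01, 0.08)):
    """Mean spectral amplitude of a BOLD time series inside the
    low-frequency band (the conventional ALFF window is 0.01-0.08 Hz)."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                      # remove the DC component
    freqs = np.fft.rfftfreq(ts.size, d=tr)   # frequency axis in Hz
    amp = np.abs(np.fft.rfft(ts)) / ts.size  # one-sided amplitude spectrum
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return amp[in_band].mean()

# Toy check: a slow 0.05 Hz oscillation falls inside the ALFF band,
# while a fast 0.20 Hz oscillation of the same amplitude falls outside it.
t = np.arange(240) * 2.0                     # 240 volumes at TR = 2 s
slow = np.sin(2 * np.pi * 0.05 * t)
fast = np.sin(2 * np.pi * 0.20 * t)
```

With this normalization the slow sine concentrates its amplitude inside the band, so `alff(slow)` is orders of magnitude larger than `alff(fast)`; in practice the measure is computed per voxel over preprocessed RS-fMRI data rather than over synthetic sines.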

  11. Defining Spoken Language Benchmarks and Selecting Measures of Expressive Language Development for Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul

    2009-01-01

    Purpose: The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken…

  12. Case Studies of Multilingual/Multicultural Asian Deaf Adults: Strategies for Success.

    PubMed

    Wang, Qiuying; Andrews, Jean; Liu, Hsiu Tan; Liu, Chun Jung

    2016-01-01

    Case studies of adult d/Deaf or Hard of Hearing Multilingual Learners (DMLs) are few, especially studies of DMLs who learn more than one sign language and read logographic and alphabetic scripts. To reduce this paucity, two descriptive case studies are presented. Written questionnaires, face-to-face interviews, and self-appraisals of language-use rubrics were used to explore (a) the language and literacy histories of two adult Asian DMLs who had learned multiple languages: Chinese (spoken/written), English (written), Chinese Sign Language, and American Sign Language; and (b) how each language was used in different cultural communities with diverse conversational partners. Home literacy environment, family support, visual access to languages, peer and sibling support, role models, encouragement, perseverance, and Deaf identity all played vital roles in the participants' academic success. The findings provide insights into the acquisition of multiple languages and bi-literacy through social communication and academic content.

  13. Effects of cooperating and conflicting prosody in spoken English garden path sentences: ERP evidence for the boundary deletion hypothesis.

    PubMed

    Pauker, Efrat; Itzhak, Inbal; Baum, Shari R; Steinhauer, Karsten

    2011-10-01

    In reading, a comma in the wrong place can cause more severe misunderstandings than the lack of a required comma. Here, we used ERPs to demonstrate that a similar effect holds for prosodic boundaries in spoken language. Participants judged the acceptability of temporarily ambiguous English "garden path" sentences whose prosodic boundaries were either in line or in conflict with the actual syntactic structure. Sentences with incongruent boundaries were accepted less often than those with missing boundaries and elicited a stronger on-line brain response in ERPs (N400/P600 components). Our results support the notion that mentally deleting an overt prosodic boundary is more costly than postulating a new one and extend previous findings, suggesting an immediate role of prosody in sentence comprehension. Importantly, our study also provides new details on the profile and temporal dynamics of the closure positive shift (CPS), an ERP component assumed to reflect prosodic phrasing in speech and music in real time. We show that the CPS is reliably elicited at the onset of prosodic boundaries in English sentences and is preceded by negative components. Its early onset distinguishes the speech CPS in adults both from prosodic ERP correlates in infants and from the "music CPS" previously reported for trained musicians.

  14. Lexical frequency and acoustic reduction in spoken Dutch

    NASA Astrophysics Data System (ADS)

    Pluymaekers, Mark; Ernestus, Mirjam; Baayen, R. Harald

    2005-10-01

    This study investigates the effects of lexical frequency on the durational reduction of morphologically complex words in spoken Dutch. The hypothesis that high-frequency words are more reduced than low-frequency words was tested by comparing the durations of affixes occurring in different carrier words. Four Dutch affixes were investigated, each occurring in a large number of words with different frequencies. The materials came from a large database of face-to-face conversations. For each word containing a target affix, one token was randomly selected for acoustic analysis. Measurements were made of the duration of the affix as a whole and the durations of the individual segments in the affix. For three of the four affixes, a higher frequency of the carrier word led to shorter realizations of the affix as a whole, individual segments in the affix, or both. Other relevant factors were the sex and age of the speaker, segmental context, and speech rate. To accommodate for these findings, models of speech production should allow word frequency to affect the acoustic realizations of lower-level units, such as individual speech sounds occurring in affixes.
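The central prediction of the study above, that higher carrier-word frequency goes with shorter affix realizations, is at heart a regression of duration on log frequency with a negative slope. A toy ordinary-least-squares sketch follows; the frequency and duration values are invented for illustration and are not the study's Dutch corpus data.

```python
import numpy as np

# Invented example data: log10 frequency of the carrier word and the
# measured duration of its affix in milliseconds.
log_freq = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
affix_ms = np.array([118.0, 115.0, 109.0, 104.0, 101.0, 96.0, 92.0, 88.0])

# Ordinary least squares: duration = intercept + slope * log_freq.
# A negative slope means more frequent words carry shorter affixes.
slope, intercept = np.polyfit(log_freq, affix_ms, 1)
r = np.corrcoef(log_freq, affix_ms)[0, 1]   # Pearson correlation
```

In the study itself, the additional factors listed in the abstract (speaker sex and age, segmental context, speech rate) would enter such a model as covariates rather than being ignored as in this bare fit.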

  15. Non-linear processing of a linear speech stream: The influence of morphological structure on the recognition of spoken Arabic words.

    PubMed

    Gwilliams, L; Marantz, A

    2015-08-01

    Although the significance of morphological structure is established in visual word processing, its role in auditory processing remains unclear. Using magnetoencephalography we probe the significance of the root morpheme for spoken Arabic words with two experimental manipulations. First we compare a model of auditory processing that calculates probable lexical outcomes based on whole-word competitors, versus a model that only considers the root as relevant to lexical identification. Second, we assess violations to the root-specific Obligatory Contour Principle (OCP), which disallows root-initial consonant gemination. Our results show root prediction to significantly correlate with neural activity in superior temporal regions, independent of predictions based on whole-word competitors. Furthermore, words that violated the OCP constraint were significantly easier to dismiss as valid words than probability-matched counterparts. The findings suggest that lexical auditory processing is dependent upon morphological structure, and that the root forms a principal unit through which spoken words are recognised. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Type VII Collagen Expression in the Human Vitreoretinal Interface, Corpora Amylacea and Inner Retinal Layers

    PubMed Central

    Wullink, Bart; Pas, Hendri H.; Van der Worp, Roelofje J.; Kuijer, Roel; Los, Leonoor I.

    2015-01-01

    Type VII collagen, as a major component of anchoring fibrils found at basement membrane zones, is crucial in anchoring epithelial tissue layers to their underlying stroma. Recently, type VII collagen was discovered in the inner human retina by means of immunohistochemistry, while proteomic investigations demonstrated type VII collagen at the vitreoretinal interface of chicken. Because of its potential anchoring function at the vitreoretinal interface, we further assessed the presence of type VII collagen at this site. We evaluated the vitreoretinal interface of human donor eyes by means of immunohistochemistry, confocal microscopy, immunoelectron microscopy, and Western blotting. Firstly, type VII collagen was detected alongside vitreous fibers at the vitreoretinal interface. Because of its known anchoring function, it is likely that type VII collagen is involved in vitreoretinal attachment. Secondly, type VII collagen was found within cytoplasmic vesicles of inner retinal cells. These cells resided most frequently in the ganglion cell layer and inner plexiform layer. Thirdly, type VII collagen was found in astrocytic cytoplasmic inclusions, known as corpora amylacea. The intraretinal presence of type VII collagen was confirmed by Western blotting of homogenized retinal preparations. These data add to the understanding of vitreoretinal attachment, which is important for a better comprehension of common vitreoretinal attachment pathologies. PMID:26709927

  17. Early Gesture Provides a Helping Hand to Spoken Vocabulary Development for Children with Autism, Down Syndrome, and Typical Development

    ERIC Educational Resources Information Center

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie

    2017-01-01

    Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…

  18. The Development and Validation of the "Academic Spoken English Strategies Survey (ASESS)" for Non-Native English Speaking Graduate Students

    ERIC Educational Resources Information Center

    Schroeder, Rui M.

    2016-01-01

    This study reports on the three-year development and validation of a new assessment tool--the Academic Spoken English Strategies Survey (ASESS). The questionnaire is the first of its kind to assess the listening and speaking strategy use of non-native English speaking (NNES) graduate students. A combination of sources was used to develop the…

  19. A cascaded neuro-computational model for spoken word recognition

    NASA Astrophysics Data System (ADS)

    Hoya, Tetsuya; van Leeuwen, Cees

    2010-03-01

    In human speech recognition, words are analysed at both pre-lexical (i.e., sub-word) and lexical (word) levels. The aim of this paper is to propose a constructive neuro-computational model that incorporates both these levels as cascaded layers of pre-lexical and lexical units. The layered structure enables the system to handle the variability of real speech input. Within the model, receptive fields of the pre-lexical layer consist of radial basis functions; the lexical layer is composed of units that perform pattern matching between their internal template and a series of labels, corresponding to the winning receptive fields in the pre-lexical layer. The model adapts through self-tuning of all units, in combination with the formation of a connectivity structure through unsupervised (first layer) and supervised (higher layers) network growth. Simulation studies show that the model can achieve a level of performance in spoken word recognition similar to that of a benchmark approach using hidden Markov models, while enabling parallel access to word candidates in lexical decision making.
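The cascade described above can be caricatured in a few lines: a pre-lexical layer of Gaussian radial basis functions labels each incoming frame with its winning receptive field, and a lexical layer matches the resulting label sequence against stored word templates. Everything below, including the 2-D "acoustic" frames, the field centres, and the two toy words, is invented for illustration; the actual model's self-tuning and supervised/unsupervised network growth are omitted from this sketch.

```python
import numpy as np

# Pre-lexical layer: Gaussian receptive fields over (toy) acoustic frames.
centres = np.array([[0.0, 0.0],   # receptive field 0
                    [1.0, 0.0],   # receptive field 1
                    [0.0, 1.0]])  # receptive field 2

def winning_fields(frames, width=0.5):
    """Index of the maximally responding radial basis function per frame."""
    d2 = ((frames[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    resp = np.exp(-d2 / (2 * width ** 2))   # Gaussian RBF activations
    return resp.argmax(axis=1)

# Lexical layer: each word template is a sequence of receptive-field labels.
templates = {"word_A": (0, 1, 2), "word_B": (2, 1, 0)}

def recognise(frames):
    labels = tuple(winning_fields(frames))
    # Pattern matching: pick the template that agrees with the label
    # sequence at the most positions.
    return max(templates, key=lambda t: sum(a == b for a, b in
                                            zip(labels, templates[t])))

frames = np.array([[0.1, -0.1],   # noisy realisation of field 0
                   [0.9, 0.2],    # noisy realisation of field 1
                   [0.2, 1.1]])   # noisy realisation of field 2
```

Here `recognise(frames)` selects `"word_A"`, since the noisy frames activate fields 0, 1 and 2 in order; the layered structure is what lets the system absorb variability in the input, as the abstract notes.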

  20. Self-Ratings of Spoken Language Dominance: A Multilingual Naming Test (MINT) and Preliminary Norms for Young and Aging Spanish-English Bilinguals

    ERIC Educational Resources Information Center

    Gollan, Tamar H.; Weissberger, Gali H.; Runnqvist, Elin; Montoya, Rosa I.; Cera, Cynthia M.

    2012-01-01

    This study investigated correspondence between different measures of bilingual language proficiency contrasting self-report, proficiency interview, and picture naming skills. Fifty-two young (Experiment 1) and 20 aging (Experiment 2) Spanish-English bilinguals provided self-ratings of proficiency level, were interviewed for spoken proficiency, and…

  1. Ambiguity and Inconsistencies in Mathematics Spoken in the Classroom: The Need for Teacher Training and Rules for Communication of Mathematics

    ERIC Educational Resources Information Center

    Isaacson, M.D.; Srinivasan, S.; Lloyd, L.L.

    2012-01-01

    Mathematics has the potential for being spoken ambiguously. This is problematic for many students, in particular those who have disabilities that inhibit processing of printed material. This paper documents the magnitude of potential ambiguity arising from textbooks and provides a measure of the degree to which potential ambiguity is actualized…

  2. Elicited Speech from Graph Items on the Test of Spoken English[TM]. Research Reports. Report 74. RR-04-06

    ERIC Educational Resources Information Center

    Katz, Irvin R.; Xi, Xiaoming; Kim, Hyun-Joo; Cheng, Peter C. H.

    2004-01-01

    This research applied a cognitive model to identify item features that lead to irrelevant variance on the Test of Spoken English[TM] (TSE[R]). The TSE is an assessment of English oral proficiency and includes an item that elicits a description of a statistical graph. This item type sometimes appears to tap graph-reading skills--an irrelevant…

  3. The Differences between Spoken and Written Grammar in English, in Comparison with Vietnamese (Las Diferencias entre la Gramática Oral y Escrita del Idioma Inglés en Comparación con el Idioma Vietnamita)

    ERIC Educational Resources Information Center

    Thanh, Nguyen Cao

    2015-01-01

    The fundamental point of this paper is to describe and evaluate some differences between spoken and written grammar in English, and compare some of the points with Vietnamese. This paper illustrates that spoken grammar is less rigid than written grammar. Moreover, it highlights the distinction between speaking and writing in terms of subordination…

  4. Examination of validity in spoken language evaluations: Adult onset stuttering following mild traumatic brain injury.

    PubMed

    Roth, Carole R; Cornis-Pop, Micaela; Beach, Woodford A

    2015-01-01

    Reports of increased incidence of adult onset stuttering in veterans and service members with mild traumatic brain injury (mTBI) from combat operations in Iraq and Afghanistan lead to a reexamination of the neurogenic vs. psychogenic etiology of stuttering. This article proposes to examine the merit of the dichotomy between neurogenic and psychogenic bases of stuttering, including symptom exaggeration, for the evaluation and treatment of the disorder. Two case studies of adult onset stuttering in service members with mTBI from improvised explosive device blasts are presented in detail. Speech fluency was disrupted by abnormal pauses and speech hesitations, brief blocks, rapid repetitions, and occasional prolongations. There was also wide variability in the frequency of stuttering across topics and conversational situations. Treatment focused on reducing the frequency and severity of dysfluencies and included educational, psychological, environmental, and behavioral interventions. Stuttering characteristics as well as the absence of objective neurological findings ruled out neurogenic basis of stuttering in these two cases and pointed to psychogenic causes. However, the differential diagnosis had only limited value for developing the plan of care. The successful outcomes of the treatment serve to illustrate the complex interaction of neurological, psychological, emotional, and environmental factors of post-concussive symptoms and to underscore the notion that there are many facets to symptom presentation in post-combat health.

  5. Correlations between ultrasonographic characteristics of corpora lutea and systemic concentrations of progesterone during the discrete stages of corpora lutea lifespan and secretory activity in cyclic ewes.

    PubMed

    Gallienne, Jacqueline; Gregg, Caroline; LeBlanc, Evan; Yaakob, Norazlin; Wu, Di; Davies, Kate; Rawlings, Norman; Pierson, Roger; Deardon, Rob; Bartlewski, Pawel

    2012-05-01

    Associations between physical characteristics and functionality of corpora lutea (CL) have previously been reported in monovulatory species, although several studies in cattle and humans have refuted the existence of temporal relationships between CL size, echotexture and serum progesterone (P4) concentrations. The main objective of the present study was to examine whether or not there were correlations between ultrasonographic image attributes of CL and systemic concentrations of P4 during the discrete stages of the luteal phase in two breeds of sheep differing in ovulation rates (non-prolific Western White Face [WWF] ewes and prolific Finn [F] sheep). Transrectal ovarian ultrasonography utilized a 7.5-MHz linear-array transducer connected to a portable scanner (Aloka SSD-500) and the images were analyzed using commercially available image analytical software (Image ProPlus®) validated for the present application in sheep. The correlations were assessed using Pearson's Product Moment (PPM) analysis and also, to increase the accuracy of statistical tests, the analysis of covariance (ANCOVA), with the number of CL as a co-factor. In WWF ewes, serum concentrations of P4 correlated significantly with the total luteal area (TLA) during the CL growth phase (days 3-6; day 0 = ovulation) and functional luteolysis (days 12-15), and with numerical pixel values (NPVs--pixel intensity) during luteolysis; the results obtained by using two different statistical methods were generally similar. In prolific F ewes, serum P4 concentrations were directly correlated with TLA during CL growth (days 3-6; ANCOVA), functional luteolysis (days 13-14; PPM), and structural CL regression (days 11-14; PPM and ANCOVA), and with NPVs during functional luteolysis (PPM and ANCOVA). We concluded that systemic P4 concentrations could only be accurately predicted from the changes in luteal area during CL growth and regression, and from NPVs during luteolysis, in both prolific and non-prolific ewes.
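As a rough illustration of the two statistical approaches named in this abstract: a zero-order Pearson correlation between total luteal area (TLA) and P4, and a covariate-adjusted fit in which the number of CL enters as an additional regressor (a plain linear-model stand-in for the study's ANCOVA). All numbers below are invented, not data from the ewes in this study.

```python
import numpy as np

# Hypothetical per-ewe measurements on one day of the luteal phase.
tla = np.array([55., 62., 70., 78., 84., 90., 97., 105.])  # luteal area, mm^2
ncl = np.array([1.,  1.,  2.,  1.,  2.,  2.,  3.,  3.])    # number of CL
p4  = np.array([1.2, 1.5, 2.1, 1.9, 2.6, 2.8, 3.5, 3.9])   # progesterone, ng/mL

# Zero-order Pearson correlation between luteal area and progesterone.
r = np.corrcoef(tla, p4)[0, 1]

# Covariate adjustment: regress P4 on TLA *and* CL count; the TLA
# coefficient then reflects the association holding CL number constant.
X = np.column_stack([np.ones_like(tla), tla, ncl])
beta, *_ = np.linalg.lstsq(X, p4, rcond=None)
tla_slope = beta[1]
```

Treating the CL count as a regressor rather than a grouping factor is a simplification; the point is only that the zero-order and covariate-adjusted associations can disagree, which is why the study ran both PPM and ANCOVA.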

  6. The Influence of Topic Status on Written and Spoken Sentence Production

    PubMed Central

    Cowles, H. Wind; Ferreira, Victor S.

    2012-01-01

    Four experiments investigate the influence of topic status and givenness on how speakers and writers structure sentences. The results of these experiments show that when a referent is previously given, it is more likely to be produced early in both sentences and word lists, confirming prior work showing that givenness increases the accessibility of given referents. When a referent is previously given and assigned topic status, it is even more likely to be produced early in a sentence, but not in a word list. Thus, there appears to be an early mention advantage for topics that is present in both written and spoken modalities, but is specific to sentence production. These results suggest that information-structure constructs like topic exert an influence that is not based only on increased accessibility, but also reflects mapping to syntactic structure during sentence production. PMID:22408281

  7. How and When Accentuation Influences Temporally Selective Attention and Subsequent Semantic Processing during On-Line Spoken Language Comprehension: An ERP Study

    ERIC Educational Resources Information Center

    Li, Xiao-qing; Ren, Gui-qin

    2012-01-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…

  8. A Randomized Comparison of the Effect of Two Prelinguistic Communication Interventions on the Acquisition of Spoken Communication in Preschoolers with ASD

    ERIC Educational Resources Information Center

    Yoder, Paul; Stone, Wendy L.

    2006-01-01

    Purpose: This randomized group experiment compared the efficacy of 2 communication interventions (Responsive Education and Prelinguistic Milieu Teaching [RPMT] and the Picture Exchange Communication System [PECS]) on spoken communication in 36 preschoolers with autism spectrum disorders (ASD). Method: Each treatment was delivered to children for a…

  9. Use of Automated Scoring in Spoken Language Assessments for Test Takers with Speech Impairments. Research Report. ETS RR-17-42

    ERIC Educational Resources Information Center

    Loukina, Anastassia; Buzick, Heather

    2017-01-01

    This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses…

  10. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    NASA Astrophysics Data System (ADS)

    Feenaughty, Lynda

    judged each speech sample using the perceptual construct of Speech Severity on a visual analog scale. Additional measures obtained to describe participants included the Sentence Intelligibility Test (SIT), the 10-item Communication Participation Item Bank (CPIB), and standard biopsychosocial measures of depression (Beck Depression Inventory-Fast Screen; BDI-FS), fatigue (Fatigue Severity Scale; FSS), and overall disease severity (Expanded Disability Status Scale; EDSS). Healthy controls completed all measures, with the exception of the CPIB and EDSS. All data were analyzed using standard, descriptive and parametric statistics. For the MSCI group, the relationships between neuropsychological test scores and speech-language variables were explored for each speech task using Pearson correlations. The relationship between neuropsychological test scores and Speech Severity also was explored. Results and Discussion: Topic familiarity for descriptive discourse did not strongly influence speech production or perceptual variables; however, results indicated predicted task-related differences for some spoken language measures. With the exception of the MSCI group, all speaker groups produced the same or slower global speech timing (i.e., speech and articulatory rates), more silent and filled pauses, more grammatical and longer silent pause durations in spontaneous discourse compared to reading aloud. Results revealed no appreciable task differences for linguistic complexity measures. Results indicated group differences for speech rate. The MSCI group produced significantly faster speech rates compared to the MSDYS group. Both the MSDYS and the MSCI groups were judged to have significantly poorer perceived Speech Severity compared to typically aging adults. The Task × Group interaction was only significant for the number of silent pauses. The MSDYS group produced fewer silent pauses in spontaneous speech and more silent pauses in the reading task compared to other groups. Finally

  11. Seeing the Talker's Face Improves Free Recall of Speech for Young Adults With Normal Hearing but Not Older Adults With Hearing Loss.

    PubMed

    Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker

    2016-06-01

    Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility. Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise. We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.

  12. Top Languages Spoken by English Language Learners Nationally and by State. ELL Information Center Fact Sheet Series. No. 3

    ERIC Educational Resources Information Center

    Batalova, Jeanne; McHugh, Margie

    2010-01-01

    While English Language Learner (ELL) students in the United States speak more than 150 languages, Spanish is by far the most common home or first language, though it is not the top language spoken by ELLs in every state. This fact sheet, based on analysis of the U.S. Census Bureau's 2009 American Community Survey, documents the top languages spoken…

  13. A Spoken Language Intervention for School-Aged Boys with fragile X Syndrome

    PubMed Central

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2015-01-01

    Using a single case design, a parent-mediated spoken language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared story-telling using wordless picture books and targeted three empirically-derived language support strategies. All sessions were implemented via distance video-teleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data was collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed. PMID:27119214

  14. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    ERIC Educational Resources Information Center

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using auditory verbal memory test that was adapted according to the Rey Auditory Verbal Learning Test and developed in three versions: Pure spoken language version (SL), pure standard language version (SA), and…

  15. The Acoustic Trigger to Conceptualization: An Hypothesis Concerning the Role of the Spoken Word in the Development of Higher Mental Processes.

    ERIC Educational Resources Information Center

    Dance, Frank E. X.

    One of many aspects of the linguistic centrality of the spoken word is the "acoustic trigger" to conceptualization, the most significant primal trigger in human beings, which when activated results in contrast and comparison leading to symbolic conceptualization. The oral/aural mode, or vocal production and acoustic perception, is developmentally…

  16. Activating gender stereotypes during online spoken language processing: evidence from Visual World Eye Tracking.

    PubMed

    Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G

    2010-01-01

    This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.

  17. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    PubMed

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  18. A randomized trial comparison of the effects of verbal and pictorial naturalistic communication strategies on spoken language for young children with autism.

    PubMed

    Schreibman, Laura; Stahmer, Aubyn C

    2014-05-01

    Presently there is no consensus on the specific behavioral treatment of choice for targeting language in young nonverbal children with autism. This randomized clinical trial compared the effectiveness of a verbally based intervention, Pivotal Response Training (PRT), to a pictorially based behavioral intervention, the Picture Exchange Communication System (PECS), on the acquisition of spoken language by young (2-4 years), nonverbal or minimally verbal (≤9 words) children with autism. Thirty-nine children were randomly assigned to either the PRT or PECS condition. Participants received on average 247 h of intervention across 23 weeks. Dependent measures included overall communication, expressive vocabulary, pictorial communication, and parent satisfaction. Children in both intervention groups demonstrated increases in spoken language skills, with no significant difference between the two conditions. Seventy-eight percent of all children exited the program with more than 10 functional words. Parents were very satisfied with both programs but indicated PECS was more difficult to implement.

  19. Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition.

    PubMed

    Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M

    2017-11-01

    Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American-dominant meaning (e.g., the hat meaning of "bonnet") in a word association task if they heard the words in an American rather than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and a sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectal identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access.

  20. Bimodal Bilinguals Co-activate Both Languages during Spoken Comprehension

    PubMed Central

    Shook, Anthony; Marian, Viorica

    2012-01-01

    Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals’ and English monolinguals’ eye movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at competing items than at phonologically unrelated items, and looked more at competing items relative to monolinguals, indicating activation of the sign language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension. PMID:22770677