Science.gov

Sample records for adult spoken corpora

  1. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    ERIC Educational Resources Information Center

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Halle, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and…

  2. Experimental induction of corpora amylacea in adult rat brain.

    PubMed

    Schipper, H M

    1998-10-01

    Corpora amylacea (CA) are glycoproteinaceous inclusions that accumulate in astroglia and other brain cells as a function of advancing age and, to an even greater extent, in several human neurodegenerative conditions. The mechanisms responsible for their biogenesis and their subcellular origin(s) remain unclear. We previously demonstrated that the sulfhydryl agent, cysteamine (CSH), promotes the accumulation of CA-like inclusions in cultured rat astroglia. In the present study, we show that subcutaneous administration of CSH to adult rats (150 mg/kg for 6 weeks followed by a 5-week drug-washout period) elicits the accumulation of CA in many cortical and subcortical brain regions. As in the aging human brain and in CSH-treated rat astrocyte cultures, the inclusions are periodic acid-Schiff-positive and are consistently immunostained with antibodies directed against mitochondrial epitopes and ubiquitin. Our findings support our contention that mitochondria are important structural precursors of CA, and that CSH accelerates aging-like processes in rat astroglia both in vitro and in the intact brain.

  3. JH Biosynthesis by Reproductive Tissues and Corpora Allata in Adult Longhorned Beetles, Apriona germari

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We report on juvenile hormone (JH) biosynthesis from long-chain intermediates by specific reproductive system tissues and the corpora allata (CA) prepared from adult longhorned beetles, Apriona germari. Testes, male accessory glands (MAGs), ovaries and CA contain the long-chain intermediates in the ...

  4. The Effect of Redundant Cues on Comprehension of Spoken Messages by Aphasic Adults.

    ERIC Educational Resources Information Center

    Venus, Carol A.; Canter, Gerald J.

    1987-01-01

    Aphasic adults (N=16) with severe auditory comprehension impairment were evaluated for comprehension of redundant and nonredundant spoken and/or gestured messages. Results indicated redundancy was not reliably superior to spoken messages alone. (Author/DB)

  5. Spoken Oral Language and Adult Struggling Readers

    ERIC Educational Resources Information Center

    Bakhtiari, Dariush; Greenberg, Daphne; Patton-Terry, Nicole; Nightingale, Elena

    2015-01-01

    Oral language is a critical component to the development of reading acquisition. Much of the research concerning the relationship between oral language and reading ability is focused on children, while there is a paucity of research focusing on this relationship for adults who struggle with their reading. Oral language as defined in this paper…

  6. A Linguistic Inquiry and Word Count Analysis of the Adult Attachment Interview in Two Large Corpora

    PubMed Central

    Waters, Theodore E. A.; Steele, Ryan D.; Roisman, Glenn I.; Haydon, Katherine C.; Booth-LaForce, Cathryn

    2015-01-01

    An emerging literature suggests that variation in Adult Attachment Interview (AAI; George, Kaplan, & Main, 1985) states of mind about childhood experiences with primary caregivers is reflected in specific linguistic features captured by the Linguistic Inquiry and Word Count automated text analysis program (LIWC; Pennebaker, Booth, & Francis, 2007). The current report addressed limitations of prior studies in this literature by using two large AAI corpora (Ns = 826 and 857) and a broader range of linguistic variables, as well as examining associations of LIWC-derived AAI dimensions with key developmental antecedents. First, regression analyses revealed that dismissing states of mind were associated with transcripts that were more truncated and deemphasized discussion of the attachment relationship, whereas preoccupied states of mind were associated with longer, more conflicted, and angry narratives. Second, in aggregate, LIWC variables accounted for over a third of the variation in AAI dismissing and preoccupied states of mind, with regression weights cross-validating across samples. Third, LIWC-derived dismissing and preoccupied state of mind dimensions were associated with direct observations of maternal and paternal sensitivity as well as infant attachment security in childhood, replicating the pattern of results reported in Haydon, Roisman, Owen, Booth-LaForce, and Cox (2014) using coder-derived dismissing and preoccupation scores in the same sample. PMID:27065477
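
    The regression approach described above can be illustrated with a toy sketch: compute the proportion of words falling into a few categories per transcript, then regress an outcome on those proportions. The category lists, transcripts, and scores below are invented placeholders, not the LIWC dictionaries or AAI data.

    ```python
    import re
    import numpy as np
    from sklearn.linear_model import LinearRegression

    CATEGORIES = {                      # hypothetical mini-dictionaries, not the real LIWC ones
        "anger":    {"angry", "furious", "hate", "mad"},
        "family":   {"mother", "father", "parent", "home"},
        "negation": {"no", "not", "never", "none"},
    }

    def category_proportions(text):
        """Proportion of tokens falling into each word category."""
        tokens = re.findall(r"[a-z']+", text.lower())
        n = max(len(tokens), 1)
        return np.array([sum(t in words for t in tokens) / n
                         for words in CATEGORIES.values()])

    # Placeholder transcripts and coder-derived scores, purely for illustration
    transcripts = ["I was never angry at my mother.", "We had no real home, and I hated it."]
    scores = np.array([2.1, 5.4])

    X = np.vstack([category_proportions(t) for t in transcripts])
    model = LinearRegression().fit(X, scores)
    print(dict(zip(CATEGORIES, model.coef_)))
    ```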

  7. A Corpus-Based Study on Turkish Spoken Productions of Bilingual Adults

    ERIC Educational Resources Information Center

    Agçam, Reyhan; Bulut, Adem

    2016-01-01

    The current study investigated whether monolingual adult speakers of Turkish and bilingual adult speakers of Arabic and Turkish significantly differ regarding their spoken productions in Turkish. Accordingly, two groups of undergraduate students studying Turkish Language and Literature at a state university in Turkey were presented two videos on a…

  8. Differences between young and older adults' spoken language production in descriptions of negative versus neutral pictures.

    PubMed

    Castro, Nichol; James, Lori E

    2014-01-01

    Young and older participants produced oral picture descriptions that were analyzed to determine the impact of negative emotional content on spoken language production. An interaction was found for speech disfluencies: young adults' disfluencies did not vary, whereas older adults' disfluencies increased, for negative compared to neutral pictures. Young adults adopted a faster speech rate while describing negative compared to neutral pictures, but older adults did not. Reference errors were uncommon for both age groups, but occurred more during descriptions of negative than neutral pictures. Our findings indicate that negative content can be differentially disruptive to older adults' spoken language production, and add to the literature on aging, emotion, and cognition by exploring effects within the domain of language production.

  9. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    PubMed

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing.

  10. Relations among Linguistic and Cognitive Skills and Spoken Word Recognition in Adults with Cochlear Implants

    ERIC Educational Resources Information Center

    Collison, Elizabeth A.; Munson, Benjamin; Carney, Arlene Earley

    2004-01-01

    This study examined spoken word recognition in adults with cochlear implants (CIs) to determine the extent to which linguistic and cognitive abilities predict variability in speech-perception performance. Both a traditional consonant-vowel-consonant (CVC)-repetition measure and a gated-word recognition measure (F. Grosjean, 1996) were used.…

  11. Towards a Framework for Teaching Spoken Grammar

    ERIC Educational Resources Information Center

    Timmis, Ivor

    2005-01-01

    Since the advent of spoken corpora, descriptions of native speaker spoken grammar have become far more detailed and comprehensive. These insights, however, have been relatively slow to filter through to ELT practice. The aim of this article is to outline an approach to the teaching of native-speaker spoken grammar which is not only pedagogically…

  12. A Bruner-Potter effect in audition? Spoken word recognition in adult aging.

    PubMed

    Lash, Amanda; Wingfield, Arthur

    2014-12-01

    Bruner and Potter (1964) demonstrated the surprising finding that incrementally increasing the clarity of images until they were correctly recognized (ascending presentation) was less effective for recognition than presenting images in a single presentation at that same clarity level. This has been attributed to interference from incorrect perceptual hypotheses formed on the initial presentations under ascending conditions. We demonstrate an analogous effect for spoken word recognition in older adults, with the size of the effect predicted by working memory span. This effect did not appear for young adults, whose group spans exceeded that of the older adults.

  13. Relationships among vocabulary size, nonverbal cognition, and spoken word recognition in adults with cochlear implants

    NASA Astrophysics Data System (ADS)

    Collison, Elizabeth A.; Munson, Benjamin; Carney, Arlene E.

    2002-05-01

    Recent research has attempted to identify the factors that predict speech perception performance among users of cochlear implants (CIs). Studies have found that approximately 20%-60% of the variance in speech perception scores can be accounted for by factors including duration of deafness, etiology, type of device, and length of implant use, leaving approximately 50% of the variance unaccounted for. The current study examines the extent to which vocabulary size and nonverbal cognitive ability predict CI listeners' spoken word recognition. Fifteen postlingually deafened adults with Nucleus or Clarion CIs were given standardized assessments of nonverbal cognitive ability and expressive vocabulary size: the Expressive Vocabulary Test, the Test of Nonverbal Intelligence-III, and the Woodcock-Johnson-III Test of Cognitive Ability, Verbal Comprehension subtest. Two spoken word recognition tasks were administered. In the first, listeners identified isophonemic CVC words. In the second, listeners identified gated words varying in lexical frequency and neighborhood density. Analyses will examine the influence of lexical frequency and neighborhood density on the uniqueness point in the gating task, as well as relationships among nonverbal cognitive ability, vocabulary size, and the two spoken word recognition measures. [Work supported by NIH Grant P01 DC00110 and by the Lions 3M Hearing Foundation.]

  14. The gating paradigm: effects of presentation format on spoken word recognition by children and adults.

    PubMed

    Walley, A C; Michela, V L; Wood, D R

    1995-04-01

    This study focused on the impact of stimulus presentation format in the gating paradigm with age. Two presentation formats were employed--the standard, successive format and a duration-blocked one, in which gates from word onset were blocked by duration (i.e., gates for the same word were not temporally adjacent). In Experiment 1, the effect of presentation format on adults' recognition was assessed as a function of response format (written vs. oral). In Experiment 2, the effect of presentation format on kindergarteners', first graders', and adults' recognition was assessed with an oral response format only. Performance was typically poorer for the successive format than for the duration-blocked one. The role of response perseveration and negative feedback in producing this effect is considered, as is the effect of word frequency and cohort size on recognition. Although the successive format yields a conservative picture of recognition, presentation format did not have a markedly different effect across the three age levels studied. Thus, the gating paradigm would seem to be an appropriate one for making developmental comparisons of spoken word recognition.

  15. Constructing the Taiwanese Component of the Louvain International Database of Spoken English Interlanguage (LINDSEI)

    ERIC Educational Resources Information Center

    Huang, Lan-fen

    2014-01-01

    This paper reports the compilation of a corpus of Taiwanese students' spoken English, which is one of the sub-corpora of the Louvain International Database of Spoken English Interlanguage (LINDSEI) (Gilquin, De Cock, & Granger, 2010). LINDSEI is one of the largest corpora of learner speech. The compilation process follows the design criteria…

  16. Neural processing during older adults' comprehension of spoken sentences: age differences in resource allocation and connectivity.

    PubMed

    Peelle, Jonathan E; Troiani, Vanessa; Wingfield, Arthur; Grossman, Murray

    2010-04-01

    Speech comprehension remains largely preserved in older adults despite significant age-related neurophysiological change. However, older adults' performance declines more rapidly than that of young adults when listening conditions are challenging. We investigated the cortical network underlying speech comprehension in healthy aging using short sentences differing in syntactic complexity, with processing demands further manipulated through speech rate. Neural activity was monitored using blood oxygen level-dependent functional magnetic resonance imaging. Comprehension of syntactically complex sentences activated components of a core sentence-processing network in both young and older adults, including the left inferior and middle frontal gyri, left inferior parietal cortex, and left middle temporal gyrus. However, older adults showed reduced recruitment of inferior frontal regions relative to young adults; the individual degree of recruitment predicted accuracy at the more difficult fast speech rate. Older adults also showed increased activity in frontal regions outside the core sentence-processing network, which may have played a compensatory role. Finally, a functional connectivity analysis demonstrated reduced coherence between activated regions in older adults. We conclude that decreased activation of specialized processing regions, and limited ability to coordinate activity between regions, contribute to older adults' difficulty with sentence comprehension under difficult listening conditions.

  17. Online Lexical Competition during Spoken Word Recognition and Word Learning in Children and Adults

    ERIC Educational Resources Information Center

    Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth

    2013-01-01

    Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children…

  18. Effort Not Speed Characterizes Comprehension of Spoken Sentences by Older Adults with Mild Hearing Impairment

    PubMed Central

    Ayasse, Nicole D.; Lash, Amanda; Wingfield, Arthur

    2017-01-01

    In spite of the rapidity of everyday speech, older adults tend to keep up relatively well in day-to-day listening. In laboratory settings older adults do not respond as quickly as younger adults in off-line tests of sentence comprehension, but the question is whether comprehension itself is actually slower. Two unique features of the human eye were used to address this question. First, we tracked eye-movements as 20 young adults and 20 healthy older adults listened to sentences that referred to one of four objects pictured on a computer screen. Although the older adults took longer to indicate the referenced object with a cursor-pointing response, their gaze moved to the correct object as rapidly as that of the younger adults. Second, we concurrently measured dilation of the pupil of the eye as a physiological index of effort. This measure revealed that although poorer hearing acuity did not slow processing, success came at the cost of greater processing effort. PMID:28119598

  19. Decreased Sensitivity to Phonemic Mismatch in Spoken Word Processing in Adult Developmental Dyslexia

    ERIC Educational Resources Information Center

    Janse, Esther; de Bree, Elise; Brouwer, Susanne

    2010-01-01

    Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as "procodile for crocodile") for the atypical population of dyslexic adults to see to what…

  20. Spoken Lebanese.

    ERIC Educational Resources Information Center

    Feghali, Maksoud N.

    This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…

  1. Regulation of the corpora allata in male larvae of the cockroach Diploptera punctata

    SciTech Connect

    Paulson, C.R.

    1986-01-01

    The regulation of corpora allata was studied in final instar males of Diploptera punctata. The glands were manipulated in vivo and removed to determine the effect by in vitro radiochemical assay for juvenile hormone synthesis. Corpora allata were also treated with putative regulatory factors in vitro. During the final stadium the corpora allata were inhibited both by nerves and by humoral factors. Neural inhibition was shown by an increase in juvenile hormone synthesis following denervation of the corpora allata. This operation elicited an extra larval instar. Humoral inhibition was shown by the decline in juvenile hormone synthesis of adult female corpora allata following transplantation into final instar larval hosts, and conversely the increase in juvenile hormone synthesis by larval corpora allata following implantation into adult females. Humoral inhibition was prevented by decapitation of larvae prior to the head critical period for molting and restored by implantation of a larval brain, showing that the brain is the source of this inhibition.

  2. Fast Mapping Semantic Features: Performance of Adults with Normal Language, History of Disorders of Spoken and Written Language, and Attention Deficit Hyperactivity Disorder on a Word-Learning Task

    ERIC Educational Resources Information Center

    Alt, Mary; Gutmann, Michelle L.

    2009-01-01

    Purpose: This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD). Methods: Sixty-eight adults were required to associate a novel object with a novel label, and then…

  3. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults

    PubMed Central

    Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC nonsense words and nonsense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We…

  4. Integrating Learner Corpora and Natural Language Processing: A Crucial Step towards Reconciling Technological Sophistication and Pedagogical Effectiveness

    ERIC Educational Resources Information Center

    Granger, Sylviane; Kraif, Olivier; Ponton, Claude; Antoniadis, Georges; Zampa, Virginie

    2007-01-01

    Learner corpora, electronic collections of spoken or written data from foreign language learners, offer unparalleled access to many hitherto uncovered aspects of learner language, particularly in their error-tagged format. This article aims to demonstrate the role that the learner corpus can play in CALL, particularly when used in conjunction with…

  5. Is the time course of lexical activation and competition in spoken word recognition affected by adult aging? An event-related potential (ERP) study.

    PubMed

    Hunter, Cynthia R

    2016-10-01

    Adult aging is associated with decreased accuracy for recognizing speech, particularly in noisy backgrounds and for high neighborhood density words, which sound similar to many other words. In the current study, the time course of neighborhood density effects in young and older adults was compared using event-related potentials (ERP) and behavioral responses in a lexical decision task for spoken words and nonwords presented either in quiet or in noise. Target items sounded similar either to many or to few other words (neighborhood density) but were balanced for the frequency of their component sounds (phonotactic probability). Behavioral effects of density were similar across age groups, but the event-related potential effects of density differed as a function of age group. For young adults, density modulated the amplitude of both the N400 and the later P300 or late positive complex (LPC). For older adults, density modulated only the amplitude of the P300/LPC. Thus, spreading activation to the semantics of lexical neighbors, indexed by the N400 density effect, appears to be reduced or delayed in adult aging. In contrast, effects of density on P300/LPC amplitude were present in both age groups, perhaps reflecting attentional allocation to items that resemble few words in the mental lexicon. The results constitute the first evidence that ERP effects of neighborhood density are affected by adult aging. The age difference may reflect either a unitary density effect that is delayed by approximately 150ms in older adults, or multiple processes that are differentially affected by aging.

  6. Peptidomic Analysis of the Brain and Corpora Cardiaca-Corpora Allata Complex in the Bombyx mori

    PubMed Central

    Liu, Xiaoguang; Ning, Xia; Zhang, Yan; Chen, Wenfeng; Zhao, Zhangwu; Zhang, Qingwen

    2012-01-01

    The silkworm, Bombyx mori, is an important economic insect for silk production. However, many of the mature peptides relevant to its various life stages remain unknown. Using RP-HPLC, MALDI-TOF MS, and previously identified peptides from B. mori and other insects in the transcriptome database, we created peptide profiles showing a total of 6 ion masses that could be assigned to peptides in eggs, including one previously unidentified peptide. A further 49 peptides were assigned to larval brains. 17 new mature peptides were identified in isolated masses. 39 peptides were found in pupal brains with 8 unidentified peptides. 48 peptides were found in adult brains with 12 unidentified peptides. These new unidentified peptides showed highly significant matches in all MS analyses. These matches were then searched against the National Center for Biotechnology Information (NCBI) database to provide new annotations for these mature peptides. In total, 59 mature peptides in 19 categories were found in the brains of silkworms at the larval, pupal, and adult stages. These results demonstrate that peptidomic variation across different developmental stages can be dramatic. Moreover, the corpora cardiaca-corpora allata (CC-CA) complex was examined during the fifth larval instar. A total of 41 ion masses were assigned to peptides. PMID:23316247

  7. Text exposure predicts spoken production of complex sentences in eight- and twelve-year-old children and adults

    PubMed Central

    Montag, Jessica L.; MacDonald, Maryellen C.

    2015-01-01

    There is still much debate about the nature of the experiential and maturational changes that take place during childhood to bring about the sophisticated language abilities of an adult. The present study investigated text exposure as a possible source of linguistic experience that plays a role in the development of adult-like language abilities. Corpus analyses of object and passive relative clauses (Object: The book that the woman carried; Passive: The book that was carried by the woman) established the frequencies of these sentence types in child-directed speech and children's literature. We found that relative clauses of either type were more frequent in the written corpus, and that the ratio of passive to object relatives was much higher in the written corpus as well. This analysis suggests that passive relative clauses are much more frequent in a child's linguistic environment if they have high rates of text exposure. We then elicited object and passive relative clauses using a picture-description production task with eight- and twelve-year-old children and adults. Both group and individual differences were consistent with the corpus analyses, such that older individuals and individuals with more text exposure produced more passive relative clauses. These findings suggest that the qualitatively different patterns of text versus speech may be an important source of linguistic experience for the development of adult-like language behavior. PMID:25844625

  8. Spoken Records. Third Edition.

    ERIC Educational Resources Information Center

    Roach, Helen

    Surveying 75 years of accomplishment in the field of spoken recording, this reference work critically evaluates commercially available recordings selected for excellence of execution, literary or historical merit, interest, and entertainment value. Some types of spoken records included are early recording, documentaries, lectures, interviews,…

  9. Automatic translation among spoken languages

    NASA Astrophysics Data System (ADS)

    Walter, Sharon M.; Costigan, Kelly

    1994-02-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  10. Automatic translation among spoken languages

    NASA Technical Reports Server (NTRS)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  11. Research on Spoken Dialogue Systems

    NASA Technical Reports Server (NTRS)

    Aist, Gregory; Hieronymus, James; Dowding, John; Hockey, Beth Ann; Rayner, Manny; Chatzichrisafis, Nikos; Farrell, Kim; Renders, Jean-Michel

    2010-01-01

    Research in the field of spoken dialogue systems has been performed with the goal of making such systems more robust and easier to use in demanding situations. The term "spoken dialogue systems" signifies unified software systems containing speech-recognition, speech-synthesis, dialogue management, and ancillary components that enable human users to communicate, using natural spoken language or nearly natural prescribed spoken language, with other software systems that provide information and/or services.

  12. Proposed Framework for the Evaluation of Standalone Corpora Processing Systems: An Application to Arabic Corpora

    PubMed Central

    Al-Thubaity, Abdulmohsen; Alqifari, Reem

    2014-01-01

    Despite the accessibility of numerous online corpora, students and researchers engaged in the fields of Natural Language Processing (NLP), corpus linguistics, and language learning and teaching may encounter situations in which they need to develop their own corpora. Several commercial and free standalone corpora processing systems are available to process such corpora. In this study, we first propose a framework for the evaluation of standalone corpora processing systems and then use it to evaluate seven freely available systems. The proposed framework considers the usability, functionality, and performance of the evaluated systems while taking into consideration their suitability for Arabic corpora. While the results show that most of the evaluated systems exhibited comparable usability scores, the scores for functionality and performance were substantially different with respect to support for the Arabic language and N-gram profile generation. The results of our evaluation will help potential users of the evaluated systems to choose the system that best meets their needs. More importantly, the results will help the developers of the evaluated systems to enhance their systems, and developers of new corpora processing systems, by providing them with a reference framework. PMID:25610910
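
    As a rough illustration of the N-gram profile generation that differentiated the evaluated systems, the sketch below counts word bigrams in a short text. The sample sentence and parameters are assumptions made for the example, not material from the study.

    ```python
    from collections import Counter

    def ngram_profile(tokens, n=2):
        """Frequency profile of word n-grams for a token list."""
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    text = "النص العربي يحتاج إلى معالجة خاصة في بعض أنظمة المعالجة"  # sample Arabic sentence
    tokens = text.split()
    for gram, freq in ngram_profile(tokens, n=2).most_common(5):
        print(" ".join(gram), freq)
    ```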

  13. Teaching the Spoken Language.

    ERIC Educational Resources Information Center

    Brown, Gillian

    1981-01-01

    Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…

  14. Spoken Korean: Book One.

    ERIC Educational Resources Information Center

    Lukoff, Fred

    This text is designed for students planning to learn spoken Korean. Ten lessons and two review sections based on cultural experiences commonly shared by Koreans are included in the text. Grouped in series of five lessons, the instructional materials include (1) basic sentences, (2) word study and review of basic sentences, (3) listening…

  15. Functional characterization of an allatotropin receptor expressed in the corpora allata of mosquitoes.

    PubMed

    Nouzova, Marcela; Brockhoff, Anne; Mayoral, Jaime G; Goodwin, Marianne; Meyerhof, Wolfgang; Noriega, Fernando G

    2012-03-01

    Allatotropin is an insect neuropeptide with pleiotropic actions on a variety of different tissues. In the present work we describe the identification, cloning and functional and molecular characterization of an Aedes aegypti allatotropin receptor (AeATr) and provide a detailed quantitative study of the expression of the AeATr gene in the adult mosquito. Analysis of the tissue distribution of AeATr mRNA in adult females revealed high transcript levels in the nervous system (brain, abdominal, thoracic and ventral ganglia), corpora allata-corpora cardiaca complex and ovary. The receptor is also expressed in heart, hindgut and male testis and accessory glands. Separation of the corpora allata (CA) and corpora cardiaca followed by analysis of gene expression in the isolated glands revealed expression of the AeATr primarily in the CA. In the female CA, the AeATr mRNA levels were low in the early pupae, started increasing 6 h before adult eclosion and reached a maximum 24 h after female emergence. Blood feeding resulted in a decrease in transcript levels. The pattern of changes of AeATr mRNA resembles the changes in JH biosynthesis. Fluorometric Imaging Plate Reader recordings of calcium transients in HEK293 cells expressing the AeATr showed a selective response to A. aegypti allatotropin stimulation in the low nanomolar concentration range. Our studies suggest that the AeATr plays a role in the regulation of JH synthesis in mosquitoes.

  16. Mining Quality Phrases from Massive Text Corpora

    PubMed Central

    Liu, Jialu; Shang, Jingbo; Wang, Chi; Ren, Xiang; Han, Jiawei

    2015-01-01

    Text data are ubiquitous and play an essential role in big data applications. However, text data are mostly unstructured. Transforming unstructured text into structured units (e.g., semantically meaningful phrases) will substantially reduce semantic ambiguity and enhance the power and efficiency at manipulating such data using database technology. Thus mining quality phrases is a critical research problem in the field of databases. In this paper, we propose a new framework that extracts quality phrases from text corpora integrated with phrasal segmentation. The framework requires only limited training but the quality of phrases so generated is close to human judgment. Moreover, the method is scalable: both computation time and required space grow linearly as corpus size increases. Our experiments on large text corpora demonstrate the quality and efficiency of the new method. PMID:26705375
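
    The framework itself is not reproduced here, but one ingredient of quality-phrase mining can be sketched: scoring frequent adjacent word pairs by pointwise mutual information so that cohesive candidates outrank chance co-occurrences. The toy corpus and support threshold below are assumptions for the example, not the paper's method.

    ```python
    import math
    from collections import Counter

    corpus = [
        "we train a support vector machine on the text corpus",
        "the support vector machine separates the text corpus well",
    ]
    unigrams, bigrams = Counter(), Counter()
    for line in corpus:
        toks = line.split()
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    N = sum(unigrams.values())

    def pmi(pair):
        """Pointwise mutual information of an adjacent word pair."""
        w1, w2 = pair
        return math.log2((bigrams[pair] / N) / ((unigrams[w1] / N) * (unigrams[w2] / N)))

    candidates = [p for p, c in bigrams.items() if c >= 2]   # minimum support filter
    for pair in sorted(candidates, key=pmi, reverse=True):
        print(" ".join(pair), round(pmi(pair), 2))
    ```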

  17. Sharing Spoken Language: Sounds, Conversations, and Told Stories

    ERIC Educational Resources Information Center

    Birckmayer, Jennifer; Kennedy, Anne; Stonehouse, Anne

    2010-01-01

    Infants and toddlers encounter numerous spoken story experiences early in their lives: conversations, oral stories, and language games such as songs and rhymes. Many adults are even surprised to learn that children this young need these kinds of natural language experiences at all. Adults help very young children take a step along the path toward…

  18. Spoken name pronunciation evaluation

    NASA Astrophysics Data System (ADS)

    Tepperman, Joseph; Narayanan, Shrikanth

    2004-10-01

    Recognition of spoken names is an important ASR task since many speech applications can be associated with it. However, the task is also among the most difficult ones due to the large number of names, their varying origins, and the multiple valid pronunciations of any given name, largely dependent upon the speaker's mother tongue and familiarity with the name. In order to explore the speaker- and language-dependent pronunciation variability issues present in name pronunciation, a spoken name database was collected from 101 speakers with varying native languages. Each speaker was asked to pronounce 80 polysyllabic names, uniformly chosen from ten language origins. In preliminary experiments, various prosodic features were used to train Gaussian mixture models (GMMs) to identify misplaced syllabic emphasis within the name, at roughly 85% accuracy. Articulatory features (voicing, place, and manner of articulation) derived from MFCCs were also incorporated for that purpose. The combined prosodic and articulatory features were used to automatically grade the quality of name pronunciation. These scores can be used to provide meaningful feedback to foreign language learners. A detailed description of the name database and some preliminary results on the accuracy of detecting misplaced stress patterns will be reported.
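
    A rough sketch of the modeling step described above, under the assumption of one Gaussian mixture per class (correctly placed vs. misplaced syllabic emphasis) fit on prosodic feature vectors; the feature columns and values are synthetic placeholders, not the actual name database.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Columns stand in for prosodic features, e.g. syllable duration (s), mean F0 (Hz), energy
    correct   = rng.normal([0.20, 180.0, 0.60], [0.05, 10.0, 0.10], size=(200, 3))
    misplaced = rng.normal([0.30, 150.0, 0.40], [0.05, 10.0, 0.10], size=(200, 3))

    gmm_correct   = GaussianMixture(n_components=2, random_state=0).fit(correct)
    gmm_misplaced = GaussianMixture(n_components=2, random_state=0).fit(misplaced)

    def classify(features):
        """Label a token by whichever class model assigns the higher log-likelihood."""
        x = np.atleast_2d(features)
        return "correct" if gmm_correct.score(x) > gmm_misplaced.score(x) else "misplaced"

    print(classify([0.21, 178.0, 0.58]))
    ```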

  19. "A Unified Poet Alliance": The Personal and Social Outcomes of Youth Spoken Word Poetry Programming

    ERIC Educational Resources Information Center

    Weinstein, Susan

    2010-01-01

    This article places youth spoken word (YSW) poetry programming within the larger framework of arts education. Drawing primarily on transcripts of interviews with teen poets and adult teaching artists and program administrators, the article identifies specific benefits that participants ascribe to youth spoken word, including the development of…

  20. Sign Language Versus Spoken Language

    ERIC Educational Resources Information Center

    Stokoe, William C.

    1978-01-01

    In the debate over continuities versus discontinuities in the emergence of language, sign language is not taken to be the antithesis, but is presented as the antecedent of spoken languages. (Author/HP)

  1. Corpora and Language Assessment: The State of the Art

    ERIC Educational Resources Information Center

    Park, Kwanghyun

    2014-01-01

    This article outlines the current state of and recent developments in the use of corpora for language assessment and considers future directions with a special focus on computational methodology. Since corpora began to make inroads into language assessment in the 1990s, test developers have increasingly used them as a reference resource to…

  2. The Importance of Corpora in Translation Studies: A Practical Case

    ERIC Educational Resources Information Center

    Bermúdez Bausela, Montserrat

    2016-01-01

    This paper deals with the use of corpora in Translation Studies, particularly with the so-called "'ad hoc' corpus" or "translator's corpus" as a working tool both in the classroom and for the professional translator. We believe that corpora are an inestimable source not only for terminology and phraseology extraction (cf. Maia,…

  3. Corpora and Language Teaching: Just a Fling or Wedding Bells?

    ERIC Educational Resources Information Center

    Gabrielatos, Costas

    2005-01-01

    Electronic language corpora, and their attendant computer software, are proving increasingly influential in language teaching as sources of language descriptions and pedagogical materials. However, few teachers are clear about their nature or their relevance to language teaching. This paper defines corpora and their types, discusses their…

  4. Learner Corpora: The Missing Link in EAP Pedagogy

    ERIC Educational Resources Information Center

    Gilquin, Gaetanelle; Granger, Sylviane; Paquot, Magali

    2007-01-01

    This article deals with the place of learner corpora, i.e. corpora containing authentic language data produced by learners of a foreign/second language, in English for academic purposes (EAP) pedagogy and sets out to demonstrate that they have a valuable contribution to make to the field. Following an initial brief introduction to corpus-based…

  5. Properties of Spoken and Written Language. Technical Report No. 5.

    ERIC Educational Resources Information Center

    Chafe, Wallace; Danielwicz, Jane

    To find differences and similarities between spoken and written English, analyses were made of four specific kinds of language. Twenty adults, either graduate students or university professors, provided a sample of each of the following: conversations, lectures, informal letters, and academic papers. Conversations and lecture samples came from…

  6. Recognizing Young Readers' Spoken Questions

    ERIC Educational Resources Information Center

    Chen, Wei; Mostow, Jack; Aist, Gregory

    2013-01-01

    Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…

  7. Spoken (Yucatec) Maya. [Preliminary Edition].

    ERIC Educational Resources Information Center

    Blair, Robert W.; Vermont-Salas, Refugio

    This two-volume set of 18 tape-recorded lesson units represents a first attempt at preparing a course in the modern spoken language of some 300,000 inhabitants of the peninsula of Yucatan, the Guatemalan department of the Peten, and certain border areas of Belize. (A short account of the research and background of this material is given in the…

  8. 20-Hydroxyecdysone stimulation of juvenile hormone biosynthesis by the mosquito corpora allata.

    PubMed

    Areiza, Maria; Nouzova, Marcela; Rivera-Perez, Crisalejandra; Noriega, Fernando G

    2015-09-01

    Juvenile hormone III (JH) is synthesized by the corpora allata (CA) and plays a key role in mosquito development and reproduction. JH titer decreases in the last instar larvae allowing pupation and metamorphosis to progress. As the anti-metamorphic role of JH comes to an end, the CA of the late pupa (or pharate adult) again becomes "competent" to synthesize JH, which plays an essential role orchestrating reproductive maturation. 20-hydroxyecdysone (20E) prepares the pupae for ecdysis, and would be an ideal candidate to direct a developmental program in the CA of the pharate adult mosquito. In this study, we provide evidence that 20E acts as an age-linked hormonal signal, directing CA activation in the mosquito pupae. Stimulation of the inactive brain-corpora allata-corpora cardiaca complex (Br-CA-CC) of the early pupa (24 h before adult eclosion or -24 h) in vitro with 20E resulted in a remarkable increase in JH biosynthesis, as well as an increase in the activity of juvenile hormone acid methyltransferase (JHAMT). Addition of methyl farnesoate but not farnesoic acid also stimulated JH synthesis by the Br-CA-CC of the -24 h pupae, proving that epoxidase activity is present, but not JHAMT activity. Separation of the CA-CC complex from the brain (denervation) in the -24 h pupae also activated JH synthesis. Our results suggest that an increase in 20E titer might override an inhibitory effect of the brain on JH synthesis, phenocopying denervation. Altogether, these findings provide compelling evidence that 20E acts as a developmental signal that ensures proper reactivation of JH synthesis in the mosquito pupae.

  9. Effects of Aging and Noise on Real-Time Spoken Word Recognition: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Ben-David, Boaz M.; Chambers, Craig G.; Daneman, Meredyth; Pichora-Fuller, M. Kathleen; Reingold, Eyal M.; Schneider, Bruce A.

    2011-01-01

    Purpose: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. Method: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted…

  10. Adaptive interface for spoken dialog

    NASA Astrophysics Data System (ADS)

    Dusan, Sorin; Flanagan, James

    2002-05-01

    Speech has become increasingly important in human-computer interaction. Spoken dialog interfaces rely on automatic speech recognition, speech synthesis, language understanding, and dialog management. A main issue in dialog systems is that they typically are limited to pre-programmed vocabularies and sets of sentences. The research reported here focuses on developing an adaptive spoken dialog interface capable of acquiring new linguistic units and their corresponding semantics during the human-computer interaction. The adaptive interface identifies unknown words and phrases in the user's utterances and asks the user for the corresponding semantics. The user can provide the meaning or the semantic representation of the new linguistic units through multiple modalities, including speaking, typing, pointing, touching, or showing. The interface then stores the new linguistic units in a semantic grammar and creates new objects defining the corresponding semantic representation. This process takes place during natural interaction between user and computer and, thus, the interface does not have to be rewritten and compiled to incorporate the newly acquired language. Users can personalize the adaptive spoken interface for different domain applications, or according to their personal preferences. [Work supported by NSF.]
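
    A minimal sketch of the acquisition loop described above, assuming a toy dictionary-based semantic grammar: unknown words trigger a clarification step, and the supplied meaning is stored so that later utterances can be interpreted. The names and structures are illustrative, not the interface's actual implementation.

    ```python
    semantic_grammar = {"lamp": {"type": "device"}}   # toy starting grammar

    def interpret(utterance, clarify):
        """clarify(word) supplies a meaning for an unknown word (spoken, typed, pointed...)."""
        frames = {}
        for word in utterance.lower().split():
            if word not in semantic_grammar:
                semantic_grammar[word] = {"type": "user-defined", "gloss": clarify(word)}
            frames[word] = semantic_grammar[word]
        return frames

    # Simulated user supplying the meaning of the new word "dim" during the dialog
    print(interpret("dim lamp", clarify=lambda w: "reduce brightness"))
    ```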

  11. Spoken Dialogue Interfaces: Integrating Usability

    NASA Astrophysics Data System (ADS)

    Spiliotopoulos, Dimitris; Stavropoulou, Pepi; Kouroupetroglou, Georgios

    Usability is a fundamental requirement for natural language interfaces. Usability evaluation reflects the impact of the interface and its acceptance by users. This work examines the potential of usability evaluation in terms of issues and methodologies for spoken dialogue interfaces along with the appropriate designer-needs analysis. It offers a perspective on integrating usability into the spoken language interface design lifecycle and provides a framework description for creating and testing usable content and applications for conversational interfaces. Main concerns include identifying design issues for usability design and evaluation, the use of customer experience for the design of voice interfaces and dialogue, and the problems that arise from real-life deployment. Moreover, it presents a real-life paradigm of a hands-on approach for applying usability methodologies in a spoken dialogue application environment to compare against a DTMF approach. Finally, the scope and interpretation of results from both the designer and the user standpoint of usability evaluation are discussed.

  12. Corpora of Vietnamese texts: lexical effects of intended audience and publication place.

    PubMed

    Pham, Giang; Kohnert, Kathryn; Carney, Edward

    2008-02-01

    This article has two primary aims. The first is to introduce a new Vietnamese text-based corpus. The Corpora of Vietnamese Texts (CVT; Tang, 2006a) consists of approximately 1 million words drawn from newspapers and children's literature, and is available online at www.vnspeechtherapy.com/vi/CVT. The second aim is to investigate potential differences in lexical frequency and distributional characteristics in the CVT on the basis of place of publication (Vietnam or Western countries) and intended audience: adult-directed texts (newspapers) or child-directed texts (children's literature). We found clear differences between adult- and child-directed texts, particularly in the distributional frequencies of pronouns or kinship terms, which were more frequent in children's literature. Within child- and adult-directed texts, lexical characteristics did not differ on the basis of place of publication. Implications of these findings for future research are discussed.
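
    The kind of comparison reported for the CVT can be illustrated with a toy calculation of the relative frequency of kinship and pronoun terms in child- versus adult-directed text; the two miniature samples below are invented stand-ins, not CVT data.

    ```python
    from collections import Counter

    child_directed = "mẹ ơi con yêu mẹ con đi học với bà"                 # toy children's text
    adult_directed = "chính phủ công bố kế hoạch kinh tế mới hôm qua"     # toy newspaper text
    kinship_terms = {"mẹ", "con", "bà", "cha", "ông"}

    def relative_freq(text, targets):
        """Share of tokens that belong to the target word set."""
        tokens = text.split()
        counts = Counter(tokens)
        return sum(counts[t] for t in targets) / len(tokens)

    print("child-directed:", relative_freq(child_directed, kinship_terms))
    print("adult-directed:", relative_freq(adult_directed, kinship_terms))
    ```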

  13. New perspectives on corpora amylacea in the human brain.

    PubMed

    Augé, Elisabet; Cabezón, Itsaso; Pelegrí, Carme; Vilaplana, Jordi

    2017-02-03

    Corpora amylacea are structures of unknown origin and function that appear with age in human brains and are profuse in selected brain areas in several neurodegenerative conditions. They are constituted of glucose polymers and may contain waste elements derived from different cell types. As we previously found on particular polyglucosan bodies in mouse brain, we report here that corpora amylacea present some neo-epitopes that can be recognized by natural antibodies, a certain kind of antibodies that are involved in tissue homeostasis. We hypothesize that corpora amylacea, and probably some other polyglucosan bodies, are waste containers in which deleterious or residual products are isolated to be later eliminated through the action of the innate immune system. In any case, the presence of neo-epitopes on these structures and the existence of natural antibodies directed against them could become a new focal point for the study of both age-related and degenerative brain processes.

  14. New perspectives on corpora amylacea in the human brain

    PubMed Central

    Augé, Elisabet; Cabezón, Itsaso; Pelegrí, Carme; Vilaplana, Jordi

    2017-01-01

    Corpora amylacea are structures of unknown origin and function that appear with age in human brains and are profuse in selected brain areas in several neurodegenerative conditions. They are constituted of glucose polymers and may contain waste elements derived from different cell types. As we previously found on particular polyglucosan bodies in mouse brain, we report here that corpora amylacea present some neo-epitopes that can be recognized by natural antibodies, a certain kind of antibodies that are involved in tissue homeostasis. We hypothesize that corpora amylacea, and probably some other polyglucosan bodies, are waste containers in which deleterious or residual products are isolated to be later eliminated through the action of the innate immune system. In any case, the presence of neo-epitopes on these structures and the existence of natural antibodies directed against them could become a new focal point for the study of both age-related and degenerative brain processes. PMID:28155917

  15. Spoken English Language Development Among Native Signing Children With Cochlear Implants

    PubMed Central

    Davidson, Kathryn

    2014-01-01

    Bilingualism is common throughout the world, and bilingual children regularly develop into fluently bilingual adults. In contrast, children with cochlear implants (CIs) are frequently encouraged to focus on a spoken language to the exclusion of sign language. Here, we investigate the spoken English language skills of 5 children with CIs who also have deaf signing parents, and so receive exposure to a full natural sign language (American Sign Language, ASL) from birth, in addition to spoken English after implantation. We compare their language skills with hearing ASL/English bilingual children of deaf parents. Our results show comparable English scores for the CI and hearing groups on a variety of standardized language measures, exceeding previously reported scores for children with CIs with the same age of implantation and years of CI use. We conclude that natural sign language input does no harm and may mitigate negative effects of early auditory deprivation for spoken language development. PMID:24150489

  16. Using Online Corpora to Develop Students' Writing Skills

    ERIC Educational Resources Information Center

    Gilmore, Alex

    2009-01-01

    Large corpora such as the British National Corpus and the COBUILD Corpus and Collocations Sampler are now accessible, free of charge, online and can be usefully incorporated into a process writing approach to help develop students' writing skills. This article aims to familiarize readers with these resources and to show how they can be usefully…

  17. Relaxin bioactivity and immunoactivity in human corpora lutea.

    PubMed

    O'Byrne, E M; Flitcraft, J F; Sawyer, W K; Hochman, J; Weiss, G; Steinetz, B G

    1978-05-01

    Relaxin-like activity in extracts of corpora lutea (CL) from pregnant and non-pregnant women was determined by radioimmunoassay and by guinea pig pubic symphysis palpation assay. The biologically determined activity paralleled the immunoactivity of extracts of CL of pregnancy. The relaxin content of CL of non-pregnant women was too low for detection by the bioassay.

  18. Authenticating Corpora for Language Learning: A Problem and Its Resolution

    ERIC Educational Resources Information Center

    Mishan, Freda

    2004-01-01

    This paper questions the assumption that corpora are authentic, with particular reference to their application in language pedagogy. The author argues that, because of the form the corpus takes, authentic source texts forfeit a crucial criterion for authenticity, namely context, in the transition from source to electronic data. Other authentic…

  19. Proficiency Level--A Fuzzy Variable in Computer Learner Corpora

    ERIC Educational Resources Information Center

    Carlsen, Cecilie

    2012-01-01

    This article focuses on the proficiency level of texts in Computer Learner Corpora (CLCs). A claim is made that proficiency levels are often poorly defined in CLC design, and that the methods used for level assignment of corpus texts are not always adequate. Proficiency level can therefore best be described as a fuzzy variable in CLCs,…

  20. Some Benefits of Corpora as a Language Learning Tool

    ERIC Educational Resources Information Center

    Marjanovic, Tatjana

    2012-01-01

    What this paper is meant to do is share illustrations and insights into how English learners and teachers alike can benefit from using corpora in their work. Arguments are made for their multifaceted possibilities as grammatical, lexical and discourse pools suitable for discovering ways of the language, be they regularities or idiosyncrasies. The…

  1. How to Measure Development in Corpora? An Association Strength Approach

    ERIC Educational Resources Information Center

    Stoll, Sabine; Gries, Stefan Th.

    2009-01-01

    In this paper we propose a method for characterizing development in large longitudinal corpora. The method has the following three features: (i) it suggests how to represent development without assuming predefined stages; (ii) it includes caregiver speech/child-directed speech; (iii) it uses statistical association measures for investigating…

  2. Annotation of Korean Learner Corpora for Particle Error Detection

    ERIC Educational Resources Information Center

    Lee, Sun-Hee; Jang, Seok Bae; Seo, Sang-Kyu

    2009-01-01

    In this study, we focus on particle errors and discuss an annotation scheme for Korean learner corpora that can be used to extract heuristic patterns of particle errors efficiently. We investigate different properties of particle errors so that they can be later used to identify learner errors automatically, and we provide resourceful annotation…

  3. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
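
    The pipeline the abstract alludes to (collocation counts, a distributional representation, and a weighting function that transforms the space) can be sketched with positive pointwise mutual information (PPMI) applied to window-based co-occurrence counts; the corpus and window size are assumptions for the example.

    ```python
    import math
    from collections import Counter, defaultdict

    corpus = ["the cat sat on the mat", "the dog sat on the rug", "a cat and a dog slept"]
    window = 2
    cooc = defaultdict(Counter)

    for sentence in corpus:
        toks = sentence.split()
        for i, w in enumerate(toks):
            for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                if i != j:
                    cooc[w][toks[j]] += 1      # count words within the context window

    total = sum(sum(c.values()) for c in cooc.values())

    def ppmi(w, c):
        """Positive pointwise mutual information of a word-context pair."""
        if cooc[w][c] == 0:
            return 0.0
        p_wc = cooc[w][c] / total
        p_w = sum(cooc[w].values()) / total
        p_c = sum(cooc[c].values()) / total
        return max(0.0, math.log2(p_wc / (p_w * p_c)))

    print(ppmi("cat", "sat"), ppmi("cat", "the"))
    ```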

  4. Developmental differences in the influence of phonological similarity on spoken word processing in Mandarin Chinese.

    PubMed

    Malins, Jeffrey G; Gao, Danqi; Tao, Ran; Booth, James R; Shu, Hua; Joanisse, Marc F; Liu, Li; Desroches, Amy S

    2014-11-01

    The developmental trajectory of spoken word recognition has been well established in Indo-European languages, but to date remains poorly characterized in Mandarin Chinese. In this study, typically developing children (N=17; mean age 10;5) and adults (N=17; mean age 24) performed a picture-word matching task in Mandarin while we recorded ERPs. Mismatches diverged from expectations in different components of the Mandarin syllable; namely, word-initial phonemes, word-final phonemes, and tone. By comparing responses to different mismatch types, we uncovered evidence suggesting that both children and adults process words incrementally. However, we also observed key developmental differences in how subjects treated onset and rime mismatches. This was taken as evidence for a stronger influence of top-down processing on spoken word recognition in adults compared to children. This work therefore offers an important developmental component to theories of Mandarin spoken word recognition.

  5. Recent Changes in the Spoken Polish Language.

    ERIC Educational Resources Information Center

    Birkenmayer, Sigmund S.

    Both spoken and written Polish have undergone profound changes during the past twenty-eight years. The increasing urbanization of Polish culture and the forced change in Polish society are the main factors influencing the change in the language. Indirect evidence of changes which have occurred in the vocabulary and idioms of spoken Polish in the…

  6. Reader for Advanced Spoken Tamil. Final Report.

    ERIC Educational Resources Information Center

    Schiffman, Harold

    This final report describes the development of a textbook for advanced spoken Tamil. There is a marked difference between literary Tamil and spoken Tamil, and training in the former is not sufficient for speaking the language in everyday situations with reasonably educated native speakers. There is difficulty in finding suitable material that…

  7. How Do Raters Judge Spoken Vocabulary?

    ERIC Educational Resources Information Center

    Li, Hui

    2016-01-01

    The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…

  8. The Neural Substrates of Spoken Idiom Comprehension

    ERIC Educational Resources Information Center

    Hillert, Dieter G.; Buracas, Giedrius T.

    2009-01-01

    To examine the neural correlates of spoken idiom comprehension, we conducted an event-related functional MRI study with a "rapid sentence decision" task. The spoken sentences were equally familiar but varied in degrees of "idiom figurativeness". Our results show that "figurativeness" co-varied with neural activity in the left ventral dorsolateral…

  9. A Corpus-Based EAP Course for NNS Doctoral Students: Moving from Available Specialized Corpora to Self-Compiled Corpora

    ERIC Educational Resources Information Center

    Lee, David; Swales, John

    2006-01-01

    This paper presents a discussion of an experimental, innovative course in corpus-informed EAP for doctoral students. Participants were given access to specialized corpora of academic writing and speaking, instructed in the tools of the trade (web- and PC-based concordancers) and gradually inducted into the skills needed to best exploit the data…

  10. Come studiare e insegnare l'italiano attraverso i corpora (How To Study and Teach Italian through Corpora).

    ERIC Educational Resources Information Center

    Laviosa, Sara

    1999-01-01

    Examines recent studies of linguistic corpora in Italian, and presents the results of an analysis of "piace" and "piacciono" conducted on a corpus of 3.5 million words of written Italian, accessible at the University of Birmingham in England. Focuses on pedagogical applications of the data and on the inductive methodologies…

  11. Cognitive aging and hearing acuity: modeling spoken language comprehension

    PubMed Central

    Wingfield, Arthur; Amichetti, Nicole M.; Lash, Amanda

    2015-01-01

    The comprehension of spoken language has been characterized by a number of “local” theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model: where it is strong and where there are gaps to be filled. PMID:26124724

  12. Comparing syntactic complexity in medical and non-medical corpora.

    PubMed

    Campbell, D A; Johnson, S B

    2001-01-01

    With the growing use of Natural Language Processing (NLP) techniques as solutions in Medical Informatics, the need to quickly and efficiently create the knowledge structures used by these systems has grown concurrently. Automatic discovery of a lexicon for use by an NLP system through machine learning will require information about the syntax of medical language. Understanding the syntactic differences between medical and non-medical corpora may allow more efficient acquisition of a lexicon. Three experiments designed to quantify the syntactic differences in medical and non-medical corpora were conducted. The results show that the syntax of medical language shows less variation than non-medical language and is likely simpler. The differences were great enough to question the applicability of general language tools on medical language. These differences may reduce the difficulty of some free text machine learning problems by capitalizing on the simpler nature of narrative medical syntax.

  13. A Statistical Word-Level Translation Model for Comparable Corpora

    DTIC Science & Technology

    2000-06-01

    readily available resources such as corpora, thesauri, bilingual and multilingual lexicons and dictionaries. The acquisition of such resources has...could aid in Monolingual Information Retrieval (MIR) by methods of query expansion, and thesauri construction. To date, most of the existing...testing the limits of its performance. Future directions include testing the model with a monolingual comparable corpus, e.g. WSJ [42M] and either IACA/B

  14. Spoken Narrative Assessment: A Supplementary Measure of Children's Creativity

    ERIC Educational Resources Information Center

    Wong, Miranda Kit-Yi; So, Wing Chee

    2016-01-01

    This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…

  15. Feature-level sentiment analysis by using comparative domain corpora

    NASA Astrophysics Data System (ADS)

    Quan, Changqin; Ren, Fuji

    2016-06-01

    Feature-level sentiment analysis (SA) is able to provide more fine-grained SA on certain opinion targets and has a wider range of applications on E-business. This study proposes an approach based on comparative domain corpora for feature-level SA. The proposed approach makes use of word associations for domain-specific feature extraction. First, we assign a similarity score for each candidate feature to denote its similarity extent to a domain. Then we identify domain features based on their similarity scores on different comparative domain corpora. After that, dependency grammar and a general sentiment lexicon are applied to extract and expand feature-oriented opinion words. Lastly, the semantic orientation of a domain-specific feature is determined based on the feature-oriented opinion lexicons. In evaluation, we compare the proposed method with several state-of-the-art methods (including unsupervised and semi-supervised) using a standard product review test collection. The experimental results demonstrate the effectiveness of using comparative domain corpora.
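
    As a rough, hypothetical companion to the record above (not the authors' scoring), the sketch below marks a word as a domain-specific feature when its relative frequency in a target-domain corpus is much higher than in a comparative domain corpus; the corpora, threshold, and smoothing are all invented for illustration.

        from collections import Counter

        def domain_features(target_tokens, contrast_tokens, min_count=2, ratio=3.0):
            """Keep words whose relative frequency in the target-domain corpus is at
            least `ratio` times their (smoothed) frequency in the contrast corpus."""
            t_counts, c_counts = Counter(target_tokens), Counter(contrast_tokens)
            t_total, c_total = sum(t_counts.values()), sum(c_counts.values())
            features = {}
            for word, count in t_counts.items():
                if count < min_count:
                    continue
                p_target = count / t_total
                p_contrast = (c_counts[word] + 1) / (c_total + len(c_counts))  # add-one smoothing
                score = p_target / p_contrast
                if score >= ratio:
                    features[word] = score
            return features

        # Placeholder corpora standing in for, say, camera reviews vs. hotel reviews.
        camera = "battery lens zoom battery screen lens battery autofocus".split()
        hotel = "room staff breakfast room location staff pool".split()
        print(domain_features(camera, hotel))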

  16. Famous talker effects in spoken word recognition.

    PubMed

    Maibauer, Alisa M; Markis, Teresa A; Newell, Jessica; McLennan, Conor T

    2014-01-01

    Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers.

  17. Combining Treatment for Written and Spoken Naming

    PubMed Central

    Beeson, Pélagie M.; Egnor, Heather

    2007-01-01

    Individuals with left-hemisphere damage often have concomitant impairment of spoken and written language. Whereas some treatment studies have shown that reading paired with spoken naming can benefit both language modalities, little systematic research has been directed toward the treatment of spelling combined with spoken naming. The purpose of this study was to examine the therapeutic effect of pairing a lexical spelling treatment referred to as Copy and Recall Treatment (CART) with verbal repetition of target words. This approach (CART + Repetition) was compared with treatment using verbal repetition without the inclusion of orthographic training (Repetition Only). Two individuals with moderate aphasia and severe impairment of spelling participated in the study using a multiple baseline design across stimulus sets and treatment conditions. Both participants improved spelling of targeted words as well as spoken naming of those items, but improvement in spoken naming was marked for one individual in the CART + Repetition condition, while the other participant made smaller gains in spoken than written naming irrespective of treatment condition. Consideration of the participant profiles suggested that CART + Repetition provides greater benefit when there is some residual phonological ability and the treatment serves to stimulate links between orthography and phonology. PMID:17064445

  18. Integration of Partial Information within and across Modalities: Contributions to Spoken and Written Sentence Recognition

    ERIC Educational Resources Information Center

    Smith, Kimberly G.; Fogerty, Daniel

    2015-01-01

    Purpose: This study evaluated the extent to which partial spoken or written information facilitates sentence recognition under degraded unimodal and multimodal conditions. Method: Twenty young adults with typical hearing completed sentence recognition tasks in unimodal and multimodal conditions across 3 proportions of preservation. In the unimodal…

  19. Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items

    ERIC Educational Resources Information Center

    Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.

    2013-01-01

    An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…

  20. Talker Familiarity and Spoken Word Recognition in School-Age Children

    ERIC Educational Resources Information Center

    Levi, Susannah V.

    2015-01-01

    Research with adults has shown that spoken language processing is improved when listeners are familiar with talkers' voices, known as the familiar talker advantage. The current study explored whether this ability extends to school-age children, who are still acquiring language. Children were familiarized with the voices of three German-English…

  1. Predictors of Spoken Language Learning

    ERIC Educational Resources Information Center

    Wong, Patrick C. M.; Ettlinger, Marc

    2011-01-01

    We report two sets of experiments showing that the large individual variability in language learning success in adults can be attributed to neurophysiological, neuroanatomical, cognitive, and perceptual factors. In the first set of experiments, native English-speaking adults learned to incorporate lexically meaningful pitch patterns in words. We…

  2. Juvenile Hormone Biosynthesis Gene Expression in the corpora allata of Honey Bee (Apis mellifera L.) Female Castes

    PubMed Central

    Rosa, Gustavo Conrado Couto; Moda, Livia Maria; Martins, Juliana Ramos; Bitondi, Márcia Maria Gentile; Hartfelder, Klaus; Simões, Zilá Luz Paulino

    2014-01-01

    Juvenile hormone (JH) controls key events in the honey bee life cycle, viz. caste development and age polyethism. We quantified transcript abundance of 24 genes involved in the JH biosynthetic pathway in the corpora allata-corpora cardiaca (CA-CC) complex. The expression of six of these genes showing relatively high transcript abundance was contrasted with CA size, hemolymph JH titer, as well as JH degradation rates and JH esterase (jhe) transcript levels. Gene expression did not match the contrasting JH titers in queen and worker fourth instar larvae, but jhe transcript abundance and JH degradation rates were significantly lower in queen larvae. Consequently, transcriptional control of JHE is of importance in regulating larval JH titers and caste development. In contrast, the same analyses applied to adult worker bees allowed us to infer that the high JH levels in foragers are due to increased JH synthesis. Upon RNAi-mediated silencing of the methyl farnesoate epoxidase gene (mfe) encoding the enzyme that catalyzes methyl farnesoate-to-JH conversion, the JH titer was decreased, thus corroborating that JH titer regulation in adult honey bees depends on this final JH biosynthesis step. The molecular pathway differences underlying JH titer regulation in larval caste development versus adult age polyethism lead us to propose that mfe and jhe genes be assayed when addressing questions on the role(s) of JH in social evolution. PMID:24489805

  3. Juvenile hormone biosynthesis gene expression in the corpora allata of honey bee (Apis mellifera L.) female castes.

    PubMed

    Bomtorin, Ana Durvalina; Mackert, Aline; Rosa, Gustavo Conrado Couto; Moda, Livia Maria; Martins, Juliana Ramos; Bitondi, Márcia Maria Gentile; Hartfelder, Klaus; Simões, Zilá Luz Paulino

    2014-01-01

    Juvenile hormone (JH) controls key events in the honey bee life cycle, viz. caste development and age polyethism. We quantified transcript abundance of 24 genes involved in the JH biosynthetic pathway in the corpora allata-corpora cardiaca (CA-CC) complex. The expression of six of these genes showing relatively high transcript abundance was contrasted with CA size, hemolymph JH titer, as well as JH degradation rates and JH esterase (jhe) transcript levels. Gene expression did not match the contrasting JH titers in queen and worker fourth instar larvae, but jhe transcript abundance and JH degradation rates were significantly lower in queen larvae. Consequently, transcriptional control of JHE is of importance in regulating larval JH titers and caste development. In contrast, the same analyses applied to adult worker bees allowed us to infer that the high JH levels in foragers are due to increased JH synthesis. Upon RNAi-mediated silencing of the methyl farnesoate epoxidase gene (mfe) encoding the enzyme that catalyzes methyl farnesoate-to-JH conversion, the JH titer was decreased, thus corroborating that JH titer regulation in adult honey bees depends on this final JH biosynthesis step. The molecular pathway differences underlying JH titer regulation in larval caste development versus adult age polyethism lead us to propose that mfe and jhe genes be assayed when addressing questions on the role(s) of JH in social evolution.

  4. Professionals' Guidance about Spoken Language Multilingualism and Spoken Language Choice for Children with Hearing Loss

    ERIC Educational Resources Information Center

    Crowe, Kathryn; McLeod, Sharynne

    2016-01-01

    The purpose of this research was to investigate factors that influence professionals' guidance of parents of children with hearing loss regarding spoken language multilingualism and spoken language choice. Sixteen professionals who provide services to children and young people with hearing loss completed an online survey, rating the importance of…

  5. Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension.

    PubMed

    Sekerina, Irina A; Campanelli, Luca; Van Dyke, Julie A

    2016-01-01

    The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility to study retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects, when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals.

  6. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements

    PubMed Central

    Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.

    2016-01-01

    In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory in speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424

  7. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements.

    PubMed

    Hadar, Britt; Skrzypek, Joshua E; Wingfield, Arthur; Ben-David, Boaz M

    2016-01-01

    In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory in speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the "visual world" eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., "point at the candle"). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions.

  8. Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension

    PubMed Central

    Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.

    2016-01-01

    The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility to study retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects, when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974

  9. How Can We Use Corpus Wordlists for Language Learning? Interfaces between Computer Corpora and Expert Intervention

    ERIC Educational Resources Information Center

    Chen, Yu-Hua; Bruncak, Radovan

    2015-01-01

    With the advances in technology, wordlists retrieved from computer corpora have become increasingly popular in recent years. The lexical items in those wordlists are usually selected, according to a set of robust frequency and dispersion criteria, from large corpora of authentic and naturally occurring language. Corpus wordlists are of great value…

  10. The Role of Native and Learner Corpora in Vocabulary Test Design

    ERIC Educational Resources Information Center

    Akeel, Eman Saleh

    2016-01-01

    The growing field of corpus linguistics has engaged heavily with language pedagogy during the last two decades. This has encouraged researchers to look for further applications of corpora in language teaching and learning, and has led to the emergence of corpus use in language testing. The aim of this article is to provide an overview of…

  11. On-Line Access to Linguistically Annotated Text Corpora of Dutch via Internet.

    ERIC Educational Resources Information Center

    Kruyt, J. G.; Raaijmakers, S. A.; van der Kamp, P. H. J.; van Strien, R. J.

    Corpora of present-day Dutch developed by the Institute for Dutch Lexicology include two linguistically annotated corpora that can be accessed via Internet: a 5-million word corpus covering a variety of topics and text types, and a 27-million word newspaper corpus. The texts of both were acquired in machine-readable form and have been lemmatized…

  12. Use of English Corpora as a Primary Resource to Teach English to the Bengali Learners

    ERIC Educational Resources Information Center

    Dash, Niladri Sekhar

    2011-01-01

    In this paper we argue in favour of teaching English as a second language to the Bengali learners with direct utilisation of English corpora. The proposed strategy is meant to be assisted with computer and is based on data, information, and examples retrieved from the present-day English corpora developed with various text samples composed by…

  13. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    PubMed

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates.
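
    The study reports a within-subject 2 × 2 design analyzed with linear mixed models. A minimal statsmodels sketch of that style of analysis is shown below; the column names, simulated reaction times, and random-intercept-only structure are assumptions made for illustration and do not reproduce the authors' dataset or model.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n_subj, n_items = 45, 128

        # Simulated trial-level data: 2 x 2 within-subject design
        # (lexical frequency x first-syllable frequency); column names are invented.
        rows = []
        for subj in range(n_subj):
            subj_offset = rng.normal(0, 50)          # per-subject baseline shift
            for item in range(n_items):
                lex = item % 2                       # 0 = low, 1 = high lexical frequency
                syl = (item // 2) % 2                # 0 = low, 1 = high first-syllable frequency
                rt = 900 + subj_offset - 60 * lex + 40 * syl + rng.normal(0, 80)
                rows.append({"subject": subj, "lex_freq": lex, "syl_freq": syl, "rt": rt})
        data = pd.DataFrame(rows)

        # Random intercept for subject; fixed effects for the two frequency factors.
        model = smf.mixedlm("rt ~ lex_freq * syl_freq", data, groups=data["subject"])
        print(model.fit().summary())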

  14. 31 CFR 358.5 - Which bearer corpora or detached bearer coupons are eligible for conversion to non-transferable...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    31 CFR, Regulations Governing Book-Entry Conversion of Bearer Corpora and Detached Bearer Coupons, § 358.5: Which bearer corpora or detached bearer coupons are eligible... associated with the corpus are not submitted with the corpus, the corpus will be converted to a...

  15. 31 CFR 358.4 - Which bearer corpora or detached bearer coupons are eligible for conversion to transferable BECCS...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    31 CFR, Regulations Governing Book-Entry Conversion of Bearer Corpora and Detached Bearer Coupons, § 358.4: Which bearer corpora or detached bearer coupons are eligible for conversion to transferable BECCS or CUBES securities? (a) For a callable corpus to be eligible...

  16. 31 CFR 358.3 - Are there any bearer corpora or detached bearer coupons that are not eligible for conversion?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    31 CFR, Bureau of the Public Debt, Regulations Governing Book-Entry Conversion of Bearer Corpora and Detached Bearer Coupons, § 358.3: Are there any bearer corpora or detached bearer coupons that are not eligible...

  17. 31 CFR 358.8 - Are there fees for the conversion of bearer corpora or detached bearer coupons?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    31 CFR, Regulations Governing Book-Entry Conversion of Bearer Corpora and Detached Bearer Coupons, § 358.8: Are there fees for the conversion of bearer corpora or detached bearer coupons? We do not charge...

  18. Spoken Grammar and Its Role in the English Language Classroom

    ERIC Educational Resources Information Center

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  19. Handbook for Spoken Mathematics: (Larry's Speakeasy).

    ERIC Educational Resources Information Center

    Chang, Lawrence A.; And Others

    This handbook is directed toward those who have to deal with spoken mathematics, yet have insufficient background to know the correct verbal expression for the written symbolic one. It compiles consistent and well-defined ways of uttering mathematical expressions so listeners will receive clear, unambiguous, and well-pronounced representations.…

  20. Spoken Cochabamba Quechua, Units 13-24.

    ERIC Educational Resources Information Center

    Lastra, Yolanda; Sola, Donald F.

    Units 13-24 of the Spoken Cochabamba Quechua course follow the general format of the first volume (Units 1-12). This second volume is intended for use in an intermediate or advanced course and includes more complex dialogs, conversations, "listening-ins," and dictations, as well as grammar and exercise sections covering additional…

  1. Well Spoken: Teaching Speaking to All Students

    ERIC Educational Resources Information Center

    Palmer, Erik

    2011-01-01

    All teachers at all grade levels in all subjects have speaking assignments for students, but many teachers believe they don't know how to teach speaking, and many even fear public speaking themselves. In his new book, "Well Spoken", veteran teacher and education consultant Erik Palmer shares the art of teaching speaking in any classroom. Teachers…

  2. "Jaja" in Spoken German: Managing Knowledge Expectations

    ERIC Educational Resources Information Center

    Taleghani-Nikazm, Carmen; Golato, Andrea

    2016-01-01

    In line with the other contributions to this issue on teaching pragmatics, this paper provides teachers of German with a two-day lesson plan for integrating authentic spoken language and its associated cultural background into their teaching. Specifically, the paper discusses how "jaja" and its phonetic variants are systematically used…

  3. Automatic Discrimination of Emotion from Spoken Finnish

    ERIC Educational Resources Information Center

    Toivanen, Juhani; Vayrynen, Eero; Seppanen, Tapio

    2004-01-01

    In this paper, experiments on the automatic discrimination of basic emotions from spoken Finnish are described. For the purpose of the study, a large emotional speech corpus of Finnish was collected; 14 professional actors acted as speakers, and simulated four primary emotions when reading out a semantically neutral text. More than 40 prosodic…

  4. Learning and Consolidation of Novel Spoken Words

    ERIC Educational Resources Information Center

    Davis, Matthew H.; Di Betta, Anna Maria; Macdonald, Mark J. E.; Gaskell, Gareth

    2009-01-01

    Two experiments explored the neural mechanisms underlying the learning and consolidation of novel spoken words. In Experiment 1, participants learned two sets of novel words on successive days. A subsequent recognition test revealed high levels of familiarity for both sets. However, a lexical decision task showed that only novel words learned on…

  5. The Company That Words Keep: Comparing the Statistical Structure of Child- versus Adult-Directed Language

    ERIC Educational Resources Information Center

    Hills, Thomas

    2013-01-01

    Does child-directed language differ from adult-directed language in ways that might facilitate word learning? Associative structure (the probability that a word appears with its free associates), contextual diversity, word repetitions and frequency were compared longitudinally across six language corpora, with four corpora of language directed at…
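
    As an illustration only (these are generic operationalizations, not necessarily the ones used in the study above), word frequency and contextual diversity can be computed per corpus and compared between child-directed and adult-directed samples; the toy utterances below are invented.

        from collections import Counter

        def frequency_and_diversity(utterances):
            """Per-word raw frequency and contextual diversity, where diversity is
            the number of distinct utterances a word occurs in."""
            freq = Counter(w for utt in utterances for w in utt.split())
            diversity = Counter(w for utt in utterances for w in set(utt.split()))
            return freq, diversity

        child_directed = ["look at the doggy", "the doggy says woof", "nice doggy"]
        adult_directed = ["the committee reviewed the proposal", "the proposal was rejected"]

        for label, corpus in [("child-directed", child_directed), ("adult-directed", adult_directed)]:
            freq, div = frequency_and_diversity(corpus)
            print(label, freq.most_common(3), div.most_common(3))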

  6. Neural stages of spoken, written, and signed word processing in beginning second language learners.

    PubMed

    Leonard, Matthew K; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I; Halgren, Eric

    2013-01-01

    We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.

  7. Towards Environment-Independent Spoken Language Systems

    DTIC Science & Technology

    1990-01-01

    ...applications of spectral subtraction and spectral equalization for speech recognition systems include the work of Van Compernolle [5] and Stern and Acero [12]... Acero and Stern [1] proposed an approach to environment normalization in the cepstral domain, going beyond the noise stripping problem. In this paper we...

  8. Spoken word recognition without a TRACE

    PubMed Central

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
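
    A minimal sketch of the string-kernel idea mentioned in the abstract, not the authors' implementation: each word is reduced to a time-invariant bag of diphones, and lexical similarity is the cosine between diphone counts. The letter-string "phoneme" transcriptions below are placeholders.

        import math
        from collections import Counter

        def diphones(phonemes):
            """Bag of adjacent phoneme pairs: a time-invariant word representation."""
            return Counter(zip(phonemes, phonemes[1:]))

        def kernel(word_a, word_b):
            """Cosine similarity between the diphone bags of two words."""
            a, b = diphones(word_a), diphones(word_b)
            dot = sum(a[d] * b[d] for d in a)
            norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        # Toy lexicon spelled as rough letter-for-phoneme strings (illustrative only).
        print(kernel("kandl", "kandi"))   # "candle" vs. "candy": large overlap
        print(kernel("kandl", "sandl"))   # "candle" vs. "sandal": partial overlap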

  9. Development of a Spoken Language System

    DTIC Science & Technology

    1992-04-01

    ...military logistical transportation planning domain. Created a videotape to illustrate these capabilities and some potential applications of spoken... Participated in all evaluation tests. We now give a brief description of these highlights. During this project we modified our natural language system... processor. (For this part of our system, we have found a value of N=5 to give best understanding results.) The natural language system processes these...

  10. "Do That Again": Evaluating Spoken Dialogue Interfaces

    NASA Technical Reports Server (NTRS)

    James, Frankie; Rayner, Manny; Hockey, Beth Ann

    2000-01-01

    We present a new technique for evaluating spoken dialogue interfaces that allows us to separate the dialogue behavior from the rest of the speech system. By using a dialogue simulator that we have developed, we can gather usability data on the system's dialogue interaction and behaviors that can guide improvements to the speech interface. Preliminary testing has shown promising results, suggesting that it is possible to test properties of dialogue separately from other factors such as recognition quality.

  11. A Spoken English Recognition Expert System.

    DTIC Science & Technology

    1983-09-01

    ...by Avron Barr and Edward A. Feigenbaum. DTIC document number AD A076873, 1979. 11. Kabrisky, Matthew. A Proposed Model for Visual Information Processing... functionally equivalent, in many ways, to the syntax processing of spoken English in the human brain. Because it closely models the syntax processing... previously described, one should consider the following model (Figure 1).

  12. Models of spoken-word recognition.

    PubMed

    Weber, Andrea; Scharenborg, Odette

    2012-05-01

    All words of the languages we know are stored in the mental lexicon. Psycholinguistic models describe in which format lexical knowledge is stored and how it is accessed when needed for language use. The present article summarizes key findings in spoken-word recognition by humans and describes how models of spoken-word recognition account for them. Although current models of spoken-word recognition differ considerably in the details of implementation, there is general consensus among them on at least three aspects: multiple word candidates are activated in parallel as a word is being heard, activation of word candidates varies with the degree of match between the speech signal and stored lexical representations, and activated candidate words compete for recognition. No consensus has been reached on other aspects such as the flow of information between different processing levels, and the format of stored prelexical and lexical representations. WIREs Cogn Sci 2012, 3:387-401. doi: 10.1002/wcs.1178 For further resources related to this article, please visit the WIREs website.

  13. Use of Partial Information by Cochlear Implant Users and Listeners with Normal Hearing in Identifying Spoken Words: Some Preliminary Analyses.

    ERIC Educational Resources Information Center

    Lachs, Lorin; Weiss, Jonathan W.; Pisoni, David B.

    2000-01-01

    An error analysis of the word recognition responses of 20 adult cochlear implant users and 19 typical listeners was conducted to determine the types of partial information used when they identified spoken words under auditory-alone and audiovisual conditions. Results indicate there were no significant interactions with hearing status. (Contains…

  14. Setting the Tone: An ERP Investigation of the Influences of Phonological Similarity on Spoken Word Recognition in Mandarin Chinese

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2012-01-01

    We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event related potentials were recorded while adult native speakers of Mandarin ("N" = 19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following…

  15. Inducing Multilingual Text Analysis Tools via Robust Projection across Aligned Corpora

    DTIC Science & Technology

    2001-01-01

    ...system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity... Keywords: multilingual, text analysis, part-of-speech tagging, noun phrase bracketing, named entity, morphology, lemmatization, parallel corpora

  16. How Hierarchical Topics Evolve in Large Text Corpora.

    PubMed

    Cui, Weiwei; Liu, Shixia; Wu, Zhuofeng; Wei, Hao

    2014-12-01

    Using a sequence of topic trees to organize documents is a popular way to represent hierarchical and evolving topics in text corpora. However, following evolving topics in the context of topic trees remains difficult for users. To address this issue, we present an interactive visual text analysis approach to allow users to progressively explore and analyze the complex evolutionary patterns of hierarchical topics. The key idea behind our approach is to exploit a tree cut to approximate each tree and allow users to interactively modify the tree cuts based on their interests. In particular, we propose an incremental evolutionary tree cut algorithm with the goal of balancing 1) the fitness of each tree cut and the smoothness between adjacent tree cuts; 2) the historical and new information related to user interests. A time-based visualization is designed to illustrate the evolving topics over time. To preserve the mental map, we develop a stable layout algorithm. As a result, our approach can quickly guide users to progressively gain profound insights into evolving hierarchical topics. We evaluate the effectiveness of the proposed method on Amazon's Mechanical Turk and real-world news data. The results show that users are able to successfully analyze evolving topics in text data.

  17. Citation Matching in Sanskrit Corpora Using Local Alignment

    NASA Astrophysics Data System (ADS)

    Prasad, Abhinandan S.; Rao, Shrisha

    Citation matching is the problem of finding which citation occurs in a given textual corpus. Most existing citation matching work is done on scientific literature. The goal of this paper is to present methods for performing citation matching on Sanskrit texts. Exact matching and approximate matching are the two methods for performing citation matching. The exact matching method checks for exact occurrence of the citation with respect to the textual corpus. Approximate matching is a fuzzy string-matching method which computes a similarity score between an individual line of the textual corpus and the citation. The Smith-Waterman-Gotoh algorithm for local alignment, which is generally used in bioinformatics, is used here for calculating the similarity score. This similarity score is a measure of the closeness between the text and the citation. The exact- and approximate-matching methods are evaluated and compared. The methods presented can be easily applied to corpora in other Indic languages like Kannada, Tamil, etc. The approximate-matching method can in particular be used in the compilation of critical editions and plagiarism detection in a literary work.
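
    The record names the Smith-Waterman-Gotoh algorithm; the sketch below implements the simpler Smith-Waterman variant with a linear gap penalty (i.e., without Gotoh's affine-gap refinement) to score how well a citation aligns locally with a line of text. The scoring parameters and the normalization by citation length are arbitrary choices for illustration.

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
            """Best local-alignment score between strings a and b
            (Smith-Waterman with a linear gap penalty)."""
            h = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
            best = 0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
                    best = max(best, h[i][j])
            return best

        citation = "dharmakshetre kurukshetre"
        line = "dharmakshetre kurukshetre samaveta yuyutsavah"
        # Normalize by the maximum attainable score so 1.0 means a perfect local match.
        print(smith_waterman(citation, line) / (2 * len(citation)))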

  18. An fMRI Study of Concreteness Effects during Spoken Word Recognition in Aging. Preservation or Attenuation?

    PubMed Central

    Roxbury, Tracy; McMahon, Katie; Coulthard, Alan; Copland, David A.

    2016-01-01

    It is unclear whether healthy aging influences concreteness effects (i.e., the processing advantage seen for concrete over abstract words) and its associated neural mechanisms. We conducted an fMRI study on young and older healthy adults performing auditory lexical decisions on concrete vs. abstract words. We found that spoken comprehension of concrete and abstract words appears relatively preserved for healthy older individuals, including the concreteness effect. This preserved performance was supported by altered activity in left hemisphere regions including the inferior and middle frontal gyri, angular gyrus, and fusiform gyrus. This pattern is consistent with age-related compensatory mechanisms supporting spoken word processing. PMID:26793097

  19. Talker familiarity and spoken word recognition in school-age children

    PubMed Central

    Levi, Susannah V.

    2014-01-01

    Research with adults has shown that spoken language processing is improved when listeners are familiar with talkers’ voices, known as the familiar talker advantage. The current study explored whether this ability extends to school-age children, who are still acquiring language. Children were familiarized with the voices of three German–English bilingual talkers and were tested on the speech of six bilinguals, three of whom were familiar. Results revealed that children do show improved spoken language processing when they are familiar with the talkers, but this improvement was limited to highly familiar lexical items. This restriction of the familiar talker advantage is attributed to differences in the representation of highly familiar and less familiar lexical items. In addition, children did not exhibit accent-general learning; despite having been exposed to German-accented talkers during training, there was no improvement for novel German-accented talkers. PMID:25159173

  20. Spoken Language Research and ELT: Where Are We Now?

    ERIC Educational Resources Information Center

    Timmis, Ivor

    2012-01-01

    This article examines the relationship between spoken language research and ELT practice over the last 20 years. The first part is retrospective. It seeks first to capture the general tenor of recent spoken research findings through illustrative examples. The article then considers the sociocultural issues that arose when the relevance of these…

  1. Beyond Single Words: The Most Frequent Collocations in Spoken English

    ERIC Educational Resources Information Center

    Shin, Dongkwang; Nation, Paul

    2008-01-01

    This study presents a list of the highest frequency collocations of spoken English based on carefully applied criteria. In the literature, more than forty terms have been used for designating multi-word units, which are generally not well defined. To avoid this confusion, six criteria are strictly applied. The ten million word BNC spoken section…

  2. Direction Asymmetries in Spoken and Signed Language Interpreting

    ERIC Educational Resources Information Center

    Nicodemus, Brenda; Emmorey, Karen

    2013-01-01

    Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…

  3. Attentional Capture of Objects Referred to by Spoken Language

    ERIC Educational Resources Information Center

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  4. Automatic Activation of Orthography in Spoken Word Recognition: Pseudohomograph Priming

    ERIC Educational Resources Information Center

    Taft, Marcus; Castles, Anne; Davis, Chris; Lazendic, Goran; Nguyen-Hoan, Minh

    2008-01-01

    There is increasing evidence that orthographic information has an impact on spoken word processing. However, much of this evidence comes from tasks that are subject to strategic effects. In the three experiments reported here, we examined activation of orthographic information during spoken word processing within a paradigm that is unlikely to…

  5. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  6. Distinguish Spoken English from Written English: Rich Feature Analysis

    ERIC Educational Resources Information Center

    Tian, Xiufeng

    2013-01-01

    This article aims at the feature analysis of four expository essays (Text A/B/C/D) written by secondary school students with a focus on the differences between spoken and written language. Texts C and D are better written compared with the other two (Texts A & B), which are considered to use more spoken-style language. The language features are…

  7. Presentation video retrieval using automatically recovered slide and spoken text

    NASA Astrophysics Data System (ADS)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.

  8. Syllables in the processing of spoken Italian.

    PubMed

    Tabossi, P; Collina, S; Mazzetti, M; Zoppello, M

    2000-04-01

    Five experiments explored the role of the syllable in the processing of spoken Italian. According to the syllabic hypothesis, the sublexical unit used by speakers of Romance languages to segment speech and access the lexicon is the syllable. However, languages with different degrees of acoustic-phonetic transparency give rise to syllabic effects that vary in robustness. It follows from this account that speakers of phonologically similar languages should behave in a similar way. By exploiting the similarities between Spanish and Italian, the authors tested this prediction in Experiments 1-4. Indeed, Italian listeners were found to produce syllabic effects similar to those observed in Spanish listeners. In Experiment 5, the predictions of the syllabic hypothesis with respect to lexical access were tested. The results corroborated these predictions. The findings are discussed in relation to current models of speech processing.

  9. Automatic discrimination of emotion from spoken Finnish.

    PubMed

    Toivanen, Juhani; Väyrynen, Eero; Seppänen, Tapio

    2004-01-01

    In this paper, experiments on the automatic discrimination of basic emotions from spoken Finnish are described. For the purpose of the study, a large emotional speech corpus of Finnish was collected; 14 professional actors acted as speakers, and simulated four primary emotions when reading out a semantically neutral text. More than 40 prosodic features were derived and automatically computed from the speech samples. Two application scenarios were tested: the first scenario was speaker-independent for a small domain of speakers while the second scenario was completely speaker-independent. Human listening experiments were conducted to assess the perceptual adequacy of the emotional speech samples. Statistical classification experiments indicated that, with the optimal combination of prosodic feature vectors, automatic emotion discrimination performance close to human emotion recognition ability was achievable.

  10. Hierarchical processing in spoken language comprehension.

    PubMed

    Davis, Matthew H; Johnsrude, Ingrid S

    2003-04-15

    Understanding spoken language requires a complex series of processing stages to translate speech sounds into meaning. In this study, we use functional magnetic resonance imaging to explore the brain regions that are involved in spoken language comprehension, fractionating this system into sound-based and more abstract higher-level processes. We distorted English sentences in three acoustically different ways, applying each distortion to varying degrees to produce a range of intelligibility (quantified as the number of words that could be reported) and collected whole-brain echo-planar imaging data from 12 listeners using sparse imaging. The blood oxygenation level-dependent signal correlated with intelligibility along the superior and middle temporal gyri in the left hemisphere and in a less-extensive homologous area on the right, the left inferior frontal gyrus (LIFG), and the left hippocampus. Regions surrounding auditory cortex, bilaterally, were sensitive to intelligibility but also showed a differential response to the three forms of distortion, consistent with sound-form-based processes. More distant intelligibility-sensitive regions within the superior and middle temporal gyri, hippocampus, and LIFG were insensitive to the acoustic form of sentences, suggesting more abstract nonacoustic processes. The hierarchical organization suggested by these results is consistent with cognitive models and auditory processing in nonhuman primates. Areas that were particularly active for distorted speech conditions and, thus, might be involved in compensating for distortion, were found exclusively in the left hemisphere and partially overlapped with areas sensitive to intelligibility, perhaps reflecting attentional modulation of auditory and linguistic processes.

  11. Deep bottleneck features for spoken language identification.

    PubMed

    Jiang, Bing; Song, Yan; Wei, Si; Liu, Jun-Hua; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.
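
    The defining element of the approach, a narrow bottleneck layer whose activations are taken as the frame-level representation, can be illustrated with a small network. The sketch below is a generic PyTorch model with made-up layer sizes and training targets; it shows the bottleneck idea only and omits the i-vector back end (DBF-TV/PDBF-TV) described in the paper.

```python
# Generic deep-bottleneck network sketch (PyTorch); layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class BottleneckDNN(nn.Module):
    def __init__(self, input_dim=440, bottleneck_dim=40, num_targets=3000):
        super().__init__()
        self.front = nn.Sequential(                        # wide hidden layers before the bottleneck
            nn.Linear(input_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
        )
        self.bottleneck = nn.Linear(1024, bottleneck_dim)  # narrow layer whose output is the DBF
        self.back = nn.Sequential(                         # layers used only during supervised training
            nn.ReLU(),
            nn.Linear(bottleneck_dim, 1024), nn.ReLU(),
            nn.Linear(1024, num_targets),                  # e.g. phone-state targets
        )

    def forward(self, x):
        return self.back(self.bottleneck(self.front(x)))

    def extract_dbf(self, x):
        # After training, keep only the bottleneck activations as the frame representation.
        with torch.no_grad():
            return self.bottleneck(self.front(x))
```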

  12. The Use of General and Specialized Corpora as Reference Sources for Academic English Writing: A Case Study

    ERIC Educational Resources Information Center

    Chang, Ji-Yeon

    2014-01-01

    Corpora have been suggested as valuable sources for teaching English for academic purposes (EAP). Since previous studies have mainly focused on corpus use in classroom settings, more research is needed to reveal how students react to using corpora on their own and what should be provided to help them become autonomous corpus users, considering…

  13. Complete transection of the urethra and corpora cavernosa: a complication after laparoscopic repair (TEP) of an inguinal hernia.

    PubMed

    Rehme, C; Rübben, H; Heß, J

    2016-06-01

    Complete transection of both corpora cavernosa and the urethra is a very rare condition in urology. We report the case of a 59-year-old man with complete transection of the corpora cavernosa and the urethra during a laparoscopic repair of a recurrent inguinal hernia.

  14. Modeling Spoken Word Recognition Performance by Pediatric Cochlear Implant Users using Feature Identification

    PubMed Central

    Frisch, Stefan A.; Pisoni, David B.

    2012-01-01

    Objective Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Conclusions Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing.

  15. 31 CFR 358.6 - What is the procedure for converting bearer corpora and detached bearer coupons to book-entry?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    31 CFR § 358.6: What is the procedure for converting bearer corpora and detached bearer coupons to book-entry? (Money and Finance: Treasury; Regulations Governing Book-Entry Conversion of Bearer Corpora and Detached Bearer Coupons)

  16. Direct and Indirect Access to Corpora: An Exploratory Case Study Comparing Students' Error Correction and Learning Strategy Use in L2 Writing

    ERIC Educational Resources Information Center

    Yoon, Hyunsook; Jo, Jung Won

    2014-01-01

    Studies on students' use of corpora in L2 writing have demonstrated the benefits of corpora not only as a linguistic resource to improve their writing abilities but also as a cognitive tool to develop their learning skills and strategies. Most of the corpus studies, however, adopted either direct use or indirect use of corpora by students, without…

  17. 31 CFR 358.6 - What is the procedure for converting bearer corpora and detached bearer coupons to book-entry?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    31 CFR § 358.6: What is the procedure for converting bearer corpora and detached bearer coupons to book-entry? (Money and Finance: Treasury; Regulations Governing Book-Entry Conversion of Bearer Corpora and Detached Bearer Coupons)

  18. Automatic concept recognition using the human phenotype ontology reference and test suite corpora.

    PubMed

    Groza, Tudor; Köhler, Sebastian; Doelken, Sandra; Collier, Nigel; Oellrich, Anika; Smedley, Damian; Couto, Francisco M; Baynam, Gareth; Zankl, Andreas; Robinson, Peter N

    2015-01-01

    Concept recognition tools rely on the availability of textual corpora to assess their performance and enable the identification of areas for improvement. Typically, corpora are developed for specific purposes, such as gene name recognition. Gene and protein name identification are longstanding goals of biomedical text mining, and therefore a number of different corpora exist. However, phenotypes only recently became an entity of interest for specialized concept recognition systems, and hardly any annotated text is available for performance testing and training. Here, we present a unique corpus, capturing text spans from 228 abstracts manually annotated with Human Phenotype Ontology (HPO) concepts and harmonized by three curators, which can be used as a reference standard for free text annotation of human phenotypes. Furthermore, we developed a test suite for standardized concept recognition error analysis, incorporating 32 different types of test cases corresponding to 2164 HPO concepts. Finally, three established phenotype concept recognizers (NCBO Annotator, OBO Annotator and Bio-LarK CR) were comprehensively evaluated, and results are reported against both the text corpus and the test suites. The gold standard and test suites corpora are available from http://bio-lark.org/hpo_res.html. Database URL: http://bio-lark.org/hpo_res.html.
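
    Evaluating a concept recognizer against a gold-standard corpus such as this one usually comes down to comparing predicted and annotated concept identifiers per abstract. The sketch below computes micro-averaged precision, recall and F1 over hypothetical dictionaries of HPO IDs; it is a generic evaluation routine, not the paper's test-suite error analysis.

```python
# Micro-averaged precision/recall/F1 for concept recognition (generic sketch).
# gold and pred: dicts mapping abstract id -> set of HPO concept IDs, e.g. {"HP:0001250", ...}.
def evaluate_concept_recognition(gold, pred):
    tp = fp = fn = 0
    for doc_id, gold_ids in gold.items():
        pred_ids = pred.get(doc_id, set())
        tp += len(gold_ids & pred_ids)     # correctly recognized concepts
        fp += len(pred_ids - gold_ids)     # spurious predictions
        fn += len(gold_ids - pred_ids)     # missed gold annotations
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```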

  19. Application of Learner Corpora to Second Language Learning and Teaching: An Overview

    ERIC Educational Resources Information Center

    Xu, Qi

    2016-01-01

    The paper gives an overview of learner corpora and their application to second language learning and teaching. It is proposed that there are four core components in learner corpus research, namely, corpus linguistics expertise, a good background in linguistic theory, knowledge of SLA theory, and a good understanding of foreign language teaching…

  20. The Application of Corpora in Teaching Grammar: The Case of English Relative Clause

    ERIC Educational Resources Information Center

    Sahragard, Rahman; Kushki, Ali; Ansaripour, Ehsan

    2013-01-01

    The study was conducted to see if the provision of implementing corpora on English relative clauses would prove useful for Iranian EFL learners or not. Two writing classes were held for the participants of intermediate level. A record of 15 writing samples produced by each participant was kept in the form of a portfolio. Participants' portfolios…

  1. Learning in Parallel: Using Parallel Corpora to Enhance Written Language Acquisition at the Beginning Level

    ERIC Educational Resources Information Center

    Bluemel, Brody

    2014-01-01

    This article illustrates the pedagogical value of incorporating parallel corpora in foreign language education. It explores the development of a Chinese/English parallel corpus designed specifically for pedagogical application. The corpus tool was created to aid language learners in reading comprehension and writing development by making foreign…

  2. What Does "Informed Consent" Mean in the Internet Age? Publishing Sign Language Corpora as Open Content

    ERIC Educational Resources Information Center

    Crasborn, Onno

    2010-01-01

    Recent technologies in the area of video and Internet are allowing the creation and online publication of large signed language corpora. Primarily addressing the needs of linguists and other researchers, because of their unique character in history these data collections are also made accessible online for a general audience. This "open access"…

  3. Corpora Processing and Computational Scaffolding for a Web-Based English Learning Environment: The CANDLE Project

    ERIC Educational Resources Information Center

    Liou, Hsien-Chin; Chang, Jason S; Chen, Hao-Jan; Lin, Chih-Cheng; Liaw, Meei-Ling; Gao, Zhao-Ming; Jang, Jyh-Shing Roger; Yeh, Yuli; Chuang, Thomas C.; You, Geeng-Neng

    2006-01-01

    This paper describes the development of an innovative web-based environment for English language learning with advanced data-driven and statistical approaches. The project uses various corpora, including a Chinese-English parallel corpus ("Sinorama") and various natural language processing (NLP) tools to construct effective English…

  4. Training L2 Writers to Reference Corpora as a Self-Correction Tool

    ERIC Educational Resources Information Center

    Quinn, Cynthia

    2015-01-01

    Corpora have the potential to support the L2 writing process at the discourse level in contrast to the isolated dictionary entries that many intermediate writers rely on. To take advantage of this resource, learners need to be trained, which involves practising corpus research and referencing skills as well as learning to make data-based…

  5. Toward Automatic Determination of the Semantics of Connectives in Large Newspaper Corpora

    ERIC Educational Resources Information Center

    Bestgen, Yves; Degand, Liesbeth; Spooren, Wilbert

    2006-01-01

    We explored the possibility of using automatic techniques to analyze the use of backward causal connectives in large Dutch newspaper corpora. With the help of 2 techniques, Latent Semantic Analysis and Thematic Text Analysis, the contexts of more than 14,000 connectives were studied. The method of analysis is described. We found that differences…
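
    The Latent Semantic Analysis step used in this line of work can be approximated with a term-by-context matrix reduced by truncated SVD. The sketch below builds such a space with scikit-learn over a few toy connective contexts and compares two of them by cosine similarity; the corpus, dimensionality and preprocessing are placeholders, not the Dutch newspaper data or settings of the study.

```python
# LSA over connective contexts: TF-IDF followed by truncated SVD (illustrative sketch).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

contexts = [
    "the roads were icy because heavy snow fell overnight",
    "the match was cancelled because of the storm",
    "she smiled because the joke was genuinely funny",
]

tfidf = TfidfVectorizer().fit_transform(contexts)      # term-by-context matrix
lsa = TruncatedSVD(n_components=2, random_state=0)     # low-rank semantic space
vectors = lsa.fit_transform(tfidf)

# Similarity between the first two connective contexts in LSA space.
print(cosine_similarity(vectors[0:1], vectors[1:2])[0, 0])
```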

  6. Language Teachers with Corpora in Mind: From Starting Steps to Walking Tall

    ERIC Educational Resources Information Center

    Chambers, Angela; Farr, Fiona; O'Riordan, Stephanie

    2011-01-01

    Although the use of corpus data in language learning is a steadily growing research area, direct access to corpora by teachers and learners and the use of the data in the classroom are developing slowly. This paper explores how teachers can integrate corpus approaches in their practice. After situating the topic in relation to current research and…

  7. Automatic concept recognition using the Human Phenotype Ontology reference and test suite corpora

    PubMed Central

    Groza, Tudor; Köhler, Sebastian; Doelken, Sandra; Collier, Nigel; Oellrich, Anika; Smedley, Damian; Couto, Francisco M; Baynam, Gareth; Zankl, Andreas; Robinson, Peter N.

    2015-01-01

    Concept recognition tools rely on the availability of textual corpora to assess their performance and enable the identification of areas for improvement. Typically, corpora are developed for specific purposes, such as gene name recognition. Gene and protein name identification are longstanding goals of biomedical text mining, and therefore a number of different corpora exist. However, phenotypes only recently became an entity of interest for specialized concept recognition systems, and hardly any annotated text is available for performance testing and training. Here, we present a unique corpus, capturing text spans from 228 abstracts manually annotated with Human Phenotype Ontology (HPO) concepts and harmonized by three curators, which can be used as a reference standard for free text annotation of human phenotypes. Furthermore, we developed a test suite for standardized concept recognition error analysis, incorporating 32 different types of test cases corresponding to 2164 HPO concepts. Finally, three established phenotype concept recognizers (NCBO Annotator, OBO Annotator and Bio-LarK CR) were comprehensively evaluated, and results are reported against both the text corpus and the test suites. The gold standard and test suites corpora are available from http://bio-lark.org/hpo_res.html. Database URL: http://bio-lark.org/hpo_res.html PMID:25725061

  8. Gonadotropin binding sites in human ovarian follicles and corpora lutea during the menstrual cycle

    SciTech Connect

    Shima, K.; Kitayama, S.; Nakano, R.

    1987-05-01

    Gonadotropin binding sites were localized by autoradiography after incubation of human ovarian sections with ¹²⁵I-labeled gonadotropins. The binding sites for ¹²⁵I-labeled human follicle-stimulating hormone (¹²⁵I-hFSH) were identified in the granulosa cells and in the newly formed corpora lutea. Binding of ¹²⁵I-labeled human luteinizing hormone (¹²⁵I-hLH) to the thecal cells increased during follicular maturation, and a dramatic increase was observed preferentially in the granulosa cells of the large preovulatory follicle. In the corpora lutea, the binding of ¹²⁵I-hLH increased from the early luteal phase and decreased toward the late luteal phase. Changes in 3 beta-hydroxysteroid dehydrogenase activity in the corpora lutea corresponded to the ¹²⁵I-hLH binding. Thus, changes in gonadotropin binding sites in the follicles and corpora lutea during the menstrual cycle may play an important role in regulating human ovarian function.

  9. A Linear-RBF Multikernel SVM to Classify Big Text Corpora

    PubMed Central

    Romero, R.; Iglesias, E. L.; Borrajo, L.

    2015-01-01

    Support vector machine (SVM) is a powerful technique for classification. However, SVM is not suitable for classification of large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on the SVM and other kernel methods emphasize the need to consider multiple kernels or parameterizations of kernels because they provide greater flexibility. This paper shows a multikernel SVM to manage highly dimensional data, providing an automatic parameterization with low computational cost and improving results against SVMs parameterized under a brute-force search. The model consists in spreading the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while the training is significantly faster than several other SVM classifiers. PMID:25879039
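
    One straightforward way to realize a linear-RBF multikernel SVM is to precompute both kernel matrices and hand a weighted combination to a standard SVM. The sketch below uses scikit-learn's precomputed-kernel interface with a single mixing weight; it illustrates the general multikernel idea rather than the cluster-based term-slice construction proposed by the authors.

```python
# Linear + RBF multikernel SVM via a precomputed combined kernel (generic sketch).
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

def fit_multikernel_svm(X_train, y_train, alpha=0.5, gamma=0.1):
    # Weighted sum of a linear and an RBF kernel over the training data.
    K = alpha * linear_kernel(X_train) + (1 - alpha) * rbf_kernel(X_train, gamma=gamma)
    clf = SVC(kernel="precomputed", C=1.0)
    clf.fit(K, y_train)
    return clf

def predict_multikernel_svm(clf, X_test, X_train, alpha=0.5, gamma=0.1):
    # The test kernel must be computed against the training examples.
    K_test = (alpha * linear_kernel(X_test, X_train)
              + (1 - alpha) * rbf_kernel(X_test, X_train, gamma=gamma))
    return clf.predict(K_test)
```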

  10. Co-occurrence frequency evaluated with large language corpora boosts semantic priming effects.

    PubMed

    Brunellière, Angèle; Perre, Laetitia; Tran, ThiMai; Bonnotte, Isabelle

    2017-09-01

    In recent decades, many computational techniques have been developed to analyse the contextual usage of words in large language corpora. The present study examined whether the co-occurrence frequency obtained from large language corpora might boost purely semantic priming effects. Two experiments were conducted: one with conscious semantic priming, the other with subliminal semantic priming. Both experiments contrasted three semantic priming contexts: an unrelated priming context and two related priming contexts with word pairs that are semantically related and that co-occur either frequently or infrequently. In the conscious priming presentation (166-ms stimulus-onset asynchrony, SOA), a semantic priming effect was recorded in both related priming contexts, which was greater with higher co-occurrence frequency. In the subliminal priming presentation (66-ms SOA), no significant priming effect was shown, regardless of the related priming context. These results show that co-occurrence frequency boosts pure semantic priming effects and are discussed with reference to models of semantic network.
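
    The co-occurrence frequencies referred to here are typically estimated by counting how often two words appear within a fixed window of each other in a large corpus. A minimal sketch of such a count, with an assumed window size and a toy tokenized corpus, is shown below.

```python
# Window-based co-occurrence counting over a tokenized corpus (illustrative sketch).
from collections import Counter

def cooccurrence_counts(sentences, window=5):
    """sentences: iterable of lists of lowercase tokens."""
    counts = Counter()
    for tokens in sentences:
        for i, w in enumerate(tokens):
            for v in tokens[i + 1:i + 1 + window]:   # words within the window to the right
                counts[tuple(sorted((w, v)))] += 1
    return counts

# Toy example: how often do "doctor" and "nurse" co-occur within five words?
corpus = [["the", "doctor", "called", "the", "nurse"], ["the", "nurse", "smiled"]]
counts = cooccurrence_counts(corpus)
print(counts[("doctor", "nurse")])   # -> 1
```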

  11. Exploring the application of deep learning techniques on medical text corpora.

    PubMed

    Minarro-Giménez, José Antonio; Marín-Alonso, Oscar; Samwald, Matthias

    2014-01-01

    With the rapidly growing amount of biomedical literature it becomes increasingly difficult to find relevant information quickly and reliably. In this study we applied the word2vec deep learning toolkit to medical corpora to test its potential for improving the accessibility of medical knowledge. We evaluated the efficiency of word2vec in identifying properties of pharmaceuticals based on mid-sized, unstructured medical text corpora without any additional background knowledge. Properties included relationships to diseases ('may treat') or physiological processes ('has physiological effect'). We evaluated the relationships identified by word2vec through comparison with the National Drug File - Reference Terminology (NDF-RT) ontology. The results of our first evaluation were mixed, but helped us identify further avenues for employing deep learning technologies in medical information retrieval, as well as using them to complement curated knowledge captured in ontologies and taxonomies.
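
    The word2vec component of such a study can be reproduced with the gensim library: train skip-gram embeddings on tokenized medical text and query the nearest neighbours of a drug name. The toy corpus, hyperparameters and query term below are placeholders, not the corpora or settings used by the authors.

```python
# Training word2vec on a tokenized medical corpus with gensim (parameters are assumptions).
from gensim.models import Word2Vec

# sentences: iterable of token lists, e.g. produced from PubMed abstracts.
sentences = [
    ["metformin", "is", "used", "to", "treat", "type", "2", "diabetes"],
    ["aspirin", "reduces", "fever", "and", "inflammation"],
]

model = Word2Vec(
    sentences,
    vector_size=200,   # embedding dimensionality
    window=5,          # context window
    min_count=1,       # keep rare terms in this toy corpus
    sg=1,              # skip-gram architecture
    epochs=10,
)

# Inspect terms whose contexts resemble those of a drug name.
print(model.wv.most_similar("metformin", topn=5))
```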

  12. A linear-RBF multikernel SVM to classify big text corpora.

    PubMed

    Romero, R; Iglesias, E L; Borrajo, L

    2015-01-01

    Support vector machine (SVM) is a powerful technique for classification. However, SVM is not suitable for classification of large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on the SVM and other kernel methods emphasize the need to consider multiple kernels or parameterizations of kernels because they provide greater flexibility. This paper shows a multikernel SVM to manage highly dimensional data, providing an automatic parameterization with low computational cost and improving results against SVMs parameterized under a brute-force search. The model consists in spreading the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while the training is significantly faster than several other SVM classifiers.

  13. Luteinizing hormone receptors in human ovarian follicles and corpora lutea during the menstrual cycle

    SciTech Connect

    Yamoto, M.; Nakano, R.; Iwasaki, M.; Ikoma, H.; Furukawa, K.

    1986-08-01

    The binding of ¹²⁵I-labeled human luteinizing hormone (hLH) to the 2000-g fraction of human ovarian follicles and corpora lutea during the entire menstrual cycle was examined. Specific high-affinity, low-capacity receptors for hLH were demonstrated in the 2000-g fraction of both follicles and corpora lutea. Specific binding of ¹²⁵I-labeled hLH to follicular tissue increased from the early follicular phase to the ovulatory phase. Specific binding of ¹²⁵I-labeled hLH to luteal tissue increased from the early luteal phase to the midluteal phase and decreased towards the late luteal phase. The results of the present study indicate that the increase and decrease in receptors for hLH during the menstrual cycle might play an important role in the regulation of the ovarian cycle.

  14. What is the coverage of SNOMED CT® on scientific medical corpora?

    PubMed

    Kokkinakis, Dimitrios

    2011-01-01

    This paper reports on the results of a large-scale mapping of SNOMED CT on scientific medical corpora. The aim is to automatically assess the validity, reliability and coverage of the Swedish SNOMED CT translation, the largest, most extensive available resource of medical terminology. The method described here is based on the generation of predominantly safe harbor term variants which, together with simple linguistic processing and the already available SNOMED term content, are mapped to large corpora. The results show that term variations are very frequent, and this may have implications for technological applications (such as indexing and information retrieval, decision support systems, and text mining) using SNOMED CT. Naïve approaches to terminology mapping and indexing would critically affect the performance, success and results of such applications. SNOMED CT appears ill-suited for automatically capturing the enormous variety of concepts in scientific corpora (only 6.3% of all SNOMED terms could be directly matched to the corpus) unless extensive variant forms are generated and fuzzy and partial matching techniques are applied, with the risk of admitting a large number of false positives and spurious results.
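
    The coverage figure reported above is, at its simplest, the proportion of terms that can be matched directly in the corpus text. The sketch below shows a naive direct-match coverage computation of the kind the paper argues is insufficient; the term list and corpus snippet are invented for illustration.

```python
# Naive direct-match coverage of a term list against corpus text (illustrative sketch).
import re

def term_coverage(terms, corpus_text):
    text = corpus_text.lower()
    matched = [t for t in terms
               if re.search(r"\b" + re.escape(t.lower()) + r"\b", text)]
    return len(matched) / len(terms), matched

terms = ["myocardial infarction", "diabetes mellitus", "corpora amylacea"]
corpus = "Patients with diabetes mellitus were examined; corpora amylacea were noted."
coverage, found = term_coverage(terms, corpus)
print(f"direct-match coverage: {coverage:.1%}")   # -> 66.7%
```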

  15. Combining MEDLINE and publisher data to create parallel corpora for the automatic translation of biomedical text

    PubMed Central

    2013-01-01

    Background Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. Results We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. Conclusions We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts. PMID:23631733
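
    The corpus-construction step described here boils down to pairing, by record identifier, the English and foreign-language versions of each MEDLINE title and writing them out as aligned plain-text files of the kind Moses consumes. A minimal sketch of that alignment is given below; the record format and language codes are assumptions, and the actual Moses training pipeline is omitted.

```python
# Pair English and French titles of the same MEDLINE record into a parallel corpus (sketch).
# Assumes records have already been parsed into (pmid, language_code, title) tuples.
from collections import defaultdict

def build_parallel_corpus(records, src_lang="eng", tgt_lang="fre"):
    by_pmid = defaultdict(dict)
    for pmid, lang, title in records:
        by_pmid[pmid][lang] = title.strip()
    return [
        (langs[src_lang], langs[tgt_lang])
        for langs in by_pmid.values()
        if src_lang in langs and tgt_lang in langs   # keep records with both versions
    ]

def write_aligned_files(pairs, prefix="train"):
    # Moses-style input: one aligned segment per line in two parallel plain-text files.
    with open(prefix + ".en", "w", encoding="utf-8") as f_en, \
         open(prefix + ".fr", "w", encoding="utf-8") as f_fr:
        for en, fr in pairs:
            f_en.write(en + "\n")
            f_fr.write(fr + "\n")
```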

  16. Neighborhood density effects in spoken word recognition in Spanish.

    PubMed

    Vitevitch, Michael S; Rodríguez, Eva

    2004-01-01

    The present work examined the relationships among familiarity ratings, frequency of occurrence, neighborhood density, and word length in a corpus of Spanish words. The observed relationships were similar to the relationships found among the same variables in English. An auditory lexical decision task was then performed to examine the influence of word frequency, neighborhood density, and neighborhood frequency on spoken word recognition in Spanish. In contrast to the competitive effect of phonological neighborhoods typically observed in English, a facilitative effect of neighborhood density and neighborhood frequency was found in Spanish. Implications for models of spoken word recognition and language disorders are discussed.
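
    Phonological neighborhood density is conventionally computed as the number of lexicon entries that differ from a target word by a single phoneme substitution, insertion, or deletion. A minimal sketch of that count, assuming words are already available as phoneme sequences (here, letter tuples stand in for phonemes), is shown below.

```python
# Count phonological neighbors: words one substitution, insertion, or deletion away (sketch).
def is_neighbor(a, b):
    """a, b: sequences of phoneme symbols."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:
        # Same length: exactly one substitution.
        return sum(x != y for x, y in zip(a, b)) == 1
    # Length differs by one: check deletion/insertion.
    longer, shorter = (a, b) if la > lb else (b, a)
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

def neighborhood_density(target, lexicon):
    return sum(is_neighbor(target, w) for w in lexicon)

# Toy example with orthographic stand-ins for phoneme sequences.
lexicon = [tuple("gato"), tuple("pato"), tuple("gasto"), tuple("gata"), tuple("perro")]
print(neighborhood_density(tuple("gato"), lexicon))   # -> 3 (pato, gasto, gata)
```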

  17. Neighborhood density effects in spoken word recognition in Spanish

    PubMed Central

    VITEVITCH, MICHAEL S.; RODRÍGUEZ, EVA

    2008-01-01

    The present work examined the relationships among familiarity ratings, frequency of occurrence, neighborhood density, and word length in a corpus of Spanish words. The observed relationships were similar to the relationships found among the same variables in English. An auditory lexical decision task was then performed to examine the influence of word frequency, neighborhood density, and neighborhood frequency on spoken word recognition in Spanish. In contrast to the competitive effect of phonological neighborhoods typically observed in English, a facilitative effect of neighborhood density and neighborhood frequency was found in Spanish. Implications for models of spoken word recognition and language disorders are discussed. PMID:19018293

  18. Aspects of Authentic Spoken German: Awareness and Recognition of Elision in the German Classroom

    ERIC Educational Resources Information Center

    Lightfoot, Douglas

    2016-01-01

    This work discusses the importance of spoken German in classroom instruction. The paper examines the nature of natural spoken language as opposed to written language. We find a general consensus that the prevailing language measure (whether pertaining to written or spoken language) in instructional settings more often typifies the rules associated…

  19. Discourse Markers and Spoken English: Nonnative Use in the Turkish EFL Setting

    ERIC Educational Resources Information Center

    Asik, Asuman; Cephe, Pasa Tevfik

    2013-01-01

    This study investigated the production of discourse markers by non-native speakers of English and their occurrences in their spoken English by comparing them with those used in native speakers' spoken discourse. Because discourse markers (DMs) are significant items in spoken discourse of native speakers, a study about the use of DMs by nonnative…

  20. Instructional Benefits of Spoken Words: A Review of Cognitive Load Factors

    ERIC Educational Resources Information Center

    Kalyuga, Slava

    2012-01-01

    Spoken words have always been an important component of traditional instruction. With the development of modern educational technology tools, spoken text more often replaces or supplements written or on-screen textual representations. However, there could be a cognitive load cost involved in this trend, as spoken words can have both benefits and…

  1. Spoken Language Processing Model: Bridging Auditory and Language Processing to Guide Assessment and Intervention

    ERIC Educational Resources Information Center

    Medwetsky, Larry

    2011-01-01

    Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…

  2. In a Manner of Speaking: Assessing Frequent Spoken Figurative Idioms to Assist ESL/EFL Teachers

    ERIC Educational Resources Information Center

    Grant, Lynn E.

    2007-01-01

    This article outlines criteria to define a figurative idiom, and then compares the frequent figurative idioms identified in two sources of spoken American English (academic and contemporary) to their frequency in spoken British English. This is done by searching the spoken part of the British National Corpus (BNC), to see whether they are frequent…

  3. L2 Gender Facilitation and Inhibition in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Behney, Jennifer N.

    2011-01-01

    This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…

  4. Animated and Static Concept Maps Enhance Learning from Spoken Narration

    ERIC Educational Resources Information Center

    Adesope, Olusola O.; Nesbit, John C.

    2013-01-01

    An animated concept map represents verbal information in a node-link diagram that changes over time. The goals of the experiment were to evaluate the instructional effects of presenting an animated concept map concurrently with semantically equivalent spoken narration. The study used a 2 x 2 factorial design in which an animation factor (animated…

  5. Lexical Representation of Phonological Variation in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Ranbom, Larissa J.; Connine, Cynthia M.

    2007-01-01

    There have been a number of mechanisms proposed to account for recognition of phonological variation in spoken language. Five of these mechanisms were considered here, including underspecification, inference, feature parsing, tolerance, and a frequency-based representational account. A corpus analysis and five experiments using the nasal flap…

  6. Individual Differences in Online Spoken Word Recognition: Implications for SLI

    ERIC Educational Resources Information Center

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2010-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…

  7. Prosodic Parallelism—Comparing Spoken and Written Language

    PubMed Central

    Wiese, Richard

    2016-01-01

    The Prosodic Parallelism hypothesis claims adjacent prosodic categories to prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies feet contained in the same phonological phrase to display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested by conducting a corpus study which used large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism is valid for the spoken modality as well, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis recruiting a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed. PMID:27807425

  8. Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on languages spoken by English learners (ELs) are: (1) Twenty most common EL languages, as reported in states' top five lists: SY 2013-14; (2) States,…

  9. Call and Responsibility: Critical Questions for Youth Spoken Word Poetry

    ERIC Educational Resources Information Center

    Weinstein, Susan; West, Anna

    2012-01-01

    In this article, Susan Weinstein and Anna West embark on a critical analysis of the maturing field of youth spoken word poetry (YSW). Through a blend of firsthand experience, analysis of YSW-related films and television, and interview data from six years of research, the authors identify specific dynamics that challenge young poets as they…

  10. Reader for Advanced Spoken Tamil, Parts 1 and 2.

    ERIC Educational Resources Information Center

    Schiffman, Harold F.

    Part 1 of this reader consists of transcriptions of five Tamil radio plays, with exercises, notes, and discussion. Part 2 is a synopsis grammar and a glossary. Both are intended for advanced students of Tamil who have had at least two years of instruction in the spoken language at the college level. The materials have been tested in classroom use…

  11. Time Pressure and Phonological Advance Planning in Spoken Production

    ERIC Educational Resources Information Center

    Damian, Markus F.; Dumay, Nicolas

    2007-01-01

    Current accounts of spoken production debate the extent to which speakers plan ahead. Here, we investigated whether the scope of phonological planning is influenced by changes in time pressure constraints. The first experiment used a picture-word interference task and showed that picture naming latencies were shorter when word distractors shared…

  12. A Uniform Identity: Schoolgirl Snapshots and the Spoken Visual

    ERIC Educational Resources Information Center

    Spencer, Stephanie

    2007-01-01

    This article discusses the possibility for expanding our understanding of the visual to include the "spoken visual" within oral history analysis. It suggests that adding a further reading, that of the visualized body, to the voice-centred relational method we can consider the meaning of the uniformed body for the individual. It uses as a…

  13. Predicting the Vocabulary of Children from Written or Spoken Texts.

    ERIC Educational Resources Information Center

    Moore, Michael; Goldstein, Zahava

    A study investigated the use of a mathematical model to predict individuals' total active Hebrew vocabulary from samples of their written and spoken language. The model is based on a generalized inverse Gaussian distribution. The subjects were Israeli junior high school students from both high and low socioeconomic groups. Hebrew language samples…

  14. Hemispheric Differences in Indexical Specificity Effects in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Gonzalez, Julio; McLennan, Conor T.

    2007-01-01

    Variability in talker identity, one type of indexical variation, has demonstrable effects on the speed and accuracy of spoken word recognition. Furthermore, neuropsychological evidence suggests that indexical and linguistic information may be represented and processed differently in the 2 cerebral hemispheres, and is consistent with findings from…

  15. Lexical Bundles in University Spoken and Written Registers

    ERIC Educational Resources Information Center

    Biber, Douglas; Barbieri, Federica

    2007-01-01

    Lexical bundles--recurrent sequences of words--are important building blocks of discourse in spoken and written registers. Previous research has shown that lexical bundles are especially prevalent in university classroom teaching, where they serve three major discourse functions: stance expressions, discourse organizers, and referential…

  16. Spoken Marathi. Book I, First-Year Intensive Course.

    ERIC Educational Resources Information Center

    Kavadi, Naresh B.; Southworth, Franklin C.

    "SPOKEN MARATHI" PRESENTS THE BEGINNING STUDENT WITH THE BASIC PHONOLOGY AND STRUCTURE OF MODERN MARATHI. IT IS ASSUMED THAT THE STUDENT WILL SPEND MOST OF HIS STUDY TIME LISTENING TO AND SPEAKING THE LANGUAGE, EITHER WITH A NATIVE SPEAKER INSTRUCTOR OR WITH RECORDED MATERIALS. THIS TEXT IS DESIGNED TO PROVIDE MATERIAL FOR A ONE-YEAR…

  17. Functions of Japanese Exemplifying Particles in Spoken and Written Discourse

    ERIC Educational Resources Information Center

    Taylor, Yuki Io

    2010-01-01

    This dissertation examines how the Japanese particles "nado", "toka", and "tari" which all may be translated as "such as", "etc.", or "like" behave differently in written and spoken discourse. According to traditional analyses (e.g. Martin, 1987), these particles are assumed to be Exemplifying Particles (EP) used to provide concrete examples to…

  18. Spoken Word Recognition in Toddlers Who Use Cochlear Implants

    ERIC Educational Resources Information Center

    Grieco-Calub, Tina M.; Saffran, Jenny R.; Litovsky, Ruth Y.

    2009-01-01

    Purpose: The purpose of this study was to assess the time course of spoken word recognition in 2-year-old children who use cochlear implants (CIs) in quiet and in the presence of speech competitors. Method: Children who use CIs and age-matched peers with normal acoustic hearing listened to familiar auditory labels, in quiet or in the presence of…

  19. Speech-Language Pathologists: Vital Listening and Spoken Language Professionals

    ERIC Educational Resources Information Center

    Houston, K. Todd; Perigoe, Christina B.

    2010-01-01

    Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…

  20. Inferring Speaker Affect in Spoken Natural Language Communication

    ERIC Educational Resources Information Center

    Pon-Barry, Heather Roberta

    2013-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…

  1. Lexical Competition in Non-Native Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Weber, Andrea; Cutler, Anne

    2004-01-01

    Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name ("pencil," given target "panda") than on less confusable distractors…

  2. Orthographic Facilitation Effects on Spoken Word Production: Evidence from Chinese

    ERIC Educational Resources Information Center

    Zhang, Qingfang; Weekes, Brendan Stuart

    2009-01-01

    The aim of this experiment was to investigate the time course of orthographic facilitation on picture naming in Chinese. We used a picture-word paradigm to investigate orthographic and phonological facilitation on monosyllabic spoken word production in native Mandarin speakers. Both the stimulus-onset asynchrony (SOA) and the picture-word…

  3. Contextualizing Reflective Dialogue in a Spoken Conversational Tutor

    ERIC Educational Resources Information Center

    Pon-Barry, Heather; Clark, Brady; Schultz, Karl; Bratt, Elizabeth Owen; Peters, Stanley; Haley, David

    2005-01-01

    In this paper we describe the ways that SCoT, a Spoken Conversational Tutor, uses flexible and adaptive planning as well as multimodal task modeling to support the contextualization of learning in reflective dialogues. Past research on human tutoring has shown reflective discussions (discussions occurring after problem-solving) to be effective in…

  4. Automated Scoring of L2 Spoken English with Random Forests

    ERIC Educational Resources Information Center

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…

  5. Riverside English: The Spoken Language of a Southern California Community.

    ERIC Educational Resources Information Center

    Metcalf, Allan A.; And Others

    This booklet points out some of the characteristics of the varieties of English spoken in Riverside and in the rest of California. The first chapter provides a general discussion of language variation and change on the levels of vocabulary, pronunciation, and grammar. The second chapter discusses California English and pronunciation and vocabulary…

  6. The Impact of Orthographic Consistency on German Spoken Word Identification

    ERIC Educational Resources Information Center

    Beyermann, Sandra; Penke, Martina

    2014-01-01

    An auditory lexical decision experiment was conducted to find out whether sound-to-spelling consistency has an impact on German spoken word processing, and whether such an impact is different at different stages of reading development. Four groups of readers (school children in the second, third and fifth grades, and university students)…

  7. Pedagogy for Liberation: Spoken Word Poetry in Urban Schools

    ERIC Educational Resources Information Center

    Fiore, Mia

    2015-01-01

    The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…

  8. Lexical and Post-Lexical Phonological Representations in Spoken Production

    ERIC Educational Resources Information Center

    Goldrick, Matthew; Rapp, Brenda

    2007-01-01

    Theories of spoken word production generally assume a distinction between at least two types of phonological processes and representations: lexical phonological processes that recover relatively arbitrary aspects of word forms from long-term memory and post-lexical phonological processes that specify the predictable aspects of phonological…

  9. "Context and Spoken Word Recognition in a Novel Lexicon": Correction

    ERIC Educational Resources Information Center

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2009-01-01

    Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…

  10. Context and Spoken Word Recognition in a Novel Lexicon

    ERIC Educational Resources Information Center

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2008-01-01

    Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments,…

  11. Attitudes towards Literary Tamil and Standard Spoken Tamil in Singapore

    ERIC Educational Resources Information Center

    Saravanan, Vanithamani

    2007-01-01

    This is the first empirical study that focused on attitudes towards two varieties of Tamil, Literary Tamil (LT) and Standard Spoken Tamil (SST), with the multilingual state of Singapore as the backdrop. The attitudes of 46 Singapore Tamil teachers towards speakers of LT and SST were investigated using the matched-guise approach along with…

  12. Enduring Advantages of Early Cochlear Implantation for Spoken Language Development

    ERIC Educational Resources Information Center

    Geers, Anne E.; Nicholas, Johanna G.

    2013-01-01

    Purpose: In this article, the authors sought to determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12 and 38 months of age. Relative advantages of receiving a bilateral CI after age 4.5 years, better…

  13. Spoken Idiom Recognition: Meaning Retrieval and Word Expectancy

    ERIC Educational Resources Information Center

    Tabossi, Patrizia; Fanari, Rachele; Wolf, Kinou

    2005-01-01

    This study investigates recognition of spoken idioms occurring in neutral contexts. Experiment 1 showed that both predictable and non-predictable idiom meanings are available at string offset. Yet, only predictable idiom meanings are active halfway through a string and remain active after the string's literal conclusion. Experiment 2 showed that…

  14. Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension

    ERIC Educational Resources Information Center

    Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica

    2015-01-01

    When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…

  15. Orthographic Representations in Spoken Word Priming: No Early Automatic Activation

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Kolinsky, Regine; Ventura, Paulo; Radeau, Monique; Morais, Jose

    2007-01-01

    The current study investigated the modulation by orthographic knowledge of the final overlap phonological priming effect, contrasting spoken prime-target pairs with congruent spellings (e.g., "carreau-bourreau", /karo/-/buro/) to pairs with incongruent spellings (e.g., "zero-bourreau", /zero/-/buro/). Using materials and…

  16. Testing Spoken English for Credit within the Indian University System

    ERIC Educational Resources Information Center

    Chaudhary, Shreesh

    2008-01-01

    Courses in Spoken English (SE) have yet to gain acceptance for credit in Indian universities because conducting session-end tests in SE is assumed to be logistically difficult and academically problematic. This article argues that it need not be so; session-end tests can be conducted just as in other courses. With voice recording, preferably a…

  17. A Comparison between Written and Spoken Narratives in Aphasia

    ERIC Educational Resources Information Center

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  18. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    PubMed

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

    The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue, with a focus on both shared consciousness and linguistically mediated meaning. He originally developed this approach through his engagement with mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to an experimental methodology that could not capture interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, the meaning potential of utterances, and the epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples intended to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he only occasionally engaged with psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues. Although his work remained unfinished and open to development, it provides both a challenging alternative and a supplement to current Anglo-American research on spoken dialogue, with which it partly overlaps.

  19. Corpora amylacea deposition in the hippocampus of patients with mesial temporal lobe epilepsy: A new role for an old gene?

    PubMed Central

    Das, Abhijit; Balan, Shabeesh; Mathew, Anila; Radhakrishnan, Venkataraman; Banerjee, Moinak; Radhakrishnan, Kurupath

    2011-01-01

    BACKGROUND: Mesial temporal lobe epilepsy (MTLE) is the most common medically refractory epilepsy syndrome in adults, and hippocampal sclerosis (HS) is the most frequently encountered lesion in patients with MTLE. Premature accumulation of corpora amylacea (CoA), which plays an important role in the sequestration of toxic cellular metabolites, is found in the hippocampus of 50–60% of the patients who undergo surgery for medically refractory MTLE-HS. However, the etiopathogenesis and clinical importance of this phenomenon are still uncertain. The ABCB1 gene product P-glycoprotein (P-gp) plays a prominent role as an antiapoptotic factor in addition to its efflux transporter function. ABCB1 polymorphism has been found to be associated with downregulation of P-gp expression. We hypothesized that a similar polymorphism would be found in patients with CoA deposition, as the polymorphism predisposes the hippocampal neuronal and glial cells to seizure-induced excitotoxic damage and CoA formation ensues as a buffer response. MATERIALS AND METHODS: We compared five single nucleotide polymorphisms in the ABCB1 gene Ex06+139C/T (rs1202168), Ex 12 C1236T (rs1128503), Ex 17-76T/A (rs1922242), Ex 21 G2677T/A (rs2032582), Ex26 C3435T (rs1045642) among 46 MTLE-HS patients of south Indian ancestry with and without CoA accumulation. RESULTS: We found that subjects carrying the Ex-76T/A polymorphism (TA genotype) had a fivefold higher risk of developing CoA accumulation than subjects without this genotype (odds ratio 5.0, 95% confidence interval 1.34-18.55; P = 0.016). CONCLUSION: We speculate that the rs1922242 polymorphism results in the downregulation of P-gp function, which predisposes the hippocampal cells to seizure-induced apoptosis, and CoA accumulates as a buffer response. PMID:21747587
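
    The association statistics reported above (an odds ratio with its confidence interval and P value) come from a standard 2x2 comparison of risk-genotype carriers among patients with and without CoA accumulation. The sketch below shows how such a table can be analysed with SciPy; the cell counts are invented for illustration and are not the study's data.

```python
# Odds ratio and Fisher's exact test for a 2x2 genotype-by-phenotype table (illustrative counts).
from scipy.stats import fisher_exact

#               CoA accumulation   no CoA accumulation
table = [[15, 8],    # carriers of the risk genotype (hypothetical counts)
         [ 7, 16]]   # non-carriers (hypothetical counts)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")
```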

  20. Lynx reproduction--long-lasting life cycle of corpora lutea in a feline species.

    PubMed

    Jewgenow, Katarina; Painer, Johanna; Amelkina, Olga; Dehnhard, Martin; Goeritz, Frank

    2014-04-01

    A review of lynx reproductive biology and a comparison between the reproductive cycles of the domestic cat and lynxes are presented. Three of the four lynx species (the bobcat excluded) show quite similar reproductive patterns (age at sexual maturity, estrus and pregnancy length, litter size). Like the domestic cat, the bobcat is polyestrous and can have more than one litter per year. Domestic cats and many other felid species are known to show anovulatory, pregnant and pseudo-pregnant reproductive cycles, depending on ovulation induction and fertilization. The formation of corpora lutea (CLs) occurs after ovulation. In pregnant animals, luteal function ends with parturition, whereas during pseudo-pregnancy a shorter life span and lower hormone secretion are observed. The life cycle of corpora lutea in Eurasian lynxes differs from the pattern described in domestic cats. Lynx CLs produce progestagens in substantial amounts continuously for at least two years, regardless of their origin (pregnancy or pseudo-pregnancy). It is suggested that these long-lasting CLs exert negative feedback that suppresses folliculogenesis, turning the normally polyestrous cycle observed in most felids into a monoestrous cycle in lynxes.

  1. Planum temporale: where spoken and written language meet.

    PubMed

    Nakada, T; Fujii, Y; Yoneoka, Y; Kwee, I L

    2001-01-01

    Functional magnetic resonance imaging studies on spoken versus written language processing were performed in 20 right-handed normal volunteers on a high-field (3.0-tesla) system. The areas activated in common by both auditory (listening) and visual (reading) language comprehension paradigms were mapped onto the planum temporale (20/20), primary auditory region (2/20), superior temporal sulcus area (2/20) and planum parietale (3/20). The study indicates that the planum temporale represents a common traffic area for cortical processing which needs to access the system of language comprehension. The destruction of this area can result in comprehension deficits in both spoken and written language, i.e. a classical case of Wernicke's aphasia.

  2. Spoken word production: A theory of lexical access

    PubMed Central

    Levelt, Willem J. M.

    2001-01-01

    A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker's focusing on a target concept and ending with the initiation of articulation. The initial stages of preparation are concerned with lexical selection, which is zooming in on the appropriate lexical item in the mental lexicon. The following stages concern form encoding, i.e., retrieving a word's morphemic phonological codes, syllabifying the word, and accessing the corresponding articulatory gestures. The theory is based on chronometric measurements of spoken word production, obtained, for instance, in picture-naming tasks. The theory is largely computationally implemented. It provides a handle on the analysis of multiword utterance production as well as a guide to the analysis and design of neuroimaging studies of spoken utterance production. PMID:11698690

  3. Orthographic representations in spoken word priming: no early automatic activation.

    PubMed

    Pattamadilok, Chotiga; Kolinsky, Régine; Ventura, Paulo; Radeau, Monique; Morais, José

    2007-01-01

    The current study investigated the modulation by orthographic knowledge of the final overlap phonological priming effect, contrasting spoken prime-target pairs with congruent spellings (e.g., 'carreau-bourreau', /karo/-/buro/) to pairs with incongruent spellings (e.g., 'zéro-bourreau', /zero/-/buro/). Using materials and designs aimed at reducing the impact of response biases or strategies, no orthographic congruency effect was found in shadowing, a speech recognition task that can be performed prelexically. In lexical decision, an orthographic effect occurred only when the processing environment reduced the prominence of phonological overlap and thus induced participants to rely on word spelling. Overall, the data do not support the assumption of early, automatic activation of orthographic representations during spoken word recognition.

  4. Rapid modulation of spoken word recognition by visual primes

    PubMed Central

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.

    2015-01-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics. PMID:26516296

  5. Spoken language outcomes after hemispherectomy: factoring in etiology.

    PubMed

    Curtiss, S; de Bode, S; Mathern, G W

    2001-12-01

    We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p =.0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p =.0006); right-sided resections led to higher SLRs only for the acquired group (p =.0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p =.0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology.

  6. Individual differences in online spoken word recognition: Implications for SLI

    PubMed Central

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2012-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels; vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014
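
    The lexical decay account can be made concrete with a toy activation model. The sketch below is not the TRACE implementation used in the study; it is a minimal illustration, with arbitrary parameter values, of how a single decay parameter lowers the activation a target word can reach when bottom-up input is held constant.

```python
# Toy lexical-activation sketch: activation rises with bottom-up input and
# shrinks with a decay term on every time step. A larger decay parameter
# yields a lower activation peak, loosely mirroring the reduced target
# fixations attributed to poorer language ability. Values are arbitrary.

def activation_course(decay, input_strength=0.12, steps=40):
    act = 0.0
    course = []
    for _ in range(steps):
        act += input_strength * (1.0 - act)   # bottom-up support for the word
        act -= decay * act                    # lexical decay
        course.append(act)
    return course

if __name__ == "__main__":
    for decay in (0.02, 0.10):
        peak = max(activation_course(decay))
        print(f"decay={decay:.2f}  peak activation={peak:.2f}")
```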

  7. Bilinguals Show Weaker Lexical Access During Spoken Sentence Comprehension

    PubMed Central

    Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica

    2014-01-01

    When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German bilinguals, German-English bilinguals and English monolinguals listened for target words in spoken English sentences while their eye-movements were recorded. Bilinguals’ eye-movements reflected weaker lexical access relative to monolinguals; furthermore, the effect of semantic constraint differed across first vs. second language processing. Specifically, English-native bilinguals showed fewer overall looks to target items, regardless of sentence constraint; German-native bilinguals activated target items more slowly and maintained target activation over a longer period of time in the Low-Constraint condition compared with monolinguals. No eye movements to cross-linguistic competitors were observed, suggesting that these lexical access disadvantages were present during bilingual spoken sentence comprehension even in the absence of overt interlingual competition. PMID:25266052

  8. A coordinated expression of biosynthetic enzymes controls the flux of juvenile hormone precursors in the corpora allata of mosquitoes.

    PubMed

    Nouzova, Marcela; Edwards, Marten J; Mayoral, Jaime G; Noriega, Fernando G

    2011-09-01

    Juvenile hormone (JH) is a key regulator of metamorphosis and ovarian development in mosquitoes. Adult female Aedes aegypti mosquitoes show developmental and dynamically regulated changes of JH synthesis. Newly emerged females have corpora allata (CA) with low biosynthetic activity, but they produce high amounts of JH a day later; blood feeding results in a striking decrease in JH synthesis, but the CA returns to a high level of JH synthesis three days later. To understand the molecular bases of these dynamic changes, we combined transcriptional studies of 11 of the 13 enzymes of the JH pathway with a functional analysis of JH synthesis. We detected up to a 1000-fold difference in the levels of mRNA in the CA among the JH biosynthetic enzymes studied. There was a coordinated expression of the 11 JH biosynthetic enzymes in female pupae and adult mosquitoes. Increases or decreases in transcript levels for all the enzymes resulted in increases or decreases of JH synthesis, suggesting that transcript changes are at least partially responsible for the dynamic changes of JH biosynthesis observed. JH synthesis by the CA was progressively increased in vitro by addition of exogenous precursors such as geranyl-diphosphate, farnesyl-diphosphate, farnesol, farnesal and farnesoic acid. These results suggest that the supply of these precursors and not the activity of the last 6 pathway enzymes is rate limiting in these glands. Nutrient reserves play a key role in the regulation of JH synthesis. Nutritionally deficient females had reduced transcript levels for the genes encoding JH biosynthetic enzymes and reduced JH synthesis. Our studies suggest that JH synthesis is controlled by the rate of flux of isoprenoids, which is the outcome of a complex interplay of changes in precursor pools, enzyme levels and external regulators such as nutrients and brain factors. Enzyme levels might need to surpass a minimum threshold to achieve a net flux of precursors through the biosynthetic pathway.

  9. Towards spoken clinical-question answering: evaluating and adapting automatic speech-recognition systems for spoken clinical questions

    PubMed Central

    Liu, Feifan; Tur, Gokhan; Hakkani-Tür, Dilek

    2011-01-01

    Objective: To evaluate existing automatic speech-recognition (ASR) systems to measure their performance in interpreting spoken clinical questions and to adapt one ASR system to improve its performance on this task. Design and measurements: The authors evaluated two well-known ASR systems on spoken clinical questions: Nuance Dragon (both generic and medical versions: Nuance Gen and Nuance Med) and the SRI Decipher (the generic version SRI Gen). The authors also explored language model adaptation using more than 4000 clinical questions to improve the SRI system's performance, and profile training to improve the performance of the Nuance Med system. The authors reported the results with the NIST standard word error rate (WER) and further analyzed error patterns at the semantic level. Results: Nuance Gen and Med systems resulted in a WER of 68.1% and 67.4% respectively. The SRI Gen system performed better, attaining a WER of 41.5%. After domain adaptation with a language model, the performance of the SRI system improved 36% to a final WER of 26.7%. Conclusion: Without modification, two well-known ASR systems do not perform well in interpreting spoken clinical questions. With a simple domain adaptation, one of the ASR systems improved significantly on the clinical question task, indicating the importance of developing domain/genre-specific ASR systems. PMID:21705457
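
    The evaluation above is reported with the NIST word error rate. As a point of reference, the sketch below computes WER in the standard way, from a word-level Levenshtein alignment between a reference transcript and an ASR hypothesis; the example sentences and the function name are invented, not taken from any of the systems mentioned.

```python
# Minimal word error rate (WER) sketch: substitutions, insertions and
# deletions from a word-level Levenshtein alignment, divided by the number
# of reference words. Example strings are invented for illustration.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

if __name__ == "__main__":
    ref = "what is the best treatment for community acquired pneumonia"
    hyp = "what is the best treatment for community required pneumonia"
    print(f"WER = {wer(ref, hyp):.2%}")   # one substitution out of nine words
```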

  10. Stretched Verb Collocations with "Give": Their Use and Translation into Spanish Using the BNC and CREA Corpora

    ERIC Educational Resources Information Center

    Molina-Plaza, Silvia; de Gregorio-Godeo, Eduardo

    2010-01-01

    Within the context of on-going research, this paper explores the pedagogical implications of contrastive analyses of multiword units in English and Spanish based on electronic corpora as a CALL resource. The main tenets of collocations from a contrastive perspective--and the points of contact and departure between both languages--are discussed…

  11. 31 CFR 358.7 - Where do I send my bearer corpora and detached bearer coupons to be converted?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... detached bearer coupons to be converted to: Bureau of the Fiscal Service, Division of Customer Service, P... (Title 31, Money and Finance (Continued); Fiscal Service, Department of the Treasury; Bureau of the Fiscal Service; Regulations Governing Book-Entry Conversion of Bearer Corpora and Detached Bearer Coupons; § 358.7.)

  12. The Effect of the Integration of Corpora in Reading Comprehension Classrooms on English as a Foreign Language Learners' Vocabulary Development

    ERIC Educational Resources Information Center

    Gordani, Yahya

    2013-01-01

    This study used a randomized pretest-posttest control group design to examine the effect of the integration of corpora in general English courses on the students' vocabulary development. To enhance the learners' lexical repertoire and thereby improve their reading comprehension, an online corpus-based approach was integrated into 42 hours of…

  13. Taming Big Data: An Information Extraction Strategy for Large Clinical Text Corpora.

    PubMed

    Gundlapalli, Adi V; Divita, Guy; Carter, Marjorie E; Redd, Andrew; Samore, Matthew H; Gupta, Kalpana; Trautner, Barbara

    2015-01-01

    Concepts of interest for clinical and research purposes are not uniformly distributed in clinical text available in electronic medical records. The purpose of our study was to identify filtering techniques to select 'high yield' documents for increased efficacy and throughput. Using two large corpora of clinical text, we demonstrate the identification of 'high yield' document sets in two unrelated domains: homelessness and indwelling urinary catheters. For homelessness, the high yield set includes homeless program and social work notes. For urinary catheters, concepts were more prevalent in notes from hospitalized patients; nursing notes accounted for a majority of the high yield set. This filtering will enable customization and refining of information extraction pipelines to facilitate extraction of relevant concepts for clinical decision support and other uses.
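
    The filtering idea can be illustrated with a minimal sketch: restrict an extraction pipeline to the note types in which the target concepts are known to be prevalent. The note types, keywords and placeholder extraction step below are invented for illustration and are much simpler than the pipeline described in the abstract.

```python
# Sketch of 'high yield' document filtering before information extraction:
# keep only note types in which the target concepts are known to be
# prevalent, then run the (placeholder) extraction step on the survivors.
# Note types, keywords and documents here are invented for illustration.

HIGH_YIELD_NOTE_TYPES = {"nursing note", "social work note", "homeless program note"}

documents = [
    {"id": 1, "note_type": "nursing note", "text": "Indwelling urinary catheter in place."},
    {"id": 2, "note_type": "radiology report", "text": "Chest x-ray unremarkable."},
    {"id": 3, "note_type": "social work note", "text": "Patient reports unstable housing."},
]

def extract_concepts(text: str) -> list:
    # Placeholder extraction step: a real pipeline would use an NLP system.
    keywords = ["urinary catheter", "unstable housing", "homeless"]
    return [k for k in keywords if k in text.lower()]

high_yield = [d for d in documents if d["note_type"] in HIGH_YIELD_NOTE_TYPES]
for doc in high_yield:
    print(doc["id"], extract_concepts(doc["text"]))
```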

  14. Automatic Entity Recognition and Typing from Massive Text Corpora: A Phrase and Network Mining Approach.

    PubMed

    Ren, Xiang; El-Kishky, Ahmed; Wang, Chi; Han, Jiawei

    2015-08-01

    In today's computerized and information-based society, we are soaked with vast amounts of text data, ranging from news articles, scientific publications, product reviews, to a wide range of textual information from social media. To unlock the value of these unstructured text data from various domains, it is of great importance to gain an understanding of entities and their relationships. In this tutorial, we introduce data-driven methods to recognize typed entities of interest in massive, domain-specific text corpora. These methods can automatically identify token spans as entity mentions in documents and label their types (e.g., people, product, food) in a scalable way. We demonstrate on real datasets including news articles and tweets how these typed entities aid in knowledge discovery and management.
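
    As a toy illustration of mention detection and typing, the sketch below pulls capitalized candidate phrases with a regular expression and types them against small seed dictionaries. The pattern, seed lists and example sentence are invented and far simpler than the phrase- and network-mining methods the tutorial covers.

```python
# Toy entity mention detection and typing: candidate phrases are matched
# with a naive capitalised-span pattern and typed against seed dictionaries.
# Seeds, pattern and example sentence are invented for illustration only.

import re

SEED_TYPES = {
    "person": {"Barack Obama", "Angela Merkel"},
    "product": {"iPhone", "Kindle"},
    "food": {"pad thai", "espresso"},
}

def type_mentions(sentence: str) -> list:
    candidates = re.findall(r"[A-Z][a-zA-Z]*(?: [A-Z][a-zA-Z]*)*", sentence)
    typed = []
    for cand in candidates:
        for etype, seeds in SEED_TYPES.items():
            if cand in seeds:
                typed.append((cand, etype))
    return typed

print(type_mentions("Angela Merkel praised the Kindle during the interview."))
```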

  15. How Do Adults Use Repetition? A Comparison of Conversations with Young Children and with Multiply-Handicapped Adolescents

    ERIC Educational Resources Information Center

    Bocerean, Christine; Canut, Emmanuelle; Musiol, Michel

    2012-01-01

    The aim of this research is to compare the types and functions of repetitions in two different corpora, one constituted of verbal interactions between adults and multiply-handicapped adolescents, the other between adults and young children of the same mental age as the adolescents. Our overall aim is to observe whether the communicative…

  16. Preliminary evaluations of a spoken web enabled care management platform.

    PubMed

    Padman, Rema; Beam, Erika; Szewczyk, Rachel

    2013-01-01

    Telephones are a ubiquitous and widely accepted technology worldwide. The low ownership cost, simple user interface, intuitive voice-based access and long history contribute to the widespread use and success of telephones, and more recently, that of mobile phones. This study reports on our preliminary efforts to leverage this technology to bridge disparities in the access to and delivery of personalized health and wellness care by developing and evaluating a Spoken Web enabled Care Management solution. Early results with two proxy evaluations and a few visually impaired users highlight both the potential and challenges associated with this novel, voice-enabled healthcare delivery solution.

  17. The gender congruency effect during bilingual spoken-word recognition

    PubMed Central

    Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa

    2016-01-01

    We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132

  18. Comprehending spoken metaphoric reference: a real-time analysis.

    PubMed

    Stewart, Mark T; Heredia, Roberto R

    2002-01-01

    Speakers and writers often use metaphor to describe someone or something in a referential fashion (e.g., The creampuff didn't show up for the fight to refer to a cowardly boxer). Research has demonstrated that readers do not comprehend metaphoric reference as easily as they do literal reference (Gibbs, 1990; Onishi & Murphy, 1993). In two experiments, we used a naming version of the cross-modal lexical priming (CMLP) paradigm to monitor the time-course of comprehending spoken metaphoric reference. In Experiment 1, listeners responded to visual probe words of either a figurative or literal nature that were presented at offset or 1000 ms after a critical prime word. Significant facilitatory priming was observed at prime offset to probes consistent with the metaphorical interpretation of the figuratively referring description, yet no priming was found for either probe type at the downstream location. In Experiment 2, we partially replicated Experiment 1 results at prime offset and found no priming at a probe point placed 1000 ms upstream from prime onset. Taken together, the data from these two experiments indicate that listeners are able to comprehend metaphoric reference faster than literal reference. Moreover, the effect appears to be strongest at prime offset, suggesting that activation of the nonliteral interpretation is closely tied to the relationship between the figuratively referring description and the intended referent. Implications for theories of metaphor comprehension, as well as for research in spoken metaphor, are discussed.

  19. Phonetic discrimination and non-native spoken-word recognition

    NASA Astrophysics Data System (ADS)

    Weber, Andrea; Cutler, Anne

    2002-05-01

    When phoneme categories of a non-native language do not correspond to those of the native language, non-native categories may be inaccurately perceived. This may impair non-native spoken-word recognition. Weber and Cutler investigated the effect of phonetic discrimination difficulties on competitor activation in non-native listening. They tested whether Dutch listeners use English phonetic contrasts to resolve potential competition. Eye movements of Dutch participants were monitored as they followed spoken English instructions to click on pictures of objects. A target picture (e.g., picture of a paddle) was always presented along with distractor pictures. The name of a distractor picture either shared initial segments with the name of the target picture (e.g., target paddle, /paedl/ and competitor pedal, /pEdl/) or not (e.g., strawberry and duck). Half of the target-competitor pairs contained English vowels that are often confused by Dutch listeners (e.g., /ae/ and /E/ as in ``paddle-pedal''), half contained vowels that are unlikely to be confused (e.g., /ae/ and /aI/ as in ``parrot-pirate''). Dutch listeners fixated distractor pictures with confusable English vowels longer than distractor pictures with distinct vowels. The results demonstrate that the sensitivity of non-native listeners to phonetic contrasts can result in spurious competitors that should not be activated for native listeners.
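
    Visual-world results such as these are typically summarized as the proportion of gaze samples falling on each pictured object within successive time bins after target-word onset. The sketch below shows that bookkeeping step on an invented sample-record format; it is not the analysis pipeline used in the study.

```python
# Sketch of a common visual-world analysis step: the proportion of gaze
# samples on each pictured object per time bin after target-word onset.
# The sample records and bin size here are invented, not the study's data.

from collections import Counter, defaultdict

samples = [  # (time in ms from target-word onset, object fixated)
    (20, "target"), (40, "competitor"), (60, "competitor"),
    (220, "target"), (240, "target"), (260, "distractor"),
]

BIN_MS = 200
bins = defaultdict(Counter)
for t, obj in samples:
    bins[t // BIN_MS * BIN_MS][obj] += 1

for start in sorted(bins):
    total = sum(bins[start].values())
    props = {obj: n / total for obj, n in bins[start].items()}
    print(f"{start}-{start + BIN_MS} ms: {props}")
```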

  20. Visual speech primes open-set recognition of spoken words

    PubMed Central

    Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.

    2011-01-01

    Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins, 2004). In this paper, we used a cross-modality repetition priming paradigm with visual speech lexical primes and auditory lexical targets to explore the nature of this priming effect. First, we report that participants identified spoken words mixed with noise more accurately when the words were preceded by a visual speech prime of the same word compared with a control condition. Second, analyses of the responses indicated that both correct and incorrect responses were constrained by the visual speech information in the prime. These complementary results suggest that the visual speech primes have an effect on lexical access by increasing the likelihood that words with certain phonetic properties are selected. Third, we found that the cross-modality repetition priming effect was maintained even when visual and auditory signals came from different speakers, and thus different instances of the same lexical item. We discuss implications of these results for current theories of speech perception. PMID:21544260

  1. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  2. The Cortical Organization of Lexical Knowledge: A Dual Lexicon Model of Spoken Language Processing

    ERIC Educational Resources Information Center

    Gow, David W., Jr.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood.…

  3. Why Are Written Picture Naming Latencies (Not) Longer than Spoken Naming?

    ERIC Educational Resources Information Center

    Perret, Cyril; Laganaro, Marina

    2013-01-01

    The comparison between spoken and handwritten production in picture naming tasks represents an important source of information for building models of cognitive processes involved in writing. Studies using this methodology systematically reported longer latencies for handwritten than for spoken production. To uncover the origin of this difference…

  4. Vocabulary Training of Spoken Words in Hard-of-Hearing Children

    ERIC Educational Resources Information Center

    Mollink, Hannah; Hermans, Daan; Knoors, Harry

    2008-01-01

    This study examined the effects of using signs in spoken language vocabulary training of hard-of-hearing children. Fourteen hard-of-hearing children participated in the present study. Vocabulary training with the support of signs showed a statistically significant effect in the participants' learning and retention of new spoken language…

  5. How Do Syllables Contribute to the Perception of Spoken English? Insight from the Migration Paradigm

    ERIC Educational Resources Information Center

    Mattys, Sven L.; Melhorn, James F.

    2005-01-01

    The involvement of syllables in the perception of spoken English has traditionally been regarded as minimal because of ambiguous syllable boundaries and overriding rhythmic segmentation cues. The present experiments test the perceptual separability of syllables and vowels in spoken English using the migration paradigm. Experiments 1 and 2 show…

  6. Phonological Competition within the Word: Evidence from the Phoneme Similarity Effect in Spoken Production

    ERIC Educational Resources Information Center

    Cohen-Goldberg, Ariel M.

    2012-01-01

    Theories of spoken production have not specifically addressed whether the phonemes of a word compete with each other for selection during phonological encoding (e.g., whether /t/ competes with /k/ in cat). Spoken production theories were evaluated and found to fall into three classes, theories positing (1) no competition, (2) competition among…

  7. "Poetry Does Really Educate": An Interview with Spoken Word Poet Luka Lesson

    ERIC Educational Resources Information Center

    Xerri, Daniel

    2016-01-01

    Spoken word poetry is a means of engaging young people with a genre that has often been much maligned in classrooms all over the world. This interview with the Australian spoken word poet Luka Lesson explores issues that are of pressing concern to poetry education. These include the idea that engagement with poetry in schools can be enhanced by…

  8. English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…

  9. Effects of Ethnicity and Gender on Teachers' Evaluation of Students' Spoken Responses

    ERIC Educational Resources Information Center

    Shepherd, Michael A.

    2011-01-01

    To update and extend research on teachers' expectations of students of different sociocultural groups, 57 Black, White, Asian, and Hispanic teachers were asked to evaluate responses spoken by Black, White, and Hispanic 2nd- and 3rd-grade boys and girls. The results show that responses perceived as spoken by minority boys, minority girls, and White…

  10. Le Francais parle. Etudes sociolinguistiques (Spoken French. Sociolinguistic Studies). Current Inquiry into Languages and Linguistics 30.

    ERIC Educational Resources Information Center

    Thibault, Pierrette

    This volume contains twelve articles dealing with the French language as spoken in Quebec. The following topics are addressed: (1) language change and variation; (2) coordinating expressions in the French spoken in Montreal; (3) expressive language as source of language change; (4) the role of correction in conversation; (5) social change and…

  11. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  12. Guide to Spoken-Word Recordings: Popular Literature. No. 87-1.

    ERIC Educational Resources Information Center

    Redmond, Linda, Comp.

    Compiled from responses to a questionnaire sent to producers and distributors of spoken-word recordings by the National Library Service (NLS) for the Blind and Physically Handicapped, this reference circular lists 53 selected sources for purchasing or renting fiction and nonfiction spoken-word recordings on disc and cassette. Subjects and types of…

  13. Ontogenetic Profile of the Expression of Thyroid Hormone Receptors in Rat and Human Corpora Cavernosa of the Penis

    PubMed Central

    Carosa, Eleonora; Di Sante, Stefania; Rossi, Simona; Castri, Alessandra; D'Adamo, Fabio; Gravina, Giovanni Luca; Ronchi, Piero; Kostrouch, Zdenek; Dolci, Susanna; Lenzi, Andrea; Jannini, Emmanuele A

    2010-01-01

    Introduction: In the last few years, various studies have underlined a correlation between thyroid function and male sexual function, hypothesizing a direct action of thyroid hormones on the penis. Aim: To study the spatiotemporal distribution of mRNA for the thyroid hormone nuclear receptors (TR) α1, α2 and β in the penis and smooth muscle cells (SMCs) of the corpora cavernosa of rats and humans during development. Methods: We used several molecular biology techniques to study the TR expression in whole tissues or primary cultures from human and rodent penile tissues of different ages. Main Outcome Measure: We measured our data by semi-quantitative reverse transcription polymerase chain reaction (RT-PCR) amplification, Northern blot and immunohistochemistry. Results: We found that TRα1 and TRα2 are both expressed in the penis and in SMCs during ontogenesis without development-dependent changes. However, in the rodent model, TRβ shows an increase from 3 to 6 days post natum (dpn) to 20 dpn, remaining high in adulthood. The same expression profile was observed in humans. While the expression of TRβ is strictly regulated by development, TRα1 is the principal isoform present in corpora cavernosa, suggesting its importance in SMC function. These results have been confirmed by immunohistochemistry localization in SMCs and endothelial cells of the corpora cavernosa. Conclusions: The presence of TRs in the penis provides the biological basis for the direct action of thyroid hormones on this organ. Given this evidence, physicians would be advised to investigate sexual function in men with thyroid disorders. Carosa E, Di Sante S, Rossi S, Castri A, D'Adamo F, Gravina GL, Ronchi P, Kostrouch Z, Dolci S, Lenzi A, and Jannini EA. Ontogenetic profile of the expression of thyroid hormone receptors in rat and human corpora cavernosa of the penis. J Sex Med 2010;7:1381–1390. PMID:20141582

  14. Preschoolers Don't Practice What They Preach: Preschoolers' Planning Performances with Manual and Spoken Response Requirements

    ERIC Educational Resources Information Center

    Byrd, Dana L.; van der Veen, Tanja K.; McNamara, Joseph P. H.; Berg, W. Keith

    2004-01-01

    Three-, 4-, and 5-year-olds performed Tower of London problems under spoken, manual, and combined (requiring both spoken and manual) response conditions. Preschoolers' solutions were most goal-focused when required to give only a spoken response, intermediately goal-focused when required to give both response types, and least goal-focused when…

  15. Pictures and Spoken Descriptions Elicit Similar Eye Movements during Mental Imagery, Both in Light and in Complete Darkness

    ERIC Educational Resources Information Center

    Johansson, Roger; Holsanova, Jana; Holmqvist, Kenneth

    2006-01-01

    This study provides evidence that eye movements reflect the positions of objects while participants listen to a spoken description, retell a previously heard spoken description, and describe a previously seen picture. This effect is equally strong in retelling from memory, irrespective of whether the original elicitation was spoken or visual. In…

  16. Orthography and Modality Influence Speech Production in Adults and Children

    ERIC Educational Resources Information Center

    Saletta, Meredith; Goffman, Lisa; Hogan, Tiffany P.

    2016-01-01

    Purpose: The acquisition of literacy skills influences the perception and production of spoken language. We examined if orthography influences implicit processing in speech production in child readers and in adult readers with low and high reading proficiency. Method: Children (n = 17), adults with typical reading skills (n = 17), and adults…

  17. Continuing Education of Deaf Adults. Report of a Survey.

    ERIC Educational Resources Information Center

    Schein, Jerome D.; And Others

    The identification of problems relating to deaf adults' access to nondegree-oriented programs of adult and continuing education (ACE), and suggestions for alternative solutions are the focus of this report of a national survey of 641 (response rate) deaf persons (those persons who cannot hear and understand speech spoken directly into their better…

  18. A corpora allata farnesyl diphosphate synthase in mosquitoes displaying a metal ion dependent substrate specificity

    PubMed Central

    Rivera-Perez, Crisalejandra; Nyati, Pratik; Noriega, Fernando G.

    2015-01-01

    Farnesyl diphosphate synthase (FPPS) is a key enzyme in isoprenoid biosynthesis; it catalyzes the head-to-tail condensation of dimethylallyl diphosphate (DMAPP) with two molecules of isopentenyl diphosphate (IPP) to generate farnesyl diphosphate (FPP), a precursor of juvenile hormone (JH). In this study, we functionally characterized an Aedes aegypti FPPS (AaFPPS) expressed in the corpora allata. AaFPPS is the only FPPS gene present in the genome of the yellow fever mosquito; it encodes a 49.6 kDa protein exhibiting all the characteristic conserved sequence domains of prenyltransferases. AaFPPS displays its activity in the presence of metal cofactors, and the condensation product depends on the divalent cation: Mg2+ ions lead to the production of FPP, while Co2+ ions lead to geranyl diphosphate (GPP) production. In the presence of Mg2+ the AaFPPS affinity for allylic substrates is GPP>DMAPP>IPP. These results suggest that AaFPPS displays “catalytic promiscuity”, changing the type and ratio of products released (GPP or FPP) depending on allylic substrate concentrations and the presence of different metal cofactors. This metal ion-dependent regulatory mechanism allows a single enzyme to selectively control the metabolites it produces, thus potentially altering the flow of carbon into separate metabolic pathways. PMID:26188328

  19. A corpora allata farnesyl diphosphate synthase in mosquitoes displaying a metal ion dependent substrate specificity.

    PubMed

    Rivera-Perez, Crisalejandra; Nyati, Pratik; Noriega, Fernando G

    2015-09-01

    Farnesyl diphosphate synthase (FPPS) is a key enzyme in isoprenoid biosynthesis; it catalyzes the head-to-tail condensation of dimethylallyl diphosphate (DMAPP) with two molecules of isopentenyl diphosphate (IPP) to generate farnesyl diphosphate (FPP), a precursor of juvenile hormone (JH). In this study, we functionally characterized an Aedes aegypti FPPS (AaFPPS) expressed in the corpora allata. AaFPPS is the only FPPS gene present in the genome of the yellow fever mosquito; it encodes a 49.6 kDa protein exhibiting all the characteristic conserved sequence domains of prenyltransferases. AaFPPS displays its activity in the presence of metal cofactors, and the condensation product depends on the divalent cation: Mg(2+) ions lead to the production of FPP, while Co(2+) ions lead to geranyl diphosphate (GPP) production. In the presence of Mg(2+) the AaFPPS affinity for allylic substrates is GPP > DMAPP > IPP. These results suggest that AaFPPS displays "catalytic promiscuity", changing the type and ratio of products released (GPP or FPP) depending on allylic substrate concentrations and the presence of different metal cofactors. This metal ion-dependent regulatory mechanism allows a single enzyme to selectively control the metabolites it produces, thus potentially altering the flow of carbon into separate metabolic pathways.

  20. Negative Feedbacks by Isoprenoids on a Mevalonate Kinase Expressed in the Corpora Allata of Mosquitoes

    PubMed Central

    Noriega, Fernando G.

    2015-01-01

    Background: Juvenile hormones (JH) regulate development and reproductive maturation in insects. JHs are synthesized through the mevalonate pathway (MVAP), an ancient metabolic pathway present in the three domains of life. Mevalonate kinase (MVK) is a key enzyme in the MVAP. MVK catalyzes the synthesis of phosphomevalonate (PM) by transferring the γ-phosphoryl group from ATP to the C5 hydroxyl oxygen of mevalonic acid (MA). Despite the importance of MVKs, these enzymes have been poorly characterized in insects. Results: We functionally characterized an Aedes aegypti MVK (AaMVK) expressed in the corpora allata (CA) of the mosquito. AaMVK displayed its activity in the presence of metal cofactors. Different nucleotides were used by AaMVK as phosphoryl donors. In the presence of Mg2+, the enzyme has higher affinity for MA than ATP. The activity of AaMVK was regulated by feedback inhibition from long-chain isoprenoids, such as geranyl diphosphate (GPP) and farnesyl diphosphate (FPP). Conclusions: AaMVK exhibited efficient inhibition by GPP and FPP (Ki less than 1 μM), and none by isopentenyl pyrophosphate (IPP) and dimethyl allyl pyrophosphate (DPPM). These results suggest that GPP and FPP might act as physiological inhibitors in the synthesis of isoprenoids in the CA of mosquitoes. Changing MVK activity can alter the flux of precursors and therefore regulate juvenile hormone biosynthesis. PMID:26566274

  1. Ultrastructural changes of corpora cavernosa in men with erectile dysfunction and chronic renal failure.

    PubMed

    Bellinghieri, Guido; Santoro, Giuseppe; Santoro, Domenico; Lo Forti, Bruno; Savica, Vincenzo; Favazzi, Pietro; Magaudda, Ludovico; Cohen, Arthur H

    2004-09-01

    Erectile dysfunction (ED) is a common and often distressing side effect of renal failure. Uremic men of different ages report a wide variety of sexual problems, including altered sexual hormone patterns, reduced or lost libido, infertility, and impotence, all of which affect their well-being. The pathogenic mechanisms include physiologic, psychologic, and organic causes. To determine the contribution of morphologic factors to impotence, we studied the ultrastructure of the corpora cavernosa in 20 patients with end-stage renal disease who were treated with chronic dialysis and compared the findings with 6 individuals with no clinical history of impotence. Our results indicated that in male uremic patients with sexual disturbances there were major changes in smooth muscle cells, characterized by reduction of dense bodies in the cytoplasm, thick basement membranes, and increased interstitial collagen fibers with resultant reduction of cell-to-cell contact. In addition, there was thickening and lamination of basement membranes of endothelial cells and increased accumulation of collagen between nerve fibers. These alterations were more evident in patients with longer time on dialysis and were independent of the type of primary renal disease. We hypothesize that ED in dialysis patients is not related to the primary disease but to the uremic state.

  2. FacetGist: Collective Extraction of Document Facets in Large Technical Corpora.

    PubMed

    Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei

    2016-10-01

    Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets (e.g., application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept to facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes.
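
    A minimal sketch of graph-based label propagation, the general family of methods named in the abstract, is given below. The concept graph, seed labels and update rule are simplified stand-ins, not the FacetGist algorithm itself, which operates over a much richer heterogeneous network with a joint optimization.

```python
# Minimal label-propagation sketch over a small concept graph: a few seed
# concepts carry known facet labels, and unlabeled concepts iteratively
# adopt the normalized label distribution of their neighbours. The graph
# and seeds are invented for illustration.

edges = {  # undirected co-occurrence edges between concept mentions
    "F-measure": ["evaluation metric seed", "named entity recognition"],
    "named entity recognition": ["F-measure", "CoNLL-2003"],
    "CoNLL-2003": ["named entity recognition", "dataset seed"],
    "evaluation metric seed": ["F-measure"],
    "dataset seed": ["CoNLL-2003"],
}
seeds = {"evaluation metric seed": "evaluation_metric", "dataset seed": "dataset"}
labels = {node: {seeds[node]: 1.0} if node in seeds else {} for node in edges}

for _ in range(10):                          # propagate for a few rounds
    for node, neighbours in edges.items():
        if node in seeds:
            continue                         # seeds keep their labels
        scores = {}
        for nb in neighbours:
            for lab, w in labels[nb].items():
                scores[lab] = scores.get(lab, 0.0) + w
        total = sum(scores.values())
        if total:
            labels[node] = {lab: w / total for lab, w in scores.items()}

for node, dist in labels.items():
    print(node, dist)
```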

  3. Corpora Amylacea of Brain Tissue from Neurodegenerative Diseases Are Stained with Specific Antifungal Antibodies

    PubMed Central

    Pisa, Diana; Alonso, Ruth; Rábano, Alberto; Carrasco, Luis

    2016-01-01

    The origin and potential function of corpora amylacea (CA) remain largely unknown. Low numbers of CA are detected in the aging brain of normal individuals but they are abundant in the central nervous system of patients with neurodegenerative diseases. In the present study, we show that CA from patients diagnosed with Alzheimer's disease (AD) contain fungal proteins as detected by immunohistochemistry analyses. Accordingly, CA were labeled with different anti-fungal antibodies at the external surface, whereas the central portion, composed of calcium salts, contains fewer proteins. Detection of fungal proteins was achieved using a number of antibodies raised against different fungal species, which indicated cross-reactivity between the fungal proteins present in CA and the antibodies employed. Importantly, these antibodies do not immunoreact with cellular proteins. Additionally, CNS samples from patients diagnosed with amyotrophic lateral sclerosis (ALS) and Parkinson's disease (PD) also contained CA that were immunoreactive with a range of antifungal antibodies. However, CA were less abundant in ALS or PD patients as compared to CNS samples from AD. By contrast, CA from brain tissue of control subjects were almost devoid of fungal immunoreactivity. These observations are consistent with the concept that CA associate with fungal infections and may contribute to the elucidation of the origin of CA. PMID:27013948

  4. Corpora Amylacea of Brain Tissue from Neurodegenerative Diseases Are Stained with Specific Antifungal Antibodies.

    PubMed

    Pisa, Diana; Alonso, Ruth; Rábano, Alberto; Carrasco, Luis

    2016-01-01

    The origin and potential function of corpora amylacea (CA) remain largely unknown. Low numbers of CA are detected in the aging brain of normal individuals but they are abundant in the central nervous system of patients with neurodegenerative diseases. In the present study, we show that CA from patients diagnosed with Alzheimer's disease (AD) contain fungal proteins as detected by immunohistochemistry analyses. Accordingly, CA were labeled with different anti-fungal antibodies at the external surface, whereas the central portion, composed of calcium salts, contains fewer proteins. Detection of fungal proteins was achieved using a number of antibodies raised against different fungal species, which indicated cross-reactivity between the fungal proteins present in CA and the antibodies employed. Importantly, these antibodies do not immunoreact with cellular proteins. Additionally, CNS samples from patients diagnosed with amyotrophic lateral sclerosis (ALS) and Parkinson's disease (PD) also contained CA that were immunoreactive with a range of antifungal antibodies. However, CA were less abundant in ALS or PD patients as compared to CNS samples from AD. By contrast, CA from brain tissue of control subjects were almost devoid of fungal immunoreactivity. These observations are consistent with the concept that CA associate with fungal infections and may contribute to the elucidation of the origin of CA.

  5. Understanding and producing the reduced relative construction: Evidence from ratings, editing and corpora

    PubMed Central

    Hare, Mary; Tanenhaus, Michael K.; McRae, Ken

    2011-01-01

    Two rating studies demonstrate that English speakers willingly produce reduced relatives with internal cause verbs (e.g., Whisky fermented in oak barrels can have a woody taste), and judge their acceptability based on factors known to influence ambiguity resolution, rather than on the internal/external cause distinction. Regression analyses demonstrate that frequency of passive usage predicts reduced relative frequency in corpora, but internal/external cause status does not. The authors conclude that reduced relatives with internal cause verbs are rare because few of these verbs occur in the passive. This contrasts with the claim in McKoon and Ratcliff (McKoon, G., & Ratcliff, R. (2003). Meaning through syntax: Language comprehension and the reduced relative clause construction. Psychological Review, 110, 490–525) that reduced relatives like The horse raced past the barn fell are rare and, when they occur, incomprehensible, because the meaning of the reduced relative construction prohibits the use of a verb with an internal cause event template. PMID:22162904

  6. FacetGist: Collective Extraction of Document Facets in Large Technical Corpora

    PubMed Central

    Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei

    2017-01-01

    Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets (e.g., application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept to facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes. PMID:28210517

  7. Spoken Language Processing in the Clarissa Procedure Browser

    NASA Technical Reports Server (NTRS)

    Rayner, M.; Hockey, B. A.; Renders, J.-M.; Chatzichrisafis, N.; Farrell, K.

    2005-01-01

    Clarissa, an experimental voice-enabled procedure browser that has recently been deployed on the International Space Station, is, as far as we know, the first spoken dialog system in space. We describe the objectives of the Clarissa project and the system's architecture. In particular, we focus on three key problems: grammar-based speech recognition using the Regulus toolkit; methods for open-mic speech recognition; and robust, side-effect-free dialogue management for handling undos, corrections and confirmations. We first describe the grammar-based recogniser we have built using Regulus, and report experiments where we compare it against a class N-gram recogniser trained off the same 3297-utterance dataset. We obtained a 15% relative improvement in WER and a 37% improvement in semantic error rate. The grammar-based recogniser moreover outperforms the class N-gram version for utterances of all lengths from 1 to 9 words inclusive. The central problem in building an open-mic speech recognition system is being able to distinguish between commands directed at the system and other material (cross-talk), which should be rejected. Most spoken dialogue systems make the accept/reject decision by applying a threshold to the recognition confidence score. We show how a simple and general method, based on standard approaches to document classification using Support Vector Machines, can give substantially better performance, and report experiments showing a relative reduction in the task-level error rate of about 25% compared to the baseline confidence-threshold method. Finally, we describe a general side-effect-free dialogue management architecture that we have implemented in Clarissa, which extends the "update semantics" framework by including task as well as dialogue information in the information state. We show that this enables elegant treatments of several dialogue management problems, including corrections, confirmations, querying of the environment, and regression
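
    The accept/reject idea can be sketched as ordinary text classification: treat each recognised utterance as a short document and train a linear Support Vector Machine to separate system-directed commands from cross-talk. The training utterances below are invented, and the scikit-learn pipeline is only an assumed stand-in for the feature design and classifier actually used in Clarissa.

```python
# Sketch of SVM-based accept/reject for open-mic speech: each recognised
# utterance is treated as a short document and classified as a command or
# as cross-talk to be rejected. Training utterances are invented; the
# deployed system used a much larger labelled set and its own features.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

utterances = [
    ("next step", "command"),
    ("go to step three", "command"),
    ("set voice volume louder", "command"),
    ("undo that", "command"),
    ("hand me the wrench please", "crosstalk"),
    ("we should check with the ground team", "crosstalk"),
    ("that panel looks fine to me", "crosstalk"),
    ("what time is the next pass", "crosstalk"),
]
texts, labels = zip(*utterances)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)

for probe in ("go to the next step", "could you pass me the tape"):
    print(probe, "->", clf.predict([probe])[0])
```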

  8. The hardest butter to button: immediate context effects in spoken word identification.

    PubMed

    Brock, Jon; Nation, Kate

    2014-01-01

    According to some theories, the context in which a spoken word is heard has no impact on the earliest stages of word identification. This view has been challenged by recent studies indicating an interactive effect of context and acoustic similarity on language-mediated eye movements. However, an alternative explanation for these results is that participants looked less at acoustically similar objects in constraining contexts simply because they were looking more at other objects that were cued by the context. The current study addressed this concern whilst providing a much finer grained analysis of the temporal evolution of context effects. Thirty-two adults listened to sentences while viewing a computer display showing four objects. As expected, shortly after the onset of a target word (e.g., "button") in a neutral context, participants saccaded preferentially towards a cohort competitor of the word (e.g., butter). This effect was significantly reduced when the preceding verb made the competitor an unlikely referent (e.g., "Sam fastened the button"), even though there were no other contextually congruent objects in the display. Moreover, the time-course of these two effects was identical to within approximately 30 ms, indicating that certain forms of contextual information can have a near-immediate effect on word identification.

  9. Implicit learning of nonadjacent phonotactic dependencies in the perception of spoken language

    NASA Astrophysics Data System (ADS)

    McLennan, Conor T.; Luce, Paul A.

    2004-05-01

    We investigated the learning of nonadjacent phonotactic dependencies in adults. Following previous research examining learning of dependencies at a grammatical level (Gomez, 2002), we manipulated the co-occurrence of nonadjacent phonological segments within a spoken syllable. Each listener was exposed to consonant-vowel-consonant nonword stimuli produced by one of two phonological grammars. Both languages contained the same adjacent dependencies between the initial consonant-vowel and final vowel-consonant sequences but differed on the co-occurrences of initial and final consonants. The number of possible types of vowels that intervened between the initial and final consonants was also manipulated. Listeners' learning of nonadjacent segmental dependencies was evaluated in a speeded recognition task in which they heard (1) old nonwords on which they had been trained, (2) new nonwords generated by the grammar on which they had been trained, and (3) new nonwords generated by the grammar on which they had not been trained. The results provide evidence for listeners' sensitivity to nonadjacent dependencies. However, this sensitivity is manifested as an inhibitory competition effect rather than a facilitative effect on pattern processing. [Research supported by Research Grant No. R01 DC 0265802 from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health.]

  10. Psychosocial Development of Women: Linkages to Teaching and Leadership in Adult Education. Information Series No. 350.

    ERIC Educational Resources Information Center

    Caffarella, Rosemary S.

    Theories and models of adult development over the life span provide one of the foundational pieces for gaining a clearer understanding of learning in adulthood. Only recently have adult educators spoken more forcefully about the lack of integrated research on adult women. The purpose of this monograph is to describe those missing female voices as…

  11. Aldehyde dehydrogenase 3 converts farnesal into farnesoic acid in the corpora allata of mosquitoes.

    PubMed

    Rivera-Perez, Crisalejandra; Nouzova, Marcela; Clifton, Mark E; Garcia, Elena Martin; LeBlanc, Elizabeth; Noriega, Fernando G

    2013-08-01

    The juvenile hormones (JHs) play a central role in insect reproduction, development and behavior. Interrupting JH biosynthesis has long been considered a promising strategy for the development of target-specific insecticides. Using a combination of RNAi, in vivo and in vitro studies we characterized the last unknown biosynthetic enzyme of the JH pathway, a fatty aldehyde dehydrogenase (AaALDH3) that oxidizes farnesal into farnesoic acid (FA) in the corpora allata (CA) of mosquitoes. The AaALDH3 is structurally and functionally a NAD(+)-dependent class 3 ALDH showing tissue- and developmental-stage-specific splice variants. Members of the ALDH3 family play critical roles in the development of cancer and Sjögren-Larsson syndrome in humans, but have not been studied in groups other than mammals. Using a newly developed assay utilizing fluorescent tags, we demonstrated that AaALDH3 activity, as well as the concentrations of farnesol, farnesal and FA, differed in CA of sugar- and blood-fed females. In CA of blood-fed females the low catalytic activity of AaALDH3 limited the flux of precursors and caused a remarkable increase in the pool of farnesal with a decrease in FA and JH synthesis. The accumulation of the potentially toxic farnesal stimulated the activity of a reductase that converted farnesal back into farnesol, resulting in farnesol leaking out of the CA. Our studies indicated that AaALDH3 plays a key role in the regulation of JH synthesis in blood-fed females, and mosquitoes seem to have developed a "trade-off" system to balance the key role of farnesal as a JH precursor with its potential toxicity.

  12. Synthesis and reception of prostaglandins in corpora lutea of domestic cat and lynx.

    PubMed

    Zschockelt, Lina; Amelkina, Olga; Siemieniuch, Marta J; Kowalewski, Mariusz P; Dehnhard, Martin; Jewgenow, Katarina; Braun, Beate C

    2016-08-01

    Felids show different reproductive strategies related to the luteal phase. Domestic cats exhibit a seasonal polyoestrus and ovulation is followed by formation of corpora lutea (CL). Pregnant and non-pregnant cycles are reflected by diverging plasma progesterone (P4) profiles. Eurasian and Iberian lynxes show a seasonal monooestrus, in which physiologically persistent CL (perCL) support constantly elevated plasma P4 levels. Prostaglandins (PGs) represent key regulators of reproduction, and we aimed to characterise PG synthesis in feline CL to identify their contribution to the luteal lifespan. We assessed mRNA and protein expression of PG synthases (PTGS2/COX2, PTGES, PGFS/AKR1C3) and PG receptors (PTGER2, PTGER4, PTGFR), and intra-luteal levels of PGE2 and PGF2α. Therefore, CL of pregnant (pre-implantation, post-implantation, regression stages) and non-pregnant (formation, development/maintenance, early regression, late regression stages) domestic cats, and prooestrous Eurasian (perCL, pre-mating) and metoestrous Iberian (perCL, freshCL, post-mating) lynxes were investigated. Expression of PTGS2/COX2, PTGES and PTGER4 was independent of the luteal stage in the investigated species. High levels of luteotrophic PGE2 in perCL might be associated with persistence of luteal function in lynxes. Signals for PGFS/AKR1C3 expression were weak in mid and late luteal stages of cats but were absent in lynxes, concomitant with low PGF2α levels in these species. Thus, regulation of CL regression by luteal PGF2α seems negligible. In contrast, expression of PTGFR was evident in nearly all investigated CL of cat and lynxes, implying that luteal regression, e.g. at the end of pregnancy, is triggered by extra-luteal PGF2α.

  13. Nitric oxide release in penile corpora cavernosa in a rat model of erection

    PubMed Central

    Escrig, A; Gonzalez-Mora, J L; Mas, M

    1999-01-01

    Nitric oxide (NO) levels were measured in the corpus cavernosum of urethane-anaesthetized rats by using differential normal pulse voltammetry with carbon fibre microelectrodes coated with a polymeric porphyrin and a cation exchanger (Nafion). A NO oxidation peak could be recorded at 650 mV vs. a Ag-AgCl reference electrode every 100 s. This NO signal was greatly decreased by the NO synthase inhibitor NG-nitro-L-arginine methyl ester (L-NAME), given by local and systemic routes, and enhanced by the NO precursor L-arginine. Treatment with L-arginine reversed the effect of L-NAME on the NO peak. Both the NO signal and the intracavernosal pressure (ICP) were increased by electrical stimulation of cavernosal nerves (ESCN). However, the rise in the NO levels long outlived the rapid return to baseline of the ICP values at the end of nerve stimulation. The ICP and the NO responses to ESCN were suppressed by local and systemic injections of L-NAME. Subsequent treatment with L-arginine of L-NAME-treated animals restored the NO signal to basal levels and the NO response to ESCN. The ICP response to ESCN was restored only in part by L-arginine. The observed temporal dissociation between the NO and ICP responses could be accounted for by several factors, including the buffering of NO by the blood filling the cavernosal spaces during erection. These findings indicate that an increased production of NO in the corpora cavernosa is necessary but not sufficient for maintaining penile erection and suggest a complex modulation of the NO-cGMP-cavernosal smooth muscle relaxation cascade. PMID:10066939

  14. Adrenomedullin in rat follicles and corpora lutea: expression, functions and interaction with endothelin-1

    PubMed Central

    2011-01-01

    Background: Adrenomedullin (ADM), a novel vasorelaxant peptide, was found in human/rat ovaries. The present study investigated the interaction of ADM and endothelin-1 (ET-1) in follicles and newly formed corpora lutea (CL) and the actions of ADM on progesterone production in CL during pregnancy. Methods: The peptide and gene expression level of adrenomedullin in small antral follicles, large antral follicles and CL was studied by real-time RT-PCR and EIA. The effect of ADM treatment on oestradiol production in 5-day follicular culture and on progesterone production from CL of different pregnant stages was measured by EIA. The interaction of ADM and ET-1 in follicles and CL at their gene expression level was studied by real-time RT-PCR. Results: In the rat ovary, the gene expression of Adm increased during development from small antral follicles to large antral follicles and CL. In vitro treatment of preantral follicular culture for 5 days with ADM increased oestradiol production but did not affect follicular growth or ovulation rate. The regulation of progesterone production by ADM in CL in culture was pregnancy-stage dependent, inhibitory at early and late pregnancy but stimulatory at mid-pregnancy, which might contribute to the high progesterone production rate of the CL at mid-pregnancy. Moreover, the interaction between ADM and ET-1 at both the production and functional levels indicates that these two vasoactive peptides may form an important local, fine-tuning regulatory system together with LH and prolactin for progesterone production in rat CL. Conclusions: As the CL is the major source of progesterone production even after the formation of placenta in rats, ADM may be an important regulator in progesterone production to meet the requirement of pregnancy. PMID:21824440

  15. Role of heme oxygenase-1 in the biogenesis of corpora amylacea.

    PubMed

    Sahlas, D J; Liberman, A; Schipper, H M

    2002-01-01

    Corpora amylacea (CA) are glycoproteinaceous inclusions that accumulate in the human brain during normal aging and to a greater extent in Alzheimer's disease. We previously demonstrated that, in cultured rat astroglia, cysteamine (CSH) upregulates heme oxygenase-1 (HO-1) and promotes the transformation of normal mitochondria into CA-like inclusions. In the current study, primary cultures of neonatal rat astroglia were exposed to 880 μM CSH for three months in the presence or absence of dexamethasone, a suppressor of HO-1 gene transcription. Cells were double-labeled with periodic acid-Schiff reagent (PAS) and antisera against ubiquitin, HO-1, or a mitochondrial epitope. CA were quantified and their immunostaining characteristics analyzed using confocal microscopy. HO-1 immunofluorescence was more abundant in cultures exposed to CSH alone relative to untreated control cultures and cultures exposed to both CSH and dexamethasone. Mature CA appeared as large (5-50 μm), spherical or polygonal, intensely PAS-positive inclusions within glial cytoplasm or deposited extracellularly. The inclusions manifested intense rim and, less commonly, homogeneous or stippled patterns of immunoreactivity for ubiquitin, HO-1, and the mitochondrial marker. Monolayers exposed to CSH exhibited 660% more CA relative to untreated controls (P < 0.05). Numbers of CA in cultures exposed to CSH were diminished by co-administration of 50 μg/ml dexamethasone (P < 0.05 relative to CSH alone) or 100 μg/ml dexamethasone (P < 0.05 relative to CSH alone). Numbers of CA in cultures co-treated with CSH and 50 μg/ml dexamethasone or 100 μg/ml dexamethasone were not significantly different from untreated control values. Up-regulation of HO-1 may contribute to the formation of CA in aging astroglia.

  16. Nitric oxide/cGMP signaling in the corpora allata of female grasshoppers.

    PubMed

    Wirmer, Andrea; Heinrich, Ralf

    2011-01-01

    The corpora allata (CA) of various insects express enzymes with fixation-resistant NADPH-diaphorase activity. In female grasshoppers, juvenile hormone (JH) released from the CA is necessary to establish reproductive readiness, including sound production. Previous studies demonstrated that female sound production is also promoted by systemic inhibition of nitric oxide (NO) formation. In addition, allatotropin and allatostatin expressing central brain neurons were located in close vicinity of NO generating cells. It was therefore speculated that NO signaling may contribute to the control of juvenile hormone release from the CA. This study demonstrates the presence of NO/cGMP signaling in the CA of female Chorthippus biguttulus. CA parenchymal cells exhibit NADPH-diaphorase activity, express anti-NOS immunoreactivity and accumulate citrulline, which is generated as a byproduct of NO generation. Varicose terminals from brain neurons in the dorsal pars intercerebralis and pars lateralis that accumulate cGMP upon stimulation with NO donors serve as intrinsic targets of NO in the CA. Accumulation of both citrulline and cyclic GMP was inhibited by the NOS inhibitor aminoguanidine, suggesting that NO in the CA is produced by NOS. These results suggest that NO is a retrograde transmitter that provides feedback to projection neurons controlling JH production. Combined immunostainings and backfill experiments detected CA cells with processes extending into the CC and the protocerebrum that expressed immunoreactivity against the pan-neural marker anti-HRP. Allatostatin and allatotropin immunopositive brain neurons do not express NOS, but subpopulations accumulate cGMP upon NO formation. Direct innervation of the CA by these peptidergic neurons was not observed.

  17. The time course of indexical specificity effects in the perception of spoken words

    NASA Astrophysics Data System (ADS)

    McLennan, Conor T.; Luce, Paul A.

    2003-10-01

    This research investigates the time-course of indexical specificity effects in spoken word recognition by examining the circumstances under which variability in speaking rate affects listeners' perception of spoken words. Previous research has demonstrated that variability has both representational and processing consequences. The current research examines one of the conditions expected to influence the extent to which indexical variability plays a role in spoken word recognition, namely the time-course of processing. Based on our past work, it was hypothesized that indexical specificity effects associated with speaking rate would only affect later stages of processing in spoken word recognition. The results confirm this hypothesis: Specificity effects are only in evidence when processing is relatively slow. [Research supported (in part) by Research Grant No. R01 DC 0265801 from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health.]

  18. The employment of a spoken language computer applied to an air traffic control task.

    NASA Technical Reports Server (NTRS)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve the controller performance.

  19. Spoken Word Recognition by Humans: A Single- or a Multi-Layer Process

    DTIC Science & Technology

    2008-02-01

    ...inherently rhythmic phenomenon. Phonetic segments are articulated in syllabic "packages," which are spoken in cadence and reflect energy modulations between 3 and 20 Hz (Greenberg, 1999)... For example, the range of time intervals (40-1000 ms) associated with different levels of linguistic abstraction (phonetic feature...

  20. Why Dose Frequency Affects Spoken Vocabulary in Preschoolers With Down Syndrome.

    PubMed

    Yoder, Paul J; Woynaroski, Tiffany; Fey, Marc E; Warren, Steven F; Gardner, Elizabeth

    2015-07-01

    In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only the participants with DS, we found that more therapy led to larger spoken vocabularies at posttreatment because it increased children's canonical syllabic communication and receptive vocabulary growth early in the treatment phase.

  1. Effects of speech clarity on recognition memory for spoken sentences.

    PubMed

    Van Engen, Kristin J; Chandrasekaran, Bharath; Smiljanic, Rajka

    2012-01-01

    Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e. changes in speaking style within a talker) on the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or non-native speakers of the language). Here we conducted two experiments to examine the role of speaking style variation in spoken language processing. First, we examined the extent to which clear speech provided benefits in challenging listening environments (i.e. speech-in-noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking style variability. The results show that acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.

  2. Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems

    NASA Astrophysics Data System (ADS)

    Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae

    Although there have been enormous investments in English education all around the world, little has changed in the style of English instruction. Considering the shortcomings of the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches including theories, technologies, systems, and field studies, and at providing relevant pointers. On top of the state-of-the-art technologies of spoken dialog systems, a variety of adaptations have been applied to overcome problems caused by the numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that helps learners become proficient. Integrating these efforts resulted in intelligent educational robots (Mero and Engkey) and virtual 3D language learning games (Pomy). To verify the effects of our approaches on students' communicative abilities, we have conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.

  3. Lexical frequency and acoustic reduction in spoken Dutch

    NASA Astrophysics Data System (ADS)

    Pluymaekers, Mark; Ernestus, Mirjam; Baayen, R. Harald

    2005-10-01

    This study investigates the effects of lexical frequency on the durational reduction of morphologically complex words in spoken Dutch. The hypothesis that high-frequency words are more reduced than low-frequency words was tested by comparing the durations of affixes occurring in different carrier words. Four Dutch affixes were investigated, each occurring in a large number of words with different frequencies. The materials came from a large database of face-to-face conversations. For each word containing a target affix, one token was randomly selected for acoustic analysis. Measurements were made of the duration of the affix as a whole and the durations of the individual segments in the affix. For three of the four affixes, a higher frequency of the carrier word led to shorter realizations of the affix as a whole, individual segments in the affix, or both. Other relevant factors were the sex and age of the speaker, segmental context, and speech rate. To accommodate these findings, models of speech production should allow word frequency to affect the acoustic realizations of lower-level units, such as individual speech sounds occurring in affixes.
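
    As a rough illustration of the kind of duration-by-frequency analysis described above (not the authors' actual model, materials, or data), the following Python sketch regresses a hypothetical affix duration on log word frequency with speech rate as a covariate; all variable names and values are invented.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical token-level measurements: one row per affix token.
        rng = np.random.default_rng(0)
        n = 200
        freq = rng.integers(1, 1000, size=n)      # carrier-word frequency (invented)
        rate = rng.normal(5.0, 1.0, size=n)       # speech rate, syllables/s (invented)
        dur = 170 - 12 * np.log(freq) - 8 * rate + rng.normal(0, 10, size=n)
        df = pd.DataFrame({"duration_ms": dur, "word_frequency": freq, "speech_rate": rate})

        # Affix duration as a function of log frequency, controlling for speech rate.
        model = smf.ols("duration_ms ~ np.log(word_frequency) + speech_rate", data=df).fit()
        print(model.params)   # a negative log-frequency coefficient indicates more reduction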

  4. Clarissa Spoken Dialogue System for Procedure Reading and Navigation

    NASA Technical Reports Server (NTRS)

    Hieronymus, James; Dowding, John

    2004-01-01

    Speech is the most natural modality humans use to communicate with other people, agents and complex systems. A spoken dialogue system must be robust to noise and able to mimic human conversational behavior, like correcting misunderstandings, answering simple questions about the task and understanding most well-formed inquiries or commands. The system aims to understand the meaning of the human utterance, and if it does not, it discards the utterance as being meant for someone else. The first operational system is Clarissa, a conversational procedure reader and navigator, which will be used in a System Development Test Objective (SDTO) on the International Space Station (ISS) during Expedition 10. In the present environment one astronaut reads the procedure on a Manual Procedure Viewer (MPV) or paper, and has to stop to read or turn pages, shifting focus from the task. Clarissa is designed to read and navigate ISS procedures entirely with speech, while the astronaut has his eyes and hands engaged in performing the task. The system also provides an MPV-like graphical interface so the procedure can be read visually. A demo of the system will be given.

  5. Investigating joint attention mechanisms through spoken human-robot interaction.

    PubMed

    Staudte, Maria; Crocker, Matthew W

    2011-08-01

    Referential gaze during situated language production and comprehension is tightly coupled with the unfolding speech stream (Griffin, 2001; Meyer, Sleiderink, & Levelt, 1998; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). In a shared environment, utterance comprehension may further be facilitated when the listener can exploit the speaker's focus of (visual) attention to anticipate, ground, and disambiguate spoken references. To investigate the dynamics of such gaze-following and its influence on utterance comprehension in a controlled manner, we use a human-robot interaction setting. Specifically, we hypothesize that referential gaze is interpreted as a cue to the speaker's referential intentions which facilitates or disrupts reference resolution. Moreover, the use of a dynamic and yet extremely controlled gaze cue enables us to shed light on the simultaneous and incremental integration of the unfolding speech and gaze movement. We report evidence from two eye-tracking experiments in which participants saw videos of a robot looking at and describing objects in a scene. The results reveal a quantified benefit-disruption spectrum of gaze on utterance comprehension and, further, show that gaze is used, even during the initial movement phase, to restrict the spatial domain of potential referents. These findings more broadly suggest that people treat artificial agents similar to human agents and, thus, validate such a setting for further explorations of joint attention mechanisms.

  6. Usefulness of electromyography of the cavernous corpora (CC EMG) in the diagnosis of arterial erectile dysfunction.

    PubMed

    Virseda-Chamorro, M; Lopez-Garcia-Moreno, A M; Salinas-Casado, J; Esteban-Fuertes, M

    2012-01-01

    Electromyography (EMG) of the corpora cavernosa (CC-EMG) is able to record the activity of the erectile tissue during erection, and thus has been used as a diagnostic technique in patients with erectile dysfunction (ED). The present study examines the usefulness of the technique in the diagnosis of arterial ED. A cross-sectional study was made of 35 males with a mean age of 48.5 years (s.d. 11.34), referred to our center with ED for >1 year. The patients were subjected to CC-EMG and a penile Doppler ultrasound study following the injection of 20 μg of prostaglandin E1 (PGE1). The patients were divided into three groups according to their response to the intracavernous injection of PGE1: Group 1 (adequate erection and reduction/suppression of EMG activity); Group 2 (insufficient erection and persistence of EMG activity); and Group 3 (insufficient erection and reduction/suppression of EMG activity). Patient classification according to response to the intracavernous injection of PGE1 was as follows: Group 1: six patients (17%), Group 2: 18 patients (51%), and Group 3: 11 patients (31%). Patients diagnosed with arterial insufficiency according to Doppler ultrasound (systolic arterial peak velocity <30 mm s(-1) in both arteries) were significantly older than those without such damage (54.5 versus 41.8 years, respectively; s.d. 11.12). The patients in Group 3 showed a significantly lower maximum systolic velocity in both arteries than the subjects belonging to Group 2. Likewise, a statistically significant relationship was observed between the diagnosis of arterial insufficiency and patient classification in Group 3. The confirmation of insufficient erection associated with reduction/suppression of EMG activity showed a sensitivity of 66.7% (confidence interval between 50 and 84%) and a specificity of 92.9% (confidence interval between 84 and 100%) in the diagnosis of arterial ED. Owing to the high specificity of CC-EMG response to the injection of PGE1, this test is
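
    For readers unfamiliar with the diagnostic metrics quoted above, the short sketch below shows how sensitivity and specificity are computed from a 2x2 outcome table. The counts are hypothetical, chosen only so the arithmetic reproduces figures close to those reported; they are not the study's raw data.

        def sensitivity_specificity(tp, fn, tn, fp):
            # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
            return tp / (tp + fn), tn / (tn + fp)

        # Hypothetical counts (not taken from the study):
        # positive test = insufficient erection with reduced/suppressed CC-EMG activity,
        # reference standard = arterial insufficiency on penile Doppler ultrasound.
        sens, spec = sensitivity_specificity(tp=10, fn=5, tn=13, fp=1)
        print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 66.7%, 92.9%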

  7. Training and evaluation corpora for the extraction of causal relationships encoded in biological expression language (BEL)

    PubMed Central

    Madan, Sumit; Ansari, Sam; Kodamullil, Alpha T.; Karki, Reagon; Rastegar-Mojarad, Majid; Catlett, Natalie L.; Hayes, William; Szostak, Justyna; Hoeng, Julia; Peitsch, Manuel

    2016-01-01

    Success in extracting biological relationships is mainly dependent on the complexity of the task as well as the availability of high-quality training data. Here, we describe the new corpora in the systems biology modeling language BEL for training and testing biological relationship extraction systems that we prepared for the BioCreative V BEL track. BEL was designed to capture relationships not only between proteins or chemicals, but also complex events such as biological processes or disease states. A BEL nanopub is the smallest unit of information and represents a biological relationship with its provenance. In BEL relationships (called BEL statements), the entities are normalized to defined namespaces mainly derived from public repositories, such as sequence databases, MeSH or publicly available ontologies. In the BEL nanopubs, the BEL statements are associated with citation information and supportive evidence such as a text excerpt. To enable the training of extraction tools, we prepared BEL resources and made them available to the community. We selected a subset of these resources focusing on a reduced set of namespaces, namely, human and mouse genes, ChEBI chemicals, MeSH diseases and GO biological processes, as well as relationship types ‘increases’ and ‘decreases’. The published training corpus contains 11 000 BEL statements from over 6000 supportive text excerpts. For method evaluation, we selected and re-annotated two smaller subcorpora containing 100 text excerpts. For this re-annotation, the inter-annotator agreement was measured by the BEL track evaluation environment and resulted in a maximal F-score of 91.18% for full statement agreement. In addition, for a set of 100 BEL statements, we do not only provide the gold standard expert annotations, but also text excerpts pre-selected by two automated systems. Those text excerpts were evaluated and manually annotated as true or false supportive in the course of the BioCreative V BEL track task
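
    To make the structure described above concrete, here is a minimal, invented example of a BEL nanopub represented as a Python dictionary: one BEL statement with entities normalized to public namespaces, plus citation information and a supporting text excerpt. The statement content and identifiers are illustrative assumptions, not records drawn from the BioCreative V corpora.

        # Invented example of the shape of a BEL nanopub (statement + provenance).
        nanopub = {
            "bel_statement": 'p(HGNC:TNF) increases bp(GO:"apoptotic process")',
            "relationship": "increases",                       # one of the two relation types used
            "citation": {"type": "PubMed", "id": "12345678"},  # hypothetical PMID
            "evidence": "TNF treatment increased apoptosis in the cultured cells.",
        }
        print(nanopub["bel_statement"])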

  8. The Spoken Word and the Integrity of English Instruction, Study Group Paper No. 1; and the Role of the Spoken Word.

    ERIC Educational Resources Information Center

    Loban, Walter; And Others

    In response to a study group concerned with the spoken word and the integrity of English instruction, Walter Loban traces speech and its development, examines oral language proficiency, discusses one study of oral language, discusses language and social class and language and learning, and concludes by commenting on the neglect of speech instruction…

  9. Inhibition of nerve stimulation-induced vasodilatation in corpora cavernosa of the pithed rat by blockade of nitric oxide synthase.

    PubMed Central

    Finberg, J. P.; Levy, S.; Vardi, Y.

    1993-01-01

    1. The effect of inhibition of nitric oxide synthase by NG-nitro-L-arginine methyl ester (L-NAME) on nerve stimulation-induced vasodilation in corpora cavernosa was studied in the pithed rat. Corporal vasodilation was estimated by the increase in ratio (corpora cavernosal pressure/systemic blood pressure; CP/BP) following electrical stimulation of the sacral part of the spinal cord. 2. L-NAME (2, 5, 10 and 25 mg kg-1) caused an increase in BP and a dose-dependent inhibition of the rise in the CP/BP ratio following stimulation. 3. The inhibitory effect of L-NAME (25 mg kg-1) on the corporal response to spinal cord stimulation, as well as the pressor response, was partially prevented by prior administration of L- but not D-arginine (400 mg kg-1, i.v.). 4. L-NAME (20 mg kg-1, i.v.) did not inhibit the rise in corporal pressure resulting from direct intracavernosal administration of papaverine (400 micrograms over 2 min). However, this response was inhibited by 5-hydroxytryptamine (20 micrograms kg-1, i.v.). 5. The results are indicative of a role of nitric oxide (NO) in the corporal vasodilator response to erectile stimulation. PMID:7683562

  10. Reuse of termino-ontological resources and text corpora for building a multilingual domain ontology: an application to Alzheimer's disease.

    PubMed

    Dramé, Khadim; Diallo, Gayo; Delva, Fleur; Dartigues, Jean François; Mouillet, Evelyne; Salamon, Roger; Mougin, Fleur

    2014-04-01

    Ontologies are useful tools for sharing and exchanging knowledge. However ontology construction is complex and often time consuming. In this paper, we present a method for building a bilingual domain ontology from textual and termino-ontological resources intended for semantic annotation and information retrieval of textual documents. This method combines two approaches: ontology learning from texts and the reuse of existing terminological resources. It consists of four steps: (i) term extraction from domain specific corpora (in French and English) using textual analysis tools, (ii) clustering of terms into concepts organized according to the UMLS Metathesaurus, (iii) ontology enrichment through the alignment of French and English terms using parallel corpora and the integration of new concepts, (iv) refinement and validation of results by domain experts. These validated results are formalized into a domain ontology dedicated to Alzheimer's disease and related syndromes which is available online (http://lesim.isped.u-bordeaux2.fr/SemBiP/ressources/ontoAD.owl). The latter currently includes 5765 concepts linked by 7499 taxonomic relationships and 10,889 non-taxonomic relationships. Among these results, 439 concepts absent from the UMLS were created and 608 new synonymous French terms were added. The proposed method is sufficiently flexible to be applied to other domains.
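
    As a small illustration of reusing the published result (a sketch assuming the rdflib package is installed and that the OWL file is still reachable at the URL above), the snippet below loads the ontology and counts its named classes and taxonomic links; it is not part of the authors' construction pipeline.

        from rdflib import Graph
        from rdflib.namespace import OWL, RDF, RDFS

        g = Graph()
        # Load the Alzheimer's disease ontology published by the authors (RDF/XML).
        g.parse("http://lesim.isped.u-bordeaux2.fr/SemBiP/ressources/ontoAD.owl", format="xml")

        classes = set(g.subjects(RDF.type, OWL.Class))
        subclass_links = list(g.subject_objects(RDFS.subClassOf))
        print(f"{len(classes)} OWL classes, {len(subclass_links)} rdfs:subClassOf links")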

  11. The Role of Grammatical Category Information in Spoken Word Retrieval

    PubMed Central

    Duràn, Carolina Palma; Pillon, Agnesa

    2011-01-01

    We investigated the role of lexical syntactic information such as grammatical gender and category in spoken word retrieval processes by using a blocking paradigm in picture and written word naming experiments. In Experiments 1, 3, and 4, we found that the naming of target words (nouns) from pictures or written words was faster when these target words were named within a list where only words from the same grammatical category had to be produced (homogeneous category list: all nouns) than when they had to be produced within a list comprising also words from another grammatical category (heterogeneous category list: nouns and verbs). On the other hand, we detected no significant facilitation effect when the target words had to be named within a homogeneous gender list (all masculine nouns) compared to a heterogeneous gender list (both masculine and feminine nouns). In Experiment 2, using the same blocking paradigm by manipulating the semantic category of the items, we found that naming latencies were significantly slower in the semantic category homogeneous in comparison with the semantic category heterogeneous condition. Thus semantic category homogeneity caused an interference, not a facilitation effect like grammatical category homogeneity. Finally, in Experiment 5, nouns in the heterogeneous category condition had to be named just after a verb (category-switching position) or a noun (same-category position). We found a facilitation effect of category homogeneity but no significant effect of position, which showed that the effect of category homogeneity found in Experiments 1, 3, and 4 was not due to a cost of switching between grammatical categories in the heterogeneous grammatical category list. These findings supported the hypothesis that grammatical category information impacts word retrieval processes in speech production, even when words are to be produced in isolation. They are discussed within the context of extant theories of lexical production. PMID

  12. Seeing the Talker's Face Improves Free Recall of Speech for Young Adults with Normal Hearing but Not Older Adults with Hearing Loss

    ERIC Educational Resources Information Center

    Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker

    2016-01-01

    Purpose: Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Method: Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13…

  13. Balanced intervention for adolescents and adults with language impairment: a clinical framework.

    PubMed

    Fallon, Karen A; Katz, Lauren A; Carlberg, Rachel

    2015-02-01

    Providing effective intervention services for adolescents and adults who struggle with spoken and written language presents a variety of unique challenges. This article discusses the 5S Framework (skills, strategies, school, student buy-in, and stakeholders) for designing and implementing balanced spoken and written language interventions for adolescents and adults. An in-depth case illustration highlights the usefulness of the framework for targeting the language and literacy skills of adolescents and young adults. By describing and illustrating the five key components of the intervention framework, the article provides a useful clinical tool to help guide clinicians and educators who serve the needs of adolescents and adults who struggle with spoken and written language.

  14. On the Nature of Talker Variability Effects on Recall of Spoken Word Lists

    PubMed Central

    Goldinger, Stephen D.; Pisoni, David B.; Logan, John S.

    2012-01-01

    In a recent study, Martin, Mullennix, Pisoni, and Summers (1989) reported that subjects’ accuracy in recalling lists of spoken words was better for words in early list positions when the words were spoken by a single talker than when they were spoken by multiple talkers. The present study was conducted to examine the nature of these effects in further detail. Accuracy of serial-ordered recall was examined for lists of words spoken by either a single talker or by multiple talkers. Half the lists contained easily recognizable words, and half contained more difficult words, according to a combined metric of word frequency, lexical neighborhood density, and neighborhood frequency. Rate of presentation was manipulated to assess the effects of both variables on rehearsal and perceptual encoding. A strong interaction was obtained between talker variability and rate of presentation. Recall of multiple-talker lists was affected much more than single-talker lists by changes in presentation rate. At slow presentation rates, words in early serial positions produced by multiple talkers were actually recalled more accurately than words produced by a single talker. No interaction was observed for word confusability and rate of presentation. The data provide support for the proposal that talker variability affects the accuracy of recall of spoken words not only by increasing the processing demands for early perceptual encoding of the words, but also by affecting the efficiency of the rehearsal process itself. PMID:1826729

  15. Adult Evaluation of Child Language. Papers from the Michigan Linguistic Society Meeting, Vol. 1, No. 2.

    ERIC Educational Resources Information Center

    Fulton, Mary Wills

    Analysis of adult evaluation of children's linguistic output provides a basis for elaboration upon the work of McNeill (1970) and Brown (1970). When limited to the uttered words of a child paired with an utterance spoken at an earlier time, adults cannot judge the relative age of the children making those utterances; in fact, their predictions of…

  16. Adult Language/Learning Disability: Issues and Resources.

    ERIC Educational Resources Information Center

    Comstock, Renee; Kamara, Carol A.

    A language/learning disability (LLD) is a disorder that may affect the comprehension and use of spoken or written language as well as nonverbal language, such as eye contact and tone of speech in both adults and children. Most research, treatment, and support resources emphasize childhood LLD, but the problems do not disappear once a person has…

  17. Is This Enough? A Qualitative Evaluation of the Effectiveness of a Teacher-Training Course on the Use of Corpora in Language Education

    ERIC Educational Resources Information Center

    Lenko-Szymanska, Agnieszka

    2014-01-01

    The paper describes a teacher-training course on the use of corpora in language education offered to graduate students at the Institute of Applied Linguistics, University of Warsaw. It also presents the results of two questionnaires distributed to the students prior to and after the second edition of the course. The main aims of the course are: to…

  18. The development of spoken language in deaf children: explaining the unexplained variance.

    PubMed

    Musselman, C; Kircaali-Iftar, G

    1996-01-01

    Using a large existing database on children with severe and profound deafness, 10 children were identified whose level of spoken language was most above and 10 whose level was most below that expected on the basis of their hearing loss, age, and intelligence. A study of their personal characteristics, family background, and educational history identified factors associated with unusually high performance; these included earlier use of binaural ear-level aids, more highly educated mothers, auditory/verbal or auditory/oral instruction, reliance on spoken language as a method of communication, individualized instruction, integration, and structured teaching by parents. Parents of high performers also reported being highly committed to and focusing family resources on developing their child's spoken language.

  19. Influence of eye gaze on spoken word processing: an ERP study with infants.

    PubMed

    Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Friederici, Angela D

    2011-01-01

    Eye gaze is an important communicative signal, both as mutual eye contact and as referential gaze to objects. To examine whether attention to speech versus nonspeech stimuli in 4- to 5-month-olds (n=15) varies as a function of eye gaze, event-related brain potentials were used. Faces with mutual or averted gaze were presented in combination with forward- or backward-spoken words. Infants rapidly processed gaze and spoken words in combination. A late Slow Wave suggests an interaction of the 2 factors, separating backward-spoken word+direct gaze from all other conditions. An additional experiment (n=15) extended the results to referential gaze. The current findings suggest that interactions between visual and auditory cues are present early in infancy.

  20. Semantic Encoding of Spoken Sentences: Adult Aging and the Preservation of Conceptual Short-Term Memory

    ERIC Educational Resources Information Center

    Little, Deborah M.; McGrath, Lauren M.; Prentice, Kristen J.; Wingfield, Arthur

    2006-01-01

    Traditional models of human memory have postulated the need for a brief phonological or verbatim representation of verbal input as a necessary gateway to a higher level conceptual representation of the input. Potter has argued that meaningful sentences may be encoded directly in a conceptual short-term memory (CSTM) running parallel in time to…

  1. Research note: exceptional absolute pitch perception for spoken words in an able adult with autism.

    PubMed

    Heaton, Pamela; Davis, Robert E; Happé, Francesca G E

    2008-01-01

    Autism is a neurodevelopmental disorder, characterised by deficits in socialisation and communication, with repetitive and stereotyped behaviours [American Psychiatric Association (1994). Diagnostic and statistical manual for mental disorders (4th ed.). Washington, DC: APA]. Whilst intellectual and language impairment is observed in a significant proportion of diagnosed individuals [Gillberg, C., & Coleman, M. (2000). The biology of the autistic syndromes (3rd ed.). London: Mac Keith Press; Klinger, L., Dawson, G., & Renner, P. (2002). Autistic disorder. In E. Mash & R. Barkley (Eds.), Child psychopathology (2nd ed., pp. 409-454). New York: Guilford Press], the disorder is also strongly associated with the presence of highly developed, idiosyncratic, or savant skills [Heaton, P., & Wallace, G. (2004). Annotation: The savant syndrome. Journal of Child Psychology and Psychiatry, 45(5), 899-911]. We tested identification of fundamental pitch frequencies in complex tones, sine tones and words in AC, an intellectually able man with autism and absolute pitch (AP), and in a group of healthy controls with self-reported AP. The analysis showed that AC's naming of speech pitch was highly superior in comparison to controls. The results suggest that explicit access to perceptual information in speech is retained to a significantly higher degree in autism.

  2. Primary structure of a novel neuropeptide isolated from the corpora cardiaca of periodical cicadas having adipokinetic and hypertrehalosemic activities.

    PubMed

    Raina, A; Pannell, L; Kochansky, J; Jaffe, H

    1995-09-01

    A new neuropeptide hormone was isolated from the corpora cardiaca of the periodical cicadas, Magicicada species. Primary structure of the peptide as determined by a combination of automated Edman degradation after enzymatic deblocking with pyroglutamate aminopeptidase and mass spectrometry is: pGlu-Val-Asn-Phe-Ser-Pro-Ser-Trp-Gly-Asn-NH2. Synthetic peptide assayed in the green stink bug Nezara viridula caused a 112% increase in hemolymph lipids at a dose of 0.625 pmol, and a 67% increase in hemolymph carbohydrates at a dose of 2.5 pmol. Based on these results we designate this peptide, a first from order Homoptera, as Magicicada species-adipokinetic hormone (Mcsp-AKH).

  3. Are Young Children with Cochlear Implants Sensitive to the Statistics of Words in the Ambient Spoken Language?

    ERIC Educational Resources Information Center

    Guo, Ling-Yu; McGregor, Karla K.; Spencer, Linda J.

    2015-01-01

    Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation.…

  4. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  5. A Critique of Mark D. Allen's "The Preservation of Verb Subcategory Knowledge in a Spoken Language Comprehension Deficit"

    ERIC Educational Resources Information Center

    Kemmerer, David

    2008-01-01

    Allen [Allen, M. (2005). "The preservation of verb subcategory knowledge in a spoken language comprehension deficit." "Brain and Language, 95", 255-264.] reports a single patient, WBN, who, during spoken language comprehension, is still able to access some of the syntactic properties of verbs despite being unable to access some of their semantic…

  6. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    PubMed

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing potentially attributable to participants' literacy skills. Against this background, the current study took a look at the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, like in previous studies to date, were successfully able to anticipate upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words.

  7. L'Enonce Toura-Cote d'Ivoire (The Spoken Language of Toura-Ivory Coast).

    ERIC Educational Resources Information Center

    Bearth, Thomas

    The spoken language of Toura, a language spoken by nearly 20,000 inhabitants of a mountainous region situated in the north of Man, the administrative center of the West Ivory Coast, is systematically analyzed in this linguistic study. Sixteen major chapters include: (1) grammatical generalizations, (2) phonemic unities, (3) classification of…

  8. Speaking is silver, writing is golden? The role of cognitive and social factors in written versus spoken witness accounts.

    PubMed

    Sauerland, Melanie; Krix, Alana C; van Kan, Nikki; Glunz, Sarah; Sak, Annabel

    2014-08-01

    Contradictory empirical findings and theoretical accounts exist that are in favor of either a written or a spoken superiority effect. In this article, we present two experiments that put the recall modality effect in the context of eyewitness reports to another test. More specifically, we investigated the role of cognitive and social factors in the effect. In both experiments, participants watched a videotaped staged crime and then gave spoken or written accounts of the event and the people involved. In Experiment 1, 135 participants were assigned to written, spoken-videotaped, spoken-distracted, or spoken-voice recorded conditions to test for the impact of cognitive demand and social factors in the form of interviewer presence. Experiment 2 (N = 124) tested the idea that instruction comprehensiveness differentially impacts recall performance in written versus spoken accounts. While there was no evidence for a spoken superiority effect, we found some support for a written superiority effect for description quantity, but not accuracy. Furthermore, any differences found in description quantity as a function of recall modality could be traced back to participants' free reports. Following up with cued open-ended questions compensated for this effect, although at the expense of description accuracy. This suggests that current police practice of arbitrarily obtaining written or spoken accounts is mostly unproblematic.

  9. Apples and Oranges: Developmental Discontinuities in Spoken-Language Processing?

    PubMed Central

    Creel, Sarah C.; Quam, Carolyn

    2015-01-01

    Much research focuses on speech processing in infancy, sometimes generating the impression that speech-sound categories do not develop further. Yet other studies suggest substantial plasticity throughout mid-childhood. Differences between infant versus child and adult experimental methods currently obscure how language processing changes across childhood, calling for approaches that span development. PMID:26456261

  10. Directory of Spoken-Voice Audio-Cassettes, 1972.

    ERIC Educational Resources Information Center

    McKee, Gerald, Ed.

    Most listings in this catalog, which draws on many sources of production and is not a guide to one company's output, are for programs of college or adult level interest, with the exception of the "Careers" listings, geared toward high school students. The catalog also has lists of producers of children's cassettes and those designed for school…

  11. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    NASA Astrophysics Data System (ADS)

    Feenaughty, Lynda

    judged each speech sample using the perceptual construct of Speech Severity using a visual analog scale. Additional measures obtained to describe participants included the Sentence Intelligibility Test (SIT), the 10-item Communication Participation Item Bank (CPIB), and standard biopsychosocial measures of depression (Beck Depression Inventory-Fast Screen; BDI-FS), fatigue (Fatigue Severity Scale; FSS), and overall disease severity (Expanded Disability Status Scale; EDSS). Healthy controls completed all measures, with the exception of the CPIB and EDSS. All data were analyzed using standard, descriptive and parametric statistics. For the MSCI group, the relationship between neuropsychological test scores and speech-language variables were explored for each speech task using Pearson correlations. The relationship between neuropsychological test scores and Speech Severity also was explored. Results and Discussion: Topic familiarity for descriptive discourse did not strongly influence speech production or perceptual variables; however, results indicated predicted task-related differences for some spoken language measures. With the exception of the MSCI group, all speaker groups produced the same or slower global speech timing (i.e., speech and articulatory rates), more silent and filled pauses, more grammatical and longer silent pause durations in spontaneous discourse compared to reading aloud. Results revealed no appreciable task differences for linguistic complexity measures. Results indicated group differences for speech rate. The MSCI group produced significantly faster speech rates compared to the MSDYS group. Both the MSDYS and the MSCI groups were judged to have significantly poorer perceived Speech Severity compared to typically aging adults. The Task x Group interaction was only significant for the number of silent pauses. The MSDYS group produced fewer silent pauses in spontaneous speech and more silent pauses in the reading task compared to other groups. Finally
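
    The global speech-timing measures mentioned above (speech rate, silent pauses) can be illustrated with a toy sketch over hypothetical word onset/offset times; the 250 ms pause criterion and all timings are invented and do not reflect the study's measurement protocol.

        # Hypothetical word intervals in seconds: (onset, offset) for each spoken word.
        words = [(0.00, 0.35), (0.40, 0.80), (1.30, 1.70), (1.75, 2.10), (2.60, 3.00)]

        total_time = words[-1][1] - words[0][0]        # sample duration, pauses included
        speech_rate = len(words) / total_time          # words per second
        gaps = [nxt[0] - prev[1] for prev, nxt in zip(words, words[1:])]
        silent_pauses = [gap for gap in gaps if gap >= 0.25]   # assumed 250 ms criterion

        print(f"speech rate = {speech_rate:.2f} words/s, "
              f"silent pauses = {len(silent_pauses)}, "
              f"mean pause duration = {sum(silent_pauses) / len(silent_pauses):.2f} s")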

  12. Development of Lexical-Semantic Language System: N400 Priming Effect for Spoken Words in 18- and 24-Month Old Children

    ERIC Educational Resources Information Center

    Rama, Pia; Sirri, Louah; Serres, Josette

    2013-01-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related…

  13. Perception and Lateralization of Spoken Emotion by Youths with High-Functioning Forms of Autism

    ERIC Educational Resources Information Center

    Baker, Kimberly F.; Montgomery, Allen A.; Abramson, Ruth

    2010-01-01

    The perception and the cerebral lateralization of spoken emotions were investigated in children and adolescents with high-functioning forms of autism (HFFA), and age-matched typically developing controls (TDC). A dichotic listening task using nonsense passages was used to investigate the recognition of four emotions: happiness, sadness, anger, and…

  14. Using Language Sample Analysis to Assess Spoken Language Production in Adolescents

    ERIC Educational Resources Information Center

    Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann

    2016-01-01

    Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…
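
    As a toy illustration of the sort of summary measure language sample analysis yields (a word-based simplification of mean length of utterance, not SALT's actual morpheme-level algorithm or output), consider the following sketch; the mini-transcript is invented.

        # Invented mini-transcript: one spoken utterance per string.
        utterances = [
            "we went to the beach yesterday",
            "it was really fun",
            "my brother built a huge sandcastle",
        ]

        # Mean length of utterance (MLU) in words, a crude proxy for the
        # morpheme-based MLU that language sample analysis software reports.
        mlu_words = sum(len(u.split()) for u in utterances) / len(utterances)
        print(f"MLU (words): {mlu_words:.2f}")   # 6 + 4 + 6 = 16 words over 3 utterances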

  15. Improving the Grammatical Accuracy of the Spoken English of Indonesian International Kindergarten Students

    ERIC Educational Resources Information Center

    Gozali, Imelda; Harjanto, Ignatius

    2014-01-01

    The need to improve the spoken English of kindergarten students in an international preschool in Surabaya prompted this Classroom Action Research (CAR). It involved the implementation of Form-Focused Instruction (FFI) strategy coupled with Corrective Feedback (CF) in Grammar lessons. Four grammar topics were selected, namely Regular Plural form,…

  16. Children's Verbal Working Memory: Role of Processing Complexity in Predicting Spoken Sentence Comprehension

    ERIC Educational Resources Information Center

    Magimairaj, Beula M.; Montgomery, James W.

    2012-01-01

    Purpose: This study investigated the role of processing complexity of verbal working memory tasks in predicting spoken sentence comprehension in typically developing children. Of interest was whether simple and more complex working memory tasks have similar or different power in predicting sentence comprehension. Method: Sixty-five children (6- to…

  17. Differential Processing of Thematic and Categorical Conceptual Relations in Spoken Word Production

    ERIC Educational Resources Information Center

    de Zubicaray, Greig I.; Hansen, Samuel; McMahon, Katie L.

    2013-01-01

    Studies of semantic context effects in spoken word production have typically distinguished between categorical (or taxonomic) and associative relations. However, associates tend to confound semantic features or morphological representations, such as whole-part relations and compounds (e.g., BOAT-anchor, BEE-hive). Using a picture-word interference…

  18. Role of Working Memory in Children's Understanding Spoken Narrative: A Preliminary Investigation

    ERIC Educational Resources Information Center

    Montgomery, James W.; Polunenko, Anzhela; Marinellie, Sally A.

    2009-01-01

    The role of phonological short-term memory (PSTM), attentional resource capacity/allocation, and processing speed on children's spoken narrative comprehension was investigated. Sixty-seven children (6-11 years) completed a digit span task (PSTM), concurrent verbal processing and storage (CPS) task (resource capacity/allocation), auditory-visual…

  19. Mental Imagery as Revealed by Eye Movements and Spoken Predicates: A Test of Neurolinguistic Programming.

    ERIC Educational Resources Information Center

    Elich, Matthew; And Others

    1985-01-01

    Tested Bandler and Grinder's proposal that eye movement direction and spoken predicates are indicative of sensory modality of imagery. Subjects reported images in the three modes, but no relation between imagery and eye movements or predicates was found. Visual images were most vivid and often reported. Most subjects rated themselves as visual,…

  20. KANNADA--A CULTURAL INTRODUCTION TO THE SPOKEN STYLES OF THE LANGUAGE.

    ERIC Educational Resources Information Center

    KRISHNAMURTHI, M.G.; MCCORMACK, WILLIAM

    THE TWENTY GRADED UNITS IN THIS TEXT CONSTITUTE AN INTRODUCTION TO BOTH INFORMAL AND FORMAL SPOKEN KANNADA. THE FIRST TWO UNITS PRESENT THE KANNADA MATERIAL IN PHONETIC TRANSCRIPTION ONLY, WITH KANNADA SCRIPT GRADUALLY INTRODUCED FROM UNIT III ON. A TYPICAL LESSON-UNIT INCLUDES--(1) A DIALOG IN PHONETIC TRANSCRIPTION AND ENGLISH TRANSLATION, (2)…

  1. Cross-Language Perception of Cantonese Vowels Spoken by Native and Non-Native Speakers

    ERIC Educational Resources Information Center

    So, Connie K.; Attina, Virginie

    2014-01-01

    This study examined the effect of native language background on listeners' perception of native and non-native vowels spoken by native (Hong Kong Cantonese) and non-native (Mandarin and Australian English) speakers. They completed discrimination and identification tasks, with and without visual cues, in clear and noisy conditions. Results…

  2. Comprehension of Spoken, Written and Signed Sentences in Childhood Language Disorders.

    ERIC Educational Resources Information Center

    Bishop, D. V. M.

    1982-01-01

    Nine children suffering from Landau-Kleffner (L-K) syndrome and 25 children with developmental expressive disorders were tested for comprehension of English grammatical structures in spoken, written, and signed language modalities. L-K children demonstrated comprehension problems in all three language modalities and tended to treat language as…

  3. A Transactional Model of Spoken Vocabulary Variation in Toddlers with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Woynaroski, Tiffany; Yoder, Paul J.; Fey, Marc E.; Warren, Steven F.

    2014-01-01

    Purpose: The authors examined (a) whether dose frequency of milieu communication teaching (MCT) affects children's canonical syllabic communication and (b) whether the relation between early canonical syllabic communication and later spoken vocabulary is mediated by parental linguistic mapping in children with intellectual disabilities (ID).…

  4. The Temporal Dynamics of Ambiguity Resolution: Evidence from Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Dahan, Delphine; Gaskell, M. Gareth

    2007-01-01

    Two experiments examined the dynamics of lexical activation in spoken-word recognition. In both, the key materials were pairs of onset-matched picturable nouns varying in frequency. Pictures associated with these words, plus two distractor pictures, were displayed. A gating task, in which participants identified the picture associated with…

  5. An Investigation of Spoken Brazilian Portuguese: Part I, Technical Report. Final Report.

    ERIC Educational Resources Information Center

    Hutchins, John A.

    This final report of a study which developed a working corpus of spoken and written Portuguese from which syntactical studies could be conducted includes computer-processed data on which the findings and analysis are based. A data base, obtained by taping some 487 conversations between Brazil and the United States, serves as the corpus from which…

  6. Some Sources of Error in the Transcription of Real Time in Spoken Discourse.

    ERIC Educational Resources Information Center

    O'Connell, Daniel C.; Kowal, Sabine

    1990-01-01

    Discusses such errors in transcribing real time in spoken discourse as inconsistent use of transcriptional conventions; use of transcriptional symbols with multiple meanings; measurement problems; some cross-purposes of real-time transcription; neglect of time between onset and offset of speech and silence transcription; and transcriptions that…

  7. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    ERIC Educational Resources Information Center

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  8. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  9. Contrastive Analysis of Turkish and English in Turkish EFL Learners' Spoken Discourse

    ERIC Educational Resources Information Center

    Yildiz, Mustafa

    2016-01-01

    The present study aimed at finding whether L1 Turkish caused interference errors in Turkish EFL learners' spoken English discourse. Whether English proficiency level had any effect on the number of errors learners made was further investigated. The participants were given the chance to choose one of the two alternative topics to speak about. The…

  10. The Effect of the Temporal Structure of Spoken Words on Paired-Associate Learning

    ERIC Educational Resources Information Center

    Creel, Sarah C.; Dahan, Delphine

    2010-01-01

    In a series of experiments, participants learned to associate black-and-white shapes with nonsense spoken labels (e.g., "joop"). When tested on their recognition memory, participants falsely recognized as correct a shape paired with a label that began with the same sounds as the shape's original label (onset-overlapping lure; e.g., "joob") more…

  11. Infant perceptual development for faces and spoken words: an integrated approach.

    PubMed

    Watson, Tamara L; Robbins, Rachel A; Best, Catherine T

    2014-11-01

    There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception.

  12. The Influence of Recent Scene Events on Spoken Comprehension: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Knoeferle, Pia; Crocker, Matthew W.

    2007-01-01

    Evidence from recent experiments that monitored attention in clipart scenes during spoken comprehension suggests that people preferably rely on non-stereotypical depicted events over stereotypical thematic knowledge for incremental interpretation. "The Coordinated Interplay Account [Knoeferle, P., & Crocker, M. W. (2006). "The coordinated…

  13. The Representation of Spoken Language in Early Reading Books: Problems for L2 Learner Readers.

    ERIC Educational Resources Information Center

    Wallace, Catherine

    Some of the difficulties faced by second language learners who are continuing to acquire English at the same time as they start to read simple extended English texts are illustrated. Specific focus is on the question of how writers of early reading material can best help such learners to understand the relationship between spoken and written…

  14. Spoken-Word Processing in Aphasia: Effects of Item Overlap and Item Repetition

    ERIC Educational Resources Information Center

    Janse, Esther

    2008-01-01

    Two studies were carried out to investigate the effects of presentation of primes showing partial (word-initial) or full overlap on processing of spoken target words. The first study investigated whether time compression would interfere with lexical processing so as to elicit aphasic-like performance in non-brain-damaged subjects. The second study…

  15. Student Factors in the Acquisition of Modern Spoken Japanese by North American and European Missionaries.

    ERIC Educational Resources Information Center

    Jacobsen, Morris Bernard

    This study tested hypotheses on the relationship between achievement of proficiency in spoken Japanese and the variables of ease of adjustment to life in Japan; effects of childhood multilingualism, musical background, and previous level of formal education; and deliberately delaying the introduction of kanji (Chinese ideographs) into intensive…

  16. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2012-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an event-related potential (ERP) norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a…

  17. Vestiges of "Nous" and the 1st Person Plural Verb in Informal Spoken French.

    ERIC Educational Resources Information Center

    Coveney, Aidan

    2000-01-01

    Aims to find the extent to which subject clitic "nous" and 4th person verbs in French are used in a corpus of informal spoken language and to identify factors that may account for the productive use of "nous" + 4p verb. (Author/VWL)

  18. Parent Telegraphic Speech Use and Spoken Language in Preschoolers with ASD

    ERIC Educational Resources Information Center

    Venker, Courtney E.; Bolt, Daniel M.; Meyer, Allison; Sindberg, Heidi; Weismer, Susan Ellis; Tager-Flusberg, Helen

    2015-01-01

    Purpose: There is considerable controversy regarding whether to use telegraphic or grammatical input when speaking to young children with language delays, including children with autism spectrum disorder (ASD). This study examined telegraphic speech use in parents of preschoolers with ASD and associations with children's spoken language 1 year…

  19. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    ERIC Educational Resources Information Center

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  20. Italians Use Abstract Knowledge about Lexical Stress during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Sulpizio, Simone; McQueen, James M.

    2012-01-01

    In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress.…

  1. A Spoken Word Count (Children--Ages 5, 6 and 7).

    ERIC Educational Resources Information Center

    Wepman, Joseph M.; Hass, Wilbur

    Relatively little research has been done on the quantitative characteristics of children's word usage. This spoken count was undertaken to investigate those aspects of word usage and frequency which could cast light on lexical processes in grammar and verbal development in children. Three groups of 30 children each (boys and girls) from…

  2. AI-Based Chatterbots and Spoken English Teaching: A Critical Analysis

    ERIC Educational Resources Information Center

    Sha, Guoquan

    2009-01-01

    The aim of various approaches implemented, whether the classical "three Ps" (presentation, practice, and production) or communicative language teaching (CLT), is to achieve communicative competence. Although a lot of software developed for teaching spoken English is dressed up to raise interaction, its methodology is largely rooted in tradition.…

  3. Do Phonological Constraints on the Spoken Word Affect Visual Lexical Decision?

    ERIC Educational Resources Information Center

    Lee, Yang; Moreno, Miguel A.; Carello, Claudia; Turvey, M. T.

    2013-01-01

    Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language's writing system and the assimilation of phonemes according to the language's constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes…

  4. Are Phonological Representations of Printed and Spoken Language Isomorphic? Evidence from the Restrictions on Unattested Onsets

    ERIC Educational Resources Information Center

    Berent, Iris

    2008-01-01

    Are the phonological representations of printed and spoken words isomorphic? This question is addressed by investigating the restrictions on onsets. Cross-linguistic research suggests that onsets of rising sonority are preferred to sonority plateaus, which, in turn, are preferred to sonority falls (e.g., bnif, bdif, lbif). Of interest is whether…

  5. Parental Reports of Spoken Language Skills in Children with Down Syndrome.

    ERIC Educational Resources Information Center

    Berglund, Eva; Eriksson, Marten; Johansson, Irene

    2001-01-01

    Spoken language in 330 children with Down syndrome (ages 1-5) and 336 normally developing children (ages 1-2) was compared. Growth trends, individual variation, sex differences, and performance on vocabulary, pragmatic, and grammar scales as well as maximum length of utterance were explored. Three- and four-year-old Down syndrome children…

  6. Word Order in Spoken German: Syntactic Right-Expansions as an Interactionally Constructed Phenomenon

    ERIC Educational Resources Information Center

    Schoenfeldt, Juliane

    2009-01-01

    In real time interaction, the ordering of words is one of the resources that participants in talk rely on in the negotiation of shared meaning. This dissertation investigates the emergence of syntactic right-expansions in spoken German as a systematic resource in the organization of talk-in-interaction. Employing the methodology of conversation…

  7. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    ERIC Educational Resources Information Center

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  8. Chunk Learning and the Development of Spoken Discourse in a Japanese as a Foreign Language Classroom

    ERIC Educational Resources Information Center

    Taguchi, Naoko

    2007-01-01

    This study examined the development of spoken discourse among L2 learners of Japanese who received extensive practice on grammatical chunks. Participants in this study were 22 college students enrolled in an elementary Japanese course. They received instruction on a set of grammatical chunks in class through communicative drills and the…

  9. Using Unscripted Spoken Texts in the Teaching of Second Language Listening

    ERIC Educational Resources Information Center

    Wagner, Elvis

    2014-01-01

    Most spoken texts that are used in second language (L2) listening classroom activities are scripted texts, where the text is written, revised, polished, and then read aloud with artificially clear enunciation and slow rate of speech. This article explores the field's overreliance on these scripted texts, at the expense of including unscripted…

  10. Phonological Neighborhood Effects in Spoken Word Production: An fMRI Study

    ERIC Educational Resources Information Center

    Peramunage, Dasun; Blumstein, Sheila E.; Myers, Emily B.; Goldrick, Matthew; Baese-Berk, Melissa

    2011-01-01

    The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words, which either had a voiced minimal pair (MP) neighbor (e.g., cape) or lacked a minimal pair (NMP) neighbor (e.g., cake). The voiced neighbor never appeared in the…

  11. Learning and Consolidation of New Spoken Words in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Henderson, Lisa; Powell, Anna; Gaskell, M. Gareth; Norbury, Courtenay

    2014-01-01

    Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words…

  12. Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2011-01-01

    We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when…

  13. Research on Spoken Language Processing. Progress Report No. 21 (1996-1997).

    ERIC Educational Resources Information Center

    Pisoni, David B.

    This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…

  14. Individual Differences in Inhibitory Control Relate to Bilingual Spoken Word Processing

    ERIC Educational Resources Information Center

    Mercier, Julie; Pivneva, Irina; Titone, Debra

    2014-01-01

    We investigated whether individual differences in inhibitory control relate to bilingual spoken word recognition. While their eye movements were monitored, native English and native French English-French bilinguals listened to English words (e.g., "field") and looked at pictures corresponding to the target, a within-language competitor…

  15. Effects of Sensory Information and Processing Time in Spoken-Word Recognition.

    ERIC Educational Resources Information Center

    Zwitserlood, Pienie; Schriefers, Herbert

    1995-01-01

    Current models of spoken-word recognition describe access to lexical representations in terms of activation and decay. This research investigated an important aspect of activation: the impact of processing time. The results showed a separable impact of time and signal on the activational state of lexical elements. (34 references) (Author/CK)

  16. Effects of Tasks on Spoken Interaction and Motivation in English Language Learners

    ERIC Educational Resources Information Center

    Carrero Pérez, Nubia Patricia

    2016-01-01

    Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL). It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction…

  17. Why Dose Frequency Affects Spoken Vocabulary in Preschoolers with Down Syndrome

    ERIC Educational Resources Information Center

    Yoder, Paul J.; Woynaroski, Tiffany; Fey, Marc E.; Warren, Steven F.; Gardner, Elizabeth

    2015-01-01

    In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only…

  18. Using Key Part-of-Speech Analysis to Examine Spoken Discourse by Taiwanese EFL Learners

    ERIC Educational Resources Information Center

    Lin, Yen-Liang

    2015-01-01

    This study reports on a corpus analysis of samples of spoken discourse between a group of British and Taiwanese adolescents, with the aim of exploring the statistically significant differences in the use of grammatical categories between the two groups of participants. The key word method extended to a part-of-speech level using the web-based…

  19. Intentional and Reactive Inhibition during Spoken-Word Stroop Task Performance in People with Aphasia

    ERIC Educational Resources Information Center

    Pompon, Rebecca Hunting; McNeil, Malcolm R.; Spencer, Kristie A.; Kendall, Diane L.

    2015-01-01

    Purpose: The integrity of selective attention in people with aphasia (PWA) is currently unknown. Selective attention is essential for everyday communication, and inhibition is an important part of selective attention. This study explored components of inhibition--both intentional and reactive inhibition--during spoken-word production in PWA and in…

  20. The Roles of Tonal and Segmental Information in Mandarin Spoken Word Recognition: An Eyetracking Study

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2010-01-01

    We used eyetracking to examine how tonal versus segmental information influence spoken word recognition in Mandarin Chinese. Participants heard an auditory word and were required to identify its corresponding picture from an array that included the target item ("chuang2" "bed"), a phonological competitor (segmental: chuang1 "window"; cohort:…

  1. A Spoken Access Approach for Chinese Text and Speech Information Retrieval.

    ERIC Educational Resources Information Center

    Chien, Lee-Feng; Wang, Hsin-Min; Bai, Bo-Ren; Lin, Sun-Chein

    2000-01-01

    Presents an efficient spoken-access approach for both Chinese text and Mandarin speech information retrieval. Highlights include human-computer interaction via voice input, speech query recognition at the syllable level, automatic term suggestion, relevance feedback techniques, and experiments that show an improvement in the effectiveness of…

  2. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    ERIC Educational Resources Information Center

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  3. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    ERIC Educational Resources Information Center

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…

  4. How Are Pronunciation Variants of Spoken Words Recognized? A Test of Generalization to Newly Learned Words

    ERIC Educational Resources Information Center

    Pitt, Mark A.

    2009-01-01

    One account of how pronunciation variants of spoken words (center -> "senner" or "sennah") are recognized is that sublexical processes use information about variation in the same phonological environments to recover the intended segments [Gaskell, G., & Marslen-Wilson, W. D. (1998). Mechanisms of phonological inference in speech perception.…

  5. The role of planum temporale in processing accent variation in spoken language comprehension.

    PubMed

    Adank, Patti; Noordzij, Matthijs L; Hagoort, Peter

    2012-02-01

    A repetition-suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation (speaker and accent) during spoken sentence comprehension. Recordings were made for two speakers and two accents: Standard Dutch and a novel accent of Dutch. Each speaker produced sentences in both accents. Participants listened to two sentences presented in quick succession while their haemodynamic responses were recorded in an MR scanner. The first sentence was spoken in Standard Dutch; the second was spoken by the same or a different speaker and produced in Standard Dutch or in the artificial accent. This design made it possible to identify neural responses to a switch in speaker and accent independently. A switch in accent was associated with activations in predominantly left-lateralized areas, including posterior temporal regions (superior temporal gyrus, planum temporale (PT), and supramarginal gyrus) as well as frontal regions, including the left pars opercularis of the inferior frontal gyrus (IFG). A switch in speaker recruited a predominantly right-lateralized network, including the middle frontal gyrus and precuneus. It is concluded that posterior temporal areas, including PT, and frontal areas, including IFG, are involved in processing accent variation in spoken sentence comprehension.

  6. Authentic ESL Spoken Materials: Soap Opera and Sitcom versus Natural Conversation

    ERIC Educational Resources Information Center

    Al-Surmi, Mansoor Ali

    2012-01-01

    TV shows, especially soap operas and sitcoms, are usually considered by ESL practitioners as a source of authentic spoken conversational materials presumably because they reflect the linguistic features of natural conversation. However, practitioners might be faced with the dilemma of how to evaluate whether such conversational materials reflect…

  7. Infant perceptual development for faces and spoken words: An integrated approach

    PubMed Central

    Watson, Tamara L; Robbins, Rachel A; Best, Catherine T

    2014-01-01

    There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626

  8. The Debate over Literary Tamil versus Standard Spoken Tamil: What Do Teachers Say?

    ERIC Educational Resources Information Center

    Saravanan, Vanithamani; Lakshmi, Seetha; Caleon, Imelda S.

    2009-01-01

    This study aims to determine the attitudes toward Standard Spoken Tamil (SST) and Literary Tamil (LT) of 46 Tamil teachers in Singapore. The teachers' attitudes were used as an indicator of the acceptance or nonacceptance of SST as a viable option in the teaching of Tamil in the classroom, in which the focus has been largely on LT. The…

  9. The Evolution of Iranian French Learners' Spoken Interlanguage from a Cognitive Point of View

    ERIC Educational Resources Information Center

    Mehrabi, Marzieh; Rahmatian, Rouhollah; Safa, Parivash; Armiun, Novid

    2014-01-01

    This paper analyzes the spoken corpus of thirty Iranian learners of French at four levels (A1, A2, B1 and B2). The data were collected in a pseudo-longitudinal manner in semi-directed interviews with half closed and open questions to analyze the learners' syntactic errors (omission, addition, substitution and displacement). The most frequent…

  10. Horizontal Flow of Semantic and Phonological Information in Chinese Spoken Sentence Production

    ERIC Educational Resources Information Center

    Yang, Jin-Chen; Yang, Yu-Fang

    2008-01-01

    A variant of the picture--word interference paradigm was used in three experiments to investigate the horizontal information flow of semantic and phonological information between nouns in spoken Mandarin Chinese sentences. Experiment 1 demonstrated that there is a semantic interference effect when the word in the second phrase (N3) and the first…

  11. The Contribution of the Inferior Parietal Cortex to Spoken Language Production

    ERIC Educational Resources Information Center

    Geranmayeh, Fatemeh; Brownsett, Sonia L. E.; Leech, Robert; Beckmann, Christian F.; Woodhead, Zoe; Wise, Richard J. S.

    2012-01-01

    This functional MRI study investigated the involvement of the left inferior parietal cortex (IPC) in spoken language production (Speech). Its role has been apparent in some studies but not others, and is not convincingly supported by clinical studies as they rarely include cases with lesions confined to the parietal lobe. We compared Speech with…

  12. Interference Effects on the Recall of Pictures, Printed Words, and Spoken Words.

    ERIC Educational Resources Information Center

    Burton, John K.; Bruning, Roger H.

    1982-01-01

    Nouns were presented in triads as pictures, printed words, or spoken words and followed by various types of interference. Measures of short- and long-term memory were obtained. In short-term memory, pictorial superiority occurred with acoustic, and visual and acoustic, but not visual interference. Long-term memory showed superior recall for…

  13. Spoken Grammar Practice and Feedback in an ASR-Based CALL System

    ERIC Educational Resources Information Center

    de Vries, Bart Penning; Cucchiarini, Catia; Bodnar, Stephen; Strik, Helmer; van Hout, Roeland

    2015-01-01

    Speaking practice is important for learners of a second language. Computer assisted language learning (CALL) systems can provide attractive opportunities for speaking practice when combined with automatic speech recognition (ASR) technology. In this paper, we present a CALL system that offers spoken practice of word order, an important aspect of…

  14. Quantifying the "Degree of Linguistic Demand" in Spoken Intelligence Test Directions

    ERIC Educational Resources Information Center

    Cormier, Damien C.; McGrew, Kevin S.; Evans, Jeffrey J.

    2011-01-01

    The linguistic demand of spoken instructions on individually administered norm-referenced psychological and educational tests is of concern when examining individuals who have varying levels of language processing ability or varying cultural backgrounds. The authors present a new method for analyzing the level of verbosity, complexity, and total…

  15. A Comparison of Students' Explanations Derived from Spoken and Written Methods of Questioning and Answering.

    ERIC Educational Resources Information Center

    Seddon, G. M.; Pedrosa, M. A.

    1988-01-01

    Examined whether and how the quality of students' explanations of chemical phenomena was affected by changing the method of giving the question and answer between the spoken and written formats. Concluded that there was no difference between the performance of students using any of these combinations of formats. (Author/YP)

  16. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    ERIC Educational Resources Information Center

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  17. Gesture in Multiparty Interaction: A Study of Embodied Discourse in Spoken English and American Sign Language

    ERIC Educational Resources Information Center

    Shaw, Emily P.

    2013-01-01

    This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…

  18. The Spoken Language of Teachers and Pupils in the Education of Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Huntington, Alan; Watton, Faval

    1986-01-01

    Spoken language of 24 teachers and 131 hearing-impaired students (6-, 10-, and 14-year levels) was analyzed for sentence length and complexity. Results revealed that the oral-alone (OA) teachers in OA institutions created richer language environments and helped children display relatively enhanced oral linguistic growth compared to laissez-faire…

  19. Discourse Context and the Recognition of Reduced and Canonical Spoken Words

    ERIC Educational Resources Information Center

    Brouwer, Susanne; Mitterer, Holger; Huettig, Falk

    2013-01-01

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., "puter") more than the recognition of canonical pronunciations of spoken words (e.g., "computer"). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words.…

  20. Bidialectal African American Adolescents' Beliefs about Spoken Language Expectations in English Classrooms

    ERIC Educational Resources Information Center

    Godley, Amanda; Escher, Allison

    2012-01-01

    This article describes the perspectives of bidialectal African American adolescents--adolescents who speak both African American Vernacular English (AAVE) and Standard English--on spoken language expectations in their English classes. Previous research has demonstrated that many teachers hold negative views of AAVE, but existing scholarship has…

  1. Getting It Right? Using Aphasic Naming Errors to Evaluate Theoretical Models of Spoken Word Recognition.

    ERIC Educational Resources Information Center

    Nickels, Lyndsey

    1995-01-01

    Different models of spoken word production make different predictions regarding the extent of effects of certain word properties on the output of that model. This article examines these predictions with regard to the effect of these variables on the production of semantic and phonological errors by aphasic subjects. (60 references) (Author/CK)

  2. Comparing Spoken Language Treatments for Minimally Verbal Preschoolers with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna

    2013-01-01

    Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in…

  3. Professional Training in Listening and Spoken Language--A Canadian Perspective

    ERIC Educational Resources Information Center

    Fitzpatrick, Elizabeth

    2010-01-01

    Several factors undoubtedly influenced the development of listening and spoken language options for children with hearing loss in Canada. The concept of providing auditory-based rehabilitation was popularized in Canada in the 1960s through the work of Drs. Daniel Ling and Agnes Ling in Montreal. The Lings founded the McGill University Project for…

  4. User-Centred Design for Chinese-Oriented Spoken English Learning System

    ERIC Educational Resources Information Center

    Yu, Ping; Pan, Yingxin; Li, Chen; Zhang, Zengxiu; Shi, Qin; Chu, Wenpei; Liu, Mingzhuo; Zhu, Zhiting

    2016-01-01

    Oral production is an important part of English learning. The lack of a language environment with efficient instruction and feedback is a major obstacle to non-native speakers' improvement of their spoken English. A computer-assisted language learning system can provide many potential benefits to language learners. It allows adequate instructions and instant…

  5. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    ERIC Educational Resources Information Center

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  6. Effects of Negotiated Interaction on Mongolian-Nationality EFL Learners' Spoken Output

    ERIC Educational Resources Information Center

    Li, Xueping

    2012-01-01

    The present study examines the effect of negotiated interaction on Mongolian-nationality EFL learners' spoken production, focusing on the teacher-learner interaction in a story-telling task. The study supports the hypothesis that interaction plays a facilitating role in language development for learners. Quantitative analysis shows that Mongolian…

  7. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    ERIC Educational Resources Information Center

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  8. Teaching Spoken Discourse Markers Explicitly: A Comparison of III and PPP

    ERIC Educational Resources Information Center

    Jones, Christian; Carter, Ronald

    2014-01-01

    This article reports on mixed methods classroom research carried out at a British university. The study investigates the effectiveness of two different explicit teaching frameworks, Illustration--Interaction--Induction (III) and Present--Practice--Produce (PPP) used to teach the same spoken discourse markers (DMs) to two different groups of…

  9. Feature Statistics Modulate the Activation of Meaning during Spoken Word Processing

    ERIC Educational Resources Information Center

    Devereux, Barry J.; Taylor, Kirsten I.; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K.

    2016-01-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in ("distinctiveness/sharedness") and likelihood of co-occurrence ("correlational…

  10. Different patterns of spoken and written word comprehension deficit in aphasic stroke patients.

    PubMed

    Crutch, Sebastian J; Warrington, Elizabeth K

    2011-09-01

    This study presents neuropsychological evidence for differences in the semantic representations underpinning spoken and written word comprehension. Potential modality-based discrepancies in the semantic system were examined by testing whether spoken word (auditory-verbal input) and written word (visual-verbal input) comprehension exhibited the same effect profile on variables typically used to distinguish so-called access and storage disorders (e.g., response consistency, sensitivity to item frequency). The study was based on the premise that damage to a common set of semantic representations should have an equivalent impact upon comprehension performance irrespective of input modality, whereas damage to partially dissociable semantic representations may give rise to different qualities of deficit (access/storage) in the comprehension of stimuli presented in different input modalities (spoken/written). The study involved two patients with global aphasia following left middle cerebral artery stroke (F.B.I. and H.O.P.). The two patients showed matched performance on conventional tests of single word comprehension with clear evidence of semantic impairment for stimuli presented in both the spoken and written input modalities. However, in H.O.P., spoken and written word comprehension was affected in the same way by variations in stimulus category, frequency, and multiple stimulus presentations, whilst in F.B.I., there were clear differences between input modalities with all three variables. More specifically, F.B.I.'s written word comprehension was significantly affected by category (living > nonliving) and frequency (high > low) but not multiple presentations (single = multiple), more consistent with degradation of stored representations (storage deficit). By contrast, his spoken word comprehension was unaffected by category (living = nonliving) and frequency (high = low) but was affected by multiple presentations (single > multiple; serial

  11. Word identification and eye fixation locations in visual and visual-plus-auditory presentations of spoken sentences.

    PubMed

    Lansing, Charissa R; McConkie, George W

    2003-05-01

    In this study, we investigated where people look on talkers' faces as they try to understand what is being said. Sixteen young adults with normal hearing and demonstrated average speechreading proficiency were evaluated under two modality presentation conditions: vision only versus vision plus low-intensity sound. They were scored for the number of words correctly identified from 80 unconnected sentences spoken by two talkers. The results showed two competing tendencies: an eye primacy effect that draws the gaze to the talker's eyes during silence and an information source attraction effect that draws the gaze to the talker's mouth during speech periods. Dynamic shifts occur between eyes and mouth prior to speech onset and following the offset of speech, and saccades tend to be suppressed during speech periods. The degree to which the gaze is drawn to the mouth during speech and the degree to which saccadic activity is suppressed depend on the difficulty of the speech identification task. Under the most difficult modality presentation condition, vision only, accuracy was related to average sentence difficulty and individual proficiency in visual speech perception, but not to the proportion of gaze time directed toward the talker's mouth or toward other parts of the talker's face.

  12. "We communicated that way for a reason": language practices and language ideologies among hearing adults whose parents are deaf.

    PubMed

    Pizer, Ginger; Walters, Keith; Meier, Richard P

    2013-01-01

    Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing communities. Despite this variation, the interviewees' moral judgments of their own and others' communicative behavior suggest that these adults share a language ideology concerning the obligation of all family members to expend effort to overcome potential communication barriers. To our knowledge, such a language ideology is not similarly pervasive among spoken-language bilingual families, raising the question of whether there is something unique about family bimodal bilingualism that imposes different rights and responsibilities on family members than spoken-language family bilingualism does. This ideology unites an otherwise diverse group of interviewees, where each one preemptively denied being a "typical CODA [child of deaf adults]."

  13. Self-organizing semantic maps and its application to word alignment in Japanese-Chinese parallel corpora.

    PubMed

    Ma, Qing; Kanzaki, Kyoko; Zhang, Yujie; Murata, Masaki; Isahara, Hitoshi

    2004-01-01

    This paper presents a method involving self-organizing monolingual semantic maps that are visible and continuous representations where Chinese or Japanese words with similar meanings are placed at the same or neighboring points so that the distance between them represents the semantic similarity. We used the self-organizing map, SOM, as a self-organizing device. The words to be self-organized are defined by sets of co-occurring words collected from Chinese or Japanese newspapers, according to their grammatical relationships. The words are then coded into vectors to be forwarded to the SOM, taking into account the semantic correlation between them, which is established using a form of word-similarity computation. The self-organized monolingual semantic maps are assessed by numerical evaluations of accuracy, recall, and the F-measure, as well as by intuition, and by the comparisons with a clustering method and with multivariate statistical analysis. This paper further discusses the possibility that the method we propose can be extended to constructing Japanese-Chinese bilingual semantic maps, with the aim of providing a semantics-based approach to word alignment in Japanese-Chinese parallel corpora. We also show the effectiveness of this extended method through small-scale comparative experiments with a baseline method, where the alignment of Japanese and Chinese words is directly determined through the Euclidean distance of vectors representing the words, with a clustering method, and with multivariate statistical analysis.
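
    A minimal sketch of the general recipe described in this abstract, under illustrative assumptions (toy co-occurrence vectors, a tiny 4x4 map, and invented Japanese/Chinese word lists): words are coded as co-occurrence vectors, a small self-organizing map places semantically similar words at nearby nodes, and the baseline alignment pairs words across languages by the Euclidean distance between their vectors. This is not the authors' implementation.

      # Illustrative sketch: SOM over word co-occurrence vectors, plus the
      # Euclidean-distance baseline for cross-language word alignment.
      # All words, vectors, and sizes are toy assumptions.
      import numpy as np

      rng = np.random.default_rng(0)

      words = ["inu", "neko", "kuruma", "densha"]        # hypothetical Japanese words
      vecs = rng.random((len(words), 8))                 # toy co-occurrence vectors

      # --- Minimal 2-D self-organizing map ---
      grid_h, grid_w, dim = 4, 4, vecs.shape[1]
      som = rng.random((grid_h, grid_w, dim))

      def best_matching_unit(som, x):
          d = np.linalg.norm(som - x, axis=2)            # distance from x to every node
          return np.unravel_index(np.argmin(d), d.shape)

      epochs = 200
      for epoch in range(epochs):
          lr = 0.5 * (1 - epoch / epochs)                # decaying learning rate
          radius = max(1.0, 2.0 * (1 - epoch / epochs))  # shrinking neighbourhood
          for x in vecs:
              bi, bj = best_matching_unit(som, x)
              for i in range(grid_h):
                  for j in range(grid_w):
                      grid_dist = np.hypot(i - bi, j - bj)
                      if grid_dist <= radius:
                          h = np.exp(-grid_dist**2 / (2 * radius**2))
                          som[i, j] += lr * h * (x - som[i, j])

      for w, x in zip(words, vecs):
          print(w, "maps to node", best_matching_unit(som, x))

      # --- Baseline alignment by Euclidean distance ---
      zh_words = ["gou", "mao", "qiche", "huoche"]       # hypothetical Chinese words
      zh_vecs = vecs + 0.05 * rng.standard_normal(vecs.shape)  # toy "translation" vectors

      for w, x in zip(words, vecs):
          nearest = zh_words[int(np.argmin(np.linalg.norm(zh_vecs - x, axis=1)))]
          print(w, "aligned with", nearest)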

  14. Physiologically Persistent Corpora lutea in Eurasian Lynx (Lynx lynx) – Longitudinal Ultrasound and Endocrine Examinations Intra-Vitam

    PubMed Central

    Painer, Johanna; Jewgenow, Katarina; Dehnhard, Martin; Arnemo, Jon M.; Linnell, John D. C.; Odden, John; Hildebrandt, Thomas B.; Goeritz, Frank

    2014-01-01

    Felids generally follow a poly-estrous reproductive strategy. Eurasian lynx (Lynx lynx) display a different pattern of reproductive cyclicity where physiologically persistent corpora lutea (CLs) induce a mono-estrous condition which results in highly seasonal reproduction. The present study was based around a sono-morphological and endocrine study of captive Eurasian lynx, and a control-study on free-ranging lynx. We verified that CLs persist after pregnancy and pseudo-pregnancy for at least a two-year period. We could show that lynx are able to enter estrus in the following year, while CLs from the previous years persisted in structure and only temporarily reduced their function for the period of estrus onset or birth, which is unique among felids. The almost constant luteal progesterone secretion (average of 5 ng/ml serum) seems to prevent folliculogenesis outside the breeding season and has converted a poly-estrous general felid cycle into a mono-estrous cycle specific for lynx. The hormonal regulation mechanism which causes lynx to have the longest CL lifespan amongst mammals remains unclear. The described non-felid like ovarian physiology appears to be a remarkably non-plastic system. The lynx's reproductive ability to adapt to environmental and anthropogenic changes needs further investigation. PMID:24599348

  15. The influence of trilostane on steroid hormone metabolism in canine adrenal glands and corpora lutea-an in vitro study.

    PubMed

    Ouschan, C; Lepschy, M; Zeugswetter, F; Möstl, E

    2012-03-01

    Trilostane is widely used to treat hyperadrenocorticism in dogs. Trilostane competitively inhibits the enzyme 3-beta hydroxysteroid dehydrogenase (3β-HSD), which converts pregnenolone (P5) to progesterone (P4) and dehydroepiandrosterone (DHEA) to androstenedione (A4). Although trilostane is frequently used in dogs, the molecular mechanism underlying its effect on canine steroid hormone biosynthesis is still an enigma. Multiple isoforms of 3β-HSD have been found in humans, rats and mice, and their presence might explain the contradictory results of studies on the effectiveness of trilostane. We therefore investigated the influence of trilostane on steroid hormone metabolism in dogs by means of an in vitro model. Canine adrenal glands from freshly euthanized dogs and corpora lutea (CL) were incubated with increasing doses of trilostane. Tritiated P5 or DHEA were used as substrates. The resulting radioactive metabolites were extracted, separated by thin layer chromatography and visualized by autoradiography. A wide variety of radioactive metabolites were formed in the adrenal glands and in the CL, indicating high metabolic activity in both tissues. In the adrenal cortex, trilostane influenced the P5 metabolism in a dose- and time-dependent manner, while DHEA metabolism and the metabolism of both hormones in the CL were unaffected. The results indicate for the first time that there might be more than one enzyme of 3β-HSD present in dogs and that trilostane selectively inhibits P5 conversion to P4 only in the adrenal gland.

  16. Physiologically persistent Corpora lutea in Eurasian lynx (Lynx lynx) - longitudinal ultrasound and endocrine examinations intra-vitam.

    PubMed

    Painer, Johanna; Jewgenow, Katarina; Dehnhard, Martin; Arnemo, Jon M; Linnell, John D C; Odden, John; Hildebrandt, Thomas B; Goeritz, Frank

    2014-01-01

    Felids generally follow a poly-estrous reproductive strategy. Eurasian lynx (Lynx lynx) display a different pattern of reproductive cyclicity where physiologically persistent corpora lutea (CLs) induce a mono-estrous condition which results in highly seasonal reproduction. The present study was based around a sono-morphological and endocrine study of captive Eurasian lynx, and a control-study on free-ranging lynx. We verified that CLs persist after pregnancy and pseudo-pregnancy for at least a two-year period. We could show that lynx are able to enter estrus in the following year, while CLs from the previous years persisted in structure and only temporarily reduced their function for the period of estrus onset or birth, which is unique among felids. The almost constant luteal progesterone secretion (average of 5 ng/ml serum) seems to prevent folliculogenesis outside the breeding season and has converted a poly-estrous general felid cycle into a mono-estrous cycle specific for lynx. The hormonal regulation mechanism which causes lynx to have the longest CL lifespan amongst mammals remains unclear. The described non-felid like ovarian physiology appears to be a remarkably non-plastic system. The lynx's reproductive ability to adapt to environmental and anthropogenic changes needs further investigation.

  17. Transnationalism and Rights in the Age of Empire: Spoken Word, Music, and Digital Culture in the Borderlands

    ERIC Educational Resources Information Center

    Hicks, Emily D.

    2004-01-01

    Cultural activities in the San Diego-Tijuana region, including the performance of music and spoken word, are documented. The cultural activities described emerged from rhizomatic, transnational points of contact.

  18. La mort d'une langue: le judeo-espagnol (The Death of a Language: The Spanish Spoken by Jews)

    ERIC Educational Resources Information Center

    Renard, Raymond

    1971-01-01

    Describes the Sephardic culture which flourished in the Balkans, Ottoman Empire, and North Africa during the Middle Ages. Suggests the use of "Ladino," the language of medieval Spain spoken by the expelled Jews. (DS)

  19. The Specificity of Sound Symbolic Correspondences in Spoken Language.

    PubMed

    Tzeng, Christina Y; Nygaard, Lynne C; Namy, Laura L

    2016-12-29

    Although language has long been regarded as a primarily arbitrary system, sound symbolism, or non-arbitrary correspondences between the sound of a word and its meaning, also exists in natural language. Previous research suggests that listeners are sensitive to sound symbolism. However, little is known about the specificity of these mappings. This study investigated whether sound symbolic properties correspond to specific meanings, or whether these properties generalize across semantic dimensions. In three experiments, native English-speaking adults heard sound symbolic foreign words for dimensional adjective pairs (big/small, round/pointy, fast/slow, moving/still) and for each foreign word, selected a translation among English antonyms that either matched or mismatched with the correct meaning dimension. Listeners agreed more reliably on the English translation for matched relative to mismatched dimensions, though reliable cross-dimensional mappings did occur. These findings suggest that although sound symbolic properties generalize to meanings that may share overlapping semantic features, sound symbolic mappings offer semantic specificity.

  20. Targeted Help for Spoken Dialogue Systems: Intelligent Feedback Improves Naive Users' Performance

    NASA Technical Reports Server (NTRS)

    Hockey, Beth Ann; Lemon, Oliver; Campana, Ellen; Hiatt, Laura; Aist, Gregory; Hieronymous, Jim; Gruenstein, Alexander; Dowding, John

    2003-01-01

    We present experimental evidence that providing naive users of a spoken dialogue system with immediate help messages related to their out-of-coverage utterances improves their success in using the system. A grammar-based recognizer and a Statistical Language Model (SLM) recognizer are run simultaneously. If the grammar-based recognizer succeeds, the less accurate SLM recognizer hypothesis is not used. When the grammar-based recognizer fails and the SLM recognizer produces a recognition hypothesis, this result is used by the Targeted Help agent to give the user feedback on what was recognized, a diagnosis of what was problematic about the utterance, and a related in-coverage example. The in-coverage example is intended to encourage alignment between user inputs and the language model of the system. We report on controlled experiments on a spoken dialogue system for command and control of a simulated robotic helicopter.
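
    A minimal sketch of the fallback strategy described above: run both recognizers, execute on the grammar-based hypothesis when it exists, and otherwise use the SLM hypothesis only to generate a targeted help message. The function names and stub behaviours below are hypothetical stand-ins, not the system's actual API.

      # Illustrative sketch of the dual-recognizer targeted-help strategy.
      # grammar_recognize, slm_recognize and find_in_coverage_example are
      # hypothetical stubs standing in for the real components.

      def grammar_recognize(utterance):
          # Stub: pretend the grammar covers only two commands.
          return utterance if utterance in {"move forward", "turn left"} else None

      def slm_recognize(utterance):
          # Stub: the statistical language model always returns some hypothesis.
          return utterance

      def find_in_coverage_example(hypothesis):
          # Stub: suggest a related command that the grammar does cover.
          return "move forward"

      def handle_utterance(utterance):
          grammar_hyp = grammar_recognize(utterance)
          if grammar_hyp is not None:
              # Grammar-based recognition succeeded; the SLM result is ignored.
              return {"action": "execute", "command": grammar_hyp}

          # Grammar failed: the (less accurate) SLM hypothesis is used only to
          # build feedback, a diagnosis, and a related in-coverage example.
          slm_hyp = slm_recognize(utterance)
          if slm_hyp is None:
              return {"action": "help", "message": "Sorry, I did not understand."}
          return {"action": "help",
                  "message": (f'I heard "{slm_hyp}", which is outside my coverage. '
                              f'You could try: "{find_in_coverage_example(slm_hyp)}".')}

      print(handle_utterance("move forward"))                      # executed
      print(handle_utterance("please go straight ahead quickly"))  # targeted help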

  1. Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans

    NASA Astrophysics Data System (ADS)

    Pei, Xiaomei; Barbour, Dennis L.; Leuthardt, Eric C.; Schalk, Gerwin

    2011-08-01

    Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility to 'read the mind', i.e. to interpret internally-generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain (electrocorticography) to discriminate the vowels and consonants embedded in spoken and in imagined words, and we defined the cortical areas that held the most information about discrimination of vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.

  2. Effect of Representational Distance between Meanings on Recognition of Ambiguous Spoken Words

    PubMed Central

    Mirman, Daniel; Strauss, Ted J.; Dixon, James A.; Magnuson, James S.

    2009-01-01

    Previous research indicates that mental representations of word meanings are distributed along both semantic and syntactic dimensions such that nouns and verbs are relatively distinct from one another. Two experiments examined the effect of representational distance between meanings on recognition of ambiguous spoken words by comparing recognition of unambiguous words, noun-verb homonyms, and noun-noun homonyms. In Experiment 1, auditory lexical decision was fastest for unambiguous words, slower for noun-verb homonyms, and slowest for noun-noun homonyms. In Experiment 2, response times for matching spoken words to pictures followed the same pattern and eye fixation time courses revealed converging, gradual time course differences between conditions. These results indicate greater competition between meanings of ambiguous words when the meanings are from the same grammatical class (noun-noun homonyms) than they when are from different grammatical classes (noun-verb homonyms). PMID:20354577

  3. The genetic bases of speech sound disorders: evidence from spoken and written language.

    PubMed

    Lewis, Barbara A; Shriberg, Lawrence D; Freebairn, Lisa A; Hansen, Amy J; Stein, Catherine M; Taylor, H Gerry; Iyengar, Sudha K

    2006-12-01

    The purpose of this article is to review recent findings suggesting a genetic susceptibility for speech sound disorders (SSD), the most prevalent communication disorder in early childhood. The importance of genetic studies of SSD and the hypothetical underpinnings of these genetic findings are reviewed, as well as genetic associations of SSD with other language and reading disabilities. The authors propose that many genes contribute to SSD. They further hypothesize that some genes contribute to SSD disorders alone, whereas other genes influence both SSD and other written and spoken language disorders. The authors postulate that underlying common cognitive traits, or endophenotypes, are responsible for shared genetic influences of spoken and written language. They review findings from their genetic linkage study and from the literature to illustrate recent developments in this area. Finally, they discuss challenges for identifying genetic influence on SSD and propose a conceptual framework for study of the genetic basis of SSD.

  4. Preliminary findings of similarities and differences in the signed and spoken language of children with autism.

    PubMed

    Shield, Aaron

    2014-11-01

    Approximately 30% of hearing children with autism spectrum disorder (ASD) do not acquire expressive language, and those who do often show impairments related to their social deficits, using language instrumentally rather than socially, with a poor understanding of pragmatics and a tendency toward repetitive content. Linguistic abnormalities can be clinically useful as diagnostic markers of ASD and as targets for intervention. Studies have begun to document how ASD manifests in children who are deaf for whom signed languages are the primary means of communication. Though the underlying disorder is presumed to be the same in children who are deaf and children who hear, the structures of signed and spoken languages differ in key ways. This article describes similarities and differences between the signed and spoken language acquisition of children on the spectrum. Similarities include echolalia, pronoun avoidance, neologisms, and the existence of minimally verbal children. Possible areas of divergence include pronoun reversal, palm reversal, and facial grammar.

  5. The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing

    PubMed Central

    Gow, David W.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing. PMID:22498237

  6. AVIR: a spoken document retrieval system in e-learning environment

    NASA Astrophysics Data System (ADS)

    Gagliardi, Isabella; Padula, Marco; Pagliarulo, Patrizia; Aliprandi, Bruno

    2006-01-01

    In this paper we present AVIR (Audio & Video Information Retrieval), a project of CNR (Italian National Research Council) - ITC to develop tools to support an information system for distance e-learning. AVIR has been designed to store, index, and classify audio and video lessons to make them available to students and other interested users. The core of AVIR is an SDR (Spoken Document Retrieval) system which automatically transcribes the spoken documents into texts and indexes them through appropriately created dictionaries. During online use, users can formulate queries, searching documents by date, professor, or title of the lesson, or by selecting one or more specific words. The results are presented to the users: in the case of video lessons, a preview of the first frames is shown. Moreover, slides of the lessons and associated papers can be retrieved.
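
    A minimal sketch of the kind of indexing and retrieval such an SDR system performs once the audio has been transcribed: metadata search by date, professor, or title, and keyword search over the transcripts via an inverted index. The lesson records and field values below are invented for illustration; this is not the AVIR code.

      # Illustrative sketch: index transcribed lessons and query them by
      # metadata or by words contained in the transcript. Toy data only.
      from collections import defaultdict

      lessons = [
          {"id": 1, "professor": "Rossi", "date": "2005-03-10",
           "title": "Introduction to signals",
           "transcript": "today we introduce discrete signals and sampling"},
          {"id": 2, "professor": "Bianchi", "date": "2005-04-02",
           "title": "Fourier analysis",
           "transcript": "the fourier transform maps signals to the frequency domain"},
      ]

      # Inverted index from transcript words to lesson ids.
      index = defaultdict(set)
      for lesson in lessons:
          for word in lesson["transcript"].split():
              index[word].add(lesson["id"])

      def search_by_words(*words):
          """Ids of lessons whose transcripts contain all the given words."""
          sets = [index.get(w, set()) for w in words]
          return set.intersection(*sets) if sets else set()

      def search_by_field(field, value):
          """Ids of lessons whose metadata field equals the given value."""
          return {lesson["id"] for lesson in lessons if lesson[field] == value}

      print(search_by_words("signals"))              # {1, 2}
      print(search_by_words("fourier", "signals"))   # {2}
      print(search_by_field("professor", "Rossi"))   # {1}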

  7. The influence of sublexical and lexical representations on the processing of spoken words in English

    PubMed Central

    VITEVITCH, MICHAEL S.

    2008-01-01

    Previous research suggests that sublexical and lexical representations are involved in spoken word recognition. The current experiment examined when sublexical and lexical representations are used in the processing of real words in English. The same set of words varying in phonotactic probability/neighbourhood density was presented in three different versions of a same-different matching task: (1) mostly real words as filler items, (2) an equal number of words and nonsense words as filler items and (3) mostly nonsense words as filler items. The results showed that lexical representations were used in version 1 of the same-different matching task to process the words, whereas sublexical representations were used in version 3 of the same-different matching task to process the words. Finally, in version 2 of the same-different matching task individual variation was observed in the form of distinct sublexical and lexical biases. Implications for the processing of spoken words are discussed. PMID:14564833

  8. Contribution of the basal ganglia to spoken language: is speech production like the other motor skills?

    PubMed

    Zenon, Alexandre; Olivier, Etienne

    2014-12-01

    Two of the roles assigned to the basal ganglia in spoken language parallel very well their contribution to motor behaviour: (1) their role in sequence processing, resulting in syntax deficits, and (2) their role in movement "vigor," leading to "hypokinetic dysarthria" or "hypophonia." This is an additional example of how the motor system has served the emergence of high-level cognitive functions, such as language.

  9. Demonstration of a Spoken Dialogue Interface for Planning Activities of a Semi-autonomous Robot

    NASA Technical Reports Server (NTRS)

    Dowding, John; Frank, Jeremy; Hockey, Beth Ann; Jonsson, Ari; Aist, Gregory

    2002-01-01

    Planning and scheduling in the face of uncertainty and change pushes the capabilities of both planning and dialogue technologies by requiring complex negotiation to arrive at a workable plan. Planning for use of semi-autonomous robots involves negotiation among multiple participants with competing scientific and engineering goals to co-construct a complex plan. In NASA applications this plan construction is done under severe time pressure, so having a dialogue interface to the plan construction tools can aid rapid completion of the process. But this will put significant demands on spoken dialogue technology, particularly in the areas of dialogue management and generation. The dialogue interface will need to be able to handle the complex dialogue strategies that occur in negotiation dialogues, including hypotheticals and revisions, and the generation component will require an ability to summarize complex plans. This demonstration will describe work in progress towards building a spoken dialogue interface to the EUROPA planner for the purposes of planning and scheduling the activities of a semi-autonomous robot. A prototype interface has been built for planning the schedule of the Personal Satellite Assistant (PSA), a mobile robot designed for micro-gravity environments that is intended for use on the Space Shuttle and International Space Station. The spoken dialogue interface gives the user the capability to ask for a description of the plan, ask specific questions about the plan, and update or modify the plan. We anticipate that a spoken dialogue interface to the planner will provide a natural augmentation or alternative to the visualization interface, in situations in which the user needs very targeted information about the plan, in situations where natural language can express complex ideas more concisely than GUI actions, or in situations in which a graphical user interface is not appropriate.

  10. Understanding the Relationship between Latino Students' Preferred Learning Styles and Their Language Spoken at Home

    ERIC Educational Resources Information Center

    Maldonado Torres, Sonia Enid

    2016-01-01

    The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…

  11. Applicability of the Spoken Knowledge in Low Literacy Patients with Diabetes in Brazilian elderly

    PubMed Central

    Souza, Jonas Gordilho; Apolinario, Daniel; Farfel, José Marcelo; Jaluul, Omar; Magaldi, Regina Miksian; Busse, Alexandre Leopold; Campora, Flávia; Jacob-Filho, Wilson

    2016-01-01

    ABSTRACT Objective To translate, adapt and evaluate the properties of a Brazilian Portuguese version of the Spoken Knowledge in Low Literacy Patients with Diabetes, a questionnaire that evaluates diabetes knowledge. Methods A cross-sectional study with type 2 diabetes patients aged ≥60 years, seen at a public healthcare organization in the city of Sao Paulo (SP). After developing the Portuguese version, we evaluated its psychometric properties and its association with sociodemographic and clinical variables. The regression models were adjusted for sociodemographic data, functional health literacy, duration of disease, use of insulin, and glycemic control. Results We evaluated 129 type 2 diabetic patients, with mean age of 75.9 (±6.2) years, mean schooling of 5.2 (±4.4) years, mean glycosylated hemoglobin of 7.2% (±1.4), and mean score on the Spoken Knowledge in Low Literacy Patients with Diabetes of 42.1% (±25.8). In the regression model, the variables independently associated with the Spoken Knowledge in Low Literacy Patients with Diabetes were schooling (B=0.193; p=0.003), use of insulin (B=1.326; p=0.004), duration of diabetes (B=0.053; p=0.022) and health literacy (B=0.108; p=0.021). The coefficient of determination was 0.273. Cronbach's alpha was 0.75, demonstrating appropriate internal consistency. Conclusion This translated version of the Spoken Knowledge in Low Literacy Patients with Diabetes proved adequate for evaluating diabetes knowledge in elderly patients with low schooling levels. It presented a normal distribution and adequate internal consistency, with no ceiling or floor effect. The tool is easy to use, can be applied quickly and does not depend on reading skills. PMID:28076599

  12. Overlapping networks engaged during spoken language production and its cognitive control.

    PubMed

    Geranmayeh, Fatemeh; Wise, Richard J S; Mehta, Amrish; Leech, Robert

    2014-06-25

    Spoken language production is a complex brain function that relies on large-scale networks. These include domain-specific networks that mediate language-specific processes, as well as domain-general networks mediating top-down and bottom-up attentional control. Language control is thought to involve a left-lateralized fronto-temporal-parietal (FTP) system. However, these regions do not always activate for language tasks and similar regions have been implicated in nonlinguistic cognitive processes. These inconsistent findings suggest that either the left FTP is involved in multidomain cognitive control or that there are multiple spatially overlapping FTP systems. We present evidence from an fMRI study using multivariate analysis to identify spatiotemporal networks involved in spoken language production in humans. We compared spoken language production (Speech) with multiple baselines, counting (Count), nonverbal decision (Decision), and "rest," to pull apart the multiple partially overlapping networks that are involved in speech production. A left-lateralized FTP network was activated during Speech and deactivated during Count and nonverbal Decision trials, implicating it in cognitive control specific to sentential spoken language production. A mirror right-lateralized FTP network was activated in the Count and Decision trials, but not Speech. Importantly, a second overlapping left FTP network showed relative deactivation in Speech. These three networks, with distinct time courses, overlapped in the left parietal lobe. Contrary to the standard model of the left FTP as being dominant for speech, we revealed a more complex pattern within the left FTP, including at least two left FTP networks with competing functional roles, only one of which was activated in speech production.

  13. Spoken word recognition by Latino children learning Spanish as their first language*

    PubMed Central

    HURTADO, NEREYDA; MARCHMAN, VIRGINIA A.; FERNALD, ANNE

    2010-01-01

    Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157

  14. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials.

    PubMed

    Boudewyn, Megan A; Gordon, Peter C; Long, Debra; Polse, Lara; Swaab, Tamara Y

    2012-06-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., "Luckily Ben had picked up some salt and pepper/basil", preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent)). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition.

  15. Orthographic effects in spoken language: on-line activation or phonological restructuring?

    PubMed

    Perre, Laetitia; Pattamadilok, Chotiga; Montant, Marie; Ziegler, Johannes C

    2009-06-12

    Previous research has shown that literacy (i.e., learning to read and spell) affects spoken language processing. However, there is an on-going debate about the nature of this influence. Some argued that orthography is co-activated on-line whenever we hear a spoken word. Others suggested that orthography is not activated on-line but has changed the nature of the phonological representations. Finally, both effects might occur simultaneously, that is, orthography might be activated on-line in addition to having changed the nature of the phonological representations. Previous studies have not been able to tease apart these hypotheses. The present study started by replicating the finding of an orthographic consistency effect in spoken word recognition using event-related brain potentials (ERPs): words with multiple spellings (i.e., inconsistent words) differed from words with unique spellings (i.e., consistent words) as early as 330 ms after the onset of the target. We then employed standardized low resolution electromagnetic tomography (sLORETA) to determine the possible underlying cortical generators of this effect. The results showed that the orthographic consistency effect was clearly localized in a classic phonological area (left BA40). No evidence was found for activation in the posterior cortical areas coding orthographic information, such as the visual word form area in the left fusiform gyrus (BA37). This finding is consistent with the restructuring hypothesis according to which phonological representations are "contaminated" by orthographic knowledge.

  16. Adaptation to Pronunciation Variations in Indonesian Spoken Query-Based Information Retrieval

    NASA Astrophysics Data System (ADS)

    Lestari, Dessi Puji; Furui, Sadaoki

    Recognition errors of proper nouns and foreign words significantly decrease the performance of ASR-based speech applications such as voice dialing systems, speech summarization, spoken document retrieval, and spoken query-based information retrieval (IR). The reason is that proper nouns and words that come from other languages are usually the most important key words. The loss of such words due to misrecognition in turn leads to a loss of significant information from the speech source. This paper focuses on how to improve the performance of Indonesian ASR by alleviating the problem of pronunciation variation of proper nouns and foreign words (English words in particular). To improve the proper noun recognition accuracy, proper-noun specific acoustic models are created by supervised adaptation using maximum likelihood linear regression (MLLR). To improve English word recognition, the pronunciation of English words contained in the lexicon is fixed by using rule-based English-to-Indonesian phoneme mapping. The effectiveness of the proposed method was confirmed through spoken query based Indonesian IR. We used Inference Network-based (IN-based) IR and compared its results with those of the classical Vector Space Model (VSM) IR, both using a tf-idf weighting schema. Experimental results show that IN-based IR outperforms VSM IR.
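
    The specific English-to-Indonesian mapping rules are not listed in the abstract; the sketch below only illustrates the general idea of rewriting lexicon pronunciations with substitution rules. The phone symbols and substitutions shown are invented placeholders, not the authors' rule set.

```python
# Illustrative rule-based phone mapping for English loanwords in an Indonesian
# lexicon. The substitution table below is a hypothetical placeholder.
ENGLISH_TO_INDONESIAN = {
    "TH": "T",   # e.g., an English dental fricative approximated by a stop
    "V":  "F",
    "Z":  "S",
}

def map_pronunciation(phones):
    """Rewrite an English phone sequence using the substitution table."""
    return [ENGLISH_TO_INDONESIAN.get(p, p) for p in phones]

# A hypothetical lexicon entry before and after mapping
print(map_pronunciation(["TH", "IY", "S", "IH", "S"]))  # ['T', 'IY', 'S', 'IH', 'S']
```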

  17. The influence of the phonological neighborhood clustering coefficient on spoken word recognition.

    PubMed

    Chan, Kit Ying; Vitevitch, Michael S

    2009-12-01

    Clustering coefficient-a measure derived from the new science of networks-refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words bat, hat, and can, all of which are neighbors of the word cat; the words bat and hat are also neighbors of each other. In a perceptual identification task, words with a low clustering coefficient (i.e., few neighbors are neighbors of each other) were more accurately identified than words with a high clustering coefficient (i.e., many neighbors are neighbors of each other). In a lexical decision task, words with a low clustering coefficient were responded to more quickly than words with a high clustering coefficient. These findings suggest that the structure of the lexicon (i.e., the similarity relationships among neighbors of the target word measured by clustering coefficient) influences lexical access in spoken word recognition. Simulations of the TRACE and Shortlist models of spoken word recognition failed to account for the present findings. A framework for a new model of spoken word recognition is proposed.
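
    Using the bat/hat/can example from the abstract, the sketch below computes a clustering coefficient for a target word, assuming the common definition of phonological neighbors as words one phoneme substitution, addition, or deletion apart; the tiny lexicon is illustrative only.

```python
# Clustering coefficient of a word's phonological neighborhood: the proportion
# of neighbor pairs that are themselves neighbors (edit distance of 1).
from itertools import combinations

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def neighbors(word, lexicon):
    return [w for w in lexicon if w != word and edit_distance(word, w) == 1]

def clustering_coefficient(word, lexicon):
    ns = neighbors(word, lexicon)
    pairs = list(combinations(ns, 2))
    if not pairs:
        return 0.0
    linked = sum(1 for a, b in pairs if edit_distance(a, b) == 1)
    return linked / len(pairs)

# Toy lexicon from the example: bat, hat, and can are neighbors of cat,
# but only bat and hat are neighbors of each other -> coefficient 1/3.
print(clustering_coefficient("cat", ["cat", "bat", "hat", "can"]))
```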

  18. Recognition memory for Braille or spoken words: an fMRI study in early blind.

    PubMed

    Burton, Harold; Sinclair, Robert J; Agato, Alvin

    2012-02-15

    We examined cortical activity in early blind individuals during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Words were presented in Braille or spoken form. Responses were larger for identified "new" words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words, and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted participants noted larger responses for "new" words studied in association with pictures, which created a distinctiveness heuristic source factor that enhanced recollection during remembering. Prior behavioral studies in the early blind noted an exceptional ability to recall words. Use of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustively recollecting the sensory properties of "old" words in modality-appropriate sensory cortices. The uniqueness of a memory role for occipital cortex lies in its cross-modal responses coding the tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection.

  19. Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices.

    PubMed

    Jones, A C; Toscano, E; Botting, N; Marshall, C R; Atkinson, J R; Denmark, T; Herman, R; Morgan, G

    2016-12-01

    Previous research has highlighted that deaf children acquiring spoken English have difficulties in narrative development relative to their hearing peers, both in terms of macro-structure and with micro-structural devices. The majority of previous research focused on narrative tasks designed for hearing children that depend on good receptive language skills. The current study compared narratives of 6 to 11-year-old deaf children who use spoken English (N=59) with those of hearing peers matched for age and non-verbal intelligence. To examine the role of general language abilities, single word vocabulary was also assessed. Narratives were elicited by the retelling of a story presented non-verbally in video format. Results showed that deaf and hearing children had equivalent macro-structure skills, but the deaf group showed poorer performance on micro-structural components. Furthermore, the deaf group gave less detailed responses to inferencing probe questions, indicating poorer understanding of the story's underlying message. For deaf children, micro-level devices most strongly correlated with the vocabulary measure. These findings suggest that deaf children, despite spoken language delays, are able to convey the main elements of content and structure in narrative but have greater difficulty in using grammatical devices more dependent on finer linguistic and pragmatic skills.

  20. Parent Telegraphic Speech Use and Spoken Language in Preschoolers With ASD

    PubMed Central

    Bolt, Daniel M.; Meyer, Allison; Sindberg, Heidi; Ellis Weismer, Susan; Tager-Flusberg, Helen

    2015-01-01

    Purpose There is considerable controversy regarding whether to use telegraphic or grammatical input when speaking to young children with language delays, including children with autism spectrum disorder (ASD). This study examined telegraphic speech use in parents of preschoolers with ASD and associations with children's spoken language 1 year later. Method Parent–child dyads (n = 55) participated when children were, on average, 3 (Time 1) and 4 years old (Time 2). The rate at which parents omitted obligatory determiners was derived from transcripts of parent–child play sessions; measures of children's spoken language were obtained from these same transcripts. Results Telegraphic speech use varied substantially across parents. Higher rates of parent determiner omissions at Time 1 were significantly associated with lower lexical diversity in children's spoken language at Time 2, even when controlling for children's baseline lexical diversity and nonverbal IQ. Findings from path analyses supported the directionality of effects assumed in our regression analyses, although these results should be interpreted with caution due to the limited sample size. Conclusions Telegraphic input may have a negative impact on language development in young children with ASD. Future experimental research is needed to directly investigate how telegraphic input affects children's language learning and processing. PMID:26381592

  1. Orthographic Activation in L2 Spoken Word Recognition Depends on Proficiency: Evidence from Eye-Tracking

    PubMed Central

    Veivo, Outi; Järvikivi, Juhani; Porretta, Vincent; Hyönä, Jukka

    2016-01-01

    The use of orthographic and phonological information in spoken word recognition was studied in a visual world task where L1 Finnish learners of L2 French (n = 64) and L1 French native speakers (n = 24) were asked to match spoken word forms with printed words while their eye movements were recorded. In Experiment 1, French target words were contrasted with competitors having either a longer or a shorter word-initial phonological overlap and an identical orthographic overlap. In Experiment 2, target words were contrasted with competitors having either a longer or a shorter word-initial orthographic overlap and an identical phonological overlap. A general phonological effect was observed in the L2 listener group but not in the L1 control group. No general orthographic effects were observed in the L2 or L1 groups, but a significant effect of proficiency was observed for orthographic overlap over time: higher proficiency L2 listeners also used orthographic information in the matching task in a time window from 400 to 700 ms, whereas no such effect was observed for lower proficiency listeners. These results suggest that the activation of orthographic information in L2 spoken word recognition depends on proficiency in L2. PMID:27512381

  2. I Feel You: The Design and Evaluation of a Domotic Affect-Sensitive Spoken Conversational Agent

    PubMed Central

    Lutfi, Syaheerah Lebai; Fernández-Martínez, Fernando; Lorenzo-Trueba, Jaime; Barra-Chicote, Roberto; Montero, Juan Manuel

    2013-01-01

    We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified, as is done in most existing dialog systems, in order to be adaptive. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely, frustration and contentment, from dialog features, a non-conventional source, in an attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study, in which both versions of the agent (non-adaptive and emotionally-adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion to a spoken conversational agent, especially in mitigating users' frustrations and, ultimately, improving their satisfaction. PMID:23945740

  3. I feel you: the design and evaluation of a domotic affect-sensitive spoken conversational agent.

    PubMed

    Lutfi, Syaheerah Lebai; Fernández-Martínez, Fernando; Lorenzo-Trueba, Jaime; Barra-Chicote, Roberto; Montero, Juan Manuel

    2013-08-13

    We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified, as is done in most existing dialog systems, in order to be adaptive. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely, frustration and contentment, from dialog features, a non-conventional source, in an attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study, in which both versions of the agent (non-adaptive and emotionally-adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion to a spoken conversational agent, especially in mitigating users' frustrations and, ultimately, improving their satisfaction.

  4. Retinoic Acid Signaling: A New Piece in the Spoken Language Puzzle

    PubMed Central

    van Rhijn, Jon-Ruben; Vernes, Sonja C.

    2015-01-01

    Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry including cortico-striato-thalamic loops that control speech–motor output. Understanding the neuro-genetic mechanisms involved in the correct development and function of these pathways will shed light on how humans can effortlessly and innately use spoken language and help to elucidate what goes wrong in speech-language disorders. FOXP2 was the first single gene identified to cause speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that includes receptive and expressive language impairments. The neuro-molecular mechanisms controlled by FOXP2 will give insight into our capacity for speech–motor control, but are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid (RA) signaling and to modify the cellular response to RA, a key regulator of brain development. Here we explore evidence that FOXP2 and RA function in overlapping pathways. We summate evidence at molecular, cellular, and behavioral levels that suggest an interplay between FOXP2 and RA that may be important for fine motor control and speech–motor output. We propose RA signaling is an exciting new angle from which to investigate how neuro-genetic mechanisms can contribute to the (spoken) language ready brain. PMID:26635706

  5. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    PubMed Central

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2011-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent)). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319

  6. How long-term memory and accentuation interact during spoken language comprehension.

    PubMed

    Li, Xiaoqing; Yang, Yufang

    2013-04-01

    Spoken language comprehension requires immediate integration of different information types, such as semantics, syntax, and prosody. Meanwhile, both the information derived from speech signals and the information retrieved from long-term memory exert their influence on language comprehension immediately. Using EEG (electroencephalogram), the present study investigated how the information retrieved from long-term memory interacts with accentuation during spoken language comprehension. Mini Chinese discourses were used as stimuli, with an interrogative or assertive context sentence preceding the target sentence. The target sentence included one critical word conveying new information. The critical word was either highly expected or lowly expected given the information retrieved from long-term memory. Moreover, the critical word was either consistently accented or inconsistently de-accented. The results revealed that for lowly expected new information, inconsistently de-accented words elicited a larger N400 and larger theta power increases (4-6 Hz) than consistently accented words. In contrast, for the highly expected new information, consistently accented words elicited a larger N400 and larger alpha power decreases (8-14 Hz) than inconsistently de-accented words. The results suggest that, during spoken language comprehension, the effect of accentuation interacted with the information retrieved from long-term memory immediately. Moreover, our results also have important consequences for our understanding of the processing nature of the N400. The N400 amplitude is not only enhanced for incorrect information (new and de-accented word) but also enhanced for correct information (new and accented words).

  7. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language.

    PubMed

    Williams, Joshua T; Newman, Sharlene D

    2017-02-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and the activation of targets in sparse and dense neighborhoods was compared. Neighborhood density effects in the auditory-primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.

  8. Expression of luteinizing hormone and chorionic gonadotropin receptor messenger ribonucleic acid in human corpora lutea during menstrual cycle and pregnancy.

    PubMed

    Nishimori, K; Dunkel, L; Hsueh, A J; Yamoto, M; Nakano, R

    1995-04-01

    In the present study, we examined the expression of LH and CG receptor messenger RNA (mRNA) in human corpora lutea (CL) during the menstrual cycle and pregnancy. Poly(A)-enriched RNA was extracted from CL and analyzed by Northern and slot blots, using a radiolabeled complementary RNA probe derived from the human LH receptor complementary DNA. Northern blot analysis indicated the presence of multiple LH receptor mRNA transcripts with molecular sizes of 8.0, 7.0 and 4.5 kilobases in human CL during the menstrual cycle. The predominant transcript was 4.5 kilobases in size. However, no hybridization signals were observed in nongonadal tissues (heart, liver, and kidney). Densitometric analyses revealed that the levels of LH receptor mRNA increased from early luteal phase to midluteal phase and subsequently decreased during late luteal phase. After the onset of menstruation, the LH receptor mRNA level was undetectable in the regressing CL. Moreover, radioligand receptor assay (RRA) showed a close parallelism between LH receptor mRNA levels and LH receptor content in CL throughout the menstrual cycle. LH receptor mRNA expression was also found in CL during early pregnancy. The level of LH receptor mRNA was relatively high in early pregnancy CL, whereas LH receptor content was low. Using in situ hybridization, LH receptor mRNAs were uniformly expressed in both large and small luteal cells during early and midluteal phase and early pregnancy, but not in regressing CL. In conclusion, these data demonstrate that the regulation of LH receptor content in human CL during luteal phase is associated with similar changes in the receptor message levels, suggesting the physiological roles for LH receptor mRNA during the menstrual cycle in the human. In addition, the expression of LH receptor mRNA was demonstrated in human CL during early pregnancy.

  9. Adult Speakers' Tongue-Palate Contact Patterns for Bilabial Stops within Complex Clusters

    ERIC Educational Resources Information Center

    Zharkova, Natalia; Schaeffler, Sonja; Gibbon, Fiona E.

    2009-01-01

    Previous studies using Electropalatography (EPG) have shown that individuals with speech disorders sometimes produce articulation errors that affect bilabial targets, but currently there is limited normative data available. In this study, EPG and acoustic data were recorded during complex word final sps clusters spoken by 20 normal adults. A total…

  10. Comparative metabolism of branched-chain amino acids to precursors of juvenile hormone biogenesis in corpora allata of lepidopterous versus nonlepidopterous insects

    SciTech Connect

    Brindle, P.A.; Schooley, D.A.; Tsai, L.W.; Baker, F.C.

    1988-08-05

    Comparative studies were performed on the role of branched-chain amino acids (BCAA) in juvenile hormone (JH) biosynthesis using several lepidopterous and nonlepidopterous insects. Corpora cardiaca-corpora allata complexes (CC-CA, the corpora allata being the organ of JH biogenesis) were maintained in culture medium containing a uniformly ¹⁴C-labeled BCAA, together with (methyl-³H)methionine as mass marker for JH quantification. BCAA catabolism was quantified by directly analyzing the medium for the presence of ¹⁴C-labeled propionate and/or acetate, while JHs were extracted, purified by liquid chromatography, and subjected to double-label liquid scintillation counting. Our results indicate that active BCAA catabolism occurs within the CC-CA of lepidopterans, and this efficiently provides propionyl-CoA (from isoleucine or valine) for the biosynthesis of the ethyl branches of JH I and II. Acetyl-CoA, formed from isoleucine or leucine catabolism, is also utilized by lepidopteran CC-CA for biosynthesizing JH III and the acetate-derived portions of the ethyl-branched JHs. In contrast, CC-CA of nonlepidopterans fail to catabolize BCAA. Consequently, exogenous isoleucine or leucine does not serve as a carbon source for the biosynthesis of JH III by these glands, and no propionyl-CoA is produced for genesis of ethyl-branched JHs. This is the first observation of a tissue-specific metabolic difference which in part explains why these novel homosesquiterpenoids exist in lepidopterans, but not in nonlepidopterans.

  11. Symbolic gestures and spoken language are processed by a common neural system.

    PubMed

    Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R

    2009-12-08

    Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.

  12. Conducting spoken word recognition research online: Validation and a new timing method.

    PubMed

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.
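
    As an illustration of the validation logic described above, the sketch below correlates per-word identification accuracy from a laboratory sample with accuracy for the same words collected online; all numbers are invented placeholders, not the study's data.

```python
# Pearson correlation between laboratory and online per-word accuracy scores.
# The values are hypothetical and only illustrate the comparison.
from statistics import mean

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

lab_accuracy    = [0.91, 0.72, 0.85, 0.60, 0.78]   # hypothetical per-word scores
online_accuracy = [0.88, 0.69, 0.80, 0.55, 0.74]
print(round(pearson_r(lab_accuracy, online_accuracy), 3))
```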

  13. Working memory predicts semantic comprehension in dichotic listening in older adults.

    PubMed

    James, Philip J; Krishnan, Saloni; Aydelott, Jennifer

    2014-10-01

    Older adults have difficulty understanding spoken language in the presence of competing voices. Everyday social situations involving multiple simultaneous talkers may become increasingly challenging in later life due to changes in the ability to focus attention. This study examined whether individual differences in cognitive function predict older adults' ability to access sentence-level meanings in competing speech using a dichotic priming paradigm. Older listeners showed faster responses to words that matched the meaning of spoken sentences presented to the left or right ear, relative to a neutral baseline. However, older adults were more vulnerable than younger adults to interference from competing speech when the competing signal was presented to the right ear. This pattern of performance was strongly correlated with a non-auditory working memory measure, suggesting that cognitive factors play a key role in semantic comprehension in competing speech in healthy aging.

  14. Spoken Hawaiian.

    ERIC Educational Resources Information Center

    Elbert, Samuel H.

    The objects of this beginning text in Hawaiian, the result of two decades' efforts in teaching, are to present the principal conversational and grammatical patterns and the most common idioms, in order to prepare the student to read and enjoy the rich heritage of Hawaiian traditional legends and poetry. A short introductory section discusses…

  15. Phoneme-free prosodic representations are involved in pre-lexical and lexical neurobiological mechanisms underlying spoken word processing.

    PubMed

    Schild, Ulrike; Becker, Angelika B C; Friedrich, Claudia K

    2014-09-01

    Recently we reported that spoken stressed and unstressed primes differently modulate Event Related Potentials (ERPs) of spoken initially stressed targets. ERP stress priming was independent of prime-target phoneme overlap. Here we test whether phoneme-free ERP stress priming involves the lexicon. We used German target words with the same onset phonemes but different onset stress, such as MANdel ("almond") and manDAT ("mandate"; capital letters indicate stress). First syllables of those words served as primes. We orthogonally varied prime-target overlap in stress and phonemes. ERP stress priming did neither interact with phoneme priming nor with the stress pattern of the targets. However, polarity of ERP stress priming was reversed to that previously obtained. The present results are evidence for phoneme-free prosodic processing at the lexical level. Together with the previous results they reveal that phoneme-free prosodic representations at the pre-lexical and lexical level are recruited by neurobiological spoken word recognition.

  16. The Hebrew CHILDES corpus: transcription and morphological analysis

    PubMed Central

    Albert, Aviad; MacWhinney, Brian; Nir, Bracha

    2014-01-01

    We present a corpus of transcribed spoken Hebrew that reflects spoken interactions between children and adults. The corpus is an integral part of the CHILDES database, which distributes similar corpora for over 25 languages. We introduce a dedicated transcription scheme for the spoken Hebrew data that is sensitive to both the phonology and the standard orthography of the language. We also introduce a morphological analyzer that was specifically developed for this corpus. The analyzer adequately covers the entire corpus, producing detailed correct analyses for all tokens. Evaluation on a new corpus reveals high coverage as well. Finally, we describe a morphological disambiguation module that selects the correct analysis of each token in context. The result is a high-quality morphologically-annotated CHILDES corpus of Hebrew, along with a set of tools that can be applied to new corpora. PMID:25419199
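
    The disambiguation module itself is not detailed in the abstract; the toy sketch below only illustrates the general idea of choosing one analysis per token from context, here with a greedy bigram score over part-of-speech tags. The tags, counts, and candidate sets are invented placeholders, not the Hebrew analyzer's output.

```python
# Toy greedy disambiguator: pick, for each token, the candidate analysis whose
# tag most often follows the previously chosen tag in some training counts.
BIGRAM_COUNTS = {("START", "DET"): 40, ("DET", "NOUN"): 50, ("DET", "VERB"): 2,
                 ("NOUN", "VERB"): 30, ("NOUN", "NOUN"): 10}

def disambiguate(candidates_per_token):
    chosen, prev = [], "START"
    for candidates in candidates_per_token:
        best = max(candidates, key=lambda tag: BIGRAM_COUNTS.get((prev, tag), 0))
        chosen.append(best)
        prev = best
    return chosen

# An ambiguous second and third token, each with two candidate analyses
print(disambiguate([["DET"], ["NOUN", "VERB"], ["VERB", "NOUN"]]))  # ['DET', 'NOUN', 'VERB']
```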

  17. Differential processing of thematic and categorical conceptual relations in spoken word production.

    PubMed

    de Zubicaray, Greig I; Hansen, Samuel; McMahon, Katie L

    2013-02-01

    Studies of semantic context effects in spoken word production have typically distinguished between categorical (or taxonomic) and associative relations. However, associates tend to confound semantic features or morphological representations, such as whole-part relations and compounds (e.g., BOAT-anchor, BEE-hive). Using a picture-word interference paradigm and functional magnetic resonance imaging (fMRI), we manipulated categorical (COW-rat) and thematic (COW-pasture) TARGET-distractor relations in a balanced design, finding interference and facilitation effects on naming latencies, respectively, as well as differential patterns of brain activation compared with an unrelated distractor condition. While both types of distractor relation activated the middle portion of the left middle temporal gyrus (MTG) consistent with retrieval of conceptual or lexical representations, categorical relations involved additional activation of posterior left MTG, consistent with retrieval of a lexical cohort. Thematic relations involved additional activation of the left angular gyrus. These results converge with recent lesion evidence implicating the left inferior parietal lobe in processing thematic relations and may indicate a potential role for this region during spoken word production.

  18. Modeling Longitudinal Changes in Older Adults’ Memory for Spoken Discourse: Findings from the ACTIVE Cohort

    PubMed Central

    Payne, Brennan R.; Gross, Alden L.; Parisi, Jeanine M.; Sisco, Shannon M.; Stine-Morrow, Elizabeth A. L.; Marsiske, Michael; Rebok, George W.

    2014-01-01

    Episodic memory shows substantial declines with advancing age, but research on longitudinal trajectories of spoken discourse memory (SDM) in older adulthood is limited. Using parallel process latent growth curve models, we examined 10 years of longitudinal data from the no-contact control group (N = 698) of the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) randomized controlled trial in order to test (a) the degree to which SDM declines with advancing age, (b) predictors of these age-related declines, and (c) the within-person relationship between longitudinal changes in SDM and longitudinal changes in fluid reasoning and verbal ability over 10 years, independent of age. Individuals who were younger, White, had more years of formal education, were male, and had better global cognitive function and episodic memory performance at baseline demonstrated greater levels of SDM on average. However, only age at baseline uniquely predicted longitudinal changes in SDM, such that declines accelerated with greater age. Independent of age, within-person decline in reasoning ability over the 10-year study period was substantially correlated with decline in SDM (r = .87). An analogous association with SDM did not hold for verbal ability. The findings suggest that longitudinal declines in fluid cognition are associated with reduced spoken language comprehension. Unlike findings from memory for written prose, preserved verbal ability may not protect against developmental declines in memory for speech. PMID:24304364
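
    Parallel process latent growth curve models estimate person-specific intercepts and slopes jointly as latent variables; as a rough, non-latent analogue of the slope correlation reported above, the sketch below fits a separate least-squares slope per person for each outcome and correlates the two sets of slopes. All values are simulated placeholders, not ACTIVE data.

```python
# Crude analogue of a parallel-process slope correlation: per-person OLS slopes
# for two outcomes across four occasions, then a Pearson correlation of slopes.
import numpy as np

rng = np.random.default_rng(0)
n_people = 100
occasions = np.array([0.0, 1.0, 2.0, 3.0])
shared_decline = rng.normal(-0.5, 0.2, size=(n_people, 1))  # simulated person-level slope
memory    = 10 + shared_decline * occasions + rng.normal(0, 0.3, (n_people, 4))
reasoning = 20 + shared_decline * occasions + rng.normal(0, 0.3, (n_people, 4))

def per_person_slopes(scores):
    # least-squares slope of each person's scores on occasion
    return np.array([np.polyfit(occasions, row, 1)[0] for row in scores])

r = np.corrcoef(per_person_slopes(memory), per_person_slopes(reasoning))[0, 1]
print(round(r, 2))  # high, because both outcomes share the same simulated decline
```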

  19. Brain-to-text: decoding spoken phrases from phone representations in the brain.

    PubMed

    Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja

    2015-01-01

    It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.
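
    Word and phone error rates like those reported above are conventionally computed as the Levenshtein (edit) distance between the reference and decoded sequences, normalized by the reference length; the sketch below shows that computation on invented example sentences, not data from the study.

```python
# Word error rate as edit distance between word sequences divided by reference
# length; the same function applied to phone lists gives a phone error rate.
def edit_distance(ref, hyp):
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[-1]

def error_rate(reference, hypothesis):
    return edit_distance(reference, hypothesis) / len(reference)

ref = "the quick brown fox jumps".split()
hyp = "the quick brown box".split()   # one substitution, one deletion
print(error_rate(ref, hyp))           # 2 errors / 5 reference words = 0.4
```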

  20. Psycholinguistic norms for action photographs in French and their relationships with spoken and written latencies.

    PubMed

    Bonin, Patrick; Boyer, Bruno; Méot, Alain; Fayol, Michel; Droit, Sylvie

    2004-02-01

    A set of 142 photographs of actions (taken from Fiez & Tranel, 1997) was standardized in French on name agreement, image agreement, conceptual familiarity, visual complexity, imageability, age of acquisition, and duration of the depicted actions. Objective word frequency measures were provided for the infinitive modal forms of the verbs and for the cumulative frequency of the verbal forms associated with the photographs. Statistics on the variables collected for action items were provided and compared with the statistics on the same variables collected for object items. The relationships between these variables were analyzed, and certain comparisons between the current database and other similar published databases of pictures of actions are reported. Spoken and written naming latencies were also collected for the photographs of actions, and multiple regression analyses revealed that name agreement, image agreement, and age of acquisition are the major determinants of action naming speed. Finally, certain analyses were performed to compare object and action naming times. The norms and the spoken and written naming latencies corresponding to the pictures are available on the Internet (http://www.psy.univ-bpclermont.fr/~pbonin/pbonin-eng.html) and should be of great use to researchers interested in the processing of actions.
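
    To illustrate the kind of multiple regression reported above, the sketch below fits naming latency on name agreement, image agreement, and age of acquisition by ordinary least squares; the five item values are invented placeholders, not the published norms.

```python
# Ordinary least-squares regression of naming latency on three predictors.
import numpy as np

name_agreement  = np.array([95.0, 80.0, 60.0, 88.0, 70.0])   # percent agreement
image_agreement = np.array([4.5, 3.8, 3.1, 4.2, 3.5])        # 1-5 rating
age_of_acq      = np.array([3.0, 5.5, 8.0, 4.0, 6.5])        # rated years
latency_ms      = np.array([850.0, 980.0, 1150.0, 900.0, 1040.0])

X = np.column_stack([np.ones_like(latency_ms), name_agreement,
                     image_agreement, age_of_acq])
coefficients, *_ = np.linalg.lstsq(X, latency_ms, rcond=None)
print(dict(zip(["intercept", "name_agreement", "image_agreement", "aoa"],
               coefficients.round(2))))
```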

  1. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    PubMed

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full-attention condition. Attention manipulation could reduce priming magnitude in both experiments in L2. Moreover, L2 word retrieval increases reaction times and reduces accuracy on the simultaneous secondary task to protect its own accuracy and speed.

  2. Visual scenes trigger immediate syntactic reanalysis: evidence from ERPs during situated spoken comprehension.

    PubMed

    Knoeferle, Pia; Habets, Boukje; Crocker, Matthew W; Münte, Thomas F

    2008-04-01

    A central topic in sentence comprehension research is the kinds of information and mechanisms involved in resolving temporary ambiguity regarding the syntactic structure of a sentence. Gaze patterns in scenes during spoken sentence comprehension have provided strong evidence that visual scenes trigger rapid syntactic reanalysis. However, they have also been interpreted as reflecting nonlinguistic, visual processes. Furthermore, little is known as to whether similar processes of syntactic revision are triggered by linguistic versus scene cues. To better understand how scenes influence comprehension and its time course, we recorded event-related potentials (ERPs) during the comprehension of spoken sentences that relate to depicted events. Prior electrophysiological research has observed a P600 when structural disambiguation toward a noncanonical structure occurred during reading and in the absence of scenes. We observed an ERP component with a similar latency, polarity, and distribution when depicted events disambiguated toward a noncanonical structure. The distributional similarities further suggest that scenes are on a par with linguistic contexts in triggering syntactic revision. Our findings confirm the interpretation of previous eye movement studies and highlight the benefits of combining ERP and eye-tracking measures to ascertain the neuronal processes enabled by, and the locus of attention in, visual contexts.

  3. The missing foundation in teacher education: Knowledge of the structure of spoken and written language.

    PubMed

    Moats, L C

    1994-01-01

    Reading research supports the necessity for directly teaching concepts about linguistic structure to beginning readers and to students with reading and spelling difficulties. In this study, experienced teachers of reading, language arts, and special education were tested to determine if they have the requisite awareness of language elements (e.g., phonemes, morphemes) and of how these elements are represented in writing (e.g., knowledge of sound-symbol correspondences). The results were surprisingly poor, indicating that even motivated and experienced teachers typically understand too little about spoken and written language structure to be able to provide sufficient instruction in these areas. The utility of language structure knowledge for instructional planning, for assessment of student progress, and for remediation of literacy problems is discussed. The teachers participating in the study subsequently took a course focusing on phonemic awareness training, spoken-written language relationships, and careful analysis of spelling and reading behavior in children. At the end of the course, the teachers judged this information to be essential for teaching and advised that it become a prerequisite for certification. Recommendations for requirements and content of teacher education programs are presented.

  4. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    PubMed

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality.
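
    The word-count approach described above amounts to computing, per statement, the proportion of tokens that match positive and negative emotion word lists; the sketch below uses tiny invented lists as stand-ins for the LIWC dictionaries and a hypothetical statement rather than an actual final statement from the corpus.

```python
# Proportion of tokens matching small positive/negative emotion word lists.
# The lists are illustrative stand-ins, not the LIWC dictionaries.
POSITIVE = {"love", "peace", "thank", "hope", "happy", "grateful"}
NEGATIVE = {"hate", "fear", "sorry", "pain", "sad"}

def emotion_proportions(text):
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return pos / len(tokens), neg / len(tokens)

print(emotion_proportions("I love you all and I hope for peace."))  # (0.33..., 0.0)
```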

  5. Brain-to-text: decoding spoken phrases from phone representations in the brain

    PubMed Central

    Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja

    2015-01-01

    It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech. PMID:26124702
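
    The word and phone error rates reported above are the standard edit-distance metrics used in automatic speech recognition. The sketch below shows that computation in isolation; the reference and hypothesis sequences are invented examples, not output from the Brain-To-Text system.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences
    (substitutions, insertions, and deletions all cost 1)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)]

def error_rate(reference, hypothesis):
    """Word (or phone) error rate: edit distance / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)

ref = "the quick brown fox jumps over the lazy dog".split()
hyp = "the quick brown box jumps over lazy dog".split()
print(f"WER = {error_rate(ref, hyp):.2f}")
# 0.22: one substitution plus one deletion over 9 reference words
```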

  6. The influence of talker and foreign-accent variability on spoken word identification

    PubMed Central

    Bent, Tessa; Frush Holt, Rachael

    2013-01-01

    In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect. PMID:23464037

  7. Electrophysiological Correlates of Emotional Content and Volume Level in Spoken Word Processing

    PubMed Central

    Grass, Annika; Bayer, Mareike; Schacht, Annekathrin

    2016-01-01

    For visual stimuli of emotional content as pictures and written words, stimulus size has been shown to increase emotion effects in the early posterior negativity (EPN), a component of event-related potentials (ERPs) indexing attention allocation during visual sensory encoding. In the present study, we addressed the question whether this enhanced relevance of larger (visual) stimuli might generalize to the auditory domain and whether auditory emotion effects are modulated by volume. Therefore, subjects were listening to spoken words with emotional or neutral content, played at two different volume levels, while ERPs were recorded. Negative emotional content led to an increased frontal positivity and parieto-occipital negativity—a scalp distribution similar to the EPN—between ~370 and 530 ms. Importantly, this emotion-related ERP component was not modulated by differences in volume level, which impacted early auditory processing, as reflected in increased amplitudes of the N1 (80–130 ms) and P2 (130–265 ms) components as hypothesized. However, contrary to effects of stimulus size in the visual domain, volume level did not influence later ERP components. These findings indicate modality-specific and functionally independent processing triggered by emotional content of spoken words and volume level. PMID:27458359

  8. The neural basis of inhibitory effects of semantic and phonological neighbors in spoken word production

    PubMed Central

    Mirman, Daniel; Graziano, Kristen M.

    2014-01-01

    Theories of word production and word recognition generally agree that multiple word candidates are activated during processing. The facilitative and inhibitory effects of these “lexical neighbors” have been studied extensively using behavioral methods and have spurred theoretical development in psycholinguistics, but relatively little is known about the neural basis of these effects and how lesions may affect them. The present study used voxel-wise lesion overlap subtraction to examine semantic and phonological neighbor effects in spoken word production following left hemisphere stroke. Increased inhibitory effects of near semantic neighbors were associated with inferior frontal lobe lesions, suggesting impaired selection among strongly activated semantically-related candidates. Increased inhibitory effects of phonological neighbors were associated with posterior superior temporal and inferior parietal lobe lesions. In combination with previous studies, these results suggest that such lesions cause phonological-to-lexical feedback to more strongly activate phonologically-related lexical candidates. The comparison of semantic and phonological neighbor effects and how they are affected by left hemisphere lesions provides new insights into the cognitive dynamics and neural basis of phonological, semantic, and cognitive control processes in spoken word production. PMID:23647518

  9. Task modulation of disyllabic spoken word recognition in Mandarin Chinese: a unimodal ERP study

    PubMed Central

    Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan

    2016-01-01

    Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanism induced by experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at early stage of recognition (~150–250 ms), an enhanced P2 was elicited by the word-initial phonological mismatch in both tasks. In ~300–500 ms, a fronto-central negative component was elicited by word-initial phonological similarities in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both the semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500–700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of the spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements. PMID:27180951
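
    ERP effects of the kind reported here are commonly quantified as mean amplitudes within a latency window at a subset of electrodes, compared across conditions. The sketch below illustrates that generic computation on a random array standing in for segmented EEG; the window, sampling rate, and channel indices are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for segmented EEG: trials x channels x samples,
# epochs from -100 to 800 ms at 500 Hz (450 samples), two conditions.
sfreq, tmin = 500, -0.1
epochs = {"related": rng.normal(size=(40, 32, 450)),
          "unrelated": rng.normal(size=(40, 32, 450))}

def mean_amplitude(data, t_start, t_end, channels):
    """Mean amplitude over trials, channels, and samples in a latency window."""
    s0 = int((t_start - tmin) * sfreq)
    s1 = int((t_end - tmin) * sfreq)
    return data[:, channels, s0:s1].mean()

fronto_central = [3, 4, 5]   # placeholder channel indices
for condition, data in epochs.items():
    amp = mean_amplitude(data, 0.300, 0.500, fronto_central)
    print(condition, round(float(amp), 3))
```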

  10. Positive Emotional Language in the Final Words Spoken Directly Before Execution

    PubMed Central

    Hirschmüller, Sarah; Egloff, Boris

    2016-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one’s own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality. PMID:26793135

  11. Brain Basis of Phonological Awareness for Spoken Language in Children and Its Disruption in Dyslexia

    PubMed Central

    Norton, Elizabeth S.; Christodoulou, Joanna A.; Gaab, Nadine; Lieberman, Daniel A.; Triantafyllou, Christina; Wolf, Maryanne; Whitfield-Gabrieli, Susan; Gabrieli, John D. E.

    2012-01-01

    Phonological awareness, knowledge that speech is composed of syllables and phonemes, is critical for learning to read. Phonological awareness precedes and predicts successful transition from language to literacy, and weakness in phonological awareness is a leading cause of dyslexia, but the brain basis of phonological awareness for spoken language in children is unknown. We used functional magnetic resonance imaging to identify the neural correlates of phonological awareness using an auditory word-rhyming task in children who were typical readers or who had dyslexia (ages 7–13) and a younger group of kindergarteners (ages 5–6). Typically developing children, but not children with dyslexia, recruited left dorsolateral prefrontal cortex (DLPFC) when making explicit phonological judgments. Kindergarteners, who were matched to the older children with dyslexia on standardized tests of phonological awareness, also recruited left DLPFC. Left DLPFC may play a critical role in the development of phonological awareness for spoken language critical for reading and in the etiology of dyslexia. PMID:21693783

  12. Prosodic evaluation of accent distributions in spoken news bulletins of Flemish newsreaders.

    PubMed

    Swerts, Marc; Marsi, Erwin

    2012-10-01

    The current article describes research on whether the goodness of a particular speaking style correlates with the way speakers distribute pitch accents in their speech. Study 1 analyzed two Flemish newsreaders, who, according to poll ratings, had previously been judged to represent a good vs bad speaker. A perception study in which participants had to assess the quality of spoken paragraphs produced by either of the two speakers confirmed that one speaker was rated as significantly and consistently better than the other one. An exploration of the accent distributions in those paragraphs showed that the accent distributions of the better speaker were more similar to the ones of a gold standard, i.e., the accent distributions as predicted by two independent intonation experts. Study 2 compared synthetic versions of a selection of the paragraphs of study 1, generated by a Dutch text-to-speech system. It compared three basically identical versions of the texts, except that they had different accent distributions according to the gold standard, or to distributions as observed in the productions of the two newsreaders. A perception study revealed that the versions of the bad speaker were rated as being significantly worse than the other versions. The two studies thus show that variation in accent distribution can indeed affect the way spoken texts are assessed in terms of their perceived quality.
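
    Comparing a speaker's accent placements against a gold standard can be done with simple per-word agreement or a chance-corrected statistic over binary accented/unaccented labels. The sketch below illustrates that idea with invented label sequences; it is not the scoring procedure used in the study.

```python
def agreement(gold, observed):
    """Proportion of words whose accented/unaccented label matches the gold standard."""
    assert len(gold) == len(observed)
    return sum(g == o for g, o in zip(gold, observed)) / len(gold)

def cohen_kappa(gold, observed):
    """Chance-corrected agreement for two binary label sequences."""
    n = len(gold)
    po = agreement(gold, observed)
    p_gold, p_obs = sum(gold) / n, sum(observed) / n
    pe = p_gold * p_obs + (1 - p_gold) * (1 - p_obs)
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

# 1 = pitch-accented word, 0 = unaccented; all sequences are invented.
gold      = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
speaker_a = [1, 0, 0, 1, 0, 1, 0, 1, 1, 0]   # closer to the gold standard
speaker_b = [1, 1, 0, 0, 0, 0, 1, 1, 1, 0]   # farther from it

for name, seq in [("A", speaker_a), ("B", speaker_b)]:
    print(name, round(agreement(gold, seq), 2), round(cohen_kappa(gold, seq), 2))
# A 0.9 0.8
# B 0.5 0.0
```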

  13. The Activation of Embedded Words in Spoken Word Identification Is Robust but Constrained: Evidence from the Picture-Word Interference Paradigm

    ERIC Educational Resources Information Center

    Bowers, Jeffrey S.; Davis, Colin J.; Mattys, Sven L.; Damian, Markus F.; Hanley, Derek

    2009-01-01

    Three picture-word interference (PWI) experiments assessed the extent to which embedded subset words are activated during the identification of spoken superset words (e.g., "bone" in "trombone"). Participants named aloud pictures (e.g., "brain") while spoken distractors were presented. In the critical condition,…

  14. "We Communicated That Way for a Reason": Language Practices and Language Ideologies among Hearing Adults Whose Parents Are Deaf

    ERIC Educational Resources Information Center

    Pizer, Ginger; Walters, Keith; Meier, Richard P.

    2013-01-01

    Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing…

  15. How Vocabulary Size in Two Languages Relates to Efficiency in Spoken Word Recognition by Young Spanish-English Bilinguals

    ERIC Educational Resources Information Center

    Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda

    2010-01-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n = 26; 2;6). Between-language…

  16. Semantic Richness and Word Learning in Children with Hearing Loss Who Are Developing Spoken Language: A Single Case Design Study

    ERIC Educational Resources Information Center

    Lund, Emily; Douglas, W. Michael; Schuele, C. Melanie

    2015-01-01

    Children with hearing loss who are developing spoken language tend to lag behind children with normal hearing in vocabulary knowledge. Thus, researchers must validate instructional practices that lead to improved vocabulary outcomes for children with hearing loss. The purpose of this study was to investigate how semantic richness of instruction…

  17. N400 brain responses to spoken phrases paired with photographs of scenes: implications for visual scene displays in AAC systems.

    PubMed

    Wilkinson, Krista M; Stutzman, Allyson; Seisler, Andrea

    2015-03-01

    Augmentative and alternative communication (AAC) systems are often implemented for individuals whose speech cannot meet their full communication needs. One type of aided display is called a Visual Scene Display (VSD). VSDs consist of integrated scenes (such as photographs) in which language concepts are embedded. Often, the representations of concepts on VSDs are perceptually similar to their referents. Given this physical resemblance, one may ask how well VSDs support development of symbolic functioning. We used brain imaging techniques to examine whether matches and mismatches between the content of spoken messages and photographic images of scenes evoke neural activity similar to activity that occurs to spoken or written words. Electroencephalography (EEG) was recorded from 15 college students who were shown photographs paired with spoken phrases that were either matched or mismatched to the concepts embedded within each photograph. Of interest was the N400 component, a negative deflecting wave 400 ms post-stimulus that is considered to be an index of semantic functioning. An N400 response in the mismatched condition (but not the matched) would replicate brain responses to traditional linguistic symbols. An N400 was found, exclusively in the mismatched condition, suggesting that mismatches between spoken messages and VSD-type representations set the stage for the N400 in ways similar to traditional linguistic symbols.

  18. Differential Error Types in Second-Language Students' Written and Spoken Texts: Implications for Instruction in Writing

    ERIC Educational Resources Information Center

    Makalela, Leketi

    2004-01-01

    This article reports on an empirical study undertaken at the University of the North, South Africa, to test personal classroom observation and anecdotal evidence about the persistent gap between writing and spoken proficiencies among learners of English as a second language. A comparative and contrastive analysis of speech samples in the study…

  19. Predictors of Early Reading Skill in 5-Year-Old Children with Hearing Loss Who Use Spoken Language

    ERIC Educational Resources Information Center

    Cupples, Linda; Ching, Teresa Y. C.; Crowe, Kathryn; Day, Julia; Seeto, Mark

    2014-01-01

    This research investigated the concurrent association between early reading skills and phonological awareness (PA), print knowledge, language, cognitive, and demographic variables in 101 five-year-old children with prelingual hearing losses ranging from mild to profound who communicated primarily via spoken language. All participants were fitted…

  20. Spoken Language Comprehension of Phrases, Simple and Compound-Active Sentences in Non-Speaking Children with Severe Cerebral Palsy

    ERIC Educational Resources Information Center

    Geytenbeek, Joke J. M.; Heim, Margriet J. M.; Knol, Dirk L.; Vermeulen, R. Jeroen; Oostrom, Kim J.

    2015-01-01

    Background Children with severe cerebral palsy (CP) (i.e. "non-speaking children with severely limited mobility") are restricted in many domains that are important to the acquisition of language. Aims To investigate comprehension of spoken language on sentence type level in non-speaking children with severe CP. Methods & Procedures…

  1. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment

    PubMed Central

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

    Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070

  2. Assessing Multimodal Spoken Word-in-Sentence Recognition in Children with Normal Hearing and Children with Cochlear Implants

    ERIC Educational Resources Information Center

    Holt, Rachael Frush; Kirk, Karen Iler; Hay-McCutcheon, Marcia

    2011-01-01

    Purpose: To examine multimodal spoken word-in-sentence recognition in children. Method: Two experiments were undertaken. In Experiment 1, the youngest age with which the multimodal sentence recognition materials could be used was evaluated. In Experiment 2, lexical difficulty and presentation modality effects were examined, along with test-retest…

  3. Building Language Blocks in L2 Japanese: Chunk Learning and the Development of Complexity and Fluency in Spoken Production

    ERIC Educational Resources Information Center

    Taguchi, Naoko

    2008-01-01

    This pilot study examined the development of complexity and fluency of second language (L2) spoken production among L2 learners who received extensive practice on grammatical chunks as constituent units of discourse. Twenty-two students enrolled in an elementary Japanese course at a U.S. university received classroom instruction on 40 grammatical…

  4. L[subscript 1] and L[subscript 2] Spoken Word Processing: Evidence from Divided Attention Paradigm

    ERIC Educational Resources Information Center

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L[subscript 1]) and second language (L[subscript 2]) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  5. Children's Spoken Word Recognition and Contributions to Phonological Awareness and Nonword Repetition: A 1-Year Follow-Up

    ERIC Educational Resources Information Center

    Metsala, Jamie L.; Stavrinos, Despina; Walley, Amanda C.

    2009-01-01

    This study examined effects of lexical factors on children's spoken word recognition across a 1-year time span, and contributions to phonological awareness and nonword repetition. Across the year, children identified words based on less input on a speech-gating task. For word repetition, older children improved for the most familiar words. There…

  6. Follow-up study investigating the benefits of phonological awareness intervention for children with spoken language impairment.

    PubMed

    Gillon, Gail T

    2002-01-01

    The efficacy of phonological awareness intervention for children at risk for reading disorder has received increasing attention in the literature. This paper reports the follow-up data for participants in the Gillon (2000a) intervention study. The performance of twenty, 5-7-year-old New Zealand children with spoken language impairment, who received phonological awareness intervention, was compared with the progress made by 20 children from a control group and 20 children with typical language development approximately 11 months post-intervention. The children with spoken language impairment all had expressive phonological difficulties and demonstrated delay in early reading development. Treatment effects on strengthening phoneme-grapheme connections in spelling development were also investigated. The results suggested that structured phonological awareness intervention led to sustained growth in phoneme awareness and word-recognition performance. At the follow-up assessment, the majority of the children who received intervention were reading at, or above, the level expected for their age on a measure of word recognition. The phonological awareness intervention also significantly strengthened phoneme-grapheme connections in spelling as evidenced by improved non-word spelling ability. In contrast, the control group of children with spoken language impairment who did not receive phonological awareness intervention showed remarkably little improvement in phoneme awareness over time and the majority remained poor readers. The results highlight the important role speech-language therapists can play in enhancing the early reading and spelling development of children with spoken language impairment.

  7. The Development and Validation of the "Academic Spoken English Strategies Survey (ASESS)" for Non-Native English Speaking Graduate Students

    ERIC Educational Resources Information Center

    Schroeder, Rui M.

    2016-01-01

    This study reports on the three-year development and validation of a new assessment tool--the Academic Spoken English Strategies Survey (ASESS). The questionnaire is the first of its kind to assess the listening and speaking strategy use of non-native English speaking (NNES) graduate students. A combination of sources was used to develop the…

  8. Standardization and Whiteness: One and the Same? A Response to "There Is No Culturally Responsive Teaching Spoken Here"

    ERIC Educational Resources Information Center

    Weilbacher, Gary

    2012-01-01

    The article "There Is No Culturally Responsive Teaching Spoken Here: A Critical Race Perspective" by Cleveland Hayes and Brenda C. Juarez suggests that the current focus on meeting standards incorporates limited thoughtful discussions related to complex notions of diversity. Our response suggests a strong link between standardization and White…

  9. The Sociolinguistics of Variety Identification and Categorisation: Free Classification of Varieties of Spoken English Amongst Non-Linguist Listeners

    ERIC Educational Resources Information Center

    McKenzie, Robert M.

    2015-01-01

    In addition to the examination of non-linguists' evaluations of different speech varieties, in recent years sociolinguists and sociophoneticians have afforded greater attention towards the ways in which naïve listeners perceive, process, and encode spoken language variation, including the identification of language varieties as regionally or…

  10. Are Young Children With Cochlear Implants Sensitive to the Statistics of Words in the Ambient Spoken Language?

    PubMed Central

    McGregor, Karla K.; Spencer, Linda J.

    2015-01-01

    Purpose The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation. Method We analyzed archival data collected from the parents of 36 children who received cochlear implantation (20 unilateral, 16 bilateral) before 24 months of age. The parents reported their children's word productions 12 months after implantation using the MacArthur Communicative Development Inventories: Words and Sentences (Fenson et al., 1993). We computed the number of words, out of 292 possible monosyllabic nouns, verbs, and adjectives, that each child was reported to say and calculated the average phonotactic probability, neighborhood density, and word frequency of the reported words. Results Spoken vocabulary size positively correlated with average phonotactic probability and negatively correlated with average neighborhood density, but only in children with bilateral CIs. Conclusion At 12 months postimplantation, children with bilateral CIs demonstrate sensitivity to statistical characteristics of words in the ambient spoken language akin to that reported for children with normal hearing during the early stages of lexical development. Children with unilateral CIs do not. PMID:25677929
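
    The lexical statistics reported here amount to averaging per-word norm values over each child's reported vocabulary and correlating those averages with vocabulary size. A minimal sketch of that computation follows; the per-word norms and the children's word lists are invented stand-ins, not the study's data or the published norms.

```python
import numpy as np

# Made-up per-word norms standing in for published phonotactic-probability
# and neighborhood-density values (not the actual norms).
LEXICON = {
    "ball": {"phonotactic_prob": 0.052, "neighborhood_density": 22},
    "dog":  {"phonotactic_prob": 0.031, "neighborhood_density": 17},
    "milk": {"phonotactic_prob": 0.044, "neighborhood_density": 9},
    "shoe": {"phonotactic_prob": 0.027, "neighborhood_density": 14},
    "cup":  {"phonotactic_prob": 0.036, "neighborhood_density": 19},
}

# Each child's reported word productions (invented lists).
children = [
    ["ball", "dog"],
    ["ball", "dog", "milk"],
    ["ball", "dog", "milk", "shoe"],
    ["ball", "dog", "milk", "shoe", "cup"],
]

sizes = np.array([len(words) for words in children], dtype=float)

def average(words, key):
    """Average norm value over one child's reported words."""
    return np.mean([LEXICON[w][key] for w in words])

avg_pp = np.array([average(w, "phonotactic_prob") for w in children])
avg_nd = np.array([average(w, "neighborhood_density") for w in children])

print("vocabulary size vs. avg phonotactic probability r =",
      round(np.corrcoef(sizes, avg_pp)[0, 1], 2))
print("vocabulary size vs. avg neighborhood density r =",
      round(np.corrcoef(sizes, avg_nd)[0, 1], 2))
```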

  11. The Interaction of Lexical Semantics and Cohort Competition in Spoken Word Recognition: An fMRI Study

    ERIC Educational Resources Information Center

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A.; Marslen-Wilson, William D.; Tyler, Lorraine K.

    2011-01-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning…

  12. Caracterizacion Lexica del Espanol Hablado en el Noroeste de Indiana (Lexical Characterization of the Spanish Spoken in Northwest Indiana).

    ERIC Educational Resources Information Center

    Mendieta, Eva; Molina, Isabel

    2000-01-01

    Analyzes Spanish lexical data recorded in sociolinguistic interviews with Hispanic community members in Northwest Indiana. Examined how prevalent English is in the spoken Spanish of this community; what variety of Spanish is regarded prestigious; whether lexical forms establish the prestige dialect adopted by speakers of other dialects; the…

  13. A Spoken-Language Intervention for School-Aged Boys With Fragile X Syndrome.

    PubMed

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-05-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived language-support strategies. All sessions were implemented through distance videoteleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data were collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies, and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed.
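
    One common index of the lexical diversity mentioned here is a moving-average type-token ratio computed over a transcript. The sketch below shows that measure on an invented utterance string; it is offered only as an illustration of how such an outcome might be computed, not as the study's analysis.

```python
import re

def mattr(text, window=10):
    """Moving-average type-token ratio: mean TTR over fixed-size token windows."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if len(tokens) < window:
        return len(set(tokens)) / len(tokens) if tokens else 0.0
    ratios = [len(set(tokens[i:i + window])) / window
              for i in range(len(tokens) - window + 1)]
    return sum(ratios) / len(ratios)

# Invented child utterances from a shared-storytelling session.
transcript = ("the frog jumped out of the jar and the boy looked for the frog "
              "then the dog and the boy went outside to find him")
print(round(mattr(transcript, window=10), 2))
```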

  14. The sound of motion in spoken language: visual information conveyed by acoustic properties of speech.

    PubMed

    Shintel, Hadas; Nusbaum, Howard C

    2007-12-01

    Language is generally viewed as conveying information through symbols whose form is arbitrarily related to their meaning. This arbitrary relation is often assumed to also characterize the mental representations underlying language comprehension. We explore the idea that visuo-spatial information can be analogically conveyed through acoustic properties of speech and that such information is integrated into an analog perceptual representation as a natural part of comprehension. Listeners heard sentences describing objects, spoken at varying speaking rates. After each sentence, participants saw a picture of an object and judged whether it had been mentioned in the sentence. Participants were faster to recognize the object when motion implied by speaking rate matched the motion implied by the picture. Results suggest that visuo-spatial referential information can be analogically conveyed and represented.

  15. Designing a spoken dialogue interface to an intelligent cognitive assistant for people with dementia.

    PubMed

    Wolters, Maria Klara; Kelly, Fiona; Kilgour, Jonathan

    2016-12-01

    Intelligent cognitive assistants support people who need help performing everyday tasks by detecting when problems occur and providing tailored and context-sensitive assistance. Spoken dialogue interfaces allow users to interact with intelligent cognitive assistants while focusing on the task at hand. In order to establish requirements for voice interfaces to intelligent cognitive assistants, we conducted three focus groups with people with dementia, carers, and older people without a diagnosis of dementia. Analysis of the focus group data showed that voice and interaction style should be chosen based on the preferences of the user, not those of the carer. For people with dementia, the intelligent cognitive assistant should act like a patient, encouraging guide, while for older people without dementia, assistance should be to the point and not patronising. The intelligent cognitive assistant should be able to adapt to cognitive decline.

  16. The evolutionary history of genes involved in spoken and written language: beyond FOXP2

    PubMed Central

    Mozzi, Alessandra; Forni, Diego; Clerici, Mario; Pozzoli, Uberto; Mascheretti, Sara; Guerini, Franca R.; Riva, Stefania; Bresolin, Nereo; Cagliani, Rachele; Sironi, Manuela

    2016-01-01

    Humans possess a communication system based on spoken and written language. Other animals can learn vocalization by imitation, but this is not equivalent to human language. Many genes were described to be implicated in language impairment (LI) and developmental dyslexia (DD), but their evolutionary history has not been thoroughly analyzed. Herein we analyzed the evolution of ten genes involved in DD and LI. Results show that the evolutionary history of LI genes for mammals and aves was comparable in vocal-learner species and non-learners. For the human lineage, several sites showing evidence of positive selection were identified in KIAA0319 and were already present in Neanderthals and Denisovans, suggesting that any phenotypic change they entailed was shared with archaic hominins. Conversely, in FOXP2, ROBO1, ROBO2, and CNTNAP2 non-coding changes rose to high frequency after the separation from archaic hominins. These variants are promising candidates for association studies in LI and DD. PMID:26912479

  17. The role of visual representations during the lexical access of spoken words

    PubMed Central

    Lewis, Gwyneth; Poeppel, David

    2015-01-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability - concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. PMID:24814579

  18. Response Timing Detection Using Prosodic and Linguistic Information for Human-friendly Spoken Dialog Systems

    NASA Astrophysics Data System (ADS)

    Kitaoka, Norihide; Takeuchi, Masashi; Nishimura, Ryota; Nakagawa, Seiichi

    If a dialog system can respond to the user as naturally as a human, the interaction becomes smoother. The timing of responses such as back-channels and turn-taking plays an important role in such smooth, human-like dialog. We developed a response timing generator for such a dialog system. The generator uses a decision tree to detect appropriate response timing based on prosodic and linguistic features, deciding the system's action every 100 ms during the user's pause. In this paper, we describe a robust spoken dialog system built around this timing generator. Subjective evaluation showed that almost all subjects found the system friendly.
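
    The timing generator described here centers on a decision tree over prosodic and linguistic features evaluated at fixed intervals during a pause. The sketch below illustrates that general idea with scikit-learn; the feature names, training examples, and action labels are invented for illustration and do not reproduce the authors' feature set.

```python
from sklearn.tree import DecisionTreeClassifier

# Invented features per 100 ms frame of user silence:
# [pause_ms, final_f0_slope, final_energy, utterance_is_complete]
X = [
    [100, -0.2, 0.6, 0],   # short pause, flat pitch, utterance ongoing
    [300, -1.5, 0.3, 0],   # falling pitch mid-utterance
    [300, -1.5, 0.3, 1],   # falling pitch, syntactically complete
    [700, -2.0, 0.2, 1],   # long pause after a complete utterance
    [500,  1.2, 0.4, 0],   # rising pitch, clause continues
    [900, -1.8, 0.1, 1],
]
y = ["wait", "backchannel", "backchannel", "take_turn", "wait", "take_turn"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Decide the system's action for the current 100 ms frame.
frame = [[600, -1.7, 0.2, 1]]
print(clf.predict(frame))  # e.g. ['take_turn']
```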

  19. How are pronunciation variants of spoken words recognized? A test of generalization to newly learned words.

    PubMed

    Pitt, Mark A

    2009-07-01

    One account of how pronunciation variants of spoken words (center → "senner" or "sennah") are recognized is that sublexical processes use information about variation in the same phonological environments to recover the intended segments (Gaskell & Marslen-Wilson, 1998). The present study tests the limits of this phonological inference account by examining how listeners process for the first time a pronunciation variant of a newly learned word. Recognition of such a variant should occur as long as it possesses the phonological structure that legitimizes the variation. Experiments 1 and 2 identify a phonological environment that satisfies the conditions necessary for a phonological inference mechanism to be operational. Using a word-learning paradigm, Experiments 3 through 5 show that inference alone is not sufficient for generalization but could facilitate it, and that one condition that leads to generalization is meaningful exposure to the variant in an overheard conversation, demonstrating that lexical processing is necessary for variant recognition.

  20. The slow developmental time course of real-time spoken word recognition.

    PubMed

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J Bruce; McMurray, Bob

    2015-12-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than 16-year-olds; however, both age groups ultimately fixated targets to the same degree. This contrasts with a prior study of adolescents with language impairment (McMurray, Samelson, Lee, & Tomblin, 2010) that showed a different pattern of real-time processes. These findings suggest that the dynamics of word recognition are still developing even at these late ages, and developmental changes may derive from different sources than individual differences in relative language ability.

  1. The evolutionary history of genes involved in spoken and written language: beyond FOXP2.

    PubMed

    Mozzi, Alessandra; Forni, Diego; Clerici, Mario; Pozzoli, Uberto; Mascheretti, Sara; Guerini, Franca R; Riva, Stefania; Bresolin, Nereo; Cagliani, Rachele; Sironi, Manuela

    2016-02-25

    Humans possess a communication system based on spoken and written language. Other animals can learn vocalization by imitation, but this is not equivalent to human language. Many genes were described to be implicated in language impairment (LI) and developmental dyslexia (DD), but their evolutionary history has not been thoroughly analyzed. Herein we analyzed the evolution of ten genes involved in DD and LI. Results show that the evolutionary history of LI genes for mammals and aves was comparable in vocal-learner species and non-learners. For the human lineage, several sites showing evidence of positive selection were identified in KIAA0319 and were already present in Neanderthals and Denisovans, suggesting that any phenotypic change they entailed was shared with archaic hominins. Conversely, in FOXP2, ROBO1, ROBO2, and CNTNAP2 non-coding changes rose to high frequency after the separation from archaic hominins. These variants are promising candidates for association studies in LI and DD.

  2. The slow developmental timecourse of real-time spoken word recognition

    PubMed Central

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental timecourse of spoken word recognition in older children using eye-tracking to assess how the real-time processing dynamics of word recognition change over development. We found that nine-year-olds were slower to activate the target words and showed more early competition from competitor words than 16-year-olds; however, both age groups ultimately fixated targets to the same degree. This contrasts with a prior study of adolescents with language impairment (McMurray et al., 2010), which showed a different pattern of real-time processes. These findings suggest that the dynamics of word recognition are still developing even at these late ages, and differences due to developmental change may derive from different sources than individual differences in relative language ability. PMID:26479544

  3. Phonological Neighborhood Competition Affects Spoken Word Production Irrespective of Sentential Context

    PubMed Central

    Fox, Neal P.; Reilly, Megan; Blumstein, Sheila E.

    2015-01-01

    Two experiments examined the influence of phonologically similar neighbors on articulation of words’ initial stop consonants in order to investigate the conditions under which lexically-conditioned phonetic variation arises. In Experiment 1, participants produced words in isolation. Results showed that the voice-onset time (VOT) of a target’s initial voiceless stop was predicted by its overall neighborhood density, but not by its having a voicing minimal pair. In Experiment 2, participants read aloud the same targets after semantically predictive sentence contexts and after neutral sentence contexts. Results showed that, although VOTs were shorter in words produced after predictive contexts, the neighborhood density effect on VOT production persisted irrespective of context. These findings suggest that global competition from a word’s neighborhood affects spoken word production independently of contextual modulation and support models in which activation cascades automatically and obligatorily among all of a selected target word’s phonological neighbors during acoustic-phonetic encoding. PMID:26124538
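
    The key analysis here, relating voice-onset time to neighborhood density while accounting for sentence context, can be framed as a regression with density and context as predictors. The sketch below simulates that analysis with ordinary least squares on invented measurements; the effect sizes are arbitrary and only illustrate the modeling approach.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Invented data: neighborhood density (number of phonological neighbors),
# context (0 = neutral, 1 = predictive), and voice-onset time in ms.
density = rng.integers(2, 30, size=n).astype(float)
context = rng.integers(0, 2, size=n).astype(float)
# Simulated pattern: denser neighborhoods lengthen VOT, predictive contexts
# shorten it, and the two effects are independent (no interaction).
vot = 60 + 0.6 * density - 5 * context + rng.normal(0, 4, size=n)

# Ordinary least squares: vot ~ intercept + density + context
X = np.column_stack([np.ones(n), density, context])
coef, *_ = np.linalg.lstsq(X, vot, rcond=None)
print(dict(zip(["intercept", "density", "context"], np.round(coef, 2))))
# Expected to recover roughly 60, 0.6, and -5.
```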

  4. Cross-modal metaphorical mapping of spoken emotion words onto vertical space

    PubMed Central

    Montoro, Pedro R.; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando

    2015-01-01

    From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a ‘positive-up/negative-down’ embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis. PMID:26322007

  5. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    PubMed

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success.
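
    The graph-theoretic measures mentioned here, node degree and local efficiency, can be computed from a thresholded connectivity matrix. The sketch below illustrates this with networkx on random time series standing in for resting-state data; the threshold and region indices are arbitrary placeholders.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

# Stand-in for ROI time series: 120 time points x 10 regions.
ts = rng.normal(size=(120, 10))
corr = np.corrcoef(ts, rowvar=False)          # ROI-by-ROI correlation matrix

# Threshold (arbitrary here) to obtain a binary, undirected graph.
adj = (np.abs(corr) > 0.1) & ~np.eye(corr.shape[0], dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

roi = 0  # placeholder index standing in for a region of interest

# Degree: number of regions this ROI is connected to above threshold.
print("degree:", G.degree[roi])

# Local efficiency of one node: global efficiency of the subgraph
# induced by its neighbors (how well they communicate without it).
neighbors = list(G[roi])
print("local efficiency:", nx.global_efficiency(G.subgraph(neighbors)))
```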

  6. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning

    PubMed Central

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.

    2016-01-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success

  7. Characterization of relaxin radioimmunoassay using Bolton-Hunter reagent. First results in plasma during pregnancy, and in placenta, corpora lutea and ovarian cysts in woman.

    PubMed

    Loumaye, E; Teuwissen, B; Thomas, K

    1978-01-01

    Application of a porcine relaxin radioimmunoassay system using Bolton and Hunter reagent was characterized in different biological fluids. In pregnant woman, plasma reveals concentrations from undetectable values to 1,800 pg/ml. Maximal concentrations are found between the 8th and the 13th weeks of pregnancy. These peak values are significantly different from those at other stages of pregnancy. Appreciable relaxin concentrations are detected in gestational corpus luteum extract, and in corpora lutea cyst fluid of pregnant and nonpregnant women. No relaxin was detected in a cyclus corpus luteum extract, in a placental extract, or in an ovarian serous cyst. The presence of immunoreactive relaxin in the nonpregnant woman is reported.

  8. Narratives of Iraqi Adult Learners: Experiences of Spoken Register in English for Academic Purposes Programs at an Australian University

    ERIC Educational Resources Information Center

    Al Hamdany, Hayder; Picard, Michelle

    2015-01-01

    This paper explores the perceptions of Iraqi students of three different English Programs (a general English for academic purposes program, a pre-enrolment English program and the English component of a disciplinary bridging program) at an Australian University as reflected in their language learning narratives. It focuses specifically on the…

  9. Pronunciation Lessons for Teachers of Classes of Adults of Mainly South East Asian Origin at Near-Beginning to Intermediate Levels of English as a Second Language.

    ERIC Educational Resources Information Center

    Eriksen, Tove Anne

    The lessons are intended for teenage and adult students. Focus is on placement of the tongue, jaw, and lips, any movements involved, and whether the sound is whispered (voiceless) or spoken (voiced). Consonants are taught in pairs so students realize the distinctions necessary to avoid misunderstandings. Lessons include (1) final consonants, (2)…

  10. How do adults use repetition? A comparison of conversations with young children and with multiply-handicapped adolescents.

    PubMed

    Bocéréan, Christine; Canut, Emmanuelle; Musiol, Michel

    2012-04-01

    The aim of this research is to compare the types and functions of repetitions in two different corpora, one consisting of verbal interactions between adults and multiply-handicapped adolescents, the other of interactions between adults and young children of the same mental age as the adolescents. Our overall aim is to observe whether the communicative (linguistic and pragmatic) behaviour of adults varies according to the interlocutor and, if it does vary, in what ways. The main results show that adults use repetition with different aims depending on the interlocutor. When interacting with a child, repetitions form part of a strategy of linguistic 'tutoring' that allows the child to take on board progressively more complex linguistic constructions; repetition also enriches the exchanges from a pragmatic point of view. On the other hand, when adults communicate with multiply-handicapped adolescents, their main aim is to maintain the dialogue.

  11. Acquisition of spoken and signed English by hearing-impaired children of hearing-impaired or hearing parents.

    PubMed

    Geers, A E; Schick, B

    1988-05-01

    This study examines the degree to which hearing-impaired children of hearing-impaired parents (HIP) demonstrate an advantage in their acquisition of signed and spoken English over hearing-impaired children of hearing parents (HP). A subset from the normative sample of the Grammatical Analysis of Elicited Language, 50 HIP children and 50 HP children, were matched in terms of their educational program, hearing level, and age. Results indicate that both groups had comparably poor expressive English language ability at 5 and 6 years of age. However, at age 7 and 8 HIP children demonstrated a significant linguistic advantage in both their spoken and signed English over HP children. Because the production of English by HIP children closely resembled that of orally educated hearing-impaired children of hearing parents, consistent language stimulation throughout the child's early years may be a critical factor in the development of English, regardless of the language or mode of expression.

  12. A critique of Mark D. Allen's "the preservation of verb subcategory knowledge in a spoken language comprehension deficit".

    PubMed

    Kemmerer, David

    2008-07-01

    Allen [Allen, M. (2005). The preservation of verb subcategory knowledge in a spoken language comprehension deficit. Brain and Language, 95, 255-264.] reports a single patient, WBN, who, during spoken language comprehension, is still able to access some of the syntactic properties of verbs despite being unable to access some of their semantic properties. Allen claims that these findings challenge linguistic theories which assume that much of the syntactic behavior of verbs can be predicted from their meanings. I argue, however, that this conclusion is not supported by the data for two reasons: first, Allen focuses on aspects of verb syntax that are not claimed to be influenced by verb semantics; and second, he ignores aspects of verb syntax that are claimed to be influenced by verb semantics.

  13. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    PubMed Central

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  14. Encoding lexical tones in jTRACE: a simulation of monosyllabic spoken word recognition in Mandarin Chinese.

    PubMed

    Shuai, Lan; Malins, Jeffrey G

    2017-02-01

    Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we borrowed on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then subsequently simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.

  15. Children with reading difficulties show differences in brain regions associated with orthographic processing during spoken language processing.

    PubMed

    Desroches, Amy S; Cone, Nadia E; Bolger, Donald J; Bitan, Tali; Burman, Douglas D; Booth, James R

    2010-10-14

    We explored the neural basis of spoken language deficits in children with reading difficulty, specifically focusing on the role of orthography during spoken language processing. We used functional magnetic resonance imaging (fMRI) to examine differences in brain activation between children with reading difficulties (aged 9-to-15 years) and age-matched children with typical achievement during an auditory rhyming task. Both groups showed activation in bilateral superior temporal gyri (BA 42 and 22), a region associated with phonological processing, with no significant between-group differences. Interestingly, typically achieving children, but not children with reading difficulties, showed activation of left fusiform cortex (BA 37), a region implicated in orthographic processing. Furthermore, this activation was significantly greater for typically achieving children compared to those with reading difficulties. These findings suggest that typical children automatically activate orthographic representations during spoken language processing, while those with reading difficulties do not. Follow-up analyses revealed that the intensity of the activation in the fusiform gyrus was associated with significantly stronger behavioral conflict effects in typically achieving children only (i.e., longer latencies to rhyming pairs with orthographically dissimilar endings than to those with identical orthographic endings; jazz-has vs. cat-hat). Finally, for reading disabled children, a positive correlation between left fusiform activation and nonword reading was observed, such that greater access to orthography was related to decoding ability. Taken together, the results suggest that the integration of orthographic and phonological processing is directly related to reading ability.

  16. Fronto-temporal connectivity is preserved during sung but not spoken word listening, across the autism spectrum.

    PubMed

    Sharda, Megha; Midha, Rashi; Malik, Supriya; Mukerji, Shaneel; Singh, Nandini C

    2015-04-01

    Co-occurrence of preserved musical function with language and socio-communicative impairments is a common but understudied feature of Autism Spectrum Disorders (ASD). Given the significant overlap in neural organization of these processes, investigating brain mechanisms underlying speech and music may not only help dissociate the nature of these auditory processes in ASD but also provide a neurobiological basis for development of interventions. Using a passive-listening functional magnetic resonance imaging paradigm with spoken words, sung words and piano tones, we found that 22 children with ASD, with varying levels of functioning, activated bilateral temporal brain networks during sung-word perception, similarly to an age and gender-matched control group. In contrast, spoken-word perception was right-lateralized in ASD and elicited reduced inferior frontal gyrus (IFG) activity which varied as a function of language ability. Diffusion tensor imaging analysis reflected reduced integrity of the left hemisphere fronto-temporal tract in the ASD group and further showed that the hypoactivation in IFG was predicted by integrity of this tract. Subsequent psychophysiological interactions revealed that functional fronto-temporal connectivity, disrupted during spoken-word perception, was preserved during sung-word listening in ASD, suggesting alternate mechanisms of speech and music processing in ASD. Our results thus demonstrate the ability of song to overcome the structural deficit for speech across the autism spectrum and provide a mechanistic basis for efficacy of song-based interventions in ASD.

  17. Deaf Children With Cochlear Implants Do Not Appear to Use Sentence Context to Help Recognize Spoken Words

    PubMed Central

    Conway, Christopher M.; Deocampo, Joanne A.; Walk, Anne M.; Anaya, Esperanza M.; Pisoni, David B.

    2015-01-01

    Purpose: The authors investigated the ability of deaf children with cochlear implants (CIs) to use sentence context to facilitate the perception of spoken words. Method: Deaf children with CIs (n = 24) and an age-matched group of children with normal hearing (n = 31) were presented with lexically controlled sentences and were asked to repeat each sentence in its entirety. Performance was analyzed at each of 3 word positions of each sentence (first, second, and third key word). Results: Whereas the children with normal hearing showed robust effects of contextual facilitation—improved speech perception for the final words in a sentence—the deaf children with CIs on average showed no such facilitation. Regression analyses indicated that for the deaf children with CIs, Forward Digit Span scores significantly predicted accuracy scores for all 3 positions, whereas performance on the Stroop Color and Word Test, Children’s Version (Golden, Freshwater, & Golden, 2003) predicted how much contextual facilitation was observed at the final word. Conclusions: The pattern of results suggests that some deaf children with CIs do not use sentence context to improve spoken word recognition. The inability to use sentence context may be due to possible interactions between language experience and cognitive factors that affect the ability to successfully integrate temporal–sequential information in spoken language. PMID:25029170

  18. Phoneme-free prosodic representations are involved in pre-lexical and lexical neurobiological mechanisms underlying spoken word processing

    PubMed Central

    Schild, Ulrike; Becker, Angelika B.C.; Friedrich, Claudia K.

    2014-01-01

    Recently we reported that spoken stressed and unstressed primes differently modulate Event Related Potentials (ERPs) of spoken initially stressed targets. ERP stress priming was independent of prime–target phoneme overlap. Here we test whether phoneme-free ERP stress priming involves the lexicon. We used German target words with the same onset phonemes but different onset stress, such as MANdel (“almond”) and manDAT (“mandate”; capital letters indicate stress). First syllables of those words served as primes. We orthogonally varied prime–target overlap in stress and phonemes. ERP stress priming interacted neither with phoneme priming nor with the stress pattern of the targets. However, the polarity of ERP stress priming was reversed relative to that previously obtained. The present results are evidence for phoneme-free prosodic processing at the lexical level. Together with the previous results they reveal that phoneme-free prosodic representations at the pre-lexical and lexical level are recruited by neurobiological spoken word recognition. PMID:25128904

  19. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    PubMed

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries, natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language.

  20. Automatic speech recognizer based on the Spanish spoken in Valdivia, Chile

    NASA Astrophysics Data System (ADS)

    Sanchez, Maria L.; Poblete, Victor H.; Sommerhoff, Jorge

    2004-05-01

    The performance of an automatic speech recognizer is affected by the training process (speaker-dependent or speaker-independent) and by the size of the vocabulary. The language used in this study was the Spanish spoken in the city of Valdivia, Chile. A representative sample of 14 students and six professionals, all natives of Valdivia (ten women and ten men) aged between 20 and 30 years, took part in the study. Two systems were programmed following the classical pipeline of digitization, end-point detection, linear predictive coding, cepstral coefficients, dynamic time warping, and a final decision stage, each preceded by a training step: (i) a speaker-dependent system (15 words: five colors and ten numbers) and (ii) a speaker-independent system (30 words: ten verbs, ten nouns, and ten adjectives). A simple didactic application, with options to choose colors, numbers, and drawings of the verbs, nouns, and adjectives, was designed for use on a personal computer. In both programs, the tests showed a tendency toward errors in short monosyllabic words such as "flor" and "sol"; the best results were obtained for three-syllable words such as "disparar" and "mojado." [Work supported by Proyecto DID UACh N S-200278.]
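
    The classical template-matching pipeline listed above can be illustrated with a minimal dynamic-time-warping matcher. The Python sketch below assumes that cepstral feature sequences have already been extracted for the stored templates and the test utterance; the vocabulary items and feature values are invented, not taken from the study.

        import numpy as np

        def dtw_distance(seq_a, seq_b):
            """Dynamic time warping distance between two feature sequences,
            each an (n_frames, n_coeffs) array of cepstral coefficients."""
            n, m = len(seq_a), len(seq_b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # local frame distance
                    cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                         cost[i, j - 1],      # deletion
                                         cost[i - 1, j - 1])  # match
            return cost[n, m]

        def recognize(utterance, templates):
            """Return the vocabulary word whose training template is closest under DTW."""
            return min(templates, key=lambda w: dtw_distance(utterance, templates[w]))

        # Hypothetical example with random 13-dimensional cepstral frames.
        rng = np.random.default_rng(0)
        templates = {"flor": rng.normal(size=(20, 13)), "disparar": rng.normal(size=(45, 13))}
        test = templates["disparar"] + rng.normal(scale=0.1, size=(45, 13))
        print(recognize(test, templates))   # expected: "disparar"

    Because the warping path stretches or compresses the time axis, this kind of matcher tolerates differences in speaking rate between the test utterance and the stored templates, which is why it suited small-vocabulary isolated-word systems of this type.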

  1. Feature Statistics Modulate the Activation of Meaning During Spoken Word Processing.

    PubMed

    Devereux, Barry J; Taylor, Kirsten I; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K

    2016-03-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed, feature-based conceptual account assumes that the statistical characteristics of concepts' features, namely the number of concepts in which they occur (distinctiveness/sharedness) and their likelihood of co-occurrence (correlational strength), determine conceptual activation. To test these claims, we investigated the role of distinctiveness/sharedness and correlational strength in speech-to-meaning mapping, using a lexical decision task and computational simulations. Responses were faster for concepts with higher sharedness, suggesting that shared features are facilitatory in tasks like lexical decision that require access to them. Correlational strength facilitated responses for slower participants, suggesting a time-sensitive co-occurrence-driven settling mechanism. The computational simulation showed similar effects, with early effects of shared features and later effects of correlational strength. These results support a general-to-specific account of conceptual processing, whereby early activation of shared features is followed by the gradual emergence of a specific target representation.
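
    To make the two feature statistics concrete, the Python sketch below computes sharedness as the number of concepts in which a feature occurs and correlational strength as the mean pairwise correlation among a concept's features; the small binary concept-by-feature matrix is fabricated for illustration and is not the property-norm data used in the study.

        import numpy as np

        # Rows = concepts, columns = semantic features (1 = feature applies).
        # Toy data for illustration only; the study used large property norms.
        features = ["has_legs", "barks", "purrs", "is_animal"]
        concepts = {"dog": [1, 1, 0, 1], "cat": [1, 0, 1, 1],
                    "fish": [0, 0, 0, 1], "chair": [1, 0, 0, 0]}
        M = np.array(list(concepts.values()), dtype=float)

        # Sharedness: how many concepts each feature occurs in (higher = more shared).
        sharedness = M.sum(axis=0)

        # Correlational strength of a concept: mean correlation between the columns
        # (features) that the concept possesses, computed over the whole matrix.
        corr = np.corrcoef(M, rowvar=False)

        def correlational_strength(concept_vector):
            idx = np.flatnonzero(concept_vector)
            if len(idx) < 2:
                return 0.0
            sub = corr[np.ix_(idx, idx)]
            off_diag = sub[~np.eye(len(idx), dtype=bool)]
            return float(np.mean(off_diag))

        print(dict(zip(features, sharedness)))
        print({c: round(correlational_strength(np.array(v)), 2) for c, v in concepts.items()})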

  2. Repetition and comprehension of spoken sentences by reading-disabled children.

    PubMed

    Shankweiler, D; Smith, S T; Mann, V A

    1984-11-01

    The language problems of reading-disabled elementary school children are not confined to written language alone. These children often exhibit problems of ordered recall of verbal materials that are equally severe whether the materials are presented in printed or in spoken form. Sentences that pose problems of pronoun reference might be expected to place a special burden on short-term memory because close grammatical relationships obtain between words that are distant from one another. With this logic in mind, third-grade children with specific reading disability and classmates matched for age and IQ were tested on five sentence types, each of which poses a problem in assigning pronoun reference. On one occasion the children were tested for comprehension of the sentences by a forced-choice picture verification task. On a later occasion they received the same sentences as a repetition test. Good and poor readers differed significantly in immediate recall of the reflexive sentences, but not in comprehension of them as assessed by picture choice. It was suggested that the pictures provided cues which lightened the memory load, a possibility that could explain why the poor readers were not demonstrably inferior in comprehension of the sentences even though they made significantly more errors than the good readers in recalling them.

  3. Misperceptions of spoken words: Data from a random sample of American English words

    PubMed Central

    Felty, Robert Albert; Buchwald, Adam; Gruenenfelder, Thomas M.; Pisoni, David B.

    2013-01-01

    This study reports a detailed analysis of incorrect responses from an open-set spoken word recognition experiment of 1428 words designed to be a random sample of the entire American English lexicon. The stimuli were presented in six-talker babble to 192 young, normal-hearing listeners at three signal-to-noise ratios (0, +5, and +10 dB). The results revealed several patterns: (1) errors tended to have a higher frequency of occurrence than did the corresponding target word, and frequency of occurrence of error responses was significantly correlated with target frequency of occurrence; (2) incorrect responses were close to the target words in terms of number of phonemes and syllables but had a mean edit distance of 3; (3) for syllables, substitutions were much more frequent than either deletions or additions; for phonemes, deletions were slightly more frequent than substitutions; both were more frequent than additions; and (4) for errors involving just a single segment, substitutions were more frequent than either deletions or additions. The raw data are being made available to other researchers as supplementary material to form the beginnings of a database of speech errors collected under controlled laboratory conditions. PMID:23862832
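
    The edit-distance measure reported above is conventionally computed as a Levenshtein distance over phoneme sequences. The Python sketch below illustrates that computation; the ARPAbet-style transcriptions are invented examples, not items from the experiment.

        def phoneme_edit_distance(target, response):
            """Levenshtein distance between two phoneme sequences (lists of strings)."""
            n, m = len(target), len(response)
            d = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(n + 1):
                d[i][0] = i                      # deletions
            for j in range(m + 1):
                d[0][j] = j                      # additions
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    sub_cost = 0 if target[i - 1] == response[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,             # deletion
                                  d[i][j - 1] + 1,             # addition
                                  d[i - 1][j - 1] + sub_cost)  # substitution / match
            return d[n][m]

        # Hypothetical error: target "cat" /K AE T/ heard as "hat" /HH AE T/.
        print(phoneme_edit_distance(["K", "AE", "T"], ["HH", "AE", "T"]))   # 1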

  4. Does segmental overlap help or hurt? Evidence from blocked cyclic naming in spoken and written production.

    PubMed

    Breining, Bonnie; Nozari, Nazbanou; Rapp, Brenda

    2016-04-01

    Past research has demonstrated interference effects when words are named in the context of multiple items that share a meaning. This interference has been explained within various incremental learning accounts of word production, which propose that each attempt at mapping semantic features to lexical items induces slight but persistent changes that result in cumulative interference. We examined whether similar interference-generating mechanisms operate during the mapping of lexical items to segments by examining the production of words in the context of others that share segments. Previous research has shown that initial-segment overlap amongst a set of target words produces facilitation, not interference. However, this initial-segment facilitation is likely due to strategic preparation, an external factor that may mask underlying interference. In the present study, we applied a novel manipulation in which the segmental overlap across target items was distributed unpredictably across word positions, in order to reduce strategic response preparation. This manipulation led to interference in both spoken (Exp. 1) and written (Exp. 2) production. We suggest that these findings are consistent with a competitive learning mechanism that applies across stages and modalities of word production.

  5. Spoken language development in oral preschool children with permanent childhood deafness.

    PubMed

    Sarant, Julia Z; Holt, Colleen M; Dowell, Richard C; Rickards, Field W; Blamey, Peter J

    2009-01-01

    This article documented spoken language outcomes for preschool children with hearing loss and examined the relationships between language abilities and characteristics of children such as degree of hearing loss, cognitive abilities, age at entry to early intervention, and parent involvement in children's intervention programs. Participants were evaluated using a combination of the Child Development Inventory, the Peabody Picture Vocabulary Test, and the Preschool Clinical Evaluation of Language Fundamentals depending on their age at the time of assessment. Maternal education, cognitive ability, and family involvement were also measured. Over half of the children who participated in this study had poor language outcomes overall. No significant differences were found in language outcomes on any of the measures for children who were diagnosed early and those diagnosed later. Multiple regression analyses showed that family participation, degree of hearing loss, and cognitive ability significantly predicted language outcomes and together accounted for almost 60% of the variance in scores. This article highlights the importance of family participation in intervention programs to enable children to achieve optimal language outcomes. Further work may clarify the effects of early diagnosis on language outcomes for preschool children.

  6. Spared and impaired spoken discourse processing in schizophrenia: effects of local and global language context.

    PubMed

    Swaab, Tamara Y; Boudewyn, Megan A; Long, Debra L; Luck, Steve J; Kring, Ann M; Ragland, J Daniel; Ranganath, Charan; Lesh, Tyler; Niendam, Tara; Solomon, Marjorie; Mangun, George R; Carter, Cameron S

    2013-09-25

    Individuals with schizophrenia are impaired in a broad range of cognitive functions, including impairments in the controlled maintenance of context-relevant information. In this study, we used ERPs in human subjects to examine whether impairments in the controlled maintenance of spoken discourse context in schizophrenia lead to overreliance on local associations among the meanings of individual words. Healthy controls (n = 22) and patients (n = 22) listened to short stories in which we manipulated global discourse congruence and local priming. The target word in the last sentence of each story was globally congruent or incongruent and locally associated or unassociated. ERP local association effects did not significantly differ between control participants and schizophrenia patients. However, in contrast to controls, patients only showed effects of discourse congruence when targets were primed by a word in the local context. When patients had to use discourse context in the absence of local priming, they showed impaired brain responses to the target. Our findings indicate that schizophrenia patients are impaired during discourse comprehension when demands on controlled maintenance of context are high. We further found that ERP measures of increased reliance on local priming predicted reduced social functioning, suggesting that alterations in the neural mechanisms underlying discourse comprehension have functional consequences in the illness.

  7. Evaluation of the Spoken Knowledge in Low Literacy in Diabetes Scale for Use With Mexican Americans

    PubMed Central

    Garcia, Alexandra A.; Zuniga, Julie; Reynolds, Raquel; Cairampoma, Laura; Sumlin, Lisa

    2016-01-01

    Purpose: This article evaluates the Spoken Knowledge in Low Literacy in Diabetes (SKILLD) questionnaire, a measure of essential knowledge for type 2 diabetes self-management, after it was modified for English- and Spanish-speaking Mexican Americans. Method: We collected surveys (SKILLD, demographic, acculturation) and blood for A1C analysis from 72 community-recruited participants to analyze the SKILLD’s internal consistency, interrater reliability, item analysis, and construct validity. Clinical experts evaluated content validity. Results: The SKILLD demonstrated low internal consistency but high interrater reliability and content and construct validity. There were significant correlations in expected directions between SKILLD scores and acculturation, education, and A1C and significant differences in SKILLD scores between and within groups after an educational intervention and between high- and low-acculturated participants. Conclusion/Implications: The SKILLD generates useful information about Mexican Americans’ diabetes knowledge. Lower SKILLD scores suggest less diabetes knowledge, lower health literacy, and participants’ difficulties understanding items. Further modifications should improve use with low-acculturated Mexican Americans. PMID:24692338
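
    The internal-consistency analysis mentioned above is conventionally summarized with Cronbach's alpha. The Python sketch below shows the standard computation for dichotomously scored items; the response matrix is fabricated and does not reflect the SKILLD data.

        import numpy as np

        def cronbach_alpha(scores):
            """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_var = scores.var(axis=0, ddof=1).sum()      # sum of item variances
            total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
            return (k / (k - 1)) * (1.0 - item_var / total_var)

        # Fabricated 0/1 responses from 6 respondents to 4 knowledge items.
        responses = [[1, 1, 0, 1],
                     [1, 0, 0, 1],
                     [0, 0, 0, 0],
                     [1, 1, 1, 1],
                     [0, 1, 0, 0],
                     [1, 1, 1, 1]]
        print(round(cronbach_alpha(responses), 2))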

  8. Phonological and syntactic competition effects in spoken word recognition: evidence from corpus-based statistics

    PubMed Central

    Zhuang, Jie; Devereux, Barry J.

    2017-01-01

    As spoken language unfolds over time the speech input transiently activates multiple candidates at different levels of the system – phonological, lexical, and syntactic – which in turn leads to short-lived between-candidate competition. In an fMRI study, we investigated how different kinds of linguistic competition may be modulated by the presence or absence of a prior context (Tyler 1984; Tyler et al. 2008). We found significant effects of lexico-phonological competition for isolated words, but not for words in short phrases, with high competition yielding greater activation in left inferior frontal gyrus (LIFG) and posterior temporal regions. This suggests that phrasal contexts reduce lexico-phonological competition by eliminating form-class inconsistent cohort candidates. A corpus-derived measure of lexico-syntactic competition was associated with greater activation in LIFG for verbs in phrases, but not for isolated verbs, indicating that lexico-syntactic information is boosted by the phrasal context. Together, these findings indicate that LIFG plays a general role in resolving different kinds of linguistic competition. PMID:28164141

  9. Iconicity as a General Property of Language: Evidence from Spoken and Signed Languages

    PubMed Central

    Perniss, Pamela; Thompson, Robin L.; Vigliocco, Gabriella

    2010-01-01

    Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings found in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to “hook up” to motor, perceptual, and affective experience. PMID:21833282

  10. Intelligibility of American English vowels and consonants spoken by international students in the United States.

    PubMed

    Jin, Su-Hyun; Liu, Chang

    2014-04-01

    PURPOSE: The purpose of this study was to examine the intelligibility of English consonants and vowels produced by Chinese-native (CN) and Korean-native (KN) students enrolled in American universities. METHOD: Sixteen English-native (EN), 32 CN, and 32 KN speakers participated in this study. The intelligibility of 16 American English consonants and 16 vowels spoken by native and nonnative speakers of English was evaluated by EN listeners. All nonnative speakers also completed a survey of their language backgrounds. RESULTS: Although the intelligibility of consonants and diphthongs for nonnative speakers was comparable to that of native speakers, the intelligibility of monophthongs was significantly lower for CN and KN speakers than for EN speakers. Sociolinguistic factors such as the age of arrival in the United States and daily use of English, as well as a linguistic factor, the difference in vowel space between the native (L1) and nonnative (L2) languages, partially contributed to vowel intelligibility for the CN and KN groups. There was no significant correlation between the length of U.S. residency and phoneme intelligibility. CONCLUSION: Results indicated that the major difficulty in phonemic production in English for Chinese and Korean speakers is with vowels rather than consonants. This might be useful for developing training methods to improve English intelligibility for foreign students in the United States.

  11. On the nature of sonority in spoken word production: evidence from neuropsychology.

    PubMed

    Miozzo, Michele; Buchwald, Adam

    2013-09-01

    The concept of sonority - that speech sounds can be placed along a universal sonority scale that affects syllable structure - has proved valuable in accounting for a wide spectrum of linguistic phenomena and psycholinguistic findings. Yet, despite the success of this concept in specifying principles governing sound structure, several questions remain about sonority. One issue that needs clarification concerns its locus in the processes involved in spoken language production, and specifically whether sonority affects the computation of abstract word form representations (phonology), the encoding of context-specific features (phonetics), or both of these processes. This issue was examined in the present study, which investigated two brain-damaged individuals with impairment arising primarily from deficits affecting phonological and phonetic processes, respectively. Clear effects of sonority on production accuracy were observed in both individuals, for both word onsets and codas. These findings indicate that the underlying principles governing sound structure that are captured by the notion of sonority play a role at both phonological and phonetic levels of processing. Furthermore, aspects of the errors recorded from our participants revealed features of syllabic structure proposed under current phonological theories (e.g., articulatory phonology).

  12. Phonological and syntactic competition effects in spoken word recognition: evidence from corpus-based statistics.

    PubMed

    Zhuang, Jie; Devereux, Barry J

    2017-02-07

    As spoken language unfolds over time the speech input transiently activates multiple candidates at different levels of the system - phonological, lexical, and syntactic - which in turn leads to short-lived between-candidate competition. In an fMRI study, we investigated how different kinds of linguistic competition may be modulated by the presence or absence of a prior context (Tyler 1984; Tyler et al. 2008). We found significant effects of lexico-phonological competition for isolated words, but not for words in short phrases, with high competition yielding greater activation in left inferior frontal gyrus (LIFG) and posterior temporal regions. This suggests that phrasal contexts reduce lexico-phonological competition by eliminating form-class inconsistent cohort candidates. A corpus-derived measure of lexico-syntactic competition was associated with greater activation in LIFG for verbs in phrases, but not for isolated verbs, indicating that lexico-syntactic information is boosted by the phrasal context. Together, these findings indicate that LIFG plays a general role in resolving different kinds of linguistic competition.

  13. Damage to temporo-parietal cortex decreases incidental activation of thematic relations during spoken word comprehension.

    PubMed

    Mirman, Daniel; Graziano, Kristen M

    2012-07-01

    Both taxonomic and thematic semantic relations have been studied extensively in behavioral studies and there is an emerging consensus that the anterior temporal lobe plays a particularly important role in the representation and processing of taxonomic relations, but the neural basis of thematic semantics is less clear. We used eye tracking to examine incidental activation of taxonomic and thematic relations during spoken word comprehension in participants with aphasia. Three groups of participants were tested: neurologically intact control participants (N=14), individuals with aphasia resulting from lesions in left hemisphere BA 39 and surrounding temporo-parietal cortex regions (N=7), and individuals with the same degree of aphasia severity and semantic impairment and anterior left hemisphere lesions (primarily inferior frontal gyrus and anterior temporal lobe) that spared BA 39 (N=6). The posterior lesion group showed reduced and delayed activation of thematic relations, but not taxonomic relations. In contrast, the anterior lesion group exhibited longer-lasting activation of taxonomic relations and did not differ from control participants in terms of activation of thematic relations. These results suggest that taxonomic and thematic semantic knowledge are functionally and neuroanatomically distinct, with the temporo-parietal cortex playing a particularly important role in thematic semantics.

  14. Listening in circles. Spoken drama and the architects of sound, 1750-1830.

    PubMed

    Tkaczyk, Viktoria

    2014-07-01

    The establishment of the discipline of architectural acoustics is generally attributed to the physicist Wallace Clement Sabine, who developed the formula for reverberation time around 1900, and with it the possibility of making calculated prognoses about the acoustic potential of a particular design. If, however, we shift the perspective from the history of this discipline to the history of architectural knowledge and praxis, it becomes apparent that the topos of 'good sound' had already entered the discourse much earlier. This paper traces the Europe-wide discussion on theatre architecture between 1750 and 1830. It will be shown that the period of investigation is marked by an increasing interest in auditorium acoustics, one linked to the emergence of a bourgeois theatre culture and the growing socio-political importance of the spoken word. In the wake of this development the search among architects for new methods of acoustic research started to differ fundamentally from an analogical reasoning on the nature of sound propagation and reflection, which in part dated back to antiquity. Through their attempts to find new ways of visualising the behaviour of sound in enclosed spaces and to rethink both the materiality and the mediality of theatre auditoria, architects helped pave the way for the establishment of architectural acoustics as an academic discipline around 1900.

  15. Attention modulates specificity effects in spoken word recognition: Challenges to the time-course hypothesis

    PubMed Central

    Theodore, Rachel M.; Blumstein, Sheila E.; Luthra, Sahil

    2015-01-01

    Findings in the domain of spoken word recognition indicate that lexical representations contain both abstract and episodic information. It has been proposed that processing time determines when each source of information is recruited, with increased processing time required to access lower-frequency episodic instantiations. The time-course hypothesis of specificity effects thus identifies a strong role for retrieval mechanisms mediating the use of abstract versus episodic information. Here we conducted three recognition memory experiments to examine whether findings previously attributed to retrieval mechanisms might reflect attention during encoding. Results from Experiment 1 showed that talker-specificity effects emerged when subjects attended to individual speakers during encoding, but not when they attended to lexical characteristics during encoding, even though processing time at retrieval was equivalent. Results from Experiment 2 showed that talker-specificity effects emerged when listeners attended to talker gender but not when they attended to syntactic characteristics, even though processing time at retrieval was significantly longer in the latter condition. Results from Experiment 3 showed no talker-specificity effects when attending to lexical characteristics even when processing at retrieval was slowed by the addition of background noise. Collectively, these results suggest that when processing time during retrieval is decoupled from encoding factors, it fails to predict the emergence of talker-specificity effects. Rather, attention during encoding appears to be the putative variable. PMID:25824889

  16. Selection of Optimum Vocabulary and Dialog Strategy for Noise-Robust Spoken Dialog Systems

    NASA Astrophysics Data System (ADS)

    Ito, Akinori; Oba, Takanobu; Konashi, Takashi; Suzuki, Motoyuki; Makino, Shozo

    Speech recognition in a noisy environment is one of the most active topics in speech recognition research. Noise-tolerant acoustic models or noise-reduction techniques are often used to improve recognition accuracy. In this paper, we propose a method for improving the accuracy of a spoken dialog system from a language-model point of view. In the proposed method, the dialog system automatically changes its language model and dialog strategy according to the recognition accuracy estimated for the current noise conditions, in order to keep system performance high. In a noise-free environment, the system accepts any utterance from the user; in a noisy environment, it restricts its grammar and vocabulary. To realize this strategy, we investigated a method for avoiding out-of-grammar utterances by having the system instruct the user, and we developed a method for estimating recognition accuracy from features extracted from the noise signal. Finally, we implemented the proposed dialog system on the basis of these investigations.
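
    The adaptive strategy described above can be sketched as a simple rule-based policy that maps an estimate of the acoustic conditions onto a grammar choice. The Python snippet below uses a naive energy-based SNR estimate and invented thresholds; it is an illustration of the idea, not the accuracy-estimation model over noise features developed in the paper.

        import numpy as np

        def estimate_snr_db(speech_frames, noise_frames):
            """Crude SNR estimate from mean frame energies (illustrative only)."""
            p_speech = np.mean(np.square(speech_frames))
            p_noise = np.mean(np.square(noise_frames)) + 1e-12
            return 10.0 * np.log10(p_speech / p_noise)

        def choose_dialog_strategy(snr_db):
            """Pick a language model / prompt style from the estimated SNR.
            Thresholds are invented for illustration."""
            if snr_db > 20.0:
                return {"grammar": "open", "prompt": "Please say anything."}
            elif snr_db > 10.0:
                return {"grammar": "restricted", "prompt": "Please answer with a short phrase."}
            else:
                return {"grammar": "keywords_only", "prompt": "Please answer yes or no."}

        rng = np.random.default_rng(1)
        noise = 0.05 * rng.normal(size=1600)
        speech = rng.normal(size=1600) + noise
        print(choose_dialog_strategy(estimate_snr_db(speech, noise)))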

  17. Cross-language perception of Cantonese vowels spoken by native and non-native speakers.

    PubMed

    So, Connie K; Attina, Virginie

    2014-10-01

    This study examined the effect of native language background on listeners' perception of native and non-native vowels spoken by native (Hong Kong Cantonese) and non-native (Mandarin and Australian English) speakers. Listeners completed discrimination and identification tasks, with and without visual cues, in clear and noisy conditions. Results indicated that visual cues did not facilitate perception, and performance was better in clear than in noisy conditions. More importantly, the Cantonese talker's vowels were the easiest to discriminate, and the Mandarin talker's vowels were as intelligible as the native talkers' speech. These results supported the interlanguage speech native intelligibility benefit patterns proposed by Hayes-Harb et al. (J Phonetics 36:664-679, 2008). The Mandarin and English listeners' identification patterns were similar to those of the Cantonese listeners, suggesting that they might have assimilated Cantonese vowels to their closest native vowels. In addition, listeners' perceptual patterns were consistent with the principles of Best's Perceptual Assimilation Model (Best in Speech perception and linguistic experience: issues in cross-language research. York Press, Timonium, 1995).

  18. Effects of Age and Hearing Sensitivity on the Use of Prosodic Information in Spoken Word Recognition.

    ERIC Educational Resources Information Center

    Wingfield, Arthur; Lindfield, Kimberly C.; Goodglass, Harold

    2000-01-01

    In this study, younger and older adults heard either just word onsets, word onsets followed by white noise indicating word duration, or word onsets followed by signals indicating word prosody. Older adults required longer stimulus durations for word recognition, with hearing sensitivity being a significant factor. Word recognition was facilitated equally…

  19. Evidence for a Dopamine Intrinsic Direct Role in the Regulation of the Ovary Reproductive Function: In Vitro Study on Rabbit Corpora Lutea

    PubMed Central

    Parillo, Francesco; Maranesi, Margherita; Mignini, Fiorenzo; Marinelli, Lisa; Di Stefano, Antonio; Boiti, Cristiano; Zerani, Massimo

    2014-01-01

    Dopamine (DA) receptor (DR) type 1 (D1R) has been found to be expressed in luteal cells of various species, but the intrinsic role of the DA/DR system in corpora lutea (CL) function is still unclear. Experiments were devised to characterize the expression of DR types and the presence of DA, as well as the in vitro effects of DA on hormone production by CL in pseudopregnant rabbits. Immunoreactivity and gene expression for D1R decreased while that for D3R increased in luteal and blood vessel cells from early to late pseudopregnant stages. DA immunopositivity was detected only in luteal cells. The DA and D1R agonist increased in vitro release of progesterone and prostaglandin E2 (PGE2) by early CL, whereas the DA and D3R agonist decreased progesterone and increased PGF2α in vitro release by mid- and late CL. These results provide evidence that the DA/DR system exerts a dual modulatory function in the lifespan of CL: the DA/D1R is luteotropic while the DA/D3R is luteolytic. The present data shed new light on the physiological mechanisms regulating luteal activity that might improve our ability to optimize reproductive efficiency in mammalian species, including humans. PMID:25148384

  20. Evidence for a dopamine intrinsic direct role in the regulation of the ovary reproductive function: in vitro study on rabbit corpora lutea.

    PubMed

    Parillo, Francesco; Maranesi, Margherita; Mignini, Fiorenzo; Marinelli, Lisa; Di Stefano, Antonio; Boiti, Cristiano; Zerani, Massimo

    2014-01-01

    Dopamine (DA) receptor (DR) type 1 (D1R) has been found to be expressed in luteal cells of various species, but the intrinsic role of the DA/DR system in corpora lutea (CL) function is still unclear. Experiments were devised to characterize the expression of DR types and the presence of DA, as well as the in vitro effects of DA on hormone production by CL in pseudopregnant rabbits. Immunoreactivity and gene expression for D1R decreased while that for D3R increased in luteal and blood vessel cells from early to late pseudopregnant stages. DA immunopositivity was detected only in luteal cells. The DA and D1R agonist increased in vitro release of progesterone and prostaglandin E2 (PGE2) by early CL, whereas the DA and D3R agonist decreased progesterone and increased PGF2α in vitro release by mid- and late CL. These results provide evidence that the DA/DR system exerts a dual modulatory function in the lifespan of CL: the DA/D1R is luteotropic while the DA/D3R is luteolytic. The present data shed new light on the physiological mechanisms regulating luteal activity that might improve our ability to optimize reproductive efficiency in mammalian species, including humans.