Science.gov

Sample records for adult spoken corpora

  1. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    PubMed Central

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we considered all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980
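    The type/token contrast and the co-occurrence test at the heart of this abstract can be sketched in a few lines. The token counts below are invented for illustration; the nine cells are the consonant-place × vowel-place combinations the abstract names. An observed/expected (O/E) ratio above 1 marks a CV class that co-occurs more often than the independent C and V token frequencies would predict:

    ```python
    from collections import Counter

    # Hypothetical CV-syllable tokens from a spoken corpus, labeled by
    # consonant place (labial/coronal/velar) and vowel place
    # (central/front/back). All counts are invented examples.
    tokens = (
        [("labial", "central")] * 40
        + [("labial", "front")] * 20
        + [("coronal", "front")] * 50
        + [("coronal", "central")] * 30
        + [("velar", "back")] * 25
        + [("velar", "front")] * 10
    )

    cv_counts = Counter(tokens)
    c_counts = Counter(c for c, _ in tokens)
    v_counts = Counter(v for _, v in tokens)
    n = len(tokens)

    # Observed/expected ratio per CV class: values > 1 mean the pair
    # co-occurs more often than chance, given the marginal frequencies.
    for (c, v), obs in sorted(cv_counts.items()):
        expected = c_counts[c] * v_counts[v] / n
        print(f"{c}+{v}: O/E = {obs / expected:.2f}")
    ```

    Running the same computation over dictionary entries (each word counted once) rather than corpus tokens is exactly the type-versus-token contrast the authors draw.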

  2. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    ERIC Educational Resources Information Center

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Halle, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and…

  3. The Spoken Word as Arts-Based Adult Education

    ERIC Educational Resources Information Center

    Merriweather, Lisa R.

    2011-01-01

    Arts-based adult education has been embraced by a growing number of adult educators. These educators have explored its potential in the workplace, in the community and in academia. This article contributes to this work by exploring the Spoken Word, an art form located within the genre of poetry, and its potential as a tool of arts-based adult…

  4. JH Biosynthesis by Reproductive Tissues and Corpora Allata in Adult Longhorned Beetles, Apriona germari

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We report on juvenile hormone (JH) biosynthesis from long-chain intermediates by specific reproductive system tissues and the corpora allata (CA) prepared from adult longhorned beetles, Apriona germari. Testes, male accessory glands (MAGs), ovaries and CA contain the long-chain intermediates in the ...

  5. The Effect of Redundant Cues on Comprehension of Spoken Messages by Aphasic Adults.

    ERIC Educational Resources Information Center

    Venus, Carol A.; Canter, Gerald J.

    1987-01-01

    Aphasic adults (N=16) with severe auditory comprehension impairment were evaluated for comprehension of redundant and nonredundant spoken and/or gestured messages. Results indicated redundancy was not reliably superior to spoken messages alone. (Author/DB)

  6. Spoken Oral Language and Adult Struggling Readers

    ERIC Educational Resources Information Center

    Bakhtiari, Dariush; Greenberg, Daphne; Patton-Terry, Nicole; Nightingale, Elena

    2015-01-01

    Oral language is a critical component to the development of reading acquisition. Much of the research concerning the relationship between oral language and reading ability is focused on children, while there is a paucity of research focusing on this relationship for adults who struggle with their reading. Oral language as defined in this paper…

  7. Conveying Information about Adjective Meanings in Spoken Discourse

    ERIC Educational Resources Information Center

    Corrigan, Roberta

    2008-01-01

    This study examined information about adjective meanings available in adults' spoken discourse in the original 27 CHILDES corpora of typically developing English-speaking children. In order to increase the probability that adjectives would be novel to children to whom they were addressed, only "rare" adjectives were examined (those that occurred…

  8. Novel Spoken Word Learning in Adults with Developmental Dyslexia

    ERIC Educational Resources Information Center

    Conner, Peggy S.

    2013-01-01

    A high percentage of individuals with dyslexia struggle to learn unfamiliar spoken words, creating a significant obstacle to foreign language learning after early childhood. The origin of spoken-word learning difficulties in this population, generally thought to be related to the underlying literacy deficit, is not well defined (e.g., Di Betta…

  9. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    PubMed

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. PMID:25820191

  10. A Bruner-Potter effect in audition? Spoken word recognition in adult aging.

    PubMed

    Lash, Amanda; Wingfield, Arthur

    2014-12-01

    Bruner and Potter (1964) demonstrated the surprising finding that incrementally increasing the clarity of images until they were correctly recognized (ascending presentation) was less effective for recognition than presenting images in a single presentation at that same clarity level. This has been attributed to interference from incorrect perceptual hypotheses formed on the initial presentations under ascending conditions. We demonstrate an analogous effect for spoken word recognition in older adults, with the size of the effect predicted by working memory span. This effect did not appear for young adults, whose group spans exceeded that of the older adults. PMID:25244463

  11. Contribution of Implicit Sequence Learning to Spoken Language Processing: Some Preliminary Findings With Hearing Adults

    PubMed Central

    Conway, Christopher M.; Karpicke, Jennifer; Pisoni, David B.

    2013-01-01

    Spoken language consists of a complex, sequentially arrayed signal that contains patterns that can be described in terms of statistical relations among language units. Previous research has suggested that a domain-general ability to learn structured sequential patterns may underlie language acquisition. To test this prediction, we examined the extent to which implicit sequence learning of probabilistically structured patterns in hearing adults is correlated with a spoken sentence perception task under degraded listening conditions. Performance on the sentence perception task was found to be correlated with implicit sequence learning, but only when the sequences were composed of stimuli that were easy to encode verbally. Implicit learning of phonological sequences thus appears to underlie spoken language processing and may indicate a hitherto unexplored cognitive factor that may account for the enormous variability in language outcomes in deaf children with cochlear implants. The present findings highlight the importance of investigating individual differences in specific cognitive abilities as a way to understand and explain language in deaf learners and, in particular, variability in language outcomes following cochlear implantation. PMID:17548805

  12. Relationships among vocabulary size, nonverbal cognition, and spoken word recognition in adults with cochlear implants

    NASA Astrophysics Data System (ADS)

    Collison, Elizabeth A.; Munson, Benjamin; Carney, Arlene E.

    2002-05-01

    Recent research has attempted to identify the factors that predict speech perception performance among users of cochlear implants (CIs). Studies have found that approximately 20%-60% of the variance in speech perception scores can be accounted for by factors including duration of deafness, etiology, type of device, and length of implant use, leaving approximately 50% of the variance unaccounted for. The current study examines the extent to which vocabulary size and nonverbal cognitive ability predict CI listeners' spoken word recognition. Fifteen postlingually deafened adults with Nucleus or Clarion CIs were given standardized assessments of nonverbal cognitive ability and expressive vocabulary size: the Expressive Vocabulary Test, the Test of Nonverbal Intelligence-III, and the Woodcock-Johnson-III Test of Cognitive Ability, Verbal Comprehension subtest. Two spoken word recognition tasks were administered. In the first, listeners identified isophonemic CVC words. In the second, listeners identified gated words varying in lexical frequency and neighborhood density. Analyses will examine the influence of lexical frequency and neighborhood density on the uniqueness point in the gating task, as well as relationships among nonverbal cognitive ability, vocabulary size, and the two spoken word recognition measures. [Work supported by NIH Grant P01 DC00110 and by the Lions 3M Hearing Foundation.]
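    Two of the lexical variables in this abstract have simple operational definitions that a short sketch can make concrete: lexical frequency is a word's token count, and neighborhood density is the number of lexicon words exactly one phoneme away (by substitution, addition, or deletion). The lexicon and counts below are toy examples, not data from the study; words are written as tuples of phoneme symbols.

    ```python
    def one_phoneme_apart(w1, w2):
        """True if w2 differs from w1 by a single substitution, addition,
        or deletion of one phoneme (phonological edit distance 1)."""
        if w1 == w2:
            return False
        if len(w1) == len(w2):
            return sum(a != b for a, b in zip(w1, w2)) == 1
        short, long_ = sorted((w1, w2), key=len)
        if len(long_) - len(short) != 1:
            return False
        # Deleting one phoneme from the longer form must yield the shorter.
        return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

    # Toy lexicon: phoneme tuple -> invented token frequency.
    lexicon = {
        ("k", "ae", "t"): 120,       # cat
        ("b", "ae", "t"): 80,        # bat
        ("k", "ae", "p"): 30,        # cap
        ("k", "ae", "t", "s"): 45,   # cats
        ("d", "ao", "g"): 150,       # dog
    }

    def neighborhood_density(word):
        """Count lexicon entries one phoneme away from `word`."""
        return sum(one_phoneme_apart(word, other) for other in lexicon)

    print(neighborhood_density(("k", "ae", "t")))  # cat: bat, cap, cats -> 3
    ```

    In a gating task, a dense neighborhood like this one tends to push a word's uniqueness point later, since more competitors remain consistent with the unfolding input.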

  13. Brief Report: The Effects of Typed and Spoken Modality Combinations on the Language Performance of Adults with Autism.

    ERIC Educational Resources Information Center

    Forsey, Janice; And Others

    1996-01-01

    A study of five adult males with autism investigated which combination of input/output modalities (typed or spoken) enhanced the syntactic, semantic, and/or pragmatic performance of individuals with autism when engaging in conversations with a normal language adult. Results found that typed communications facilitated the use of longer utterances.…

  14. Constructing the Taiwanese Component of the Louvain International Database of Spoken English Interlanguage (LINDSEI)

    ERIC Educational Resources Information Center

    Huang, Lan-fen

    2014-01-01

    This paper reports the compilation of a corpus of Taiwanese students' spoken English, which is one of the sub-corpora of the Louvain International Database of Spoken English Interlanguage (LINDSEI) (Gilquin, De Cock, & Granger, 2010). LINDSEI is one of the largest corpora of learner speech. The compilation process follows the design criteria…

  15. Online Lexical Competition during Spoken Word Recognition and Word Learning in Children and Adults

    ERIC Educational Resources Information Center

    Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth

    2013-01-01

    Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children…

  16. Online lexical competition during spoken word recognition and word learning in children and adults.

    PubMed

    Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth

    2013-01-01

    Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children (n = 20) and adults (n = 17) were slower to detect pauses in familiar words with later uniqueness points. Faster latencies were obtained for words with late uniqueness points in constraining compared with neutral sentences; no such effect was observed for early unique words. Following exposure to novel competitors ("biscal"), children (n = 18) and adults (n = 18) showed competition for existing words with early uniqueness points ("biscuit") after 24 hr. Thus, online lexical competition effects are remarkably similar across development. PMID:23432734

  17. Decreased Sensitivity to Phonemic Mismatch in Spoken Word Processing in Adult Developmental Dyslexia

    ERIC Educational Resources Information Center

    Janse, Esther; de Bree, Elise; Brouwer, Susanne

    2010-01-01

    Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as "procodile for crocodile") for the atypical population of dyslexic adults to see to what…

  18. Spoken Lebanese.

    ERIC Educational Resources Information Center

    Feghali, Maksoud N.

    This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…

  19. Spoken Dutch.

    ERIC Educational Resources Information Center

    Bloomfield, Leonard

    This course in spoken Dutch is intended for use in introductory conversational classes. The book is divided into five major parts, each containing five learning units and one unit devoted to review. Each unit contains sections including (1) basic sentences, (2) word study and review of basic sentences, (3) listening comprehension, and (4)…

  20. Grammars of Spoken English: New Outcomes of Corpus-Oriented Research.

    ERIC Educational Resources Information Center

    Leech, Geoffrey

    2000-01-01

    Reviews research that has been emerging from the availability of corpora on the grammar of spoken English. Presents arguments for the view that spoken and written language utilize the same basic grammatical repertoire, however different their implementations of it are. (Author/VWL)

  1. Regulation of the corpora allata in male larvae of the cockroach Diploptera punctata

    SciTech Connect

    Paulson, C.R.

    1986-01-01

    The regulation of corpora allata was studied in final instar males of Diploptera punctata. The glands were manipulated in vivo and then removed so that the effects could be determined by an in vitro radiochemical assay of juvenile hormone synthesis. Corpora allata were also treated with putative regulatory factors in vitro. During the final stadium the corpora allata were inhibited both by nerves and by humoral factors. Neural inhibition was shown by an increase in juvenile hormone synthesis following denervation of the corpora allata. This operation elicited an extra larval instar. Humoral inhibition was shown by the decline in juvenile hormone synthesis of adult female corpora allata following transplantation into final instar larval hosts, and conversely the increase in juvenile hormone synthesis by larval corpora allata following implantation into adult females. Humoral inhibition was prevented by decapitation of larvae prior to the head critical period for molting and restored by implantation of a larval brain, showing that the brain is the source of this inhibition.

  2. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults

    PubMed Central

    Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We…

  3. Integrating Learner Corpora and Natural Language Processing: A Crucial Step towards Reconciling Technological Sophistication and Pedagogical Effectiveness

    ERIC Educational Resources Information Center

    Granger, Sylviane; Kraif, Olivier; Ponton, Claude; Antoniadis, Georges; Zampa, Virginie

    2007-01-01

    Learner corpora, electronic collections of spoken or written data from foreign language learners, offer unparalleled access to many hitherto uncovered aspects of learner language, particularly in their error-tagged format. This article aims to demonstrate the role that the learner corpus can play in CALL, particularly when used in conjunction with…

  4. Does the Information Content of an Irrelevant Source Differentially Affect Spoken Word Recognition in Younger and Older Adults?

    ERIC Educational Resources Information Center

    Li, Liang; Daneman, Meredyth; Qi, James G.; Schneider, Bruce A.

    2004-01-01

    To determine whether older adults find it difficult to inhibit the processing of irrelevant speech, the authors asked younger and older adults to listen to and repeat meaningless sentences (e.g., "A rose could paint a fish") when the perceived location of the masker (speech or noise) but not the target was manipulated. Separating the perceived…

  5. Peptidomic Analysis of the Brain and Corpora Cardiaca-Corpora Allata Complex in the Bombyx mori

    PubMed Central

    Liu, Xiaoguang; Ning, Xia; Zhang, Yan; Chen, Wenfeng; Zhao, Zhangwu; Zhang, Qingwen

    2012-01-01

    The silkworm, Bombyx mori, is an important economic insect for silk production. However, many of the mature peptides relevant to its various life stages remain unknown. Using RP-HPLC, MALDI-TOF MS, and previously identified peptides from B. mori and other insects in the transcriptome database, we created peptide profiles showing a total of 6 ion masses that could be assigned to peptides in eggs, including one previously unidentified peptide. A further 49 peptides were assigned to larval brains. 17 new mature peptides were identified in isolated masses. 39 peptides were found in pupal brains with 8 unidentified peptides. 48 were found in adult brains with 12 unidentified peptides. These new unidentified peptides showed highly significant matches in all MS analysis. These matches were then searched against the National Center for Biotechnology Information (NCBI) database to provide new annotations for these mature peptides. In total, 59 mature peptides in 19 categories were found in the brains of silkworms at the larval, pupal, and adult stages. These results demonstrate that peptidomic variation across different developmental stages can be dramatic. Moreover, the corpora cardiaca-corpora allata (CC-CA) complex was examined during the fifth larval instar. A total of 41 ion masses were assigned to peptides. PMID:23316247

  6. Text exposure predicts spoken production of complex sentences in eight and twelve year old children and adults

    PubMed Central

    Montag, Jessica L.; MacDonald, Maryellen C.

    2015-01-01

    There is still much debate about the nature of the experiential and maturational changes that take place during childhood to bring about the sophisticated language abilities of an adult. The present study investigated text exposure as a possible source of linguistic experience that plays a role in the development of adult-like language abilities. Corpus analyses of object and passive relative clauses (Object: The book that the woman carried; Passive: The book that was carried by the woman) established the frequencies of these sentence types in child-directed speech and children's literature. We found that relative clauses of either type were more frequent in the written corpus, and that the ratio of passive to object relatives was much higher in the written corpus as well. This analysis suggests that passive relative clauses are much more frequent in a child's linguistic environment if they have high rates of text exposure. We then elicited object and passive relative clauses using a picture-description production task with eight- and twelve-year-old children and adults. Both group and individual differences were consistent with the corpus analyses, such that older individuals and individuals with more text exposure produced more passive relative clauses. These findings suggest that the qualitatively different patterns of text versus speech may be an important source of linguistic experience for the development of adult-like language behavior. PMID:25844625

  7. Using Monolingual and Bilingual Corpora in Lexicography

    ERIC Educational Resources Information Center

    Miangah, Tayebeh Mosavi

    2009-01-01

    Constructing and exploiting different types of corpora are among computer applications exposed to the researchers in different branches of science including lexicography. In lexicography, different types of corpora may be of great help in finding the most appropriate uses of words and expressions by referring to numerous examples and citations.…

  8. Eye Movements during Spoken Word Recognition in Russian Children

    ERIC Educational Resources Information Center

    Sekerina, Irina A.; Brooks, Patricia J.

    2007-01-01

    This study explores incremental processing in spoken word recognition in Russian 5- and 6-year-olds and adults using free-viewing eye-tracking. Participants viewed scenes containing pictures of four familiar objects and clicked on a target embedded in a spoken instruction. In the cohort condition, two object names shared identical three-phoneme…

  9. Automatic translation among spoken languages

    NASA Astrophysics Data System (ADS)

    Walter, Sharon M.; Costigan, Kelly

    1994-02-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  10. Automatic translation among spoken languages

    NASA Technical Reports Server (NTRS)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  11. A Word by Any Other Intonation: FMRI Evidence for Implicit Memory Traces for Pitch Contours of Spoken Words in Adult Brains

    PubMed Central

    Inspector, Michael; Manor, David; Amir, Noam; Kushnir, Tamar; Karni, Avi

    2013-01-01

    Objectives Intonation may serve as a cue for facilitated recognition and processing of spoken words and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. Experimental design Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) All words were presented in a set flat, monotonous pitch contour. (ii) Each word had an arbitrary pitch contour that was set throughout the three repetitions. (iii) Each word had a different arbitrary pitch contour in each of its repetitions. Principal findings The repeated presentations of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21, 22) bilaterally and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. Conclusions Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language associated areas. Thus, the results lend support for the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words. PMID:24391713

  12. Research on Spoken Dialogue Systems

    NASA Technical Reports Server (NTRS)

    Aist, Gregory; Hieronymus, James; Dowding, John; Hockey, Beth Ann; Rayner, Manny; Chatzichrisafis, Nikos; Farrell, Kim; Renders, Jean-Michel

    2010-01-01

    Research in the field of spoken dialogue systems has been performed with the goal of making such systems more robust and easier to use in demanding situations. The term "spoken dialogue systems" signifies unified software systems containing speech-recognition, speech-synthesis, dialogue management, and ancillary components that enable human users to communicate, using natural spoken language or nearly natural prescribed spoken language, with other software systems that provide information and/or services.

  13. Spoken Greek: Book Two.

    ERIC Educational Resources Information Center

    Kahane, Henry; And Others

    This course in spoken Greek is intended for use in introductory conversational Greek classes. Book II in the two-volume series is divided into three major parts, each containing five learning units and one unit devoted to review. Each unit contains sections including (1) basic sentences, (2) word study and review of basic sentences, (3) listening…

  14. Spoken Greek: Book One.

    ERIC Educational Resources Information Center

    Kahane, Henry; And Others

    This course in spoken Greek is intended for use in introductory conversational Greek classes. Book I in the two-volume series is divided into two major parts, each containing five learning units and one unit devoted to review. Each unit contains sections including (1) basic sentences, (2) word study and review of basic sentences, (3) listening…

  15. Proposed Framework for the Evaluation of Standalone Corpora Processing Systems: An Application to Arabic Corpora

    PubMed Central

    Al-Thubaity, Abdulmohsen; Alqifari, Reem

    2014-01-01

    Despite the accessibility of numerous online corpora, students and researchers engaged in the fields of Natural Language Processing (NLP), corpus linguistics, and language learning and teaching may encounter situations in which they need to develop their own corpora. Several commercial and free standalone corpora processing systems are available to process such corpora. In this study, we first propose a framework for the evaluation of standalone corpora processing systems and then use it to evaluate seven freely available systems. The proposed framework considers the usability, functionality, and performance of the evaluated systems while taking into consideration their suitability for Arabic corpora. While the results show that most of the evaluated systems exhibited comparable usability scores, the scores for functionality and performance were substantially different with respect to support for the Arabic language and N-grams profile generation. The results of our evaluation will help potential users of the evaluated systems to choose the system that best meets their needs. More importantly, the results will help the developers of the evaluated systems to enhance their systems and developers of new corpora processing systems by providing them with a reference framework. PMID:25610910
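    The "N-grams profile generation" feature on which the evaluated systems differed has a simple core that can be sketched directly. The function below is a hypothetical stand-in, not code from any of the seven systems; because Python strings are Unicode, the same profile works unchanged for Arabic script.

    ```python
    from collections import Counter

    def ngram_profile(text, n=2, top=5):
        """Return the `top` most frequent character n-grams in `text`,
        as (n-gram, count) pairs in descending order of frequency."""
        grams = [text[i:i + n] for i in range(len(text) - n + 1)]
        return Counter(grams).most_common(top)

    # Bigram profile of a toy string.
    print(ngram_profile("banana", n=2))  # → [('an', 2), ('na', 2), ('ba', 1)]
    ```

    A full corpora processing system would add tokenization, normalization (e.g., of Arabic diacritics), and word-level as well as character-level profiles on top of this counting step.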

  16. Predictors of spoken language learning

    PubMed Central

    Wong, Patrick C. M.; Ettlinger, Marc

    2011-01-01

    We report two sets of experiments showing that the large individual variability in language learning success in adults can be attributed to neurophysiological, neuroanatomical, cognitive, and perceptual factors. In the first set of experiments, native English-speaking adults learned to incorporate lexically meaningful pitch patterns into words. We found those who were successful to have higher activation in bilateral auditory cortex, larger volume in Heschl’s Gyrus, and more accurate pitch pattern perception. All of these measures were performed before training began. In the second set of experiments, native English-speaking adults learned a phonological grammatical system governing the formation of words of an artificial language. Again, neurophysiological, neuroanatomical, and cognitive factors predicted to an extent how well these adults learned. Taken together, these experiments suggest that neural and behavioral factors can be used to predict spoken language learning. These predictors can inform the redesign of existing training paradigms to maximize learning for learners with different learning profiles. Learning outcomes: Readers will be able to: (a) understand the linguistic concepts of lexical tone and phonological grammar, (b) identify the brain regions associated with learning lexical tone and phonological grammar, and (c) identify the cognitive predictors for successful learning of a tone language and phonological rules. PMID:21601868

  17. Functional characterization of an allatotropin receptor expressed in the corpora allata of mosquitoes

    PubMed Central

    Nouzova, Marcela; Brockhoff, Anne; Mayoral, Jaime G.; Goodwin, Marianne; Meyerhof, Wolfgang; Noriega, Fernando G.

    2011-01-01

    Allatotropin is an insect neuropeptide with pleiotropic actions on a variety of different tissues. In the present work we describe the identification, cloning and functional and molecular characterization of an Aedes aegypti allatotropin receptor (AeATr) and provide a detailed quantitative study of the expression of the AeATr gene in the adult mosquito. Analysis of the tissue distribution of AeATr mRNA in adult females revealed high transcript levels in the nervous system (brain, abdominal, thoracic and ventral ganglia), corpora allata-corpora cardiaca complex and ovary. The receptor is also expressed in heart, hindgut and male testis and accessory glands. Separation of the corpora allata (CA) and corpora cardiaca followed by analysis of gene expression in the isolated glands revealed expression of the AeATr primarily in the CA. In the female CA, the AeATr mRNA levels were low in the early pupae, started increasing 6 hours before adult eclosion and reached a maximum 24 hours after female emergence. Blood feeding resulted in a decrease in transcript levels. The pattern of changes of AeATr mRNA resembles the changes in JH biosynthesis. Fluorometric Imaging Plate Reader recordings of calcium transients in HEK293 cells expressing the AeATr showed a selective response to A. aegypti allatotropin stimulation in the low nanomolar concentration range. Our studies suggest that the AeATr plays a role in the regulation of JH synthesis in mosquitoes. PMID:21839791

  18. Sharing Spoken Language: Sounds, Conversations, and Told Stories

    ERIC Educational Resources Information Center

    Birckmayer, Jennifer; Kennedy, Anne; Stonehouse, Anne

    2010-01-01

    Infants and toddlers encounter numerous spoken story experiences early in their lives: conversations, oral stories, and language games such as songs and rhymes. Many adults are even surprised to learn that children this young need these kinds of natural language experiences at all. Adults help very young children take a step along the path toward…

  19. Mining Quality Phrases from Massive Text Corpora

    PubMed Central

    Liu, Jialu; Shang, Jingbo; Wang, Chi; Ren, Xiang; Han, Jiawei

    2015-01-01

    Text data are ubiquitous and play an essential role in big data applications. However, text data are mostly unstructured. Transforming unstructured text into structured units (e.g., semantically meaningful phrases) will substantially reduce semantic ambiguity and enhance the power and efficiency at manipulating such data using database technology. Thus mining quality phrases is a critical research problem in the field of databases. In this paper, we propose a new framework that extracts quality phrases from text corpora integrated with phrasal segmentation. The framework requires only limited training but the quality of phrases so generated is close to human judgment. Moreover, the method is scalable: both computation time and required space grow linearly as corpus size increases. Our experiments on large text corpora demonstrate the quality and efficiency of the new method. PMID:26705375

  20. Metaphor identification in large texts corpora.

    PubMed

    Neuman, Yair; Assaf, Dan; Cohen, Yohai; Last, Mark; Argamon, Shlomo; Howard, Newton; Frieder, Ophir

    2013-01-01

    Identifying metaphorical language-use (e.g., sweet child) is one of the challenges facing natural language processing. This paper describes three novel algorithms for automatic metaphor identification. The algorithms are variations of the same core algorithm. We evaluate the algorithms on two corpora of Reuters and the New York Times articles. The paper presents the most comprehensive study of metaphor identification in terms of scope of metaphorical phrases and annotated corpora size. Algorithms' performance in identifying linguistic phrases as metaphorical or literal has been compared to human judgment. Overall, the algorithms outperform the state-of-the-art algorithm with 71% precision and 27% averaged improvement in prediction over the base-rate of metaphors in the corpus. PMID:23658625

  2. Spoken name pronunciation evaluation

    NASA Astrophysics Data System (ADS)

    Tepperman, Joseph; Narayanan, Shrikanth

    2004-10-01

    Recognition of spoken names is an important ASR task since many speech applications can be associated with it. However, the task is also among the most difficult ones due to the large number of names, their varying origins, and the multiple valid pronunciations of any given name, largely dependent upon the speaker's mother tongue and familiarity with the name. In order to explore the speaker- and language-dependent pronunciation variability issues present in name pronunciation, a spoken name database was collected from 101 speakers with varying native languages. Each speaker was asked to pronounce 80 polysyllabic names, uniformly chosen from ten language origins. In preliminary experiments, various prosodic features were used to train Gaussian mixture models (GMMs) to identify misplaced syllabic emphasis within the name, at roughly 85% accuracy. Articulatory features (voicing, place, and manner of articulation) derived from MFCCs were also incorporated for that purpose. The combined prosodic and articulatory features were used to automatically grade the quality of name pronunciation. These scores can be used to provide meaningful feedback to foreign language learners. A detailed description of the name database and some preliminary results on the accuracy of detecting misplaced stress patterns will be reported.
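
    The GMM approach described above can be illustrated in a deliberately simplified form: one single-component Gaussian per class over a single prosodic feature. The syllable-duration values below are invented for illustration and are not from the study:

```python
import math
from statistics import mean, stdev

def fit_gaussian(samples):
    """Fit a 1-D Gaussian (mean, std) to feature samples."""
    return mean(samples), stdev(samples)

def log_likelihood(x, params):
    """Log-density of x under a 1-D Gaussian."""
    mu, sigma = params
    return -math.log(sigma * math.sqrt(2 * math.pi)) - (x - mu) ** 2 / (2 * sigma ** 2)

def classify(x, class_params):
    """Pick the class whose Gaussian assigns x the highest likelihood."""
    return max(class_params, key=lambda c: log_likelihood(x, class_params[c]))

# Hypothetical syllable durations (ms) for correctly vs. wrongly stressed syllables.
params = {
    "correct": fit_gaussian([180, 200, 190, 210]),
    "misplaced": fit_gaussian([120, 110, 130, 125]),
}
```

    A real system would use multi-component mixtures over several prosodic and articulatory features, but the decision rule, picking the class with the highest likelihood, is the same.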

  3. Basic (Spoken) German Idiom List.

    ERIC Educational Resources Information Center

    PFEFFER, J. ALAN

    Third in a series of related studies undertaken to arrive at the core elements of spoken German, this list is based on the latest computer techniques in frequency analysis. An introduction examines and redefines the concept of idiomatic patterns. The body of the text lists, by frequency and range, 1,100 of the most common idioms in spoken German.…

  4. Conventions for sign and speech transcription of child bimodal bilingual corpora in ELAN

    PubMed Central

    Chen Pichler, Deborah; Hochgesang, Julie A.; Lillo-Martin, Diane; de Quadros, Ronice Müller

    2011-01-01

    This article extends current methodologies for the linguistic analysis of sign language acquisition to cases of bimodal bilingual acquisition. Using ELAN, we are transcribing longitudinal spontaneous production data from hearing children of Deaf parents who are learning either American Sign Language (ASL) and American English (AE), or Brazilian Sign Language (Libras, also referred to as Língua de Sinais Brasileira/LSB in some texts) and Brazilian Portuguese (BP). Our goal is to construct corpora that can be mined for a wide range of investigations on various topics in acquisition. Thus, it is important that we maintain consistency in transcription for both signed and spoken languages. This article documents our transcription conventions, including the principles behind our approach. Using this document, other researchers can choose to follow similar conventions or develop new ones using our suggestions as a starting point. PMID:21625371

  5. Enhanced Plasticity in Spoken Language Acquisition for Child Learners: Evidence from Phonetic Training Studies in Child and Adult Learners of English

    ERIC Educational Resources Information Center

    Giannakopoulou, Anastasia; Uther, Maria; Ylinen, Sari

    2013-01-01

    Speech sounds that contain multiple phonetic cues are often difficult for foreign-language learners, especially if certain cues are weighted differently in the foreign and native languages. Greek adult and child speakers of English were studied to determine the effect of native language on second-language (L2) cue weighting and, in particular, to…

  6. Corpora and Language Assessment: The State of the Art

    ERIC Educational Resources Information Center

    Park, Kwanghyun

    2014-01-01

    This article outlines the current state of and recent developments in the use of corpora for language assessment and considers future directions with a special focus on computational methodology. Since corpora began to make inroads into language assessment in the 1990s, test developers have increasingly used them as a reference resource to…

  7. The Importance of Corpora in Translation Studies: A Practical Case

    ERIC Educational Resources Information Center

    Bermúdez Bausela, Montserrat

    2016-01-01

    This paper deals with the use of corpora in Translation Studies, particularly with the so-called "'ad hoc' corpus" or "translator's corpus" as a working tool both in the classroom and for the professional translator. We believe that corpora are an inestimable source not only for terminology and phraseology extraction (cf. Maia,…

  8. Does textual feedback hinder spoken interaction in natural language?

    PubMed

    Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois

    2010-01-01

    The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, in suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated. PMID:20069480

  9. Recognizing Young Readers' Spoken Questions

    ERIC Educational Resources Information Center

    Chen, Wei; Mostow, Jack; Aist, Gregory

    2013-01-01

    Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…

  10. Spoken (Yucatec) Maya. [Preliminary Edition].

    ERIC Educational Resources Information Center

    Blair, Robert W.; Vermont-Salas, Refugio

    This two-volume set of 18 tape-recorded lesson units represents a first attempt at preparing a course in the modern spoken language of some 300,000 inhabitants of the peninsula of Yucatan, the Guatemalan department of the Peten, and certain border areas of Belize. (A short account of the research and background of this material is given in the…

  11. Effects of Aging and Noise on Real-Time Spoken Word Recognition: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Ben-David, Boaz M.; Chambers, Craig G.; Daneman, Meredyth; Pichora-Fuller, M. Kathleen; Reingold, Eyal M.; Schneider, Bruce A.

    2011-01-01

    Purpose: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. Method: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted…

  12. [Cytophysiology of the glandular lobe of the corpora cardiaca: ergastoplasmic granules and their significance].

    PubMed

    Lafon-Cazal, M; Michel, R

    1977-01-01

    The number of ergastoplasmic granules in the glandular lobe of the corpora cardiaca is counted in Locusta migratoria migratorioides R. and F. and Schistocerca gregaria Forsk., male adults of different ages, grouped or isolated, having flown or not, and reared in various conditions of hygrometry and temperature. A good correlation was found between the number of ergastoplasmic granules and the utilization of the adipokinetic hormone. Ergastoplasmic granules may represent an original mechanism of hormonal storage used in view of heavy metabolic requirements. PMID:615543

  13. Spoken Word Classification in Children and Adults

    ERIC Educational Resources Information Center

    Carroll, Julia M.; Myers, Joanne M.

    2011-01-01

    Purpose: Preschool children often have difficulties in word classification, despite good speech perception and production. Some researchers suggest that they represent words using phonetic features rather than phonemes. In this study, the authors examined whether there is a progression from feature-based to phoneme-based processing across age…

  14. Investigating heterogeneous protein annotations toward cross-corpora utilization

    PubMed Central

    2009-01-01

    Background: The number of corpora, collections of structured texts, has been increasing as a result of the growing interest in the application of natural language processing methods to biological texts. Many named entity recognition (NER) systems have been developed based on these corpora. However, in the biomedical community there is as yet no general consensus regarding named entity annotation; thus, the resources are largely incompatible, and it is difficult to compare the performance of systems developed on divergently annotated resources. On the other hand, from a practical application perspective, it is desirable to utilize as many existing annotated resources as possible, because annotation is costly. It therefore becomes a task of interest to integrate the heterogeneous annotations in these resources. Results: We explore the potential sources of incompatibility among the gene and protein annotations made for three common corpora: GENIA, GENETAG and AIMed. To show the inconsistency in the corpora annotations, we first tackle the incompatibility problem caused by corpus integration and quantitatively measure the effect of this incompatibility on protein mention recognition. We find that F-score performance declines dramatically when training with integrated data instead of pure data; in some cases, the performance drops by nearly 12%. This degradation may be caused by the newly added heterogeneous annotations and cannot be fixed without an understanding of the heterogeneities that exist among the corpora. Motivated by this preliminary experiment, we qualitatively analyze a number of possible sources for these differences and investigate the factors that would explain the inconsistencies by performing a series of well-designed experiments. Our analyses indicate that incompatibilities in the gene/protein annotations exist mainly in the following four areas: the boundary annotation conventions, the scope of…
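
    The F-score used above to quantify the degradation is the harmonic mean of precision and recall. A small sketch with invented counts (not the paper's data) shows how a drop of the reported magnitude can arise:

```python
def f_score(tp, fp, fn):
    """F1: harmonic mean of precision and recall, from raw match counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: a tagger trained on pure vs. integrated annotations.
pure = f_score(tp=80, fp=10, fn=10)        # about 0.889
integrated = f_score(tp=70, fp=25, fn=30)  # noticeably lower
```

    Mixing corpora with incompatible boundary and scope conventions inflates both false positives and false negatives, which is why the F-score falls even though more training data is available.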

  15. NADP+-dependent farnesol dehydrogenase, a corpora allata enzyme involved in juvenile hormone synthesis

    PubMed Central

    Mayoral, Jaime G.; Nouzova, Marcela; Navare, Arti; Noriega, Fernando G.

    2009-01-01

    The synthesis of juvenile hormone (JH) is an attractive target for control of insect pests and vectors of disease, but the minute size of the corpora allata (CA), the glands that synthesize JH, has made it difficult to identify important biosynthetic enzymes by classical biochemical approaches. Here, we report identification and characterization of an insect farnesol dehydrogenase (AaSDR-1) that oxidizes farnesol into farnesal, a precursor of JH, in the CA. AaSDR-1 was isolated as an EST in a library of the corpora allata-corpora cardiaca of the mosquito Aedes aegypti. The 245-amino acid protein presents the typical short-chain dehydrogenase (SDR) Rossmann-fold motif for nucleotide binding. This feature, together with other conserved sequence motifs, place AaSDR-1 into the “classical” NADP+-dependent cP2 SDR subfamily. The gene is part of a group of highly conserved paralogs that cluster together in the mosquito genome; similar clusters of orthologs were found in other insect species. AaSDR-1 acts as a homodimer and efficiently oxidizes C10 to C15 isoprenoid and aliphatic alcohols, showing the highest affinity for the conversion of farnesol into farnesal. Farnesol dehydrogenase activity was not detected in the CA of newly emerged mosquitoes but significant activity was detected 24 h later. Real time PCR experiments revealed that AaSDR-1 mRNA levels were very low in the inactive CA of the newly emerged female, but increased >30-fold 24 h later during the peak of JH synthesis. These results suggest that oxidation of farnesol might be a rate-limiting step in JH III synthesis in adult mosquitoes. PMID:19940247

  16. Developmental Differences in the Influence of Phonological Similarity on Spoken Word Processing in Mandarin Chinese

    PubMed Central

    Malins, Jeffrey G.; Gao, Danqi; Tao, Ran; Booth, James R.; Shu, Hua; Joanisse, Marc F.; Liu, Li; Desroches, Amy S.

    2014-01-01

    The developmental trajectory of spoken word recognition has been well established in Indo-European languages, but to date remains poorly characterized in Mandarin Chinese. In this study, typically developing children (N = 17; mean age 10;5) and adults (N = 17; mean age 24) performed a picture-word matching task in Mandarin while we recorded ERPs. Mismatches diverged from expectations in different components of the Mandarin syllable; namely, word-initial phonemes, word-final phonemes, and tone. By comparing responses to different mismatch types, we uncovered evidence suggesting that both children and adults process words incrementally. However, we also observed key developmental differences in how subjects treated onset and rime mismatches. This was taken as evidence for a stronger influence of top-down processing on spoken word recognition in adults compared to children. This work therefore offers an important developmental component to theories of Mandarin spoken word recognition. PMID:25278419

  18. Building Spoken Language in the First Plane

    ERIC Educational Resources Information Center

    Bettmann, Joen

    2016-01-01

    Through a strong Montessori orientation to the parameters of spoken language, Joen Bettmann makes the case for "materializing" spoken knowledge using the stimulation of real objects and real situations that promote mature discussion around the sensorial aspect of the prepared environment. She lists specific materials in the classroom…

  19. The Neural Substrates of Spoken Idiom Comprehension

    ERIC Educational Resources Information Center

    Hillert, Dieter G.; Buracas, Giedrius T.

    2009-01-01

    To examine the neural correlates of spoken idiom comprehension, we conducted an event-related functional MRI study with a "rapid sentence decision" task. The spoken sentences were equally familiar but varied in degrees of "idiom figurativeness". Our results show that "figurativeness" co-varied with neural activity in the left ventral dorsolateral…

  20. Spoken Word Processing Creates a Lexical Bottleneck

    ERIC Educational Resources Information Center

    Cleland, Alexandra A.; Tamminen, Jakke; Quinlan, Philip T.; Gaskell, M. Gareth

    2012-01-01

    We report 3 experiments that examined whether presentation of a spoken word creates an attentional bottleneck associated with lexical processing in the absence of a response to that word. A spoken word and a visual stimulus were presented in quick succession, but only the visual stimulus demanded a response. Response times to the visual stimulus…

  1. How Do Raters Judge Spoken Vocabulary?

    ERIC Educational Resources Information Center

    Li, Hui

    2016-01-01

    The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…

  2. Automatic Construction of English/Chinese Parallel Corpora.

    ERIC Educational Resources Information Center

    Yang, Christopher C.; Li, Kar Wing

    2003-01-01

    Discussion of multilingual corpora and cross-lingual information retrieval focuses on research that constructed an English/Chinese parallel corpus automatically from the World Wide Web. Presents an alignment method based on dynamic programming to identify one-to-one Chinese and English title pairs and discusses results of experiments…
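
    The dynamic-programming idea behind such alignment can be sketched with the classic edit-distance recurrence; this is an illustrative stand-in, not the authors' actual title-pairing algorithm:

```python
def edit_distance(a, b):
    """Classic dynamic-programming (Levenshtein) edit distance between two strings."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i                       # delete all of a's prefix
    for j in range(len(b) + 1):
        dp[0][j] = j                       # insert all of b's prefix
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[-1][-1]

def best_match(title, candidates):
    """Pair a title with the closest candidate, i.e. minimum edit distance."""
    return min(candidates, key=lambda c: edit_distance(title, c))
```

    A real title-pair aligner would score transliterated or translated forms rather than raw strings, but the one-to-one pairing step reduces to the same minimum-cost search.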

  3. Investigating the Promise of Learner Corpora: Methodological Issues

    ERIC Educational Resources Information Center

    Pendar, Nick; Chapelle, Carol A.

    2008-01-01

    Researchers working with learner corpora promise quantitative results that would be of greater practical value in areas such as CALL than those from small-scale and qualitative studies. However, learner corpus research has not yet had an impact on practices in teaching and assessment. Significant methodological issues need to be examined if…

  4. Using Corpora as a Resource in Language Teaching.

    ERIC Educational Resources Information Center

    Wilson, Eve

    1994-01-01

    Discusses the need to provide learner-directed computer-assisted language learning (CALL) for mature students to perfect their English. Topics addressed include expert procedural knowledge, including lexical, grammatical, and discourse skills; and expert domain knowledge, including dictionaries and thesauri, tagged corpora, and discipline-specific…

  5. Annotation of Korean Learner Corpora for Particle Error Detection

    ERIC Educational Resources Information Center

    Lee, Sun-Hee; Jang, Seok Bae; Seo, Sang-Kyu

    2009-01-01

    In this study, we focus on particle errors and discuss an annotation scheme for Korean learner corpora that can be used to extract heuristic patterns of particle errors efficiently. We investigate different properties of particle errors so that they can be later used to identify learner errors automatically, and we provide resourceful annotation…

  6. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  7. Learner Construction of Corpora for General English in Taiwan

    ERIC Educational Resources Information Center

    Smith, Simon

    2011-01-01

    This exploratory study describes a framework for data-driven learning (DDL), in General (non-major) English university classes, in which learners "construct" linguistic corpora instead of merely "consulting" them. Prior related work has addressed the needs of language specialists, in particular trainee translators who are learning how to compile…

  8. How to Measure Development in Corpora? An Association Strength Approach

    ERIC Educational Resources Information Center

    Stoll, Sabine; Gries, Stefan Th.

    2009-01-01

    In this paper we propose a method for characterizing development in large longitudinal corpora. The method has the following three features: (i) it suggests how to represent development without assuming predefined stages; (ii) it includes caregiver speech/child-directed speech; (iii) it uses statistical association measures for investigating…
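
    A standard statistical association measure of the kind advocated above is pointwise mutual information (PMI), which scores how much more often two words co-occur than chance predicts. The child-speech tokens below are invented for illustration:

```python
import math
from collections import Counter

def pmi(bigram_counts, unigram_counts, total, w1, w2):
    """Pointwise mutual information of the bigram (w1, w2)."""
    p_joint = bigram_counts[(w1, w2)] / total
    p1 = unigram_counts[w1] / total
    p2 = unigram_counts[w2] / total
    return math.log2(p_joint / (p1 * p2))

# Toy child-directed-speech sample (hypothetical).
tokens = "more milk more milk want cookie want milk".split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
```

    Tracking how such association scores change across longitudinal recordings gives a stage-free picture of development, since the measure is computed directly from the corpus rather than from predefined milestones.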

  9. Come studiare e insegnare l'italiano attraverso i corpora (How To Study and Teach Italian through Corpora).

    ERIC Educational Resources Information Center

    Laviosa, Sara

    1999-01-01

    Examines recent studies of linguistic corpora in Italian, and presents the results of an analysis of "piace" and "piacciono" conducted on a corpus of 3.5 million words of written Italian, accessible at the University of Birmingham in England. Focuses on pedagogical applications of the data and on the inductive methodologies that could be developed…

  10. A Corpus-Based EAP Course for NNS Doctoral Students: Moving from Available Specialized Corpora to Self-Compiled Corpora

    ERIC Educational Resources Information Center

    Lee, David; Swales, John

    2006-01-01

    This paper presents a discussion of an experimental, innovative course in corpus-informed EAP for doctoral students. Participants were given access to specialized corpora of academic writing and speaking, instructed in the tools of the trade (web- and PC-based concordancers) and gradually inducted into the skills needed to best exploit the data…

  11. Computer Corpora and the Language Classroom: On the Potential and Limitations of Computer Corpora in Language Teaching

    ERIC Educational Resources Information Center

    Kaltenbock, Gunther; Mehlmauer-Larcher, Barbara

    2005-01-01

    With computer corpora firmly established as research tools in linguistics, their application for language teaching purposes is also increasingly advocated to the extent that corpus-based language teaching has even been praised as the new revolution in language teaching (cf. Sinclair, 2004b). This article takes a more critical view and examines…

  12. Cataphoric devices in spoken discourse.

    PubMed

    Gernsbacher, M A; Jescheniak, J D

    1995-08-01

    We propose that speakers mark key words with cataphoric devices. Cataphoric devices are counterparts to anaphoric devices: Just as anaphoric devices enable backward reference, cataphoric devices enable forward reference. And just as anaphoric devices mark concepts that have been mentioned before, cataphoric devices mark concepts that are likely to be mentioned again. We investigated two cataphoric devices: spoken stress and the indefinite this. Our experiments demonstrated three ways that concepts marked by cataphoric devices gain a privileged status in listeners' mental representations: Cataphoric devices enhance the activation of the concepts that they mark; cataphoric devices suppress the activation of previously mentioned concepts; and cataphoric devices protect the concepts that they mark from being suppressed by subsequently mentioned concepts. PMID:7641525

  13. Controlling robots with spoken commands

    SciTech Connect

    Beugelsdijk, T.; Phelan, P.

    1987-10-01

    A robotic system for handling radioactive materials has been developed at Los Alamos National Laboratory. Because of safety considerations, the robot must be under the control of a human operator continuously. In this paper we describe the implementation of a voice-recognition system that makes such control possible, yet permits the robot to perform preprogrammed manipulations without the operator's intervention. We also describe the training given to both the operator and the voice-recognition system, as well as practical problems encountered during routine operation. A speech synthesis unit connected to the robot's control computer provides audible feedback to the operator. Thus, when a task is completed or if an emergency develops, the computer provides an appropriate spoken message. Implementation and operation of this commercially available hardware are discussed.

  14. Cognitive aging and hearing acuity: modeling spoken language comprehension

    PubMed Central

    Wingfield, Arthur; Amichetti, Nicole M.; Lash, Amanda

    2015-01-01

    The comprehension of spoken language has been characterized by a number of “local” theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model; where it is strong and where there are gaps to be filled. PMID:26124724

  15. An analysis on the entity annotations in biological corpora

    PubMed Central

    Neves, Mariana

    2014-01-01

Collections of documents annotated with semantic entities and relationships are crucial resources to support development and evaluation of text mining solutions for the biomedical domain. Here I present an overview of 36 corpora and an analysis of the semantic annotations they contain. Annotations for entity types were classified into six semantic groups, and an overview of the semantic entities found in each corpus is shown. Results show that while some semantic entities, such as genes, proteins, and chemicals, are consistently annotated in many collections, corpora available for diseases, variations, and mutations are still few, in spite of their importance in the biological domain. PMID:25254099

  16. Famous talker effects in spoken word recognition.

    PubMed

    Maibauer, Alisa M; Markis, Teresa A; Newell, Jessica; McLennan, Conor T

    2014-01-01

    Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers. PMID:24366633

  17. Feature-level sentiment analysis by using comparative domain corpora

    NASA Astrophysics Data System (ADS)

    Quan, Changqin; Ren, Fuji

    2016-06-01

    Feature-level sentiment analysis (SA) is able to provide more fine-grained SA on certain opinion targets and has a wider range of applications on E-business. This study proposes an approach based on comparative domain corpora for feature-level SA. The proposed approach makes use of word associations for domain-specific feature extraction. First, we assign a similarity score for each candidate feature to denote its similarity extent to a domain. Then we identify domain features based on their similarity scores on different comparative domain corpora. After that, dependency grammar and a general sentiment lexicon are applied to extract and expand feature-oriented opinion words. Lastly, the semantic orientation of a domain-specific feature is determined based on the feature-oriented opinion lexicons. In evaluation, we compare the proposed method with several state-of-the-art methods (including unsupervised and semi-supervised) using a standard product review test collection. The experimental results demonstrate the effectiveness of using comparative domain corpora.
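The domain-feature identification step described above can be illustrated with a minimal sketch. The scoring function below is a simple relative-frequency measure, not the authors' exact similarity formula, and the corpora are toy token lists invented for illustration.

```python
from collections import Counter

def domain_similarity(feature, target_corpus, comparison_corpora):
    """Score how strongly a candidate feature is associated with the
    target domain relative to comparison domain corpora. A word that is
    frequent in the target corpus but rare in the comparison corpora
    scores near 1. Illustrative measure, not the paper's formula.
    """
    def rel_freq(word, corpus):
        return Counter(corpus)[word] / max(len(corpus), 1)

    target = rel_freq(feature, target_corpus)
    comparison = max(
        (rel_freq(feature, c) for c in comparison_corpora), default=0.0
    )
    return target / (target + comparison + 1e-9)

# Toy corpora: "battery" behaves like a camera-domain feature,
# "plot" like a movie-domain word.
camera = "battery life is short battery drains fast".split()
movies = "the plot was thin but the acting was fine plot twist".split()

print(domain_similarity("battery", camera, [movies]))  # near 1.0
print(domain_similarity("plot", camera, [movies]))     # 0.0
```

Candidate features scoring above a threshold on one domain's corpora but not the others would then be kept as domain-specific features.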

  18. EXPERIMENTAL USE OF UNIVERSITY OF MICHIGAN AUDIOLINGUAL SELF-INSTRUCTIONAL COURSE IN SPOKEN AMERICAN SPANISH.

    ERIC Educational Resources Information Center

    Inglewood Unified School District, CA.

The basic purposes of this study were (1) to determine the extent to which the University of Michigan Audiolingual Self-Instructional Course in Spoken American Spanish could assist the adult student to learn to control the sound system of the language being studied and (2) to provide evidence of the effectiveness of the course outside of a…

  19. Integration of Partial Information within and across Modalities: Contributions to Spoken and Written Sentence Recognition

    ERIC Educational Resources Information Center

    Smith, Kimberly G.; Fogerty, Daniel

    2015-01-01

    Purpose: This study evaluated the extent to which partial spoken or written information facilitates sentence recognition under degraded unimodal and multimodal conditions. Method: Twenty young adults with typical hearing completed sentence recognition tasks in unimodal and multimodal conditions across 3 proportions of preservation. In the unimodal…

  20. Talker Familiarity and Spoken Word Recognition in School-Age Children

    ERIC Educational Resources Information Center

    Levi, Susannah V.

    2015-01-01

    Research with adults has shown that spoken language processing is improved when listeners are familiar with talkers' voices, known as the familiar talker advantage. The current study explored whether this ability extends to school-age children, who are still acquiring language. Children were familiarized with the voices of three German-English…

  1. Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items

    ERIC Educational Resources Information Center

    Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.

    2013-01-01

    An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…

  2. "I See What You Mean": Using Spoken Discourse in the Classroom: A Handbook for Teachers.

    ERIC Educational Resources Information Center

    Burns, Anne; Joyce, Helen; Gollin, Sandra

    The handbook, arising from a research project on adult English-as-a-Second-Language (ESL) teaching, presents ESL teachers with ways to use authentic English as a teaching tool in the classroom. The purpose of this book is to: examine the connections between current theories of spoken discourse analysis and classroom practice; encourage language…

  3. Predictors of Spoken Language Learning

    ERIC Educational Resources Information Center

    Wong, Patrick C. M.; Ettlinger, Marc

    2011-01-01

We report two sets of experiments showing that the large individual variability in language learning success in adults can be attributed to neurophysiological, neuroanatomical, cognitive, and perceptual factors. In the first set of experiments, native English-speaking adults learned to incorporate lexically meaningful pitch patterns in words. We…

  4. Juvenile Hormone Biosynthesis Gene Expression in the corpora allata of Honey Bee (Apis mellifera L.) Female Castes

    PubMed Central

    Rosa, Gustavo Conrado Couto; Moda, Livia Maria; Martins, Juliana Ramos; Bitondi, Márcia Maria Gentile; Hartfelder, Klaus; Simões, Zilá Luz Paulino

    2014-01-01

Juvenile hormone (JH) controls key events in the honey bee life cycle, viz. caste development and age polyethism. We quantified transcript abundance of 24 genes involved in the JH biosynthetic pathway in the corpora allata-corpora cardiaca (CA-CC) complex. The expression of six of these genes showing relatively high transcript abundance was contrasted with CA size, hemolymph JH titer, as well as JH degradation rates and JH esterase (jhe) transcript levels. Gene expression did not match the contrasting JH titers in queen and worker fourth instar larvae, but jhe transcript abundance and JH degradation rates were significantly lower in queen larvae. Consequently, transcriptional control of JHE is of importance in regulating larval JH titers and caste development. In contrast, the same analyses applied to adult worker bees allowed us to infer that the high JH levels in foragers are due to increased JH synthesis. Upon RNAi-mediated silencing of the methyl farnesoate epoxidase gene (mfe) encoding the enzyme that catalyzes methyl farnesoate-to-JH conversion, the JH titer was decreased, thus corroborating that JH titer regulation in adult honey bees depends on this final JH biosynthesis step. The molecular pathway differences underlying JH titer regulation in larval caste development versus adult age polyethism lead us to propose that mfe and jhe genes be assayed when addressing questions on the role(s) of JH in social evolution. PMID:24489805

  5. Juvenile hormone biosynthesis gene expression in the corpora allata of honey bee (Apis mellifera L.) female castes.

    PubMed

    Bomtorin, Ana Durvalina; Mackert, Aline; Rosa, Gustavo Conrado Couto; Moda, Livia Maria; Martins, Juliana Ramos; Bitondi, Márcia Maria Gentile; Hartfelder, Klaus; Simões, Zilá Luz Paulino

    2014-01-01

Juvenile hormone (JH) controls key events in the honey bee life cycle, viz. caste development and age polyethism. We quantified transcript abundance of 24 genes involved in the JH biosynthetic pathway in the corpora allata-corpora cardiaca (CA-CC) complex. The expression of six of these genes showing relatively high transcript abundance was contrasted with CA size, hemolymph JH titer, as well as JH degradation rates and JH esterase (jhe) transcript levels. Gene expression did not match the contrasting JH titers in queen and worker fourth instar larvae, but jhe transcript abundance and JH degradation rates were significantly lower in queen larvae. Consequently, transcriptional control of JHE is of importance in regulating larval JH titers and caste development. In contrast, the same analyses applied to adult worker bees allowed us to infer that the high JH levels in foragers are due to increased JH synthesis. Upon RNAi-mediated silencing of the methyl farnesoate epoxidase gene (mfe) encoding the enzyme that catalyzes methyl farnesoate-to-JH conversion, the JH titer was decreased, thus corroborating that JH titer regulation in adult honey bees depends on this final JH biosynthesis step. The molecular pathway differences underlying JH titer regulation in larval caste development versus adult age polyethism lead us to propose that mfe and jhe genes be assayed when addressing questions on the role(s) of JH in social evolution. PMID:24489805

  6. Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension

    PubMed Central

    Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.

    2016-01-01

The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility for studying retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects, when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974

  7. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements.

    PubMed

    Hadar, Britt; Skrzypek, Joshua E; Wingfield, Arthur; Ben-David, Boaz M

    2016-01-01

In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory in speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the "visual world" eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., "point at the candle"). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424

  8. Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension.

    PubMed

    Sekerina, Irina A; Campanelli, Luca; Van Dyke, Julie A

    2016-01-01

The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility for studying retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects, when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974

  9. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements

    PubMed Central

    Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.

    2016-01-01

In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory in speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424

  10. How Can We Use Corpus Wordlists for Language Learning? Interfaces between Computer Corpora and Expert Intervention

    ERIC Educational Resources Information Center

    Chen, Yu-Hua; Bruncak, Radovan

    2015-01-01

    With the advances in technology, wordlists retrieved from computer corpora have become increasingly popular in recent years. The lexical items in those wordlists are usually selected, according to a set of robust frequency and dispersion criteria, from large corpora of authentic and naturally occurring language. Corpus wordlists are of great value…
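The frequency and dispersion criteria mentioned above can be sketched in a few lines. The thresholds and the simple range-based dispersion measure below are illustrative assumptions; published wordlists typically use larger corpora and more sophisticated dispersion statistics such as Juilland's D.

```python
from collections import Counter

def wordlist(corpus_docs, min_freq=2, min_range=2):
    """Build a corpus wordlist filtered by frequency and dispersion.

    `corpus_docs` is a list of tokenized documents. A word is kept if it
    occurs at least `min_freq` times overall and appears in at least
    `min_range` documents (a basic range measure of dispersion).
    Thresholds here are toy values for illustration.
    """
    freq = Counter()       # total occurrences across the corpus
    doc_range = Counter()  # number of documents containing the word
    for doc in corpus_docs:
        freq.update(doc)
        doc_range.update(set(doc))
    return sorted(
        w for w in freq
        if freq[w] >= min_freq and doc_range[w] >= min_range
    )

docs = [
    "the cat sat on the mat".split(),
    "the dog sat by the door".split(),
    "a cat and a dog".split(),
]
print(wordlist(docs))  # ['cat', 'dog', 'sat', 'the']
```

Note that "a" is excluded despite occurring twice: it appears in only one document, so it fails the dispersion criterion even though it passes the frequency one.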

  11. Use of English Corpora as a Primary Resource to Teach English to the Bengali Learners

    ERIC Educational Resources Information Center

    Dash, Niladri Sekhar

    2011-01-01

In this paper we argue in favour of teaching English as a second language to the Bengali learners with direct utilisation of English corpora. The proposed strategy is meant to be computer-assisted and is based on data, information, and examples retrieved from the present-day English corpora developed with various text samples composed by…

  12. Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.

    PubMed

    González-Alvarez, Julio; Palomar-García, María-Angeles

    2016-08-01

    Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. PMID:27287267

  13. Spoken Grammar and Its Role in the English Language Classroom

    ERIC Educational Resources Information Center

    Hilliard, Amanda

    2014-01-01

    This article addresses key issues and considerations for teachers wanting to incorporate spoken grammar activities into their own teaching and also focuses on six common features of spoken grammar, with practical activities and suggestions for teaching them in the language classroom. The hope is that this discussion of spoken grammar and its place…

  14. Influences of Spoken Word Planning on Speech Recognition

    ERIC Educational Resources Information Center

    Roelofs, Ardi; Ozdemir, Rebecca; Levelt, Willem J. M.

    2007-01-01

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they…

  15. Developmental Phonological Disorders: Processing of Spoken Language.

    ERIC Educational Resources Information Center

    Dodd, Barbara; Basset, Barbara

    1987-01-01

The ability of 22 phonologically disordered and normally speaking children to process spoken language (phonologically, syntactically, and semantically) was evaluated. No differences between groups were found in number of errors, pattern of errors, or reaction times when monitoring sentences for target words, irrespective of sentence type.…

  16. Well Spoken: Teaching Speaking to All Students

    ERIC Educational Resources Information Center

    Palmer, Erik

    2011-01-01

    All teachers at all grade levels in all subjects have speaking assignments for students, but many teachers believe they don't know how to teach speaking, and many even fear public speaking themselves. In his new book, "Well Spoken", veteran teacher and education consultant Erik Palmer shares the art of teaching speaking in any classroom. Teachers…

  17. SPOKEN COCHABAMBA QUECHUA, UNITS 13-24.

    ERIC Educational Resources Information Center

    LASTRA, YOLANDA; SOLA, DONALD F.

Units 13-24 of the Spoken Cochabamba Quechua course follow the general format of the first volume (Units 1-12). This second volume is intended for use in an intermediate or advanced course and includes more complex dialogs, conversations, "listening-ins," and dictations, as well as grammar and exercise sections covering additional grammatical…

  18. A Grammar of Spoken Brazilian Portuguese.

    ERIC Educational Resources Information Center

    Thomas, Earl W.

    This is a first-year text of Portuguese grammar based on the Portuguese of moderately educated Brazilians from the area around Rio de Janeiro. Spoken idiomatic usage is emphasized. An important innovation is found in the presentation of verb tenses; they are presented in the order in which the native speaker learns them. The text is intended to…

  19. Artfulness in Young Children's Spoken Narratives

    ERIC Educational Resources Information Center

    Glenn-Applegate, Katherine; Breit-Smith, Allison; Justice, Laura M.; Piasta, Shayne B.

    2010-01-01

    Research Findings: Artfulness is rarely considered as an indicator of quality in young children's spoken narratives. Although some studies have examined artfulness in the narratives of children 5 and older, no studies to date have focused on the artfulness of preschoolers' oral narratives. This study examined the artfulness of fictional spoken…

  20. Automatic Discrimination of Emotion from Spoken Finnish

    ERIC Educational Resources Information Center

    Toivanen, Juhani; Vayrynen, Eero; Seppanen, Tapio

    2004-01-01

    In this paper, experiments on the automatic discrimination of basic emotions from spoken Finnish are described. For the purpose of the study, a large emotional speech corpus of Finnish was collected; 14 professional actors acted as speakers, and simulated four primary emotions when reading out a semantically neutral text. More than 40 prosodic…

  1. Learning and Consolidation of Novel Spoken Words

    ERIC Educational Resources Information Center

    Davis, Matthew H.; Di Betta, Anna Maria; Macdonald, Mark J. E.; Gaskell, Gareth

    2009-01-01

    Two experiments explored the neural mechanisms underlying the learning and consolidation of novel spoken words. In Experiment 1, participants learned two sets of novel words on successive days. A subsequent recognition test revealed high levels of familiarity for both sets. However, a lexical decision task showed that only novel words learned on…

  2. Modern Spoken Cambodian. Yale Linguistic Series.

    ERIC Educational Resources Information Center

    Huffman, Franklin E.

    The aim of this volume is to provide the student with a thorough command of the basic structures of standard spoken Cambodian. The course is based on the audio-oral method of language teaching developed by the Intensive Language Program of the American Council of Learned Societies and used successfully during World War II, but modified to take…

  3. Looking for French-English translations in comparable medical corpora.

    PubMed

    Chiao, Yun-Chuang; Zweigenbaum, P

    2002-01-01

Cross-language retrieval of medical information needs to translate input queries into target language queries. It must be prepared to cope with 'new' words not yet listed in a multilingual lexicon. We address the issue of finding translational equivalents of such 'unknown' words from French to English in the medical domain. We rely on non-parallel, comparable corpora and an initial bilingual medical lexicon. We compare the distributional contexts of source and target words, testing several weighting factors and similarity measures. For the best combination (the Jaccard similarity measure with or without weighting), the correct translation is found in the top 10 candidates for more than 60% of the test words. This shows the potential of this technique to help extend bilingual medical lexicons. PMID:12463805
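The context-comparison technique can be sketched as follows. The toy corpora, seed lexicon, candidate list, and window size are invented for illustration; the actual study used large comparable medical corpora and also tested weighted variants of the context vectors.

```python
def context_vector(word, corpus, window=3):
    """Set of context words occurring within `window` tokens of `word`."""
    ctx = set()
    for i, tok in enumerate(corpus):
        if tok == word:
            ctx.update(corpus[max(0, i - window):i])
            ctx.update(corpus[i + 1:i + 1 + window])
    ctx.discard(word)
    return ctx

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_translations(src_word, src_corpus, tgt_corpus, seed_lexicon, candidates):
    """Rank target-language candidates as translations of `src_word`.

    The source word's context vector is mapped into the target language
    through the seed bilingual lexicon, then compared with each
    candidate's context vector by Jaccard similarity.
    """
    src_ctx = context_vector(src_word, src_corpus)
    mapped = {seed_lexicon[w] for w in src_ctx if w in seed_lexicon}
    scored = [
        (cand, jaccard(mapped, context_vector(cand, tgt_corpus)))
        for cand in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy comparable corpora (invented): French source, English target.
fr = "la bile est produite par le foie humain".split()
en = "bile is produced by the human liver inside the body".split()
seed = {"bile": "bile", "produite": "produced", "humain": "human"}

print(rank_translations("foie", fr, en, seed, ["liver", "body"]))  # 'liver' ranks first
```

The shared context words ("produced", "human") that the seed lexicon can map across languages are what pull "liver" above the distractor.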

  4. Proliferative activity of preovulatory follicles and newly formed corpora lutea in cycling rats from late prooestrus to early oestrus

    PubMed Central

    GAYTÁN, FRANCISCO; BELLIDO, CARMEN; MORALES, CONCEPCIÓN; AGUILAR, ENRIQUE; SÁNCHEZ-CRIADO, JOSÉ EUGENIO

    1997-01-01

Ovaries from adult cycling rats were studied from 1600 h on the day of prooestrus to 0700 h on the day of oestrus in order to relate the cyclic hormonal changes to the proliferative activity of preovulatory and postovulatory (i.e. newly-formed corpora lutea) follicles. Proliferative activity was studied by the immunohistochemical demonstration of DNA-incorporated 5-bromodeoxyuridine (BrdU). The proliferative activity of granulosa cells (GC) in large preovulatory follicles showed a centripetal pattern and decreased during prooestrus, reaching a minimum at 2100 h. However, a proliferative wave was found in the GC of preovulatory follicles at 0200 h on the day of oestrus and in those of newly-formed corpora lutea at 0700 h on the day of oestrus. These results suggest that the granulosa cells of preovulatory follicles show maturational changes that follow a different pattern depending on their location within the follicle, and that the proliferative wave found from 0200 to 0700 h on oestrus is important for the establishment of the number of steroidogenic cells in the cyclic corpus luteum. PMID:9418999

  5. Spoken word recognition without a TRACE

    PubMed Central

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
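The time-invariant diphone idea can be illustrated with a minimal sketch. The open-diphone coding below is an assumption for illustration, not the published model's exact coding scheme: a word is represented by the ordered phoneme pairs it contains, with no position-specific units.

```python
from itertools import combinations

def open_diphones(phonemes):
    """Time-invariant, string-kernel-style word code: the set of ordered
    phoneme pairs (open diphones) in a word. Order is captured without
    duplicating units at every temporal position, unlike TRACE.
    Illustrative sketch only.
    """
    return {a + b for a, b in combinations(phonemes, 2)}

# /kat/ and /akt/ contain the same phonemes in different orders, so
# their diphone codes differ -- the representation preserves order
# while remaining independent of absolute time steps.
print(open_diphones(["k", "a", "t"]))  # the pairs ka, kt, at
print(open_diphones(["a", "k", "t"]))  # the pairs ak, at, kt
```

Because the code is a fixed-size inventory of diphone units rather than a copy of every unit at every time step, the unit count stays flat as word length grows, which is the computational saving the abstract describes.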

  6. Spoken word recognition without a TRACE.

    PubMed

    Hannagan, Thomas; Magnuson, James S; Grainger, Jonathan

    2013-01-01

How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, begging the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition (including visual word recognition) have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349

  7. Setting the Tone: An ERP Investigation of the Influences of Phonological Similarity on Spoken Word Recognition in Mandarin Chinese

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2012-01-01

    We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event related potentials were recorded while adult native speakers of Mandarin ("N" = 19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following nature:…

  8. An fMRI Study of Concreteness Effects during Spoken Word Recognition in Aging: Preservation or Attenuation?

    PubMed Central

    Roxbury, Tracy; McMahon, Katie; Coulthard, Alan; Copland, David A.

    2016-01-01

    It is unclear whether healthy aging influences concreteness effects (i.e., the processing advantage seen for concrete over abstract words) and its associated neural mechanisms. We conducted an fMRI study on young and older healthy adults performing auditory lexical decisions on concrete vs. abstract words. We found that spoken comprehension of concrete and abstract words appears relatively preserved for healthy older individuals, including the concreteness effect. This preserved performance was supported by altered activity in left hemisphere regions including the inferior and middle frontal gyri, angular gyrus, and fusiform gyrus. This pattern is consistent with age-related compensatory mechanisms supporting spoken word processing. PMID:26793097

  9. An fMRI Study of Concreteness Effects during Spoken Word Recognition in Aging. Preservation or Attenuation?

    PubMed

    Roxbury, Tracy; McMahon, Katie; Coulthard, Alan; Copland, David A

    2015-01-01

    It is unclear whether healthy aging influences concreteness effects (i.e., the processing advantage seen for concrete over abstract words) and its associated neural mechanisms. We conducted an fMRI study on young and older healthy adults performing auditory lexical decisions on concrete vs. abstract words. We found that spoken comprehension of concrete and abstract words appears relatively preserved for healthy older individuals, including the concreteness effect. This preserved performance was supported by altered activity in left hemisphere regions including the inferior and middle frontal gyri, angular gyrus, and fusiform gyrus. This pattern is consistent with age-related compensatory mechanisms supporting spoken word processing. PMID:26793097

  10. Ranking Multiple Dialogue States by Corpus Statistics to Improve Discourse Understanding in Spoken Dialogue Systems

    NASA Astrophysics Data System (ADS)

    Higashinaka, Ryuichiro; Nakano, Mikio

    This paper discusses the discourse understanding process in spoken dialogue systems. This process enables a system to understand user utterances from the context of a dialogue. Ambiguity in user utterances caused by multiple speech recognition hypotheses and parsing results sometimes makes it difficult for a system to decide on a single interpretation of a user intention. As a solution, the idea of retaining possible interpretations as multiple dialogue states and resolving the ambiguity using succeeding user utterances has been proposed. Although this approach has proven to improve discourse understanding accuracy, carefully created hand-crafted rules are necessary in order to accurately rank the dialogue states. This paper proposes automatically ranking multiple dialogue states using statistical information obtained from dialogue corpora. The experimental results in the train ticket reservation and weather information service domains show that the statistical information can significantly improve the ranking accuracy of dialogue states as well as the slot accuracy and the concept error rate of the top-ranked dialogue states.
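The corpus-statistics idea above can be sketched as follows: estimate how often one dialogue state follows another in training dialogues, then rank a new utterance's candidate interpretations by that transition frequency. The state names and the bigram-only scoring are hypothetical simplifications, not the paper's actual feature set:

```python
from collections import Counter

def train_transition_stats(dialogues):
    """Count state-to-state transitions observed in a dialogue corpus.

    dialogues: list of dialogue-state sequences from training data.
    """
    bigrams, unigrams = Counter(), Counter()
    for seq in dialogues:
        for prev, nxt in zip(seq, seq[1:]):
            bigrams[(prev, nxt)] += 1
            unigrams[prev] += 1
    return bigrams, unigrams

def rank_states(prev_state, candidates, bigrams, unigrams):
    """Rank candidate interpretations by estimated transition probability,
    replacing hand-crafted ranking rules with corpus statistics."""
    def score(c):
        total = unigrams[prev_state]
        return bigrams[(prev_state, c)] / total if total else 0.0
    return sorted(candidates, key=score, reverse=True)
```

For example, if "ask_date" is usually followed by "ask_destination" in the training corpus, that interpretation is ranked above a less frequent one when both are consistent with the recognizer's hypotheses.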

  11. Corpora Amylacea in Neurodegenerative Diseases: Cause or Effect?

    PubMed Central

    Rohn, Troy T.

    2015-01-01

    The presence of corpora amylacea (CA) in the CNS is associated with both normal aging and neurodegenerative conditions including Alzheimer’s disease (AD) and vascular dementia (VaD). CA are spherical bodies, 10–50 μm in diameter, whose origin has been documented to derive from both neural and glial sources. CA are reported to be primarily composed of glucose polymers, but approximately 4% of the total weight of CA is consistently composed of protein. CA are typically localized in the subpial, periventricular and perivascular regions within the CNS. The presence of CA in VaD has recently been documented and of interest was the localization of CA within the hippocampus proper. Despite numerous efforts, the precise role of CA in normal aging or disease is not known. The purpose of this mini review is to highlight the potential function of CA in various neurodegenerative disorders, with an emphasis on the potential role, if any, these structures may play in the etiology of these diseases. PMID:26550607

  12. Citation Matching in Sanskrit Corpora Using Local Alignment

    NASA Astrophysics Data System (ADS)

    Prasad, Abhinandan S.; Rao, Shrisha

    Citation matching is the problem of finding which citation occurs in a given textual corpus. Most existing citation matching work is done on scientific literature. The goal of this paper is to present methods for performing citation matching on Sanskrit texts. Exact matching and approximate matching are the two methods for performing citation matching. The exact matching method checks for exact occurrence of the citation with respect to the textual corpus. Approximate matching is a fuzzy string-matching method which computes a similarity score between an individual line of the textual corpus and the citation. The Smith-Waterman-Gotoh algorithm for local alignment, which is generally used in bioinformatics, is used here for calculating the similarity score. This similarity score is a measure of the closeness between the text and the citation. The exact- and approximate-matching methods are evaluated and compared. The methods presented can be easily applied to corpora in other Indic languages like Kannada, Tamil, etc. The approximate-matching method can in particular be used in the compilation of critical editions and plagiarism detection in a literary work.
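The local-alignment scoring described above can be sketched as follows. For brevity this uses a linear gap penalty; the Smith-Waterman-Gotoh variant used in the paper adds affine gap costs, and the score parameters here are illustrative, not the paper's values:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local-alignment similarity score between strings.

    Builds the standard dynamic-programming matrix, clamping cell
    scores at zero so the best-scoring local region is found rather
    than a global alignment. Returns the maximum cell score.
    """
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best
```

In the citation-matching setting, this score would be computed between the citation and each line of the corpus, and lines scoring above a threshold reported as approximate matches.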

  13. How Hierarchical Topics Evolve in Large Text Corpora.

    PubMed

    Cui, Weiwei; Liu, Shixia; Wu, Zhuofeng; Wei, Hao

    2014-12-01

    Using a sequence of topic trees to organize documents is a popular way to represent hierarchical and evolving topics in text corpora. However, following evolving topics in the context of topic trees remains difficult for users. To address this issue, we present an interactive visual text analysis approach to allow users to progressively explore and analyze the complex evolutionary patterns of hierarchical topics. The key idea behind our approach is to exploit a tree cut to approximate each tree and allow users to interactively modify the tree cuts based on their interests. In particular, we propose an incremental evolutionary tree cut algorithm with the goal of balancing 1) the fitness of each tree cut and the smoothness between adjacent tree cuts; 2) the historical and new information related to user interests. A time-based visualization is designed to illustrate the evolving topics over time. To preserve the mental map, we develop a stable layout algorithm. As a result, our approach can quickly guide users to progressively gain profound insights into evolving hierarchical topics. We evaluate the effectiveness of the proposed method on Amazon's Mechanical Turk and real-world news data. The results show that users are able to successfully analyze evolving topics in text data. PMID:26356942

  14. Integration of partial information for spoken and written sentence recognition by older listeners.

    PubMed

    Smith, Kimberly G; Fogerty, Daniel

    2016-06-01

    Older adults have difficulty understanding speech in challenging listening environments. Combining multisensory signals may facilitate speech recognition. This study measured recognition of interrupted spoken and written sentences by older adults for different preserved stimulus proportions. Unimodal performance was first examined when only interrupted text or speech stimuli were presented. Multimodal performance with concurrently presented text and speech stimuli was tested with delayed and simultaneous participant responses. Older listeners performed better in unimodal speech-only compared to text-only conditions across all proportions preserved. Performance was also better in delayed multimodal conditions. Comparison to a younger sample suggests age-related amodal processing declines. PMID:27369179

  15. Talker familiarity and spoken word recognition in school-age children*

    PubMed Central

    Levi, Susannah V.

    2014-01-01

    Research with adults has shown that spoken language processing is improved when listeners are familiar with talkers’ voices, known as the familiar talker advantage. The current study explored whether this ability extends to school-age children, who are still acquiring language. Children were familiarized with the voices of three German–English bilingual talkers and were tested on the speech of six bilinguals, three of whom were familiar. Results revealed that children do show improved spoken language processing when they are familiar with the talkers, but this improvement was limited to highly familiar lexical items. This restriction of the familiar talker advantage is attributed to differences in the representation of highly familiar and less familiar lexical items. In addition, children did not exhibit accent-general learning; despite having been exposed to German-accented talkers during training, there was no improvement for novel German-accented talkers. PMID:25159173

  16. TEES 2.2: Biomedical Event Extraction for Diverse Corpora

    PubMed Central

    2015-01-01

    Background The Turku Event Extraction System (TEES) is a text mining program developed for the extraction of events, complex biomedical relationships, from scientific literature. Based on a graph-generation approach, the system detects events with the use of a rich feature set built via dependency parsing. The TEES system has achieved record performance in several of the shared tasks of its domain, and continues to be used in a variety of biomedical text mining tasks. Results The TEES system was quickly adapted to the BioNLP'13 Shared Task in order to provide a public baseline for derived systems. An automated approach was developed for learning the underlying annotation rules of event type, allowing immediate adaptation to the various subtasks, and leading to a first place in four out of eight tasks. The system for the automated learning of annotation rules is further enhanced in this paper to the point of requiring no manual adaptation to any of the BioNLP'13 tasks. Further, the scikit-learn machine learning library is integrated into the system, bringing a wide variety of machine learning methods usable with TEES in addition to the default SVM. A scikit-learn ensemble method is also used to analyze the importance of the features in the TEES feature sets. Conclusions The TEES system was introduced for the BioNLP'09 Shared Task and has since then demonstrated good performance in several other shared tasks. By applying the current TEES 2.2 system to multiple corpora from these past shared tasks, an overarching analysis of the most promising methods and possible pitfalls in the evolving field of biomedical event extraction is presented. PMID:26551925

  17. Direction Asymmetries in Spoken and Signed Language Interpreting

    ERIC Educational Resources Information Center

    Nicodemus, Brenda; Emmorey, Karen

    2013-01-01

    Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…

  18. Attentional Capture of Objects Referred to by Spoken Language

    ERIC Educational Resources Information Center

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  19. We Asked...You Told Us. Language Spoken at Home.

    ERIC Educational Resources Information Center

    Bureau of the Census (DOC), Washington, DC. Economics and Statistics Administration.

    Responses to the 1990 United States census question concerning languages other than English that are spoken at home are summarized. It was found that in 1990, 14 percent of the population 5 years and older spoke a language other than English at home, as contrasted with 11 percent in 1980. Languages spoken most commonly at home in descending order…

  20. Enhancing the Performance of Female Students in Spoken English

    ERIC Educational Resources Information Center

    Inegbeboh, Bridget O.

    2009-01-01

    Female students have been discriminated against right from birth in their various cultures and this affects the way they perform in Spoken English class, and how they rate themselves. They have been conditioned to believe that the male gender is superior to the female gender, so they leave the male students to excel in spoken English, while they…

  1. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  2. Spoken Language Research and ELT: Where Are We Now?

    ERIC Educational Resources Information Center

    Timmis, Ivor

    2012-01-01

    This article examines the relationship between spoken language research and ELT practice over the last 20 years. The first part is retrospective. It seeks first to capture the general tenor of recent spoken research findings through illustrative examples. The article then considers the sociocultural issues that arose when the relevance of these…

  3. Distinguish Spoken English from Written English: Rich Feature Analysis

    ERIC Educational Resources Information Center

    Tian, Xiufeng

    2013-01-01

    This article presents a feature analysis of four expository essays (Text A/B/C/D) written by secondary school students, with a focus on the differences between spoken and written language. Texts C and D are better written compared with the other two (Texts A & B), which are considered more spoken in their language use. The language features are…

  4. Beyond Single Words: The Most Frequent Collocations in Spoken English

    ERIC Educational Resources Information Center

    Shin, Dongkwang; Nation, Paul

    2008-01-01

    This study presents a list of the highest frequency collocations of spoken English based on carefully applied criteria. In the literature, more than forty terms have been used for designating multi-word units, which are generally not well defined. To avoid this confusion, six criteria are strictly applied. The ten million word BNC spoken section…

  5. The Dynamics of Lexical Competition during Spoken Word Recognition

    ERIC Educational Resources Information Center

    Magnuson, James S.; Dixon, James A.; Tanenhaus, Michael K.; Aslin, Richard N.

    2007-01-01

    The sounds that make up spoken words are heard in a series and must be mapped rapidly onto words in memory because their elements, unlike those of visual words, cannot simultaneously exist or persist in time. Although theories agree that the dynamics of spoken word recognition are important, they differ in how they treat the nature of the…

  6. Written vs. spoken eyewitness accounts: does modality of testing matter?

    PubMed

    Sauerland, Melanie; Sporer, Siegfried L

    2011-01-01

    The aim of the current study was to test whether the modality of testing (written vs. spoken) matters when obtaining eyewitness statements. Writing puts higher demands on working memory than speaking because writing is slower, less practiced, and associated with the activation of graphemic representations for spelling words (Kellogg, 2007). Therefore, we hypothesized that witnesses' spoken reports should elicit more details than written ones. Participants (N = 192) watched a staged crime video and then gave a spoken or written description of the course of action and the perpetrator. As expected, spoken crime and perpetrator descriptions contained more details than written ones, although there was no difference in accuracy. However, the most critical (central) crime and perpetrator information was both more extensive and more accurate when witnesses gave spoken descriptions. In addition to cognitive factors, social factors are considered which may drive the effect. PMID:22009462

  7. Presentation video retrieval using automatically recovered slide and spoken text

    NASA Astrophysics Data System (ADS)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
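The retrieval setup described above can be sketched with a toy tf-idf index, where each video's "document" combines recovered slide text and ASR output. The video IDs, tokenization, and scoring are hypothetical simplifications of the retrieval experiments, not the paper's system:

```python
import math
from collections import Counter

def tfidf_index(docs):
    """Build a toy tf-idf index over lecture-video text.

    docs: {video_id: text}, where each text concatenates recovered
    slide text and automatically recognized speech.
    """
    tokenized = {vid: text.lower().split() for vid, text in docs.items()}
    df = Counter()
    for toks in tokenized.values():
        df.update(set(toks))  # document frequency: one count per video
    n = len(docs)
    return {
        vid: {t: c * math.log(n / df[t]) for t, c in Counter(toks).items()}
        for vid, toks in tokenized.items()
    }

def search(query, index):
    """Return the video whose text best matches the query terms."""
    terms = query.lower().split()
    scores = {vid: sum(vec.get(t, 0.0) for t in terms)
              for vid, vec in index.items()}
    return max(scores, key=scores.get)
```

Because slide text tends to be higher precision than ASR output, a real system might weight the two text sources differently before indexing.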

  8. What Do Second Language Listeners Know about Spoken Words? Effects of Experience and Attention in Spoken Word Processing

    ERIC Educational Resources Information Center

    Trofimovich, Pavel

    2008-01-01

    With a goal of investigating psycholinguistic bases of spoken word processing in a second language (L2), this study examined L2 learners' sensitivity to phonological information in spoken L2 words as a function of their L2 experience and attentional demands of a learning task. Fifty-two Chinese learners of English who differed in amount of L2…

  9. Developing a corpus of spoken language variability

    NASA Astrophysics Data System (ADS)

    Carmichael, Lesley; Wright, Richard; Wassink, Alicia Beckford

    2003-10-01

    We are developing a novel, searchable corpus as a research tool for investigating phonetic and phonological phenomena across various speech styles. Five speech styles have been well studied independently in previous work: reduced (casual), careful (hyperarticulated), citation (reading), Lombard effect (speech in noise), and "motherese" (child-directed speech). Few studies to date have collected a wide range of styles from a single set of speakers, and fewer yet have provided publicly available corpora. The pilot corpus includes recordings of (1) a set of speakers participating in a variety of tasks designed to elicit the five speech styles, and (2) casual peer conversations and wordlists to illustrate regional vowels. The data include high-quality recordings and time-aligned transcriptions linked to text files that can be queried. Initial measures drawn from the database provide comparison across speech styles along the following acoustic dimensions: MLU (changes in unit duration); relative intra-speaker intensity changes (mean and dynamic range); and intra-speaker pitch values (minimum, maximum, mean, range). The corpus design will allow for a variety of analyses requiring control of demographic and style factors, including hyperarticulation variety, disfluencies, intonation, discourse analysis, and detailed spectral measures.
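The intra-speaker pitch measures listed above (minimum, maximum, mean, range) can be sketched as a small summary function; the input is assumed to be F0 estimates in Hz for one speaker's voiced frames:

```python
def pitch_stats(f0):
    """Summarize intra-speaker pitch from a list of F0 values (Hz).

    Returns the minimum, maximum, mean, and range, matching the
    per-speaker measures drawn from the corpus database.
    """
    lo, hi = min(f0), max(f0)
    return {"min": lo, "max": hi, "mean": sum(f0) / len(f0), "range": hi - lo}
```

Comparing these summaries across the five elicited styles (e.g., Lombard speech vs. casual speech for the same speaker) is the kind of style contrast the corpus is designed to support.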

  10. Automatic discrimination of emotion from spoken Finnish.

    PubMed

    Toivanen, Juhani; Väyrynen, Eero; Seppänen, Tapio

    2004-01-01

    In this paper, experiments on the automatic discrimination of basic emotions from spoken Finnish are described. For the purpose of the study, a large emotional speech corpus of Finnish was collected; 14 professional actors acted as speakers, and simulated four primary emotions when reading out a semantically neutral text. More than 40 prosodic features were derived and automatically computed from the speech samples. Two application scenarios were tested: the first scenario was speaker-independent for a small domain of speakers while the second scenario was completely speaker-independent. Human listening experiments were conducted to assess the perceptual adequacy of the emotional speech samples. Statistical classification experiments indicated that, with the optimal combination of prosodic feature vectors, automatic emotion discrimination performance close to human emotion recognition ability was achievable. PMID:16038449

  11. Deep Bottleneck Features for Spoken Language Identification

    PubMed Central

    Jiang, Bing; Song, Yan; Wei, Si; Liu, Jun-Hua; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed. PMID:24983963

  12. Morphometric Correlates of the Ovary and Ovulatory Corpora in the Bowhead Whale, Balaena mysticetus.

    PubMed

    Tarpley, Raymond J; Hillmann, Daniel J; George, John C; Zeh, Judith E; Suydam, Robert S

    2016-06-01

    Gross morphology and morphometry of the bowhead whale ovary, including ovulatory corpora, were investigated in 50 whales from the Chukchi and Beaufort seas off the coast of Alaska. Using the presence of ovarian corpora to define sexual maturity, 23 sexually immature whales (7.6-14.2 m total body length) and 27 sexually mature whales (14.2-17.7 m total body length) were identified. Ovary pair weights ranged from 0.38 to 2.45 kg and 2.92 to 12.02 kg for sexually immature and sexually mature whales, respectively. In sexually mature whales, corpora lutea (CLs) and/or large corpora albicantia (CAs) projected beyond ovary surfaces. CAs became increasingly less interruptive of the surface contour as they regressed, while remaining identifiable within transverse sections of the ovarian cortex. CLs formed large globular bodies, often with a central lumen, featuring golden parenchymas enfolded within radiating fibrous cords. CAs, sometimes vesicular, featured a dense fibrous core with outward fibrous projections through the former luteal tissue. CLs (never more than one per ovary pair) ranged from 6.7 to 15.0 cm in diameter in 13 whales. Fetuses were confirmed in nine of the 13 whales, with the associated CLs ranging from 8.3 to 15.0 cm in diameter. CLs from four whales where a fetus was not detected ranged from 6.7 to 10.6 cm in diameter. CA totals ranged from 0 to 22 for any single ovary, and from 1 to 41 for an ovary pair. CAs measured from 0.3 to 6.3 cm in diameter, and smaller corpora were more numerous, suggesting an accumulating record of ovulation. Neither the left nor the right ovary dominated in the production of corpora. Anat Rec, 299:769-797, 2016. © 2016 Wiley Periodicals, Inc. PMID:26917353

  13. Complete transection of the urethra and corpora cavernosa: a complication after laparoscopic repair (TEP) of an inguinal hernia.

    PubMed

    Rehme, C; Rübben, H; Heß, J

    2016-06-01

    Complete transection of both corpora cavernosa and the urethra is a very rare condition in urology. We report the case of a 59-year-old man with complete transection of the corpora cavernosa and the urethra during a laparoscopic repair of a recurrent inguinal hernia. PMID:25943096

  14. The Use of General and Specialized Corpora as Reference Sources for Academic English Writing: A Case Study

    ERIC Educational Resources Information Center

    Chang, Ji-Yeon

    2014-01-01

    Corpora have been suggested as valuable sources for teaching English for academic purposes (EAP). Since previous studies have mainly focused on corpus use in classroom settings, more research is needed to reveal how students react to using corpora on their own and what should be provided to help them become autonomous corpus users, considering…

  15. 31 CFR 358.6 - What is the procedure for converting bearer corpora and detached bearer coupons to book-entry?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... bearer corpora and detached bearer coupons to book-entry? 358.6 Section 358.6 Money and Finance: Treasury... PUBLIC DEBT REGULATIONS GOVERNING BOOK-ENTRY CONVERSION OF BEARER CORPORA AND DETACHED BEARER COUPONS § 358.6 What is the procedure for converting bearer corpora and detached bearer coupons to...

  16. Direct and Indirect Access to Corpora: An Exploratory Case Study Comparing Students' Error Correction and Learning Strategy Use in L2 Writing

    ERIC Educational Resources Information Center

    Yoon, Hyunsook; Jo, Jung Won

    2014-01-01

    Studies on students' use of corpora in L2 writing have demonstrated the benefits of corpora not only as a linguistic resource to improve their writing abilities but also as a cognitive tool to develop their learning skills and strategies. Most of the corpus studies, however, adopted either direct use or indirect use of corpora by students,…

  17. Automatic concept recognition using the Human Phenotype Ontology reference and test suite corpora

    PubMed Central

    Groza, Tudor; Köhler, Sebastian; Doelken, Sandra; Collier, Nigel; Oellrich, Anika; Smedley, Damian; Couto, Francisco M; Baynam, Gareth; Zankl, Andreas; Robinson, Peter N.

    2015-01-01

    Concept recognition tools rely on the availability of textual corpora to assess their performance and enable the identification of areas for improvement. Typically, corpora are developed for specific purposes, such as gene name recognition. Gene and protein name identification are longstanding goals of biomedical text mining, and therefore a number of different corpora exist. However, phenotypes only recently became an entity of interest for specialized concept recognition systems, and hardly any annotated text is available for performance testing and training. Here, we present a unique corpus, capturing text spans from 228 abstracts manually annotated with Human Phenotype Ontology (HPO) concepts and harmonized by three curators, which can be used as a reference standard for free text annotation of human phenotypes. Furthermore, we developed a test suite for standardized concept recognition error analysis, incorporating 32 different types of test cases corresponding to 2164 HPO concepts. Finally, three established phenotype concept recognizers (NCBO Annotator, OBO Annotator and Bio-LarK CR) were comprehensively evaluated, and results are reported against both the text corpus and the test suites. The gold standard and test suites corpora are available from http://bio-lark.org/hpo_res.html. Database URL: http://bio-lark.org/hpo_res.html PMID:25725061

  18. Language Teachers with Corpora in Mind: From Starting Steps to Walking Tall

    ERIC Educational Resources Information Center

    Chambers, Angela; Farr, Fiona; O'Riordan, Stephanie

    2011-01-01

    Although the use of corpus data in language learning is a steadily growing research area, direct access to corpora by teachers and learners and the use of the data in the classroom are developing slowly. This paper explores how teachers can integrate corpus approaches in their practice. After situating the topic in relation to current research and…

  19. Gonadotropin binding sites in human ovarian follicles and corpora lutea during the menstrual cycle

    SciTech Connect

    Shima, K.; Kitayama, S.; Nakano, R.

    1987-05-01

    Gonadotropin binding sites were localized by autoradiography after incubation of human ovarian sections with ¹²⁵I-labeled gonadotropins. The binding sites for ¹²⁵I-labeled human follicle-stimulating hormone (¹²⁵I-hFSH) were identified in the granulosa cells and in the newly formed corpora lutea. The ¹²⁵I-labeled human luteinizing hormone (¹²⁵I-hLH) binding to the thecal cells increased during follicular maturation, and a dramatic increase was preferentially observed in the granulosa cells of the large preovulatory follicle. In the corpora lutea, the binding of ¹²⁵I-hLH increased from the early luteal phase and decreased toward the late luteal phase. The changes in 3β-hydroxysteroid dehydrogenase activity in the corpora lutea corresponded to the ¹²⁵I-hLH binding. Thus, the changes in gonadotropin binding sites in the follicles and corpora lutea during the menstrual cycle may play an important role in regulating human ovarian function.

  20. Tracking Learners' Actual Uses of Corpora: Guided vs Non-Guided Corpus Consultation

    ERIC Educational Resources Information Center

    Perez-Paredes, Pascual; Sanchez-Tornel, Maria; Calero, Jose Maria Alcaraz; Jimenez, Pilar Aguado

    2011-01-01

    Much of the research into language learners' use of corpus resources has been conducted by means of indirect observation methodologies, like questionnaires or self-reports. While this type of study provides an excellent opportunity to reflect on the benefits and limitations of using corpora to teach and learn language, the use of indirect…

  1. Training L2 Writers to Reference Corpora as a Self-Correction Tool

    ERIC Educational Resources Information Center

    Quinn, Cynthia

    2015-01-01

    Corpora have the potential to support the L2 writing process at the discourse level in contrast to the isolated dictionary entries that many intermediate writers rely on. To take advantage of this resource, learners need to be trained, which involves practising corpus research and referencing skills as well as learning to make data-based…

  2. Corpora in Language Teaching and Learning: Potential, Evaluation, Challenges. English Corpus Linguistics. Volume 13

    ERIC Educational Resources Information Center

    Breyer, Yvonne Alexandra

    2011-01-01

    This book highlights the potential and the challenges of corpora in language education with a particular focus on the teacher's perspective. For this purpose, the study explores the relevance of the corpus approach to central paradigms underlying contemporary language education. Furthermore, a critical analysis investigates the persisting gap…

  3. What Does "Informed Consent" Mean in the Internet Age? Publishing Sign Language Corpora as Open Content

    ERIC Educational Resources Information Center

    Crasborn, Onno

    2010-01-01

    Recent technologies in the area of video and the Internet are allowing the creation and online publication of large signed language corpora. While primarily addressing the needs of linguists and other researchers, these data collections, because of their unique character in history, are also made accessible online for a general audience. This "open access"…

  4. Learning in Parallel: Using Parallel Corpora to Enhance Written Language Acquisition at the Beginning Level

    ERIC Educational Resources Information Center

    Bluemel, Brody

    2014-01-01

    This article illustrates the pedagogical value of incorporating parallel corpora in foreign language education. It explores the development of a Chinese/English parallel corpus designed specifically for pedagogical application. The corpus tool was created to aid language learners in reading comprehension and writing development by making foreign…

  5. Corpora Processing and Computational Scaffolding for a Web-Based English Learning Environment: The CANDLE Project

    ERIC Educational Resources Information Center

    Liou, Hsien-Chin; Chang, Jason S.; Chen, Hao-Jan; Lin, Chih-Cheng; Liaw, Meei-Ling; Gao, Zhao-Ming; Jang, Jyh-Shing Roger; Yeh, Yuli; Chuang, Thomas C.; You, Geeng-Neng

    2006-01-01

    This paper describes the development of an innovative web-based environment for English language learning with advanced data-driven and statistical approaches. The project uses various corpora, including a Chinese-English parallel corpus ("Sinorama") and various natural language processing (NLP) tools to construct effective English learning tasks…

  6. The Application of Corpora in Teaching Grammar: The Case of English Relative Clause

    ERIC Educational Resources Information Center

    Sahragard, Rahman; Kushki, Ali; Ansaripour, Ehsan

    2013-01-01

    The study was conducted to see if the provision of implementing corpora on English relative clauses would prove useful for Iranian EFL learners or not. Two writing classes were held for the participants of intermediate level. A record of 15 writing samples produced by each participant was kept in the form of a portfolio. Participants'…

  7. Stimulus-based similarity and the recognition of spoken words

    NASA Astrophysics Data System (ADS)

    Auer, Edward T.

    2003-10-01

    Spoken word recognition has been hypothesized to be achieved via a competitive process amongst perceptually similar lexical candidates in the mental lexicon. In this process, lexical candidates are activated as a function of their perceived similarity to the spoken stimulus. The evidence supporting this hypothesis has largely come from studies of auditory word recognition. In this talk, evidence from our studies of visual spoken word recognition will be reviewed. Visual speech provides the opportunity to highlight the importance of stimulus-driven perceptual similarity because it presents a different pattern of segmental similarity than is afforded by auditory speech degraded by noise. Our results are consistent with stimulus-driven activation followed by competition as a general spoken word recognition mechanism. In addition, results will be presented from recent investigations of the direct prediction of perceptual similarity from measurements of spoken stimuli. High levels of correlation have been observed between the predicted and perceptually obtained distances for a large set of spoken consonants. These results support the hypothesis that the perceptual structure of English consonants and vowels is predicted by stimulus structure without the need for an intervening level of abstract linguistic representation. [Research supported by NSF IIS 9996088 and NIH DC04856.]

  8. Luteinizing hormone receptors in human ovarian follicles and corpora lutea during the menstrual cycle

    SciTech Connect

    Yamoto, M.; Nakano, R.; Iwasaki, M.; Ikoma, H.; Furukawa, K.

    1986-08-01

    The binding of ¹²⁵I-labeled human luteinizing hormone (hLH) to the 2000-g fraction of human ovarian follicles and corpora lutea during the entire menstrual cycle was examined. Specific high-affinity, low-capacity receptors for hLH were demonstrated in the 2000-g fraction of both follicles and corpora lutea. Specific binding of ¹²⁵I-labeled hLH to follicular tissue increased from the early follicular phase to the ovulatory phase. Specific binding of ¹²⁵I-labeled hLH to luteal tissue increased from the early luteal phase to the midluteal phase and decreased towards the late luteal phase. The results of the present study indicate that the increase and decrease in receptors for hLH during the menstrual cycle might play an important role in the regulation of the ovarian cycle.

  9. A linear-RBF multikernel SVM to classify big text corpora.

    PubMed

    Romero, R; Iglesias, E L; Borrajo, L

    2015-01-01

    Support vector machine (SVM) is a powerful technique for classification. However, SVM is not suitable for classification of large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on the SVM and other kernel methods emphasize the need to consider multiple kernels or parameterizations of kernels because they provide greater flexibility. This paper presents a multikernel SVM to manage highly dimensional data, providing an automatic parameterization with low computational cost and improving results against SVMs parameterized under a brute-force search. The model consists of spreading the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while the training is significantly faster than several other SVM classifiers. PMID:25879039
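
    The combined linear-RBF kernel idea can be illustrated with a minimal sketch using scikit-learn's precomputed-kernel interface. The unweighted kernel sum and the synthetic data below are assumptions for illustration; the paper's clustered "term slice" construction is more elaborate.

```python
# Sketch: a linear + RBF multikernel SVM via scikit-learn's precomputed-kernel
# interface. The unweighted kernel sum and synthetic data are illustrative
# assumptions, not the paper's exact construction.
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def multikernel(A, B, gamma=0.1):
    # Summing two valid kernels yields another valid kernel.
    return linear_kernel(A, B) + rbf_kernel(A, B, gamma=gamma)

clf = SVC(kernel="precomputed")
clf.fit(multikernel(X_tr, X_tr), y_tr)          # Gram matrix: train x train
acc = clf.score(multikernel(X_te, X_tr), y_te)  # rows = test, cols = train
print(f"accuracy: {acc:.2f}")
```

    With `kernel="precomputed"`, the classifier receives Gram matrices directly, so any kernel combination can be swapped in without changing the SVM itself.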

  10. Exploring the application of deep learning techniques on medical text corpora.

    PubMed

    Minarro-Giménez, José Antonio; Marín-Alonso, Oscar; Samwald, Matthias

    2014-01-01

    With the rapidly growing amount of biomedical literature it becomes increasingly difficult to find relevant information quickly and reliably. In this study we applied the word2vec deep learning toolkit to medical corpora to test its potential for improving the accessibility of medical knowledge. We evaluated the efficiency of word2vec in identifying properties of pharmaceuticals based on mid-sized, unstructured medical text corpora without any additional background knowledge. Properties included relationships to diseases ('may treat') or physiological processes ('has physiological effect'). We evaluated the relationships identified by word2vec through comparison with the National Drug File - Reference Terminology (NDF-RT) ontology. The results of our first evaluation were mixed, but helped us identify further avenues for employing deep learning technologies in medical information retrieval, as well as using them to complement curated knowledge captured in ontologies and taxonomies. PMID:25160253

  12. Natural discourse reference generation reduces cognitive load in spoken systems

    PubMed Central

    Campana, E.; Tanenhaus, M. K.; Allen, J. F.; Remington, R.

    2014-01-01

    The generation of referring expressions is a central topic in computational linguistics. Natural referring expressions – both definite references like ‘the baseball cap’ and pronouns like ‘it’ – are dependent on discourse context. We examine the practical implications of context-dependent referring expression generation for the design of spoken systems. Currently, not all spoken systems have the goal of generating natural referring expressions. Many researchers believe that the context-dependency of natural referring expressions actually makes systems less usable. Using the dual-task paradigm, we demonstrate that generating natural referring expressions that are dependent on discourse context reduces cognitive load. Somewhat surprisingly, we also demonstrate that practice does not improve cognitive load in systems that generate consistent (context-independent) referring expressions. We discuss practical implications for spoken systems as well as other areas of referring expression generation. PMID:25328423

  13. Incremental comprehension of spoken quantifier sentences: Evidence from brain potentials.

    PubMed

    Freunberger, Dominik; Nieuwland, Mante S

    2016-09-01

    Do people incrementally incorporate the meaning of quantifier expressions to understand an unfolding sentence? Most previous studies concluded that quantifiers do not immediately influence how a sentence is understood, based on the observation that online N400-effects differed from offline plausibility judgments. Those studies, however, used serial visual presentation (SVP), which involves unnatural reading. In the current ERP-experiment, we presented spoken positive and negative quantifier sentences ("Practically all/practically no postmen prefer delivering mail, when the weather is good/bad during the day"). Different from results obtained in a previously reported SVP-study (Nieuwland, 2016), sentence truth-value N400 effects occurred in positive and negative quantifier sentences alike, reflecting fully incremental quantifier comprehension. This suggests that the prosodic information available during spoken language comprehension supports the generation of online predictions for upcoming words and that, at least for quantifier sentences, comprehension of spoken language may proceed more incrementally than comprehension during SVP reading. PMID:27346365

  14. Combining MEDLINE and publisher data to create parallel corpora for the automatic translation of biomedical text

    PubMed Central

    2013-01-01

    Background Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. Results We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. Conclusions We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts. PMID:23631733
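
    The workflow described — pairing titles with retrieved abstract text to train Moses — relies on the plain-text, line-aligned corpus format expected by Moses-style SMT training: line i of the source file translates line i of the target file. A minimal sketch (the title pairs are invented stand-ins, not real MEDLINE records):

```python
# Sketch: writing a line-aligned parallel corpus in the plain-text format
# used for Moses-style SMT training. The English/French title pairs below
# are invented stand-ins, not real MEDLINE data.
pairs = [
    ("Effects of aspirin on platelet aggregation.",
     "Effets de l'aspirine sur l'agrégation plaquettaire."),
    ("A randomized trial of statin therapy.",
     "Un essai randomisé du traitement par statines."),
]

with open("train.en", "w", encoding="utf-8") as en_f, \
     open("train.fr", "w", encoding="utf-8") as fr_f:
    for en_line, fr_line in pairs:
        en_f.write(en_line + "\n")   # source side, one sentence per line
        fr_f.write(fr_line + "\n")   # target side, same line number
```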

  15. Usable, real-time, interactive spoken language systems

    NASA Astrophysics Data System (ADS)

    Makhoul, J.; Bates, M.

    1994-09-01

    The objective of this project was to make the next significant advance in human-machine interaction by developing a spoken language system (SLS) that operates in real-time while maintaining high accuracy on cost-effective COTS (commercial, off-the-shelf) hardware. The system has a highly interactive user interface, is largely user independent, and is easily portable to new applications. The BBN HARC spoken language system consists of the Byblos speech recognition system and the Delphi or HUM language understanding system.

  16. Accessory corpora lutea formation in pregnant Hokkaido sika deer (Cervus nippon yesoensis) investigated by examination of ovarian dynamics and steroid hormone concentrations.

    PubMed

    Yanagawa, Yojiro; Matsuura, Yukiko; Suzuki, Masatsugu; Saga, Shin-Ichi; Okuyama, Hideto; Fukui, Daisuke; Bando, Gen; Nagano, Masashi; Katagiri, Seiji; Takahashi, Yoshiyuki; Tsubota, Toshio

    2015-01-01

    Generally, sika deer conceive a single fetus, but approximately 80% of pregnant females have two corpora lutea (CLs). The function of the accessory CL (ACL) is unknown; moreover, the process of ACL formation is unclear, and understanding it is necessary to determine the ACL's role. To elucidate the process of ACL formation, the ovarian dynamics of six adult Hokkaido sika deer females were examined ultrasonographically together with peripheral estradiol-17β and progesterone concentrations. ACLs formed in three females that conceived at the first estrus of the breeding season, but not in those females that conceived at the second estrus. After copulation, postconception ovulation of the dominant follicle of the first wave is induced by an increase in estradiol-17β, which leads to formation of an ACL. A relatively low concentration of progesterone after the first estrus of the breeding season is considered to be responsible for the increase in estradiol-17β after copulation. PMID:25482110

  18. Redundancy in electronic health record corpora: analysis, impact on text mining performance and mitigation strategies

    PubMed Central

    2013-01-01

    Background The increasing availability of Electronic Health Record (EHR) data and specifically free-text patient notes presents opportunities for phenotype extraction. Text-mining methods in particular can help disease modeling by mapping named-entity mentions to terminologies and clustering semantically related terms. EHR corpora, however, exhibit specific statistical and linguistic characteristics when compared with corpora in the biomedical literature domain. We focus on copy-and-paste redundancy: clinicians typically copy and paste information from previous notes when documenting a current patient encounter. Thus, within a longitudinal patient record, one expects to observe heavy redundancy. In this paper, we ask three research questions: (i) How can redundancy be quantified in large-scale text corpora? (ii) Conventional wisdom is that larger corpora yield better results in text mining. But how does the observed EHR redundancy affect text mining? Does such redundancy introduce a bias that distorts learned models? Or does the redundancy introduce benefits by highlighting stable and important subsets of the corpus? (iii) How can one mitigate the impact of redundancy on text mining? Results We analyze a large-scale EHR corpus and quantify redundancy both in terms of word and semantic concept repetition. We observe redundancy levels of about 30% and non-standard distribution of both words and concepts. We measure the impact of redundancy on two standard text-mining applications: collocation identification and topic modeling. We compare the results of these methods on synthetic data with controlled levels of redundancy and observe significant performance variation. Finally, we compare two mitigation strategies to avoid redundancy-induced bias: (i) a baseline strategy, keeping only the last note for each patient in the corpus; (ii) removing redundant notes with an efficient fingerprinting-based algorithm. For text mining, preprocessing the EHR corpus with…
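
    Fingerprinting-based redundancy detection of the kind described can be sketched with hashed word n-grams (shingles). The 5-gram size, the overlap measure, and the example notes below are assumptions for illustration, not the paper's exact algorithm.

```python
# Sketch: quantifying copy-and-paste redundancy between clinical notes with
# hashed word 5-gram fingerprints (shingles). The shingle size, the overlap
# measure, and the example notes are illustrative assumptions.
def fingerprints(text, n=5):
    words = text.lower().split()
    return {hash(" ".join(words[i:i + n])) for i in range(len(words) - n + 1)}

def redundancy(note, earlier_note, n=5):
    """Fraction of the note's n-gram fingerprints already in the earlier note."""
    fp, prev = fingerprints(note, n), fingerprints(earlier_note, n)
    return len(fp & prev) / len(fp) if fp else 0.0

note1 = "patient reports chest pain radiating to the left arm since morning"
note2 = ("follow up visit patient reports chest pain radiating to the left arm "
         "since morning now improving with rest")
print(round(redundancy(note2, note1), 2))  # prints 0.5
```

    A deduplication pass would drop (or down-weight) any note whose redundancy against an earlier note in the same patient record exceeds a threshold.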

  19. How does the provision of semantic information influence the lexicalization of new spoken words?

    PubMed

    Hawkins, Erin A; Rastle, Kathleen

    2016-07-01

    The integration of a novel spoken word with existing lexical items can proceed within 24 hours of learning its phonological form. However, previous studies have reported that lexical integration of new spoken words can be delayed if semantic information is provided during learning. One possibility is that this delay in lexical integration reflects reduced phonological processing during learning as a consequence of the need to learn the semantic associations. In the current study, adult participants learnt novel words via a phoneme monitoring task, in which half of the words were associated with a picture referent, and half were phonological forms only. Critically, participants were instructed to learn the forms of the novel words, with no explicit goal to learn the word-picture mappings. Results revealed significant lexical competition effects emerging one week after consolidation, which were equivalent for the picture-present and form-only conditions. Tests of declarative memory and shadowing showed equivalent performance for picture-present and form-only words, despite participants showing good knowledge of the picture associations immediately after learning. These data support the contention that, provided phonological information is recruited sufficiently well during learning, the provision of semantic information does not slow the time-course of lexical integration. PMID:26241013

  20. Formant frequency analysis of children's spoken and sung vowels using sweeping fundamental frequency production.

    PubMed

    White, P

    1999-12-01

    High-pitched productions present difficulties in formant frequency analysis due to wide harmonic spacing and poorly defined formants. As a consequence, there is little reliable data regarding children's spoken or sung vowel formants. Twenty-nine 11-year-old Swedish children were asked to produce 4 sustained spoken and sung vowels. In order to circumvent the problem of wide harmonic spacing, F1 and F2 measurements were taken from vowels produced with a sweeping F0. Experienced choir singers were selected as subjects in order to minimize the larynx height adjustments associated with pitch variation in less skilled subjects. Results showed significantly higher formant frequencies for speech than for singing. Formants were consistently higher in girls than in boys suggesting longer vocal tracts in these preadolescent boys. Furthermore, formant scaling demonstrated vowel dependent differences between boys and girls suggesting non-uniform differences in male and female vocal tract dimensions. These vowel-dependent sex differences were not consistent with adult data. PMID:10622522

  1. Instructional Benefits of Spoken Words: A Review of Cognitive Load Factors

    ERIC Educational Resources Information Center

    Kalyuga, Slava

    2012-01-01

    Spoken words have always been an important component of traditional instruction. With the development of modern educational technology tools, spoken text more often replaces or supplements written or on-screen textual representations. However, there could be a cognitive load cost involved in this trend, as spoken words can have both benefits and…

  2. Spoken Language Processing Model: Bridging Auditory and Language Processing to Guide Assessment and Intervention

    ERIC Educational Resources Information Center

    Medwetsky, Larry

    2011-01-01

    Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…

  3. On-Line Orthographic Influences on Spoken Language in a Semantic Task

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Perre, Laetitia; Dufau, Stephane; Ziegler, Johannes C.

    2009-01-01

    Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to meta-phonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a…

  4. Discourse Markers and Spoken English: Nonnative Use in the Turkish EFL Setting

    ERIC Educational Resources Information Center

    Asik, Asuman; Cephe, Pasa Tevfik

    2013-01-01

    This study investigated the production of discourse markers by non-native speakers of English and their occurrences in their spoken English by comparing them with those used in native speakers' spoken discourse. Because discourse markers (DMs) are significant items in spoken discourse of native speakers, a study about the use of DMs by nonnative…

  5. Spoken Grammar: What Is It and How Can We Teach It?

    ERIC Educational Resources Information Center

    McCarthy, Michael; Carter, Ronald

    1995-01-01

    This article argues that consideration by teachers of spoken English shows that learners need to be given choices between written and spoken grammars, that the interpersonal implications of spoken grammars are important, and that methodologically inductive learning may be more appropriate than the presentation-practice-production approaches…

  6. Spoken Persuasive Discourse Abilities of Adolescents with Acquired Brain Injury

    ERIC Educational Resources Information Center

    Moran, Catherine; Kirk, Cecilia; Powell, Emma

    2012-01-01

    Purpose: The aim of this study was to examine the performance of adolescents with acquired brain injury (ABI) during a spoken persuasive discourse task. Persuasive discourse is frequently used in social and academic settings and is of importance in the study of adolescent language. Method: Participants included 8 adolescents with ABI and 8 peers…

  7. A Uniform Identity: Schoolgirl Snapshots and the Spoken Visual

    ERIC Educational Resources Information Center

    Spencer, Stephanie

    2007-01-01

    This article discusses the possibility of expanding our understanding of the visual to include the "spoken visual" within oral history analysis. It suggests that by adding a further reading, that of the visualized body, to the voice-centred relational method, we can consider the meaning of the uniformed body for the individual. It uses as a…

  8. Functions of Japanese Exemplifying Particles in Spoken and Written Discourse

    ERIC Educational Resources Information Center

    Taylor, Yuki Io

    2010-01-01

    This dissertation examines how the Japanese particles "nado", "toka", and "tari" which all may be translated as "such as", "etc.", or "like" behave differently in written and spoken discourse. According to traditional analyses (e.g. Martin, 1987), these particles are assumed to be Exemplifying Particles (EP) used to provide concrete examples to…

  9. Time Pressure and Phonological Advance Planning in Spoken Production

    ERIC Educational Resources Information Center

    Damian, Markus F.; Dumay, Nicolas

    2007-01-01

    Current accounts of spoken production debate the extent to which speakers plan ahead. Here, we investigated whether the scope of phonological planning is influenced by changes in time pressure constraints. The first experiment used a picture-word interference task and showed that picture naming latencies were shorter when word distractors shared…

  10. Lexical Competition in Non-Native Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Weber, Andrea; Cutler, Anne

    2004-01-01

    Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name ("pencil," given target "panda") than on less confusable distractors…

  11. Context and Spoken Word Recognition in a Novel Lexicon

    ERIC Educational Resources Information Center

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2008-01-01

    Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments,…

  12. "Context and Spoken Word Recognition in a Novel Lexicon": Correction

    ERIC Educational Resources Information Center

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2009-01-01

    Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…

  13. Animated and Static Concept Maps Enhance Learning from Spoken Narration

    ERIC Educational Resources Information Center

    Adesope, Olusola O.; Nesbit, John C.

    2013-01-01

    An animated concept map represents verbal information in a node-link diagram that changes over time. The goals of the experiment were to evaluate the instructional effects of presenting an animated concept map concurrently with semantically equivalent spoken narration. The study used a 2 x 2 factorial design in which an animation factor (animated…

  14. Pedagogy for Liberation: Spoken Word Poetry in Urban Schools

    ERIC Educational Resources Information Center

    Fiore, Mia

    2015-01-01

    The Black Arts Movement of the 1960s and 1970s, hip hop of the 1980s and early 1990s, and spoken word poetry have each attempted to initiate the dialogical process outlined by Paulo Freire as necessary in overturning oppression. Each art form has done this by critically engaging with the world and questioning dominant systems of power. However,…

  15. L2 Gender Facilitation and Inhibition in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Behney, Jennifer N.

    2011-01-01

    This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…

  16. Enduring Advantages of Early Cochlear Implantation for Spoken Language Development

    ERIC Educational Resources Information Center

    Geers, Anne E.; Nicholas, Johanna G.

    2013-01-01

    Purpose: In this article, the authors sought to determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12 and 38 months of age. Relative advantages of receiving a bilateral CI after age 4.5 years, better…

  17. Representation and Competition in the Perception of Spoken Words

    ERIC Educational Resources Information Center

    Gaskell, M. Gareth; Marslen-Wilson, William D.

    2002-01-01

    We present data from four experiments using cross-modal priming to examine the effects of competitor environment on lexical activation during the time course of the perception of a spoken word. The research is conducted from the perspective of a distributed model of speech perception and lexical representation, which focuses on activation at the…

  18. Reading Spoken Words: Orthographic Effects in Auditory Priming

    ERIC Educational Resources Information Center

    Chereau, Celine; Gaskell, M. Gareth; Dumay, Nicolas

    2007-01-01

    Three experiments examined the involvement of orthography in spoken word processing using a task--unimodal auditory priming with offset overlap--taken to reflect activation of prelexical representations. Two types of prime-target relationship were compared; both involved phonological overlap, but only one had a strong orthographic overlap (e.g.,…

  19. Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension

    ERIC Educational Resources Information Center

    Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica

    2015-01-01

    When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…

  20. Lexical Representation of Phonological Variation in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Ranbom, Larissa J.; Connine, Cynthia M.

    2007-01-01

    There have been a number of mechanisms proposed to account for recognition of phonological variation in spoken language. Five of these mechanisms were considered here, including underspecification, inference, feature parsing, tolerance, and a frequency-based representational account. A corpus analysis and five experiments using the nasal flap…

  1. A Comparison between Written and Spoken Narratives in Aphasia

    ERIC Educational Resources Information Center

    Behrns, Ingrid; Wengelin, Asa; Broberg, Malin; Hartelius, Lena

    2009-01-01

    The aim of the present study was to explore how a personal narrative told by a group of eight persons with aphasia differed between written and spoken language, and to compare this with findings from 10 participants in a reference group. The stories were analysed through holistic assessments made by 60 participants without experience of aphasia…

  2. Attitudes towards Literary Tamil and Standard Spoken Tamil in Singapore

    ERIC Educational Resources Information Center

    Saravanan, Vanithamani

    2007-01-01

    This is the first empirical study that focused on attitudes towards two varieties of Tamil, Literary Tamil (LT) and Standard Spoken Tamil (SST), with the multilingual state of Singapore as the backdrop. The attitudes of 46 Singapore Tamil teachers towards speakers of LT and SST were investigated using the matched-guise approach along with…

  3. The Impact of Orthographic Consistency on German Spoken Word Identification

    ERIC Educational Resources Information Center

    Beyermann, Sandra; Penke, Martina

    2014-01-01

    An auditory lexical decision experiment was conducted to find out whether sound-to-spelling consistency has an impact on German spoken word processing, and whether such an impact is different at different stages of reading development. Four groups of readers (school children in the second, third and fifth grades, and university students)…

  4. Product Evaluation: A Review of outSPOKEN for Windows.

    ERIC Educational Resources Information Center

    Leventhal, Jay D.; Earl, Crista L.

    1998-01-01

    Evaluates outSPOKEN, a Windows 95 screen reader for people with visual disabilities. Evaluates the program for installation and documentation (good), Word 97 (good), WordPerfect 8.0 (less than good), and Internet Explorer and Navigator (a little less than good). Provides specific suggestions for improving performance and steps for selecting a…

  5. Annotation and Analyses of Temporal Aspects of Spoken Fluency

    ERIC Educational Resources Information Center

    Hilton, Heather

    2009-01-01

    This article presents the methodology adopted for transcribing and quantifying temporal fluency phenomena in a spoken L2 corpus (L2 English, French, and Italian by learners of different proficiency levels). The CHILDES suite is being used for transcription and analysis, and we have adapted the CHAT format in order to code disfluencies as precisely…

  6. Call and Responsibility: Critical Questions for Youth Spoken Word Poetry

    ERIC Educational Resources Information Center

    Weinstein, Susan; West, Anna

    2012-01-01

    In this article, Susan Weinstein and Anna West embark on a critical analysis of the maturing field of youth spoken word poetry (YSW). Through a blend of firsthand experience, analysis of YSW-related films and television, and interview data from six years of research, the authors identify specific dynamics that challenge young poets as they…

  7. Inferring Speaker Affect in Spoken Natural Language Communication

    ERIC Educational Resources Information Center

    Pon-Barry, Heather Roberta

    2013-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…

  8. Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on languages spoken by English learners (ELs) are: (1) Twenty most common EL languages, as reported in states' top five lists: SY 2013-14; (2) States,…

  9. Orthographic Facilitation Effects on Spoken Word Production: Evidence from Chinese

    ERIC Educational Resources Information Center

    Zhang, Qingfang; Weekes, Brendan Stuart

    2009-01-01

    The aim of this experiment was to investigate the time course of orthographic facilitation on picture naming in Chinese. We used a picture-word paradigm to investigate orthographic and phonological facilitation on monosyllabic spoken word production in native Mandarin speakers. Both the stimulus-onset asynchrony (SOA) and the picture-word…

  10. Modular fuzzy-neuro controller driven by spoken language commands.

    PubMed

    Pulasinghe, Koliya; Watanabe, Keigo; Izumi, Kiyotaka; Kiguchi, Kazuo

    2004-02-01

We present a methodology for controlling machines using spoken language commands. Two major problems with speech interfaces for machines are investigated: the interpretation of words with fuzzy implications, and out-of-vocabulary (OOV) words in natural conversation. The system proposed in this paper is designed to overcome both problems. It consists of a hidden Markov model (HMM) based automatic speech recognizer (ASR) with a keyword spotting system to capture the machine-sensitive words from running utterances, and a fuzzy-neural network (FNN) based controller to represent the words with fuzzy implications in spoken language commands. The significance of the words, i.e., their contextual meaning given the machine's current state, is introduced to the system to obtain output closer to the user's intent. Modularity is also considered, so that the methodology generalizes to systems with heterogeneous functions without diminishing performance. The proposed system is experimentally tested by navigating a mobile robot in real time using spoken language commands. PMID:15369072
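The keyword-spotting-plus-fuzzy-interpretation pipeline described in this abstract can be caricatured in a few lines. The vocabulary and membership values below are invented for illustration; a real system would use an HMM-based spotter and a trained FNN rather than a lookup table:

```python
# Toy stand-in for the HMM-based keyword spotter: keep only
# machine-sensitive words, silently dropping OOV tokens.
def spot_keywords(utterance, vocabulary):
    return [w for w in utterance.lower().split() if w in vocabulary]

# Invented fuzzy interpretations of speed adverbs as normalized
# motor commands (a trained FNN would learn such mappings).
FUZZY_SPEED = {"slowly": 0.2, "normally": 0.5, "quickly": 0.9}

def command_speed(utterance):
    words = spot_keywords(utterance, FUZZY_SPEED)
    return FUZZY_SPEED[words[-1]] if words else None  # last speed word wins
```

For example, "go slowly then quickly" yields the command value for "quickly", while an utterance with no in-vocabulary speed word yields `None`.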

  11. Associations among Play, Gesture and Early Spoken Language Acquisition

    ERIC Educational Resources Information Center

    Hall, Suzanne; Rumney, Lisa; Holler, Judith; Kidd, Evan

    2013-01-01

    The present study investigated the developmental interrelationships between play, gesture use and spoken language development in children aged 18-31 months. The children completed two tasks: (i) a structured measure of pretend (or "symbolic") play and (ii) a measure of vocabulary knowledge in which children have been shown to gesture.…

  12. An Analysis of Spoken Grammar: The Case for Production

    ERIC Educational Resources Information Center

    Mumford, Simon

    2009-01-01

    Corpus-based grammars, notably "Cambridge Grammar of English," give explicit information on the forms and use of native-speaker grammar, including spoken grammar. Native-speaker norms as a necessary goal in language teaching are contested by supporters of English as a Lingua Franca (ELF); however, this article argues for the inclusion of selected…

  13. SPOKEN MARATHI. BOOK I, FIRST-YEAR INTENSIVE COURSE.

    ERIC Educational Resources Information Center

    KAVADI, NARESH B.; SOUTHWORTH, FRANKLIN C.

    "SPOKEN MARATHI" PRESENTS THE BEGINNING STUDENT WITH THE BASIC PHONOLOGY AND STRUCTURE OF MODERN MARATHI. IT IS ASSUMED THAT THE STUDENT WILL SPEND MOST OF HIS STUDY TIME LISTENING TO AND SPEAKING THE LANGUAGE, EITHER WITH A NATIVE SPEAKER INSTRUCTOR OR WITH RECORDED MATERIALS. THIS TEXT IS DESIGNED TO PROVIDE MATERIAL FOR A ONE-YEAR INTENSIVE…

  14. Visible Thought: Deaf Children's Use of Signed & Spoken Private Speech.

    ERIC Educational Resources Information Center

    Jamieson, Janet R.

    1995-01-01

    Examines from a Vygotskian perspective deaf children's private speech, i.e., speech that is spoken aloud (or visibly performed) but that is addressed to no particular person. Findings are consistent with Vygotsky's notion of the robustness of the phenomenon and its ontogenesis in early social communication. (33 references) (LR)

  15. Spoken Word Recognition of Chinese Words in Continuous Speech

    ERIC Educational Resources Information Center

    Yip, Michael C. W.

    2015-01-01

The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending positions of Cantonese syllables than others, this kind of probabilistic information about syllables may cue the locations…

  16. Spoken Language Derived Measures for Detecting Mild Cognitive Impairment

    PubMed Central

    Roark, Brian; Mitchell, Margaret; Hosom, John-Paul; Hollingshead, Kristy; Kaye, Jeffrey

    2011-01-01

    Spoken responses produced by subjects during neuropsychological exams can provide diagnostic markers beyond exam performance. In particular, characteristics of the spoken language itself can discriminate between subject groups. We present results on the utility of such markers in discriminating between healthy elderly subjects and subjects with mild cognitive impairment (MCI). Given the audio and transcript of a spoken narrative recall task, a range of markers are automatically derived. These markers include speech features such as pause frequency and duration, and many linguistic complexity measures. We examine measures calculated from manually annotated time alignments (of the transcript with the audio) and syntactic parse trees, as well as the same measures calculated from automatic (forced) time alignments and automatic parses. We show statistically significant differences between clinical subject groups for a number of measures. These differences are largely preserved with automation. We then present classification results, and demonstrate a statistically significant improvement in the area under the ROC curve (AUC) when using automatic spoken language derived features in addition to the neuropsychological test scores. Our results indicate that using multiple, complementary measures can aid in automatic detection of MCI. PMID:22199464
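The pause-frequency and pause-duration markers mentioned above can be derived directly from word-level time alignments. A minimal sketch, assuming `(start, end)` timestamps per word; the 150 ms pause threshold and the feature names are illustrative assumptions, not the authors' exact feature set:

```python
def pause_features(word_intervals, min_pause=0.15):
    """Simple pause statistics from (start, end) word time alignments.
    A gap between consecutive words of at least min_pause seconds
    counts as a pause."""
    gaps = [nxt[0] - cur[1] for cur, nxt in zip(word_intervals, word_intervals[1:])]
    pauses = [g for g in gaps if g >= min_pause]
    total_time = word_intervals[-1][1] - word_intervals[0][0]
    return {
        "pause_count": len(pauses),
        "pause_rate": len(pauses) / total_time,  # pauses per second of narrative
        "mean_pause": sum(pauses) / len(pauses) if pauses else 0.0,
    }
```

Such features, computed from either manual or forced alignments, can then be fed to a classifier alongside linguistic complexity measures and test scores.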

  17. Spoken English. "Educational Review" Occasional Publications Number Two.

    ERIC Educational Resources Information Center

    Wilkinson, Andrew; And Others

    Modifications of current assumptions both about the nature of the spoken language and about its functions in relation to personality development are suggested in this book. The discussion covers an explanation of "oracy" (the oral skills of speaking and listening); the contributions of linguistics to the teaching of English in Britain; the…

  18. Narratives of Youth: Cultural Critique through Spoken Word.

    ERIC Educational Resources Information Center

    Sparks, Barbara; Grochowski, Chad

    This paper explores a new youth movement, spoken word, and the role it plays in identity development for many disenfranchised youth. Often addressing issues of identity, politics, gender, and power relations, young performance poets are carving out a critical public and educative space where they can speak "who they are." There is a growing youth…

  19. Ragnar Rommetveit's Approach to Everyday Spoken Dialogue from Within.

    PubMed

    Kowal, Sabine; O'Connell, Daniel C

    2016-04-01

The following article presents basic concepts and methods of Ragnar Rommetveit's (born 1924) hermeneutic-dialogical approach to everyday spoken dialogue, with a focus on both shared consciousness and linguistically mediated meaning. He originally developed this approach in his engagement with mainstream linguistic and psycholinguistic research of the 1960s and 1970s. He criticized this research tradition for its individualistic orientation and its adherence to experimental methodology, which did not allow for the study of interactively established meaning and understanding in everyday spoken dialogue. As a social psychologist influenced by phenomenological philosophy, Rommetveit opted for an alternative conceptualization of such dialogue as a contextualized, partially private world, temporarily co-established by interlocutors on the basis of shared consciousness. He argued that everyday spoken dialogue should be investigated from within, i.e., from the perspectives of the interlocutors and from a psychology of the second person. Hence, he developed his approach with an emphasis on intersubjectivity, perspectivity and perspectival relativity, the meaning potential of utterances, and the epistemic responsibility of interlocutors. In his methods, he limited himself for the most part to casuistic analyses, i.e., logical analyses of fictitious examples, to argue for the plausibility of his approach. After many years of experimental research on language, he pursued his phenomenologically oriented research on dialogue in English-language publications from the late 1980s up to 2003. During that period, he engaged with psycholinguistic research on spoken dialogue carried out by Anglo-American colleagues only occasionally. Although his work remained unfinished and open to development, it provides both a challenging alternative and a supplement to current Anglo-American research on spoken dialogue, and some overlap therewith. PMID:26597220

  20. Orthographic influences in spoken word recognition: the consistency effect in semantic and gender categorization tasks.

    PubMed

    Peereman, Ronald; Dufour, Sophie; Burt, Jennifer S

    2009-04-01

    According to current models, spoken word recognition is driven by the phonological properties of the speech signal. However, several studies have suggested that orthographic information also influences recognition in adult listeners. In particular, it has been repeatedly shown that, in the lexical decision task, words that include rimes with inconsistent spellings (e.g., /-ip/ spelled -eap or -eep) are disadvantaged, as compared with words with consistent rime spelling. In the present study, we explored whether the orthographic consistency effect extends to tasks requiring people to process words beyond simple lexical access. Two different tasks were used: semantic and gender categorization. Both tasks produced reliable consistency effects. The data are discussed as suggesting that orthographic codes are activated during word recognition, or that the organization of phonological representations of words is affected by orthography during literacy acquisition. PMID:19293108

  1. Spoken Language Development in Children Following Cochlear Implantation

    PubMed Central

    Niparko, John K.; Tobey, Emily A.; Thal, Donna J.; Eisenberg, Laurie S.; Wang, Nae-Yuh; Quittner, Alexandra L.; Fink, Nancy E.

    2010-01-01

Context: Cochlear implantation (CI) is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe-to-profound sensorineural hearing loss (SNHL). Objective: To prospectively assess spoken language acquisition following CI in young children with adjustment of co-variates. Design, Setting, and Participants: Prospective, longitudinal, and multidimensional assessment of spoken language growth over a 3-year period following CI. Prospective cohort study of children who underwent CI before 5 years of age (n=188) from 6 US centers and hearing children of similar ages (n=97) from 2 preschools recruited between November, 2002 and December, 2004. Follow-up completed between November, 2005 and May, 2008. Main Outcome Measures: Performance on measures of spoken language comprehension and expression. Results: Children undergoing CI showed greater growth in spoken language performance (10.4 [95% confidence interval: 9.6–11.2] points/year in comprehension; 8.4 [7.8–9.0] in expression) than would be predicted by their pre-CI baseline scores (5.4 [4.1–6.7] comprehension; 5.8 [4.6–7.0] expression). Although mean scores were not restored to age-appropriate levels after 3 years, significantly greater annual rates of language acquisition were observed in children who were younger at CI (1.1 [0.5–1.7] points in comprehension per year younger; 1.0 [0.6–1.5] in expression), and in children with shorter histories of hearing deficit (0.8 [0.2–1.2] points in comprehension per year shorter; 0.6 [0.2–1.0] for expression). In multivariable analyses, greater residual hearing prior to CI, higher ratings of parent-child interactions, and higher SES were associated with greater rates of growth in comprehension and expression. Conclusions: The use of cochlear implants in young children was associated with better spoken language learning than would be predicted from their pre-implantation scores. However…

  2. Milk progesterone concentrations following simultaneous administration of buserelin and cloprostenol in cattle with normal corpora lutea.

    PubMed Central

    White, M E; Reimers, T J

    1986-01-01

Fifty Holstein dairy cows with palpable corpora lutea were divided into two groups. Twenty-five cows were given 500 micrograms of cloprostenol followed by 8 micrograms (2 mL) of buserelin, an analogue of gonadotropin-releasing hormone, and 25 were given cloprostenol followed by saline. Milk was collected for progesterone assay at the time of treatment and two days later. Median progesterone concentrations before and following treatment did not differ significantly between the saline- and buserelin-treated cows (p greater than 0.23). PMID:3093040

  4. Planum temporale: where spoken and written language meet.

    PubMed

    Nakada, T; Fujii, Y; Yoneoka, Y; Kwee, I L

    2001-01-01

    Functional magnetic resonance imaging studies on spoken versus written language processing were performed in 20 right-handed normal volunteers on a high-field (3.0-tesla) system. The areas activated in common by both auditory (listening) and visual (reading) language comprehension paradigms were mapped onto the planum temporale (20/20), primary auditory region (2/20), superior temporal sulcus area (2/20) and planum parietale (3/20). The study indicates that the planum temporale represents a common traffic area for cortical processing which needs to access the system of language comprehension. The destruction of this area can result in comprehension deficits in both spoken and written language, i.e. a classical case of Wernicke's aphasia. PMID:11598329

  5. Individual differences in online spoken word recognition: Implications for SLI

    PubMed Central

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2012-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels; vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014

  6. The spread of the phonological neighborhood influences spoken word recognition

    PubMed Central

    Vitevitch, Michael S.

    2008-01-01

    In three experiments, the processing of words that had the same overall number of neighbors but varied in the spread of the neighborhood (i.e., the number of individual phonemes that could be changed to form real words) was examined. In an auditory lexical decision task, a naming task, and a same–different task, words in which changes at only two phoneme positions formed neighbors were responded to more quickly than words in which changes at all three phoneme positions formed neighbors. Additional analyses ruled out an account based on the computationally derived uniqueness points of the words. Although previous studies (e.g., Luce & Pisoni, 1998) have shown that the number of phonological neighbors influences spoken word recognition, the present results show that the nature of the relationship of the neighbors to the target word—as measured by the spread of the neighborhood—also influences spoken word recognition. The implications of this result for models of spoken word recognition are discussed. PMID:17533890
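The "spread" measure described in this abstract can be computed from a phonemic lexicon. A minimal sketch that, as a simplifying assumption, treats one character as one phoneme and counts only same-length substitution neighbors (full neighbor definitions also allow additions and deletions):

```python
def neighborhood_spread(word, lexicon):
    """Number of phoneme positions at which substituting a single
    segment yields another real word (the spread of the neighborhood)."""
    positions = set()
    for other in lexicon:
        if other != word and len(other) == len(word):
            # positions where the two forms differ
            diffs = [i for i in range(len(word)) if word[i] != other[i]]
            if len(diffs) == 1:  # a substitution neighbor
                positions.add(diffs[0])
    return len(positions)
```

Two words can thus have the same total number of neighbors but different spreads, which is the contrast the three experiments exploit.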

  7. Spoken Word Recognition in Toddlers Who Use Cochlear Implants

    PubMed Central

    Grieco-Calub, Tina M.; Saffran, Jenny R.; Litovsky, Ruth Y.

    2010-01-01

Purpose: The purpose of this study was to assess the time course of spoken word recognition in 2-year-old children who use cochlear implants (CIs) in quiet and in the presence of speech competitors. Method: Children who use CIs and age-matched peers with normal acoustic hearing listened to familiar auditory labels, in quiet or in the presence of speech competitors, while their eye movements to target objects were digitally recorded. Word recognition performance was quantified by measuring each child’s reaction time (i.e., the latency between the spoken auditory label and the first look at the target object) and accuracy (i.e., the amount of time that children looked at target objects within 367 ms to 2,000 ms after the label onset). Results: Children with CIs were less accurate and took longer to fixate target objects than did age-matched children without hearing loss. Both groups of children showed reduced performance in the presence of the speech competitors, although many children continued to recognize labels at above-chance levels. Conclusion: The results suggest that the unique auditory experience of young CI users slows the time course of spoken word recognition abilities. In addition, real-world listening environments may slow language processing in young language learners, regardless of their hearing status. PMID:19951921

  8. Hydroxy juvenile hormones: new putative juvenile hormones biosynthesized by locust corpora allata in vitro.

    PubMed

    Darrouzet, E; Mauchamp, B; Prestwich, G D; Kerhoas, L; Ujváry, I; Couillaud, F

    1997-11-26

    The in vitro production of sesquiterpenoids was investigated by using corpora allata (CA) of the African locust Locusta migratoria migratorioides. Labeled products from unstimulated biosynthesis were extracted, purified by normal phase HPLC, and derivatized to determine the functional groups present. An extra hydroxyl group was detected in each of two juvenile hormone (JH) biosynthetic products. One compound, NP-8, was found to co-migrate with a chemically-synthesized (Z)-hydroxymethyl isomer, 12'-OH JH-III, but not with the (E)-hydroxymethyl isomer, 12-OH JH III. Mass spectral analyses further supported the identity of the synthetic material with that biosynthesized by the corpora allata. A second compound was identified as the 8'-OH JH-III based on spectroscopic analyses. 12'-OH JH-III exhibited morphogenetic activity when tested on the heterospecific Tenebrio test. These data suggest that 12'-OH JH-III and 8'-OH JH-III are additional biosynthetically-produced and biologically-active juvenile hormones, and constitute the first known members of the class of hydroxy juvenile hormones (HJHs). PMID:9398639

  9. Automatic extraction of property norm-like data from large text corpora.

    PubMed

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties. PMID:25019134
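The reweighting step described above (a linear combination of frequency and statistical metrics) can be sketched as follows. The metric names and weights are illustrative placeholders, not the paper's exact four metrics:

```python
def rescore(triples, weights):
    """Rank candidate concept-relation-feature triples by a linear
    weighted sum of per-triple statistics."""
    scored = [
        (sum(w * t[name] for name, w in weights.items()),
         (t["concept"], t["relation"], t["feature"]))
        for t in triples
    ]
    return sorted(scored, reverse=True)  # highest combined score first
```

With hand-chosen weights, a rarer but strongly associated triple can outrank a merely frequent one, which is the point of combining frequency with association metrics.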

  10. Lynx reproduction--long-lasting life cycle of corpora lutea in a feline species.

    PubMed

    Jewgenow, Katarina; Painer, Johanna; Amelkina, Olga; Dehnhard, Martin; Goeritz, Frank

    2014-04-01

A review of lynxes' reproductive biology and a comparison between the reproductive cycles of the domestic cat and lynxes is presented. Three of the four lynx species (the bobcat excluded) express quite similar reproductive patterns (age at sexual maturity, estrus and pregnancy length, litter size). Like the domestic cat, the bobcat is polyestric and can have more than one litter per year. Domestic cats and many other felid species are known to express anovulatory, pregnant and pseudo-pregnant reproductive cycles, depending on ovulation induction and fertilization. The formation of corpora lutea (CLs) occurs after ovulation. In pregnant animals, luteal function ends with parturition, whereas during pseudo-pregnancy a shorter life span and lower hormone secretion are observed. The life cycle of corpora lutea in Eurasian lynxes is different from the pattern described in domestic cats. Lynx CLs produce progestagens in distinctive amounts permanently for at least two years, regardless of their origin (pregnancy or pseudo-pregnancy). It is suggested that long-lasting CLs induce a negative feedback that inactivates folliculogenesis, turning the normally polyestric cycle observed in most felids into a monoestric cycle in lynxes. PMID:24856466

  11. Towards spoken clinical-question answering: evaluating and adapting automatic speech-recognition systems for spoken clinical questions

    PubMed Central

    Liu, Feifan; Tur, Gokhan; Hakkani-Tür, Dilek

    2011-01-01

Objective: To evaluate existing automatic speech-recognition (ASR) systems to measure their performance in interpreting spoken clinical questions and to adapt one ASR system to improve its performance on this task. Design and measurements: The authors evaluated two well-known ASR systems on spoken clinical questions: Nuance Dragon (both generic and medical versions: Nuance Gen and Nuance Med) and the SRI Decipher (the generic version SRI Gen). The authors also explored language model adaptation using more than 4000 clinical questions to improve the SRI system's performance, and profile training to improve the performance of the Nuance Med system. The authors reported the results with the NIST standard word error rate (WER) and further analyzed error patterns at the semantic level. Results: Nuance Gen and Med systems resulted in a WER of 68.1% and 67.4% respectively. The SRI Gen system performed better, attaining a WER of 41.5%. After domain adaptation with a language model, the performance of the SRI system improved 36% to a final WER of 26.7%. Conclusion: Without modification, two well-known ASR systems do not perform well in interpreting spoken clinical questions. With a simple domain adaptation, one of the ASR systems improved significantly on the clinical question task, indicating the importance of developing domain/genre-specific ASR systems. PMID:21705457
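The word error rate reported here is the word-level edit distance (substitutions + insertions + deletions) between the reference transcript and the ASR hypothesis, divided by the reference length. A minimal sketch:

```python
def wer(reference, hypothesis):
    """Word error rate via dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

Note that WER can exceed 100% when the hypothesis contains many insertions, which is how error rates like 68.1% on short clinical questions should be read.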

  12. Relationships between spoken word and sign processing in children with cochlear implants.

    PubMed

    Giezen, Marcel R; Baker, Anne E; Escudero, Paola

    2014-01-01

    The effect of using signed communication on the spoken language development of deaf children with a cochlear implant (CI) is much debated. We report on two studies that investigated relationships between spoken word and sign processing in children with a CI who are exposed to signs in addition to spoken language. Study 1 assessed rapid word and sign learning in 13 children with a CI and found that performance in both language modalities correlated positively. Study 2 tested the effects of using sign-supported speech on spoken word processing in eight children with a CI, showing that simultaneously perceiving signs and spoken words does not negatively impact their spoken word recognition or learning. Together, these two studies suggest that sign exposure does not necessarily have a negative effect on speech processing in some children with a CI. PMID:24080074

  13. Stretched Verb Collocations with "Give": Their Use and Translation into Spanish Using the BNC and CREA Corpora

    ERIC Educational Resources Information Center

    Molina-Plaza, Silvia; de Gregorio-Godeo, Eduardo

    2010-01-01

    Within the context of on-going research, this paper explores the pedagogical implications of contrastive analyses of multiword units in English and Spanish based on electronic corpora as a CALL resource. The main tenets of collocations from a contrastive perspective--and the points of contact and departure between both languages--are discussed…

  14. Automatic Entity Recognition and Typing from Massive Text Corpora: A Phrase and Network Mining Approach

    PubMed Central

    Ren, Xiang; El-Kishky, Ahmed; Wang, Chi; Han, Jiawei

    2015-01-01

    In today’s computerized and information-based society, we are soaked with vast amounts of text data, ranging from news articles, scientific publications, product reviews, to a wide range of textual information from social media. To unlock the value of these unstructured text data from various domains, it is of great importance to gain an understanding of entities and their relationships. In this tutorial, we introduce data-driven methods to recognize typed entities of interest in massive, domain-specific text corpora. These methods can automatically identify token spans as entity mentions in documents and label their types (e.g., people, product, food) in a scalable way. We demonstrate on real datasets including news articles and tweets how these typed entities aid in knowledge discovery and management. PMID:26705508
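As a toy illustration of scalable mention detection and typing, the sketch below greedily matches candidate token spans against a seed dictionary; the dictionary and type labels are invented and stand in for the supervision that the tutorial's phrase- and network-mining methods derive automatically:

```python
# Invented seed dictionary mapping known entity phrases to types.
SEED_TYPES = {
    "barack obama": "person",
    "new york": "location",
    "green tea": "food",
}

def tag_mentions(tokens, max_len=3):
    """Greedily label the longest dictionary-matching spans, left to right."""
    mentions, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            span = " ".join(tokens[i:i + n]).lower()
            if span in SEED_TYPES:
                mentions.append((span, SEED_TYPES[span]))
                i += n
                break
        else:  # no span starting here matched; advance one token
            i += 1
    return mentions

print(tag_mentions("Barack Obama drank green tea in New York".split()))
```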

  15. [Ventral corpora cavernosa corporoplasty for the treatment of severe penile incurvation in Pediatric Urology].

    PubMed

    Mieles Cerchar, M J; Gosalbez Rafel, R; Miguélez Lago, C

    2016-04-01

    Hypospadias is a congenital pathology of the male genitalia that is diagnosed and treated with increasing frequency. Because of the growing caseload, a wide range of surgical techniques must be available for its correct treatment. Ventral corporoplasty of the corpora cavernosa is one such technique, which allows successful treatment of the most severe cases of this pathology. We performed a prospective study in Malaga, Spain between 2010 and 2015. We review the technique and its indications, together with the authors' personal series of 20 cases performed by 2 surgeons using the same protocol and techniques. The outcomes were good in all cases, without complications. Corporoplasty is one of the surgical techniques for treating the most severe cases of penile incurvation. PMID:27068371

  16. Paraphrase acquisition from comparable medical corpora of specialized and lay texts.

    PubMed

    Deléger, Louise; Zweigenbaum, Pierre

    2008-01-01

    Nowadays a large amount of health information is available to the public, but medical language is often difficult for lay people to understand. Developing means to make medical information more comprehensible is therefore a real need. In this regard, a useful resource would be a corpus of specialized and lay paraphrases. To this end we built comparable corpora of specialized and lay texts on which we applied paraphrasing patterns based on anchors of deverbal noun and verb pairs. The results show that the paraphrases were of good quality (71.4% to 94.2% precision) and that this type of paraphrases was relevant in the context of studying the differences between specialized and lay language. This study also demonstrates that simple paraphrase acquisition methods can also work on texts with a rather small degree of similarity, once similar text segments are detected. PMID:18999095
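The anchor-based pattern idea can be sketched as pairing a deverbal noun in the specialized text with its verb in the lay text; the noun-to-verb table and patterns below are illustrative, not the authors' actual resources:

```python
import re

# Hypothetical deverbal-noun/verb anchor pairs; a real system would derive
# these from lexical resources rather than a hand-written table.
NOUN_TO_VERB = {"treatment": "treat", "removal": "remove", "examination": "examine"}

def align_paraphrases(specialized, lay):
    """Pair 'NOUN of X' in specialized text with 'VERB ... X' in lay text."""
    pairs = []
    for noun, verb in NOUN_TO_VERB.items():
        m = re.search(rf"{noun} of (\w+)", specialized)
        if m and re.search(rf"{verb}\b.*\b{m.group(1)}\b", lay):
            pairs.append((f"{noun} of {m.group(1)}", f"{verb} {m.group(1)}"))
    return pairs

print(align_paraphrases("surgical removal of tonsils is indicated",
                        "doctors remove the tonsils"))
```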

  17. Identification of a glycogenolysis-inhibiting peptide from the corpora cardiaca of locusts.

    PubMed

    Clynen, Elke; Huybrechts, Jurgen; Baggerman, Geert; Van Doorn, Jan; Van Der Horst, Dick; De Loof, Arnold; Schoofs, Liliane

    2003-08-01

    A mass spectrometric study of the peptidome of the neurohemal part of the corpora cardiaca of Locusta migratoria and Schistocerca gregaria shows that it contains several unknown peptides. We were able to identify the sequence of one of these peptides as pQSDLFLLSPK. This sequence is identical to the part of the Locusta insulin-related peptide (IRP) precursor that is situated between the signal peptide and the B-chain. We designated this peptide as IRP copeptide. This IRP copeptide is also present in the pars intercerebralis, which is likely to be the site of synthesis. It is identical in both L. migratoria and S. gregaria. It shows no effect on the hemolymph lipid concentration in vivo or muscle contraction in vitro. The IRP copeptide is able to cause a decreased phosphorylase activity in locust fat body in vitro, opposite to the effect of the adipokinetic hormones and therefore possibly represents a glycogenolysis-inhibiting peptide. PMID:12865323

  18. Dynamics of extracellular matrix in ovarian follicles and corpora lutea of mice.

    PubMed

    Irving-Rodgers, Helen F; Hummitzsch, Katja; Murdiyarso, Lydia S; Bonner, Wendy M; Sado, Yoshikazu; Ninomiya, Yoshifumi; Couchman, John R; Sorokin, Lydia M; Rodgers, Raymond J

    2010-03-01

    Despite the mouse being an important laboratory species, little is known about changes in its extracellular matrix (ECM) during follicle and corpora lutea formation and regression. Follicle development was induced in mice (29 days of age/experimental day 0) by injections of pregnant mare's serum gonadotrophin on days 0 and 1 and ovulation was induced by injection of human chorionic gonadotrophin on day 2. Ovaries were collected for immunohistochemistry (n=10 per group) on days 0, 2 and 5. Another group was mated and ovaries were examined on day 11 (n=7). Collagen type IV alpha1 and alpha2, laminin alpha1, beta1 and gamma1 chains, nidogens 1 and 2 and perlecan were present in the follicular basal lamina of all developmental stages. Collagen type XVIII was only found in basal lamina of primordial, primary and some preantral follicles, whereas laminin alpha2 was only detected in some preantral and antral follicles. The focimatrix, a specialised matrix of the membrana granulosa, contained collagen type IV alpha1 and alpha2, laminin alpha1, beta1 and gamma1 chains, nidogens 1 and 2, perlecan and collagen type XVIII. In the corpora lutea, staining was restricted to capillary sub-endothelial basal laminas containing collagen type IV alpha1 and alpha2, laminin alpha1, beta1 and gamma1 chains, nidogens 1 and 2, perlecan and collagen type XVIII. Laminins alpha4 and alpha5 were not immunolocalised to any structure in the mouse ovary. The ECM composition of the mouse ovary has similarities to, but also major differences from, other species with respect to nidogens 1 and 2 and perlecan. PMID:20033213

  19. Preliminary evaluations of a spoken web enabled care management platform.

    PubMed

    Padman, Rema; Beam, Erika; Szewczyk, Rachel

    2013-01-01

    Telephones are a ubiquitous and widely accepted technology worldwide. The low ownership cost, simple user interface, intuitive voice-based access and long history contribute to the widespread use and success of telephones, and more recently, that of mobile phones. This study reports on our preliminary efforts to leverage this technology to bridge disparities in the access to and delivery of personalized health and wellness care by developing and evaluating a Spoken Web enabled Care Management solution. Early results with two proxy evaluations and a few visually impaired users highlight both the potential and challenges associated with this novel, voice-enabled healthcare delivery solution. PMID:23920978

  20. Neural processing of spoken words in specific language impairment and dyslexia.

    PubMed

    Helenius, Päivi; Parviainen, Tiina; Paetau, Ritva; Salmelin, Riitta

    2009-07-01

    Young adults with a history of specific language impairment (SLI) differ from reading-impaired (dyslexic) individuals in terms of limited vocabulary and poor verbal short-term memory. Phonological short-term memory has been shown to play a significant role in learning new words. We investigated the neural signatures of auditory word recognition and word repetition in young adults with SLI, dyslexia and normal language development using magnetoencephalography. The stimuli were 7-8 letter spoken real words and pseudo-words. They evoked a transient peak at 100 ms (N100m) followed by longer-lasting activation peaking around 400 ms (N400m) in the left and right superior temporal cortex. Both word repetition (first vs. immediately following second presentation) and lexicality (words vs. pseudowords) modulated the N400m response. An effect of lexicality was detected about 400 ms onwards as activation culminated for words but continued for pseudo-words. This effect was more pronounced in the left than right hemisphere in the control subjects. The left hemisphere lexicality effect was also present in the dyslexic adults, but it was non-significant in the subjects with SLI, possibly reflecting their limited vocabulary. The N400m activation between 200 and 700 ms was attenuated by the immediate repetition of words and pseudo-words in both hemispheres. In SLI adults the repetition effect evaluated at 200-400 ms was abnormally weak. This finding suggests impaired short-term maintenance of linguistic activation that underlies word recognition. Furthermore, the size of the repetition effect decreased from control subjects through dyslexics to SLIs, i.e. when advancing from milder to more severe language impairment. The unusually rapid decay of speech-evoked activation could have a detrimental role on vocabulary growth in children with SLI. PMID:19498087

  1. Setting the tone: an ERP investigation of the influences of phonological similarity on spoken word recognition in Mandarin Chinese.

    PubMed

    Malins, Jeffrey G; Joanisse, Marc F

    2012-07-01

    We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event related potentials were recorded while adult native speakers of Mandarin (N=19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following nature: segmental (e.g., picture: hua1 'flower'; sound: hua4 'painting'); cohort (e.g., picture: hua1 'flower'; sound: hui1 'gray'); rhyme (e.g., picture: hua1 'flower'; sound: gua1 'melon'); tonal (e.g., picture: hua1 'flower'; sound: jing1 'whale'); unrelated (e.g., picture: hua1 'flower'; sound: lang2 'wolf'). Expectancy violations in the segmental condition showed an early-going modulation of components (starting at 250 ms post-stimulus onset), suggesting that listeners used tonal information to constrain word recognition as soon as it became available, just like they did with phonemic information in the cohort condition. However, effects were less persistent and more left-lateralized in the segmental than cohort condition, suggesting dissociable cognitive processes underlie access to tonal versus phonemic information. Cohort versus rhyme mismatches showed distinct patterns of modulation which were very similar to what has been observed in English, suggesting onsets and rimes are weighted similarly across the two languages. Last, we did not observe effects for whole-syllable mismatches above and beyond those for mismatches in individual components, suggesting the syllable does not merit a special status in Mandarin spoken word recognition. These results are discussed with respect to modifications needed for existing models to accommodate the tonal languages spoken by a large proportion of the world's speakers. PMID:22595659

  2. Immediate effects of anticipatory coarticulation in spoken-word recognition

    PubMed Central

    Salverda, Anne Pier; Kleinschmidt, Dave; Tanenhaus, Michael K.

    2014-01-01

    Two visual-world experiments examined listeners’ use of pre-word-onset anticipatory coarticulation in spoken-word recognition. Experiment 1 established the shortest lag with which information in the speech signal influences eye-movement control, using stimuli such as “The … ladder is the target”. With a neutral token of the definite article preceding the target word, saccades to the referent were not more likely than saccades to an unrelated distractor until 200–240 ms after the onset of the target word. In Experiment 2, utterances contained definite articles carrying natural anticipatory coarticulation pertaining to the onset of the target word (“The ladder … is the target”). A simple Gaussian classifier was able to predict the initial sound of the upcoming target word from formant information from the first few pitch periods of the article’s vowel. With these stimuli, effects of speech on eye-movement control began about 70 ms earlier than in Experiment 1, suggesting rapid use of anticipatory coarticulation. The results are interpreted as support for “data explanation” approaches to spoken-word recognition. Methodological implications for visual-world studies are also discussed. PMID:24511179
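A Gaussian classifier over formant measurements, of the kind described, can be sketched as below; the F2 values and class labels are invented for illustration and are not the study's data:

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log density of a univariate normal distribution."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def train(samples):
    """samples: {label: [measurements]} -> per-class (mean, std)."""
    params = {}
    for label, xs in samples.items():
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        params[label] = (mu, math.sqrt(var))
    return params

def classify(x, params):
    """Pick the class whose Gaussian assigns the measurement the highest likelihood."""
    return max(params, key=lambda lab: gaussian_logpdf(x, *params[lab]))

# Invented F2 values (Hz) measured in the article's vowel before two onsets.
params = train({"l-onset": [1500, 1550, 1600], "b-onset": [1100, 1150, 1200]})
print(classify(1580, params))  # closer to the l-onset distribution
```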

  3. Listen carefully: the risk of error in spoken medication orders.

    PubMed

    Lambert, Bruce L; Dickey, Laura Walsh; Fisher, William M; Gibbons, Robert D; Lin, Swu-Jane; Luce, Paul A; McLennan, Conor T; Senders, John W; Yu, Clement T

    2010-05-01

    Clinicians and patients often confuse drug names that sound alike. We conducted auditory perception experiments in the United States to assess the impact of similarity, familiarity, background noise and other factors on clinicians' (physicians, family pharmacists, nurses) and laypersons' ability to identify spoken drug names. We found that accuracy increased significantly as the signal-to-noise (S/N) ratio increased, as subjective familiarity with the name increased and as the national prescribing frequency of the name increased. For clinicians only, similarity to other drug names reduced identification accuracy, especially when the neighboring names were frequently prescribed. When one name was substituted for another, the substituted name was almost always a more frequently prescribed drug. Objectively measurable properties of drug names can be used to predict confusability. The magnitude of the noise and familiarity effects suggests that they may be important targets for intervention. We conclude that the ability of clinicians and lay people to identify spoken drug names is influenced by signal-to-noise ratio, subjective familiarity, prescribing frequency, and the similarity neighborhoods of drug names. PMID:20207461
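One objectively measurable predictor of confusability is a name's similarity neighborhood, e.g. all other lexicon entries within a small edit distance. A sketch with invented names, using character-level distance rather than the study's phonetic similarity metric:

```python
def edit_distance(a, b):
    """Character-level Levenshtein distance (rolling one-row DP)."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[len(b)]

def neighborhood(name, lexicon, max_dist=3):
    """All other lexicon entries within max_dist edits of the given name."""
    return [w for w in lexicon if w != name and edit_distance(name, w) <= max_dist]

# Invented lexicon; the study additionally weighted neighbors by how often
# each neighboring name is prescribed.
lexicon = ["zantac", "xanax", "zyrtec", "lipitor"]
print(neighborhood("zantac", lexicon))
```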

  4. Enduring Advantages of Early Cochlear Implantation for Spoken Language Development

    PubMed Central

    Geers, Ann E.; Nicholas, Johanna G.

    2013-01-01

    Purpose To determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12–38 months of age. Relative advantages of receiving a bilateral CI after age 4.5, better pre-CI aided hearing, and longer CI experience were also examined. Method Sixty children participated in a prospective longitudinal study of outcomes at 4.5 and 10.5 years of age. Twenty-nine children received a sequential second CI. Test scores were compared to normative samples of hearing age-mates and predictors of outcomes identified. Results Standard scores on language tests at 10.5 years of age remained significantly correlated with age of first cochlear implantation. Scores were not associated with receipt of a second, sequentially-acquired CI. Significantly higher scores were achieved for vocabulary as compared with overall language, a finding not evident when the children were tested at younger ages. Conclusion Age-appropriate spoken language skills continued to be more likely with younger AOI, even after an average of 8.6 years of additional CI use. Receipt of a second implant between ages 4–10 years and longer duration of device use did not provide significant added benefit. PMID:23275406

  5. Visual speech primes open-set recognition of spoken words

    PubMed Central

    Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.

    2011-01-01

    Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins, 2004). In this paper, we used a cross-modality repetition priming paradigm with visual speech lexical primes and auditory lexical targets to explore the nature of this priming effect. First, we report that participants identified spoken words mixed with noise more accurately when the words were preceded by a visual speech prime of the same word compared with a control condition. Second, analyses of the responses indicated that both correct and incorrect responses were constrained by the visual speech information in the prime. These complementary results suggest that the visual speech primes have an effect on lexical access by increasing the likelihood that words with certain phonetic properties are selected. Third, we found that the cross-modality repetition priming effect was maintained even when visual and auditory signals came from different speakers, and thus different instances of the same lexical item. We discuss implications of these results for current theories of speech perception. PMID:21544260

  6. Phonetic discrimination and non-native spoken-word recognition

    NASA Astrophysics Data System (ADS)

    Weber, Andrea; Cutler, Anne

    2002-05-01

    When phoneme categories of a non-native language do not correspond to those of the native language, non-native categories may be inaccurately perceived. This may impair non-native spoken-word recognition. Weber and Cutler investigated the effect of phonetic discrimination difficulties on competitor activation in non-native listening. They tested whether Dutch listeners use English phonetic contrasts to resolve potential competition. Eye movements of Dutch participants were monitored as they followed spoken English instructions to click on pictures of objects. A target picture (e.g., picture of a paddle) was always presented along with distractor pictures. The name of a distractor picture either shared initial segments with the name of the target picture (e.g., target paddle, /paedl/ and competitor pedal, /pEdl/) or not (e.g., strawberry and duck). Half of the target-competitor pairs contained English vowels that are often confused by Dutch listeners (e.g., /ae/ and /E/ as in ``paddle-pedal''), half contained vowels that are unlikely to be confused (e.g., /ae/ and /aI/ as in ``parrot-pirate''). Dutch listeners fixated distractor pictures with confusable English vowels longer than distractor pictures with distinct vowels. The results demonstrate that the sensitivity of non-native listeners to phonetic contrasts can result in spurious competitors that should not be activated for native listeners.

  7. Optimally efficient neural systems for processing spoken language.

    PubMed

    Zhuang, Jie; Tyler, Lorraine K; Randall, Billi; Stamatakis, Emmanuel A; Marslen-Wilson, William D

    2014-04-01

    Cognitive models claim that spoken words are recognized by an optimally efficient sequential analysis process. Evidence for this is the finding that nonwords are recognized as soon as they deviate from all real words (Marslen-Wilson 1984), reflecting continuous evaluation of speech inputs against lexical representations. Here, we investigate the brain mechanisms supporting this core aspect of word recognition and examine the processes of competition and selection among multiple word candidates. Based on new behavioral support for optimal efficiency in lexical access from speech, a functional magnetic resonance imaging study showed that words with later nonword points generated increased activation in the left superior and middle temporal gyrus (Brodmann area [BA] 21/22), implicating these regions in dynamic sound-meaning mapping. We investigated competition and selection by manipulating the number of initially activated word candidates (competition) and their later drop-out rate (selection). Increased lexical competition enhanced activity in bilateral ventral inferior frontal gyrus (BA 47/45), while increased lexical selection demands activated bilateral dorsal inferior frontal gyrus (BA 44/45). These findings indicate functional differentiation of the fronto-temporal systems for processing spoken language, with left middle temporal gyrus (MTG) and superior temporal gyrus (STG) involved in mapping sounds to meaning, bilateral ventral inferior frontal gyrus (IFG) engaged in less constrained early competition processing, and bilateral dorsal IFG engaged in later, more fine-grained selection processes. PMID:23250955

  8. The Cortical Organization of Lexical Knowledge: A Dual Lexicon Model of Spoken Language Processing

    ERIC Educational Resources Information Center

    Gow, David W., Jr.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood.…

  9. Le Francais parle. Etudes sociolinguistiques (Spoken French. Sociolinguistic Studies). Current Inquiry into Languages and Linguistics 30.

    ERIC Educational Resources Information Center

    Thibault, Pierrette

    This volume contains twelve articles dealing with the French language as spoken in Quebec. The following topics are addressed: (1) language change and variation; (2) coordinating expressions in the French spoken in Montreal; (3) expressive language as source of language change; (4) the role of correction in conversation; (5) social change and…

  10. Effects of Ethnicity and Gender on Teachers' Evaluation of Students' Spoken Responses

    ERIC Educational Resources Information Center

    Shepherd, Michael A.

    2011-01-01

    To update and extend research on teachers' expectations of students of different sociocultural groups, 57 Black, White, Asian, and Hispanic teachers were asked to evaluate responses spoken by Black, White, and Hispanic 2nd- and 3rd-grade boys and girls. The results show that responses perceived as spoken by minority boys, minority girls, and White…

  11. A Cognitive Analysis of the Written Judgments and Spoken Discourse of Adolescent Pupils.

    ERIC Educational Resources Information Center

    Michell, Lynn

    1979-01-01

    A cognitive-based system of analysis of spoken discourse was designed to provide details about the way pupils think and talk about issues arising from prose material. Written and spoken responses were analyzed to examine the effects of intellectual ability, differences in the stimulus passage, and prior small group discussion. (Note: Page…

  12. Word Up: Using Spoken Word and Hip Hop Subject Matter in Pre-College Writing Instruction.

    ERIC Educational Resources Information Center

    Sirc, Geoffrey; Sutton, Terri

    2009-01-01

    In June 2008, the Department of English at the University of Minnesota partnered with the Minnesota Spoken Word Association to inaugurate an outreach literacy program for local high-school students and teachers. The four-day institute, named "In Da Tradition," used spoken word and hip hop to teach academic and creative writing to core-city…

  13. Asian/Pacific Islander Languages Spoken by English Learners (ELs). Fast Facts

    ERIC Educational Resources Information Center

    Office of English Language Acquisition, US Department of Education, 2015

    2015-01-01

    The Office of English Language Acquisition (OELA) has synthesized key data on English learners (ELs) into two-page PDF sheets, by topic, with graphics, plus key contacts. The topics for this report on Asian/Pacific Islander languages spoken by English Learners (ELs) include: (1) Top 10 Most Common Asian/Pacific Islander Languages Spoken Among ELs:…

  14. Phonological Competition within the Word: Evidence from the Phoneme Similarity Effect in Spoken Production

    ERIC Educational Resources Information Center

    Cohen-Goldberg, Ariel M.

    2012-01-01

    Theories of spoken production have not specifically addressed whether the phonemes of a word compete with each other for selection during phonological encoding (e.g., whether /t/ competes with /k/ in cat). Spoken production theories were evaluated and found to fall into three classes, theories positing (1) no competition, (2) competition among…

  15. Expressive Spoken Language Development in Deaf Children with Cochlear Implants Who Are Beginning Formal Education

    ERIC Educational Resources Information Center

    Inscoe, Jayne Ramirez; Odell, Amanda; Archbold, Susan; Nikolopoulos, Thomas

    2009-01-01

    This paper assesses the expressive spoken grammar skills of young deaf children using cochlear implants who are beginning formal education, compares it with that achieved by normally hearing children and considers possible implications for educational management. Spoken language grammar was assessed, three years after implantation, in 45 children…

  16. Spoken Spanish Language Development at the High School Level: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Moeller, Aleidine J.; Theiler, Janine

    2014-01-01

    Communicative approaches to teaching language have emphasized the centrality of oral proficiency in the language acquisition process, but research investigating oral proficiency has been surprisingly limited, yielding an incomplete understanding of spoken language development. This study investigated the development of spoken language at the high…

  17. Kindergarten Children's Initial Spoken and Written Word Learning in a Storybook Context

    ERIC Educational Resources Information Center

    Apel, Kenn

    2010-01-01

    Kindergarteners (M age = 6;2) were exposed to novel spoken nonwords and their written forms within a storybook reading context. Following each of 12 stories, the children were required to spell and identify 12 novel written nonwords and then verbally produce and comprehend the spoken version of those words. Results indicated the children acquired…

  18. Pictures and Spoken Descriptions Elicit Similar Eye Movements during Mental Imagery, Both in Light and in Complete Darkness

    ERIC Educational Resources Information Center

    Johansson, Roger; Holsanova, Jana; Holmqvist, Kenneth

    2006-01-01

    This study provides evidence that eye movements reflect the positions of objects while participants listen to a spoken description, retell a previously heard spoken description, and describe a previously seen picture. This effect is equally strong in retelling from memory, irrespective of whether the original elicitation was spoken or visual. In…

  19. Gonadotropin-releasing hormone 1 directly affects corpora lutea lifespan in Mediterranean buffalo (Bubalus bubalis) during diestrus: presence and in vitro effects on enzymatic and hormonal activities.

    PubMed

    Zerani, Massimo; Catone, Giuseppe; Maranesi, Margherita; Gobbetti, Anna; Boiti, Cristiano; Parillo, Francesco

    2012-08-01

    The expression of gonadotropin-releasing hormone (GNRH) receptor (GNRHR) and the direct role of GNRH1 on corpora lutea function were studied in Mediterranean buffalo during diestrus. Immunohistochemistry evidenced at early, mid, and late luteal stages the presence of GNRHR only in large luteal cells and GNRH1 in both small and large luteal cells. Real-time PCR revealed GNRHR and GNRH1 mRNA at the three luteal stages, with lowest values in late corpora lutea. In vitro corpora lutea progesterone production was greater in mid stages and lesser in late luteal phases, whereas prostaglandin F2 alpha (PGF2alpha) increased from early to late stages, and PGE2 was greater in the earlier-luteal phase. Cyclooxygenase 1 (prostaglandin-endoperoxide synthase 1; PTGS1) activity did not change during diestrus, whereas PTGS2 increased from early to late stages, and PGE2-9-ketoreductase (PGE2-9-K) was greater in late corpora lutea. PTGS1 activity was greater than PTGS2 in early corpora lutea and lesser in the late luteal phase. In corpora lutea cultured in vitro, the GNRH1 analog (buserelin) reduced progesterone secretion and increased PGF2alpha secretion as well as PTGS2 and PGE2-9-K activities at mid and late stages. PGE2 release and PTGS1 activity were increased by buserelin only in late corpora lutea. These results suggest that GNRH is expressed in all luteal cells of buffalo, whereas GNRHR is only expressed in large luteal cells. Additionally, GNRH directly down-regulates corpora lutea progesterone release, with the concomitant increases of PGF2alpha production and PTGS2 and PGE2-9-K enzymatic activities. PMID:22592497

  20. Spoken commands control robot that handles radioactive materials

    SciTech Connect

    Phelan, P.F.; Keddy, C.; Beugelsdojk, T.J.

    1989-01-01

    Several robotic systems have been developed by Los Alamos National Laboratory to handle radioactive material. Because of safety considerations, the robotic system must be under direct human supervision and interactive control continuously. In this paper, we describe the implementation of a voice-recognition system that permits this control, yet allows the robot to perform complex preprogrammed manipulations without the operator's intervention. To provide better interactive control, we connected a speech synthesis unit to the robot's control computer to give audible feedback to the operator. Thus, upon completion of a task, or if an emergency arises, an appropriate spoken message can be reported by the control computer. The training, programming, and operation of this commercially available system are discussed, as are the practical problems encountered during operations.

  1. Exploiting sequential phonetic constraints in recognizing spoken words

    NASA Astrophysics Data System (ADS)

    Huttenlocher, D. P.

    1985-10-01

    Machine recognition of spoken language requires developing more robust recognition algorithms. A recent study by Shipman and Zue suggests using partial descriptions of speech sounds to eliminate all but a handful of word candidates from a large lexicon. The current paper extends their work by investigating the power of partial phonetic descriptions for developing recognition algorithms. First, we demonstrate that sequences of manner of articulation classes are more reliable and provide more constraint than certain other classes. Alone, these results are of limited utility, due to the high degree of variability in natural speech. This variability is not uniform, however, as most modifications and deletions occur in unstressed syllables. Comparing the relative constraint provided by sounds in stressed versus unstressed syllables, we discover that the stressed syllables provide substantially more constraint. This indicates that recognition algorithms can be made more robust by exploiting the manner of articulation information in stressed syllables.
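Grouping a lexicon by manner-of-articulation sequences shows how much a partial description constrains the word candidates: the smaller the group sharing a signature, the less residual ambiguity. A toy sketch over spellings rather than the phonetic transcriptions such work actually uses; the letter-to-manner table is invented:

```python
# Invented letter-to-manner mapping (real work maps phonemes, not letters).
MANNER = {
    "p": "stop", "b": "stop", "t": "stop", "d": "stop", "k": "stop", "g": "stop",
    "s": "fricative", "f": "fricative", "v": "fricative", "z": "fricative",
    "m": "nasal", "n": "nasal",
    "a": "vowel", "e": "vowel", "i": "vowel", "o": "vowel", "u": "vowel",
    "l": "liquid", "r": "liquid",
}

def manner_signature(word):
    """Reduce a word to its sequence of manner classes."""
    return tuple(MANNER.get(ch, "other") for ch in word)

def cohorts(lexicon):
    """Group words by manner signature; cohort size = residual ambiguity."""
    groups = {}
    for w in lexicon:
        groups.setdefault(manner_signature(w), []).append(w)
    return groups

print(cohorts(["pat", "bat", "mat", "sat", "pit"]))
```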

  2. Predicting user mental states in spoken dialogue systems

    NASA Astrophysics Data System (ADS)

    Callejas, Zoraida; Griol, David; López-Cózar, Ramón

    2011-12-01

    In this paper we propose a method for predicting the user's mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user's needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which the evaluation results with both simulated and real users show that taking into account the user's mental state improves system performance as well as its perceived quality.
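The intermediate module described can be caricatured as a rule that pairs the recognized emotion with the inferred intention and adapts the next system prompt accordingly; the states, intentions and prompts below are invented, not the UAH system's:

```python
# Invented prompt table keyed by user intention.
PROMPTS = {"check_schedule": "Which day would you like to check?"}

def infer_mental_state(emotion, intention):
    """Stand-in for the recognition module: just pair the two signals."""
    return (emotion, intention)

def adapt_prompt(mental_state):
    """Soften the next prompt when the user appears frustrated."""
    emotion, intention = mental_state
    prompt = PROMPTS[intention]
    if emotion == "angry":
        prompt = "I'm sorry about the trouble. " + prompt
    return prompt

print(adapt_prompt(infer_mental_state("angry", "check_schedule")))
```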

  3. Spoken Language Processing in the Clarissa Procedure Browser

    NASA Technical Reports Server (NTRS)

    Rayner, M.; Hockey, B. A.; Renders, J.-M.; Chatzichrisafis, N.; Farrell, K.

    2005-01-01

    Clarissa, an experimental voice enabled procedure browser that has recently been deployed on the International Space Station, is as far as we know the first spoken dialog system in space. We describe the objectives of the Clarissa project and the system's architecture. In particular, we focus on three key problems: grammar-based speech recognition using the Regulus toolkit; methods for open-mic speech recognition; and robust side-effect free dialogue management for handling undos, corrections and confirmations. We first describe the grammar-based recogniser we have built using Regulus, and report experiments where we compare it against a class N-gram recogniser trained on the same 3297-utterance dataset. We obtained a 15% relative improvement in WER and a 37% improvement in semantic error rate. The grammar-based recogniser moreover outperforms the class N-gram version for utterances of all lengths from 1 to 9 words inclusive. The central problem in building an open-mic speech recognition system is being able to distinguish between commands directed at the system, and other material (cross-talk), which should be rejected. Most spoken dialogue systems make the accept/reject decision by applying a threshold to the recognition confidence score. We show how a simple and general method, based on standard approaches to document classification using Support Vector Machines, can give substantially better performance, and report experiments showing a relative reduction in the task-level error rate by about 25% compared to the baseline confidence threshold method. Finally, we describe a general side-effect free dialogue management architecture that we have implemented in Clarissa, which extends the "update semantics" framework by including task as well as dialogue information in the information state. We show that this enables elegant treatments of several dialogue management problems, including corrections, confirmations, querying of the environment, and regression
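
    The accept/reject framing above treats each recognition hypothesis as a short "document" to be classified as command versus cross-talk. The paper used Support Vector Machines; the dependency-free sketch below substitutes a simple bag-of-words perceptron to illustrate the same framing, with invented training utterances and labels.

```python
# Toy stand-in for the SVM-based open-mic accept/reject classifier:
# bag-of-words features plus a perceptron. Data and labels are invented.
from collections import defaultdict

def featurize(utterance):
    bag = defaultdict(int)
    for word in utterance.split():
        bag[word] += 1
    return bag

def train(examples, epochs=20):
    """examples: list of (utterance, label), label +1 = command, -1 = cross-talk."""
    w = defaultdict(float)
    for _ in range(epochs):
        for utt, label in examples:
            feats = featurize(utt)
            score = sum(w[f] * v for f, v in feats.items())
            if label * score <= 0:  # misclassified: nudge weights toward the label
                for f, v in feats.items():
                    w[f] += label * v
    return w

def accept(w, utterance):
    """Accept the utterance as system-directed if its score is positive."""
    feats = featurize(utterance)
    return sum(w[f] * v for f, v in feats.items()) > 0

examples = [
    ("next step", 1), ("go to step three", 1), ("read step", 1), ("undo that", 1),
    ("how are you doing", -1), ("we had lunch already", -1), ("look at that pen", -1),
]
w = train(examples)
print(accept(w, "go to step five"))       # → True  (command-like vocabulary)
print(accept(w, "we had coffee already")) # → False (cross-talk vocabulary)
```

    A confidence-threshold baseline, by contrast, ignores the words themselves and rejects purely on the recogniser's score, which is the behaviour the classification approach improved on.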

  4. Executive control in spoken noun-phrase production: Contributions of updating, inhibiting, and shifting.

    PubMed

    Sikora, Katarzyna; Roelofs, Ardi; Hermans, Daan; Knoors, Harry

    2016-01-01

    The present study examined how the updating, inhibiting, and shifting abilities underlying executive control influence spoken noun-phrase production. Previous studies provided evidence that updating and inhibiting, but not shifting, influence picture-naming response time (RT). However, little is known about the role of executive control in more complex forms of language production like generating phrases. We assessed noun-phrase production using picture description and a picture-word interference procedure. We measured picture description RT to assess length, distractor, and switch effects, which were assumed to reflect, respectively, the updating, inhibiting, and shifting abilities of adult participants. Moreover, for each participant we obtained scores on executive control tasks that measured verbal and nonverbal updating, nonverbal inhibiting, and nonverbal shifting. We found that both verbal and nonverbal updating scores correlated with the overall mean picture description RTs. Furthermore, the length effect in the RTs correlated with verbal but not nonverbal updating scores, while the distractor effect correlated with inhibiting scores. We did not find a correlation between the switch effect in the mean RTs and the shifting scores. However, the shifting scores correlated with the switch effect in the normal part of the underlying RT distribution. These results suggest that updating, inhibiting, and shifting each influence the speed of phrase production, thereby demonstrating a contribution of all three executive control abilities to language production. PMID:26382716

  5. Implicit learning of nonadjacent phonotactic dependencies in the perception of spoken language

    NASA Astrophysics Data System (ADS)

    McLennan, Conor T.; Luce, Paul A.

    2001-05-01

    We investigated the learning of nonadjacent phonotactic dependencies in adults. Following previous research examining learning of dependencies at a grammatical level (Gomez, 2002), we manipulated the co-occurrence of nonadjacent phonological segments within a spoken syllable. Each listener was exposed to consonant-vowel-consonant nonword stimuli produced by one of two phonological grammars. Both languages contained the same adjacent dependencies between the initial consonant-vowel and final vowel-consonant sequences but differed on the co-occurrences of initial and final consonants. The number of possible types of vowels that intervened between the initial and final consonants was also manipulated. Listeners' learning of nonadjacent segmental dependencies was evaluated in a speeded recognition task in which they heard (1) old nonwords on which they had been trained, (2) new nonwords generated by the grammar on which they had been trained, and (3) new nonwords generated by the grammar on which they had not been trained. The results provide evidence for listeners' sensitivity to nonadjacent dependencies. However, this sensitivity is manifested as an inhibitory competition effect rather than a facilitative effect on pattern processing. [Research supported by Research Grant No. R01 DC 0265802 from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health.]
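
    The stimulus design can be sketched concretely: two grammars share every adjacent CV and VC dependency but differ in which initial consonant may co-occur with which final consonant. The specific consonant pairings and vowel set below are invented for illustration, not the study's materials.

```python
# Sketch of two artificial CVC "grammars" that differ only in their
# nonadjacent C...C dependency. Pairings and vowels are invented.
import itertools

VOWELS = ["a", "i", "u"]

# Grammar A licenses p..s and t..f; grammar B reverses the pairings.
GRAMMAR_A = {("p", "s"), ("t", "f")}
GRAMMAR_B = {("p", "f"), ("t", "s")}

def nonwords(grammar):
    """All CVC nonwords licensed by a grammar's nonadjacent dependencies."""
    return sorted(c1 + v + c2 for (c1, c2), v in itertools.product(grammar, VOWELS))

# The two languages share no items, yet every adjacent CV and VC bigram in
# one language also occurs in the other -- only the C...C pairing differs.
print(nonwords(GRAMMAR_A))  # → ['pas', 'pis', 'pus', 'taf', 'tif', 'tuf']
print(set(nonwords(GRAMMAR_A)) & set(nonwords(GRAMMAR_B)))  # → set()
```

    Any difference in listeners' responses to the two item sets can therefore only reflect the nonadjacent dependency, since the adjacent statistics are matched.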

  6. The hardest butter to button: immediate context effects in spoken word identification.

    PubMed

    Brock, Jon; Nation, Kate

    2014-01-01

    According to some theories, the context in which a spoken word is heard has no impact on the earliest stages of word identification. This view has been challenged by recent studies indicating an interactive effect of context and acoustic similarity on language-mediated eye movements. However, an alternative explanation for these results is that participants looked less at acoustically similar objects in constraining contexts simply because they were looking more at other objects that were cued by the context. The current study addressed this concern whilst providing a much finer grained analysis of the temporal evolution of context effects. Thirty-two adults listened to sentences while viewing a computer display showing four objects. As expected, shortly after the onset of a target word (e.g., "button") in a neutral context, participants saccaded preferentially towards a cohort competitor of the word (e.g., butter). This effect was significantly reduced when the preceding verb made the competitor an unlikely referent (e.g., "Sam fastened the button"), even though there were no other contextually congruent objects in the display. Moreover, the time-course of these two effects was identical to within approximately 30 ms, indicating that certain forms of contextual information can have a near-immediate effect on word identification. PMID:23745798

  7. Understanding and producing the reduced relative construction: Evidence from ratings, editing and corpora.

    PubMed

    Hare, Mary; Tanenhaus, Michael K; McRae, Ken

    2007-04-01

    Two rating studies demonstrate that English speakers willingly produce reduced relatives with internal cause verbs (e.g., Whisky fermented in oak barrels can have a woody taste), and judge their acceptability based on factors known to influence ambiguity resolution, rather than on the internal/external cause distinction. Regression analyses demonstrate that frequency of passive usage predicts reduced relative frequency in corpora, but internal/external cause status does not. The authors conclude that reduced relatives with internal cause verbs are rare because few of these verbs occur in the passive. This contrasts with the claim in McKoon and Ratcliff (McKoon, G., & Ratcliff, R. (2003). Meaning through syntax: Language comprehension and the reduced relative clause construction. Psychological Review, 110, 490-525) that reduced relatives like The horse raced past the barn fell are rare and, when they occur, incomprehensible, because the meaning of the reduced relative construction prohibits the use of a verb with an internal cause event template. PMID:22162904

  8. A corpora allata farnesyl diphosphate synthase in mosquitoes displaying a metal ion dependent substrate specificity.

    PubMed

    Rivera-Perez, Crisalejandra; Nyati, Pratik; Noriega, Fernando G

    2015-09-01

    Farnesyl diphosphate synthase (FPPS) is a key enzyme in isoprenoid biosynthesis; it catalyzes the head-to-tail condensation of dimethylallyl diphosphate (DMAPP) with two molecules of isopentenyl diphosphate (IPP) to generate farnesyl diphosphate (FPP), a precursor of juvenile hormone (JH). In this study, we functionally characterized an Aedes aegypti FPPS (AaFPPS) expressed in the corpora allata. AaFPPS is the only FPPS gene present in the genome of the yellow fever mosquito; it encodes a 49.6 kDa protein exhibiting all the characteristic conserved sequence domains of prenyltransferases. AaFPPS displays its activity in the presence of metal cofactors, and the condensation product depends on the divalent cation. Mg(2+) ions lead to the production of FPP, while the presence of Co(2+) ions leads to geranyl diphosphate (GPP) production. In the presence of Mg(2+) the AaFPPS affinity for allylic substrates is GPP > DMAPP > IPP. These results suggest that AaFPPS displays "catalytic promiscuity", changing the type and ratio of products released (GPP or FPP) depending on allylic substrate concentrations and the presence of different metal cofactors. This metal ion-dependent regulatory mechanism allows a single enzyme to selectively control the metabolites it produces, thus potentially altering the flow of carbon into separate metabolic pathways. PMID:26188328

  10. Activity of corpora allata, endocrine balance and reproduction in female Labidura riparia (Dermaptera).

    PubMed

    Baehr, J C; Cassier, P; Caussanel, C; Porcheron, P

    1982-01-01

    The reproductive activity of Labidura riparia females involves, after a 5-day maturation stage, a regular alternation of ovarian cycles and egg-care stages averaging 10 days each. Vitellogenesis is characterized by an increase in the size of the corpora allata (CA) where structured SER bodies appear, and by a rise of juvenile hormone (JH III) content in the hemolymph which is followed by an increase in the level of ecdysteroids. During the egg-care periods, the CA are inactive; structured bodies generate autophagic vacuoles, the titer of JHs and later that of ecdysteroids in the hemolymph decreases and remains stationary. Ovariectomy causes hypertrophy and hyperactivity of the CA for about two months. Subsequently, the titer of JH decreases and old females may display parental behaviour; the level of ecdysteroids falls and remains unchanged. After cauterization of the pars intercerebralis (PI) of the protocerebrum, the ovarian activity stops, the ovary shrinks, the JHs rapidly disappear but ecdysteroids remain at the same or even higher levels than those of normal females of the same age. On the basis of these data, we postulate the existence of a center located in the PI, inhibiting the production of ecdysteroids, and of a stimulating center located outside this area. The PI also exhibits an allatotropic function. PMID:7105149

  11. Corpora Amylacea of Brain Tissue from Neurodegenerative Diseases Are Stained with Specific Antifungal Antibodies

    PubMed Central

    Pisa, Diana; Alonso, Ruth; Rábano, Alberto; Carrasco, Luis

    2016-01-01

    The origin and potential function of corpora amylacea (CA) remains largely unknown. Low numbers of CA are detected in the aging brain of normal individuals but they are abundant in the central nervous system of patients with neurodegenerative diseases. In the present study, we show that CA from patients diagnosed with Alzheimer's disease (AD) contain fungal proteins as detected by immunohistochemistry analyses. Accordingly, CA were labeled with different anti-fungal antibodies at the external surface, whereas the central portion composed of calcium salts contains fewer proteins. Detection of fungal proteins was achieved using a number of antibodies raised against different fungal species, which indicated cross-reactivity between the fungal proteins present in CA and the antibodies employed. Importantly, these antibodies do not immunoreact with cellular proteins. Additionally, CNS samples from patients diagnosed with amyotrophic lateral sclerosis (ALS) and Parkinson's disease (PD) also contained CA that were immunoreactive with a range of antifungal antibodies. However, CA were less abundant in ALS or PD patients as compared to CNS samples from AD. By contrast, CA from brain tissue of control subjects were almost devoid of fungal immunoreactivity. These observations are consistent with the concept that CA associate with fungal infections and may contribute to the elucidation of the origin of CA. PMID:27013948

  12. Negative Feedbacks by Isoprenoids on a Mevalonate Kinase Expressed in the Corpora Allata of Mosquitoes

    PubMed Central

    Noriega, Fernando G.

    2015-01-01

    Background Juvenile hormones (JH) regulate development and reproductive maturation in insects. JHs are synthesized through the mevalonate pathway (MVAP), an ancient metabolic pathway present in the three domains of life. Mevalonate kinase (MVK) is a key enzyme in the MVAP. MVK catalyzes the synthesis of phosphomevalonate (PM) by transferring the γ-phosphoryl group from ATP to the C5 hydroxyl oxygen of mevalonic acid (MA). Despite the importance of MVKs, these enzymes have been poorly characterized in insects. Results We functionally characterized an Aedes aegypti MVK (AaMVK) expressed in the corpora allata (CA) of the mosquito. AaMVK displayed its activity in the presence of metal cofactors. Different nucleotides were used by AaMVK as phosphoryl donors. In the presence of Mg2+, the enzyme has higher affinity for MA than ATP. The activity of AaMVK was regulated by feedback inhibition from long-chain isoprenoids, such as geranyl diphosphate (GPP) and farnesyl diphosphate (FPP). Conclusions AaMVK exhibited efficient inhibition by GPP and FPP (Ki less than 1 μM), and none by isopentenyl pyrophosphate (IPP) and dimethyl allyl pyrophosphate (DPPM). These results suggest that GPP and FPP might act as physiological inhibitors in the synthesis of isoprenoids in the CA of mosquitoes. Changing MVK activity can alter the flux of precursors and therefore regulate juvenile hormone biosynthesis. PMID:26566274

  13. Type VII Collagen Expression in the Human Vitreoretinal Interface, Corpora Amylacea and Inner Retinal Layers

    PubMed Central

    Wullink, Bart; Pas, Hendri H.; Van der Worp, Roelofje J.; Kuijer, Roel; Los, Leonoor I.

    2015-01-01

    Type VII collagen, as a major component of anchoring fibrils found at basement membrane zones, is crucial in anchoring epithelial tissue layers to their underlying stroma. Recently, type VII collagen was discovered in the inner human retina by means of immunohistochemistry, while proteomic investigations demonstrated type VII collagen at the vitreoretinal interface of chicken. Because of its potential anchoring function at the vitreoretinal interface, we further assessed the presence of type VII collagen at this site. We evaluated the vitreoretinal interface of human donor eyes by means of immunohistochemistry, confocal microscopy, immunoelectron microscopy, and Western blotting. Firstly, type VII collagen was detected alongside vitreous fibers at the vitreoretinal interface. Because of its known anchoring function, it is likely that type VII collagen is involved in vitreoretinal attachment. Secondly, type VII collagen was found within cytoplasmic vesicles of inner retinal cells. These cells resided most frequently in the ganglion cell layer and inner plexiform layer. Thirdly, type VII collagen was found in astrocytic cytoplasmic inclusions, known as corpora amylacea. The intraretinal presence of type VII collagen was confirmed by Western blotting of homogenized retinal preparations. These data add to the understanding of vitreoretinal attachment, which is important for a better comprehension of common vitreoretinal attachment pathologies. PMID:26709927

  14. Radioisotope penile plethysmography: A technique for evaluating corpora cavernosal blood flow during early tumescence

    SciTech Connect

    Schwartz, A.N.; Graham, M.M.; Ferency, G.F.; Miura, R.S.

    1989-04-01

    Radioisotope penile plethysmography is a nuclear medicine technique which assists in the evaluation of patients with erectile dysfunction. This technique attempts to noninvasively quantitate penile corpora cavernosal blood flow during early penile tumescence using technetium-99m-labeled red blood cells. Penile images and counts were acquired in a steady-state blood-pool phase prior to and after the administration of intracorporal papaverine. Penile counts, images, and time-activity curves were computer analyzed in order to determine peak corporal flow and volume changes. Peak corporal flow rates were compared to arterial integrity (determined by angiography) and venosinusoidal corporal leak (determined by cavernosometry). Peak corporal flow correlated well with arterial integrity (r = 0.91) but did not correlate with venosinusoidal leak parameters (r = 0.01). This report focuses on the methodology and the assumptions which form the foundation of this technique. The strong correlation of peak corporal flow and angiography suggests that radioisotope penile plethysmography could prove useful in the evaluation of arterial inflow disorders in patients with erectile dysfunction.

  15. The time course of indexical specificity effects in the perception of spoken words

    NASA Astrophysics Data System (ADS)

    McLennan, Conor T.; Luce, Paul A.

    2003-10-01

    This research investigates the time-course of indexical specificity effects in spoken word recognition by examining the circumstances under which variability in speaking rate affects participants' perception of spoken words. Previous research has demonstrated that variability has both representational and processing consequences. The current research examines one of the conditions expected to influence the extent to which indexical variability plays a role in spoken word recognition, namely the time-course of processing. Based on our past work, it was hypothesized that indexical specificity effects associated with speaking rate would only affect later stages of processing in spoken word recognition. The results confirm this hypothesis: Specificity effects are only in evidence when processing is relatively slow. [Research supported (in part) by Research Grant No. R01 DC 0265801 from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health.]

  16. Scaling laws and model of words organization in spoken and written language

    NASA Astrophysics Data System (ADS)

    Bian, Chunhua; Lin, Ruokuang; Zhang, Xiaoyu; Ma, Qianli D. Y.; Ivanov, Plamen Ch.

    2016-01-01

    A broad range of complex physical and biological systems exhibits scaling laws. Human language is a complex system of words organization. Studies of written texts have revealed intriguing scaling laws that characterize the frequency of words occurrence, rank of words, and growth in the number of distinct words with text length. While studies have predominantly focused on the language system in its written form, such as books, little attention has been given to the structure of spoken language. Here we investigate a database of spoken language transcripts and written texts, and we uncover that words organization in both spoken language and written texts exhibits scaling laws, although with different crossover regimes and scaling exponents. We propose a model that provides insight into words organization in spoken language and written texts, and successfully accounts for all scaling laws empirically observed in both language forms.
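
    The two classic measurements the abstract refers to, the rank-frequency (Zipf) relation and the growth of distinct vocabulary with text length (Heaps' law), can be computed in a few lines. The toy token string below is an invented stand-in for the paper's corpora of transcripts and books.

```python
# Minimal sketch of the Zipf and Heaps measurements on a toy token stream.
from collections import Counter

def zipf_ranks(tokens):
    """Return (rank, frequency) pairs, most frequent word first."""
    counts = Counter(tokens)
    return [(rank, freq) for rank, (_, freq) in
            enumerate(sorted(counts.items(), key=lambda kv: -kv[1]), start=1)]

def heaps_curve(tokens):
    """Number of distinct words seen after each successive token."""
    seen, curve = set(), []
    for tok in tokens:
        seen.add(tok)
        curve.append(len(seen))
    return curve

tokens = "the cat saw the dog and the dog saw the cat".split()
print(zipf_ranks(tokens)[0])  # → (1, 4): 'the' is rank 1 with frequency 4
print(heaps_curve(tokens))    # → [1, 2, 3, 3, 4, 5, 5, 5, 5, 5, 5]
```

    On real corpora one would fit power laws to these two curves; the abstract's finding is that spoken transcripts and written texts both yield such fits, but with different exponents and crossover points.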

  17. Human chorionic gonadotropin increases serum progesterone, number of corpora lutea and angiogenic factors in pregnant sheep.

    PubMed

    Coleson, Megan P T; Sanchez, Nicole S; Ashley, Amanda K; Ross, Timothy T; Ashley, Ryan L

    2015-07-01

    Early gestation is a critical period when implantation and placental vascularization are established, processes influenced by progesterone (P4). Although human chorionic gonadotropin (hCG) is not endogenously synthesized by livestock, it binds the LH receptor, stimulating P4 synthesis. We hypothesized treating pregnant ewes with hCG would increase serum P4, number of corpora lutea (CLs) and concepti, augment steroidogenic enzymes, and increase membrane P4 receptors (PAQRs) and angiogenic factors in reproductive tissues. The objective was to determine molecular alterations induced by hCG in pregnant sheep that may promote pregnancy. Ewes received either 600 IU of hCG or saline i.m. on day 4 post mating. Blood samples were collected daily from day 0 until tissue collection for serum P4 analysis. Reproductive tissues were collected on either day 13 or 25 of gestation and analyzed for PAQRs, CXCR4, proangiogenic factors and steroidogenic enzymes. Ewes receiving hCG had more CL and greater serum P4, which remained elevated. On day 25, StAR protein production decreased in CL from hCG-treated ewes while HSD3B1 was unchanged; further, expression of CXCR4 significantly increased and KDR tended to increase. PAQR7 and CXCR4 protein was increased in caruncle tissue from hCG-treated ewes. Maternal hCG exposure influenced fetal extraembryonic tissues, as VEGFA, VEGFB, FLT1, and ANGPT1 expression increased. Our results indicate hCG increases serum P4 due to augmented CL number per ewe. hCG treatment resulted in greater PAQR7 and CXCR4 in maternal endometrium and promoted expression of proangiogenic factors in fetal extraembryonic membranes. Supplementing livestock with hCG may boost P4 levels and improve reproductive efficiency. PMID:25861798

  18. Synthesis and reception of prostaglandins in corpora lutea of domestic cat and lynx.

    PubMed

    Zschockelt, Lina; Amelkina, Olga; Siemieniuch, Marta J; Kowalewski, Mariusz P; Dehnhard, Martin; Jewgenow, Katarina; Braun, Beate C

    2016-08-01

    Felids show different reproductive strategies related to the luteal phase. Domestic cats exhibit a seasonal polyoestrus and ovulation is followed by formation of corpora lutea (CL). Pregnant and non-pregnant cycles are reflected by diverging plasma progesterone (P4) profiles. Eurasian and Iberian lynxes show a seasonal monooestrus, in which physiologically persistent CL (perCL) support constantly elevated plasma P4 levels. Prostaglandins (PGs) represent key regulators of reproduction, and we aimed to characterise PG synthesis in feline CL to identify their contribution to the luteal lifespan. We assessed mRNA and protein expression of PG synthases (PTGS2/COX2, PTGES, PGFS/AKR1C3) and PG receptors (PTGER2, PTGER4, PTGFR), and intra-luteal levels of PGE2 and PGF2α. Therefore, CL of pregnant (pre-implantation, post-implantation, regression stages) and non-pregnant (formation, development/maintenance, early regression, late regression stages) domestic cats, and prooestrous Eurasian (perCL, pre-mating) and metoestrous Iberian (perCL, freshCL, post-mating) lynxes were investigated. Expression of PTGS2/COX2, PTGES and PTGER4 was independent of the luteal stage in the investigated species. High levels of luteotrophic PGE2 in perCL might be associated with persistence of luteal function in lynxes. Signals for PGFS/AKR1C3 expression were weak in mid and late luteal stages of cats but were absent in lynxes, concomitant with low PGF2α levels in these species. Thus, regulation of CL regression by luteal PGF2α seems negligible. In contrast, expression of PTGFR was evident in nearly all investigated CL of cat and lynxes, implying that luteal regression, e.g. at the end of pregnancy, is triggered by extra-luteal PGF2α. PMID:27222595

  19. The employment of a spoken language computer applied to an air traffic control task.

    NASA Technical Reports Server (NTRS)

    Laveson, J. I.; Silver, C. A.

    1972-01-01

    Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve the controller performance.

  20. Why Dose Frequency Affects Spoken Vocabulary in Preschoolers With Down Syndrome.

    PubMed

    Yoder, Paul J; Woynaroski, Tiffany; Fey, Marc E; Warren, Steven F; Gardner, Elizabeth

    2015-07-01

    In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only the participants with DS, we found that more therapy led to larger spoken vocabularies at posttreatment because it increased children's canonical syllabic communication and receptive vocabulary growth early in the treatment phase. PMID:26161468

  1. Profound deafness and the acquisition of spoken language in children

    PubMed Central

    Vlastarakos, Petros V

    2012-01-01

    Profound congenital sensorineural hearing loss (SNHL) is not so infrequent, affecting 1 to 2 of every 1000 newborns in western countries. Nevertheless, universal hearing screening programs have not been widely applied, although such programs are already established for metabolic diseases. The acquisition of spoken language is a time-dependent process, and some form of linguistic input should be present before the first 6 mo of life for a child to become linguistically competent. Therefore, profoundly deaf children should be detected early and referred promptly so that the process of auditory rehabilitation can be initiated. Hearing assessment methods should reflect the behavioural audiogram in an accurate manner. Additional disabilities also need to be taken into account. Profound congenital SNHL is managed by a multidisciplinary team. Affected infants should be bilaterally fitted with hearing aids, no later than 3 mo after birth. They should be monitored until the first year of age. If they are not progressing linguistically, cochlear implantation can be considered after thorough preoperative assessment. Prelingually deaf children develop significant speech perception and production abilities, and speech intelligibility over time, following cochlear implantation. Age at intervention and oral communication are the most important determinants of outcomes. Realistic parental expectations are also essential. Cochlear implant programs deserve the strong support of community members, professional bodies, and political authorities in order to be successful, and to maximize the future benefits of pediatric cochlear implantation for human societies. PMID:25254164

  2. Investigating joint attention mechanisms through spoken human-robot interaction.

    PubMed

    Staudte, Maria; Crocker, Matthew W

    2011-08-01

    Referential gaze during situated language production and comprehension is tightly coupled with the unfolding speech stream (Griffin, 2001; Meyer, Sleiderink, & Levelt, 1998; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). In a shared environment, utterance comprehension may further be facilitated when the listener can exploit the speaker's focus of (visual) attention to anticipate, ground, and disambiguate spoken references. To investigate the dynamics of such gaze-following and its influence on utterance comprehension in a controlled manner, we use a human-robot interaction setting. Specifically, we hypothesize that referential gaze is interpreted as a cue to the speaker's referential intentions which facilitates or disrupts reference resolution. Moreover, the use of a dynamic and yet extremely controlled gaze cue enables us to shed light on the simultaneous and incremental integration of the unfolding speech and gaze movement. We report evidence from two eye-tracking experiments in which participants saw videos of a robot looking at and describing objects in a scene. The results reveal a quantified benefit-disruption spectrum of gaze on utterance comprehension and, further, show that gaze is used, even during the initial movement phase, to restrict the spatial domain of potential referents. These findings more broadly suggest that people treat artificial agents similar to human agents and, thus, validate such a setting for further explorations of joint attention mechanisms. PMID:21665198

  3. Bimodal Bilinguals Co-activate Both Languages during Spoken Comprehension

    PubMed Central

    Shook, Anthony; Marian, Viorica

    2012-01-01

    Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals’ and English monolinguals’ eye-movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at competing items than at phonologically unrelated items, and looked more at competing items relative to monolinguals, indicating activation of the sign-language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension. PMID:22770677

  4. Effects of speech clarity on recognition memory for spoken sentences.

    PubMed

    Van Engen, Kristin J; Chandrasekaran, Bharath; Smiljanic, Rajka

    2012-01-01

    Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e., changes in speaking style within a talker) on the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or non-native speakers of the language). Here we conducted two experiments to examine the role of speaking style variation in spoken language processing. First, we examined the extent to which clear speech provided benefits in challenging listening environments (i.e., speech-in-noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking style variability. The results show that acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences. PMID:22970141

  5. Dissociating frontotemporal contributions to semantic ambiguity resolution in spoken sentences.

    PubMed

    Rodd, Jennifer M; Johnsrude, Ingrid S; Davis, Matthew H

    2012-08-01

    Comprehension of sentences containing semantically ambiguous words requires listeners to select appropriate interpretations, maintain linguistic material in working memory, and reinterpret sentences that have been misinterpreted. All of these functions appear to involve frontal cortical regions. Here, we attempt to differentiate these functions by varying the relative timing of an ambiguous word and disambiguating information in spoken sentences. We compare the location, magnitude, and timing of evoked activity using a fast-acquisition semisparse functional magnetic resonance imaging sequence. The left inferior frontal gyrus (LIFG) shows a strong response to sentences that are initially ambiguous (disambiguated by information that occurs either soon after the ambiguity or that is delayed until the end of the sentence). Response profiles indicate that activity, in both anterior and posterior LIFG regions, is triggered both by the ambiguous word and by the subsequent disambiguating information. The LIFG also responds to ambiguities that are preceded by disambiguating context. These results suggest that the LIFG subserves multiple cognitive processes, including selecting an appropriate meaning and reinterpreting sentences that have been misparsed. In contrast, the left inferior temporal gyrus responds to the disambiguating information but not to the ambiguous word itself and may be involved in reprocessing sentences that were initially misinterpreted. PMID:21968566

  6. Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems

    NASA Astrophysics Data System (ADS)

    Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae

    Although there have been enormous investments in English education all around the world, little has changed in the style of English instruction. Considering the shortcomings of current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims to summarize a set of POSTECH approaches, including theories, technologies, systems, and field studies, and to provide relevant pointers. On top of state-of-the-art spoken dialog system technologies, a variety of adaptations have been applied to overcome problems caused by the numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that helps learners become proficient. Integrating these efforts resulted in intelligent educational robots (Mero and Engkey) and a virtual 3D language learning game (Pomy). To verify the effects of our approaches on students' communicative abilities, we conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.

  7. Using spoken words to guide open-ended category formation.

    PubMed

    Chauhan, Aneesh; Seabra Lopes, Luís

    2011-11-01

    Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in the child development literature that strong links exist between early word learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at the single-word stage. The robot is equipped with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail. PMID:21614526
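
    The word-grounding idea described above (associate each taught name with the percepts it accompanied, then name new percepts by the closest learned category, with corrective feedback simply re-teaching the correct pair) can be sketched as follows. This nearest-centroid scheme and all names and feature vectors are assumptions for illustration, not the architecture of the cited paper.

    ```python
    import numpy as np

    class WordGrounder:
        """Toy incremental word-to-percept grounding via running centroids."""

        def __init__(self):
            self.sums, self.counts = {}, {}

        def teach(self, word, percept):
            # A new word simply adds a category; a known word updates it.
            percept = np.asarray(percept, dtype=float)
            self.sums[word] = self.sums.get(word, 0.0) + percept
            self.counts[word] = self.counts.get(word, 0) + 1

        def name(self, percept):
            # Name a percept by the nearest category centroid.
            percept = np.asarray(percept, dtype=float)
            centroids = {w: s / self.counts[w] for w, s in self.sums.items()}
            return min(centroids, key=lambda w: np.linalg.norm(centroids[w] - percept))

    g = WordGrounder()
    g.teach("ball", [1.0, 0.1])
    g.teach("ball", [0.9, 0.2])
    g.teach("cup", [0.1, 1.0])
    guess = g.name([0.8, 0.3])
    # Corrective feedback from the human user re-teaches the (word, percept) pair.
    g.teach(guess, [0.8, 0.3])
    ```

    In the real system the percepts come from the robot's visual sensors and the words from speech recognition; here both are hand-coded toy values.
    
    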

  8. Clarissa Spoken Dialogue System for Procedure Reading and Navigation

    NASA Technical Reports Server (NTRS)

    Hieronymus, James; Dowding, John

    2004-01-01

    Speech is the most natural modality humans use to communicate with other people, agents, and complex systems. A spoken dialogue system must be robust to noise and able to mimic human conversational behavior, such as correcting misunderstandings, answering simple questions about the task, and understanding most well-formed inquiries or commands. The system aims to understand the meaning of the human utterance; if it does not, it discards the utterance as being meant for someone else. The first operational system is Clarissa, a conversational procedure reader and navigator, which will be used in a System Development Test Objective (SDTO) on the International Space Station (ISS) during Expedition 10. In the present environment, one astronaut reads the procedure on a Manual Procedure Viewer (MPV) or on paper, and has to stop to read or turn pages, shifting focus from the task. Clarissa is designed to read and navigate ISS procedures entirely with speech, while the astronaut's eyes and hands are engaged in performing the task. The system also provides an MPV-like graphical interface so the procedure can be read visually. A demo of the system will be given.

  9. The roles of language processing in a spoken language interface.

    PubMed Central

    Hirschman, L

    1995-01-01

    This paper provides an overview of the colloquium's discussion session on natural language understanding, which followed presentations by M. Bates [Bates, M. (1995) Proc. Natl. Acad. Sci. USA 92, 9977-9982] and R. C. Moore [Moore, R. C. (1995) Proc. Natl. Acad. Sci. USA 92, 9983-9988]. The paper reviews the dual role of language processing in providing understanding of the spoken input and an additional source of constraint in the recognition process. To date, language processing has successfully provided understanding but has provided only limited (and computationally expensive) constraint. As a result, most current systems use a loosely coupled, unidirectional interface, such as N-best or a word network, with natural language constraints as a postprocess, to filter or re-sort the recognizer output. However, the level of discourse context provides significant constraint on what people can talk about and how things can be referred to; when the system becomes an active participant, it can influence this order. But sources of discourse constraint have not been extensively explored, in part because these effects can only be seen by studying systems in the context of their use in interactive problem solving. This paper argues that we need to study interactive systems to understand what kinds of applications are appropriate for the current state of technology and how the technology can move from the laboratory toward real applications. PMID:7479811
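
    The loosely coupled, unidirectional interface described above, in which natural-language constraints post-process the recognizer's N-best list, reduces to filtering and re-sorting scored hypotheses. A minimal sketch follows; the hypotheses, scores, and the stand-in `vocabulary_ok` predicate are invented for illustration (a real system would apply a parser or grammar, not a string test).

    ```python
    def rescore_nbest(nbest, accepts):
        """Keep only hypotheses the language component accepts, then
        re-sort the survivors by acoustic score (highest first)."""
        survivors = [(hyp, score) for hyp, score in nbest if accepts(hyp)]
        return sorted(survivors, key=lambda pair: pair[1], reverse=True)

    # Invented recognizer output: the acoustically best hypothesis is ill-formed.
    nbest = [
        ("show me flights two boston", 0.64),
        ("show me flights to boston", 0.61),
        ("show knee flights to boston", 0.12),
    ]

    # Stand-in for natural-language constraints applied as a postprocess.
    vocabulary_ok = lambda hyp: "two boston" not in hyp and "knee" not in hyp

    ranked = rescore_nbest(nbest, vocabulary_ok)
    ```

    The point of the sketch is the architecture: the language component never feeds constraint back into recognition, it only prunes and reorders what the recognizer already produced.
    
    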

  10. Context and spoken word recognition in a novel lexicon.

    PubMed

    Revill, Kathleen Pirog; Tanenhaus, Michael K; Aslin, Richard N

    2008-09-01

    Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments, novel names for the actions and the shapes varied in frequency, cohort density, and whether the cohorts referred to actions (Experiment 1) or shapes with action-congruent or action-incongruent affordances (Experiments 2 and 3). Experiment 1 demonstrated effects of frequency and cohort competition from both displayed and non-displayed competitors. In Experiment 2, a biasing context induced an increase in anticipatory eye movements to congruent referents and reduced the probability of looks to incongruent cohorts, without the delay predicted by access-selection models. In Experiment 3, context did not reduce competition from non-displayed incompatible neighbors as predicted by restrictive access models. The authors conclude that the results are most consistent with continuous integration models. PMID:18763901

  11. Seeing the Talker's Face Improves Free Recall of Speech for Young Adults with Normal Hearing but Not Older Adults with Hearing Loss

    ERIC Educational Resources Information Center

    Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker

    2016-01-01

    Purpose: Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Method: Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13…

  12. Training and evaluation corpora for the extraction of causal relationships encoded in biological expression language (BEL)

    PubMed Central

    Madan, Sumit; Ansari, Sam; Kodamullil, Alpha T.; Karki, Reagon; Rastegar-Mojarad, Majid; Catlett, Natalie L.; Hayes, William; Szostak, Justyna; Hoeng, Julia; Peitsch, Manuel

    2016-01-01

    Success in extracting biological relationships is mainly dependent on the complexity of the task as well as the availability of high-quality training data. Here, we describe the new corpora in the systems biology modeling language BEL for training and testing biological relationship extraction systems that we prepared for the BioCreative V BEL track. BEL was designed to capture relationships not only between proteins or chemicals, but also complex events such as biological processes or disease states. A BEL nanopub is the smallest unit of information and represents a biological relationship with its provenance. In BEL relationships (called BEL statements), the entities are normalized to defined namespaces mainly derived from public repositories, such as sequence databases, MeSH or publicly available ontologies. In the BEL nanopubs, the BEL statements are associated with citation information and supportive evidence such as a text excerpt. To enable the training of extraction tools, we prepared BEL resources and made them available to the community. We selected a subset of these resources focusing on a reduced set of namespaces, namely, human and mouse genes, ChEBI chemicals, MeSH diseases and GO biological processes, as well as relationship types ‘increases’ and ‘decreases’. The published training corpus contains 11 000 BEL statements from over 6000 supportive text excerpts. For method evaluation, we selected and re-annotated two smaller subcorpora containing 100 text excerpts. For this re-annotation, the inter-annotator agreement was measured by the BEL track evaluation environment and resulted in a maximal F-score of 91.18% for full statement agreement. In addition, for a set of 100 BEL statements, we do not only provide the gold standard expert annotations, but also text excerpts pre-selected by two automated systems. Those text excerpts were evaluated and manually annotated as true or false supportive in the course of the BioCreative V BEL track task

  13. Training and evaluation corpora for the extraction of causal relationships encoded in biological expression language (BEL).

    PubMed

    Fluck, Juliane; Madan, Sumit; Ansari, Sam; Kodamullil, Alpha T; Karki, Reagon; Rastegar-Mojarad, Majid; Catlett, Natalie L; Hayes, William; Szostak, Justyna; Hoeng, Julia; Peitsch, Manuel

    2016-01-01

    Success in extracting biological relationships is mainly dependent on the complexity of the task as well as the availability of high-quality training data. Here, we describe the new corpora in the systems biology modeling language BEL for training and testing biological relationship extraction systems that we prepared for the BioCreative V BEL track. BEL was designed to capture relationships not only between proteins or chemicals, but also complex events such as biological processes or disease states. A BEL nanopub is the smallest unit of information and represents a biological relationship with its provenance. In BEL relationships (called BEL statements), the entities are normalized to defined namespaces mainly derived from public repositories, such as sequence databases, MeSH or publicly available ontologies. In the BEL nanopubs, the BEL statements are associated with citation information and supportive evidence such as a text excerpt. To enable the training of extraction tools, we prepared BEL resources and made them available to the community. We selected a subset of these resources focusing on a reduced set of namespaces, namely, human and mouse genes, ChEBI chemicals, MeSH diseases and GO biological processes, as well as relationship types 'increases' and 'decreases'. The published training corpus contains 11 000 BEL statements from over 6000 supportive text excerpts. For method evaluation, we selected and re-annotated two smaller subcorpora containing 100 text excerpts. For this re-annotation, the inter-annotator agreement was measured by the BEL track evaluation environment and resulted in a maximal F-score of 91.18% for full statement agreement. In addition, for a set of 100 BEL statements, we do not only provide the gold standard expert annotations, but also text excerpts pre-selected by two automated systems. Those text excerpts were evaluated and manually annotated as true or false supportive in the course of the BioCreative V BEL track task
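
    For concreteness, a BEL statement pairs subject and object terms from controlled namespaces with a relationship type. Hypothetical statements using the track's reduced namespace set (human/mouse genes, ChEBI chemicals, MeSH diseases, GO biological processes) and the 'increases'/'decreases' relations might look like the following; these are invented illustrations of the notation, not content drawn from the corpus:

    ```
    p(HGNC:IL6) increases bp(GO:"inflammatory response")
    a(CHEBI:lipopolysaccharide) increases path(MESHD:Inflammation)
    p(MGI:Tnf) decreases bp(GO:"apoptotic process")
    ```

    In a nanopub, each such statement would additionally carry its citation and the supportive text excerpt as provenance.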

  14. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images

    PubMed Central

    Rusnell, Brennan J; Pierson, Roger A; Singh, Jaswant; Adams, Gregg P; Eramian, Mark G

    2008-01-01

    Background: The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology. Methods: Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed, and contour evolution was allowed to proceed until the rate of change of the area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics. Results and discussion: The mean MAD was 0.87 mm (sigma = 0.36 mm), RMSD was 1.1 mm (sigma = 0.47 mm), and HD was 3.4 mm (sigma = 2.0 mm), indicating that, on average, boundaries were accurate within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (sigma = 0.171) and 0.990 (sigma = 0.00786), respectively, indicating that CLs were consistently undersegmented but that the contour interior rarely included pixels judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high-contrast speckle, contour expansion stopped too early. Conclusion: The hypothesis that level set
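
    The evolve-until-the-area-stabilizes scheme described in the Methods can be sketched as follows. This is a minimal illustrative sketch, not the authors' algorithm: it omits the curvature term, re-initialization, and the CL-specific speed function, and all names and parameters are invented.

    ```python
    import numpy as np

    def evolve_level_set(speed, phi, dt=0.5, max_iter=200, area_tol=1e-3):
        """Evolve the zero level set of phi (inside: phi < 0) under a speed map.

        Follows d(phi)/dt = -speed * |grad(phi)|; iteration stops when the
        relative change of the enclosed area falls below area_tol, mirroring
        the stopping rule described in the abstract.
        """
        prev_area = np.count_nonzero(phi < 0)
        for _ in range(max_iter):
            gy, gx = np.gradient(phi)                      # spatial gradient of phi
            phi = phi - dt * speed * np.sqrt(gx**2 + gy**2)
            area = np.count_nonzero(phi < 0)
            if prev_area and abs(area - prev_area) / prev_area < area_tol:
                break                                      # area has stabilized
            prev_area = area
        return phi

    # Toy run: a circular initial contour expands under a uniform positive speed.
    y, x = np.mgrid[0:64, 0:64]
    phi0 = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2) - 5.0  # signed distance, r = 5
    phi = evolve_level_set(np.ones_like(phi0), phi0, max_iter=20)
    ```

    In the study, the speed map is derived from the ultrasound image so the contour slows near CL boundaries; here a constant speed simply inflates the circle.
    
    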

  15. Balanced intervention for adolescents and adults with language impairment: a clinical framework.

    PubMed

    Fallon, Karen A; Katz, Lauren A; Carlberg, Rachel

    2015-02-01

    Providing effective intervention services for adolescents and adults who struggle with spoken and written language presents a variety of unique challenges. This article discusses the 5S Framework (skills, strategies, school, student buy-in, and stakeholders) for designing and implementing balanced spoken and written language interventions for adolescents and adults. An in-depth case illustration highlights the usefulness of the framework for targeting the language and literacy skills of adolescents and young adults. By describing and illustrating the five key components of the intervention framework, the article provides a useful clinical tool to help guide clinicians and educators who serve the needs of adolescents and adults who struggle with spoken and written language. PMID:25633140

  16. The Role of Grammatical Category Information in Spoken Word Retrieval

    PubMed Central

    Duràn, Carolina Palma; Pillon, Agnesa

    2011-01-01

    We investigated the role of lexical syntactic information such as grammatical gender and category in spoken word retrieval processes by using a blocking paradigm in picture and written word naming experiments. In Experiments 1, 3, and 4, we found that the naming of target words (nouns) from pictures or written words was faster when these target words were named within a list where only words from the same grammatical category had to be produced (homogeneous category list: all nouns) than when they had to be produced within a list also comprising words from another grammatical category (heterogeneous category list: nouns and verbs). On the other hand, we detected no significant facilitation effect when the target words had to be named within a homogeneous gender list (all masculine nouns) compared to a heterogeneous gender list (both masculine and feminine nouns). In Experiment 2, using the same blocking paradigm while manipulating the semantic category of the items, we found that naming latencies were significantly slower in the semantically homogeneous condition than in the semantically heterogeneous condition. Thus semantic category homogeneity caused interference, not a facilitation effect like that of grammatical category homogeneity. Finally, in Experiment 5, nouns in the heterogeneous category condition had to be named just after a verb (category-switching position) or a noun (same-category position). We found a facilitation effect of category homogeneity but no significant effect of position, which showed that the effect of category homogeneity found in Experiments 1, 3, and 4 was not due to a cost of switching between grammatical categories in the heterogeneous grammatical category list. These findings support the hypothesis that grammatical category information impacts word retrieval processes in speech production, even when words are to be produced in isolation. They are discussed within the context of extant theories of lexical production. PMID

  17. Reuse of termino-ontological resources and text corpora for building a multilingual domain ontology: an application to Alzheimer's disease.

    PubMed

    Dramé, Khadim; Diallo, Gayo; Delva, Fleur; Dartigues, Jean François; Mouillet, Evelyne; Salamon, Roger; Mougin, Fleur

    2014-04-01

    Ontologies are useful tools for sharing and exchanging knowledge. However, ontology construction is complex and often time-consuming. In this paper, we present a method for building a bilingual domain ontology from textual and termino-ontological resources, intended for semantic annotation and information retrieval of textual documents. This method combines two approaches: ontology learning from texts and the reuse of existing terminological resources. It consists of four steps: (i) term extraction from domain-specific corpora (in French and English) using textual analysis tools, (ii) clustering of terms into concepts organized according to the UMLS Metathesaurus, (iii) ontology enrichment through the alignment of French and English terms using parallel corpora and the integration of new concepts, and (iv) refinement and validation of results by domain experts. These validated results are formalized into a domain ontology dedicated to Alzheimer's disease and related syndromes, which is available online (http://lesim.isped.u-bordeaux2.fr/SemBiP/ressources/ontoAD.owl). The latter currently includes 5765 concepts linked by 7499 taxonomic relationships and 10,889 non-taxonomic relationships. Among these results, 439 concepts absent from the UMLS were created and 608 new synonymous French terms were added. The proposed method is sufficiently flexible to be applied to other domains. PMID:24382429
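
    At their core, steps (ii) and (iii) of the pipeline, clustering extracted terms into concepts and enriching each concept with aligned translations, amount to grouping bilingual terms under a shared concept identifier. A minimal sketch, assuming the term-to-identifier links have already been established (the UMLS-style CUIs and terms below are illustrative placeholders, not the paper's data):

    ```python
    from collections import defaultdict

    # (cui, language, term) triples as they might emerge from term extraction
    # and alignment; identifiers and terms are invented for illustration.
    extracted = [
        ("C0002395", "en", "Alzheimer disease"),
        ("C0002395", "en", "Alzheimer's disease"),
        ("C0002395", "fr", "maladie d'Alzheimer"),
        ("C0011265", "en", "dementia"),
        ("C0011265", "fr", "démence"),
    ]

    # Cluster terms into concepts keyed by identifier; per concept, keep
    # one synonym set per language (English/French).
    concepts = defaultdict(lambda: defaultdict(set))
    for cui, lang, term in extracted:
        concepts[cui][lang].add(term)
    ```

    In the paper this grouping is organized against the UMLS Metathesaurus and then refined by domain experts; the sketch only shows the data structure that the clustering produces.
    
    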

  18. The time course of speaking rate specificity effects in spoken word recognition

    NASA Astrophysics Data System (ADS)

    McLennan, Conor T.; Luce, Paul A.

    2005-09-01

    Specificity effects in spoken word recognition were previously examined by investigating the circumstances under which variability in speaking rate affects participants' perception of spoken words. The word recognition and memory literatures are now replete with demonstrations that variability has representational and processing consequences. The research focuses on one of the conditions expected to influence the extent to which variability plays a role in spoken word recognition, namely the time course of processing. Based on previous work, it was hypothesized that speaking rate variability would only affect later stages of spoken word recognition. The results confirmed this hypothesis: specificity effects were only obtained when processing was relatively slow. However, the previous stimuli differed not only in speaking rate but also in articulation style (i.e., casual and careful). Therefore, the current set of experiments sought to determine whether the same pattern of results would be obtained with stimuli that differed only in speaking rate (i.e., in the absence of articulation style differences). Moreover, to further generalize the time course findings, the stimuli were produced by a different speaker than the one in the earlier study. The results add to our knowledge of the circumstances under which variability affects the perception of spoken words.

  19. On the Nature of Talker Variability Effects on Recall of Spoken Word Lists

    PubMed Central

    Goldinger, Stephen D.; Pisoni, David B.; Logan, John S.

    2012-01-01

    In a recent study, Martin, Mullennix, Pisoni, and Summers (1989) reported that subjects’ accuracy in recalling lists of spoken words was better for words in early list positions when the words were spoken by a single talker than when they were spoken by multiple talkers. The present study was conducted to examine the nature of these effects in further detail. Accuracy of serial-ordered recall was examined for lists of words spoken by either a single talker or by multiple talkers. Half the lists contained easily recognizable words, and half contained more difficult words, according to a combined metric of word frequency, lexical neighborhood density, and neighborhood frequency. Rate of presentation was manipulated to assess the effects of both variables on rehearsal and perceptual encoding. A strong interaction was obtained between talker variability and rate of presentation. Recall of multiple-talker lists was affected much more than single-talker lists by changes in presentation rate. At slow presentation rates, words in early serial positions produced by multiple talkers were actually recalled more accurately than words produced by a single talker. No interaction was observed for word confusability and rate of presentation. The data provide support for the proposal that talker variability affects the accuracy of recall of spoken words not only by increasing the processing demands for early perceptual encoding of the words, but also by affecting the efficiency of the rehearsal process itself. PMID:1826729

  20. Closed-Caption Television and Adult Students of English as a Second Language.

    ERIC Educational Resources Information Center

    Smith, Jennifer J.

    The use of closed-caption television (CCTV) to help teach English as a Second Language (ESL) to adults was studied with a group of adult students in the Arlington, Virginia, Education and Employment Program. Although CCTV is designed for the hearing impaired, its combination of written with spoken English in the visual context of television makes…

  1. The Latent Speaker: Attaining Adult Fluency in an Endangered Language

    ERIC Educational Resources Information Center

    Basham, Charlotte; Fathman, Ann

    2008-01-01

    This paper focuses on how latent knowledge of an ancestral or heritage language affects subsequent acquisition by adults. The "latent speaker" is defined as an individual raised in an environment where the ancestral language was spoken but who did not become a speaker of that language. The study examines how attitudes, latent knowledge and…

  2. Writing Versus Reading in Traditional and Functional Adult Literacy Processes

    ERIC Educational Resources Information Center

    Bonanni, C.

    1971-01-01

    The author suggests a novel approach to adult literacy education - stressing expressive writing instead of primer reading, and relating the basic spelling patterns of the written language to the already possessed corresponding sound patterns of the spoken language rather than teaching alphabets and letters. (AN)

  3. Mechanisms of intracellular calcium homeostasis in developing and mature bovine corpora lutea.

    PubMed

    Wright, Marietta F; Bowdridge, Elizabeth; McDermott, Erica L; Richardson, Samuel; Scheidler, James; Syed, Qaisar; Bush, Taylor; Inskeep, E Keith; Flores, Jorge A

    2014-03-01

    Although calcium (Ca(2+)) is accepted as an intracellular mediator of prostaglandin F2 alpha (PGF2alpha) actions on luteal cells, studies defining mechanisms of Ca(2+) homeostasis in bovine corpora lutea (CL) are lacking. The increase in intracellular Ca(2+) concentration ([Ca(2+)]i) induced by PGF2alpha in steroidogenic cells from mature CL is greater than in those isolated from developing CL. Our hypothesis is that differences in signal transduction associated with developing and mature CL contribute to the increased efficacy of PGF2alpha in inducing a Ca(2+) signal capable of triggering regression in mature CL. To test this hypothesis, major genes participating in Ca(2+) homeostasis in the bovine CL were identified, and expression of mRNA, protein, or activity, in the case of phospholipase Cbeta (PLCbeta), in developing and mature bovine CL was compared. In addition, we examined the contribution of external and internal Ca(2+) to the PGF2alpha-stimulated rise in [Ca(2+)]i in LLCs isolated from developing and mature bovine CL. Three differences in mechanisms of calcium homeostasis were identified between developing and mature CL, which could account for the lesser increase in [Ca(2+)]i in response to PGF2alpha in developing than in mature CL. First, there were lower concentrations of inositol 1,4,5-trisphosphate (IP3) after a similar PGF2alpha challenge, indicating reduced phospholipase C beta (PLCbeta) activity, in developing than in mature CL. Second, there was greater expression of sorcin (SRI) in developing than in mature CL. This cytoplasmic Ca(2+) binding protein modulates the endoplasmic reticulum (ER) Ca(2+) release channel, the ryanodine receptor (RyR), toward the closed configuration. Third, there was greater expression of ATP2A2, or SERCA, which causes calcium reuptake into the ER, in developing than in mature CL. Developmental differences in expression detected in whole CL were confirmed by Western blots using protein samples from steroidogenic cells

  4. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    NASA Astrophysics Data System (ADS)

    Feenaughty, Lynda

    judged each speech sample using the perceptual construct of Speech Severity using a visual analog scale. Additional measures obtained to describe participants included the Sentence Intelligibility Test (SIT), the 10-item Communication Participation Item Bank (CPIB), and standard biopsychosocial measures of depression (Beck Depression Inventory-Fast Screen; BDI-FS), fatigue (Fatigue Severity Scale; FSS), and overall disease severity (Expanded Disability Status Scale; EDSS). Healthy controls completed all measures, with the exception of the CPIB and EDSS. All data were analyzed using standard, descriptive and parametric statistics. For the MSCI group, the relationship between neuropsychological test scores and speech-language variables were explored for each speech task using Pearson correlations. The relationship between neuropsychological test scores and Speech Severity also was explored. Results and Discussion: Topic familiarity for descriptive discourse did not strongly influence speech production or perceptual variables; however, results indicated predicted task-related differences for some spoken language measures. With the exception of the MSCI group, all speaker groups produced the same or slower global speech timing (i.e., speech and articulatory rates), more silent and filled pauses, more grammatical and longer silent pause durations in spontaneous discourse compared to reading aloud. Results revealed no appreciable task differences for linguistic complexity measures. Results indicated group differences for speech rate. The MSCI group produced significantly faster speech rates compared to the MSDYS group. Both the MSDYS and the MSCI groups were judged to have significantly poorer perceived Speech Severity compared to typically aging adults. The Task x Group interaction was only significant for the number of silent pauses. The MSDYS group produced fewer silent pauses in spontaneous speech and more silent pauses in the reading task compared to other groups. Finally
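    The abstract above reports exploring the relationship between neuropsychological test scores and speech-language variables with Pearson correlations. As a minimal sketch of that statistic (the numbers below are invented for illustration, not data from the study):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values: cognitive test scores vs. speech rate (syllables/sec)
scores = [42, 50, 55, 61, 68]
rates = [3.1, 3.4, 3.3, 3.9, 4.2]
r = pearson_r(scores, rates)  # ≈ 0.942 for this toy sample
```

    A study would additionally report a p-value for each r (e.g., via a t-test with n − 2 degrees of freedom); this sketch shows only the correlation coefficient itself.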

  5. Semantic Encoding of Spoken Sentences: Adult Aging and the Preservation of Conceptual Short-Term Memory

    ERIC Educational Resources Information Center

    Little, Deborah M.; McGrath, Lauren M.; Prentice, Kristen J.; Wingfield, Arthur

    2006-01-01

    Traditional models of human memory have postulated the need for a brief phonological or verbatim representation of verbal input as a necessary gateway to a higher level conceptual representation of the input. Potter has argued that meaningful sentences may be encoded directly in a conceptual short-term memory (CSTM) running parallel in time to…

  6. Grammatical Processing of Spoken Language in Child and Adult Language Learners

    ERIC Educational Resources Information Center

    Felser, Claudia; Clahsen, Harald

    2009-01-01

    This article presents a selective overview of studies that have investigated auditory language processing in children and late second-language (L2) learners using online methods such as event-related potentials (ERPs), eye-movement monitoring, or the cross-modal priming paradigm. Two grammatical phenomena are examined in detail, children's and…

  7. Directory of Spoken-Voice Audio-Cassettes, 1972.

    ERIC Educational Resources Information Center

    McKee, Gerald, Ed.

    Most listings in this catalog, which draws on many sources of production and is not a guide to one company's output, are for programs of college or adult level interest, with the exception of the "Careers" listings, geared toward high school students. The catalog also has lists of producers of children's cassettes and those designed for school…

  8. Use of a microdebrider for corporeal excavation and penile prosthesis implantation in men with severely fibrosed corpora cavernosa: a new minimally invasive surgical technique

    PubMed Central

    Bozkurt, İbrahim Halil; Yonguç, Tarık; Aydoğdu, Özgü; Değirmenci, Tansu; Arslan, Murat; Minareci, Süleyman

    2015-01-01

    Objective To propose a new minimally invasive surgical technique using a microdebrider (shaver) to excavate the fibrosed corpora cavernosa for penile prosthesis implantation in patients with severe fibrosis. Material and methods Two patients with severe corporeal fibrosis were implanted with a penile prosthesis using this technique. In the first patient, fibrosis was due to neglected idiopathic ischemic priapism; the second patient had his prosthesis extruded because of erosion in another center. Both patients were counseled about the procedure and the possible complications related to the experimental nature of the technique. Written informed consent was obtained from both patients. Excavation of the corpora was performed using a microdebrider in both patients. Results Both operations were performed successfully without any intraoperative complications, including urethral injury or perforation of the tunica. The mean operation time was 57 min. The postoperative period was uneventful, without any infection, migration, erosion, or mechanical failure. Penile length increased by nearly 2 cm in both patients, and penile girth increased by around 30% in the patient who underwent inflatable penile prosthesis implantation. Conclusion The microdebrider potentially provides an important advance for patients with severe corporeal fibrosis, allowing excavation of the fibrosed corpora cavernosa for penile prosthesis implantation. The main advantages include fast, safe, and effective excavation of the fibrous corpora cavernosa, adequate for satisfactory penile prosthesis implantation. PMID:26516594

  9. Is This Enough? A Qualitative Evaluation of the Effectiveness of a Teacher-Training Course on the Use of Corpora in Language Education

    ERIC Educational Resources Information Center

    Lenko-Szymanska, Agnieszka

    2014-01-01

    The paper describes a teacher-training course on the use of corpora in language education offered to graduate students at the Institute of Applied Linguistics, University of Warsaw. It also presents the results of two questionnaires distributed to the students prior to and after the second edition of the course. The main aims of the course are: to…

  10. Speaking is silver, writing is golden? The role of cognitive and social factors in written versus spoken witness accounts.

    PubMed

    Sauerland, Melanie; Krix, Alana C; van Kan, Nikki; Glunz, Sarah; Sak, Annabel

    2014-08-01

    Contradictory empirical findings and theoretical accounts exist that are in favor of either a written or a spoken superiority effect. In this article, we present two experiments that put the recall modality effect in the context of eyewitness reports to another test. More specifically, we investigated the role of cognitive and social factors in the effect. In both experiments, participants watched a videotaped staged crime and then gave spoken or written accounts of the event and the people involved. In Experiment 1, 135 participants were assigned to written, spoken-videotaped, spoken-distracted, or spoken-voice recorded conditions to test for the impact of cognitive demand and social factors in the form of interviewer presence. Experiment 2 (N = 124) tested the idea that instruction comprehensiveness differentially impacts recall performance in written versus spoken accounts. While there was no evidence for a spoken superiority effect, we found some support for a written superiority effect for description quantity, but not accuracy. Furthermore, any differences found in description quantity as a function of recall modality could be traced back to participants' free reports. Following up with cued open-ended questions compensated for this effect, although at the expense of description accuracy. This suggests that current police practice of arbitrarily obtaining written or spoken accounts is mostly unproblematic. PMID:24604628

  11. Word reading skill predicts anticipation of upcoming spoken language input: a study of children developing proficiency in reading.

    PubMed

    Mani, Nivedita; Huettig, Falk

    2014-10-01

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing, potentially attributable to participants' literacy skills. Against this background, the current study examined the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, as in previous studies, were successfully able to anticipate upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. PMID:24955519

  12. Crafting "The Humble Prose of Living": Rethinking Oral/Written Relations in the Echoes of Spoken Word

    ERIC Educational Resources Information Center

    Dyson, Anne Haas

    2005-01-01

    The art of spoken word has captured youthful spirits all over the country, especially in heterogeneous urban centers. In this essay, the author opens up for re-consideration a central issue in language arts education, one suggested by the very writing of spoken word: how educators think about the relationship between oral and written language and…

  13. Intervention Effects on Spoken-Language Outcomes for Children with Autism: A Systematic Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Hampton, L. H.; Kaiser, A. P.

    2016-01-01

    Background: Although spoken-language deficits are not core to an autism spectrum disorder (ASD) diagnosis, many children with ASD do present with delays in this area. Previous meta-analyses have assessed the effects of intervention on reducing autism symptomatology, but have not determined if intervention improves spoken language. This analysis…

  14. Spoken Language Scores of Children Using Cochlear Implants Compared to Hearing Age-Mates at School Entry

    ERIC Educational Resources Information Center

    Geers, Ann E.; Moog, Jean S.; Biedenstein, Julia; Brenner, Christine; Hayes, Heather

    2009-01-01

    This study investigated three questions: Is it realistic to expect age-appropriate spoken language skills in children with cochlear implants (CIs) who received auditory-oral intervention during the preschool years? What characteristics predict successful spoken language development in this population? Are children with CIs more proficient in some…

  15. AG Bell Academy Certification Program for Listening and Spoken Language Specialists: Meeting a World-Wide Need for Qualified Professionals

    ERIC Educational Resources Information Center

    Goldberg, Donald M.; Dickson, Cheryl L.; Flexer, Carol

    2010-01-01

    This article discusses the AG Bell Academy for Listening and Spoken Language--an organization designed to build capacity of certified Listening and Spoken Language Specialists (LSLS) by defining and maintaining a set of professional standards for LSLS professionals and thereby addressing the global deficit of qualified LSLS. Definitions and…

  16. Are Young Children with Cochlear Implants Sensitive to the Statistics of Words in the Ambient Spoken Language?

    ERIC Educational Resources Information Center

    Guo, Ling-Yu; McGregor, Karla K.; Spencer, Linda J.

    2015-01-01

    Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation.…

  17. A Critique of Mark D. Allen's "The Preservation of Verb Subcategory Knowledge in a Spoken Language Comprehension Deficit"

    ERIC Educational Resources Information Center

    Kemmerer, David

    2008-01-01

    Allen [Allen, M. (2005). "The preservation of verb subcategory knowledge in a spoken language comprehension deficit." "Brain and Language, 95", 255-264.] reports a single patient, WBN, who, during spoken language comprehension, is still able to access some of the syntactic properties of verbs despite being unable to access some of their semantic…

  19. Development of Lexical-Semantic Language System: N400 Priming Effect for Spoken Words in 18- and 24-Month Old Children

    ERIC Educational Resources Information Center

    Rama, Pia; Sirri, Louah; Serres, Josette

    2013-01-01

    Our aim was to investigate whether the developing language system, as measured by a priming task for spoken words, is organized by semantic categories. Event-related potentials (ERPs) were recorded during a priming task for spoken words in 18- and 24-month-old monolingual French-learning children. Spoken word pairs were either semantically related…

  20. Horizontal Flow of Semantic and Phonological Information in Chinese Spoken Sentence Production

    ERIC Educational Resources Information Center

    Yang, Jin-Chen; Yang, Yu-Fang

    2008-01-01

    A variant of the picture--word interference paradigm was used in three experiments to investigate the horizontal information flow of semantic and phonological information between nouns in spoken Mandarin Chinese sentences. Experiment 1 demonstrated that there is a semantic interference effect when the word in the second phrase (N3) and the first…

  1. Un trait du francais parle authentique: La dislocation. (A Trait of Authentic Spoken French: Dislocation.)

    ERIC Educational Resources Information Center

    Calve, Pierre

    1983-01-01

    The dislocation of sentence elements in spoken French is seen as allowing the speaker to free himself from certain constraints imposed on word order, position of accents, and grammar. Dislocation is described, its various functions are enumerated, and implications for second language instruction are outlined. (MSE)

  2. Learning and Consolidation of New Spoken Words in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Henderson, Lisa; Powell, Anna; Gaskell, M. Gareth; Norbury, Courtenay

    2014-01-01

    Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary knowledge and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words…

  3. Teaching Spoken Discourse Markers Explicitly: A Comparison of III and PPP

    ERIC Educational Resources Information Center

    Jones, Christian; Carter, Ronald

    2014-01-01

    This article reports on mixed methods classroom research carried out at a British university. The study investigates the effectiveness of two different explicit teaching frameworks, Illustration--Interaction--Induction (III) and Present--Practice--Produce (PPP) used to teach the same spoken discourse markers (DMs) to two different groups of…

  4. A Spoken Word Count (Children--Ages 5, 6 and 7).

    ERIC Educational Resources Information Center

    Wepman, Joseph M.; Hass, Wilbur

    Relatively little research has been done on the quantitative characteristics of children's word usage. This spoken count was undertaken to investigate those aspects of word usage and frequency which could cast light on lexical processes in grammar and verbal development in children. Three groups of 30 children each (boys and girls) from…

  5. Comparing Spoken Language Treatments for Minimally Verbal Preschoolers with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Paul, Rhea; Campbell, Daniel; Gilbert, Kimberly; Tsiouri, Ioanna

    2013-01-01

    Preschoolers with severe autism and minimal speech were assigned either a discrete trial or a naturalistic language treatment, and parents of all participants also received parent responsiveness training. After 12 weeks, both groups showed comparable improvement in number of spoken words produced, on average. Approximately half the children in…

  6. Gesture in Multiparty Interaction: A Study of Embodied Discourse in Spoken English and American Sign Language

    ERIC Educational Resources Information Center

    Shaw, Emily P.

    2013-01-01

    This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…

  7. Cross-Language Perception of Cantonese Vowels Spoken by Native and Non-Native Speakers

    ERIC Educational Resources Information Center

    So, Connie K.; Attina, Virginie

    2014-01-01

    This study examined the effect of native language background on listeners' perception of native and non-native vowels spoken by native (Hong Kong Cantonese) and non-native (Mandarin and Australian English) speakers. Listeners completed discrimination and identification tasks, with and without visual cues, in clear and noisy conditions. Results…

  8. Design, Collection, and Description of a Database of Spoken Australian English.

    ERIC Educational Resources Information Center

    Millar, J. Bruce; And Others

    1989-01-01

    This paper describes the rationale for collection, digitisation, and quantitative characterization of a large multispeaker database of spoken Australian English. The speakers, all of whom were born in Australia of Australian parents, were recorded on 10 occasions over a period of months with each speaker recording a variety of speaking styles on…

  9. Infant perceptual development for faces and spoken words: An integrated approach

    PubMed Central

    Watson, Tamara L; Robbins, Rachel A; Best, Catherine T

    2014-01-01

    There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626

  10. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2012-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an event-related potential (ERP) norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a…

  11. Professional Training in Listening and Spoken Language--A Canadian Perspective

    ERIC Educational Resources Information Center

    Fitzpatrick, Elizabeth

    2010-01-01

    Several factors undoubtedly influenced the development of listening and spoken language options for children with hearing loss in Canada. The concept of providing auditory-based rehabilitation was popularized in Canada in the 1960s through the work of Drs. Daniel Ling and Agnes Ling in Montreal. The Lings founded the McGill University Project for…

  12. Italians Use Abstract Knowledge about Lexical Stress during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Sulpizio, Simone; McQueen, James M.

    2012-01-01

    In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress.…

  13. A Transactional Model of Spoken Vocabulary Variation in Toddlers with Intellectual Disabilities

    PubMed Central

    Woynaroski, Tiffany; Yoder, Paul; Fey, Marc E.; Warren, Steven F.

    2014-01-01

    Purpose This study examined whether: a) dose frequency of Milieu Communication Teaching (MCT) affects children's canonical syllabic communication, and b) the relation between early canonical syllabic communication and later spoken vocabulary is mediated by parental linguistic mapping in children with intellectual disabilities (ID). Method We drew on extant data from a recent differential treatment intensity study in which 63 toddlers with ID were randomly assigned to receive either five 1-hour MCT sessions per week (i.e., daily treatment) or one 1-hour MCT session per week (i.e., weekly treatment) for nine months. Children's early canonical syllabic communication was measured after three months of treatment, and later spoken vocabulary was measured at post-treatment. Mid-point parental linguistic mapping was measured after 6 months of treatment. Results A moderate-sized effect in favor of daily treatment was observed on canonical syllabic communication. The significant relation between canonical syllabic communication and spoken vocabulary was partially mediated by linguistic mapping. Conclusions These results suggest that canonical syllabic communication may elicit parental linguistic mapping, which may in turn support spoken vocabulary development in children with ID. More frequent early intervention boosted canonical syllabic communication, which may jumpstart this transactional language-learning mechanism. Implications for theory, research, and practice are discussed. PMID:24802090
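    The mediation claim in the abstract above (X → M → Y, where a mediator carries part of an effect) is commonly quantified as the product of two regression coefficients. Below is a minimal product-of-coefficients sketch with invented toy data; the variable names mirror the study's constructs, but the numbers and the simplified method are illustrative only, not the authors' analysis.

```python
def slope(x, y):
    """Least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

def residuals(x, y):
    """Residuals of y after regressing out x."""
    b = slope(x, y)
    a = sum(y) / len(y) - b * sum(x) / len(x)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Hypothetical toy data (not from the study). Y is built as Y = 5 + X + 2*M,
# so the M -> Y path controlling for X should come out as exactly 2.
X = [1, 2, 3, 4, 5, 6]        # early canonical syllabic communication
M = [2, 3, 3, 5, 6, 6]        # parental linguistic mapping (mediator)
Y = [10, 13, 14, 19, 22, 23]  # later spoken vocabulary

a_path = slope(X, M)                              # path a: X -> M
b_path = slope(residuals(X, M), residuals(X, Y))  # path b: M -> Y given X (Frisch-Waugh)
indirect = a_path * b_path                        # mediated (indirect) effect
```

    Residualizing both M and Y on X before taking the slope is the Frisch-Waugh shortcut for the M coefficient in the multiple regression Y ~ X + M; a real mediation analysis would also test the indirect effect's significance (e.g., with bootstrapped confidence intervals).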

  14. Literacy in the Mainstream Inner-City School: Its Relationship to Spoken Language

    ERIC Educational Resources Information Center

    Myers, Lucy; Botting, Nicola

    2008-01-01

    This study describes the language and literacy skills of 11-year-olds attending a mainstream school in an area of social and economic disadvantage. The proportion of these young people experiencing difficulties in decoding and reading comprehension was identified and the relationship between spoken language skills and reading comprehension…

  15. Overt vs. Null Direct Objects in Spoken Brazilian Portuguese: A Semantic/Pragmatic Account.

    ERIC Educational Resources Information Center

    Schwenter, Scott A.; Silva, Glaucia

    2002-01-01

    Examines the semantic/pragmatic constraints on null objects spoken in Brazilian Portuguese (BP) in detail, and situates BP null objects in the broader crosslinguistic perspective of differential object marking. Demonstrates that semantic/pragmatic dimensions of animacy and specificity, and in particular their interaction, must be taken into…

  16. Interpersonal Engagement in Academic Spoken Discourse: A Functional Account of Dissertation Defenses

    ERIC Educational Resources Information Center

    Recski, Leonardo

    2005-01-01

    Whereas former research on academic discourse has paid a great deal of attention to writing and its hedging strategies, this paper aims to show that a complementary and equally important feature of academic spoken discourse is the use of modal certainty. An examination of modal selections in two American Dissertation Defenses additionally reveals…

  17. Quantifying the "Degree of Linguistic Demand" in Spoken Intelligence Test Directions

    ERIC Educational Resources Information Center

    Cormier, Damien C.; McGrew, Kevin S.; Evans, Jeffrey J.

    2011-01-01

    The linguistic demand of spoken instructions on individually administered norm-referenced psychological and educational tests is of concern when examining individuals who have varying levels of language processing ability or varying cultural backgrounds. The authors present a new method for analyzing the level of verbosity, complexity, and total…

  18. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    ERIC Educational Resources Information Center

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…

  19. Contrastive Analysis of Turkish and English in Turkish EFL Learners' Spoken Discourse

    ERIC Educational Resources Information Center

    Yildiz, Mustafa

    2016-01-01

    The present study aimed at finding whether L1 Turkish caused interference errors on Turkish EFL learners' spoken English discourse. Whether English proficiency level had any effect on the number of errors learners made was further investigated. The participants were given the chance to choose one of the two alternative topics to speak about. The…

  20. Expected Test Scores for Preschoolers with a Cochlear Implant Who Use Spoken Language

    ERIC Educational Resources Information Center

    Nicholas, Johanna G.; Geers, Ann E.

    2008-01-01

    Purpose: The major purpose of this study was to provide information about expected spoken language skills of preschool-age children who are deaf and who use a cochlear implant. A goal was to provide "benchmarks" against which those skills could be compared, for a given age at implantation. We also examined whether parent-completed checklists of…

  1. Spoken and Written Narratives in Swedish Children and Adolescents with Hearing Impairment

    ERIC Educational Resources Information Center

    Asker-Arnason, Lena; Akerlund, Viktoria; Skoglund, Cecilia; Ek-Lagergren, Ingela; Wengelin, Asa; Sahlen, Birgitta

    2012-01-01

    Twenty 10- to 18-year-old children and adolescents with varying degrees of hearing impairment (HI) and hearing aids (HA), ranging from mild-moderate to severe, produced picture-elicited narratives in a spoken and written version. Their performance was compared to that of 63 normally hearing (NH) peers within the same age span. The participants…

  2. Spoken Language Benefits of Extending Cochlear Implant Candidacy Below 12 Months of Age

    PubMed Central

    Nicholas, Johanna G.; Geers, Ann E.

    2013-01-01

    Objective To test the hypothesis that cochlear implantation surgery before 12 months of age yields better spoken language results than surgery between 12–18 months of age. Study Design Language testing administered to children at 4.5 years of age (± 2 months). Setting Schools, speech-language therapy offices, and cochlear implant (CI) centers in the US and Canada. Participants 69 children who received a cochlear implant between 6 and 18 months of age. All children were learning to communicate via listening and spoken language in English-speaking families. Main Outcome Measure Standard scores on receptive vocabulary and on expressive and receptive language (including grammar). Results Children with CI surgery at 6–11 months (N=27) achieved higher scores on all measures as compared to those with surgery at 12–18 months (N=42). Regression analysis revealed a linear relationship between age of implantation and language outcomes throughout the 6–18 month surgery-age range. Conclusion For children in intervention programs emphasizing listening and spoken language, cochlear implantation before 12 months of age appears to provide a significant advantage for spoken language achievement observed at 4.5 years of age. PMID:23478647
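    The regression result above (a linear relationship between implantation age and language outcome) can be illustrated with ordinary least squares. The data below are invented to mimic the reported direction of the effect (later surgery, lower score) and are not from the study.

```python
def ols_fit(x, y):
    """Least-squares intercept and slope for the line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical values: age at CI surgery (months) vs. standard language
# score at age 4.5 years
ages = [6, 8, 10, 12, 14, 16, 18]
scores = [104, 101, 99, 96, 93, 91, 88]
intercept, slope = ols_fit(ages, scores)  # negative slope: later surgery, lower score
```

    A negative fitted slope across the whole 6–18 month range is what distinguishes a continuous age effect from a simple before/after-12-months group difference.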

  3. Chunk Learning and the Development of Spoken Discourse in a Japanese as a Foreign Language Classroom

    ERIC Educational Resources Information Center

    Taguchi, Naoko

    2007-01-01

    This study examined the development of spoken discourse among L2 learners of Japanese who received extensive practice on grammatical chunks. Participants in this study were 22 college students enrolled in an elementary Japanese course. They received instruction on a set of grammatical chunks in class through communicative drills and the…

  4. The Temporal Dynamics of Ambiguity Resolution: Evidence from Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Dahan, Delphine; Gaskell, M. Gareth

    2007-01-01

    Two experiments examined the dynamics of lexical activation in spoken-word recognition. In both, the key materials were pairs of onset-matched picturable nouns varying in frequency. Pictures associated with these words, plus two distractor pictures were displayed. A gating task, in which participants identified the picture associated with…

  5. The Role of Additional Processing Time and Lexical Constraint in Spoken Word Recognition

    ERIC Educational Resources Information Center

    LoCasto, Paul C.; Connine, Cynthia M.; Patterson, David

    2007-01-01

    Three phoneme monitoring experiments examined the manner in which additional processing time influences spoken word recognition. Experiment 1a introduced a version of the phoneme monitoring paradigm in which a silent interval is inserted prior to the word-final target phoneme. Phoneme monitoring reaction time decreased as the silent interval…

  6. A Spoken-Language Intervention for School-Aged Boys with Fragile X Syndrome

    ERIC Educational Resources Information Center

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-01-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived…

  7. Windows into Sensory Integration and Rates in Language Processing: Insights from Signed and Spoken Languages

    ERIC Educational Resources Information Center

    Hwang, So-One K.

    2011-01-01

    This dissertation explores the hypothesis that language processing proceeds in "windows" that correspond to representational units, where sensory signals are integrated according to time-scales that correspond to the rate of the input. To investigate universal mechanisms, a comparison of signed and spoken languages is necessary. Underlying the…

  8. Children's Verbal Working Memory: Role of Processing Complexity in Predicting Spoken Sentence Comprehension

    ERIC Educational Resources Information Center

    Magimairaj, Beula M.; Montgomery, James W.

    2012-01-01

    Purpose: This study investigated the role of processing complexity of verbal working memory tasks in predicting spoken sentence comprehension in typically developing children. Of interest was whether simple and more complex working memory tasks have similar or different power in predicting sentence comprehension. Method: Sixty-five children (6- to…

  9. "Authenticity" in Language Testing: Evaluating Spoken Language Tests for International Teaching Assistants.

    ERIC Educational Resources Information Center

    Hoekje, Barbara; Linnell, Kimberly

    1994-01-01

    Bachman's framework of language testing and standard of authenticity for language testing instruments were used to evaluate three instruments--the SPEAK (Spoken Proficiency English Assessment Kit) test, OPI (Oral Proficiency Interview), and a performance test--as language tests for nonnative-English-speaking teaching assistants. (Contains 53…

  10. Recognition of Signed and Spoken Language: Different Sensory Inputs, the Same Segmentation Procedure

    ERIC Educational Resources Information Center

    Orfanidou, Eleni; Adam, Robert; Morgan, Gary; McQueen, James M.

    2010-01-01

    Signed languages are articulated through simultaneous upper-body movements and are seen; spoken languages are articulated through sequential vocal-tract movements and are heard. But word recognition in both language modalities entails segmentation of a continuous input into discrete lexical units. According to the Possible Word Constraint (PWC),…

  11. Bidialectal African American Adolescents' Beliefs about Spoken Language Expectations in English Classrooms

    ERIC Educational Resources Information Center

    Godley, Amanda; Escher, Allison

    2012-01-01

    This article describes the perspectives of bidialectal African American adolescents--adolescents who speak both African American Vernacular English (AAVE) and Standard English--on spoken language expectations in their English classes. Previous research has demonstrated that many teachers hold negative views of AAVE, but existing scholarship has…

  12. Neural Correlates of Priming Effects in Children during Spoken Word Processing with Orthographic Demands

    ERIC Educational Resources Information Center

    Cao, Fan; Khalid, Kainat; Zaveri, Rishi; Bolger, Donald J.; Bitan, Tali; Booth, James R.

    2010-01-01

    Priming effects were examined in 40 children (9-15 years old) using functional magnetic resonance imaging (fMRI). An orthographic judgment task required participants to determine if two sequentially presented spoken words had the same spelling for the rime. Four lexical conditions were designed: similar orthography and phonology (O[superscript…

  13. Feature Statistics Modulate the Activation of Meaning during Spoken Word Processing

    ERIC Educational Resources Information Center

    Devereux, Barry J.; Taylor, Kirsten I.; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K.

    2016-01-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in ("distinctiveness/sharedness") and likelihood of co-occurrence ("correlational…

  14. Do Phonological Constraints on the Spoken Word Affect Visual Lexical Decision?

    ERIC Educational Resources Information Center

    Lee, Yang; Moreno, Miguel A.; Carello, Claudia; Turvey, M. T.

    2013-01-01

    Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language's writing system and the assimilation of phonemes according to the language's constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes…

  15. Using Unscripted Spoken Texts in the Teaching of Second Language Listening

    ERIC Educational Resources Information Center

    Wagner, Elvis

    2014-01-01

    Most spoken texts that are used in second language (L2) listening classroom activities are scripted texts, where the text is written, revised, polished, and then read aloud with artificially clear enunciation and slow rate of speech. This article explores the field's overreliance on these scripted texts, at the expense of including unscripted…

  16. On-Line Syntax: Thoughts on the Temporality of Spoken Language

    ERIC Educational Resources Information Center

    Auer, Peter

    2009-01-01

    One fundamental difference between spoken and written language has to do with the "linearity" of speaking in time, in that the temporal structure of speaking is inherently the outcome of an interactive process between speaker and listener. But despite the status of "linearity" as one of Saussure's fundamental principles, in practice little more…

  17. Using Language Sample Analysis to Assess Spoken Language Production in Adolescents

    ERIC Educational Resources Information Center

    Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann

    2016-01-01

    Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…

  18. How Are Pronunciation Variants of Spoken Words Recognized? A Test of Generalization to Newly Learned Words

    ERIC Educational Resources Information Center

    Pitt, Mark A.

    2009-01-01

    One account of how pronunciation variants of spoken words (center -> "senner" or "sennah") are recognized is that sublexical processes use information about variation in the same phonological environments to recover the intended segments [Gaskell, G., & Marslen-Wilson, W. D. (1998). Mechanisms of phonological inference in speech perception.…

  19. Spoken Grammar Practice and Feedback in an ASR-Based CALL System

    ERIC Educational Resources Information Center

    de Vries, Bart Penning; Cucchiarini, Catia; Bodnar, Stephen; Strik, Helmer; van Hout, Roeland

    2015-01-01

    Speaking practice is important for learners of a second language. Computer assisted language learning (CALL) systems can provide attractive opportunities for speaking practice when combined with automatic speech recognition (ASR) technology. In this paper, we present a CALL system that offers spoken practice of word order, an important aspect of…

  20. Perception and Lateralization of Spoken Emotion by Youths with High-Functioning Forms of Autism

    ERIC Educational Resources Information Center

    Baker, Kimberly F.; Montgomery, Allen A.; Abramson, Ruth

    2010-01-01

    The perception and the cerebral lateralization of spoken emotions were investigated in children and adolescents with high-functioning forms of autism (HFFA), and age-matched typically developing controls (TDC). A dichotic listening task using nonsense passages was used to investigate the recognition of four emotions: happiness, sadness, anger, and…

  1. Differential Processing of Thematic and Categorical Conceptual Relations in Spoken Word Production

    ERIC Educational Resources Information Center

    de Zubicaray, Greig I.; Hansen, Samuel; McMahon, Katie L.

    2013-01-01

    Studies of semantic context effects in spoken word production have typically distinguished between categorical (or taxonomic) and associative relations. However, associates tend to confound semantic features or morphological representations, such as whole-part relations and compounds (e.g., BOAT-anchor, BEE-hive). Using a picture-word interference…

  2. Frequency Effects in Spoken and Visual Word Recognition: Evidence from Dual-Task Methodologies

    ERIC Educational Resources Information Center

    Cleland, Alexandra A.; Gaskell, M. Gareth; Quinlan, Philip T.; Tamminen, Jakke

    2006-01-01

    The authors report 3 dual-task experiments concerning the locus of frequency effects in word recognition. In all experiments, Task 1 entailed a simple perceptual choice and Task 2 involved lexical decision. In Experiment 1, an underadditive effect of word frequency arose for spoken words. Experiment 2 also showed underadditivity for visual lexical…

  3. Conversations over Video Conferences: An Evaluation of the Spoken Aspects of Video-Mediated Communication.

    ERIC Educational Resources Information Center

    O'Conaill, Brid; And Others

    1993-01-01

    Considers reasons for the lack of acceptance of video communication; examines differences between spoken characteristics of video-mediated communication and face-to-face interaction; and evaluates two video communication systems in the United Kingdom, an Integrated Services Digital Network and LIVE-NET (London Interactive Video Education Network).…

  4. The Influence of Recent Scene Events on Spoken Comprehension: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Knoeferle, Pia; Crocker, Matthew W.

    2007-01-01

    Evidence from recent experiments that monitored attention in clipart scenes during spoken comprehension suggests that people preferably rely on non-stereotypical depicted events over stereotypical thematic knowledge for incremental interpretation. "The Coordinated Interplay Account [Knoeferle, P., & Crocker, M. W. (2006). "The coordinated…

  5. Using Key Part-of-Speech Analysis to Examine Spoken Discourse by Taiwanese EFL Learners

    ERIC Educational Resources Information Center

    Lin, Yen-Liang

    2015-01-01

    This study reports on a corpus analysis of samples of spoken discourse between a group of British and Taiwanese adolescents, with the aim of exploring the statistically significant differences in the use of grammatical categories between the two groups of participants. The key word method extended to a part-of-speech level using the web-based…

  6. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  7. Consequences of the Now-or-Never bottleneck for signed versus spoken languages.

    PubMed

    Emmorey, Karen

    2016-01-01

    Signed and spoken languages emerge, change, are acquired, and are processed under distinct perceptual, motor, and memory constraints. Therefore, the Now-or-Never bottleneck has different ramifications for these languages, which are highlighted in this commentary. The extent to which typological differences in linguistic structure can be traced to processing differences provides unique evidence for the claim that structure is processing. PMID:27562833

  8. Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2011-01-01

    We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when…

  9. The Debate over Literary Tamil versus Standard Spoken Tamil: What Do Teachers Say?

    ERIC Educational Resources Information Center

    Saravanan, Vanithamani; Lakshmi, Seetha; Caleon, Imelda S.

    2009-01-01

    This study aims to determine the attitudes toward Standard Spoken Tamil (SST) and Literary Tamil (LT) of 46 Tamil teachers in Singapore. The teachers' attitudes were used as an indicator of the acceptance or nonacceptance of SST as a viable option in the teaching of Tamil in the classroom, in which the focus has been largely on LT. The…

  10. The Evolution of Iranian French Learners' Spoken Interlanguage from a Cognitive Point of View

    ERIC Educational Resources Information Center

    Mehrabi, Marzieh; Rahmatian, Rouhollah; Safa, Parivash; Armiun, Novid

    2014-01-01

    This paper analyzes the spoken corpus of thirty Iranian learners of French at four levels (A1, A2, B1 and B2). The data were collected in a pseudo-longitudinal manner in semi-directed interviews with half closed and open questions to analyze the learners' syntactic errors (omission, addition, substitution and displacement). The most frequent…

  11. A Transactional Model of Spoken Vocabulary Variation in Toddlers with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Woynaroski, Tiffany; Yoder, Paul J.; Fey, Marc E.; Warren, Steven F.

    2014-01-01

    Purpose: The authors examined (a) whether dose frequency of milieu communication teaching (MCT) affects children's canonical syllabic communication and (b) whether the relation between early canonical syllabic communication and later spoken vocabulary is mediated by parental linguistic mapping in children with intellectual disabilities (ID).…

  12. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    ERIC Educational Resources Information Center

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied…

  13. Phonological Neighborhood Effects in Spoken Word Production: An fMRI Study

    ERIC Educational Resources Information Center

    Peramunage, Dasun; Blumstein, Sheila E.; Myers, Emily B.; Goldrick, Matthew; Baese-Berk, Melissa

    2011-01-01

    The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words, which either had a voiced minimal pair (MP) neighbor (e.g., cape) or lacked a minimal pair (NMP) neighbor (e.g., cake). The voiced neighbor never appeared in the…

  14. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    ERIC Educational Resources Information Center

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  15. Mental Imagery as Revealed by Eye Movements and Spoken Predicates: A Test of Neurolinguistic Programming.

    ERIC Educational Resources Information Center

    Elich, Matthew; And Others

    1985-01-01

    Tested Bandler and Grinder's proposal that eye movement direction and spoken predicates are indicative of sensory modality of imagery. Subjects reported images in the three modes, but no relation between imagery and eye movements or predicates was found. Visual images were most vivid and often reported. Most subjects rated themselves as visual,…

  16. The Roles of Tonal and Segmental Information in Mandarin Spoken Word Recognition: An Eyetracking Study

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2010-01-01

    We used eyetracking to examine how tonal versus segmental information influence spoken word recognition in Mandarin Chinese. Participants heard an auditory word and were required to identify its corresponding picture from an array that included the target item ("chuang2" "bed"), a phonological competitor (segmental: chuang1 "window"; cohort:…

  17. Intentional and Reactive Inhibition during Spoken-Word Stroop Task Performance in People with Aphasia

    ERIC Educational Resources Information Center

    Pompon, Rebecca Hunting; McNeil, Malcolm R.; Spencer, Kristie A.; Kendall, Diane L.

    2015-01-01

    Purpose: The integrity of selective attention in people with aphasia (PWA) is currently unknown. Selective attention is essential for everyday communication, and inhibition is an important part of selective attention. This study explored components of inhibition--both intentional and reactive inhibition--during spoken-word production in PWA and in…

  18. Parental Reports of Spoken Language Skills in Children with Down Syndrome.

    ERIC Educational Resources Information Center

    Berglund, Eva; Eriksson, Marten; Johansson, Irene

    2001-01-01

    Spoken language in 330 children with Down syndrome (ages 1-5) and 336 normally developing children (ages 1-2) was compared. Growth trends, individual variation, sex differences, and performance on vocabulary, pragmatic, and grammar scales as well as maximum length of utterance were explored. Three- and four-year-old Down syndrome children…

  19. Infant perceptual development for faces and spoken words: an integrated approach.

    PubMed

    Watson, Tamara L; Robbins, Rachel A; Best, Catherine T

    2014-11-01

    There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626

  20. Research on Spoken Language Processing. Progress Report No. 21 (1996-1997).

    ERIC Educational Resources Information Center

    Pisoni, David B.

    This 21st annual progress report summarizes research activities on speech perception and spoken language processing carried out in the Speech Research Laboratory, Department of Psychology, Indiana University in Bloomington. As with previous reports, the goal is to summarize accomplishments during 1996 and 1997 and make them readily available. Some…

  1. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    ERIC Educational Resources Information Center

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  2. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    ERIC Educational Resources Information Center

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  3. Authentic ESL Spoken Materials: Soap Opera and Sitcom versus Natural Conversation

    ERIC Educational Resources Information Center

    Al-Surmi, Mansoor Ali

    2012-01-01

    TV shows, especially soap operas and sitcoms, are usually considered by ESL practitioners as a source of authentic spoken conversational materials presumably because they reflect the linguistic features of natural conversation. However, practitioners might be faced with the dilemma of how to evaluate whether such conversational materials reflect…

  4. The Interface between Morphology and Phonology: Exploring a Morpho-Phonological Deficit in Spoken Production

    ERIC Educational Resources Information Center

    Cohen-Goldberg, Ariel M.; Cholin, Joana; Miozzo, Michele; Rapp, Brenda

    2013-01-01

    Morphological and phonological processes are tightly interrelated in spoken production. During processing, morphological processes must combine the phonological content of individual morphemes to produce a phonological representation that is suitable for driving phonological processing. Further, morpheme assembly frequently causes changes in a…

  5. Individual Differences in Inhibitory Control Relate to Bilingual Spoken Word Processing

    ERIC Educational Resources Information Center

    Mercier, Julie; Pivneva, Irina; Titone, Debra

    2014-01-01

    We investigated whether individual differences in inhibitory control relate to bilingual spoken word recognition. While their eye movements were monitored, native English and native French English-French bilinguals listened to English words (e.g., "field") and looked at pictures corresponding to the target, a within-language competitor…

  6. Word Order in Spoken German: Syntactic Right-Expansions as an Interactionally Constructed Phenomenon

    ERIC Educational Resources Information Center

    Schoenfeldt, Juliane

    2009-01-01

    In real time interaction, the ordering of words is one of the resources participants-to-talk rely on in the negotiation of shared meaning. This dissertation investigates the emergence of syntactic right-expansions in spoken German as a systematic resource in the organization of talk-in-interaction. Employing the methodology of conversation…

  7. Beyond Rhyme or Reason: ERPs Reveal Task-Specific Activation of Orthography on Spoken Language

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Perre, Laetitia; Ziegler, Johannes C.

    2011-01-01

    Metaphonological tasks, such as rhyme judgment, have been the primary tool for the investigation of the effects of orthographic knowledge on spoken language. However, it has been recently argued that the orthography effect in rhyme judgment does not reflect the automatic activation of orthographic codes but rather stems from sophisticated response…

  8. The Effect of the Temporal Structure of Spoken Words on Paired-Associate Learning

    ERIC Educational Resources Information Center

    Creel, Sarah C.; Dahan, Delphine

    2010-01-01

    In a series of experiments, participants learned to associate black-and-white shapes with nonsense spoken labels (e.g., "joop"). When tested on their recognition memory, participants falsely recognized as correct a shape paired with a label that began with the same sounds as the shape's original label (onset-overlapping lure; e.g., "joob") more…

  9. Developing and Testing EVALOE: A Tool for Assessing Spoken Language Teaching and Learning in the Classroom

    ERIC Educational Resources Information Center

    Gràcia, Marta; Vega, Fàtima; Galván-Bovaira, Maria José

    2015-01-01

    Broadly speaking, the teaching of spoken language in Spanish schools has not been approached in a systematic way. Changes in school practices are needed in order to allow all children to become competent speakers and to understand and construct oral texts that are appropriate in different contexts and for different audiences both inside and…

  10. Assessing spoken word recognition in children who are deaf or hard of hearing: A translational approach

    PubMed Central

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S.; Young, Nancy

    2013-01-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this paper, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate “real-world” stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening ability and to predict benefit from sensory aid use in children with varying degrees of hearing loss. PMID:22668766

  11. Factors influencing spoken language outcomes in children following early cochlear implantation.

    PubMed

    Geers, Ann E

    2006-01-01

    Development of spoken language is an objective of virtually all English-based educational programs for children who are deaf or hard of hearing. The primary goal of pediatric cochlear implantation is to provide critical speech information to the child's auditory system and brain to maximize the chances of developing spoken language. Cochlear implants have the potential to accomplish for profoundly deaf children what the electronic hearing aid made possible for hard of hearing children more than 50 years ago. Though the cochlear implant does not allow for hearing of the same quality as that experienced by persons without a hearing loss, it nonetheless has revolutionized the experience of spoken language acquisition for deaf children. However, the variability in performance remains quite high, with limited explanation as to the reasons for good and poor outcomes. Evaluating the success of cochlear implantation requires careful consideration of intervening variables, the characteristics of which are changing with advances in technology and clinical practice. Improvement in speech coding strategies, implantation at younger ages and in children with greater preimplant residual hearing, and rehabilitation focused on speech and auditory skill development are leading to a larger proportion of children approaching spoken language levels of hearing age-mates. PMID:16891836

  12. Syntactic Priming Persists while the Lexical Boost Decays: Evidence from Written and Spoken Dialogue

    ERIC Educational Resources Information Center

    Hartsuiker, Robert J.; Bernolet, Sarah; Schoonbaert, Sofie; Speybroeck, Sara; Vanderelst, Dieter

    2008-01-01

    Four experiments in written and spoken dialogue tested the predictions of two distinct accounts of syntactic encoding in sentence production: a lexicalist, residual activation account and an implicit-learning account. Experiments 1 and 2 showed syntactic priming (i.e., the tendency to reuse the syntactic structure of a prime sentence in the…

  13. The Contribution of the Inferior Parietal Cortex to Spoken Language Production

    ERIC Educational Resources Information Center

    Geranmayeh, Fatemeh; Brownsett, Sonia L. E.; Leech, Robert; Beckmann, Christian F.; Woodhead, Zoe; Wise, Richard J. S.

    2012-01-01

    This functional MRI study investigated the involvement of the left inferior parietal cortex (IPC) in spoken language production (Speech). Its role has been apparent in some studies but not others, and is not convincingly supported by clinical studies as they rarely include cases with lesions confined to the parietal lobe. We compared Speech with…

  14. Effects of Negotiated Interaction on Mongolian-Nationality EFL Learners' Spoken Output

    ERIC Educational Resources Information Center

    Li, Xueping

    2012-01-01

    The present study examines the effect of negotiated interaction on Mongolian-nationality EFL learners' spoken production, focusing on the teacher-learner interaction in a story-telling task. The study supports the hypothesis that interaction plays a facilitating role in language development for learners. Quantitative analysis shows that Mongolian…

  15. Interference Effects on the Recall of Pictures, Printed Words, and Spoken Words.

    ERIC Educational Resources Information Center

    Burton, John K.; Bruning, Roger H.

    1982-01-01

    Nouns were presented in triads as pictures, printed words, or spoken words and followed by various types of interference. Measures of short- and long-term memory were obtained. In short-term memory, pictorial superiority occurred with acoustic, and visual and acoustic, but not visual interference. Long-term memory showed superior recall for…

  16. Why Dose Frequency Affects Spoken Vocabulary in Preschoolers with Down Syndrome

    ERIC Educational Resources Information Center

    Yoder, Paul J.; Woynaroski, Tiffany; Fey, Marc E.; Warren, Steven F.; Gardner, Elizabeth

    2015-01-01

    In an earlier randomized clinical trial, daily communication and language therapy resulted in more favorable spoken vocabulary outcomes than weekly therapy sessions in a subgroup of initially nonverbal preschoolers with intellectual disabilities that included only children with Down syndrome (DS). In this reanalysis of the dataset involving only…

  17. Discourse Markers and Spoken English: Native and Learner Use in Pedagogic Settings

    ERIC Educational Resources Information Center

    Fung, Loretta; Carter, Ronald

    2007-01-01

    This study examines and compares the production of discourse markers by native speakers and learners of English based on a pedagogic sub-corpus from CANCODE, a corpus of spoken British English, and a corpus of interactive classroom discourse of secondary pupils in Hong Kong. The results indicate that in both groups discourse markers serve as…

  18. Information Density in the Development of Spoken and Written Narratives in English and Hebrew

    ERIC Educational Resources Information Center

    David, Dorit; Berman, Ruth A.

    2006-01-01

    This study compares what we term information density in spoken versus written discourse by distinguishing between 2 broad classes of material in narrative texts: narrative information as conveyed through three types of propositional content--events, descriptions, and interpretations (Berman, 1997)--and ancillary information as conveyed by…

  19. AI-Based Chatterbots and Spoken English Teaching: A Critical Analysis

    ERIC Educational Resources Information Center

    Sha, Guoquan

    2009-01-01

    The aim of various approaches implemented, whether the classical "three Ps" (presentation, practice, and production) or communicative language teaching (CLT), is to achieve communicative competence. Although a lot of software developed for teaching spoken English is dressed up to raise interaction, its methodology is largely rooted in tradition.…

  20. Are Phonological Representations of Printed and Spoken Language Isomorphic? Evidence from the Restrictions on Unattested Onsets

    ERIC Educational Resources Information Center

    Berent, Iris

    2008-01-01

    Are the phonological representations of printed and spoken words isomorphic? This question is addressed by investigating the restrictions on onsets. Cross-linguistic research suggests that onsets of rising sonority are preferred to sonority plateaus, which, in turn, are preferred to sonority falls (e.g., bnif, bdif, lbif). Of interest is whether…

  1. Characteristics of the Transition to Spoken Words in Two Young Cochlear Implant Recipients

    PubMed Central

    Ertmer, David J.; Inniger, Kelli J.

    2009-01-01

    Purpose This investigation addressed two main questions: (1) How do toddlers' spoken utterances change during the first year of Cochlear Implant (CI) use? (2) How do the time-courses for reaching spoken word milestones after implant activation compare with those reported for typically developing children? These questions were explored to increase understanding of early semantic development in children who receive cochlear implants before their second birthdays. Methods Monthly recordings of mother-child interactions were gathered during the first year of CI use by a boy and a girl whose cochlear implants were activated at 11 and 21 months of age, respectively. Child utterances were classified as non-words, pre-words, single words, or word combinations, and the percentages of these utterance types were calculated for each month. Data were compared to published findings for typically developing children for the number of months of robust hearing (i.e., auditory access to conversational speech) needed to reach spoken word milestones, and the chronological ages at which milestones were achieved. Results The main findings were that the percentages of non-words and pre-words decreased as single words and word combinations increased; both children achieved most spoken word milestones with fewer months of robust hearing experience than reported for typically developing children; and the youngest recipient achieved more milestones within typical age ranges than the child implanted later in life. Conclusions The children's expeditious gains in spoken word development appeared to be facilitated by interactions among their pre-implant hearing experiences; their relatively advanced physical, cognitive, and social maturity; participation in intervention programs; and the introduction of robust hearing within the Utterance Acquisition phase of language development as proposed in the Neurolinguistic theory (Locke, 1997). PMID:19717658
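    The monthly utterance-type percentages described above reduce to a simple proportion calculation over coded utterances. The sketch below is illustrative only; the labels mirror the abstract's four categories, but the counts are invented, not data from the study:

```python
from collections import Counter

def utterance_percentages(labels):
    """Percentage of each utterance type in one month's sample."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {utype: 100 * n / total for utype, n in counts.items()}

# Hypothetical month of coded utterances (not data from the study).
month = ["non-word"] * 12 + ["pre-word"] * 5 + ["single word"] * 3
print(utterance_percentages(month))
```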

  2. A real-time spoken-language system for interactive problem-solving, combining linguistic and statistical technology for improved spoken language understanding

    NASA Astrophysics Data System (ADS)

    Moore, Robert C.; Cohen, Michael H.

    1993-09-01

    Under this effort, SRI has developed spoken-language technology for interactive problem solving, featuring real-time performance for up to several thousand word vocabularies, high semantic accuracy, habitability within the domain, and robustness to many sources of variability. Although the technology is suitable for many applications, efforts to date have focused on developing an Air Travel Information System (ATIS) prototype application. SRI's ATIS system has been evaluated in four ARPA benchmark evaluations, and has consistently been at or near the top in performance. These achievements are the result of SRI's technical progress in speech recognition, natural-language processing, and speech and natural-language integration.

  3. Teaching ESL to Competencies: A Departure from a Traditional Curriculum for Adult Learners with Specific Needs. Adult Education Series #12. Refugee Education Guide.

    ERIC Educational Resources Information Center

    Center for Applied Linguistics, Washington, DC. Language and Orientation Resource Center.

    Teaching English as a second language (ESL) to competencies requires that the instructional focus be on functional competencies and life-coping skills while developing the spoken and/or written English structures necessary to perform these skills. A step-by-step approach to develop and implement a competency-based approach to ESL for adults is…

  4. Transnationalism and Rights in the Age of Empire: Spoken Word, Music, and Digital Culture in the Borderlands

    ERIC Educational Resources Information Center

    Hicks, Emily D.

    2004-01-01

    The cultural activities of the San Diego-Tijuana border region, including the performance of music and spoken word, are documented. These activities are described as emerging from rhizomatic, transnational points of contact.

  5. Preliminary findings of similarities and differences in the signed and spoken language of children with autism.

    PubMed

    Shield, Aaron

    2014-11-01

    Approximately 30% of hearing children with autism spectrum disorder (ASD) do not acquire expressive language, and those who do often show impairments related to their social deficits, using language instrumentally rather than socially, with a poor understanding of pragmatics and a tendency toward repetitive content. Linguistic abnormalities can be clinically useful as diagnostic markers of ASD and as targets for intervention. Studies have begun to document how ASD manifests in children who are deaf for whom signed languages are the primary means of communication. Though the underlying disorder is presumed to be the same in children who are deaf and children who hear, the structures of signed and spoken languages differ in key ways. This article describes similarities and differences between the signed and spoken language acquisition of children on the spectrum. Similarities include echolalia, pronoun avoidance, neologisms, and the existence of minimally verbal children. Possible areas of divergence include pronoun reversal, palm reversal, and facial grammar. PMID:25321855

  6. The role of the syllable in the segmentation of Cairene spoken Arabic.

    PubMed

    Aquil, Rajaa

    2012-04-01

    The syllable as a perceptual unit has been investigated cross-linguistically. In Cairene Arabic, syllables fall into three categories: light (CV), heavy (CVC/CVV), and superheavy (CVCC/CVVC). However, heavy syllables in Cairene Arabic have varied weight depending on their position in a word, whether internal or final. The present paper investigates the role of the syllable in the segmentation of Cairene Arabic. It reports a psycholinguistic syllable-monitoring study conducted on 32 Egyptian Arabic native speakers to examine the perceptual role of the syllable in spoken connected language. Theoretical phonological studies have identified Cairene Arabic as a stress-timed language; however, psycholinguistic studies providing evidence for this theoretical finding are scarce. The present study, a cross-modal (visual and auditory) counterbalanced design, gives evidence for the role of the (CVC) syllable in the segmentation of Cairene spoken language. PMID:22072294
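    The three syllable categories the abstract names can be captured as a lookup over CV skeletons. This is a minimal sketch of that classification only; it assumes skeleton strings as input and does not syllabify real Arabic transcriptions:

```python
def syllable_weight(skeleton):
    """Classify a syllable by its CV skeleton (C = consonant slot,
    V = vowel slot), per the light/heavy/superheavy split above."""
    weights = {
        "CV": "light",
        "CVC": "heavy",
        "CVV": "heavy",
        "CVCC": "superheavy",
        "CVVC": "superheavy",
    }
    return weights.get(skeleton, "unknown")

for s in ["CV", "CVC", "CVV", "CVCC", "CVVC"]:
    print(s, syllable_weight(s))
```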

  7. Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans

    NASA Astrophysics Data System (ADS)

    Pei, Xiaomei; Barbour, Dennis L.; Leuthardt, Eric C.; Schalk, Gerwin

    2011-08-01

    Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility to 'read the mind', i.e. to interpret internally-generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain (electrocorticography) to discriminate the vowels and consonants embedded in spoken and in imagined words, and we defined the cortical areas that held the most information about discrimination of vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.

  8. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions.

    PubMed

    Brouwer, Susanne; Bradlow, Ann R

    2016-10-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. candle), an onset competitor (e.g. candy), a rhyme competitor (e.g. sandal), and an unrelated distractor (e.g. lemon). Target words were presented in quiet, mixed with broadband noise, or mixed with background speech. Results showed that lexical competition changes throughout the observation window as a function of what is presented in the background. These findings suggest that, rather than being strictly sequential, stream segregation and lexical competition interact during spoken word recognition. PMID:26420754

  9. The power of the spoken word: sociolinguistic cues influence the misinformation effect.

    PubMed

    Vornik, Lana A; Sharman, Stefanie J; Garry, Maryanne

    2003-01-01

    We investigated whether the sociolinguistic information delivered by spoken, accented postevent narratives would influence the misinformation effect. New Zealand subjects listened to misleading postevent information spoken in either a New Zealand (NZ) or North American (NA) accent. Consistent with earlier research, we found that NA accents were seen as more powerful and more socially attractive. We found that accents per se had no influence on the misinformation effect but sociolinguistic factors did: both power and social attractiveness affected subjects' susceptibility to misleading postevent suggestions. When subjects rated the speaker highly on power, social attractiveness did not matter; they were equally misled. However, when subjects rated the speaker low on power, social attractiveness did matter: subjects who rated the speaker high on social attractiveness were more misled than subjects who rated it lower. There were similar effects for confidence. These results have implications for our understanding of social influences on the misinformation effect. PMID:12653492

  10. Targeted Help for Spoken Dialogue Systems: Intelligent Feedback Improves Naive Users' Performance

    NASA Technical Reports Server (NTRS)

    Hockey, Beth Ann; Lemon, Oliver; Campana, Ellen; Hiatt, Laura; Aist, Gregory; Hieronymous, Jim; Gruenstein, Alexander; Dowding, John

    2003-01-01

    We present experimental evidence that providing naive users of a spoken dialogue system with immediate help messages related to their out-of-coverage utterances improves their success in using the system. A grammar-based recognizer and a Statistical Language Model (SLM) recognizer are run simultaneously. If the grammar-based recognizer succeeds, the less accurate SLM recognizer hypothesis is not used. When the grammar-based recognizer fails and the SLM recognizer produces a recognition hypothesis, this result is used by the Targeted Help agent to give the user feedback on what was recognized, a diagnosis of what was problematic about the utterance, and a related in-coverage example. The in-coverage example is intended to encourage alignment between user inputs and the language model of the system. We report on controlled experiments on a spoken dialogue system for command and control of a simulated robotic helicopter.

  11. Physiologically Persistent Corpora lutea in Eurasian Lynx (Lynx lynx) – Longitudinal Ultrasound and Endocrine Examinations Intra-Vitam

    PubMed Central

    Painer, Johanna; Jewgenow, Katarina; Dehnhard, Martin; Arnemo, Jon M.; Linnell, John D. C.; Odden, John; Hildebrandt, Thomas B.; Goeritz, Frank

    2014-01-01

    Felids generally follow a poly-estrous reproductive strategy. Eurasian lynx (Lynx lynx) display a different pattern of reproductive cyclicity where physiologically persistent corpora lutea (CLs) induce a mono-estrous condition which results in highly seasonal reproduction. The present study was based on a sono-morphological and endocrine study of captive Eurasian lynx, together with a control study of free-ranging lynx. We verified that CLs persist after pregnancy and pseudo-pregnancy for at least a two-year period. We showed that lynx are able to enter estrus in the following year, while CLs from the previous years persisted in structure and only temporarily reduced their function for the period of estrus onset or birth, which is unique among felids. The almost constant luteal progesterone secretion (average of 5 ng/ml serum) seems to prevent folliculogenesis outside the breeding season and has converted a poly-estrous general felid cycle into a mono-estrous cycle specific for lynx. The hormonal regulation mechanism which causes lynx to have the longest CL lifespan amongst mammals remains unclear. The described non-felid-like ovarian physiology appears to be a remarkably non-plastic system. The lynx's reproductive ability to adapt to environmental and anthropogenic changes needs further investigation. PMID:24599348

  12. The influence of trilostane on steroid hormone metabolism in canine adrenal glands and corpora lutea-an in vitro study.

    PubMed

    Ouschan, C; Lepschy, M; Zeugswetter, F; Möstl, E

    2012-03-01

    Trilostane is widely used to treat hyperadrenocorticism in dogs. Trilostane competitively inhibits the enzyme 3-beta hydroxysteroid dehydrogenase (3β-HSD), which converts pregnenolone (P5) to progesterone (P4) and dehydroepiandrosterone (DHEA) to androstenedione (A4). Although trilostane is frequently used in dogs, the molecular mechanism underlying its effect on canine steroid hormone biosynthesis is still an enigma. Multiple enzymes of 3β-HSD have been found in humans, rats and mice, and their presence might explain the contradictory results of studies on the effectiveness of trilostane. We therefore investigated the influence of trilostane on steroid hormone metabolism in dogs by means of an in vitro model. Canine adrenal glands from freshly euthanized dogs and corpora lutea (CL) were incubated with increasing doses of trilostane. Tritiated P5 or DHEA were used as substrates. The resulting radioactive metabolites were extracted, separated by thin layer chromatography and visualized by autoradiography. A wide variety of radioactive metabolites were formed in the adrenal glands and in the CL, indicating high metabolic activity in both tissues. In the adrenal cortex, trilostane influenced P5 metabolism in a dose- and time-dependent manner, while DHEA metabolism and the metabolism of both hormones in the CL were unaffected. The results indicate for the first time that there might be more than one enzyme of 3β-HSD present in dogs and that trilostane selectively inhibits P5 conversion to P4 only in the adrenal gland. PMID:22113849

  13. Overlapping Networks Engaged during Spoken Language Production and Its Cognitive Control

    PubMed Central

    Wise, Richard J.S.; Mehta, Amrish; Leech, Robert

    2014-01-01

    Spoken language production is a complex brain function that relies on large-scale networks. These include domain-specific networks that mediate language-specific processes, as well as domain-general networks mediating top-down and bottom-up attentional control. Language control is thought to involve a left-lateralized fronto-temporal-parietal (FTP) system. However, these regions do not always activate for language tasks and similar regions have been implicated in nonlinguistic cognitive processes. These inconsistent findings suggest that either the left FTP is involved in multidomain cognitive control or that there are multiple spatially overlapping FTP systems. We present evidence from an fMRI study using multivariate analysis to identify spatiotemporal networks involved in spoken language production in humans. We compared spoken language production (Speech) with multiple baselines, counting (Count), nonverbal decision (Decision), and “rest,” to pull apart the multiple partially overlapping networks that are involved in speech production. A left-lateralized FTP network was activated during Speech and deactivated during Count and nonverbal Decision trials, implicating it in cognitive control specific to sentential spoken language production. A mirror right-lateralized FTP network was activated in the Count and Decision trials, but not Speech. Importantly, a second overlapping left FTP network showed relative deactivation in Speech. These three networks, with distinct time courses, overlapped in the left parietal lobe. Contrary to the standard model of the left FTP as being dominant for speech, we revealed a more complex pattern within the left FTP, including at least two left FTP networks with competing functional roles, only one of which was activated in speech production. PMID:24966373

  14. The influence of amplitude envelope information on resolving lexically ambiguous spoken words.

    PubMed

    Szostak, Christine M; Pitt, Mark A

    2014-10-01

    Prior studies exploring the contribution of amplitude envelope information to spoken word recognition are mixed with regard to the question of whether amplitude envelope alone, without spectral detail, can aid isolated word recognition. Three experiments show that the amplitude envelope will aid word identification only if two conditions are met: (1) It is not the only information available to the listener and (2) lexical ambiguity is not present. Implications for lexical processing are discussed. PMID:25324106

  15. Demonstration of a Spoken Dialogue Interface for Planning Activities of a Semi-autonomous Robot

    NASA Technical Reports Server (NTRS)

    Dowding, John; Frank, Jeremy; Hockey, Beth Ann; Jonsson, Ari; Aist, Gregory

    2002-01-01

    Planning and scheduling in the face of uncertainty and change pushes the capabilities of both planning and dialogue technologies by requiring complex negotiation to arrive at a workable plan. Planning for use of semi-autonomous robots involves negotiation among multiple participants with competing scientific and engineering goals to co-construct a complex plan. In NASA applications, this plan construction is done under severe time pressure, so having a dialogue interface to the plan construction tools can aid rapid completion of the process. However, this will put significant demands on spoken dialogue technology, particularly in the areas of dialogue management and generation. The dialogue interface will need to be able to handle the complex dialogue strategies that occur in negotiation dialogues, including hypotheticals and revisions, and the generation component will require an ability to summarize complex plans. This demonstration describes work in progress toward building a spoken dialogue interface to the EUROPA planner for the purposes of planning and scheduling the activities of a semi-autonomous robot. A prototype interface has been built for planning the schedule of the Personal Satellite Assistant (PSA), a mobile robot designed for micro-gravity environments that is intended for use on the Space Shuttle and International Space Station. The spoken dialogue interface gives the user the capability to ask for a description of the plan, ask specific questions about the plan, and update or modify the plan. We anticipate that a spoken dialogue interface to the planner will provide a natural augmentation or alternative to the visualization interface, in situations in which the user needs very targeted information about the plan, in situations where natural language can express complex ideas more concisely than GUI actions, or in situations in which a graphical user interface is not appropriate.

  16. Understanding the Relationship between Latino Students' Preferred Learning Styles and Their Language Spoken at Home

    ERIC Educational Resources Information Center

    Maldonado Torres, Sonia Enid

    2016-01-01

    The purpose of this study was to explore the relationships between Latino students' learning styles and their language spoken at home. Results of the study indicated that students who spoke Spanish at home had higher means in the Active Experimentation modality of learning (M = 31.38, SD = 5.70) than students who spoke English (M = 28.08,…

  17. Retinoic Acid Signaling: A New Piece in the Spoken Language Puzzle

    PubMed Central

    van Rhijn, Jon-Ruben; Vernes, Sonja C.

    2015-01-01

    Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry including cortico-striato-thalamic loops that control speech–motor output. Understanding the neuro-genetic mechanisms involved in the correct development and function of these pathways will shed light on how humans can effortlessly and innately use spoken language and help to elucidate what goes wrong in speech-language disorders. FOXP2 was the first single gene identified to cause speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that includes receptive and expressive language impairments. The neuro-molecular mechanisms controlled by FOXP2 will give insight into our capacity for speech–motor control, but are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid (RA) signaling and to modify the cellular response to RA, a key regulator of brain development. Here we explore evidence that FOXP2 and RA function in overlapping pathways. We summate evidence at molecular, cellular, and behavioral levels that suggest an interplay between FOXP2 and RA that may be important for fine motor control and speech–motor output. We propose RA signaling is an exciting new angle from which to investigate how neuro-genetic mechanisms can contribute to the (spoken) language ready brain. PMID:26635706

  18. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    PubMed Central

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2011-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an ERP norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a typical N400 effect when participants heard critical associated and unassociated target words in word pairs. In a subsequent experiment, we presented the same word pairs in spoken discourse contexts. Target words were always consistent with the local sentence context, but were congruent or not with the global discourse (e.g., “Luckily Ben had picked up some salt and pepper/basil”, preceded by a context in which Ben was preparing marinara sauce (congruent) or dealing with an icy walkway (incongruent)). ERP effects of global discourse congruence preceded those of local lexical association, suggesting an early influence of the global discourse representation on lexical processing, even in locally congruent contexts. Furthermore, effects of lexical association occurred earlier in the congruent than incongruent condition. These results differ from those that have been obtained in studies of reading, suggesting that the effects may be unique to spoken word recognition. PMID:23002319

  19. The socially weighted encoding of spoken words: a dual-route approach to speech perception

    PubMed Central

    Sumner, Meghan; Kim, Seung Kyung; King, Ed; McGowan, Kevin B.

    2014-01-01

    Spoken words are highly variable. A single word may never be uttered the same way twice. As listeners, we regularly encounter speakers of different ages, genders, and accents, increasing the amount of variation we face. How listeners understand spoken words as quickly and adeptly as they do despite this variation remains an issue central to linguistic theory. We propose that learned acoustic patterns are mapped simultaneously to linguistic representations and to social representations. In doing so, we illuminate a paradox that results in the literature from, we argue, the focus on representations and the peripheral treatment of word-level phonetic variation. We consider phonetic variation more fully and highlight a growing body of work that is problematic for current theory: words with different pronunciation variants are recognized equally well in immediate processing tasks, while an atypical, infrequent, but socially idealized form is remembered better in the long-term. We suggest that the perception of spoken words is socially weighted, resulting in sparse, but high-resolution clusters of socially idealized episodes that are robust in immediate processing and are more strongly encoded, predicting memory inequality. Our proposal includes a dual-route approach to speech perception in which listeners map acoustic patterns in speech to linguistic and social representations in tandem. This approach makes novel predictions about the extraction of information from the speech signal, and provides a framework with which we can ask new questions. We propose that language comprehension, broadly, results from the integration of both linguistic and social information. PMID:24550851

  20. The Influence of the Phonological Neighborhood Clustering-Coefficient on Spoken Word Recognition

    PubMed Central

    Chan, Kit Ying; Vitevitch, Michael S.

    2009-01-01

    Clustering coefficient—a measure derived from the new science of networks—refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words bat, hat, and can, all of which are neighbors of the word cat; the words bat and hat are also neighbors of each other. In a perceptual identification task, words with a low clustering coefficient (i.e., few neighbors are neighbors of each other) were more accurately identified than words with a high clustering coefficient (i.e., many neighbors are neighbors of each other). In a lexical decision task, words with a low clustering coefficient were responded to more quickly than words with a high clustering coefficient. These findings suggest that the structure of the lexicon, that is the similarity relationships among neighbors of the target word measured by clustering coefficient, influences lexical access in spoken word recognition. Simulations of the TRACE and Shortlist models of spoken word recognition failed to account for the present findings. A framework for a new model of spoken word recognition is proposed. PMID:19968444
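    The clustering-coefficient measure described above is straightforward to compute for a toy lexicon. The sketch below uses orthographic edit distance 1 as a stand-in for phonological neighborhood (actual studies use phonemic transcriptions), with the abstract's own example words:

```python
from itertools import combinations

def is_neighbor(w1, w2):
    """Edit-distance-1 neighbors: one substitution, insertion,
    or deletion apart."""
    if w1 == w2:
        return False
    a, b = sorted((w1, w2), key=len)
    if len(b) - len(a) > 1:
        return False
    if len(a) == len(b):  # exactly one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    # one deletion from the longer word yields the shorter one
    return any(b[:i] + b[i + 1:] == a for i in range(len(b)))

def clustering_coefficient(target, lexicon):
    """Proportion of the target's neighbor pairs that are also
    neighbors of each other."""
    neighbors = [w for w in lexicon if is_neighbor(target, w)]
    if len(neighbors) < 2:
        return 0.0
    pairs = list(combinations(neighbors, 2))
    return sum(is_neighbor(u, v) for u, v in pairs) / len(pairs)

# Toy lexicon echoing the abstract's example words.
lexicon = ["cat", "bat", "hat", "can", "cot"]
print(clustering_coefficient("cat", lexicon))
```

Here "cat" has neighbors bat, hat, can, and cot, but of the six neighbor pairs only bat-hat are neighbors of each other, so the coefficient is low.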

  1. I Feel You: The Design and Evaluation of a Domotic Affect-Sensitive Spoken Conversational Agent

    PubMed Central

    Lutfi, Syaheerah Lebai; Fernández-Martínez, Fernando; Lorenzo-Trueba, Jaime; Barra-Chicote, Roberto; Montero, Juan Manuel

    2013-01-01

    We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified in order to be adaptive, as is done in most existing dialog systems. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely, frustration and contentment, from dialog features, a non-conventional source, in an attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study, in which both versions of the agent (non-adaptive and emotionally-adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion to a spoken conversational agent, especially in mitigating users' frustrations and, ultimately, improving their satisfaction. PMID:23945740

  2. Early effects of neighborhood density and phonotactic probability of spoken words on event-related potentials.

    PubMed

    Hunter, Cynthia R

    2013-12-01

    All current models of spoken word recognition propose that sound-based representations of spoken words compete with, or inhibit, one another during recognition. In addition, certain models propose that higher probability sublexical units facilitate recognition under certain circumstances. Two experiments were conducted examining ERPs to spoken words and nonwords simultaneously varying in phonotactic probability and neighborhood density. Results showed that the amplitude of the P2 potential was greater for high probability-density words and nonwords, suggesting an early inhibitory effect of neighborhood density. In order to closely examine the role of phonotactic probability, effects of initial phoneme frequency were also examined. The latency of the P2 potential was shorter for words with high initial-consonant probability, suggesting a facilitative effect of phonotactic probability. The current results are consistent with findings from previous studies using reaction time and eye-tracking paradigms and provide new insights into the time-course of lexical and sublexical activation and competition. PMID:24129200
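    Both variables manipulated in these ERP experiments can be estimated from a word list. The sketch below is a deliberate simplification, not the study's method: it uses orthographic stand-ins for phonemes, counts density over same-length single-substitution neighbors only, and uses mean positional segment frequency as a crude proxy for phonotactic probability:

```python
# Hypothetical mini-lexicon for illustration only.
LEXICON = ["cat", "bat", "hat", "can", "cot", "mat", "map", "man"]

def density(target, lexicon):
    """Neighborhood density: words differing from the target by a
    single substitution (same length, for simplicity)."""
    return sum(
        len(w) == len(target)
        and sum(a != b for a, b in zip(w, target)) == 1
        for w in lexicon
    )

def positional_probability(target, lexicon):
    """Mean positional segment probability: how often each of the
    target's segments occurs in that position across same-length
    words, averaged over positions."""
    same_len = [w for w in lexicon if len(w) == len(target)]
    probs = [
        sum(w[i] == seg for w in same_len) / len(same_len)
        for i, seg in enumerate(target)
    ]
    return sum(probs) / len(probs)

print(density("cat", LEXICON))          # neighbors: bat, hat, can, cot, mat
print(positional_probability("cat", LEXICON))
```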

  3. Retinoic Acid Signaling: A New Piece in the Spoken Language Puzzle.

    PubMed

    van Rhijn, Jon-Ruben; Vernes, Sonja C

    2015-01-01

    Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry including cortico-striato-thalamic loops that control speech-motor output. Understanding the neuro-genetic mechanisms involved in the correct development and function of these pathways will shed light on how humans can effortlessly and innately use spoken language and help to elucidate what goes wrong in speech-language disorders. FOXP2 was the first single gene identified to cause speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that includes receptive and expressive language impairments. The neuro-molecular mechanisms controlled by FOXP2 will give insight into our capacity for speech-motor control, but are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid (RA) signaling and to modify the cellular response to RA, a key regulator of brain development. Here we explore evidence that FOXP2 and RA function in overlapping pathways. We summate evidence at molecular, cellular, and behavioral levels that suggest an interplay between FOXP2 and RA that may be important for fine motor control and speech-motor output. We propose RA signaling is an exciting new angle from which to investigate how neuro-genetic mechanisms can contribute to the (spoken) language ready brain. PMID:26635706

  4. How long-term memory and accentuation interact during spoken language comprehension.

    PubMed

    Li, Xiaoqing; Yang, Yufang

    2013-04-01

    Spoken language comprehension requires immediate integration of different information types, such as semantics, syntax, and prosody. Meanwhile, both the information derived from speech signals and the information retrieved from long-term memory exert their influence on language comprehension immediately. Using EEG (electroencephalogram), the present study investigated how the information retrieved from long-term memory interacts with accentuation during spoken language comprehension. Mini Chinese discourses were used as stimuli, with an interrogative or assertive context sentence preceding the target sentence. The target sentence included one critical word conveying new information. The critical word was either highly or weakly expected given the information retrieved from long-term memory. Moreover, the critical word was either consistently accented or inconsistently de-accented. The results revealed that for weakly expected new information, inconsistently de-accented words elicited a larger N400 and larger theta power increases (4-6 Hz) than consistently accented words. In contrast, for highly expected new information, consistently accented words elicited a larger N400 and larger alpha power decreases (8-14 Hz) than inconsistently de-accented words. The results suggest that, during spoken language comprehension, the effect of accentuation interacts immediately with the information retrieved from long-term memory. Moreover, our results also have important consequences for our understanding of the processing nature of the N400. The N400 amplitude is not only enhanced for incorrect information (new and de-accented words) but also enhanced for correct information (new and accented words). PMID:23376769

  5. I feel you: the design and evaluation of a domotic affect-sensitive spoken conversational agent.

    PubMed

    Lutfi, Syaheerah Lebai; Fernández-Martínez, Fernando; Lorenzo-Trueba, Jaime; Barra-Chicote, Roberto; Montero, Juan Manuel

    2013-01-01

    We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified in order to be adaptive, as is done in most existing dialog systems. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely of frustration and contentment, from dialog features, a non-conventional source, in an attempt to move toward a more user-centric approach. The final part reports the evaluation results obtained from a user study in which both versions of the agent (non-adaptive and emotionally adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion to a spoken conversational agent, especially in mitigating users' frustration and, ultimately, improving their satisfaction. PMID:23945740

  6. Adaptation to Pronunciation Variations in Indonesian Spoken Query-Based Information Retrieval

    NASA Astrophysics Data System (ADS)

    Lestari, Dessi Puji; Furui, Sadaoki

    Recognition errors of proper nouns and foreign words significantly decrease the performance of ASR-based speech applications such as voice dialing systems, speech summarization, spoken document retrieval, and spoken query-based information retrieval (IR). The reason is that proper nouns and words that come from other languages are usually the most important key words. The loss of such words due to misrecognition in turn leads to a loss of significant information from the speech source. This paper focuses on how to improve the performance of Indonesian ASR by alleviating the problem of pronunciation variation of proper nouns and foreign words (English words in particular). To improve the proper noun recognition accuracy, proper-noun specific acoustic models are created by supervised adaptation using maximum likelihood linear regression (MLLR). To improve English word recognition, the pronunciation of English words contained in the lexicon is fixed by using rule-based English-to-Indonesian phoneme mapping. The effectiveness of the proposed method was confirmed through spoken query-based Indonesian IR. We used Inference Network-based (IN-based) IR and compared its results with those of the classical Vector Space Model (VSM) IR, both using a tf-idf weighting schema. Experimental results show that IN-based IR outperforms VSM IR.
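    Both retrieval models compared above share a tf-idf weighting schema. As a rough illustration of that schema (a minimal sketch, not the authors' system; the toy corpus and the particular log-based idf variant are assumptions):

    ```python
    import math
    from collections import Counter

    def tfidf_vectors(docs):
        """Compute a sparse tf-idf vector per document (toy variant:
        raw term frequency times log(N / document frequency))."""
        n = len(docs)
        df = Counter()
        for doc in docs:
            df.update(set(doc))  # document frequency: one count per doc
        vectors = []
        for doc in docs:
            tf = Counter(doc)
            vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
        return vectors

    def cosine(u, v):
        """Cosine similarity between two sparse vectors (dicts)."""
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0
    ```

    In a spoken-query setting, the ASR transcript of the query would be tokenized the same way and scored against each document vector with `cosine`.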

  7. Intentional and Reactive Inhibition During Spoken-Word Stroop Task Performance in People With Aphasia

    PubMed Central

    McNeil, Malcolm R.; Spencer, Kristie A.; Kendall, Diane L.

    2015-01-01

    Purpose The integrity of selective attention in people with aphasia (PWA) is currently unknown. Selective attention is essential for everyday communication, and inhibition is an important part of selective attention. This study explored components of inhibition—both intentional and reactive inhibition—during spoken-word production in PWA and in controls who were neurologically healthy (HC). Intentional inhibition is the ability to suppress a response to interference, and reactive inhibition is the delayed reactivation of a previously suppressed item. Method Nineteen PWA and 20 age- and education-matched HC participated in a Stroop spoken-word production task. This task allowed the examination of intentional and reactive inhibition by evoking and comparing interference, facilitation, and negative priming effects in different contexts. Results Although both groups demonstrated intentional inhibition, PWA demonstrated significantly more interference effects. PWA demonstrated no significant facilitation effects. HC demonstrated significant reverse facilitation effects. Neither group showed significant evidence of reactive inhibition, though both groups showed similar individual variability. Conclusions These results underscore the challenge interference presents for PWA during spoken-word production, indicating diminished intentional inhibition. Although reactive inhibition was not different between PWA and HC, PWA showed difficulty integrating and adapting to contextual information during language tasks. PMID:25674773

  8. Spoken word recognition by Latino children learning Spanish as their first language*

    PubMed Central

    HURTADO, NEREYDA; MARCHMAN, VIRGINIA A.; FERNALD, ANNE

    2010-01-01

    Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157

  9. Semantic Richness Effects in Spoken Word Recognition: A Lexical Decision and Semantic Categorization Megastudy.

    PubMed

    Goh, Winston D; Yap, Melvin J; Lau, Mabel C; Ng, Melvin M R; Tan, Luuan-Chin

    2016-01-01

    A large number of studies have demonstrated that semantic richness dimensions (e.g., number of features, semantic neighborhood density, semantic diversity, concreteness, emotional valence) influence word recognition processes. Some of these richness effects appear to be task-general, while others have been found to vary across tasks. Importantly, almost all of these findings come from the visual word recognition literature. To address this gap, we examined the extent to which these semantic richness effects are also found in spoken word recognition, using a megastudy approach that allows for an examination of the relative contribution of the various semantic properties to performance in two tasks: lexical decision and semantic categorization. The results show that concreteness, valence, and number of features accounted for unique variance in latencies across both tasks in a similar direction (faster responses for spoken words that were concrete, emotionally valenced, and high in number of features), while arousal, semantic neighborhood density, and semantic diversity did not influence latencies. Implications for spoken word recognition processes are discussed. PMID:27445936

  10. Orthographic Activation in L2 Spoken Word Recognition Depends on Proficiency: Evidence from Eye-Tracking

    PubMed Central

    Veivo, Outi; Järvikivi, Juhani; Porretta, Vincent; Hyönä, Jukka

    2016-01-01

    The use of orthographic and phonological information in spoken word recognition was studied in a visual world task in which L1 Finnish learners of L2 French (n = 64) and L1 French native speakers (n = 24) were asked to match spoken word forms with printed words while their eye movements were recorded. In Experiment 1, French target words were contrasted with competitors having either a longer or a shorter word-initial phonological overlap and an identical orthographic overlap. In Experiment 2, target words were contrasted with competitors having either a longer or a shorter word-initial orthographic overlap and an identical phonological overlap. A general phonological effect was observed in the L2 listener group but not in the L1 control group. No general orthographic effects were observed in either group, but a significant effect of proficiency on orthographic overlap emerged over time: higher-proficiency L2 listeners also used orthographic information in the matching task in a time window from 400 to 700 ms, whereas no such effect was observed for lower-proficiency listeners. These results suggest that the activation of orthographic information in L2 spoken word recognition depends on proficiency in L2. PMID:27512381

  11. The time course of spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    PubMed

    Huang, Xianjun; Yang, Jin-Chen; Zhang, Qin; Guo, Chunyan

    2014-10-01

    In the present study, two experiments were carried out to investigate the time course of spoken word recognition in Mandarin Chinese using both event-related potentials (ERPs) and behavioral measures. To address the hypothesis that there is an early phonological processing stage independent of semantics during spoken word recognition, a unimodal word-matching paradigm was employed, in which both prime and target words were presented auditorily. Experiment 1 manipulated the phonological relations between disyllabic primes and targets, and found an enhanced P2 (200-270 ms post-target onset) as well as a smaller early N400 to word-initial phonological mismatches over fronto-central scalp sites. Experiment 2 manipulated both phonological and semantic relations between monosyllabic primes and targets, and replicated the phonological mismatch-associated P2, which was not modulated by semantic relations. Overall, these results suggest that P2 is a sensitive electrophysiological index of early phonological processing independent of semantics in Mandarin Chinese spoken word recognition. PMID:25172388

  12. Semantic Richness Effects in Spoken Word Recognition: A Lexical Decision and Semantic Categorization Megastudy

    PubMed Central

    Goh, Winston D.; Yap, Melvin J.; Lau, Mabel C.; Ng, Melvin M. R.; Tan, Luuan-Chin

    2016-01-01

    A large number of studies have demonstrated that semantic richness dimensions (e.g., number of features, semantic neighborhood density, semantic diversity, concreteness, emotional valence) influence word recognition processes. Some of these richness effects appear to be task-general, while others have been found to vary across tasks. Importantly, almost all of these findings come from the visual word recognition literature. To address this gap, we examined the extent to which these semantic richness effects are also found in spoken word recognition, using a megastudy approach that allows for an examination of the relative contribution of the various semantic properties to performance in two tasks: lexical decision and semantic categorization. The results show that concreteness, valence, and number of features accounted for unique variance in latencies across both tasks in a similar direction (faster responses for spoken words that were concrete, emotionally valenced, and high in number of features), while arousal, semantic neighborhood density, and semantic diversity did not influence latencies. Implications for spoken word recognition processes are discussed. PMID:27445936

  13. Adult Speakers' Tongue-Palate Contact Patterns for Bilabial Stops within Complex Clusters

    ERIC Educational Resources Information Center

    Zharkova, Natalia; Schaeffler, Sonja; Gibbon, Fiona E.

    2009-01-01

    Previous studies using Electropalatography (EPG) have shown that individuals with speech disorders sometimes produce articulation errors that affect bilabial targets, but currently there is limited normative data available. In this study, EPG and acoustic data were recorded during complex word-final /sps/ clusters spoken by 20 normal adults. A total…

  14. Discrimination Skills Predict Effective Preference Assessment Methods for Adults with Developmental Disabilities

    ERIC Educational Resources Information Center

    Lee, May S. H.; Nguyen, Duong; Yu, C. T.; Thorsteinsson, Jennifer R.; Martin, Toby L.; Martin, Garry L.

    2008-01-01

    We examined the relationship between three discrimination skills (visual, visual matching-to-sample, and auditory-visual) and four stimulus modalities (object, picture, spoken, and video) in assessing preferences of leisure activities for 7 adults with developmental disabilities. Three discrimination skills were measured using the Assessment of…

  15. Conducting spoken word recognition research online: Validation and a new timing method.

    PubMed

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research. PMID:25987305
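    Among the predictors validated above, phonological neighborhood density is conventionally the count of lexicon words differing from a target by a single phoneme substitution, addition, or deletion. A minimal sketch over phoneme-string transcriptions (the toy lexicon is invented; real studies use phonemically transcribed corpora):

    ```python
    def one_edit_neighbors(word, lexicon):
        """Words in the lexicon reachable from `word` by one phoneme
        substitution, addition, or deletion (the target itself excluded)."""
        def edit1(a, b):
            if a == b:
                return False
            la, lb = len(a), len(b)
            if abs(la - lb) > 1:
                return False
            if la == lb:  # exactly one substitution
                return sum(x != y for x, y in zip(a, b)) == 1
            if la > lb:   # make a the shorter string
                a, b = b, a
            # b is longer by one: deleting some phoneme of b must yield a
            return any(b[:i] + b[i + 1:] == a for i in range(len(b)))
        return [w for w in lexicon if edit1(word, w)]
    ```

    For example, with a target transcribed "kat", the neighbors in a toy lexicon would include "bat" (substitution), "kats" (addition), and "at" (deletion), while "kat" itself and "dog" would be excluded.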

  16. Symbolic gestures and spoken language are processed by a common neural system.

    PubMed

    Xu, Jiang; Gannon, Patrick J; Emmorey, Karen; Smith, Jason F; Braun, Allen R

    2009-12-01

    Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects. PMID:19923436

  17. Symbolic gestures and spoken language are processed by a common neural system

    PubMed Central

    Xu, Jiang; Gannon, Patrick J.; Emmorey, Karen; Smith, Jason F.; Braun, Allen R.

    2009-01-01

    Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating “be quiet”), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects. PMID:19923436

  18. Alpha and theta brain oscillations index dissociable processes in spoken word recognition.

    PubMed

    Strauß, Antje; Kotz, Sonja A; Scharinger, Mathias; Obleser, Jonas

    2014-08-15

    Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (~3-7 Hz) and alpha-frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition. PMID:24747736

  19. Syllable frequency and word frequency effects in spoken and written word production in a non-alphabetic script

    PubMed Central

    Zhang, Qingfang; Wang, Cheng

    2014-01-01

    The effects of word frequency (WF) and syllable frequency (SF) are well-established phenomena in domains such as spoken production in alphabetic languages. Chinese, as a non-alphabetic language, presents unique lexical and phonological properties in speech production. For example, the proximate unit of phonological encoding is the syllable in Chinese but the segment in Dutch, French, or English. The present study investigated the effects of WF and SF, and their interaction, in Chinese written and spoken production. Significant facilitatory WF and SF effects were observed in spoken as well as in written production. The SF effect in writing indicates that phonological properties (i.e., syllable frequency) constrain orthographic output via a lexical route, at least in Chinese written production. However, the SF effect across repetitions diverged between the modalities: it was significant in the first two repetitions in spoken production, but only in the second repetition in written production. Given the fragility of the SF effect in writing, we suggest that the phonological influence in handwritten production is not mandatory and universal but is modulated by experimental manipulations. This provides evidence for the orthographic autonomy hypothesis rather than the phonological mediation hypothesis. The absence of an interaction between WF and SF shows that the SF effect is independent of the WF effect in both spoken and written output modalities. The implications of these results for written production models are discussed. PMID:24600420

  20. Comparative metabolism of branched-chain amino acids to precursors of juvenile hormone biogenesis in corpora allata of lepidopterous versus nonlepidopterous insects

    SciTech Connect

    Brindle, P.A.; Schooley, D.A.; Tsai, L.W.; Baker, F.C.

    1988-08-05

    Comparative studies were performed on the role of branched-chain amino acids (BCAA) in juvenile hormone (JH) biosynthesis using several lepidopterous and nonlepidopterous insects. Corpora cardiaca-corpora allata complexes (CC-CA, the corpora allata being the organ of JH biogenesis) were maintained in culture medium containing a uniformly ¹⁴C-labeled BCAA, together with [methyl-³H]methionine as mass marker for JH quantification. BCAA catabolism was quantified by directly analyzing the medium for the presence of ¹⁴C-labeled propionate and/or acetate, while JHs were extracted, purified by liquid chromatography, and subjected to double-label liquid scintillation counting. Our results indicate that active BCAA catabolism occurs within the CC-CA of lepidopterans, and this efficiently provides propionyl-CoA (from isoleucine or valine) for the biosynthesis of the ethyl branches of JH I and II. Acetyl-CoA, formed from isoleucine or leucine catabolism, is also utilized by lepidopteran CC-CA for biosynthesizing JH III and the acetate-derived portions of the ethyl-branched JHs. In contrast, CC-CA of nonlepidopterans fail to catabolize BCAA. Consequently, exogenous isoleucine or leucine does not serve as a carbon source for the biosynthesis of JH III by these glands, and no propionyl-CoA is produced for genesis of ethyl-branched JHs. This is the first observation of a tissue-specific metabolic difference which in part explains why these novel homosesquiterpenoids exist in lepidopterans, but not in nonlepidopterans.

  1. Dose-Volume Parameters of the Corpora Cavernosa Do Not Correlate With Erectile Dysfunction After External Beam Radiotherapy for Prostate Cancer: Results From a Dose-Escalation Trial

    SciTech Connect

    Wielen, Gerard J. van der; Hoogeman, Mischa S.; Dohle, Gert R.; Putten, Wim L.J. van; Incrocci, Luca

    2008-07-01

    Purpose: To analyze the correlation between dose-volume parameters of the corpora cavernosa and erectile dysfunction (ED) after external beam radiotherapy (EBRT) for prostate cancer. Methods and Materials: Between June 1997 and February 2003, a randomized dose-escalation trial comparing 68 Gy and 78 Gy was conducted. Patients at our institute were asked to participate in an additional part of the trial evaluating sexual function. After exclusion of patients with less than 2 years of follow-up, ED at baseline, or treatment with hormonal therapy, 96 patients were eligible. The proximal corpora cavernosa (crura), the superiormost 1-cm segment of the crura, and the penile bulb were contoured on the planning computed tomography scan and dose-volume parameters were calculated. Results: Two years after EBRT, 35 of the 96 patients had developed ED. No statistically significant correlations between ED 2 years after EBRT and dose-volume parameters of the crura, the superiormost 1-cm segment of the crura, or the penile bulb were found. The few patients using potency aids typically reported ED. Conclusion: No correlation was found between ED after EBRT for prostate cancer and radiation dose to the crura or penile bulb. The present study is the largest to evaluate the correlation between ED and radiation dose to the corpora cavernosa after EBRT for prostate cancer. Until there is clear evidence that sparing the penile bulb or crura will reduce ED after EBRT, we advise caution in sparing these structures, especially when this involves reducing treatment margins.

  2. Effects of word frequency, contextual diversity, and semantic distinctiveness on spoken word recognition

    PubMed Central

    Johns, Brendan T.; Gruenenfelder, Thomas M.; Pisoni, David B.; Jones, Michael N.

    2012-01-01

    The relative abilities of word frequency, contextual diversity, and semantic distinctiveness to predict accuracy of spoken word recognition in noise were compared using two data sets. Word frequency is the number of times a word appears in a corpus of text. Contextual diversity is the number of different documents in which the word appears in that corpus. Semantic distinctiveness takes into account the number of different semantic contexts in which the word appears. Semantic distinctiveness and contextual diversity were both able to explain variance above and beyond that explained by word frequency, which by itself explained little unique variance. PMID:22894319
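    The word frequency and contextual diversity measures contrasted above are straightforward corpus counts: WF totals a word's tokens across the corpus, while CD counts the documents containing it. A minimal sketch on an invented toy corpus (semantic distinctiveness additionally requires a semantic model of contexts and is omitted here):

    ```python
    from collections import Counter

    def wf_and_cd(documents):
        """Return (word_frequency, contextual_diversity) Counters.
        `documents` is a list of token lists, one per document."""
        wf = Counter()  # total token count across the corpus
        cd = Counter()  # number of distinct documents containing the word
        for doc in documents:
            wf.update(doc)
            cd.update(set(doc))  # each document contributes at most 1
        return wf, cd
    ```

    The contrast shows up immediately: a word repeated many times in a single document receives a high WF but a CD of only 1, which is exactly the dissociation the study exploits.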

  3. Lexical effects on spoken word recognition in children with normal hearing

    PubMed Central

    Krull, Vidya; Choi, Sangsook; Kirk, Karen Iler; Prusick, Lindsay; French, Brian

    2009-01-01

    This paper outlines the development of a theoretically motivated sentence recognition test for children. Previous sentence tests such as the Lexical Neighborhood Test and the Multisyllabic Lexical Neighborhood Test examined lexical effects on children's recognition of words, but in the studies underlying their development, lexical characteristics were confounded. This study examines the independent effects of word frequency and lexical density on a new test of spoken word recognition in children. Results show that word frequency and lexical density influence word recognition both independently and in combination. Lexical density appears to be more heavily weighted than word frequency in children. PMID:19701087

  4. Copula filtration of spoken language signals on the background of acoustic noise

    NASA Astrophysics Data System (ADS)

    Kolchenko, Lilia V.; Sinitsyn, Rustem B.

    2010-09-01

    This paper is devoted to the filtering of acoustic signals against a background of acoustic noise. Signal filtering is done with the help of a nonlinear analogue of the correlation function: a copula. The copula is estimated with the help of kernel estimates of the cumulative distribution function. At the second stage we suggest a new procedure of adaptive filtering. The silence and sound intervals are detected before the filtering with the help of a nonparametric algorithm. The results are confirmed by experimental processing of spoken language signals.
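    The copula-estimation step described above can be caricatured in two lines of reasoning: smooth each margin's empirical CDF with a Gaussian kernel, then form an empirical copula from the transformed samples. This is a minimal sketch, not the authors' algorithm; the bandwidth `h` and the toy data are assumptions:

    ```python
    import math

    def kernel_cdf(samples, h):
        """Gaussian-kernel-smoothed estimate of the CDF from samples."""
        n = len(samples)
        phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        return lambda x: sum(phi((x - s) / h) for s in samples) / n

    def empirical_copula(xs, ys, h):
        """Estimate C(u, v): map each sample through the kernel CDF of
        its own margin, then count how many pairs fall below (u, v)."""
        fx, fy = kernel_cdf(xs, h), kernel_cdf(ys, h)
        us = [fx(x) for x in xs]
        vs = [fy(y) for y in ys]
        n = len(xs)
        def C(u, v):
            return sum(1 for a, b in zip(us, vs) if a <= u and b <= v) / n
        return C
    ```

    For perfectly dependent margins the estimate approaches the comonotone copula C(u, u) = u, which is a convenient sanity check for the smoothing step.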

  5. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    PubMed

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects of orthographic syllable neighborhood size and sound-to-spelling consistency on the P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that the P200 was smaller for words with large orthographic syllable neighborhoods than for words with small ones. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood only when the word had inconsistent sound-to-spelling mapping. The results support the assumption that orthographic information is used early in the prelexical spoken word recognition process. PMID:26669620

  6. Learning and consolidation of new spoken words in autism spectrum disorder.

    PubMed

    Henderson, Lisa; Powell, Anna; Gareth Gaskell, M; Norbury, Courtenay

    2014-11-01

    Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary knowledge and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words and/or integrating them with existing knowledge. Nineteen boys with ASD and 19 typically developing (TD) boys matched on age and vocabulary knowledge showed similar improvements in recognition and recall of novel words (e.g. 'biscal') 24 hours after training, suggesting an intact ability to consolidate explicit knowledge of new spoken word forms. TD children showed competition effects for existing neighbors (e.g. 'biscuit') after 24 hours, suggesting that the new words had been integrated with existing knowledge over time. In contrast, children with ASD showed immediate competition effects that were not significant after 24 hours, suggesting a qualitative difference in the time course of lexical integration. These results are considered from the perspective of the dual-memory systems framework. PMID:24636285

  7. The neural basis of inhibitory effects of semantic and phonological neighbors in spoken word production

    PubMed Central

    Mirman, Daniel; Graziano, Kristen M.

    2014-01-01

    Theories of word production and word recognition generally agree that multiple word candidates are activated during processing. The facilitative and inhibitory effects of these “lexical neighbors” have been studied extensively using behavioral methods and have spurred theoretical development in psycholinguistics, but relatively little is known about the neural basis of these effects and how lesions may affect them. The present study used voxel-wise lesion overlap subtraction to examine semantic and phonological neighbor effects in spoken word production following left hemisphere stroke. Increased inhibitory effects of near semantic neighbors were associated with inferior frontal lobe lesions, suggesting impaired selection among strongly activated semantically-related candidates. Increased inhibitory effects of phonological neighbors were associated with posterior superior temporal and inferior parietal lobe lesions. In combination with previous studies, these results suggest that such lesions cause phonological-to-lexical feedback to more strongly activate phonologically-related lexical candidates. The comparison of semantic and phonological neighbor effects and how they are affected by left hemisphere lesions provides new insights into the cognitive dynamics and neural basis of phonological, semantic, and cognitive control processes in spoken word production. PMID:23647518

  8. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition.

    PubMed

    Jesse, Alexandra; McQueen, James M

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress. PMID:24134065

  9. Causal Influence of Articulatory Motor Cortex on Comprehending Single Spoken Words: TMS Evidence

    PubMed Central

    Schomers, Malte R.; Kirilina, Evgeniya; Weigand, Anne; Bajbouj, Malek; Pulvermüller, Friedemann

    2015-01-01

    Classic wisdom had been that motor and premotor cortex contribute to motor execution but not to higher cognition and language comprehension. In contrast, mounting evidence from neuroimaging, patient research, and transcranial magnetic stimulation (TMS) suggests sensorimotor interaction and, specifically, that the articulatory motor cortex is important for classifying meaningless speech sounds into phonemic categories. However, whether these findings speak to the comprehension issue is unclear, because language comprehension does not require explicit phonemic classification and previous results may therefore relate to factors alien to semantic understanding. We here used the standard psycholinguistic test of spoken word comprehension, the word-to-picture-matching task, and concordant TMS to articulatory motor cortex. TMS pulses were applied to primary motor cortex controlling either the lips or the tongue as subjects heard critical word stimuli starting with bilabial lip-related or alveolar tongue-related stop consonants (e.g., “pool” or “tool”). A significant cross-over interaction showed that articulatory motor cortex stimulation delayed comprehension responses for phonologically incongruent words relative to congruent ones (i.e., lip area TMS delayed “tool” relative to “pool” responses). As local TMS to articulatory motor areas differentially delays the comprehension of phonologically incongruent spoken words, we conclude that motor systems can take a causal role in semantic comprehension and, hence, higher cognition. PMID:25452575

  10. Effects of orthographic consistency and homophone density on Chinese spoken word recognition.

    PubMed

    Chen, Wei-Fan; Chao, Pei-Chun; Chang, Ya-Ning; Hsu, Chun-Hsien; Lee, Chia-Ying

    2016-01-01

    Studies of alphabetic languages have shown that orthographic knowledge influences phonological processing during spoken word recognition. This study utilized event-related potentials (ERPs) to differentiate two types of phonology-to-orthography (P-to-O) mapping consistency in Chinese, namely homophone density and orthographic consistency. The ERP data revealed an orthographic consistency effect in the frontal-centrally distributed N400, and a homophone density effect in the central-posteriorly distributed late positive component (LPC). Further source analyses using standardized low-resolution electromagnetic tomography (sLORETA) demonstrated that the orthographic effect was localized not only in the frontal and temporal-parietal regions for phonological processing, but also in the posterior visual cortex for orthographic processing, while the homophone density effect was found in the middle temporal gyrus for lexical-semantic selection, and in the temporal-occipital junction for orthographic processing. These results suggest that orthographic information not only shapes the nature of phonological representations, but may also be activated during on-line spoken word recognition. PMID:27174851

  11. Phonological Neighborhood Effects in Spoken Word Production: An fMRI Study

    PubMed Central

    Peramunage, D.; Blumstein, S. E.; Myers, E.B.; Goldrick, M.; Baese-Berk, M.

    2010-01-01

    The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words which either had a voiced minimal pair (MP) neighbor (e.g. cape) or lacked a minimal pair (NMP) neighbor (e.g. cake). The voiced neighbor never appeared in the stimulus set. Behavioral results showed longer voice-onset time for MP target words, replicating earlier behavioral results (Baese-Berk & Goldrick, 2009). fMRI results revealed reduced activation for MP words compared to NMP words in a network including the left posterior superior temporal gyrus, the supramarginal gyrus, inferior frontal gyrus, and precentral gyrus. These findings support cascade models of spoken word production and show that neural activation at the lexical level modulates activation in those brain regions involved in lexical selection, phonological planning, and ultimately motor plans for production. The facilitatory effects for words with minimal pair neighbors suggest that competition effects reflect the overlap inherent in the phonological representation of the target word and its minimal pair neighbor. PMID:20350185

  12. Positive Emotional Language in the Final Words Spoken Directly Before Execution

    PubMed Central

    Hirschmüller, Sarah; Egloff, Boris

    2016-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one’s own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality. PMID:26793135
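The dictionary-based "computerized quantitative text analysis" described here (LIWC-style word counting) can be illustrated with a toy example; the mini-lexicons below are hypothetical stand-ins for validated emotion dictionaries containing thousands of entries:

```python
# Hypothetical mini-lexicons; real analyses use validated
# dictionaries (e.g., LIWC categories) with far more entries.
POSITIVE = {"love", "hope", "peace", "thank", "happy"}
NEGATIVE = {"fear", "hate", "pain", "sorry", "sad"}

def emotion_proportions(text):
    """Return (positive, negative) emotion-word proportions of a text,
    i.e., category hits divided by total word count."""
    words = [w.strip(".,!?'\"").lower() for w in text.split()]
    words = [w for w in words if w]
    total = len(words)
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / total, neg / total

p, n = emotion_proportions("I love you all and I hope for peace. No fear.")
# 3 of the 11 words are positive, 1 is negative
```

Comparing such proportions against base rates for spoken and written corpora is what licenses the study's claim of elevated positivity in final statements.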

  13. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    PubMed

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

    The present study aims to reveal some facts concerning first-language (L1) and second-language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full-attention condition. Attention manipulation reduced priming magnitude in both experiments in L2. Moreover, L2 word retrieval increased reaction times and reduced accuracy on the simultaneous secondary task to protect its own accuracy and speed. PMID:26643309

  14. Electrophysiological Correlates of Emotional Content and Volume Level in Spoken Word Processing

    PubMed Central

    Grass, Annika; Bayer, Mareike; Schacht, Annekathrin

    2016-01-01

    For visual stimuli of emotional content as pictures and written words, stimulus size has been shown to increase emotion effects in the early posterior negativity (EPN), a component of event-related potentials (ERPs) indexing attention allocation during visual sensory encoding. In the present study, we addressed the question whether this enhanced relevance of larger (visual) stimuli might generalize to the auditory domain and whether auditory emotion effects are modulated by volume. Therefore, subjects were listening to spoken words with emotional or neutral content, played at two different volume levels, while ERPs were recorded. Negative emotional content led to an increased frontal positivity and parieto-occipital negativity—a scalp distribution similar to the EPN—between ~370 and 530 ms. Importantly, this emotion-related ERP component was not modulated by differences in volume level, which impacted early auditory processing, as reflected in increased amplitudes of the N1 (80–130 ms) and P2 (130–265 ms) components as hypothesized. However, contrary to effects of stimulus size in the visual domain, volume level did not influence later ERP components. These findings indicate modality-specific and functionally independent processing triggered by emotional content of spoken words and volume level. PMID:27458359

  15. Task modulation of disyllabic spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    PubMed

    Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan

    2016-01-01

    Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanism induced by experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at early stage of recognition (~150-250 ms), an enhanced P2 was elicited by the word-initial phonological mismatch in both tasks. In ~300-500 ms, a fronto-central negative component was elicited by word-initial phonological similarities in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both the semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500-700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of the spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements. PMID:27180951

  16. Accessing characters in spoken Chinese disyllables: An ERP study on the resolution of auditory ambiguity.

    PubMed

    Chen, Xuqian; Huang, Guoliang; Huang, Jian

    2016-01-01

    Chinese differs from most Indo-European languages in its phonological, lexical, and syntactic structures. One of its unique properties is the abundance of homophones at the monosyllabic/morphemic level, with the consequence that monosyllabic homophones are all ambiguous in speech perception. Two-morpheme Chinese words can be composed of two high homophone-density morphemes (HH words), two low homophone-density morphemes (LL words), or one high and one low homophone-density morpheme (LH or HL words). The assumption of a simple inhibitory homophone effect is called into question in the case of disyllabic spoken word recognition, in which the recognition of one morpheme is affected by semantic information given by the other. Event-related brain potentials (ERPs) were used to trace on-line competition among morphemic homophones in accessing Chinese disyllables. Results showing significant differences in ERP amplitude when comparing LL and LH words, but not when comparing LL and HL words, suggested that the first morpheme cannot be accessed without feedback from the second morpheme. Most importantly, analyses of N400 amplitude among different densities showed a converse homophone effect in which LL words, rather than LH or HL words, triggered a larger N400. These findings provide strong evidence of a dynamic integration system at work during spoken Chinese disyllable recognition. PMID:26589544

  17. Brain-to-text: decoding spoken phrases from phone representations in the brain

    PubMed Central

    Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja

    2015-01-01

    It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech. PMID:26124702
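The word error rate (WER) reported for Brain-To-Text is the standard ASR metric: word-level Levenshtein distance (substitutions + insertions + deletions) divided by the length of the reference transcript. A minimal sketch of that computation (not the authors' code; the example sentences are invented):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / len(reference),
    computed as Levenshtein distance over word sequences."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between first i reference words
    # and first j hypothesis words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(r)][len(h)] / len(r)

print(word_error_rate("the brain decodes spoken words",
                      "the brain decoded words"))  # -> 0.4
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why a 25% WER on continuous speech from ECoG is a meaningful result rather than a floor effect.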

  18. Task modulation of disyllabic spoken word recognition in Mandarin Chinese: a unimodal ERP study

    PubMed Central

    Huang, Xianjun; Yang, Jin-Chen; Chang, Ruohan; Guo, Chunyan

    2016-01-01

    Using unimodal auditory tasks of word-matching and meaning-matching, this study investigated how the phonological and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechanism induced by experimental tasks. Both semantic similarity and word-initial phonological similarity between the primes and targets were manipulated. Results showed that at early stage of recognition (~150–250 ms), an enhanced P2 was elicited by the word-initial phonological mismatch in both tasks. In ~300–500 ms, a fronto-central negative component was elicited by word-initial phonological similarities in the word-matching task, while a parietal negativity was elicited by semantically unrelated primes in the meaning-matching task, indicating that both the semantic and phonological processes can be involved in this time window, depending on the task requirements. In the late stage (~500–700 ms), a centro-parietal Late N400 was elicited in both tasks, but with a larger effect in the meaning-matching task than in the word-matching task. This finding suggests that the semantic representation of the spoken words can be activated automatically in the late stage of recognition, even when semantic processing is not required. However, the magnitude of the semantic activation is modulated by task requirements. PMID:27180951

  19. Experimentally-induced Increases in Early Gesture Lead to Increases in Spoken Vocabulary

    PubMed Central

    LeBarton, Eve Sauer; Goldin-Meadow, Susan; Raudenbush, Stephen

    2014-01-01

    Differences in vocabulary that children bring with them to school can be traced back to the gestures they produce at 1;2, which, in turn, can be traced back to the gestures their parents produce at the same age (Rowe & Goldin-Meadow, 2009b). We ask here whether child gesture can be experimentally increased and, if so, whether the increases lead to increases in spoken vocabulary. Fifteen children aged 1;5 participated in an 8-week at-home intervention study (6 weekly training sessions plus follow-up 2 weeks later) in which all were exposed to object words, but only some were told to point at the named objects. Before each training session and at follow-up, children interacted naturally with caregivers to establish a baseline against which changes in communication were measured. Children who were told to gesture increased the number of gesture meanings they conveyed, not only during training but also during interactions with caregivers. These experimentally-induced increases in gesture led to larger spoken repertoires at follow-up. PMID:26120283

  20. Positive Emotional Language in the Final Words Spoken Directly Before Execution.

    PubMed

    Hirschmüller, Sarah; Egloff, Boris

    2015-01-01

    How do individuals emotionally cope with the imminent real-world salience of mortality? DeWall and Baumeister as well as Kashdan and colleagues previously provided support that an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one's own contemplated death. Although these studies provide important insights into the psychological dynamics of mortality salience, it remains an open question how individuals cope with the immense threat of mortality prior to their imminent actual death. In the present research, we therefore analyzed positivity in the final words spoken immediately before execution by 407 death row inmates in Texas. By using computerized quantitative text analysis as an objective measure of emotional language use, our results showed that the final words contained a significantly higher proportion of positive than negative emotion words. This emotional positivity was significantly higher than (a) positive emotion word usage base rates in spoken and written materials and (b) positive emotional language use with regard to contemplated death and attempted or actual suicide. Additional analyses showed that emotional positivity in final statements was associated with a greater frequency of language use that was indicative of self-references, social orientation, and present-oriented time focus as well as with fewer instances of cognitive-processing, past-oriented, and death-related word use. Taken together, our findings offer new insights into how individuals cope with the imminent real-world salience of mortality. PMID:26793135

  1. "We Communicated That Way for a Reason": Language Practices and Language Ideologies among Hearing Adults Whose Parents Are Deaf

    ERIC Educational Resources Information Center

    Pizer, Ginger; Walters, Keith; Meier, Richard P.

    2013-01-01

    Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing…

  2. The Hebrew CHILDES corpus: transcription and morphological analysis

    PubMed Central

    Albert, Aviad; MacWhinney, Brian; Nir, Bracha

    2014-01-01

    We present a corpus of transcribed spoken Hebrew that reflects spoken interactions between children and adults. The corpus is an integral part of the CHILDES database, which distributes similar corpora for over 25 languages. We introduce a dedicated transcription scheme for the spoken Hebrew data that is sensitive to both the phonology and the standard orthography of the language. We also introduce a morphological analyzer that was specifically developed for this corpus. The analyzer adequately covers the entire corpus, producing detailed correct analyses for all tokens. Evaluation on a new corpus reveals high coverage as well. Finally, we describe a morphological disambiguation module that selects the correct analysis of each token in context. The result is a high-quality morphologically-annotated CHILDES corpus of Hebrew, along with a set of tools that can be applied to new corpora. PMID:25419199

  3. The Activation of Embedded Words in Spoken Word Identification Is Robust but Constrained: Evidence from the Picture-Word Interference Paradigm

    ERIC Educational Resources Information Center

    Bowers, Jeffrey S.; Davis, Colin J.; Mattys, Sven L.; Damian, Markus F.; Hanley, Derek

    2009-01-01

    Three picture-word interference (PWI) experiments assessed the extent to which embedded subset words are activated during the identification of spoken superset words (e.g., "bone" in "trombone"). Participants named aloud pictures (e.g., "brain") while spoken distractors were presented. In the critical condition, superset distractors contained a…

  4. How the stigma of low literacy can impair patient-professional spoken interactions and affect health: insights from a qualitative investigation

    PubMed Central

    2013-01-01

    Background Low literacy is a significant problem across the developed world. A considerable body of research has reported associations between low literacy and less appropriate access to healthcare services, lower likelihood of self-managing health conditions well, and poorer health outcomes. There is a need to explore the previously neglected perspectives of people with low literacy to help explain how low literacy can lead to poor health, and to consider how to improve the ability of health services to meet their needs. Methods Two-stage qualitative study: in-depth individual interviews followed by focus groups to confirm analysis and develop suggestions for service improvements. A purposive sample of 29 adults with English as their first language who had sought help with literacy was recruited from an Adult Learning Centre in the UK. Results Over and above the well-documented difficulties that people with low literacy can have with the written information and complex explanations and instructions they encounter as they use health services, the stigma of low literacy had significant negative implications for participants’ spoken interactions with healthcare professionals. Participants described various difficulties in consultations, some of which had impacted negatively on their broader healthcare experiences and abilities to self-manage health conditions. Some communication difficulties were apparently perpetuated or exacerbated because participants limited their conversational engagement and used a variety of strategies to cover up their low literacy that could send misleading signals to health professionals. Participants’ biographical narratives revealed that the ways in which they managed their low literacy in healthcare settings, as in other social contexts, stemmed from highly negative experiences with literacy-related stigma, usually from their schooldays onwards. They also suggest that literacy-related stigma can significantly undermine mental…

  5. Defining Spoken Language Benchmarks and Selecting Measures of Expressive Language Development for Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul

    2009-01-01

    Purpose: The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken…

  6. Predictors of Early Reading Skill in 5-Year-Old Children with Hearing Loss Who Use Spoken Language

    ERIC Educational Resources Information Center

    Cupples, Linda; Ching, Teresa Y. C.; Crowe, Kathryn; Day, Julia; Seeto, Mark

    2014-01-01

    This research investigated the concurrent association between early reading skills and phonological awareness (PA), print knowledge, language, cognitive, and demographic variables in 101 five-year-old children with prelingual hearing losses ranging from mild to profound who communicated primarily via spoken language. All participants were fitted…

  7. Words Spoken with Insistence: "Wak'as" and the Limits of the Bolivian Multi-Institutional Democracy

    ERIC Educational Resources Information Center

    Cuelenaere, Laurence Janine

    2009-01-01

    Building on 18 months of fieldwork in the Bolivian highlands, this dissertation examines how traversing landscapes, through the mediation of spatial practices and spoken words, are embedded in systems of belief. By focusing on "wak'as" (i.e. sacred objects) and on how the inhabitants of the Altiplano relate to the Andean deities known as "wak'as,"…

  8. The Development and Validation of the "Academic Spoken English Strategies Survey (ASESS)" for Non-Native English Speaking Graduate Students

    ERIC Educational Resources Information Center

    Schroeder, Rui M.

    2016-01-01

    This study reports on the three-year development and validation of a new assessment tool--the Academic Spoken English Strategies Survey (ASESS). The questionnaire is the first of its kind to assess the listening and speaking strategy use of non-native English speaking (NNES) graduate students. A combination of sources was used to develop the…

  9. Caracterizacion Lexica del Espanol Hablado en el Noroeste de Indiana (Lexical Characterization of the Spanish Spoken in Northwest Indiana).

    ERIC Educational Resources Information Center

    Mendieta, Eva; Molina, Isabel

    2000-01-01

    Analyzes Spanish lexical data recorded in sociolinguistic interviews with Hispanic community members in Northwest Indiana. Examined how prevalent English is in the spoken Spanish of this community; what variety of Spanish is regarded prestigious; whether lexical forms establish the prestige dialect adopted by speakers of other dialects; the…

  10. The Sociolinguistics of Variety Identification and Categorisation: Free Classification of Varieties of Spoken English Amongst Non-Linguist Listeners

    ERIC Educational Resources Information Center

    McKenzie, Robert M.

    2015-01-01

    In addition to the examination of non-linguists' evaluations of different speech varieties, in recent years sociolinguists and sociophoneticians have afforded greater attention towards the ways in which naïve listeners perceive, process, and encode spoken language variation, including the identification of language varieties as regionally or…

  11. Parent and Teacher Perceptions of Transitioning Students from a Listening and Spoken Language School to the General Education Setting

    ERIC Educational Resources Information Center

    Rugg, Natalie; Donne, Vicki

    2011-01-01

    The present study examines the perception of parents and teachers towards the transition process and preparedness of students who are deaf or hard of hearing from a listening and spoken language school in the Northeastern United States to a general education setting in their home school districts. The study uses a mixed methods design with…

  12. Top Languages Spoken by English Language Learners Nationally and by State. ELL Information Center Fact Sheet Series. No. 3

    ERIC Educational Resources Information Center

    Batalova, Jeanne; McHugh, Margie

    2010-01-01

    While English Language Learner (ELL) students in the United States speak more than 150 languages, Spanish is by far the most common home or first language, but is not the top language spoken by ELLs in every state. This fact sheet, based on analysis of the U.S. Census Bureau's 2009 American Community Survey, documents the top languages spoken…

  13. How Vocabulary Size in Two Languages Relates to Efficiency in Spoken Word Recognition by Young Spanish-English Bilinguals

    ERIC Educational Resources Information Center

    Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda

    2010-01-01

Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n = 26; age 2;6). Between-language…

  14. The Interaction of Lexical Semantics and Cohort Competition in Spoken Word Recognition: An fMRI Study

    ERIC Educational Resources Information Center

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A.; Marslen-Wilson, William D.; Tyler, Lorraine K.

    2011-01-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning…

  15. Spoken Language Comprehension of Phrases, Simple and Compound-Active Sentences in Non-Speaking Children with Severe Cerebral Palsy

    ERIC Educational Resources Information Center

    Geytenbeek, Joke J. M.; Heim, Margriet J. M.; Knol, Dirk L.; Vermeulen, R. Jeroen; Oostrom, Kim J.

    2015-01-01

Background: Children with severe cerebral palsy (CP) (i.e. "non-speaking children with severely limited mobility") are restricted in many domains that are important to the acquisition of language. Aims: To investigate comprehension of spoken language at the sentence-type level in non-speaking children with severe CP. Methods & Procedures…

  16. Does It Really Matter whether Students' Contributions Are Spoken versus Typed in an Intelligent Tutoring System with Natural Language?

    ERIC Educational Resources Information Center

    D'Mello, Sidney K.; Dowell, Nia; Graesser, Arthur

    2011-01-01

A key question is whether learning differs when students speak versus type their responses while interacting with intelligent tutoring systems that use natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The "speech facilitation" hypothesis predicts that spoken input will "increase" learning, whereas the "text…

  17. Teaching Pragmatic Awareness of Spoken Requests to Chinese EAP Learners in the UK: Is Explicit Instruction Effective?

    ERIC Educational Resources Information Center

    Halenko, Nicola; Jones, Christian

    2011-01-01

    The aim of this study is to evaluate the impact of explicit interventional treatment on developing pragmatic awareness and production of spoken requests in an EAP context (taken here to mean those studying/using English for academic purposes in the UK) with Chinese learners of English at a British higher education institution. The study employed…

  18. Assessing Multimodal Spoken Word-in-Sentence Recognition in Children with Normal Hearing and Children with Cochlear Implants

    ERIC Educational Resources Information Center

    Holt, Rachael Frush; Kirk, Karen Iler; Hay-McCutcheon, Marcia

    2011-01-01

    Purpose: To examine multimodal spoken word-in-sentence recognition in children. Method: Two experiments were undertaken. In Experiment 1, the youngest age with which the multimodal sentence recognition materials could be used was evaluated. In Experiment 2, lexical difficulty and presentation modality effects were examined, along with test-retest…

  19. Influence of Spoken Language on the Initial Acquisition of Reading/Writing: Critical Analysis of Verbal Deficit Theory

    ERIC Educational Resources Information Center

    Ramos-Sanchez, Jose Luis; Cuadrado-Gordillo, Isabel

    2004-01-01

    This article presents the results of a quasi-experimental study of whether there exists a causal relationship between spoken language and the initial learning of reading/writing. The subjects were two matched samples each of 24 preschool pupils (boys and girls), controlling for certain relevant external variables. It was found that there was no…

  20. Standardization and Whiteness: One and the Same? A Response to "There Is No Culturally Responsive Teaching Spoken Here"

    ERIC Educational Resources Information Center

    Weilbacher, Gary

    2012-01-01

    The article "There Is No Culturally Responsive Teaching Spoken Here: A Critical Race Perspective" by Cleveland Hayes and Brenda C. Juarez suggests that the current focus on meeting standards incorporates limited thoughtful discussions related to complex notions of diversity. Our response suggests a strong link between standardization and White…

  1. Are Young Children With Cochlear Implants Sensitive to the Statistics of Words in the Ambient Spoken Language?

    PubMed Central

    McGregor, Karla K.; Spencer, Linda J.

    2015-01-01

Purpose: The purpose of this study was to determine whether children with cochlear implants (CIs) are sensitive to statistical characteristics of words in the ambient spoken language, whether that sensitivity changes in expected ways as their spoken lexicon grows, and whether that sensitivity varies with unilateral or bilateral implantation. Method: We analyzed archival data collected from the parents of 36 children who received cochlear implantation (20 unilateral, 16 bilateral) before 24 months of age. The parents reported their children's word productions 12 months after implantation using the MacArthur Communicative Development Inventories: Words and Sentences (Fenson et al., 1993). We computed the number of words, out of 292 possible monosyllabic nouns, verbs, and adjectives, that each child was reported to say and calculated the average phonotactic probability, neighborhood density, and word frequency of the reported words. Results: Spoken vocabulary size positively correlated with average phonotactic probability and negatively correlated with average neighborhood density, but only in children with bilateral CIs. Conclusion: At 12 months postimplantation, children with bilateral CIs demonstrate sensitivity to statistical characteristics of words in the ambient spoken language akin to that reported for children with normal hearing during the early stages of lexical development. Children with unilateral CIs do not. PMID:25677929
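
    The neighborhood-density measure used in this study can be sketched in a few lines; the toy lexicon and phoneme strings below are invented for illustration, not the study's 292-word CDI set:

    ```python
    def neighbors(word, lexicon):
        """Count words in the lexicon that differ from `word` by exactly one
        phoneme substitution, addition, or deletion (neighborhood density)."""
        def one_away(a, b):
            if abs(len(a) - len(b)) > 1:
                return False
            if len(a) == len(b):
                # Same length: exactly one substitution.
                return sum(x != y for x, y in zip(a, b)) == 1
            # Lengths differ by one: deleting one symbol from the longer
            # string must yield the shorter one.
            short, long_ = sorted((a, b), key=len)
            return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))
        return sum(one_away(word, other) for other in lexicon if other != word)

    # Toy lexicon of phoneme strings (hypothetical; real work uses transcriptions).
    lexicon = ["kat", "bat", "hat", "kap", "dog", "dot", "at"]
    density = {w: neighbors(w, lexicon) for w in lexicon}
    ```

    A real analysis computes these counts over a full phonemically transcribed lexicon and then correlates each child's average density with spoken vocabulary size.
    
    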

  2. Building Language Blocks in L2 Japanese: Chunk Learning and the Development of Complexity and Fluency in Spoken Production

    ERIC Educational Resources Information Center

    Taguchi, Naoko

    2008-01-01

    This pilot study examined the development of complexity and fluency of second language (L2) spoken production among L2 learners who received extensive practice on grammatical chunks as constituent units of discourse. Twenty-two students enrolled in an elementary Japanese course at a U.S. university received classroom instruction on 40 grammatical…

  3. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment

    PubMed Central

    Planchou, Clément; Clément, Sylvain; Béland, Renée; Cason, Nia; Motte, Jacques; Samson, Séverine

    2015-01-01

Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both the TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection. PMID:26767070

  4. A Spoken-Language Intervention for School-Aged Boys With Fragile X Syndrome.

    PubMed

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2016-05-01

    Using a single case design, a parent-mediated spoken-language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared storytelling using wordless picture books and targeted three empirically derived language-support strategies. All sessions were implemented through distance videoteleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data were collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies, and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed. PMID:27119214

  5. Speech Recognition System and Formant Based Analysis of Spoken Arabic Vowels

    NASA Astrophysics Data System (ADS)

    Alotaibi, Yousef Ajami; Hussain, Amir

Arabic is one of the world's oldest languages and is among the most widely spoken languages in terms of number of speakers. However, it has not received much attention from the traditional speech processing research community. This study is specifically concerned with the analysis of vowels in the Modern Standard Arabic dialect. The first and second formant values of these vowels are investigated, and the differences and similarities between the vowels are explored using consonant-vowel-consonant (CVC) utterances. For this purpose, an HMM-based recognizer was built to classify the vowels, and the performance of the recognizer was analyzed to help understand the similarities and dissimilarities between the phonetic features of the vowels. The vowels are also analyzed in both the time and frequency domains, and the consistent findings of the analysis are expected to facilitate future Arabic speech processing tasks such as vowel and speech recognition and classification.
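
    To illustrate how F1/F2 values can separate vowel categories, here is a minimal nearest-centroid sketch; the centroid values are invented placeholders, not formant measurements from this study (which used an HMM recognizer rather than this simplified classifier):

    ```python
    import math

    # Hypothetical F1/F2 centroids (Hz) for three Arabic long vowels.
    CENTROIDS = {
        "/a:/": (700, 1300),   # open vowel: high F1
        "/i:/": (300, 2200),   # close front vowel: low F1, high F2
        "/u:/": (350, 800),    # close back vowel: low F1, low F2
    }

    def classify_vowel(f1, f2):
        """Assign a vowel label by Euclidean distance to the nearest centroid."""
        return min(CENTROIDS, key=lambda v: math.dist((f1, f2), CENTROIDS[v]))
    ```

    In practice, formants are estimated from the signal (e.g., via LPC analysis) and the distributions overlap, which is why probabilistic models such as HMMs are preferred over hard centroids.
    
    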

  6. The evolutionary history of genes involved in spoken and written language: beyond FOXP2

    PubMed Central

    Mozzi, Alessandra; Forni, Diego; Clerici, Mario; Pozzoli, Uberto; Mascheretti, Sara; Guerini, Franca R.; Riva, Stefania; Bresolin, Nereo; Cagliani, Rachele; Sironi, Manuela

    2016-01-01

Humans possess a communication system based on spoken and written language. Other animals can learn vocalization by imitation, but this is not equivalent to human language. Many genes have been implicated in language impairment (LI) and developmental dyslexia (DD), but their evolutionary history has not been thoroughly analyzed. Here we analyzed the evolution of ten genes involved in DD and LI. Results show that the evolutionary history of LI genes in mammals and birds was comparable in vocal-learner species and non-learners. For the human lineage, several sites showing evidence of positive selection were identified in KIAA0319 and were already present in Neanderthals and Denisovans, suggesting that any phenotypic change they entailed was shared with archaic hominins. Conversely, in FOXP2, ROBO1, ROBO2, and CNTNAP2, non-coding changes rose to high frequency after the separation from archaic hominins. These variants are promising candidates for association studies in LI and DD. PMID:26912479

  7. Semantic Relations Cause Interference in Spoken Language Comprehension When Using Repeated Definite References, Not Pronouns.

    PubMed

    Peters, Sara A; Boiteau, Timothy W; Almor, Amit

    2016-01-01

    The choice and processing of referential expressions depend on the referents' status within the discourse, such that pronouns are generally preferred over full repetitive references when the referent is salient. Here we report two visual-world experiments showing that: (1) in spoken language comprehension, this preference is reflected in delayed fixations to referents mentioned after repeated definite references compared with after pronouns; (2) repeated references are processed differently than new references; (3) long-term semantic memory representations affect the processing of pronouns and repeated names differently. Overall, these results support the role of semantic discourse representation in referential processing and reveal important details about how pronouns and full repeated references are processed in the context of these representations. The results suggest the need for modifications to current theoretical accounts of reference processing such as Discourse Prominence Theory and the Informational Load Hypothesis. PMID:26973552

  8. The influence of lexical statistics on temporal lobe cortical dynamics during spoken word listening

    PubMed Central

    Cibelli, Emily S.; Leonard, Matthew K.; Johnson, Keith; Chang, Edward F.

    2015-01-01

    Neural representations of words are thought to have a complex spatio-temporal cortical basis. It has been suggested that spoken word recognition is not a process of feed-forward computations from phonetic to lexical forms, but rather involves the online integration of bottom-up input with stored lexical knowledge. Using direct neural recordings from the temporal lobe, we examined cortical responses to words and pseudowords. We found that neural populations were not only sensitive to lexical status (real vs. pseudo), but also to cohort size (number of words matching the phonetic input at each time point) and cohort frequency (lexical frequency of those words). These lexical variables modulated neural activity from the posterior to anterior temporal lobe, and also dynamically as the stimuli unfolded on a millisecond time scale. Our findings indicate that word recognition is not purely modular, but relies on rapid and online integration of multiple sources of lexical knowledge. PMID:26072003
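
    The cohort variables described in this study (cohort size and cohort frequency at each point of the unfolding input) can be sketched as a prefix computation; the mini-lexicon and per-million frequencies below are hypothetical:

    ```python
    def cohort_trajectory(target, lexicon, freq):
        """For each successive phoneme of `target`, return the cohort size
        (words sharing that prefix) and the mean lexical frequency of the cohort."""
        trajectory = []
        for i in range(1, len(target) + 1):
            prefix = target[:i]
            cohort = [w for w in lexicon if w.startswith(prefix)]
            mean_freq = sum(freq[w] for w in cohort) / len(cohort) if cohort else 0.0
            trajectory.append((len(cohort), mean_freq))
        return trajectory

    # Toy lexicon with invented per-million word frequencies.
    freq = {"cat": 50, "cap": 20, "can": 80, "cot": 10, "dog": 60}
    lexicon = list(freq)
    ```

    For a word like "cat", the cohort shrinks as each phoneme arrives, which is the time-varying predictor such studies relate to neural activity.
    
    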

  9. The influence of lexical statistics on temporal lobe cortical dynamics during spoken word listening.

    PubMed

    Cibelli, Emily S; Leonard, Matthew K; Johnson, Keith; Chang, Edward F

    2015-08-01

    Neural representations of words are thought to have a complex spatio-temporal cortical basis. It has been suggested that spoken word recognition is not a process of feed-forward computations from phonetic to lexical forms, but rather involves the online integration of bottom-up input with stored lexical knowledge. Using direct neural recordings from the temporal lobe, we examined cortical responses to words and pseudowords. We found that neural populations were not only sensitive to lexical status (real vs. pseudo), but also to cohort size (number of words matching the phonetic input at each time point) and cohort frequency (lexical frequency of those words). These lexical variables modulated neural activity from the posterior to anterior temporal lobe, and also dynamically as the stimuli unfolded on a millisecond time scale. Our findings indicate that word recognition is not purely modular, but relies on rapid and online integration of multiple sources of lexical knowledge. PMID:26072003

  10. Semantic Relations Cause Interference in Spoken Language Comprehension When Using Repeated Definite References, Not Pronouns

    PubMed Central

    Peters, Sara A.; Boiteau, Timothy W.; Almor, Amit

    2016-01-01

    The choice and processing of referential expressions depend on the referents' status within the discourse, such that pronouns are generally preferred over full repetitive references when the referent is salient. Here we report two visual-world experiments showing that: (1) in spoken language comprehension, this preference is reflected in delayed fixations to referents mentioned after repeated definite references compared with after pronouns; (2) repeated references are processed differently than new references; (3) long-term semantic memory representations affect the processing of pronouns and repeated names differently. Overall, these results support the role of semantic discourse representation in referential processing and reveal important details about how pronouns and full repeated references are processed in the context of these representations. The results suggest the need for modifications to current theoretical accounts of reference processing such as Discourse Prominence Theory and the Informational Load Hypothesis. PMID:26973552

  11. The evolutionary history of genes involved in spoken and written language: beyond FOXP2.

    PubMed

    Mozzi, Alessandra; Forni, Diego; Clerici, Mario; Pozzoli, Uberto; Mascheretti, Sara; Guerini, Franca R; Riva, Stefania; Bresolin, Nereo; Cagliani, Rachele; Sironi, Manuela

    2016-01-01

Humans possess a communication system based on spoken and written language. Other animals can learn vocalization by imitation, but this is not equivalent to human language. Many genes have been implicated in language impairment (LI) and developmental dyslexia (DD), but their evolutionary history has not been thoroughly analyzed. Here we analyzed the evolution of ten genes involved in DD and LI. Results show that the evolutionary history of LI genes in mammals and birds was comparable in vocal-learner species and non-learners. For the human lineage, several sites showing evidence of positive selection were identified in KIAA0319 and were already present in Neanderthals and Denisovans, suggesting that any phenotypic change they entailed was shared with archaic hominins. Conversely, in FOXP2, ROBO1, ROBO2, and CNTNAP2, non-coding changes rose to high frequency after the separation from archaic hominins. These variants are promising candidates for association studies in LI and DD. PMID:26912479

  12. Long-term temporal tracking of speech rate affects spoken-word recognition.

    PubMed

    Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin

    2014-08-01

    Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. PMID:24907119

  13. How are pronunciation variants of spoken words recognized? A test of generalization to newly learned words

    PubMed Central

    Pitt, Mark A.

    2009-01-01

One account of how pronunciation variants of spoken words (center -> "senner" or "sennah") are recognized is that sublexical processes use information about variation in the same phonological environments to recover the intended segments (Gaskell & Marslen-Wilson, 1998). The present study tests the limits of this phonological inference account by examining how listeners process for the first time a pronunciation variant of a newly learned word. Recognition of such a variant should occur as long as it possesses the phonological structure that legitimizes the variation. Experiments 1 and 2 identify a phonological environment that satisfies the conditions necessary for a phonological inference mechanism to be operational. Using a word-learning paradigm, Experiments 3 through 5 show that inference alone is not sufficient for generalization but could facilitate it, and that one condition that leads to generalization is meaningful exposure to the variant in an overheard conversation, demonstrating that lexical processing is necessary for variant recognition. PMID:20161243

  14. Tracking the time course of phonetic cue integration during spoken word recognition.

    PubMed

    McMurray, Bob; Clayards, Meghan A; Tanenhaus, Michael K; Aslin, Richard N

    2008-12-01

    Speech perception requires listeners to integrate multiple cues that each contribute to judgments about a phonetic category. Classic studies of trading relations assessed the weights attached to each cue but did not explore the time course of cue integration. Here, we provide the first direct evidence that asynchronous cues to voicing (/b/ vs. /p/) and manner (/b/ vs. /w/) contrasts become available to the listener at different times during spoken word recognition. Using the visual world paradigm, we show that the probability of eye movements to pictures of target and of competitor objects diverge at different points in time after the onset of the target word. These points of divergence correspond to the availability of early (voice onset time or formant transition slope) and late (vowel length) cues to voicing and manner contrasts. These results support a model of cue integration in which phonetic cues are used for lexical access as soon as they are available. PMID:19001568

  15. The role of visual representations during the lexical access of spoken words

    PubMed Central

    Lewis, Gwyneth; Poeppel, David

    2015-01-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability - concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. PMID:24814579

  16. The effect of written text on comprehension of spoken English as a foreign language.

    PubMed

    Diao, Yali; Chandler, Paul; Sweller, John

    2007-01-01

    Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas. PMID:17650920

  17. The slow developmental timecourse of real-time spoken word recognition

    PubMed Central

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

This study investigated the developmental timecourse of spoken word recognition in older children, using eye-tracking to assess how the real-time processing dynamics of word recognition change over development. We found that nine-year-olds were slower to activate the target words and showed more early competition from competitor words than 16-year-olds; however, both age groups ultimately fixated targets to the same degree. This contrasts with a prior study of adolescents with language impairment (McMurray et al., 2010), which showed a different pattern of real-time processes. These findings suggest that the dynamics of word recognition are still developing even at these late ages, and that differences due to developmental change may derive from different sources than individual differences in relative language ability. PMID:26479544

  18. Phonological Neighborhood Competition Affects Spoken Word Production Irrespective of Sentential Context

    PubMed Central

    Fox, Neal P.; Reilly, Megan; Blumstein, Sheila E.

    2015-01-01

    Two experiments examined the influence of phonologically similar neighbors on articulation of words’ initial stop consonants in order to investigate the conditions under which lexically-conditioned phonetic variation arises. In Experiment 1, participants produced words in isolation. Results showed that the voice-onset time (VOT) of a target’s initial voiceless stop was predicted by its overall neighborhood density, but not by its having a voicing minimal pair. In Experiment 2, participants read aloud the same targets after semantically predictive sentence contexts and after neutral sentence contexts. Results showed that, although VOTs were shorter in words produced after predictive contexts, the neighborhood density effect on VOT production persisted irrespective of context. These findings suggest that global competition from a word’s neighborhood affects spoken word production independently of contextual modulation and support models in which activation cascades automatically and obligatorily among all of a selected target word’s phonological neighbors during acoustic-phonetic encoding. PMID:26124538

  19. Potato not Pope: human brain potentials to gender expectation and agreement in Spanish spoken sentences

    PubMed Central

    Wicha, Nicole Y.Y.; Bates, Elizabeth A.; Moreno, Eva M.; Kutas, Marta

    2012-01-01

    Event-related potentials were used to examine the role of grammatical gender in auditory sentence comprehension. Native Spanish speakers listened to sentence pairs in which a drawing depicting a noun was either congruent or incongruent with sentence meaning, and agreed or disagreed in gender with the immediately preceding spoken article. Semantically incongruent drawings elicited an N400 regardless of gender agreement. A similar negativity to prior articles of gender opposite to that of the contextually expected noun suggests that listeners predict specific words during comprehension. Gender disagreements at the drawing also elicited an increased negativity with a later onset and distribution distinct from the canonical N400, indicating that comprehenders attend to gender agreement, even when one of the words is only implicitly represented by a drawing. PMID:12853110

  20. Effects of lexical competition on immediate memory span for spoken words.

    PubMed

    Goh, Winston D; Pisoni, David B

    2003-08-01

    Current theories and models of the structural organization of verbal short-term memory are primarily based on evidence obtained from manipulations of features inherent in the short-term traces of the presented stimuli, such as phonological similarity. In the present study, we investigated whether properties of the stimuli that are not inherent in the short-term traces of spoken words would affect performance in an immediate memory span task. We studied the lexical neighbourhood properties of the stimulus items, which are based on the structure and organization of words in the mental lexicon. The experiments manipulated lexical competition by varying the phonological neighbourhood structure (i.e., neighbourhood density and neighbourhood frequency) of the words on a test list while controlling for word frequency and intra-set phonological similarity (family size). Immediate memory span for spoken words was measured under repeated and nonrepeated sampling procedures. The results demonstrated that lexical competition only emerged when a nonrepeated sampling procedure was used and the participants had to access new words from their lexicons. These findings were not dependent on individual differences in short-term memory capacity. Additional results showed that the lexical competition effects did not interact with proactive interference. Analyses of error patterns indicated that item-type errors, but not positional errors, were influenced by the lexical attributes of the stimulus items. These results complement and extend previous findings that have argued for separate contributions of long-term knowledge and short-term memory rehearsal processes in immediate verbal serial recall tasks. PMID:12881165

  1. Cross-modal metaphorical mapping of spoken emotion words onto vertical space.

    PubMed

    Montoro, Pedro R; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando

    2015-01-01

    From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a 'positive-up/negative-down' embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis. PMID:26322007

  2. Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.

    PubMed

    Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M

    2016-03-01

    A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest, ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success.
PMID
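
    The ALFF measure described in this record is, in essence, the average spectral amplitude of a voxel's BOLD time series within a low-frequency band. Below is a minimal sketch of that computation; the 0.01-0.08 Hz band and the 2 s TR are conventional illustrative choices, not values taken from this study.

```python
import numpy as np

def alff(ts, tr, band=(0.01, 0.08)):
    """Mean FFT amplitude of a demeaned time series within `band` (Hz).

    ts: 1-D BOLD time series for one voxel; tr: repetition time in seconds.
    """
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()
    n = len(ts)
    freqs = np.fft.rfftfreq(n, d=tr)        # frequency of each FFT bin
    amp = np.abs(np.fft.rfft(ts)) / n       # single-sided amplitude spectrum
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return amp[in_band].mean()

# A 0.05 Hz oscillation falls inside the band; a 0.20 Hz one does not.
t = np.arange(200) * 2.0                    # 200 volumes, TR = 2 s
slow = np.sin(2 * np.pi * 0.05 * t)
fast = np.sin(2 * np.pi * 0.20 * t)
print(alff(slow, 2.0) > alff(fast, 2.0))
```

    In whole-brain analyses, this per-voxel value is typically standardized (for example, divided by the global mean ALFF) before being correlated voxelwise with behavior; that step is omitted from this sketch.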

  3. The role of attention in processing morphologically complex spoken words: an EEG/MEG study

    PubMed Central

    Leminen, Alina; Lehtonen, Minna; Leminen, Miika; Nevalainen, Päivi; Mäkelä, Jyrki P.; Kujala, Teija

    2013-01-01

    This study determined to what extent morphological processing of spoken inflected and derived words is attention-independent. To answer this question, EEG and MEG responses were recorded from healthy participants while they were presented with spoken Finnish inflected, derived, and monomorphemic words. In the non-attended task, the participants were instructed to ignore the incoming auditory stimuli and concentrate on a silent cartoon. In the attended task, previously reported by Leminen et al. (2011), the participants were to judge the acceptability of each stimulus. Importantly, EEG and MEG responses were time-locked to the onset of critical information [suffix onset for the complex words and uniqueness point (UP) for the monomorphemic words]. Early after the critical point, word type did not interact with task: in both attended and non-attended tasks, the event-related potentials (ERPs) showed larger negativity to derived than inflected or monomorphemic words ~100 ms after the critical point. MEG source waveforms showed a similar pattern. Later than 100 ms after the critical point, there were no differences between word types in the non-attended task either in the ERP or source modeling data. However, in the attended task inflected words elicited larger responses than other words ~200 ms after the critical point. The results suggest different brain representations for derived and inflected words. The early activation after the critical point was elicited both in the non-attended and attended tasks. As this stage of word recognition was not modulated by attention, it can be concluded to reflect an automatic mapping of incoming acoustic information onto stored representations. In contrast, the later differences between word types in the attended task were not observed in the non-attended task. This indicates that later compositional processes at the (morpho)syntactic-semantic level require focused attention. PMID:23316156

  4. Cross-modal metaphorical mapping of spoken emotion words onto vertical space

    PubMed Central

    Montoro, Pedro R.; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando

    2015-01-01

    From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a ‘positive-up/negative-down’ embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis. PMID:26322007

  5. Semantic information mediates visual attention during spoken word recognition in Chinese: Evidence from the printed-word version of the visual-world paradigm.

    PubMed

    Shen, Wei; Qu, Qingqing; Li, Xingshan

    2016-07-01

    In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm. PMID:26993126

  6. [Synthesis of juvenile hormones in vitro by the corpora allata of 5th stage larva of Locusta migratoria migratorioides (R and F) (Insecta, Orthopteroida)].

    PubMed

    Caruelle, J P; Baehr, J C; Cassier, P

    1979-04-01

    Corpora allata of Locusta migratoria 5th stage larvae synthesize J.H.1, J.H.2 and J.H.3 in vitro. The C.A. of insects of different ages exhibit different rates of J.H. synthesis. J.H.1 and J.H.2 synthesis is less than 1 ng/48 h/gland. During the same time the J.H.3 production may be as much as 25.6 ng/gland. J.H. synthetic activity is the same between right and left C.A. The release of J.H. from the C.A. occurs immediately following synthesis. These results are compared with in vivo haemolymphatic J.H. levels. PMID:113127

  7. CYP15A1, the cytochrome P450 that catalyzes epoxidation of methyl farnesoate to juvenile hormone III in cockroach corpora allata

    PubMed Central

    Helvig, C.; Koener, J. F.; Unnithan, G. C.; Feyereisen, R.

    2004-01-01

    The molecular analysis of insect hormone biosynthesis has long been hampered by the minute size of the endocrine glands producing them. Expressed sequence tags from the corpora allata of the cockroach Diploptera punctata yielded a new cytochrome P450, CYP15A1. Its full-length cDNA encoded a 493-aa protein that has only 34% amino acid identity with CYP4C7, a terpenoid ω-hydroxylase previously cloned from this tissue. Heterologous expression of the cDNA in Escherichia coli produced >300 nmol of CYP15A1 per liter of culture. After purification, its catalytic activity was reconstituted by using phospholipids and house fly P450 reductase. CYP15A1 metabolizes methyl (2E,6E)-3,7,11-trimethyl-2,6-dodecatrienoate (methyl farnesoate) to methyl (2E,6E)-(10R)-10,11-epoxy-3,7,11-trimethyl-2,6-dodecadienoate [juvenile hormone III, JH III] with a turnover of 3–5 nmol/min/nmol P450. The enzyme produces JH III with a ratio of ≈98:2 in favor of the natural (10R)-epoxide enantiomer. This result is in contrast to other insect P450s, such as CYP6A1, that epoxidize methyl farnesoate with lower regio- and stereoselectivity. RT-PCR experiments show that the CYP15A1 gene is expressed selectively in the corpora allata of D. punctata, at the time of maximal JH production by the glands. We thus report the cloning and functional expression of a gene involved in an insect-specific step of juvenile hormone biosynthesis. Heterologously expressed CYP15A1 from D. punctata or its ortholog from economically important species may be useful in the design and screening of selective insect control agents. PMID:15024118

  8. Assessment of tissue integrity, ultrastructure and steroidogenic activity of corpora lutea of the marmoset monkey, Callithrix jacchus, following in vitro microdialysis.

    PubMed

    Fehrenbach, A; Einspanier, A; Nicksch, E; Hodges, J K

    1995-08-01

    Microdialysis of marmoset (Callithrix jacchus) corpora lutea in vitro was evaluated with regard to morphology, activity of 3 beta-hydroxysteroid-dehydrogenase (3 beta-HSD) and progesterone (P) secretion. Two different dialysis media were used: an unbuffered Ringer solution and Krebs-Henseleit buffer gassed with carbogen. Additionally, the effects of the luteotrophin prostaglandin E2 (PGE2) on P secretion were examined for both media. In general, 3 zones of tissue could be identified after dialysis according to the maintenance of cellular integrity. Structurally intact cells were found in close vicinity to the dialysis tubing or the bathing medium after 8 h of perfusion. These 2 zones were separated by a sheet of cells which showed signs of ischemic injury and whose activity of 3 beta-HSD was reduced. During dialysis with Ringer solution, P release stayed constantly high for a longer period of time than with Krebs buffer. With both media, PGE2 stimulated P release but could not prevent the decrease in P production during dialysis with Krebs buffer. In general, profiles of baseline secretion were more stable after treatment than for untreated corpora lutea. There, under dialysis with Ringer solution, the ultrastructure of cells close to the dialysis tubing was well preserved, exhibiting euchromatic nuclei, tubular sER and numerous mitochondria gathered in the perinuclear region. In contrast, with Krebs buffer, heterochromatization of nuclei and vesiculation of the smooth endoplasmic reticulum prevailed. After application of PGE2 no histological or ultrastructural differences could be found between tissue dialysed with Ringer or Krebs buffer. In these specimens the sER of zone A cells generally appeared vesiculated. Our results indicate (1) a close structure-function relationship of luteal cells in the tested system, (2) the suitability of the system to study intra-luteal regulation and (3) the necessity to control structural integrity of the dialysed tissue. PMID:7570579

  9. Pronunciation Lessons for Teachers of Classes of Adults of Mainly South East Asian Origin at Near-Beginning to Intermediate Levels of English as a Second Language.

    ERIC Educational Resources Information Center

    Eriksen, Tove Anne

    The lessons are intended for teenage and adult students. Focus is on placement of the tongue, jaw, lips, any movements involved, and whether the sound is whispered (voiceless) or spoken (voiced). Consonants are taught in pairs so students realize the distinctions necessary to avoid misunderstandings. Lessons include (1) final consonants, (2)…

  10. Child-centered collaborative conversations that maximize listening and spoken language development for children with hearing loss.

    PubMed

    Garber, Ashley S; Nevins, Mary Ellen

    2012-11-01

    In the period that begins with early intervention enrollment and ends with the termination of formal education, speech-language pathologists (SLPs) will have numerous opportunities to form professional relationships that can enhance any child's listening and spoken language accomplishments. SLPs who initiate and/or nurture these relationships are urged to place the needs of the child as the core value that drives decision making. Addressing this priority will allow for the collaborative conversations necessary to develop an effective intervention plan at any level. For the SLP, the purpose of these collaborative conversations will be twofold: identifying the functional communication needs of the child with hearing loss across settings and sharing practical strategies to encourage listening and spoken language skill development. Auditory first, wait time, sabotage, and thinking turns are offered as four techniques easily implemented by all service providers to support the child with hearing loss in all educational settings. PMID:23081786

  11. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language.

    PubMed

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2 (15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  12. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    PubMed Central

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H215O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. 
We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  13. Defining Spoken Language Benchmarks and Selecting Measures of Expressive Language Development for Young Children With Autism Spectrum Disorders

    PubMed Central

    Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul

    2010-01-01

    Purpose The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken language ability in the expressive modality and to set benchmarks for determining a child’s language level in order to establish a framework for comparing outcomes across intervention studies. Method The National Institute on Deafness and Other Communication Disorders assembled a group of researchers with interests and experience in the study of language development and disorders in young children with autism spectrum disorders. The group worked for 18 months through a series of conference calls and correspondence, culminating in a meeting held in December 2007 to achieve consensus on these aims. Results The authors recommend moving away from the term functional speech, replacing it with a developmental framework. Instead, they recommend using multiple sources of information to define language phases, including natural language samples, parent report, and standardized measures. They also provide guidelines and objective criteria for defining children’s spoken language expression in three major phases that correspond to developmental levels between 12 and 48 months of age. PMID:19380608

  14. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    PubMed

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries, natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system called SJM (system językowo-migowy) preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we use fMRI to compare the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. PMID:25858311

  15. Fronto-temporal connectivity is preserved during sung but not spoken word listening, across the autism spectrum.

    PubMed

    Sharda, Megha; Midha, Rashi; Malik, Supriya; Mukerji, Shaneel; Singh, Nandini C

    2015-04-01

    Co-occurrence of preserved musical function with language and socio-communicative impairments is a common but understudied feature of Autism Spectrum Disorders (ASD). Given the significant overlap in neural organization of these processes, investigating brain mechanisms underlying speech and music may not only help dissociate the nature of these auditory processes in ASD but also provide a neurobiological basis for development of interventions. Using a passive-listening functional magnetic resonance imaging paradigm with spoken words, sung words and piano tones, we found that 22 children with ASD, with varying levels of functioning, activated bilateral temporal brain networks during sung-word perception, similarly to an age- and gender-matched control group. In contrast, spoken-word perception was right-lateralized in ASD and elicited reduced inferior frontal gyrus (IFG) activity which varied as a function of language ability. Diffusion tensor imaging analysis revealed reduced integrity of the left hemisphere fronto-temporal tract in the ASD group and further showed that the hypoactivation in IFG was predicted by integrity of this tract. Subsequent psychophysiological interactions revealed that functional fronto-temporal connectivity, disrupted during spoken-word perception, was preserved during sung-word listening in ASD, suggesting alternate mechanisms of speech and music processing in ASD. Our results thus demonstrate the ability of song to overcome the structural deficit for speech across the autism spectrum and provide a mechanistic basis for efficacy of song-based interventions in ASD. PMID:25377165

  16. Deaf Children With Cochlear Implants Do Not Appear to Use Sentence Context to Help Recognize Spoken Words

    PubMed Central

    Conway, Christopher M.; Deocampo, Joanne A.; Walk, Anne M.; Anaya, Esperanza M.; Pisoni, David B.

    2015-01-01

    Purpose The authors investigated the ability of deaf children with cochlear implants (CIs) to use sentence context to facilitate the perception of spoken words. Method Deaf children with CIs (n = 24) and an age-matched group of children with normal hearing (n = 31) were presented with lexically controlled sentences and were asked to repeat each sentence in its entirety. Performance was analyzed at each of 3 word positions of each sentence (first, second, and third key word). Results Whereas the children with normal hearing showed robust effects of contextual facilitation—improved speech perception for the final words in a sentence—the deaf children with CIs on average showed no such facilitation. Regression analyses indicated that for the deaf children with CIs, Forward Digit Span scores significantly predicted accuracy scores for all 3 positions, whereas performance on the Stroop Color and Word Test, Children’s Version (Golden, Freshwater, & Golden, 2003) predicted how much contextual facilitation was observed at the final word. Conclusions The pattern of results suggests that some deaf children with CIs do not use sentence context to improve spoken word recognition. The inability to use sentence context may be due to possible interactions between language experience and cognitive factors that affect the ability to successfully integrate temporal–sequential information in spoken language. PMID:25029170

  17. Phoneme-free prosodic representations are involved in pre-lexical and lexical neurobiological mechanisms underlying spoken word processing

    PubMed Central

    Schild, Ulrike; Becker, Angelika B.C.; Friedrich, Claudia K.

    2014-01-01

    Recently we reported that spoken stressed and unstressed primes differently modulate Event Related Potentials (ERPs) of spoken initially stressed targets. ERP stress priming was independent of prime–target phoneme overlap. Here we test whether phoneme-free ERP stress priming involves the lexicon. We used German target words with the same onset phonemes but different onset stress, such as MANdel (“almond”) and manDAT (“mandate”; capital letters indicate stress). First syllables of those words served as primes. We orthogonally varied prime–target overlap in stress and phonemes. ERP stress priming interacted neither with phoneme priming nor with the stress pattern of the targets. However, polarity of ERP stress priming was reversed to that previously obtained. The present results are evidence for phoneme-free prosodic processing at the lexical level. Together with the previous results they reveal that phoneme-free prosodic representations at the pre-lexical and lexical level are recruited by neurobiological spoken word recognition. PMID:25128904

  18. Using Spoken Language Benchmarks to Characterize the Expressive Language Skills of Young Children With Autism Spectrum Disorders

    PubMed Central

    Weismer, Susan Ellis

    2015-01-01

    Purpose Spoken language benchmarks proposed by Tager-Flusberg et al. (2009) were used to characterize communication profiles of toddlers with autism spectrum disorders and to investigate if there were differences in variables hypothesized to influence language development at different benchmark levels. Method The communication abilities of a large sample of toddlers with autism spectrum disorders (N = 105) were characterized in terms of spoken language benchmarks. The toddlers were grouped according to these benchmarks to investigate whether there were differences in selected variables across benchmark groups at a mean age of 2.5 years. Results The majority of children in the sample presented with uneven communication profiles with relative strengths in phonology and significant weaknesses in pragmatics. When children were grouped according to one expressive language domain, across-group differences were observed in response to joint attention and gestures but not cognition or restricted and repetitive behaviors. Conclusion The spoken language benchmarks are useful for characterizing early communication profiles and investigating features that influence expressive language growth. PMID:26254475

  19. Theories of Spoken Word Recognition Deficits in Aphasia: Evidence from Eye-Tracking and Computational Modeling

    PubMed Central

    Mirman, Daniel; Yee, Eiling; Blumstein, Sheila E.; Magnuson, James S.

    2011-01-01

    We used eye tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., carrot – parrot) and cohort (e.g., beaker – beetle) competitors. Broca’s aphasic participants exhibited larger rhyme competition effects than age-matched controls. A reanalysis of previously reported data (Yee, Blumstein, & Sedivy, 2008) confirmed that Wernicke’s aphasic participants exhibited larger cohort competition effects. Individual-level analyses revealed a negative correlation between rhyme and cohort competition effect size across both groups of aphasic participants. Computational model simulations were performed to examine which of several accounts of lexical processing deficits in aphasia might account for the observed effects. Simulation results revealed that slower deactivation of lexical competitors could account for increased cohort competition in Wernicke’s aphasic participants; auditory perceptual impairment could account for increased rhyme competition in Broca's aphasic participants; and a perturbation of a parameter controlling selection among competing alternatives could account for both patterns, as well as the correlation between the effects. In light of these simulation results, we discuss theoretical accounts that have the potential to explain the dynamics of spoken word recognition in aphasia and the possible roles of anterior and posterior brain regions in lexical processing and cognitive control. PMID:21371743
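
    One way to make the "slower deactivation of lexical competitors" account concrete is a toy sketch with two word units and lateral inhibition. This is not the authors' simulation (their modeling used an established interactive-activation architecture for spoken word recognition); every unit, parameter, and value below is invented purely for illustration.

```python
import numpy as np

def compete(drive, decay, steps=60, inhibition=0.3, rate=0.1):
    """Toy interactive-activation dynamics for two word units.

    Each unit is excited by its bottom-up drive, decays toward zero,
    and laterally inhibits the other unit. All parameters illustrative.
    """
    act = np.zeros(2)
    trace = []
    for _ in range(steps):
        net = drive - decay * act - inhibition * act[::-1]
        act = np.clip(act + rate * net, 0.0, 1.0)
        trace.append(act.copy())
    return np.array(trace)

# Target (unit 0) gets stronger bottom-up support than its competitor.
drive = np.array([1.0, 0.6])
normal = compete(drive, decay=0.8)   # competitors deactivate quickly
slowed = compete(drive, decay=0.4)   # competitors linger, as hypothesized
# Slower decay lets the competitor reach higher peak activation,
# i.e., a larger simulated competition effect.
print(slowed[:, 1].max() > normal[:, 1].max())
```

    Analogous perturbations of other parameters (for example, noisier input for perceptual impairment, or a weaker selection mechanism) could be used in the same way to mimic the other deficit accounts the abstract evaluates.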

  20. Listening in circles. Spoken drama and the architects of sound, 1750-1830.

    PubMed

    Tkaczyk, Viktoria

    2014-07-01

    The establishment of the discipline of architectural acoustics is generally attributed to the physicist Wallace Clement Sabine, who developed the formula for reverberation time around 1900, and with it the possibility of making calculated prognoses about the acoustic potential of a particular design. If, however, we shift the perspective from the history of this discipline to the history of architectural knowledge and praxis, it becomes apparent that the topos of 'good sound' had already entered the discourse much earlier. This paper traces the Europe-wide discussion on theatre architecture between 1750 and 1830. It will be shown that the period of investigation is marked by an increasing interest in auditorium acoustics, one linked to the emergence of a bourgeois theatre culture and the growing socio-political importance of the spoken word. In the wake of this development the search among architects for new methods of acoustic research started to differ fundamentally from an analogical reasoning on the nature of sound propagation and reflection, which in part dated back to antiquity. Through their attempts to find new ways of visualising the behaviour of sound in enclosed spaces and to rethink both the materiality and the mediality of theatre auditoria, architects helped pave the way for the establishment of architectural acoustics as an academic discipline around 1900. PMID:24908794