Sample records for creating language recognition

  1. [Creating a language model of the forensic medicine domain for developing an autopsy recording system by automatic speech recognition].

    PubMed

    Niijima, H; Ito, N; Ogino, S; Takatori, T; Iwase, H; Kobayashi, M

    2000-11-01

    To put speech recognition technology to practical use for recording forensic autopsies, a language model specialized for forensic autopsy was developed. A 3-gram language model for forensic autopsy was created and combined with a Hidden Markov Model acoustic model for Japanese speech recognition to customize the speech recognition engine for this domain. A forensic vocabulary of over 10,000 words was compiled and some 300,000 sentence patterns were generated to build the forensic language model, which was then mixed in appropriate proportions with a general language model to attain high accuracy. When tested by dictating autopsy findings, the speech recognition system achieved a recognition rate of about 95%, which appears to reach practical usability as speech recognition software, though room remains for improving its hardware and application-layer software.
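    As a rough illustration of the modelling step described above, the sketch below builds toy trigram counts and linearly interpolates a domain model with a general model. The corpora, the interpolation weight `lam=0.8`, and all function names are hypothetical, not taken from the paper.

```python
from collections import Counter

def trigram_counts(sentences):
    """Count trigrams over whitespace-tokenized sentences with boundary markers."""
    tri = Counter()
    for s in sentences:
        toks = ["<s>", "<s>"] + s.split() + ["</s>"]
        for i in range(len(toks) - 2):
            tri[tuple(toks[i:i + 3])] += 1
    return tri

def bigram_totals(tri):
    """Sum trigram counts over each two-word context."""
    tot = Counter()
    for (w1, w2, _), c in tri.items():
        tot[(w1, w2)] += c
    return tot

def trigram_prob(tri, totals, w1, w2, w3):
    """Maximum-likelihood P(w3 | w1, w2); 0 if the context is unseen."""
    total = totals.get((w1, w2), 0)
    return tri[(w1, w2, w3)] / total if total else 0.0

def mixed_prob(dom, dom_t, gen, gen_t, ctx, lam=0.8):
    """Linear interpolation of domain and general trigram models."""
    w1, w2, w3 = ctx
    return (lam * trigram_prob(dom, dom_t, w1, w2, w3)
            + (1 - lam) * trigram_prob(gen, gen_t, w1, w2, w3))

# Toy corpora standing in for the forensic and general sentence sets.
domain = ["the liver shows congestion", "the liver shows laceration"]
general = ["the liver shows nothing"]
dom = trigram_counts(domain); dom_t = bigram_totals(dom)
gen = trigram_counts(general); gen_t = bigram_totals(gen)
p = mixed_prob(dom, dom_t, gen, gen_t, ("liver", "shows", "congestion"))
```

    In the domain corpus, "congestion" follows "liver shows" half the time, so the interpolated probability is 0.8 × 0.5 = 0.4.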

  2. Foreign Language Analysis and Recognition (FLARe) Initial Progress

    DTIC Science & Technology

    2012-11-29

    …University Language Modeling ToolKit; CoMMA: Count Mediated Morphological Analysis; CRUD: Create, Read, Update & Delete; CPAN: Comprehensive Perl Archive… Dates covered (from – to): 1 October 2010 – 30 September 2012. Title and subtitle: Foreign Language Analysis and Recognition (FLARe) Initial Progress… AFRL-RH-WP-TR-2012-0165, Foreign Language Analysis and Recognition (FLARe) Initial Progress, Brian M. Ore

  3. Evaluating Effects of Language Recognition on Language Rights and the Vitality of New Zealand Sign Language

    ERIC Educational Resources Information Center

    McKee, Rachel Locker; Manning, Victoria

    2015-01-01

    Status planning through legislation made New Zealand Sign Language (NZSL) an official language in 2006. But this strong symbolic action did not create resources or mechanisms to further the aims of the act. In this article we discuss the extent to which legal recognition and ensuing language-planning activities by state and community have affected…

  4. Whole Language in the Play Store.

    ERIC Educational Resources Information Center

    Fields, Marjorie V.; Hillstead, Deborah V.

    1990-01-01

    The concept of whole language instruction is explained by means of examples from a kindergarten unit on the grocery store. Activities include visiting the supermarket, making stone soup, and creating a grocery store. Activities teach reading, writing, oral language, phonics, and word recognition. (DG)

  5. Signs in Which Handshape and Hand Orientation Are either Not Visible or Are Only Partially Visible: What Is the Consequence for Lexical Recognition?

    ERIC Educational Resources Information Center

    ten Holt, G. A.; van Doorn, A. J.; de Ridder, H.; Reinders, M. J. T.; Hendriks, E. A.

    2009-01-01

    We present the results of an experiment on lexical recognition of human sign language signs in which the available perceptual information about handshape and hand orientation was manipulated. Stimuli were videos of signs from Sign Language of the Netherlands (SLN). The videos were processed to create four conditions: (1) one in which neither…

  6. One Hundred Years of Esperanto: A Survey.

    ERIC Educational Resources Information Center

    Tonkin, Humphrey

    1987-01-01

    The history of Esperanto, a language created to promote international communication, is chronicled. Discussion focuses on the origins and development of the language, early attitudes toward its adoption, patterns of use and recognition, the development of a literature in and of Esperanto, and the culture associated with the language. (Author/MSE)

  7. The Subsystem of Numerals in Catalan Sign Language: Description and Examples from a Psycholinguistic Study

    ERIC Educational Resources Information Center

    Fuentes, Mariana; Tolchinsky, Liliana

    2004-01-01

    Linguistic descriptions of sign languages are important to the recognition of their linguistic status. These languages are an essential part of the cultural heritage of the communities that create and use them and vital in the education of deaf children. They are also the reference point in language acquisition studies. Ours is exploratory…

  8. Improving language models for radiology speech recognition.

    PubMed

    Paulett, John M; Langlotz, Curtis P

    2009-02-01

    Speech recognition systems have become increasingly popular as a means to produce radiology reports, for reasons both of efficiency and of cost. However, the suboptimal recognition accuracy of these systems can affect the productivity of the radiologists creating the text reports. We analyzed a database of over two million de-identified radiology reports to determine the strongest determinants of word frequency. Our results showed that body site and imaging modality had a similar influence on the frequency of words and of three-word phrases as did the identity of the speaker. These findings suggest that the accuracy of speech recognition systems could be significantly enhanced by further tailoring their language models to body site and imaging modality, which are readily available at the time of report creation.
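    The tailoring the authors suggest can be pictured as conditioning word counts on (modality, body site). Below is a minimal sketch with invented toy reports; the report tuples and function names are assumptions for illustration, not the study's actual data model.

```python
from collections import Counter, defaultdict

# Toy stand-in for a de-identified report database: (modality, body_site, text).
reports = [
    ("CT", "chest", "no pulmonary embolism seen"),
    ("CT", "chest", "no pleural effusion seen"),
    ("MR", "brain", "no acute infarct seen"),
]

# Word frequencies conditioned on (modality, body site) -- the factors the
# study found to shape vocabulary about as strongly as speaker identity.
freq = defaultdict(Counter)
for modality, site, text in reports:
    freq[(modality, site)].update(text.split())

def top_words(modality, site, n=3):
    """Most frequent words for one (modality, body site) sub-model."""
    return [w for w, _ in freq[(modality, site)].most_common(n)]
```

    A speech recognizer could then weight its language model toward `freq[("CT", "chest")]` when the radiologist starts a chest CT report.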

  9. Inferring Speaker Affect in Spoken Natural Language Communication

    ERIC Educational Resources Information Center

    Pon-Barry, Heather Roberta

    2013-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…

  10. The software for automatic creation of the formal grammars used by speech recognition, computer vision, editable text conversion systems, and some new functions

    NASA Astrophysics Data System (ADS)

    Kardava, Irakli; Tadyszak, Krzysztof; Gulua, Nana; Jurga, Stefan

    2017-02-01

    To make environmental perception by artificial intelligence more flexible, supporting software modules are needed that can automate the creation of language-specific syntax and perform further analysis for relevant decisions based on semantic functions. Under our proposed approach, pairs of formal rules can be created for given sentences (in the case of natural languages) or statements (in the case of special languages) with the help of computer vision, speech recognition, or an editable text conversion system, for further automatic improvement. In other words, we have developed an approach that can significantly improve the automation of the training process of artificial intelligence, which as a result yields a higher level of self-developing skill that is independent of users. Based on this approach, we have developed a software demo version, which includes the algorithm and software code implementing all of the above-mentioned components (computer vision, speech recognition, and an editable text conversion system). The program can work in multi-stream mode and simultaneously create a syntax based on information received from several sources.

  11. Recognition of a person named entity from the text written in a natural language

    NASA Astrophysics Data System (ADS)

    Dolbin, A. V.; Rozaliev, V. L.; Orlova, Y. A.

    2017-01-01

    This work is devoted to the semantic analysis of texts written in a natural language. The main goal of the research was to compare latent Dirichlet allocation and latent semantic analysis for identifying elements of human appearance in text. The completeness of information retrieval was chosen as the efficiency criterion for comparing the methods. However, choosing only one method was insufficient for achieving high recognition rates, so additional methods were used for finding references to the personality in the text. All these methods are based on the created information model, which represents a person's appearance.
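    The efficiency criterion named above, completeness of information retrieval, is ordinary recall. A minimal sketch follows, with hypothetical appearance elements standing in for the two methods' outputs:

```python
def recall(retrieved, relevant):
    """Completeness of retrieval: fraction of relevant items that were found."""
    relevant = set(relevant)
    return len(relevant & set(retrieved)) / len(relevant) if relevant else 1.0

# Hypothetical appearance elements extracted from the same text by two methods.
gold = {"tall", "grey eyes", "beard", "scar"}
lda_hits = {"tall", "beard"}
lsa_hits = {"tall", "beard", "scar"}
better = "LSA" if recall(lsa_hits, gold) > recall(lda_hits, gold) else "LDA"
```

    With these invented sets, LSA recovers 3 of 4 gold elements (recall 0.75) against LDA's 0.5, so the comparison would favor LSA; the paper's actual figures are not reproduced here.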

  12. Advances in natural language processing.

    PubMed

    Hirschberg, Julia; Manning, Christopher D

    2015-07-17

    Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.

  13. A multilingual gold-standard corpus for biomedical concept recognition: the Mantra GSC

    PubMed Central

    Clematide, Simon; Akhondi, Saber A; van Mulligen, Erik M; Rebholz-Schuhmann, Dietrich

    2015-01-01

    Objective To create a multilingual gold-standard corpus for biomedical concept recognition. Materials and methods We selected text units from different parallel corpora (Medline abstract titles, drug labels, biomedical patent claims) in English, French, German, Spanish, and Dutch. Three annotators per language independently annotated the biomedical concepts, based on a subset of the Unified Medical Language System and covering a wide range of semantic groups. To reduce the annotation workload, automatically generated preannotations were provided. Individual annotations were automatically harmonized and then adjudicated, and cross-language consistency checks were carried out to arrive at the final annotations. Results The number of final annotations was 5530. Inter-annotator agreement scores indicate good agreement (median F-score 0.79), and are similar to those between individual annotators and the gold standard. The automatically generated harmonized annotation set for each language performed equally well as the best annotator for that language. Discussion The use of automatic preannotations, harmonized annotations, and parallel corpora helped to keep the manual annotation efforts manageable. The inter-annotator agreement scores provide a reference standard for gauging the performance of automatic annotation techniques. Conclusion To our knowledge, this is the first gold-standard corpus for biomedical concept recognition in languages other than English. Other distinguishing features are the wide variety of semantic groups that are being covered, and the diversity of text genres that were annotated. PMID:25948699
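    The inter-annotator agreement reported above (median F-score) can be computed from pairwise F-scores over annotation sets, as sketched below; the toy (offset, concept-ID) annotations and three-annotator setup are illustrative assumptions, not the Mantra GSC data.

```python
from statistics import median
from itertools import combinations

def f_score(a, b):
    """F1 between two annotation sets; symmetric, since F1 = 2*tp / (|a| + |b|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    tp = len(a & b)
    prec = tp / len(b) if b else 0.0
    rec = tp / len(a) if a else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Hypothetical concept annotations (offset, concept ID) from three annotators.
annos = {
    "A1": {(0, "C001"), (5, "C002"), (9, "C003")},
    "A2": {(0, "C001"), (5, "C002")},
    "A3": {(0, "C001"), (5, "C004"), (9, "C003")},
}
pairwise = [f_score(annos[x], annos[y]) for x, y in combinations(sorted(annos), 2)]
agreement = median(pairwise)
```

    For these invented annotators the pairwise scores are 0.8, 0.667, and 0.4, giving a median of 0.667; the corpus's reported median of 0.79 comes from its real annotation sets.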

  14. The Legal Recognition of Sign Languages

    ERIC Educational Resources Information Center

    De Meulder, Maartje

    2015-01-01

    This article provides an analytical overview of the different types of explicit legal recognition of sign languages. Five categories are distinguished: constitutional recognition, recognition by means of general language legislation, recognition by means of a sign language law or act, recognition by means of a sign language law or act including…

  15. A multilingual gold-standard corpus for biomedical concept recognition: the Mantra GSC.

    PubMed

    Kors, Jan A; Clematide, Simon; Akhondi, Saber A; van Mulligen, Erik M; Rebholz-Schuhmann, Dietrich

    2015-09-01

    To create a multilingual gold-standard corpus for biomedical concept recognition. We selected text units from different parallel corpora (Medline abstract titles, drug labels, biomedical patent claims) in English, French, German, Spanish, and Dutch. Three annotators per language independently annotated the biomedical concepts, based on a subset of the Unified Medical Language System and covering a wide range of semantic groups. To reduce the annotation workload, automatically generated preannotations were provided. Individual annotations were automatically harmonized and then adjudicated, and cross-language consistency checks were carried out to arrive at the final annotations. The number of final annotations was 5530. Inter-annotator agreement scores indicate good agreement (median F-score 0.79), and are similar to those between individual annotators and the gold standard. The automatically generated harmonized annotation set for each language performed equally well as the best annotator for that language. The use of automatic preannotations, harmonized annotations, and parallel corpora helped to keep the manual annotation efforts manageable. The inter-annotator agreement scores provide a reference standard for gauging the performance of automatic annotation techniques. To our knowledge, this is the first gold-standard corpus for biomedical concept recognition in languages other than English. Other distinguishing features are the wide variety of semantic groups that are being covered, and the diversity of text genres that were annotated. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  16. Scenario-Based Spoken Interaction with Virtual Agents

    ERIC Educational Resources Information Center

    Morton, Hazel; Jack, Mervyn A.

    2005-01-01

    This paper describes a CALL approach which integrates software for speaker independent continuous speech recognition with embodied virtual agents and virtual worlds to create an immersive environment in which learners can converse in the target language in contextualised scenarios. The result is a self-access learning package: SPELL (Spoken…

  17. The Interplay between Attention, Experience and Skills in Online Language Teaching

    ERIC Educational Resources Information Center

    Shi, Lijing; Stickler, Ursula; Lloyd, Mair E.

    2017-01-01

    The demand for online teaching is growing as is the recognition that online teachers require highly sophisticated skills to manage classrooms and create an environment conducive to learning. However, there is little rigorous empirical research investigating teachers' thoughts and actions during online tutorials. Taking a sociocultural perspective,…

  18. Bilingual Language Switching: Production vs. Recognition

    PubMed Central

    Mosca, Michela; de Bot, Kees

    2017-01-01

    This study aims at assessing how bilinguals select words in the appropriate language in production and recognition while minimizing interference from the non-appropriate language. Two prominent models are considered, both of which assume that when one language is in use, the other is suppressed. The Inhibitory Control (IC) model suggests that, in both production and recognition, the amount of inhibition on the non-target language is greater for the stronger than for the weaker language. In contrast, the Bilingual Interactive Activation (BIA) model proposes that, in language recognition, the amount of inhibition on the weaker language is stronger than that on the stronger language. To investigate whether bilingual language production and recognition can be accounted for by a single model of bilingual processing, we tested a group of native speakers of Dutch (L1) who were advanced speakers of English (L2) in a bilingual recognition and production task. Specifically, language switching costs were measured while participants performed a lexical decision (recognition) task and a picture naming (production) task involving language switching. Results suggest that while in language recognition the amount of inhibition applied to the non-appropriate language increases with its dominance, as predicted by the IC model, in production the amount of inhibition applied to the non-relevant language is not related to language dominance but may instead be modulated by speakers' unconscious strategies to foster the weaker language. This difference indicates that bilingual language recognition and production might rely on different processing mechanisms and cannot be accounted for within one of the existing models of bilingual language processing. PMID:28638361

  19. Bilingual Language Switching: Production vs. Recognition.

    PubMed

    Mosca, Michela; de Bot, Kees

    2017-01-01

    This study aims at assessing how bilinguals select words in the appropriate language in production and recognition while minimizing interference from the non-appropriate language. Two prominent models are considered, both of which assume that when one language is in use, the other is suppressed. The Inhibitory Control (IC) model suggests that, in both production and recognition, the amount of inhibition on the non-target language is greater for the stronger than for the weaker language. In contrast, the Bilingual Interactive Activation (BIA) model proposes that, in language recognition, the amount of inhibition on the weaker language is stronger than that on the stronger language. To investigate whether bilingual language production and recognition can be accounted for by a single model of bilingual processing, we tested a group of native speakers of Dutch (L1) who were advanced speakers of English (L2) in a bilingual recognition and production task. Specifically, language switching costs were measured while participants performed a lexical decision (recognition) task and a picture naming (production) task involving language switching. Results suggest that while in language recognition the amount of inhibition applied to the non-appropriate language increases with its dominance, as predicted by the IC model, in production the amount of inhibition applied to the non-relevant language is not related to language dominance but may instead be modulated by speakers' unconscious strategies to foster the weaker language. This difference indicates that bilingual language recognition and production might rely on different processing mechanisms and cannot be accounted for within one of the existing models of bilingual language processing.

  20. The Heinz Electronic Library Interactive On-line System (HELIOS): An Update.

    ERIC Educational Resources Information Center

    Galloway, Edward A.; Michalek, Gabrielle V.

    1998-01-01

    Describes a project at Carnegie Mellon University libraries to convert the congressional papers of the late Senator John Heinz to digital format and to create an online system to search and retrieve these papers. Highlights include scanning, optical character recognition, and a search engine utilizing natural language processing. (Author/LRW)

  1. Brief Report: Impaired Differentiation of Vegetative/Affective and Intentional Nonverbal Vocalizations in a Subject with Asperger Syndrome (AS)

    ERIC Educational Resources Information Center

    Dietrich, Susanne; Hertrich, Ingo; Riedel, Andreas; Ackermann, Hermann

    2012-01-01

    Asperger syndrome (AS) involves impaired recognition of other people's mental states. Since language-based diagnostic procedures may be confounded by cognitive-linguistic compensation strategies, nonverbal test materials were created, including human affective and vegetative sounds. Depending on video context, each sound could be interpreted…

  2. Named Entity Recognition in a Hungarian NL Based QA System

    NASA Astrophysics Data System (ADS)

    Tikkl, Domonkos; Szidarovszky, P. Ferenc; Kardkovacs, Zsolt T.; Magyar, Gábor

    In the WoW project, our purpose is to create a complex search interface with the following features: search in the deep-web content of contracted partners' databases; processing of Hungarian natural language (NL) questions and their transformation into SQL queries for database access; and image search supported by a visual thesaurus that describes the visual content of images in a structural form (also in Hungarian). This paper focuses primarily on a particular problem of the question processing task: entity recognition. Before going into details, we give a short overview of the project's aims.

  3. The MITLL NIST LRE 2015 Language Recognition System

    DTIC Science & Technology

    2016-05-06

    The MITLL NIST LRE 2015 Language Recognition System. Pedro Torres-Carrasquillo, Najim Dehak, Elizabeth Godoy, Douglas Reynolds, Fred Richardson… most recent MIT Lincoln Laboratory language recognition system developed for the NIST 2015 Language Recognition Evaluation (LRE). The submission… The National Institute of Standards and Technology (NIST) has conducted formal evaluations of language detection algorithms since 1994. In…

  4. The MITLL NIST LRE 2015 Language Recognition system

    DTIC Science & Technology

    2016-02-05

    The MITLL NIST LRE 2015 Language Recognition System. Pedro Torres-Carrasquillo, Najim Dehak, Elizabeth Godoy, Douglas Reynolds, Fred Richardson… recent MIT Lincoln Laboratory language recognition system developed for the NIST 2015 Language Recognition Evaluation (LRE). The submission features a… The National Institute of Standards and Technology (NIST) has conducted formal evaluations of language detection algorithms since 1994. In previous…

  5. The Development of Word Recognition in a Second Language.

    ERIC Educational Resources Information Center

    Muljani, D.; Koda, Keiko; Moates, Danny R.

    1998-01-01

    A study investigated differences in English word recognition between native speakers of Indonesian (an alphabetic language) and Chinese (a logographic language) learning English as a Second Language. Results largely confirmed the hypothesis that an alphabetic first language would predict better word recognition in speakers of an alphabetic language,…

  6. Creating a Culture for the Preparation of an ACTFL/NCATE Program Review

    ERIC Educational Resources Information Center

    McAlpine, Dave; Dhonau, Stephanie

    2007-01-01

    This article examines what one university has done to prepare for its program review for recognition by the National Council for Accreditation of Teacher Education (NCATE) and the American Council on the Teaching of Foreign Languages (ACTFL), a Specialized Professional Association (SPA) of NCATE. The history of the standards movement within higher…

  7. The development of cross-cultural recognition of vocal emotion during childhood and adolescence.

    PubMed

    Chronaki, Georgia; Wigelsworth, Michael; Pell, Marc D; Kotz, Sonja A

    2018-06-14

    Humans have an innate set of emotions recognised universally. However, emotion recognition also depends on socio-cultural rules. Although adults recognise vocal emotions universally, they identify emotions more accurately in their native language. We examined developmental trajectories of universal vocal emotion recognition in children. Eighty native English speakers completed a vocal emotion recognition task in their native language (English) and in foreign languages (Spanish, Chinese, and Arabic) expressing anger, happiness, sadness, fear, and neutrality. Emotion recognition was compared across 8-to-10-year-olds, 11-to-13-year-olds, and adults. Measures of behavioural and emotional problems were also taken. Results showed that although emotion recognition was above chance for all languages, native English-speaking children were more accurate in recognising vocal emotions in their native language. There was a larger improvement in recognising vocal emotion from the native language during adolescence. Vocal anger recognition did not improve with age for the non-native languages. This is the first study to demonstrate the universality of vocal emotion recognition in children while also supporting an "in-group advantage" for more accurate recognition in the native language. The findings highlight the role of experience in emotion recognition, have implications for child development in modern multicultural societies, and address important theoretical questions about the nature of emotions.

  8. Use of Co-occurrences for Temporal Expressions Annotation

    NASA Astrophysics Data System (ADS)

    Craveiro, Olga; Macedo, Joaquim; Madeira, Henrique

    The annotation or extraction of temporal information from text documents is becoming increasingly important in many natural language processing applications such as text summarization, information retrieval, and question answering. This paper presents an original method for easy recognition of temporal expressions in text documents. The method creates semantically classified temporal patterns, using word co-occurrences obtained from training corpora and a pre-defined set of seed keywords derived from the temporal references of the language in use. Participation in a Portuguese named-entity evaluation contest showed promising effectiveness and efficiency results. This approach can be adapted to recognize other types of expressions or languages, within other contexts, by defining suitable word sets and training corpora.
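    A minimal sketch of the general idea, seed keywords plus co-occurrence-derived patterns, is given below. The seed set, training corpus, and pattern shape (an optional modifier word immediately preceding a seed) are simplifying assumptions, not the paper's actual method.

```python
from collections import Counter
import re

SEEDS = {"yesterday", "today", "january", "monday"}  # assumed seed keyword set

def cooccurring_modifiers(corpus, seeds, min_count=1):
    """Collect words that immediately precede a seed keyword in training text."""
    pairs = Counter()
    for sentence in corpus:
        toks = re.findall(r"\w+", sentence.lower())
        for prev, cur in zip(toks, toks[1:]):
            if cur in seeds:
                pairs[prev] += 1
    return {w for w, c in pairs.items() if c >= min_count}

def tag_temporal(sentence, seeds, modifiers):
    """Mark 'modifier? seed' spans as temporal expressions."""
    toks = re.findall(r"\w+", sentence.lower())
    spans = []
    for i, tok in enumerate(toks):
        if tok in seeds:
            start = i - 1 if i and toks[i - 1] in modifiers else i
            spans.append(" ".join(toks[start:i + 1]))
    return spans

train = ["He arrived early january.", "Meetings every monday morning."]
mods = cooccurring_modifiers(train, SEEDS)
tags = tag_temporal("She left early january and returned monday.", SEEDS, mods)
```

    On the toy input this tags "early january" (seed plus learned modifier) and the bare seed "monday".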

  9. Recognition of Emotions in Mexican Spanish Speech: An Approach Based on Acoustic Modelling of Emotion-Specific Vowels

    PubMed Central

    Caballero-Morales, Santiago-Omar

    2013-01-01

    An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists of phoneme-level acoustic modelling of emotion-specific vowels. A standard phoneme-based Automatic Speech Recognition (ASR) system was built with Hidden Markov Models (HMMs), in which separate phoneme HMMs were built for the consonants and for emotion-specific vowels associated with four emotional states (anger, happiness, neutral, sadness). The emotional state of a spoken sentence is then estimated by counting the number of emotion-specific vowels found in the ASR's output for the sentence. With this approach, an accuracy of 87–100% was achieved for recognizing the emotional state of Mexican Spanish speech. PMID:23935410
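    The decision rule, counting emotion-specific vowels in the ASR output, can be sketched as follows; the `a_anger`-style tag format and the toy phoneme sequence are invented for illustration.

```python
from collections import Counter

def estimate_emotion(asr_phonemes):
    """Pick the emotion whose tagged vowels appear most often in the ASR output."""
    votes = Counter()
    for ph in asr_phonemes:
        if "_" in ph:  # emotion-specific vowel, e.g. an /a/ modelled from angry speech
            votes[ph.split("_", 1)[1]] += 1
    return votes.most_common(1)[0][0] if votes else "neutral"

# Hypothetical decoded phoneme string for one sentence.
decoded = ["k", "a_anger", "s", "a_anger", "e_happiness", "t"]
emotion = estimate_emotion(decoded)
```

    Here two anger-tagged vowels outvote one happiness-tagged vowel, so the sentence is labelled "anger"; consonants carry no emotion information, matching the paper's design.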

  10. Complex Event Recognition Architecture

    NASA Technical Reports Server (NTRS)

    Fitzgerald, William A.; Firby, R. James

    2009-01-01

    Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
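    The pattern language's conjunction, disjunction, and negation over point events can be sketched as a small recursive matcher; the tuple-based pattern syntax below is an invented stand-in for CERA's actual declarative notation.

```python
def matches(pattern, events):
    """Evaluate a pattern: either an event name, or ("and"|"or"|"not", subpatterns...)."""
    if isinstance(pattern, str):
        return pattern in events
    op, *subs = pattern
    if op == "and":
        return all(matches(s, events) for s in subs)
    if op == "or":
        return any(matches(s, events) for s in subs)
    if op == "not":
        return not matches(subs[0], events)
    raise ValueError(f"unknown operator: {op}")

# Point events observed in the current window, merged from multiple input streams.
window = {"valve_open", "pressure_high"}

# Hypothetical anomaly pattern: high pressure with no relief signal seen.
anomaly = ("and", "pressure_high", ("not", "relief_signal"))
alarm = matches(anomaly, window)
```

    Because patterns nest recursively, richer combinations are built from the same three operators; a real system would additionally attach temporal constraints to each sub-pattern.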

  11. Lexical association and false memory for words in two cultures.

    PubMed

    Lee, Yuh-shiow; Chiang, Wen-Chi; Hung, Hsu-Ching

    2008-01-01

    This study examined the relationship between language experience and false memory produced by the DRM paradigm. The word lists used in Stadler et al. (Memory & Cognition, 27, 494-500, 1999) were first translated into Chinese. False recall and false recognition for critical non-presented targets were then tested on a group of Chinese speakers. The average co-occurrence rate of each list word with the critical word was calculated from two large Chinese corpora. List-level analyses revealed that the correlation between the American and Taiwanese participants was significant only for false recognition. More importantly, the co-occurrence rate was significantly correlated with false recall and false recognition for Taiwanese participants, but not for American participants. In addition, backward association strength based on Nelson et al. (The University of South Florida word association, rhyme and word fragment norms, 1999) was significantly correlated with false recall for American participants but not for Taiwanese participants. Results are discussed in terms of the role of language experience and lexical association in creating false memory for word lists.
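    The list-level analysis described above amounts to computing a correlation between per-list co-occurrence rates and false-memory rates. A sketch with Pearson's r written out from its definition follows; the numeric values are invented toy data, not the study's results.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly from the definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-list values: corpus co-occurrence rate of list words with
# the critical word, and the false-recall rate observed for that list.
cooccurrence = [0.02, 0.05, 0.08, 0.11]
false_recall = [0.10, 0.22, 0.31, 0.45]
r = pearson(cooccurrence, false_recall)
```

    A strongly positive r on such data would mirror the paper's finding for Taiwanese participants, for whom corpus co-occurrence tracked false memory.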

  12. Kanji Recognition by Second Language Learners: Exploring Effects of First Language Writing Systems and Second Language Exposure

    ERIC Educational Resources Information Center

    Matsumoto, Kazumi

    2013-01-01

    This study investigated whether learners of Japanese with different first language (L1) writing systems use different recognition strategies and whether second language (L2) exposure affects L2 kanji recognition. The study used a computerized lexical judgment task with 3 types of kanji characters to investigate these questions: (a)…

  13. Multi-Lingual Deep Neural Networks for Language Recognition

    DTIC Science & Technology

    2016-08-08

    …training configurations for the NIST 2011 and 2015 language recognition evaluations (LRE11 and LRE15). The best-performing multi-lingual BN-DNN… very effective approach in the NIST 2015 language recognition evaluation (LRE15) open training condition [4, 5]. In this work we evaluate the impact… language are summarized in Table 2. Two language recognition tasks are used for evaluating the multi-lingual bottleneck systems. The first is the NIST…

  14. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  15. An analysis of the influence of deep neural network (DNN) topology in bottleneck feature based language recognition.

    PubMed

    Lozano-Diez, Alicia; Zazo, Ruben; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin

    2017-01-01

    Language recognition systems based on bottleneck features have recently become the state of the art in this research field, as shown by their success in the last Language Recognition Evaluation (LRE 2015) organized by NIST (U.S. National Institute of Standards and Technology). This type of system is based on a deep neural network (DNN) trained to discriminate between phonetic units, i.e. trained for the task of automatic speech recognition (ASR). The DNN compresses information in one of its layers, known as the bottleneck (BN) layer, which is used to obtain a new frame-level representation of the audio signal. This representation has proven useful for the task of language identification (LID). Thus, bottleneck features are used as input to the language recognition system, instead of a classical parameterization of the signal based on cepstral feature vectors such as MFCCs (Mel Frequency Cepstral Coefficients). Despite the success of this approach in language recognition, there is a lack of studies systematically analyzing how the topology of the DNN influences the performance of bottleneck feature-based language recognition systems. In this work, we try to fill in this gap, analyzing language recognition results with different topologies for the DNN used to extract the bottleneck features, comparing them with each other and against a reference system based on a more classical cepstral representation of the input signal with a total variability model. In this way, we obtain useful knowledge about how the DNN configuration influences the performance of bottleneck feature-based language recognition systems.
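    The bottleneck idea can be pictured as a feed-forward network whose middle layer is much narrower than the rest; after training on a phonetic task, that layer's activations (not the final outputs) become the frame features for the language recognizer. The layer dimensions, random initialization, and tanh nonlinearity below are arbitrary illustrative choices, and no training is shown.

```python
import math
import random

random.seed(0)

def layer(n_in, n_out):
    """A weight matrix with small random values (one row per output unit)."""
    return [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layers, x):
    """Run x through each layer with tanh; return every layer's activations."""
    acts = []
    for weights in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in weights]
        acts.append(x)
    return acts

dims = [20, 64, 3, 64, 40]  # narrow 3-unit bottleneck in the middle
net = [layer(a, b) for a, b in zip(dims, dims[1:])]
frame = [random.gauss(0.0, 1.0) for _ in range(20)]
bottleneck_features = forward(net, frame)[1]  # activations of the 3-unit layer
```

    Varying `dims` (depth, width, bottleneck position) is exactly the kind of topology change whose effect on downstream language recognition the paper studies.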

  16. Knowledge of a Second Language Influences Auditory Word Recognition in the Native Language

    ERIC Educational Resources Information Center

    Lagrou, Evelyne; Hartsuiker, Robert J.; Duyck, Wouter

    2011-01-01

    Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether…

  17. Voice Recognition Software Accuracy with Second Language Speakers of English.

    ERIC Educational Resources Information Center

    Coniam, D.

    1999-01-01

    Explores the potential of the use of voice-recognition technology with second-language speakers of English. Involves the analysis of the output produced by a small group of very competent second-language subjects reading a text into the voice recognition software Dragon Systems "Dragon NaturallySpeaking." (Author/VWL)

  18. Indonesian Sign Language Number Recognition using SIFT Algorithm

    NASA Astrophysics Data System (ADS)

    Mahfudi, Isa; Sarosa, Moechammad; Andrie Asmara, Rosa; Azrino Gustalika, M.

    2018-04-01

    Indonesian Sign Language (ISL) is widely used by deaf individuals in Indonesia for everyday communication. It serves as their primary language and consists of two types of action: signs and finger spelling. However, because most hearing people do not understand sign language, communication between the two groups is difficult, and this barrier contributes to the social isolation of deaf people. A solution is needed that helps them interact with hearing people. Much research has offered a variety of image-processing methods for sign language recognition. The SIFT (Scale Invariant Feature Transform) algorithm is one method that can be used to identify an object, and it is claimed to be very robust to scaling, rotation, illumination and noise. Using the SIFT algorithm for Indonesian sign language number recognition yields a recognition rate of 82% on a dataset of 100 images (50 samples for training and 50 for testing). Changing the threshold value affects the recognition result; the best threshold is 0.45, with a recognition rate of 94%.
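    The threshold discussed above is consistent with the nearest/second-nearest distance-ratio test commonly used to accept SIFT matches. The sketch below applies that ratio test to toy 4-dimensional descriptors; real SIFT descriptors are 128-dimensional, and the exact role of the 0.45 threshold in the paper's pipeline is an assumption here:

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(query, reference, ratio=0.45):
    """Accept a match only if the nearest reference descriptor is much closer
    than the second-nearest (Lowe-style ratio test)."""
    matches = []
    for qi, q in enumerate(query):
        ranked = sorted((dist(q, r), ri) for ri, r in enumerate(reference))
        best, second = ranked[0], ranked[1]
        if second[0] > 0 and best[0] / second[0] < ratio:
            matches.append((qi, best[1]))
    return matches

# Toy 4-dim "descriptors": the first query is a clean match, the second is ambiguous.
ref = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
qry = [[0.95, 0.02, 0, 0], [0.5, 0.5, 0, 0]]
m = match_descriptors(qry, ref)   # only the unambiguous descriptor survives
```

    Raising the ratio threshold admits more (but less reliable) matches, which is why the reported recognition rate varies with it.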

  19. The Role of Morphology in Word Recognition of Hebrew as a Templatic Language

    ERIC Educational Resources Information Center

    Oganyan, Marina

    2017-01-01

    Research on recognition of complex words has primarily focused on affixational complexity in concatenative languages. This dissertation investigates both templatic and affixational complexity in Hebrew, a templatic language, with particular focus on the role of the root and template morphemes in recognition. It also explores the role of morphology…

  20. Bilingual Word Recognition in Deaf and Hearing Signers: Effects of Proficiency and Language Dominance on Cross-Language Activation

    ERIC Educational Resources Information Center

    Morford, Jill P.; Kroll, Judith F.; Piñar, Pilar; Wilkinson, Erin

    2014-01-01

    Recent evidence demonstrates that American Sign Language (ASL) signs are active during print word recognition in deaf bilinguals who are highly proficient in both ASL and English. In the present study, we investigate whether signs are active during print word recognition in two groups of unbalanced bilinguals: deaf ASL-dominant and hearing…

  1. Individual differences in language and working memory affect children's speech recognition in noise.

    PubMed

    McCreery, Ryan W; Spratford, Meredith; Kirby, Benjamin; Brennan, Marc

    2017-05-01

    We examined how cognitive and linguistic skills affect speech recognition in noise for children with normal hearing. Children with better working memory and language abilities were expected to have better speech recognition in noise than peers with poorer skills in these domains. As part of a prospective, cross-sectional study, children with normal hearing completed speech recognition in noise for three types of stimuli: (1) monosyllabic words, (2) syntactically correct but semantically anomalous sentences and (3) semantically and syntactically anomalous word sequences. Measures of vocabulary, syntax and working memory were used to predict individual differences in speech recognition in noise. Participants were ninety-six children with normal hearing between 5 and 12 years of age. Higher working memory was associated with better speech recognition in noise for all three stimulus types. Higher vocabulary abilities were associated with better recognition in noise for sentences and word sequences, but not for words. Working memory and language both influence children's speech recognition in noise, but the relationships vary across types of stimuli. These findings suggest that clinical assessment of speech recognition is likely to reflect underlying cognitive and linguistic abilities, in addition to a child's auditory skills, consistent with the Ease of Language Understanding model.

  2. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    ERIC Educational Resources Information Center

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ listened to gated words that varied…

  3. Individual differences in online spoken word recognition: Implications for SLI

    PubMed Central

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2012-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels; vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014

  4. Construction of language models for a handwritten mail reading system

    NASA Astrophysics Data System (ADS)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2012-01-01

    This paper presents a system for the recognition of unconstrained handwritten mail. The main part of this system is an HMM recognizer that uses trigraphs to model contextual information. This recognition system does not require any segmentation into words or characters and works directly at the line level. To take linguistic information into account and enhance performance, a language model is introduced. This language model is based on bigrams and is built from training document transcriptions only. Experiments with various vocabulary sizes and language models have been conducted. Word Error Rate and perplexity values are compared to show the benefit of specific language models suited to the handwritten mail recognition task.
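    A bigram language model of the kind described can be sketched in a few lines: estimate bigram probabilities from training transcriptions with add-one smoothing, then compare hypotheses by perplexity. The toy corpus and vocabulary are invented for illustration:

```python
import math
from collections import Counter

def train_bigram(sentences, vocab):
    """Add-one smoothed bigram model estimated from training transcriptions only."""
    uni, bi = Counter(), Counter()
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        uni.update(toks[:-1])
        bi.update(zip(toks, toks[1:]))
    V = len(vocab) + 2  # vocabulary plus sentence markers
    def prob(w1, w2):
        return (bi[(w1, w2)] + 1) / (uni[w1] + V)
    return prob

def perplexity(prob, sentence):
    toks = ["<s>"] + sentence.split() + ["</s>"]
    logp = sum(math.log(prob(a, b)) for a, b in zip(toks, toks[1:]))
    return math.exp(-logp / (len(toks) - 1))

train = ["dear sir", "dear madam", "yours sincerely"]
vocab = {"dear", "sir", "madam", "yours", "sincerely"}
prob = train_bigram(train, vocab)
pp_seen = perplexity(prob, "dear sir")          # in-domain word sequence
pp_unseen = perplexity(prob, "sincerely madam")  # unlikely sequence scores worse
```

    Lower perplexity on in-domain text is exactly why a mail-specific model helps the recognizer rank word hypotheses.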

  5. Experimental study on GMM-based speaker recognition

    NASA Astrophysics Data System (ADS)

    Ye, Wenxing; Wu, Dapeng; Nucci, Antonio

    2010-04-01

    Speaker recognition plays a very important role in the field of biometric security. In order to improve recognition performance, many pattern recognition techniques have been explored in the literature. Among these techniques, the Gaussian Mixture Model (GMM) has proved to be an effective statistical model for speaker recognition and is used in most state-of-the-art speaker recognition systems. The GMM represents the 'voice print' of a speaker by modeling the spectral characteristics of the speaker's speech signals. In this paper, we implement a speaker recognition system which consists of preprocessing, Mel-Frequency Cepstrum Coefficient (MFCC) based feature extraction, and GMM based classification. We test our system with the TIDIGITS data set (325 speakers) and our own recordings of more than 200 speakers; our system achieves a 100% correct recognition rate. Moreover, we also test our system under the scenario in which training samples are from one language but test samples are from a different language; our system again achieves a 100% correct recognition rate, which indicates that our system is language independent.
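    GMM-based identification can be sketched minimally: each speaker's "voice print" is a mixture of Gaussians, test frames are scored under every model, and the highest total log-likelihood wins. The two-component, diagonal-covariance models and 2-dim features below are invented stand-ins (a real system would fit mixtures of many components to MFCC frames with EM):

```python
import math
import random

def log_gauss(x, mean, var):
    """Log density of a diagonal-covariance Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def gmm_loglik(frame, gmm):
    # gmm: list of (weight, mean, var) diagonal components
    return math.log(sum(w * math.exp(log_gauss(frame, m, v)) for w, m, v in gmm))

def identify(frames, models):
    scores = {spk: sum(gmm_loglik(f, g) for f in frames) for spk, g in models.items()}
    return max(scores, key=scores.get)

# Hypothetical 2-component "voice prints" over 2-dim features.
models = {
    "alice": [(0.5, [0.0, 0.0], [1.0, 1.0]), (0.5, [1.0, 1.0], [1.0, 1.0])],
    "bob":   [(0.5, [5.0, 5.0], [1.0, 1.0]), (0.5, [6.0, 6.0], [1.0, 1.0])],
}
random.seed(1)
test_frames = [[random.gauss(5.5, 0.5), random.gauss(5.5, 0.5)] for _ in range(20)]
speaker = identify(test_frames, models)   # frames were drawn near bob's components
```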

  6. Character Recognition Using Genetically Trained Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diniz, C.; Stantz, K.M.; Trahan, M.W.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphical user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden-layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases used when genetically training the net. Finally, noise significantly degrades character recognition efficiency, some of which can be overcome by adding noise during training and optimizing the form of the network's activation function.
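    As a hedged illustration of evolutionarily training network weights, the sketch below uses a minimal mutate-and-select loop (a (1+1)-style evolutionary search rather than the full population-based GA described) to find weights separating two invented 8 x 8 bitmap characters:

```python
import random

random.seed(42)

# Two invented 8x8 bitmap "characters" flattened to 64 inputs:
# one fills the left half of the grid, the other the right half.
char_A = [1 if (i % 8) < 4 else 0 for i in range(64)]
char_B = [1 if (i % 8) >= 4 else 0 for i in range(64)]
data = [(char_A, 1), (char_B, -1)]        # desired output signs

def output(weights, x):
    return sum(w * xi for w, xi in zip(weights, x))

def fitness(weights):
    # Margin-based fitness: reward outputs with the correct sign and large magnitude.
    return sum(t * output(weights, x) for x, t in data)

def evolve(generations=500, sigma=0.1):
    best = [random.uniform(-0.5, 0.5) for _ in range(64)]
    for _ in range(generations):
        child = [w + random.gauss(0, sigma) for w in best]
        if fitness(child) > fitness(best):   # keep the mutant only if it scores higher
            best = child
    return best

w = evolve()
preds = [1 if output(w, x) > 0 else -1 for x, _ in data]
```

    A full GA would maintain a population with crossover and selection pressure; the single-parent loop here is just the smallest search that conveys the fitness-driven weight optimization.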

  7. Handwritten recognition of Tamil vowels using deep learning

    NASA Astrophysics Data System (ADS)

    Ram Prashanth, N.; Siddarth, B.; Ganesh, Anirudh; Naveen Kumar, Vaegae

    2017-11-01

    We come across a large volume of handwritten text in our daily lives, and handwritten character recognition has long been an important area of research in pattern recognition. The complexity of the task varies among languages, largely because of language-specific properties such as the similarity between characters, their distinct shapes, and the number of characters. There have been numerous works on character recognition of English alphabets with laudable success, but regional languages have not been dealt with as frequently or with similar accuracies. In this paper, we explored the performance of Deep Belief Networks in the classification of handwritten Tamil vowels and compared the results obtained. The proposed method has shown satisfactory recognition accuracy in light of the difficulties posed by regional languages, such as similarity between characters and the minute nuances that differentiate them. This work can be further extended to all Tamil characters.

  8. Recognition of voice commands using adaptation of foreign language speech recognizer via selection of phonetic transcriptions

    NASA Astrophysics Data System (ADS)

    Maskeliunas, Rytis; Rudzionis, Vytautas

    2011-06-01

    In recent years various commercial speech recognizers have become available. These recognizers make it possible to develop applications incorporating various speech recognition techniques easily and quickly. Commercial recognizers are typically targeted at widely spoken languages with large market potential; however, it may be possible to adapt them for use in environments where less widely spoken languages are used. Since most commercial recognition engines are closed systems, the single avenue for adaptation is to find suitable phonetic transcriptions that map between the two languages. This paper deals with methods to find phonetic transcriptions that allow Lithuanian voice commands to be recognized using English speech engines. The experimental evaluation showed that it is possible to find phonetic transcriptions that enable the recognition of Lithuanian voice commands with an accuracy of over 90%.

  9. Mispronunciation Detection for Language Learning and Speech Recognition Adaptation

    ERIC Educational Resources Information Center

    Ge, Zhenhao

    2013-01-01

    The areas of "mispronunciation detection" (or "accent detection" more specifically) within the speech recognition community are receiving increased attention now. Two application areas, namely language learning and speech recognition adaptation, are largely driving this research interest and are the focal points of this work.…

  10. A Kinect based sign language recognition system using spatio-temporal features

    NASA Astrophysics Data System (ADS)

    Memiş, Abbas; Albayrak, Songül

    2013-12-01

    This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses motion differences and an accumulation approach for temporal gesture analysis. The motion accumulation method, an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining the differences of successive video frames. Then, a 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images, and the temporal-domain features are transformed into the spatial domain. These processes are performed on RGB images and depth maps separately. DCT coefficients that represent the sign gestures are picked up via zigzag scanning and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is used. Performance of the proposed sign language recognition system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories. The proposed system shows promising success rates.
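    The accumulate-then-scan pipeline can be sketched as follows. The tiny frames and coefficient matrix are toy stand-ins; a real implementation would apply a 2D DCT to the accumulated motion image before the zigzag scan:

```python
def accumulate_motion(frames):
    """Sum absolute differences of successive (flattened) frames into one motion image."""
    acc = [0] * len(frames[0])
    for prev, cur in zip(frames, frames[1:]):
        for k, (a, b) in enumerate(zip(prev, cur)):
            acc[k] += abs(b - a)
    return acc

def zigzag(matrix):
    """Scan an N x N coefficient matrix in JPEG-style zigzag order,
    so low-frequency coefficients come first in the feature vector."""
    n = len(matrix)
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [matrix[i][j] for i, j in order]

frames = [[0, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]   # three tiny 2x2 frames, flattened
motion = accumulate_motion(frames)                     # per-pixel accumulated change
coeffs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]             # stand-in for 2D DCT coefficients
features = zigzag(coeffs)                              # feature vector for the classifier
```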

  11. Quantify spatial relations to discover handwritten graphical symbols

    NASA Astrophysics Data System (ADS)

    Li, Jinpeng; Mouchère, Harold; Viard-Gaudin, Christian

    2012-01-01

    To model a handwritten graphical language, spatial relations describe how the strokes are positioned in the 2-dimensional space. Most existing handwriting recognition systems make use of some predefined spatial relations. However, for a complex graphical language, it is hard to express all the spatial relations manually. Another possibility is to use a clustering technique to discover the spatial relations. In this paper, we discuss how to create a relational graph between strokes (nodes) labeled with graphemes in a graphical language. Then we vectorize spatial relations (edges) for clustering and quantization. As the targeted application, we extract the repetitive sub-graphs (graphical symbols) composed of graphemes and learned spatial relations. On two handwriting databases, a simple mathematical expression database and a complex flowchart database, the unsupervised spatial relations outperform the predefined spatial relations. In addition, we visualize the frequent patterns on two text-lines containing Chinese characters.

  12. V2S: Voice to Sign Language Translation System for Malaysian Deaf People

    NASA Astrophysics Data System (ADS)

    Mean Foong, Oi; Low, Tang Jung; La, Wai Wan

    The process of learning and understanding sign language can be cumbersome for some, and therefore this paper proposes a solution to this problem: a voice (English language) to sign language translation system using speech and image processing techniques. Speech processing, which includes speech recognition, is the study of recognizing the words being spoken regardless of who the speaker is. This project uses template-based recognition as the main approach, in which the V2S system first needs to be trained with speech patterns based on a generic spectral parameter set. These spectral parameter sets are then stored as templates in a database. The system performs the recognition process by matching the parameter set of the input speech with the stored templates and finally displays the sign language in video format. Empirical results show that the system has an 80.3% recognition rate.
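    Template matching of this kind is commonly implemented with dynamic time warping (DTW), which tolerates utterances that are stretched or compressed relative to the stored template. The templates and feature sequences below are invented one-dimensional stand-ins for real spectral parameter sets (the paper does not specify its matching algorithm, so DTW is an assumption):

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D feature sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignment paths
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Hypothetical stored templates and an input utterance (1-D "spectral" contours).
templates = {"hello": [1, 3, 4, 3, 1], "stop": [5, 5, 1, 1]}
utterance = [1, 3, 3, 4, 3, 1]          # a slightly stretched "hello"
word = min(templates, key=lambda t: dtw(utterance, templates[t]))
```

    The recognized word then indexes into the database of sign language videos for display.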

  13. Verification Processes in Recognition Memory: The Role of Natural Language Mediators

    ERIC Educational Resources Information Center

    Marshall, Philip H.; Smith, Randolph A. S.

    1977-01-01

    The existence of verification processes in recognition memory was confirmed in the context of Adams' (Adams & Bray, 1970) closed-loop theory. Subjects' recognition was tested following a learning session. The expectation was that data would reveal consistent internal relationships supporting the position that natural language mediation plays…

  14. The Relationship between Language Anxiety, Interpretation of Anxiety, Intrinsic Motivation and the Use of Learning Strategies

    ERIC Educational Resources Information Center

    Nishitani, Mari; Matsuda, Toshiki

    2011-01-01

    Research on language anxiety has so far focused on the level of anxiety. This study instead hypothesizes that the interpretation of anxiety and the recognition of failure have an impact on learning, and investigates how language anxiety and intrinsic motivation affect the use of learning strategies through the recognition of failure.…

  15. Exploiting Hidden Layer Responses of Deep Neural Networks for Language Recognition

    DTIC Science & Technology

    2016-09-08

    trained DNNs. We evaluated this approach in NIST 2015 language recognition evaluation. The performances achieved by the proposed approach are very...activations, used in direct DNN-LID. Results from the LID experiments support our hypothesis. The LID experiments are performed on NIST Language Recognition...of-the-art I-vector system [3, 10, 11] in evaluation (eval) set of NIST LRE 2015. Combination of proposed technique and state-of-the-art I-vector

  16. Development and validation of a smartphone-based digits-in-noise hearing test in South African English.

    PubMed

    Potgieter, Jenni-Marí; Swanepoel, De Wet; Myburgh, Hermanus Carel; Hopper, Thomas Christopher; Smits, Cas

    2015-07-01

    The objective of this study was to develop and validate a smartphone-based digits-in-noise hearing test for South African English. Single digits (0-9), spoken by a female first-language English speaker, were recorded. Level corrections were applied to create a set of homogeneous digits with steep speech recognition functions. A smartphone application was created to utilize 120 digit-triplets in noise as test material. An adaptive test procedure determined the speech reception threshold (SRT). Experiments were performed to determine headphone effects on the SRT and to establish normative data. Participants consisted of 40 normal-hearing subjects with thresholds ≤15 dB across the frequency spectrum (250-8000 Hz) and 186 subjects with normal hearing in both ears, or normal hearing in the better ear. The results show steep speech recognition functions with a slope of 20%/dB for digit-triplets presented in noise using the smartphone application. The results for five headphone types indicate that the smartphone-based hearing test is reliable and can be conducted using standard Android smartphone headphones or clinical headphones. A digits-in-noise hearing test was thus developed and validated for South Africa. The mean SRT and speech recognition functions correspond to previously developed telephone-based digits-in-noise tests.
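    The adaptive procedure can be sketched as a one-down/one-up staircase that converges on the 50%-correct point. The simulated listener below has an assumed true SRT of -10 dB SNR and a 20%/dB slope matching the abstract; the step size and averaging rule are illustrative choices, not necessarily those of the published test:

```python
import math
import random

random.seed(3)

TRUE_SRT, SLOPE = -10.0, 0.2    # assumed listener: 50% correct at -10 dB, 20%/dB slope

def simulated_listener(snr):
    """Logistic psychometric function; returns True for a correct triplet response."""
    p = 1.0 / (1.0 + math.exp(-4 * SLOPE * (snr - TRUE_SRT)))
    return random.random() < p

def adaptive_srt(trials=120, start_snr=0.0, step=2.0):
    snr, track = start_snr, []
    for _ in range(trials):
        track.append(snr)
        snr += -step if simulated_listener(snr) else step  # down on correct, up on errors
    return sum(track[-20:]) / 20    # SRT estimate: mean SNR over the final trials

srt = adaptive_srt()                # lands near the assumed -10 dB threshold
```

    A one-down/one-up rule tracks the SNR at which responses are correct half the time, which is exactly the SRT the test reports.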

  17. Quadcopter Control Using Speech Recognition

    NASA Astrophysics Data System (ADS)

    Malik, H.; Darma, S.; Soekirno, S.

    2018-04-01

    This research compared the success rates of speech recognition systems using two types of databases, an existing database and a newly created one, implemented on a quadcopter for motion control. The speech recognition system used the Mel-frequency cepstral coefficient (MFCC) method for feature extraction and was trained using the recursive neural network (RNN) method. MFCC is one of the feature extraction methods most used for speech recognition, with reported success rates of 80%-95%. The existing database was used to measure the success rate of the RNN method; the new database was created in the Indonesian language and its success rate was compared with the results from the existing database. Sound input from the microphone was processed on a DSP module with the MFCC method to obtain characteristic values. These characteristic values were then classified by the trained RNN, whose output was a command. The command became a control input to a single-board computer (SBC), whose output drove the movement of the quadcopter. On the SBC, we used the Robot Operating System (ROS).

  18. Static sign language recognition using 1D descriptors and neural networks

    NASA Astrophysics Data System (ADS)

    Solís, José F.; Toxqui, Carina; Padilla, Alfonso; Santiago, César

    2012-10-01

    A framework for static sign language recognition using descriptors that represent 2D images as 1D data and artificial neural networks is presented in this work. The 1D descriptors were computed by two methods: the first is a rotational correlation operator, and the second is based on contour analysis of the hand shape. One of the main problems in sign language recognition is segmentation; most papers rely on specially colored gloves or backgrounds for hand-shape analysis. In order to avoid the use of gloves or special clothing, a thermal imaging camera was used to capture images. Static signs for the digits 1 to 9 of American Sign Language were used; a multilayer perceptron reached 100% recognition with cross-validation.

  19. Integrating Computer-Assisted Language Learning in Saudi Schools: A Change Model

    ERIC Educational Resources Information Center

    Alresheed, Saleh; Leask, Marilyn; Raiker, Andrea

    2015-01-01

    Computer-assisted language learning (CALL) technology and pedagogy have gained recognition globally for their success in supporting second language acquisition (SLA). In Saudi Arabia, the government aims to provide most educational institutions with computers and networking for integrating CALL into classrooms. However, the recognition of CALL's…

  20. Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children.

    PubMed

    Lewis, Dawna; Kopun, Judy; McCreery, Ryan; Brennan, Marc; Nishi, Kanae; Cordrey, Evan; Stelmachowicz, Pat; Moeller, Mary Pat

    The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create a series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably with CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH rated their confidence significantly lower in the LP condition than in the HP condition; CHH, however, showed no significant difference in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared with their peers with NH suggest variations in how these groups use limited acoustic information to select word candidates.

  1. Pattern activation/recognition theory of mind

    PubMed Central

    du Castel, Bertrand

    2015-01-01

    In his 2012 book How to Create a Mind, Ray Kurzweil defines a “Pattern Recognition Theory of Mind” that states that the brain uses millions of pattern recognizers, plus modules to check, organize, and augment them. In this article, I further the theory to go beyond pattern recognition and include also pattern activation, thus encompassing both sensory and motor functions. In addition, I treat checking, organizing, and augmentation as patterns of patterns instead of separate modules, therefore handling them the same as patterns in general. Henceforth I put forward a unified theory I call “Pattern Activation/Recognition Theory of Mind.” While the original theory was based on hierarchical hidden Markov models, this evolution is based on their precursor: stochastic grammars. I demonstrate that a class of self-describing stochastic grammars allows for unifying pattern activation, recognition, organization, consistency checking, metaphor, and learning, into a single theory that expresses patterns throughout. I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation. PMID:26236228
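    The notion of a stochastic grammar can be made concrete with a small sampler: each nonterminal carries weighted expansions, and generation recursively picks one according to its probability. The grammar below is a toy invented for illustration, not one of the article's self-describing grammars:

```python
import random

random.seed(7)

# Hypothetical stochastic grammar: nonterminal -> weighted list of expansions.
GRAMMAR = {
    "S":  [(0.5, ["NP", "VP"]), (0.5, ["VP"])],
    "NP": [(1.0, ["the", "N"])],
    "N":  [(0.6, ["pattern"]), (0.4, ["grammar"])],
    "VP": [(0.7, ["activates"]), (0.3, ["recognizes", "NP"])],
}

def generate(symbol="S"):
    """Sample a string by recursively expanding nonterminals with their probabilities."""
    if symbol not in GRAMMAR:
        return [symbol]                      # terminal symbol
    r, acc = random.random(), 0.0
    for p, expansion in GRAMMAR[symbol]:
        acc += p
        if r <= acc:
            return [w for s in expansion for w in generate(s)]
    # floating-point fallback: take the last expansion
    _, expansion = GRAMMAR[symbol][-1]
    return [w for s in expansion for w in generate(s)]

sentences = [" ".join(generate()) for _ in range(5)]
```

    Running the sampler in "activation" mode (generating) versus scoring an observed string (recognizing) is one way to read the article's claim that a single grammar formalism covers both directions.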

  3. Gradient language dominance affects talker learning.

    PubMed

    Bregman, Micah R; Creel, Sarah C

    2014-01-01

    Traditional conceptions of spoken language assume that speech recognition and talker identification are computed separately. Neuropsychological and neuroimaging studies imply some separation between the two faculties, but recent perceptual studies suggest better talker recognition in familiar languages than unfamiliar languages. A familiar-language benefit in talker recognition potentially implies strong ties between the two domains. However, little is known about the nature of this language familiarity effect. The current study investigated the relationship between speech and talker processing by assessing bilingual and monolingual listeners' ability to learn voices as a function of language familiarity and age of acquisition. Two effects emerged. First, bilinguals learned to recognize talkers in their first language (Korean) more rapidly than they learned to recognize talkers in their second language (English), while English-speaking participants showed the opposite pattern (learning English talkers faster than Korean talkers). Second, bilinguals' learning rate for talkers in their second language (English) correlated with age of English acquisition. Taken together, these results suggest that language background materially affects talker encoding, implying a tight relationship between speech and talker representations. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. The Effects of First Language Orthographic Features on Word Recognition Processing in English as a Second Language.

    ERIC Educational Resources Information Center

    Akamatsu, Nobuhiko

    1999-01-01

    Uses case alteration (cAse ALteRaTiOn) to investigate effects of first-language (L1) orthographic characteristics on word recognition in English as a second language (ESL). Finds magnitude of case alteration effect for naming tasks was significantly larger for ESL participants whose L1 was not alphabetic. Suggests that L1 orthographic features…

  5. Integrating language models into classifiers for BCI communication: a review

    NASA Astrophysics Data System (ADS)

    Speier, W.; Arnold, C.; Pouratian, N.

    2016-06-01

    Objective. The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could exploit the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.
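    The core idea the review surveys, fusing classifier evidence with a language-model prior, can be sketched with Bayes' rule: P(letter | EEG, history) ∝ P(EEG | letter) · P(letter | history). The numbers below are invented for illustration; real systems estimate the likelihoods from EEG features and the prior from an n-gram corpus:

```python
# Invented classifier likelihoods P(EEG | letter) for one selection, and an
# invented bigram language-model prior P(letter | previous letter).
likelihood = {"A": 0.30, "B": 0.35, "C": 0.35}   # classifier alone favors B/C
bigram = {("C", "A"): 0.70, ("C", "B"): 0.20, ("C", "C"): 0.10}

def fuse(prev, likelihood, bigram):
    """Posterior over letters: P(letter | EEG, history)."""
    scores = {L: likelihood[L] * bigram[(prev, L)] for L in likelihood}
    total = sum(scores.values())
    return {L: s / total for L, s in scores.items()}

post = fuse("C", likelihood, bigram)
best = max(post, key=post.get)
print(best, round(post[best], 3))   # prints: A 0.667
```

    Here a bigram prior strong enough to favor "A" after "C" overturns the raw classifier, which slightly prefers "B" or "C"; this is the mechanism underlying the signal-classification and word-completion work the review groups together.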

  6. Integrating language models into classifiers for BCI communication: a review.

    PubMed

    Speier, W; Arnold, C; Pouratian, N

    2016-06-01

    The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could exploit the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.

  7. Applicability of the Compensatory Encoding Model in Foreign Language Reading: An Investigation with Chinese College English Language Learners

    PubMed Central

    Han, Feifei

    2017-01-01

    While some first language (L1) reading models suggest that inefficient word recognition and a small working memory tend to inhibit higher-level comprehension processes, the Compensatory Encoding Model maintains that slow word recognition and a small working memory do not normally hinder reading comprehension, because readers can deploy metacognitive strategies to compensate for inefficient word recognition and working memory limitations as long as they process a reading task without time constraint. Although empirical evidence has accumulated in support of the Compensatory Encoding Model in L1 reading, there has been little research testing the model in foreign language (FL) reading. This research empirically tested the Compensatory Encoding Model in English reading among Chinese college English language learners (ELLs). Two studies were conducted. Study one tested whether time constraints on reading affect the relationship between word recognition, working memory, and reading comprehension. Students were tested on a computerized English word recognition test, a computerized Operation Span task, and reading comprehension under time-constrained and non-time-constrained conditions. Correlation and regression analyses showed that the associations between word recognition, working memory, and reading comprehension were much stronger under the time-constrained condition than under the non-time-constrained condition. Study two examined whether FL readers were able to use metacognitive reading strategies to compensate for inefficient word recognition and working memory limitations in non-time-constrained reading. The participants completed the same computerized English word recognition and Operation Span tests, thought aloud while reading, and answered comprehension questions. The think-aloud protocols were coded for concurrent use of reading strategies, classified into language-oriented strategies, content-oriented strategies, re-reading, pausing, and meta-comments. Correlation analyses showed that word recognition and working memory were significantly related only to the frequency of language-oriented strategies, re-reading, and pausing, not to reading comprehension. Jointly viewed, the results of the two studies, complementing each other, support the applicability of the Compensatory Encoding Model in FL reading with Chinese college ELLs. PMID:28522984

  8. Applicability of the Compensatory Encoding Model in Foreign Language Reading: An Investigation with Chinese College English Language Learners.

    PubMed

    Han, Feifei

    2017-01-01

    While some first language (L1) reading models suggest that inefficient word recognition and a small working memory tend to inhibit higher-level comprehension processes, the Compensatory Encoding Model maintains that slow word recognition and a small working memory do not normally hinder reading comprehension, because readers can deploy metacognitive strategies to compensate for inefficient word recognition and working memory limitations as long as they process a reading task without time constraint. Although empirical evidence has accumulated in support of the Compensatory Encoding Model in L1 reading, there has been little research testing the model in foreign language (FL) reading. This research empirically tested the Compensatory Encoding Model in English reading among Chinese college English language learners (ELLs). Two studies were conducted. Study one tested whether time constraints on reading affect the relationship between word recognition, working memory, and reading comprehension. Students were tested on a computerized English word recognition test, a computerized Operation Span task, and reading comprehension under time-constrained and non-time-constrained conditions. Correlation and regression analyses showed that the associations between word recognition, working memory, and reading comprehension were much stronger under the time-constrained condition than under the non-time-constrained condition. Study two examined whether FL readers were able to use metacognitive reading strategies to compensate for inefficient word recognition and working memory limitations in non-time-constrained reading. The participants completed the same computerized English word recognition and Operation Span tests, thought aloud while reading, and answered comprehension questions. The think-aloud protocols were coded for concurrent use of reading strategies, classified into language-oriented strategies, content-oriented strategies, re-reading, pausing, and meta-comments. Correlation analyses showed that word recognition and working memory were significantly related only to the frequency of language-oriented strategies, re-reading, and pausing, not to reading comprehension. Jointly viewed, the results of the two studies, complementing each other, support the applicability of the Compensatory Encoding Model in FL reading with Chinese college ELLs.

  9. Speech Recognition Software for Language Learning: Toward an Evaluation of Validity and Student Perceptions

    ERIC Educational Resources Information Center

    Cordier, Deborah

    2009-01-01

    A renewed focus on foreign language (FL) learning and speech for communication has resulted in computer-assisted language learning (CALL) software developed with Automatic Speech Recognition (ASR). ASR features for FL pronunciation (Lafford, 2004) are functional components of CALL designs used for FL teaching and learning. The ASR features…

  10. Promotion in Times of Endangerment: The Sign Language Act in Finland

    ERIC Educational Resources Information Center

    De Meulder, Maartje

    2017-01-01

    The development of sign language recognition legislation is a relatively recent phenomenon in the field of language policy. So far only few authors have documented signing communities' aspirations for recognition legislation, how they work with their governments to achieve legislation which most reflects these goals, and whether and why outcomes…

  11. Evaluating Automatic Speech Recognition-Based Language Learning Systems: A Case Study

    ERIC Educational Resources Information Center

    van Doremalen, Joost; Boves, Lou; Colpaert, Jozef; Cucchiarini, Catia; Strik, Helmer

    2016-01-01

    The purpose of this research was to evaluate a prototype of an automatic speech recognition (ASR)-based language learning system that provides feedback on different aspects of speaking performance (pronunciation, morphology and syntax) to students of Dutch as a second language. We carried out usability reviews, expert reviews and user tests to…

  12. Morphological Family Size Effects in Young First and Second Language Learners: Evidence of Cross-Language Semantic Activation in Visual Word Recognition

    ERIC Educational Resources Information Center

    de Zeeuw, Marlies; Verhoeven, Ludo; Schreuder, Robert

    2012-01-01

    This study examined to what extent young second language (L2) learners showed morphological family size effects in L2 word recognition and whether the effects were grade-level related. Turkish-Dutch bilingual children (L2) and Dutch (first language, L1) children from second, fourth, and sixth grade performed a Dutch lexical decision task on words…

  13. The Effect of Automatic Speech Recognition Eyespeak Software on Iraqi Students' English Pronunciation: A Pilot Study

    ERIC Educational Resources Information Center

    Sidgi, Lina Fathi Sidig; Shaari, Ahmad Jelani

    2017-01-01

    The use of technology, such as computer-assisted language learning (CALL), is used in teaching and learning in the foreign language classrooms where it is most needed. One promising emerging technology that supports language learning is automatic speech recognition (ASR). Integrating such technology, especially in the instruction of pronunciation…

  14. The Role of Native-Language Phonology in the Auditory Word Identification and Visual Word Recognition of Russian-English Bilinguals

    ERIC Educational Resources Information Center

    Shafiro, Valeriy; Kharkhurin, Anatoliy V.

    2009-01-01

    Abstract Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic…

  15. Extricating Manual and Non-Manual Features for Subunit Level Medical Sign Modelling in Automatic Sign Language Classification and Recognition.

    PubMed

    R, Elakkiya; K, Selvamani

    2017-09-22

    Subunit segmentation and modelling in medical sign language is an important topic in linguistic-oriented and vision-based Sign Language Recognition (SLR). Many previous efforts approached functional subunits from the viewpoint of linguistic syllables, but implementing such syllable-based subunit extraction is not feasible with real-world computer vision techniques. In addition, present recognition systems are designed to detect only signer-dependent actions under restricted laboratory conditions. This research paper aims at solving these two important issues: (1) subunit extraction and (2) signer-independent action in visual sign language recognition. Subunit extraction involves the sequential and parallel breakdown of sign gestures without any prior knowledge of syllables or the number of subunits. A novel Bayesian Parallel Hidden Markov Model (BPaHMM) is introduced for subunit extraction, combining the features of manual and non-manual parameters to yield better results in classification and recognition of signs. Signer-independent action aims at using a single web camera for different signer behaviour patterns and for cross-signer validation. Experimental results show that the proposed signer-independent subunit-level modelling for sign language classification and recognition improves on other existing work.
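    The BPaHMM above combines manual and non-manual feature streams. A much-simplified, state-synchronous sketch of such stream fusion (not the paper's actual model, and with invented numbers) multiplies per-stream emission probabilities inside the standard HMM forward recursion:

```python
# State-synchronous two-stream HMM: the emission probability at each time step
# is the product of the manual-stream and non-manual-stream emissions.
# All numbers are invented for illustration.
states = [0, 1]
init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit_manual = [{"up": 0.8, "down": 0.2}, {"up": 0.3, "down": 0.7}]
emit_face = [{"neutral": 0.6, "raised": 0.4}, {"neutral": 0.2, "raised": 0.8}]

def forward(obs_manual, obs_face):
    """P(observation sequence) under the fused two-stream HMM."""
    alpha = [init[s] * emit_manual[s][obs_manual[0]] * emit_face[s][obs_face[0]]
             for s in states]
    for m, f in zip(obs_manual[1:], obs_face[1:]):
        alpha = [sum(alpha[p] * trans[p][s] for p in states)
                 * emit_manual[s][m] * emit_face[s][f]
                 for s in states]
    return sum(alpha)

p = forward(["up", "down"], ["neutral", "raised"])
print(round(p, 6))
```

    A real BPaHMM lets the streams keep partly independent state; collapsing them onto one state chain, as here, is the simplest possible fusion.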

  16. Gender affects body language reading.

    PubMed

    Sokolov, Arseny A; Krüger, Samuel; Enck, Paul; Krägeloh-Mann, Ingeborg; Pavlova, Marina A

    2011-01-01

    Body motion is a rich source of information for social cognition. However, gender effects in body language reading are largely unknown. Here we investigated whether, and if so how, recognition of emotional expressions revealed by body motion is gender dependent. To this end, females and males were presented with point-light displays portraying knocking at a door performed with different emotional expressions. The findings show that gender affects the accuracy rather than the speed of body language reading. This effect, however, is modulated by the emotional content of actions: males are more accurate in recognizing happy actions, whereas females tend to excel in recognizing hostile angry knocking. Women's advantage in recognition accuracy for neutral actions suggests that females are better tuned to the lack of emotional content in body actions. The study provides novel insights into gender effects in body language reading and helps to shed light on gender vulnerability to neuropsychiatric and neurodevelopmental impairments in visual social cognition.

  17. Development of a Mandarin-English Bilingual Speech Recognition System for Real World Music Retrieval

    NASA Astrophysics Data System (ADS)

    Zhang, Qingqing; Pan, Jielin; Lin, Yang; Shao, Jian; Yan, Yonghong

    In recent decades, there has been a great deal of research into the problem of bilingual speech recognition: developing a recognizer that can handle inter- and intra-sentential language switching between two languages. This paper presents our recent work on the development of a grammar-constrained, Mandarin-English bilingual Speech Recognition System (MESRS) for real-world music retrieval. Two of the main difficulties in building bilingual speech recognition systems for real-world applications are tackled in this paper. One is balancing the performance and the complexity of the bilingual speech recognition system; the other is effectively dealing with matrix-language accents in the embedded language. In order to process intra-sentential language switching and reduce the amount of data required to robustly estimate statistical models, a compact single set of bilingual acoustic models derived by phone set merging and clustering is developed instead of two separate monolingual models for each language. In our study, a novel two-pass phone clustering method based on a Confusion Matrix (TCM) is presented and compared with the log-likelihood measure method. Experiments show that TCM achieves better performance. Since potential system users' native language is Mandarin, which is regarded as the matrix language in our application, their pronunciations of English as the embedded language usually contain Mandarin accents. In order to deal with matrix-language accents in the embedded language, different non-native adaptation approaches are investigated. Experiments show that the model retraining method outperforms other common adaptation methods such as Maximum A Posteriori (MAP). With the effective incorporation of the phone clustering and non-native adaptation approaches, the Phrase Error Rate (PER) of MESRS for English utterances was reduced by 24.47% relative to the baseline monolingual English system, while the PER on Mandarin utterances was comparable to that of the baseline monolingual Mandarin system. The performance on bilingual utterances achieved a 22.37% relative PER reduction.
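    The confusion-driven phone clustering idea can be illustrated with a toy greedy step: merge the pair of phones most often recognized as each other. The counts below are invented, and the paper's actual TCM method is two-pass and operates on full bilingual phone sets.

```python
# Greedy sketch of confusion-driven phone clustering: pick the pair of phones
# with the highest symmetric confusion as the first merge candidates.
confusion = {  # confusion[a][b] = times phone a was recognized as phone b
    "zh": {"zh": 90, "j": 8, "sh": 2},
    "j":  {"zh": 7, "j": 88, "sh": 5},
    "sh": {"zh": 1, "j": 4, "sh": 95},
}

def most_confusable(confusion):
    """Unordered phone pair with the highest symmetric confusion count."""
    phones = list(confusion)
    pairs = [(confusion[a][b] + confusion[b][a], (a, b))
             for i, a in enumerate(phones) for b in phones[i + 1:]]
    return max(pairs)[1]

pair = most_confusable(confusion)
print(pair)   # prints: ('zh', 'j')
```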

  18. Young children make their gestural communication systems more language-like: segmentation and linearization of semantic elements in motion events.

    PubMed

    Clay, Zanna; Pople, Sally; Hood, Bruce; Kita, Sotaro

    2014-08-01

    Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children's learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information. Gestures produced in the absence of speech to express a motion event were examined in 4-year-olds, 12-year-olds, and adults (all native English speakers). We compared the proportions of gestural expressions that segmented semantic elements into linear sequences and that encoded them simultaneously. Compared with adolescents and adults, children reshaped the holistic stimuli by segmenting and recombining their semantic features into linearized sequences. A control task on recognition memory ruled out the possibility that this was due to different event perception or memory. Young children spontaneously bring fundamental properties of language into their communication system.

  19. Continuous Chinese sign language recognition with CNN-LSTM

    NASA Astrophysics Data System (ADS)

    Yang, Su; Zhu, Qing

    2017-07-01

    The goal of sign language recognition (SLR) is to translate sign language into text, providing a convenient communication tool between deaf and hearing people. In this paper, we formulate a model based on a convolutional neural network (CNN) combined with a Long Short-Term Memory (LSTM) network in order to accomplish continuous recognition. Using the representational power of a CNN, the information in frames captured from Chinese sign language (CSL) videos can be learned and transformed into feature vectors. Since a video can be regarded as an ordered sequence of frames, an LSTM model is connected to the fully connected layer of the CNN. As a recurrent neural network (RNN), the LSTM is suitable for sequence learning tasks, with the capability of recognizing patterns defined by temporal distance. Compared with a traditional RNN, an LSTM performs better at storing and accessing information. We evaluate this method on our self-built dataset of 40 daily vocabulary items. The experimental results show that the CNN-LSTM recognition method can achieve a high recognition rate with small training sets, which will meet the needs of a real-time SLR system.
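    The pipeline described above (per-frame CNN features, folded over time into a fixed-size state, then classified) can be sketched structurally with stand-in components. Nothing below is a real CNN or LSTM: `cnn_features` is a trivial per-frame feature and `recurrent_summary` a leaky running average standing in for recurrent state.

```python
# Structural sketch of the CNN-LSTM pipeline with stand-in components; a real
# system would use trained convolutional layers and a real LSTM cell.
def cnn_features(frame):
    """Stand-in per-frame 'CNN': one mean-intensity value per pixel row."""
    return [sum(row) / len(row) for row in frame]

def recurrent_summary(feature_seq, decay=0.5):
    """Stand-in 'LSTM': fold a variable-length sequence into one state vector."""
    state = [0.0] * len(feature_seq[0])
    for feat in feature_seq:
        state = [decay * s + (1 - decay) * f for s, f in zip(state, feat)]
    return state

def classify(video, centroids):
    """Nearest centroid stands in for the fully connected output layer."""
    summary = recurrent_summary([cnn_features(frame) for frame in video])
    return min(centroids,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(summary, centroids[c])))

video = [[[0, 1], [1, 1]], [[1, 1], [1, 0]]]           # two 2x2 binary frames
centroids = {"hello": [0.6, 0.6], "thanks": [0.1, 0.1]}
print(classify(video, centroids))   # prints: hello
```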

  20. Recognition of emotion from body language among patients with unipolar depression

    PubMed Central

    Loi, Felice; Vaidya, Jatin G.; Paradiso, Sergio

    2013-01-01

    Major depression may be associated with abnormal perception of emotions and impairment in social adaptation. Emotion recognition from body language and its possible implications to social adjustment have not been examined in patients with depression. Three groups of participants (51 with depression; 68 with history of depression in remission; and 69 never depressed healthy volunteers) were compared on static and dynamic tasks of emotion recognition from body language. Psychosocial adjustment was assessed using the Social Adjustment Scale Self-Report (SAS-SR). Participants with current depression showed reduced recognition accuracy for happy stimuli across tasks relative to remission and comparison participants. Participants with depression tended to show poorer psychosocial adaptation relative to remission and comparison groups. Correlations between perception accuracy of happiness and scores on the SAS-SR were largely not significant. These results indicate that depression is associated with reduced ability to appraise positive stimuli of emotional body language but emotion recognition performance is not tied to social adjustment. These alterations do not appear to be present in participants in remission suggesting state-like qualities. PMID:23608159

  1. Cross-cultural effect on the brain revisited: universal structures plus writing system variation.

    PubMed

    Bolger, Donald J; Perfetti, Charles A; Schneider, Walter

    2005-05-01

    Recognizing printed words requires the mapping of graphic forms, which vary with writing systems, to linguistic forms, which vary with languages. Using a newly developed meta-analytic approach, aggregated Gaussian-estimated sources (AGES; Chein et al. [2002]: Physiol Behav 77:635-639), we examined the neuroimaging results for word reading within and across writing systems and languages. To find commonalities, we compiled 25 studies in English and other Western European languages that use an alphabetic writing system, 9 studies of native Chinese reading, 5 studies of Japanese Kana (syllabic) reading, and 4 studies of Kanji (morpho-syllabic) reading. Using the AGES approach, we created meta-images within each writing system, isolated reliable foci of activation, and compared findings across writing systems and languages. The results suggest that these writing systems utilize a common network of regions in word processing. Writing systems engage largely the same systems in terms of gross cortical regions, but localization within those regions suggests differences across writing systems. In particular, the region known as the visual word form area (VWFA) shows strikingly consistent localization across tasks and across writing systems. This region in the left mid-fusiform gyrus is critical to word recognition across writing systems and languages.

  2. Parallel language activation and cognitive control during spoken word recognition in bilinguals

    PubMed Central

    Blumenfeld, Henrike K.; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500ms after word onset was associated with smaller Stroop effects; between 633–767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842

  3. Automatic Mexican sign language and digits recognition using normalized central moments

    NASA Astrophysics Data System (ADS)

    Solís, Francisco; Martínez, David; Espinosa, Oscar; Toxqui, Carina

    2016-09-01

    This work presents a framework for automatic Mexican sign language and digit recognition based on a computer vision system using normalized central moments and artificial neural networks. Images are captured by a digital IP camera, with four LED reflectors and a green background in order to reduce computational costs and avoid the use of special gloves. 42 normalized central moments are computed per frame and fed to a Multi-Layer Perceptron to recognize each database. Four versions per sign and digit were used in the training phase. Recognition rates of 93% and 95% were achieved for Mexican sign language and digits, respectively.
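    The normalized central moments used as features above follow the standard definition: central moments μ_pq taken about the image centroid, normalized as η_pq = μ_pq / μ_00^(1+(p+q)/2) so that the features are insensitive to object scale. A minimal sketch on a binary image:

```python
# Normalized central moment η_pq of a binary image: central moment μ_pq about
# the centroid, divided by μ_00^(1 + (p+q)/2) to normalize for object scale.
def normalized_central_moment(image, p, q):
    """image: 2D list of 0/1 pixels; valid for p + q >= 2."""
    pix = [(x, y, v) for y, row in enumerate(image) for x, v in enumerate(row)]
    m00 = sum(v for _, _, v in pix)                      # object area
    cx = sum(x * v for x, _, v in pix) / m00             # centroid x
    cy = sum(y * v for _, y, v in pix) / m00             # centroid y
    mu = sum((x - cx) ** p * (y - cy) ** q * v for x, y, v in pix)
    return mu / m00 ** (1 + (p + q) / 2)

square = [[1, 1], [1, 1]]                                # 2x2 block of pixels
print(normalized_central_moment(square, 2, 0))           # prints: 0.0625
```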

  4. Static hand gesture recognition from a video

    NASA Astrophysics Data System (ADS)

    Rokade, Rajeshree S.; Doye, Dharmpal

    2011-10-01

    A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns to convey meaning- "simultaneously combining hand shapes, orientation and movement of the hands". Sign languages commonly develop in deaf communities, which can include interpreters, friends and families of deaf people as well as people who are deaf or hard of hearing themselves. In this paper, we proposed a novel system for recognition of static hand gestures from a video, based on Kohonen neural network. We proposed algorithm to separate out key frames, which include correct gestures from a video sequence. We segment, hand images from complex and non uniform background. Features are extracted by applying Kohonen on key frames and recognition is done.

  5. Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children

    PubMed Central

    Lewis, Dawna E.; Kopun, Judy; McCreery, Ryan; Brennan, Marc; Nishi, Kanae; Cordrey, Evan; Stelmachowicz, Pat; Moeller, Mary Pat

    2016-01-01

    Objectives The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- vs. low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Design Sixteen CHH with mild-to-moderate hearing loss and 16 age-matched CNH participated (5–12 yrs). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create a series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a 5- or 3-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Results Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably to CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH rated their confidence significantly lower in the LP condition than in the HP condition; CHH, however, did not differ significantly in confidence between conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. Conclusions The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared to their peers with normal hearing suggest variations in how these groups use limited acoustic information to select word candidates. PMID:28045838
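    The gating manipulation described in the design above can be sketched as follows; the token lists stand in for audio samples, and all names and values are illustrative:

```python
# The final word is divided into fixed increments ("gates"); each trial presents
# the sentence frame plus a progressively longer portion of the word.
def gated_stimuli(frame_samples, word_samples, gate_len):
    """Sentence frame + first k gates of the word, for k = 1, 2, ..."""
    return [frame_samples + word_samples[:end]
            for end in range(gate_len, len(word_samples) + gate_len, gate_len)]

word = list("kitten")                      # six "samples" of the target word
for stimulus in gated_stimuli(["the", "dog", "chased", "the"], word, 2):
    print(stimulus[4:])   # the gated word portion grows: ki, kitt, kitten
```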

  6. Selected Topics from LVCSR Research for Asian Languages at Tokyo Tech

    NASA Astrophysics Data System (ADS)

    Furui, Sadaoki

    This paper presents our recent work in regard to building Large Vocabulary Continuous Speech Recognition (LVCSR) systems for the Thai, Indonesian, and Chinese languages. For Thai, since there is no word boundary in the written form, we have proposed a new method for automatically creating word-like units from a text corpus, and applied topic and speaking style adaptation to the language model to recognize spoken-style utterances. For Indonesian, we have applied proper noun-specific adaptation to acoustic modeling, and rule-based English-to-Indonesian phoneme mapping to solve the problem of large variation in proper noun and English word pronunciation in a spoken-query information retrieval system. In spoken Chinese, long organization names are frequently abbreviated, and abbreviated utterances cannot be recognized if the abbreviations are not included in the dictionary. We have proposed a new method for automatically generating Chinese abbreviations, and by expanding the vocabulary using the generated abbreviations, we have significantly improved the performance of spoken query-based search.
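    One common pattern behind the Chinese abbreviation generation mentioned above keeps the first character of each word of an organization name; this is only one of several patterns, not the paper's full method:

```python
# Sketch of one Chinese abbreviation pattern: first character of each word.
def first_char_abbreviation(words):
    return "".join(w[0] for w in words)

# "Beijing University" -> its well-known abbreviation
print(first_char_abbreviation(["北京", "大学"]))   # prints: 北大
```

    Expanding the recognizer's vocabulary with such generated candidates is what allows abbreviated spoken queries to be matched at all.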

  7. Acoustic-Phonetic Versus Lexical Processing in Nonnative Listeners Differing in Their Dominant Language.

    PubMed

    Shi, Lu-Feng; Koenig, Laura L

    2016-09-01

    Nonnative listeners have difficulty recognizing English words due to underdeveloped acoustic-phonetic and/or lexical skills. The present study used Boothroyd and Nittrouer's (1988) j factor to tease apart these two components of word recognition. Participants included 15 native English and 29 native Russian listeners. Fourteen and 15 of the Russian listeners reported English (ED) and Russian (RD) to be their dominant language, respectively. Listeners were presented 119 consonant-vowel-consonant real and nonsense words in speech-spectrum noise at +6 dB SNR. Responses were scored for word and phoneme recognition, the logarithmic quotient of which yielded j. Word and phoneme recognition was comparable between native and ED listeners but poorer in RD listeners. Analysis of j indicated less effective use of lexical information in RD than in native and ED listeners. Lexical processing was strongly correlated with the length of residence in the United States. Language background is important for nonnative word recognition. Lexical skills can be regarded as nativelike in ED nonnative listeners. Compromised word recognition in ED listeners is unlikely to be a result of poor lexical processing. Performance should be interpreted with caution for listeners dominant in their first language, whose word recognition is affected by both lexical and acoustic-phonetic factors.
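The j factor the abstract relies on has a simple closed form: if whole words are recognized with probability p_word and their constituent phonemes with probability p_phoneme, then p_word = p_phoneme^j, so j = log(p_word) / log(p_phoneme). A minimal sketch (the example scores below are illustrative, not the study's data):

```python
import math

def j_factor(p_word: float, p_phoneme: float) -> float:
    """Boothroyd & Nittrouer's (1988) j factor: p_word = p_phoneme ** j,
    so j = log(p_word) / log(p_phoneme). For CVC words, j near 3 suggests
    phonemes are recognized independently; j < 3 suggests lexical
    knowledge binds the phonemes together."""
    if not (0 < p_word < 1 and 0 < p_phoneme < 1):
        raise ValueError("probabilities must be strictly between 0 and 1")
    return math.log(p_word) / math.log(p_phoneme)

# Hypothetical scores: 60% of phonemes but only 30% of whole words correct.
print(round(j_factor(0.30, 0.60), 2))  # → 2.36
```

A j closer to 3 (the number of phonemes in a CVC word), as found for the RD group, is what "less effective use of lexical information" means operationally.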

  8. Speech Perception in Noise by Children With Cochlear Implants

    PubMed Central

    Caldwell, Amanda; Nittrouer, Susan

    2013-01-01

    Purpose Common wisdom suggests that listening in noise poses disproportionately greater difficulty for listeners with cochlear implants (CIs) than for peers with normal hearing (NH). The purpose of this study was to examine phonological, language, and cognitive skills that might help explain speech-in-noise abilities for children with CIs. Method Three groups of kindergartners (NH, hearing aid wearers, and CI users) were tested on speech recognition in quiet and noise and on tasks thought to underlie the abilities that fit into the domains of phonological awareness, general language, and cognitive skills. These last measures were used as predictor variables in regression analyses with speech-in-noise scores as dependent variables. Results Compared to children with NH, children with CIs did not perform as well on speech recognition in noise or on most other measures, including recognition in quiet. Two surprising results were that (a) noise effects were consistent across groups and (b) scores on other measures did not explain any group differences in speech recognition. Conclusions Limitations of implant processing take their primary toll on recognition in quiet and account for poor speech recognition and language/phonological deficits in children with CIs. Implications are that teachers/clinicians need to teach language/phonology directly and maximize signal-to-noise levels in the classroom. PMID:22744138

  9. Lexical access in sign language: a computational model.

    PubMed

    Caselli, Naomi K; Cohen-Goldberg, Ariel M

    2014-01-01

    Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
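The Chen and Mirman (2012) principle the model borrows can be sketched numerically: every neighbor sends the target some excitation through shared sub-lexical units, but only strongly active neighbors cross a threshold and compete. The parameter values below are illustrative assumptions, not the authors' fitted model:

```python
def neighbor_effect(neighbor_activation, excite=0.2, inhibit=0.6, threshold=0.5):
    """Net input a neighbor sends to a target word under the principle
    Chen and Mirman (2012) formalized: all neighbors contribute some
    excitation via shared sub-lexical units, but only strongly active
    neighbors exceed the inhibition threshold and compete. Parameter
    values here are illustrative, not fitted to data."""
    excitation = excite * neighbor_activation
    inhibition = inhibit * max(0.0, neighbor_activation - threshold)
    return excitation - inhibition

# Weakly active neighbors facilitate; strongly active ones inhibit.
print(neighbor_effect(0.3) > 0)  # → True (facilitation)
print(neighbor_effect(0.9) < 0)  # → True (inhibition)
```

The sign flip in one mechanism is what lets a single architecture produce both facilitatory and inhibitory density effects, whether the shared units are phonemes or handshapes and locations.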

  10. Recognition of English and German Borrowings in the Russian Language (Based on Lexical Borrowings in the Field of Economics)

    ERIC Educational Resources Information Center

    Ashrapova, Alsu; Alendeeva, Svetlana

    2014-01-01

    This article is the result of a study of the influence of English and German on the Russian language during the English learning based on lexical borrowings in the field of economics. This paper discusses the use and recognition of borrowings from the English and German languages by Russian native speakers. The use of lexical borrowings from…

  11. Full-body gestures and movements recognition: user descriptive and unsupervised learning approaches in GDL classifier

    NASA Astrophysics Data System (ADS)

    Hachaj, Tomasz; Ogiela, Marek R.

    2014-09-01

    Gesture Description Language (GDL) is a classifier that enables syntactic description and real time recognition of full-body gestures and movements. Gestures are described in dedicated computer language named Gesture Description Language script (GDLs). In this paper we will introduce new GDLs formalisms that enable recognition of selected classes of movement trajectories. The second novelty is new unsupervised learning method with which it is possible to automatically generate GDLs descriptions. We have initially evaluated both proposed extensions of GDL and we have obtained very promising results. Both the novel methodology and evaluation results will be described in this paper.
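The syntactic-description idea can be illustrated with a toy rule: a gesture class fires when a predicate over tracked skeleton joints holds across consecutive frames. The predicate, joint names, and threshold below are hypothetical examples in plain Python, not actual GDLs syntax:

```python
def matches_raise_hand(frames, min_frames=3):
    """Toy illustration of a GDL-style rule: the 'raise hand' gesture
    fires when the wrist is tracked above the head for min_frames
    consecutive frames. Joint names and the y-up convention are
    hypothetical, not taken from the GDLs specification."""
    run = 0
    for joints in frames:  # each frame: dict of joint name -> (x, y, z)
        if joints["wrist"][1] > joints["head"][1]:
            run += 1
            if run >= min_frames:
                return True
        else:
            run = 0
    return False

# Wrist rises past head height (1.7) and stays there for 3 frames.
frames = [{"wrist": (0, h, 0), "head": (0, 1.7, 0)} for h in (1.0, 1.5, 1.8, 1.9, 2.0)]
print(matches_raise_hand(frames))  # → True
```

The paper's unsupervised-learning extension would, in these terms, generate such rules automatically from example trajectories rather than requiring a user to write them.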

  12. Selective verbal recognition memory impairments are associated with atrophy of the language network in non-semantic variants of primary progressive aphasia.

    PubMed

    Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J

    2017-06-01

    Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modality to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false alarms for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. The critical impact of Frantz Fanon and Henri Collomb: race, gender, and personality testing of North and West Africans.

    PubMed

    Bullard, Alice

    2005-01-01

    In 2001, the U.S. Surgeon General declared publicly that culture counts in mental health care. This welcome recognition of the role of culture in mental health appears somewhat belated. In 1956, Frantz Fanon and Henri Collomb both presented culturally sensitive studies of the Thematic Apperception Test at the major French-language mental health conference. The contrast between these two studies and between the careers of Fanon and Collomb reveals some of the difficulties in creating cultural and gender sensitivity in psychiatry or psychology. Copyright 2005 Wiley Periodicals, Inc.

  14. Infant information processing and family history of specific language impairment: converging evidence for RAP deficits from two paradigms

    PubMed Central

    Choudhury, Naseem; Leppanen, Paavo H.T.; Leevers, Hilary J.; Benasich, April A.

    2007-01-01

    An infant’s ability to process auditory signals presented in rapid succession (i.e. rapid auditory processing abilities [RAP]) has been shown to predict differences in language outcomes in toddlers and preschool children. Early deficits in RAP abilities may serve as a behavioral marker for language-based learning disabilities. The purpose of this study is to determine if performance on infant information processing measures designed to tap RAP and global processing skills differ as a function of family history of specific language impairment (SLI) and/or the particular demand characteristics of the paradigm used. Seventeen 6- to 9-month-old infants from families with a history of specific language impairment (FH+) and 29 control infants (FH−) participated in this study. Infants’ performance on two different RAP paradigms (head-turn procedure [HT] and auditory-visual habituation/recognition memory [AVH/RM]) and on a global processing task (visual habituation/recognition memory [VH/RM]) was assessed at 6 and 9 months. Toddler language and cognitive skills were evaluated at 12 and 16 months. A number of significant group differences were seen: FH+ infants showed significantly poorer discrimination of fast rate stimuli on both RAP tasks, took longer to habituate on both habituation/recognition memory measures, and had lower novelty preference scores on the visual habituation/recognition memory task. Infants’ performance on the two RAP measures provided independent but converging contributions to outcome. Thus, different mechanisms appear to underlie performance on operantly conditioned tasks as compared to habituation/recognition memory paradigms. Further, infant RAP processing abilities predicted 12- and 16-month language scores above and beyond family history of SLI. The results of this study provide additional support for the validity of infant RAP abilities as a behavioral marker for later language outcome. Finally, this is the first study to use a battery of infant tasks to demonstrate multi-modal processing deficits in infants at risk for SLI. PMID:17286846

  15. Mechanically Compliant Electronic Materials for Wearable Photovoltaics and Human-Machine Interfaces

    NASA Astrophysics Data System (ADS)

    O'Connor, Timothy Francis, III

    Applications of stretchable electronic materials for human-machine interfaces are described herein. Intrinsically stretchable organic conjugated polymers and stretchable electronic composites were used to develop stretchable organic photovoltaics (OPVs), mechanically robust wearable OPVs, and human-machine interfaces for gesture recognition, American Sign Language translation, haptic control of robots, and touch emulation for virtual reality, augmented reality, and the transmission of touch. The stretchable and wearable OPVs comprise active layers of poly-3-alkylthiophene:phenyl-C61-butyric acid methyl ester (P3AT:PCBM) and transparent conductive electrodes of poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) (PEDOT:PSS), and the devices could only be fabricated through a deep understanding of the connection between molecular structure and the co-engineering of electronic performance with mechanical resilience. The talk concludes with the use of composite piezoresistive sensors in two smart glove prototypes. The first integrates stretchable strain sensors comprising a carbon-elastomer composite, a wearable microcontroller, low energy Bluetooth, and a 6-axis accelerometer/gyroscope to construct a fully functional gesture recognition glove capable of wirelessly translating American Sign Language to text on a cell phone screen. The second creates a system for the haptic control of a 3D printed robot arm, as well as the transmission of touch and temperature information.

  16. Parallel language activation and inhibitory control in bimodal bilinguals.

    PubMed

    Giezen, Marcel R; Blumenfeld, Henrike K; Shook, Anthony; Marian, Viorica; Emmorey, Karen

    2015-08-01

    Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and between their two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between two languages. The present study investigated the extent of inhibitory control recruitment during bilingual language comprehension by examining associations between language co-activation and nonlinguistic inhibitory control abilities in bimodal bilinguals, whose two languages do not perceptually compete. Cross-linguistic distractor activation was identified in the visual world paradigm, and correlated significantly with performance on a nonlinguistic spatial Stroop task within a group of 27 hearing ASL-English bilinguals. Smaller Stroop effects (indexing more efficient inhibition) were associated with reduced co-activation of ASL signs during the early stages of auditory word recognition. These results suggest that inhibitory control in auditory word recognition is not limited to resolving perceptual linguistic competition in phonological input, but is also used to moderate competition that originates at the lexico-semantic level. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Lexical access in sign language: a computational model

    PubMed Central

    Caselli, Naomi K.; Cohen-Goldberg, Ariel M.

    2014-01-01

    Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition. PMID:24860539

  18. Visual Word Recognition by Bilinguals in a Sentence Context: Evidence for Nonselective Lexical Access

    ERIC Educational Resources Information Center

    Duyck, Wouter; Van Assche, Eva; Drieghe, Denis; Hartsuiker, Robert J.

    2007-01-01

    Recent research on bilingualism has shown that lexical access in visual word recognition by bilinguals is not selective with respect to language. In the present study, the authors investigated language-independent lexical access in bilinguals reading sentences, which constitutes a strong unilingual linguistic context. In the first experiment,…

  19. Automatization and Orthographic Development in Second Language Visual Word Recognition

    ERIC Educational Resources Information Center

    Kida, Shusaku

    2016-01-01

    The present study investigated second language (L2) learners' acquisition of automatic word recognition and the development of L2 orthographic representation in the mental lexicon. Participants in the study were Japanese university students enrolled in a compulsory course involving a weekly 30-minute sustained silent reading (SSR) activity with…

  20. Face Recognition Is Shaped by the Use of Sign Language

    ERIC Educational Resources Information Center

    Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier

    2018-01-01

    Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…

  1. L2 Gender Facilitation and Inhibition in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Behney, Jennifer N.

    2011-01-01

    This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…

  2. Scene Text Recognition using Similarity and a Lexicon with Sparse Belief Propagation

    PubMed Central

    Weinman, Jerod J.; Learned-Miller, Erik; Hanson, Allen R.

    2010-01-01

    Scene text recognition (STR) is the recognition of text anywhere in the environment, such as signs and store fronts. Relative to document recognition, it is challenging because of font variability, minimal language context, and uncontrolled conditions. Much information available to solve this problem is frequently ignored or used sequentially. Similarity between character images is often overlooked as useful information. Because of language priors, a recognizer may assign different labels to identical characters. Directly comparing characters to each other, rather than only a model, helps ensure that similar instances receive the same label. Lexicons improve recognition accuracy but are used post hoc. We introduce a probabilistic model for STR that integrates similarity, language properties, and lexical decision. Inference is accelerated with sparse belief propagation, a bottom-up method for shortening messages by reducing the dependency between weakly supported hypotheses. By fusing information sources in one model, we eliminate unrecoverable errors that result from sequential processing, improving accuracy. In experimental results recognizing text from images of signs in outdoor scenes, incorporating similarity reduces character recognition error by 19%, the lexicon reduces word recognition error by 35%, and sparse belief propagation reduces the lexicon words considered by 99.9% with a 12X speedup and no loss in accuracy. PMID:19696446

  3. Optical character recognition of handwritten Arabic using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.; Olama, Mohammed M.

    2011-04-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not received a satisfactory solution yet. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in an improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step of the overall OCR trends being currently researched in the literature. The proposed approach exploits the structure of characters in the Arabic language in addition to their extracted features to achieve improved recognition rates. Useful statistical information of the Arabic language is initially extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built based on the collected large corpus, and the emission matrix is built based on the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm which employs the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text raising the recognition rate from being linked to the worst recognition rate for any character to the overall structure of the Arabic language. Numerical results show that there is a potentially large recognition improvement by using the proposed algorithms.
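The decoding step the abstract describes is the standard Viterbi algorithm over an HMM: given trained transition and emission probabilities, it recovers the most probable hidden state sequence (here, sub-word classes) for an observation sequence. The sketch below uses toy two-state probabilities, not the paper's trained matrices:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Viterbi decoding: the most probable hidden state sequence of an
    HMM given an observation sequence, as used at the sub-word level in
    the Arabic OCR approach described above. Probabilities here are toy
    values, not the paper's corpus-derived matrices."""
    # V[t][s] = (probability of best path ending in s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p][0] * trans_p[p][s])
            V[t][s] = (V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s][obs[t]], prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, V[t][path[0]][1])
    return path

# Hypothetical sub-word classes "A"/"B" emitting feature symbols "x"/"y".
states = ("A", "B")
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.5, "y": 0.5}, "B": {"x": 0.1, "y": 0.9}}
print(viterbi(("x", "y", "y"), states, start_p, trans_p, emit_p))  # → ['A', 'B', 'B']
```

In the paper's terms, the transition matrix would come from the collected corpus and the emission matrix from the extracted character features.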

  4. Optical character recognition of handwritten Arabic using hidden Markov models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.

    2011-01-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not received a satisfactory solution yet. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in an improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step of the overall OCR trends being currently researched in the literature. The proposed approach exploits the structure of characters in the Arabic language in addition to their extracted features to achieve improved recognition rates. Useful statistical information of the Arabic language is initially extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built based on the collected large corpus, and the emission matrix is built based on the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm which employs the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text raising the recognition rate from being linked to the worst recognition rate for any character to the overall structure of the Arabic language. Numerical results show that there is a potentially large recognition improvement by using the proposed algorithms.

  5. Mexican sign language recognition using normalized moments and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita

    2014-09-01

    This work presents a framework designed for Mexican Sign Language (MSL) recognition. A data set of 24 static signs from the MSL was recorded, with 5 different versions of each; the dataset was captured using a digital camera in incoherent light conditions. Digital image processing was used to segment hand gestures; a uniform background was selected to avoid the need for gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of gray-scale signs, and an artificial neural network performed the recognition. Evaluated with 10-fold cross-validation in Weka, the best result achieved a recognition rate of 95.83%.
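Normalized geometric moments of the kind used here as shape features have a standard definition: central moments mu_pq about the image centroid, divided by m00^(1+(p+q)/2) so the features become scale-invariant. A generic pure-Python sketch (not the paper's implementation):

```python
def normalized_central_moments(img, orders=((2, 0), (0, 2), (1, 1), (3, 0), (0, 3))):
    """Normalized central moments eta_pq of a grayscale image given as a
    list of rows: eta_pq = mu_pq / m00**(1 + (p+q)/2). These are the
    standard scale-invariant shape features; this is a generic sketch,
    not the MSL system's actual feature extractor."""
    h, w = len(img), len(img[0])
    m00 = sum(img[y][x] for y in range(h) for x in range(w))
    cx = sum(x * img[y][x] for y in range(h) for x in range(w)) / m00
    cy = sum(y * img[y][x] for y in range(h) for x in range(w)) / m00
    etas = {}
    for p, q in orders:
        mu = sum(((x - cx) ** p) * ((y - cy) ** q) * img[y][x]
                 for y in range(h) for x in range(w))
        etas[(p, q)] = mu / (m00 ** (1 + (p + q) / 2))  # scale normalization
    return etas

# A centered 2x2 block of ones: symmetric, so eta_20 == eta_02 and eta_11 == 0.
img = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
etas = normalized_central_moments(img)
print(etas[(2, 0)] == etas[(0, 2)], abs(etas[(1, 1)]) < 1e-12)  # → True True
```

The resulting eta_pq vector, being invariant to translation and scale of the segmented hand, is what such a classifier would plausibly take as input.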

  6. The production and recognition of psychiatric original articles published in languages other than English.

    PubMed

    Baethge, Christopher

    2013-03-26

    Whereas the most influential journals in psychiatry are English language journals, periodicals published in other languages serve an important purpose for local communities of clinicians and researchers. This study aimed at analyzing the scientific production and the recognition of non-English general psychiatry journals. In a cohort study, the 2009 volume of ten journals from Brazil (1), German language countries (5), France (2), Italy (1), and Poland (1) was searched for original articles. Patterns of citations to these articles during 2010 and 2011 as documented in Web of Science were analyzed. The journals published 199 original articles (range: 4-46), mostly observational studies. Half of the papers were cited in the following two years. There were 246 citations received, or an average of 1.25 cites per article (range: 0.25-4.04). Many of these citations came from the local community, that is, from the same authors and journals. Citations by other periodicals and other authors accounted for 36% [95%-CI: 30%-42%], citations in English sources for 33% [28%-39%] of all quotations. There was considerable heterogeneity with regard to citations received among the ten journals investigated. Non-English language general psychiatry journals contribute substantially to the body of research. However, recognition, and in particular recognition by the international research community is moderate.

  7. Understanding native Russian listeners' errors on an English word recognition test: model-based analysis of phoneme confusion.

    PubMed

    Shi, Lu-Feng; Morozova, Natalia

    2012-08-01

    Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-ɪ/, /æ-ε/, and /ɑ-ʌ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.

  8. Modern Languages and Antiracism.

    ERIC Educational Resources Information Center

    O'Shaughnessy, Martin

    1988-01-01

    Discusses a school language department's antiracist/multicultural policy for modern languages. The policy stresses the need for a multicultural curriculum, exploration of racism, acceptance of all languages, recognition of specialized knowledge, and positive images of people from ethnic minority groups. (CB)

  9. The Affordance of Speech Recognition Technology for EFL Learning in an Elementary School Setting

    ERIC Educational Resources Information Center

    Liaw, Meei-Ling

    2014-01-01

    This study examined the use of speech recognition (SR) technology to support a group of elementary school children's learning of English as a foreign language (EFL). SR technology has been used in various language learning contexts. Its application to EFL teaching and learning is still relatively recent, but a solid understanding of its…

  10. User Experience of a Mobile Speaking Application with Automatic Speech Recognition for EFL Learning

    ERIC Educational Resources Information Center

    Ahn, Tae youn; Lee, Sangmin-Michelle

    2016-01-01

    With the spread of mobile devices, mobile phones have enormous potential regarding their pedagogical use in language education. The goal of this study is to analyse user experience of a mobile-based learning system that is enhanced by speech recognition technology for the improvement of EFL (English as a foreign language) learners' speaking…

  11. State-of-the-Art Article: The Role and Importance of Lower-Level Processes in Second Language Reading

    ERIC Educational Resources Information Center

    Nassaji, Hossein

    2014-01-01

    This article examines current research on the role and importance of lower-level processes in second language (L2) reading. The focus is on word recognition and its subcomponent processes, including various phonological and orthographic processes. Issues related to syntactic and semantic processes and their relationship with word recognition are…

  12. The Modulation of Visual and Task Characteristics of a Writing System on Hemispheric Lateralization in Visual Word Recognition--A Computational Exploration

    ERIC Educational Resources Information Center

    Hsiao, Janet H.; Lam, Sze Man

    2013-01-01

    Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face…

  13. Detailed Phonetic Labeling of Multi-language Database for Spoken Language Processing Applications

    DTIC Science & Technology

    2015-03-01

    which contains about 60 interfering speakers as well as background music in a bar, under clean-training/noisy-testing settings. A recognition system for Mandarin was developed and tested; character recognition rates as high as 88% were obtained.

  14. Semantic Ambiguity Effects in L2 Word Recognition

    ERIC Educational Resources Information Center

    Ishida, Tomomi

    2018-01-01

    The present study examined the ambiguity effects in second language (L2) word recognition. Previous studies on first language (L1) lexical processing have observed that ambiguous words are recognized faster and more accurately than unambiguous words on lexical decision tasks. In this research, L1 and L2 speakers of English were asked whether a…

  15. EduSpeak[R]: A Speech Recognition and Pronunciation Scoring Toolkit for Computer-Aided Language Learning Applications

    ERIC Educational Resources Information Center

    Franco, Horacio; Bratt, Harry; Rossier, Romain; Rao Gadde, Venkata; Shriberg, Elizabeth; Abrash, Victor; Precoda, Kristin

    2010-01-01

    SRI International's EduSpeak[R] system is a software development toolkit that enables developers of interactive language education software to use state-of-the-art speech recognition and pronunciation scoring technology. Automatic pronunciation scoring allows the computer to provide feedback on the overall quality of pronunciation and to point to…

  16. Using Automatic Speech Recognition Technology with Elicited Oral Response Testing

    ERIC Educational Resources Information Center

    Cox, Troy L.; Davies, Randall S.

    2012-01-01

    This study examined the use of automatic speech recognition (ASR) scored elicited oral response (EOR) tests to assess the speaking ability of English language learners. It also examined the relationship between ASR-scored EOR and other language proficiency measures and the ability of the ASR to rate speakers without bias to gender or native…

  17. Morphological Processing during Visual Word Recognition in Hebrew as a First and a Second Language

    ERIC Educational Resources Information Center

    Norman, Tal; Degani, Tamar; Peleg, Orna

    2017-01-01

    The present study examined whether sublexical morphological processing takes place during visual word-recognition in Hebrew, and whether morphological decomposition of written words depends on lexical activation of the complete word. Furthermore, it examined whether morphological processing is similar when reading Hebrew as a first language (L1)…

  18. The Two-Systems Account of Theory of Mind: Testing the Links to Social- Perceptual and Cognitive Abilities

    PubMed Central

    Meinhardt-Injac, Bozana; Daum, Moritz M.; Meinhardt, Günter; Persike, Malte

    2018-01-01

    According to the two-systems account of theory of mind (ToM), understanding mental states of others involves both fast social-perceptual processes and slower, reflexive cognitive operations (Frith and Frith, 2008; Apperly and Butterfill, 2009). To test the respective roles of specific abilities in either of these processes, we administered 15 experimental procedures to a large sample of 343 participants, testing ability in face recognition and holistic perception, language, and reasoning. ToM was measured by a set of tasks requiring ability to track and to infer complex emotional and mental states of others from faces, eyes, spoken language, and prosody. We used structural equation modeling to test the relative strengths of a social-perceptual (face processing related) and reflexive-cognitive (language and reasoning related) path in predicting ToM ability. The two paths accounted for 58% of ToM variance, thus validating a general two-systems framework. Testing specific predictor paths revealed language and face recognition as strong and significant predictors of ToM. For reasoning, there were neither direct nor mediated effects, although reasoning was strongly associated with language. Holistic face perception also failed to show a direct link with ToM ability, while there was a mediated effect via face recognition. These results highlight the respective roles of face recognition and language for the social brain, and contribute a closer empirical specification of the general two-systems account. PMID:29445336

  19. How do typically developing deaf children and deaf children with autism spectrum disorder use the face when comprehending emotional facial expressions in British sign language?

    PubMed

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-10-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their ability to judge emotion in a signed utterance is impaired (Reilly et al. in Sign Lang Stud 75:113-118, 1992). We examined the role of the face in the comprehension of emotion in sign language in a group of typically developing (TD) deaf children and in a group of deaf children with autism spectrum disorder (ASD). We replicated Reilly et al.'s (Sign Lang Stud 75:113-118, 1992) adult results in the TD deaf signing children, confirming the importance of the face in understanding emotion in sign language. The ASD group performed more poorly on the emotion recognition task than the TD children. The deaf children with ASD showed a deficit in emotion recognition during sign language processing analogous to the deficit in vocal emotion recognition that has been observed in hearing children with ASD.

  20. Bilingual recognition memory: stronger performance but weaker levels-of-processing effects in the less fluent language.

    PubMed

    Francis, Wendy S; Gutiérrez, Marisela

    2012-04-01

    The effects of bilingual proficiency on recognition memory were examined in an experiment with Spanish-English bilinguals. Participants learned lists of words in English and Spanish under shallow- and deep-encoding conditions. Overall, hit rates were higher, discrimination greater, and response times shorter in the nondominant language, consistent with effects previously observed for lower frequency words. Levels-of-processing effects in hit rates, discrimination, and response time were stronger in the dominant language. Specifically, with shallow encoding, the advantage for the nondominant language was larger than with deep encoding. The results support the idea that memory performance in the nondominant language is impacted by both the greater demand for cognitive resources and the lower familiarity of the words.

  1. Neighborhood Density and Syntactic Class Effects on Spoken Word Recognition: Specific Language Impairment and Typical Development

    ERIC Educational Resources Information Center

    Hoover, Jill R.

    2018-01-01

    Purpose: The purpose of the current study was to determine the effect of neighborhood density and syntactic class on word recognition in children with specific language impairment (SLI) and typical development (TD). Method: Fifteen children with SLI ("M" age = 6;5 [years;months]) and 15 with TD ("M" age = 6;4) completed a…

  2. Native-Language Phonological Interference in Early Hakka-Mandarin Bilinguals' Visual Recognition of Chinese Two-Character Compounds: Evidence from the Semantic-Relatedness Decision Task

    ERIC Educational Resources Information Center

    Wu, Shiyu; Ma, Zheng

    2017-01-01

    Previous research has indicated that, in viewing a visual word, the activated phonological representation in turn activates its homophone, causing semantic interference. Using this mechanism of phonological mediation, this study investigated native-language phonological interference in visual recognition of Chinese two-character compounds by early…

  3. Exploring the Relation between Memory, Gestural Communication, and the Emergence of Language in Infancy: A Longitudinal Study

    ERIC Educational Resources Information Center

    Heimann, Mikael; Strid, Karin; Smith, Lars; Tjus, Tomas; Ulvund, Stein Erik; Meltzoff, Andrew N.

    2006-01-01

    The relationship between recall memory, visual recognition memory, social communication, and the emergence of language skills was measured in a longitudinal study. Thirty typically developing Swedish children were tested at 6, 9 and 14 months. The result showed that, in combination, visual recognition memory at 6 months, deferred imitation at 9…

  4. Word Recognition and Nonword Repetition in Children with Language Disorders: The Effects of Neighborhood Density, Lexical Frequency, and Phonotactic Probability

    ERIC Educational Resources Information Center

    Rispens, Judith; Baker, Anne; Duinmeijer, Iris

    2015-01-01

    Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…

  5. Computer-Mediated Input, Output and Feedback in the Development of L2 Word Recognition from Speech

    ERIC Educational Resources Information Center

    Matthews, Joshua; Cheng, Junyu; O'Toole, John Mitchell

    2015-01-01

    This paper reports on the impact of computer-mediated input, output and feedback on the development of second language (L2) word recognition from speech (WRS). A quasi-experimental pre-test/treatment/post-test research design was used involving three intact tertiary level English as a Second Language (ESL) classes. Classes were either assigned to…

  6. Random Forest-Based Recognition of Isolated Sign Language Subwords Using Data from Accelerometers and Surface Electromyographic Sensors.

    PubMed

    Su, Ruiliang; Chen, Xiang; Cao, Shuai; Zhang, Xu

    2016-01-14

    Sign language recognition (SLR) has been widely used for communication amongst the hearing-impaired and non-verbal community. This paper proposes an accurate and robust SLR framework using an improved decision tree as the base classifier of random forests. The framework was used to recognize Chinese sign language (CSL) subwords from recordings made with a pair of portable devices, worn on both arms, consisting of accelerometers (ACC) and surface electromyography (sEMG) sensors. The experimental results demonstrated the validity of the proposed random forest-based method for recognition of CSL subwords. With the proposed method, 98.25% average accuracy was obtained for the classification of a list of 121 frequently used CSL subwords. Moreover, the random forests method demonstrated superior performance in resisting the impact of bad training samples. When the proportion of bad samples in the training set reached 50%, the recognition error rate of the random forest-based method was only 10.67%, while that of the single decision tree adopted in our previous work was almost 27.5%. Our study offers a practical way of realizing a robust and wearable EMG-ACC-based SLR system.
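
    The bad-sample robustness described above can be illustrated with a generic forest-versus-single-tree comparison. This is a hedged sketch on synthetic Gaussian features standing in for ACC/sEMG feature vectors; it is not the authors' improved decision tree, and all names and parameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for ACC + sEMG feature vectors: three subword
# classes, each a well-separated Gaussian cluster in 10 dimensions.
X_train = np.vstack([rng.normal(c, 1.0, size=(60, 10)) for c in (0.0, 3.0, 6.0)])
y_train = np.repeat([0, 1, 2], 60)
X_test = np.vstack([rng.normal(c, 1.0, size=(40, 10)) for c in (0.0, 3.0, 6.0)])
y_test = np.repeat([0, 1, 2], 40)

# Corrupt 30% of the training labels to mimic "bad training samples".
y_bad = y_train.copy()
flip = rng.choice(len(y_bad), size=len(y_bad) * 3 // 10, replace=False)
y_bad[flip] = rng.integers(0, 3, size=len(flip))

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_bad)
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_bad)

# Averaging many randomized trees washes out label noise that a single
# tree memorizes, so the forest generalizes better on clean test labels.
print(forest.score(X_test, y_test), tree.score(X_test, y_test))
```

    The single tree fits the noisy labels exactly and carries that noise into its test predictions, while the forest's majority vote recovers near-perfect accuracy.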

  7. Optimizing estimation of hemispheric dominance for language using magnetic source imaging

    PubMed Central

    Passaro, Antony D.; Rezaie, Roozbeh; Moser, Dana C.; Li, Zhimin; Dias, Nadeeka; Papanicolaou, Andrew C.

    2011-01-01

    The efficacy of magnetoencephalography (MEG) as an alternative to invasive methods for investigating the cortical representation of language has been explored in several studies. Recently, studies comparing MEG to the gold standard Wada procedure have found inconsistent and often less-than accurate estimates of laterality across various MEG studies. Here we attempted to address this issue among normal right-handed adults (N=12) by supplementing a well-established MEG protocol involving word recognition and the single dipole method with a sentence comprehension task and a beamformer approach localizing neural oscillations. Beamformer analysis of word recognition and sentence comprehension tasks revealed a desynchronization in the 10–18 Hz range, localized to the temporo-parietal cortices. Inspection of individual profiles of localized desynchronization (10–18 Hz) revealed left hemispheric dominance in 91.7% and 83.3% of individuals during the word recognition and sentence comprehension tasks, respectively. In contrast, single dipole analysis yielded lower estimates, such that activity in temporal language regions was left-lateralized in 66.7% and 58.3% of individuals during word recognition and sentence comprehension, respectively. The results obtained from the word recognition task and localization of oscillatory activity using a beamformer appear to be in line with general estimates of left hemispheric dominance for language in normal right-handed individuals. Furthermore, the current findings support the growing notion that changes in neural oscillations underlie critical components of linguistic processing. PMID:21890118

  8. Online Collaborative Communities of Learning for Pre-Service Teachers of Languages

    ERIC Educational Resources Information Center

    Morgan, Anne-Marie

    2015-01-01

    University programs for preparing preservice teachers of languages for teaching in schools generally involve generic pedagogy, methodology, curriculum, programming and issues foci that provide a bridge between the study of languages (or recognition of existing language proficiency) and the teaching of languages. There is much territory to cover…

  9. Hierarchically Structured Non-Intrusive Sign Language Recognition. Chapter 2

    NASA Technical Reports Server (NTRS)

    Zieren, Jorg; Zieren, Jorg; Kraiss, Karl-Friedrich

    2007-01-01

    This work presents a hierarchically structured approach to the non-intrusive recognition of sign language from a monocular frontal view. Robustness is achieved through sophisticated localization and tracking methods, including a combined EM/CAMSHIFT overlap resolution procedure and the parallel pursuit of multiple hypotheses about hand position and movement. This allows handling of ambiguities and automatically corrects tracking errors. A biomechanical skeleton model and dynamic motion prediction using Kalman filters represent high-level knowledge. Classification is performed by Hidden Markov Models. A total of 152 signs from German Sign Language were recognized with an accuracy of 97.6%.

  10. The production and recognition of psychiatric original articles published in languages other than English

    PubMed Central

    2013-01-01

    Background Whereas the most influential journals in psychiatry are English language journals, periodicals published in other languages serve an important purpose for local communities of clinicians and researchers. This study aimed at analyzing the scientific production and the recognition of non-English general psychiatry journals. Methods In a cohort study, the 2009 volume of ten journals from Brazil (1), German language countries (5), France (2), Italy (1), and Poland (1) was searched for original articles. Patterns of citations to these articles during 2010 and 2011 as documented in Web of Science were analyzed. Results The journals published 199 original articles (range: 4–46), mostly observational studies. Half of the papers were cited in the following two years. There were 246 citations received, or an average of 1.25 cites per article (range: 0.25-4.04). Many of these citations came from the local community, that is, from the same authors and journals. Citations by other periodicals and other authors accounted for 36% [95%-CI: 30%-42%], citations in English sources for 33% [28%-39%] of all quotations. There was considerable heterogeneity with regard to citations received among the ten journals investigated. Conclusion Non-English language general psychiatry journals contribute substantially to the body of research. However, recognition, and in particular recognition by the international research community is moderate. PMID:23531084

  11. Recognition of sign language with an inertial sensor-based data glove.

    PubMed

    Kim, Kyung-Won; Lee, Mi-So; Soon, Bo-Ram; Ryu, Mun-Ho; Kim, Je-Nam

    2015-01-01

    Communication between people with normal hearing and hearing impairment is difficult. Recently, a variety of studies on sign language recognition have presented benefits from the development of information technology. This study presents a sign language recognition system using a data glove composed of 3-axis accelerometers, magnetometers, and gyroscopes. The data obtained by the data glove are transmitted to a host application (implemented as a Windows program on a PC), where they are converted into angle data; the angle information is displayed by the host application and verified by rendering three-dimensional models to the display. An experiment was performed with five subjects, three females and two males, and a test set comprising the numbers one to nine was repeated five times. The system achieves a 99.26% movement detection rate and approximately 98% recognition rate for each finger's state. The proposed system is expected to become more portable and useful when the algorithm is applied in smartphone applications, for example in emergency situations.
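
    Converting raw accelerometer and gyroscope readings into stable joint angles is commonly done with a complementary filter. The sketch below is an illustrative assumption about how such a glove pipeline might fuse the sensors; the paper does not specify its conversion method, and all function names and constants here are hypothetical.

```python
import math

def accel_to_pitch(ax, ay, az):
    """Pitch angle (degrees) from a 3-axis accelerometer reading,
    using gravity as the reference direction."""
    return math.degrees(math.atan2(-ax, math.hypot(ay, az)))

def complementary_filter(accel_angle, gyro_rate, prev_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate (deg/s) with an accelerometer-derived angle.

    The gyroscope integrates smoothly but drifts; the accelerometer is
    noisy but drift-free. Blending the two gives a stable estimate."""
    return alpha * (prev_angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulated readings: the finger is held level (pitch = 0) while the gyro
# reports a small spurious rate; the filtered angle stays near zero
# instead of drifting without bound as pure integration would.
angle = 0.0
for _ in range(100):
    angle = complementary_filter(accel_to_pitch(0.0, 0.0, 1.0), 0.5, angle, dt=0.01)
print(angle)
```

    Pure gyro integration of the same spurious 0.5 deg/s rate would drift to 0.5 degrees after one second; the filter bounds the error near 0.2 degrees here.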

  12. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    NASA Astrophysics Data System (ADS)

    Assaleh, Khaled; Al-Rousan, M.

    2005-12-01

    Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
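
    The key property claimed above, that polynomial classifiers need no iterative training, comes from expanding the features into polynomial terms and then solving an ordinary least-squares problem in closed form. The following is a minimal sketch of that scheme on toy data, not the authors' ArSL system; the expansion order and target encoding are illustrative assumptions.

```python
import numpy as np

def poly_expand(X):
    """Second-order polynomial expansion: [1, x_i, x_i * x_j for i <= j]."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def train(X, y, n_classes):
    """One-shot training: a least-squares fit of expanded features to
    one-hot class targets -- no iterative optimization required."""
    P = poly_expand(X)
    T = np.eye(n_classes)[y]              # one-hot targets
    W, *_ = np.linalg.lstsq(P, T, rcond=None)
    return W

def predict(W, X):
    return np.argmax(poly_expand(X) @ W, axis=1)

# Toy stand-in for gesture feature vectors: XOR-like classes that a
# linear model cannot separate but a second-order expansion can.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 1, 1, 0])
W = train(X, y, n_classes=2)
print(predict(W, X))  # [0 1 1 0]
```

    Adding classes only adds columns to the target matrix, which is why this formulation scales gracefully with the number of classes.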

  13. The Suitability of Cloud-Based Speech Recognition Engines for Language Learning

    ERIC Educational Resources Information Center

    Daniels, Paul; Iwago, Koji

    2017-01-01

    As online automatic speech recognition (ASR) engines become more accurate and more widely implemented with call software, it becomes important to evaluate the effectiveness and the accuracy of these recognition engines using authentic speech samples. This study investigates two of the most prominent cloud-based speech recognition engines--Apple's…

  14. Recognition of face identity and emotion in expressive specific language impairment.

    PubMed

    Merkenschlager, A; Amorosa, H; Kiefl, H; Martinius, J

    2012-01-01

    To study face and emotion recognition in children with mostly expressive specific language impairment (SLI-E). A test movie to study perception and recognition of faces and mimic-gestural expression was applied to 24 children diagnosed as suffering from SLI-E and an age-matched control group of normally developing children. Compared to a normal control group, the SLI-E children scored significantly worse in both the face and expression recognition tasks with a preponderant effect on emotion recognition. The performance of the SLI-E group could not be explained by reduced attention during the test session. We conclude that SLI-E is associated with a deficiency in decoding non-verbal emotional facial and gestural information, which might lead to profound and persistent problems in social interaction and development. Copyright © 2012 S. Karger AG, Basel.

  15. Early Sign Language Experience Goes along with an Increased Cross-Modal Gain for Affective Prosodic Recognition in Congenitally Deaf CI Users

    ERIC Educational Resources Information Center

    Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte

    2018-01-01

    It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history in sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and…

  16. Word Recognition and Cognitive Profiles of Chinese Pre-School Children at Risk for Dyslexia through Language Delay or Familial History of Dyslexia

    ERIC Educational Resources Information Center

    McBride-Chang, Catherine; Lam, Fanny; Lam, Catherine; Doo, Sylvia; Wong, Simpson W. L.; Chow, Yvonne Y. Y.

    2008-01-01

    Background: This study sought to identify cognitive abilities that might distinguish Hong Kong Chinese kindergarten children at risk for dyslexia through either language delay or familial history of dyslexia from children who were not at risk and to examine how these abilities were associated with Chinese word recognition. The cognitive skills of…

  17. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  18. Engaging the Discourse of International Language Recognition through ISO 639-3 Signed Language Change Requests

    ERIC Educational Resources Information Center

    Parks, Elizabeth

    2015-01-01

    Linguistic ideologies that are left unquestioned and unexplored, especially as reflected and produced in marginalized language communities, can contribute to inequality made real in decisions about languages and the people who use them. One of the primary bodies of knowledge guiding international language policy is the International Organization…

  19. Sign Perception and Recognition in Non-Native Signers of ASL

    PubMed Central

    Morford, Jill P.; Carlson, Martina L.

    2011-01-01

    Past research has established that delayed first language exposure is associated with comprehension difficulties in non-native signers of American Sign Language (ASL) relative to native signers. The goal of the current study was to investigate potential explanations of this disparity: do non-native signers have difficulty with all aspects of comprehension, or are their comprehension difficulties restricted to some aspects of processing? We compared the performance of deaf non-native, hearing L2, and deaf native signers on a handshape and location monitoring and a sign recognition task. The results indicate that deaf non-native signers are as rapid and accurate on the monitoring task as native signers, with differences in the pattern of relative performance across handshape and location parameters. By contrast, non-native signers differ significantly from native signers during sign recognition. Hearing L2 signers, who performed almost as well as the two groups of deaf signers on the monitoring task, resembled the deaf native signers more than the deaf non-native signers on the sign recognition task. The combined results indicate that delayed exposure to a signed language leads to an overreliance on handshape during sign recognition. PMID:21686080

  20. Robust Optical Recognition of Cursive Pashto Script Using Scale, Rotation and Location Invariant Approach

    PubMed Central

    Ahmad, Riaz; Naz, Saeeda; Afzal, Muhammad Zeshan; Amin, Sayed Hassan; Breuel, Thomas

    2015-01-01

    The presence of a large number of unique shapes called ligatures in cursive languages, along with variations due to scaling, orientation and location, poses one of the most challenging pattern recognition problems. Recognition of the large number of ligatures is often a complicated task in oriental languages such as Pashto, Urdu, Persian and Arabic. Research on cursive script recognition often ignores the fact that scaling, orientation, location and font variations are common in printed cursive text; therefore, these variations are not included in image databases and experimental evaluations. This research uncovers the challenges faced by Arabic cursive script recognition in a holistic framework, considering Pashto as a test case because the Pashto language has a larger alphabet than Arabic, Persian and Urdu. A database containing 8000 images of 1000 unique ligatures with scaling, orientation and location variations is introduced. In this article, a feature space based on the scale-invariant feature transform (SIFT), along with a segmentation framework, is proposed for overcoming the above-mentioned challenges. The experimental results show a significantly improved performance of the proposed scheme over traditional feature extraction techniques such as principal component analysis (PCA).
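
    SIFT-style pipelines typically compare local descriptors with Lowe's nearest/second-nearest ratio test to discard ambiguous matches. The sketch below shows only that matching step on made-up 2-D "descriptors" (real SIFT vectors are 128-dimensional); it is a generic illustration, not the paper's feature space or segmentation framework.

```python
import numpy as np

def ratio_test_matches(desc_query, desc_db, ratio=0.75):
    """Match local descriptors (e.g., ones computed on a ligature image)
    against a database, keeping a match only when the nearest neighbor
    is clearly closer than the second nearest (Lowe's ratio test)."""
    matches = []
    for qi, q in enumerate(desc_query):
        dists = np.linalg.norm(desc_db - q, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((qi, int(order[0])))
    return matches

# Toy descriptors: the query vector is close to database entry 0 and far
# from the rest, so only that correspondence survives the ratio test.
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([[0.95, 0.05]])
matches = ratio_test_matches(query, db)
print(matches)  # [(0, 0)]
```

    Because the descriptors themselves are computed from local gradient patches, matching in this space is what buys tolerance to the scaling, rotation and location variations discussed above.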

  1. Intonation and dialog context as constraints for speech recognition.

    PubMed

    Taylor, P; King, S; Isard, S; Wright, H

    1998-01-01

    This paper describes a way of using intonation and dialog context to improve the performance of an automatic speech recognition (ASR) system. Our experiments were run on the DCIEM Maptask corpus, a corpus of spontaneous task-oriented dialog speech. This corpus has been tagged according to a dialog analysis scheme that assigns each utterance to one of 12 "move types," such as "acknowledge," "query-yes/no" or "instruct." Most ASR systems use a bigram language model to constrain the possible sequences of words that might be recognized. Here we use a separate bigram language model for each move type. We show that when the "correct" move-specific language model is used for each utterance in the test set, the word error rate of the recognizer drops. Of course when the recognizer is run on previously unseen data, it cannot know in advance what move type the speaker has just produced. To determine the move type we use an intonation model combined with a dialog model that puts constraints on possible sequences of move types, as well as the speech recognizer likelihoods for the different move-specific models. In the full recognition system, the combination of automatic move type recognition with the move specific language models reduces the overall word error rate by a small but significant amount when compared with a baseline system that does not take intonation or dialog acts into account. Interestingly, the word error improvement is restricted to "initiating" move types, where word recognition is important. In "response" move types, where the important information is conveyed by the move type itself--for example, positive versus negative response--there is no word error improvement, but recognition of the response types themselves is good. The paper discusses the intonation model, the language models, and the dialog model in detail and describes the architecture in which they are combined.
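
    The move-specific language-model idea above can be sketched as training one bigram model per move type and scoring an utterance under each. This is a minimal, hedged illustration with tiny made-up data in the spirit of the Maptask move types; the smoothing scheme and vocabularies are assumptions, not the paper's models.

```python
import math
from collections import defaultdict

def train_bigram(sentences):
    """Bigram model with add-one smoothing; returns a log-probability
    scorer over word sequences (with <s>/</s> boundary markers)."""
    counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for words in sentences:
        for prev, cur in zip(["<s>"] + words, words + ["</s>"]):
            counts[prev][cur] += 1
            vocab.update((prev, cur))
    V = len(vocab)

    def logprob(words):
        lp = 0.0
        for prev, cur in zip(["<s>"] + words, words + ["</s>"]):
            row = counts[prev]
            lp += math.log((row[cur] + 1) / (sum(row.values()) + V))
        return lp

    return logprob

# Hypothetical per-move-type training data ("acknowledge" vs. a
# yes/no query), one bigram model per move type.
models = {
    "acknowledge": train_bigram([["okay"], ["right", "okay"], ["uh", "huh"]]),
    "query-yn": train_bigram([["do", "you", "see", "it"], ["is", "that", "right"]]),
}

# Score an utterance under every move-specific model and pick the best.
utterance = ["right", "okay"]
best_move = max(models, key=lambda m: models[m](utterance))
print(best_move)  # acknowledge
```

    In the full system these per-move likelihoods are not used alone: they are combined with the intonation model and the dialog model's constraints on move-type sequences.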

  2. Speech Recognition and Parent Ratings From Auditory Development Questionnaires in Children Who Are Hard of Hearing.

    PubMed

    McCreery, Ryan W; Walker, Elizabeth A; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HAs) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-reported auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children rating scale, and an adaptation of the Speech, Spatial, and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open- and Closed-Set Test, Early Speech Perception test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared with peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater aided audibility through their HAs, more hours of HA use, and better language abilities generally had higher parent ratings of auditory skills and better speech-recognition abilities in quiet and in noise than peers with less audibility, more limited HA use, or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Children who are hard of hearing continue to experience delays in auditory skill development and speech-recognition abilities compared with peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported before the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech-recognition abilities and also may enhance these skills by improving language abilities in children who are hard of hearing. A greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children's speech recognition.

  3. Emotion Recognition from Chinese Speech for Smart Affective Services Using a Combination of SVM and DBN

    PubMed Central

    Zhu, Lianzhang; Chen, Leiming; Zhao, Dehai

    2017-01-01

    Accurate emotion recognition from speech is important for applications like smart health care, smart entertainment, and other smart services. High accuracy emotion recognition from Chinese speech is challenging due to the complexities of the Chinese language. In this paper, we explore how to improve the accuracy of speech emotion recognition, including speech signal feature extraction and emotion classification methods. Five types of features are extracted from a speech sample: mel frequency cepstrum coefficient (MFCC), pitch, formant, short-term zero-crossing rate and short-term energy. By comparing statistical features with deep features extracted by a Deep Belief Network (DBN), we attempt to find the best features to identify the emotion status for speech. We propose a novel classification method that combines DBN and SVM (support vector machine) instead of using only one of them. In addition, a conjugate gradient method is applied to train DBN in order to speed up the training process. Gender-dependent experiments are conducted using an emotional speech database created by the Chinese Academy of Sciences. The results show that DBN features can reflect emotion status better than artificial features, and our new classification approach achieves an accuracy of 95.8%, which is higher than using either DBN or SVM separately. Results also show that DBN can work very well for small training databases if it is properly designed. PMID:28737705
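
    Two of the artificial features listed above, short-term energy and short-term zero-crossing rate, are simple enough to sketch directly from their definitions. This is an illustrative example on a synthetic sine, not the paper's implementation; MFCC, pitch and formant extraction require considerably more machinery.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def short_term_energy(frames):
    """Mean squared amplitude per frame."""
    return np.mean(frames ** 2, axis=1)

def zero_crossing_rate(frames):
    """Fraction of adjacent sample pairs whose signs differ."""
    return np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

# A 100 Hz sine at an 8 kHz sampling rate; the small phase offset keeps
# samples off exact zeros so the sign function is well defined.
sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 100 * t + 0.1)
frames = frame_signal(x, frame_len=400, hop=200)
print(round(float(short_term_energy(frames)[0]), 3))  # 0.5
```

    Each 400-sample frame covers five full periods, so the energy is exactly the sine's mean square of 0.5, and the zero-crossing rate reflects the expected two crossings per cycle.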

  4. Clustering of Farsi sub-word images for whole-book recognition

    NASA Astrophysics Data System (ADS)

    Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2015-01-01

    Redundancy of word and sub-word occurrences in large documents can be effectively utilized in an OCR system to improve recognition results. Most OCR systems employ language modeling techniques as a post-processing step; however, these techniques do not use the important pictorial information that exists in the text image. In the case of large-scale recognition of degraded documents, this information is even more valuable. In our previous work, we proposed a sub-word image clustering method for applications dealing with large printed documents. In our clustering method, the ideal case is when all equivalent sub-word images lie in one cluster. To overcome the issues of low print quality, the clustering method uses an image matching algorithm to measure the distance between two sub-word images. The measured distance, together with a set of simple shape features, was used to cluster all sub-word images. In this paper, we analyze the effects of adding more shape features on processing time, purity of clustering, and the final recognition rate. Previously published experiments have shown the efficiency of our method on one book. Here we present extended experimental results and evaluate our method on another book with a totally different typeface. We also show that the number of new clusters created on a page can be used as a criterion for assessing print quality and evaluating preprocessing phases.
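
    The clustering step, grouping sub-word images whose distance falls below a threshold, can be sketched generically. This is a hedged illustration using a greedy assignment and made-up shape-feature tuples; the paper's actual image-matching distance and clustering procedure are more elaborate.

```python
def cluster_by_distance(items, dist, threshold):
    """Greedy clustering: assign each item to the first cluster whose
    representative (first member) is within `threshold`, else start a
    new cluster."""
    clusters = []
    for item in items:
        for cluster in clusters:
            if dist(item, cluster[0]) <= threshold:
                cluster.append(item)
                break
        else:
            clusters.append([item])
    return clusters

# Toy stand-ins for sub-word shape features (e.g., width, height, holes).
feats = [(10, 5, 1), (11, 5, 1), (30, 8, 2), (29, 8, 2), (10, 6, 1)]
d = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
print([len(c) for c in cluster_by_distance(feats, d, threshold=3)])  # [3, 2]
```

    On a clean page, near-identical sub-word images collapse into few clusters; a page that spawns many new singleton clusters signals poor print quality, which is the assessment criterion mentioned above.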

  5. Working Memory and Language Learning: A Review

    ERIC Educational Resources Information Center

    Archibald, Lisa M. D.

    2017-01-01

    Children with speech, language, and communication needs (SLCN) form a highly heterogeneous group, including those with an unexplained delay in language development known as specific language impairment (SLI). There is growing recognition that multiple mechanisms underlie the range of profiles observed in these children. Broadly speaking, both the…

  6. Language deficits in poor comprehenders: a case for the simple view of reading.

    PubMed

    Catts, Hugh W; Adlof, Suzanne M; Ellis Weismer, Susan

    2006-04-01

    To examine concurrently and retrospectively the language abilities of children with specific reading comprehension deficits ("poor comprehenders") and compare them to typical readers and children with specific decoding deficits ("poor decoders"). In Study 1, the authors identified 57 poor comprehenders, 27 poor decoders, and 98 typical readers on the basis of 8th-grade reading achievement. These subgroups' performances on 8th-grade measures of language comprehension and phonological processing were investigated. In Study 2, the authors examined retrospectively subgroups' performances on measures of language comprehension and phonological processing in kindergarten, 2nd, and 4th grades. Word recognition and reading comprehension in 2nd and 4th grades were also considered. Study 1 showed that poor comprehenders had concurrent deficits in language comprehension but normal abilities in phonological processing. Poor decoders were characterized by the opposite pattern of language abilities. Study 2 results showed that subgroups had language (and word recognition) profiles in the earlier grades that were consistent with those observed in 8th grade. Subgroup differences in reading comprehension were inconsistent across grades but reflective of the changes in the components of reading comprehension over time. The results support the simple view of reading and the phonological deficit hypothesis. Furthermore, the findings indicate that a classification system that is based on the simple view has advantages over standard systems that focus only on word recognition and/or reading comprehension.

  7. Linguistic contributions to speech-on-speech masking for native and non-native listeners: language familiarity and semantic content.

    PubMed

    Brouwer, Susanne; Van Engen, Kristin J; Calandruccio, Lauren; Bradlow, Ann R

    2012-02-01

    This study examined whether speech-on-speech masking is sensitive to variation in the degree of similarity between the target and the masker speech. Three experiments investigated whether speech-in-speech recognition varies across different background speech languages (English vs Dutch) for both English and Dutch targets, as well as across variation in the semantic content of the background speech (meaningful vs semantically anomalous sentences), and across variation in listener status vis-à-vis the target and masker languages (native, non-native, or unfamiliar). The results showed that the more similar the target speech is to the masker speech (e.g., same vs different language, same vs different levels of semantic content), the greater the interference on speech recognition accuracy. Moreover, the listener's knowledge of the target and the background language modulates the size of the release from masking. These factors had an especially strong effect on masking effectiveness in highly unfavorable listening conditions. Overall this research provided evidence that the degree of target-masker similarity plays a significant role in speech-in-speech recognition. The results also give insight into how listeners assign their resources differently depending on whether they are listening to their first or second language. © 2012 Acoustical Society of America

  8. Linguistic contributions to speech-on-speech masking for native and non-native listeners: Language familiarity and semantic content

    PubMed Central

    Brouwer, Susanne; Van Engen, Kristin J.; Calandruccio, Lauren; Bradlow, Ann R.

    2012-01-01

    This study examined whether speech-on-speech masking is sensitive to variation in the degree of similarity between the target and the masker speech. Three experiments investigated whether speech-in-speech recognition varies across different background speech languages (English vs Dutch) for both English and Dutch targets, as well as across variation in the semantic content of the background speech (meaningful vs semantically anomalous sentences), and across variation in listener status vis-à-vis the target and masker languages (native, non-native, or unfamiliar). The results showed that the more similar the target speech is to the masker speech (e.g., same vs different language, same vs different levels of semantic content), the greater the interference on speech recognition accuracy. Moreover, the listener's knowledge of the target and the background language modulates the size of the release from masking. These factors had an especially strong effect on masking effectiveness in highly unfavorable listening conditions. Overall this research provided evidence that the degree of target-masker similarity plays a significant role in speech-in-speech recognition. The results also give insight into how listeners assign their resources differently depending on whether they are listening to their first or second language. PMID:22352516

  9. Tone Attrition in Mandarin Speakers of Varying English Proficiency

    PubMed Central

    Creel, Sarah C.

    2017-01-01

    Purpose The purpose of this study was to determine whether the degree of dominance of Mandarin–English bilinguals' languages affects phonetic processing of tone content in their native language, Mandarin. Method We tested 72 Mandarin–English bilingual college students with a range of language-dominance profiles in the 2 languages and ages of acquisition of English. Participants viewed 2 photographs at a time while hearing a familiar Mandarin word referring to 1 photograph. The names of the 2 photographs diverged in tone, vowels, or both. Word recognition was evaluated using clicking accuracy, reaction times, and an online recognition measure (gaze) and was compared in the 3 conditions. Results Relative proficiency in English was correlated with reduced word recognition success in tone-disambiguated trials, but not in vowel-disambiguated trials, across all 3 dependent measures. This selective attrition for tone content emerged even though all bilinguals had learned Mandarin from birth. Lengthy experience with English thus weakened tone use. Conclusions This finding has implications for the question of the extent to which bilinguals' 2 phonetic systems interact. It suggests that bilinguals may not process pitch information language-specifically and that processing strategies from the dominant language may affect phonetic processing in the nondominant language—even when the latter was learned natively. PMID:28124064

  10. Action research, simulation, team communication, and bringing the tacit into voice society for simulation in healthcare.

    PubMed

    Forsythe, Lydia

    2009-01-01

    In healthcare, professionals usually function in a time-constrained paradigm because of the nature of care delivery functions and the acute patient populations usually in need of emergent and urgent care. This leaves little, if any, time for team reflection, or team processing as a collaborative action. Simulation can be used to create a safe space as a structure for recognition and innovation, to continue to develop a culture of safety for healthcare delivery and patient care. To create and develop a safe space, three qualitative, institutional review board-approved modified action research studies were conducted using simulation to explore team communication as an unfolding in the acute care environment of the operating room. An action heuristic was used for data collection by capturing the participants' narratives in the form of collaborative recall and reflection to standardize task, process, and language. During the qualitative simulations, the team participants identified and changed multiple task, process, and language items. The simulations contributed to positive changes in tasks and efficiencies, team interactions, and the overall functionality of the team. The studies demonstrated that simulation can be used in healthcare to define safe spaces to practice, reflect, and develop collaborative relationships, which contribute to the realization of a culture of safety.

  11. Language comprehenders retain implied shape and orientation of objects.

    PubMed

    Pecher, Diane; van Dantzig, Saskia; Zwaan, Rolf A; Zeelenberg, René

    2009-06-01

    According to theories of embodied cognition, language comprehenders simulate sensorimotor experiences to represent the meaning of what they read. Previous studies have shown that picture recognition is better if the object in the picture matches the orientation or shape implied by a preceding sentence. In order to test whether strategic imagery may explain previous findings, language comprehenders first read a list of sentences in which objects were mentioned. Only once the complete list had been read was recognition memory tested with pictures. Recognition performance was better if the orientation or shape of the object matched that implied by the sentence, both immediately after reading the complete list of sentences and after a 45-min delay. These results suggest that previously found match effects were not due to strategic imagery and show that details of sensorimotor simulations are retained over longer periods.

  12. Optimizing estimation of hemispheric dominance for language using magnetic source imaging.

    PubMed

    Passaro, Antony D; Rezaie, Roozbeh; Moser, Dana C; Li, Zhimin; Dias, Nadeeka; Papanicolaou, Andrew C

    2011-10-06

    The efficacy of magnetoencephalography (MEG) as an alternative to invasive methods for investigating the cortical representation of language has been explored in several studies. Recently, studies comparing MEG to the gold-standard Wada procedure have found inconsistent and often less-than-accurate estimates of laterality across various MEG studies. Here we attempted to address this issue among normal right-handed adults (N=12) by supplementing a well-established MEG protocol involving word recognition and the single-dipole method with a sentence comprehension task and a beamformer approach localizing neural oscillations. Beamformer analysis of the word recognition and sentence comprehension tasks revealed a desynchronization in the 10-18 Hz range, localized to the temporo-parietal cortices. Inspection of individual profiles of localized desynchronization (10-18 Hz) revealed left hemispheric dominance in 91.7% and 83.3% of individuals during the word recognition and sentence comprehension tasks, respectively. In contrast, single-dipole analysis yielded lower estimates, such that activity in temporal language regions was left-lateralized in 66.7% and 58.3% of individuals during word recognition and sentence comprehension, respectively. The results obtained from the word recognition task and localization of oscillatory activity using a beamformer appear to be in line with general estimates of left hemispheric dominance for language in normal right-handed individuals. Furthermore, the current findings support the growing notion that changes in neural oscillations underlie critical components of linguistic processing. Published by Elsevier B.V.

  13. Effects of Semantic Context and Fundamental Frequency Contours on Mandarin Speech Recognition by Second Language Learners.

    PubMed

    Zhang, Linjun; Li, Yu; Wu, Han; Li, Xin; Shu, Hua; Zhang, Yang; Li, Ping

    2016-01-01

    Speech recognition by second language (L2) learners in optimal and suboptimal conditions has been examined extensively, with English as the target language in most previous studies. This study extended existing experimental protocols (Wang et al., 2013) to investigate Mandarin speech recognition by Japanese learners of Mandarin at two different levels of proficiency (elementary vs intermediate). The overall results showed that in addition to L2 proficiency, semantic context, F0 contours, and listening condition all affected recognition performance on the Mandarin sentences. However, the effects of semantic context and F0 contours on L2 speech recognition diverged to some extent. Specifically, there was a significant modulation effect of listening condition on semantic context, indicating that L2 learners made use of semantic context less efficiently in the interfering background than in quiet. In contrast, no significant modulation effect of listening condition on F0 contours was found. Furthermore, there was a significant interaction between semantic context and F0 contours, indicating that semantic context becomes more important for L2 speech recognition when F0 information is degraded. None of these effects were found to be modulated by L2 proficiency. The discrepancy in the effects of semantic context and F0 contours on L2 speech recognition in the interfering background might be related to differences in the processing capacities required by the two types of information in adverse listening conditions.

  14. U.S. Army Research Laboratory (ARL) Corporate Dari Document Transcription and Translation Guidelines

    DTIC Science & Technology

    2012-10-01

    text file format. 15. SUBJECT TERMS Transcription, Translation, guidelines, ground truth, Optical character recognition, OCR, Machine Translation, MT...foreign language into a target language in order to train, test, and evaluate optical character recognition (OCR) and machine translation (MT) embedded...graphic element and should not be transcribed. Elements that are not part of the primary text such as handwritten annotations or stamps should not be

  15. Auditory Modeling for Noisy Speech Recognition.

    DTIC Science & Technology

    2000-01-01

    multiple platforms including PCs, workstations, and DSPs. A prototype version of the SOS process was tested on the Japanese Hiragana language with good...judgment among linguists. American English has 48 phonetic sounds in the ARPABET representation. Hiragana, the Japanese phonetic language, has only 20... Japanese Hiragana," H.L. Pfister, FL 95, 1995. "State Recognition for Noisy Dynamic Systems," H.L. Pfister, Tech 2005, Chicago, 1995. "Experiences

  16. Sign Language Recognition System using Neural Network for Digital Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Vargas, Lorena P.; Barba, Leiner; Torres, C. O.; Mattos, L.

    2011-01-01

    This work presents an image pattern recognition system that uses a neural network to identify sign language for deaf people. The system stores several images showing the specific symbols of this language, which are used to train a multilayer neural network with a back-propagation algorithm. Initially, the images are processed to adapt them and to improve the discrimination performance of the network; this processing includes filtering, noise reduction and elimination algorithms, as well as edge detection. The system is evaluated using signs whose representation does not include movement.

  17. Visual recognition of permuted words

    NASA Astrophysics Data System (ADS)

    Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.

    2010-02-01

    In the current study we examine how letter permutation affects visual recognition of words in two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition or reading of permuted and non-permuted words are two distinct mental processes, and that people use different strategies in handling permuted words compared to normal words. A comparison between the reading behavior of people in these two languages is also presented. We frame our study in the context of dual-route theories of reading, and we observe that dual-route theory is consistent with our hypothesis of distinct underlying cognitive behavior for reading permuted and non-permuted words. We conducted three lexical decision experiments to analyze how reading is degraded or affected by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and t-tests to determine significant differences in response-time latencies between the two classes of data. Results showed that recognition accuracy for permuted words decreased by 31% for Urdu and 11% for German. We also found a considerable difference in reading behavior between cursive and alphabetic scripts, observing that reading Urdu is comparatively slower than reading German due to the characteristics of its cursive script.

  18. Critically Engaging with Cultural Representations in Foreign Language Textbooks

    ERIC Educational Resources Information Center

    McConachy, Troy

    2018-01-01

    There is currently strong recognition within the field of intercultural language teaching of the need for language learners to develop the ability to actively interpret and critically reflect on cultural meanings and representations from a variety of perspectives. This article argues that cultural representations contained in language textbooks,…

  19. Law, Language, and the Multiethnic State.

    ERIC Educational Resources Information Center

    De Varennes, Fernand

    1996-01-01

    Examines why language policies should be considered in a multiethnic state and suggests that there are human rights issues that mandate some recognition of language demands and usage beyond what some states may provide. The article emphasizes that questions of language, ethnicity, and nationalism must be addressed in a rational and coherent…

  20. The Structural Connectivity Underpinning Language Aptitude, Working Memory, and IQ in the Perisylvian Language Network

    ERIC Educational Resources Information Center

    Xiang, Huadong; Dediu, Dan; Roberts, Leah; van Oort, Erik; Norris, David G.; Hagoort, Peter

    2012-01-01

    In this article, we report the results of a study on the relationship between individual differences in language learning aptitude and the structural connectivity of language pathways in the adult brain, the first of its kind. We measured four components of language aptitude ("vocabulary learning"; "sound recognition"; "sound-symbol…

  1. Staff Report to the Senior Department Official on Recognition Compliance Issues. Recommendation Page: American Speech-Language-Hearing Association

    ERIC Educational Resources Information Center

    US Department of Education, 2010

    2010-01-01

    The American Speech-Language-Hearing Association, Council on Academic Accreditation in Audiology and Speech-Language Pathology (CAA) is a national accrediting agency of graduate education programs in audiology or speech-language pathology. The CAA currently accredits or preaccredits 319 programs (247 in speech-language pathology and 72 in…

  2. Bilingual Education for Deaf Children in Sweden

    ERIC Educational Resources Information Center

    Svartholm, Kristina

    2010-01-01

    In 1981, Swedish Sign Language gained recognition by the Swedish Parliament as the language of deaf people, a decision that made Sweden the first country in the world to give a sign language the status of a language. Swedish was designated as a second language for deaf people, and the need for bilingualism among them was officially asserted. This…

  3. PLAYGROUND: preparing students for the cyber battleground

    NASA Astrophysics Data System (ADS)

    Nielson, Seth James

    2016-12-01

    Attempting to educate practitioners of computer security can be difficult if for no other reason than the breadth of knowledge required today. The security profession includes widely diverse subfields including cryptography, network architectures, programming, programming languages, design, coding practices, software testing, pattern recognition, economic analysis, and even human psychology. While an individual may choose to specialize in one of these more narrow elements, there is a pressing need for practitioners that have a solid understanding of the unifying principles of the whole. We created the Playground network simulation tool and used it in the instruction of a network security course to graduate students. This tool was created for three specific purposes. First, it provides simulation sufficiently powerful to permit rigorous study of desired principles while simultaneously reducing or eliminating unnecessary and distracting complexities. Second, it permitted the students to rapidly prototype a suite of security protocols and mechanisms. Finally, with equal rapidity, the students were able to develop attacks against the protocols that they themselves had created. Based on our own observations and student reviews, we believe that these three features combine to create a powerful pedagogical tool that provides students with a significant amount of breadth and intense emotional connection to computer security in a single semester.

  4. Discovering Peripheral Arterial Disease Cases from Radiology Notes Using Natural Language Processing

    PubMed Central

    Savova, Guergana K.; Fan, Jin; Ye, Zi; Murphy, Sean P.; Zheng, Jiaping; Chute, Christopher G.; Kullo, Iftikhar J.

    2010-01-01

    As part of the Electronic Medical Records and Genomics Network, we applied, extended and evaluated an open source clinical Natural Language Processing system, Mayo’s Clinical Text Analysis and Knowledge Extraction System, for the discovery of peripheral arterial disease cases from radiology reports. The manually created gold standard consisted of 223 positive, 19 negative, 63 probable and 150 unknown cases. Overall accuracy agreement between the system and the gold standard was 0.93 as compared to a named entity recognition baseline of 0.46. Sensitivity for the positive, probable and unknown cases was 0.93–0.96, and for the negative cases was 0.72. Specificity and negative predictive value for all categories were in the 90’s. The positive predictive value for the positive and unknown categories was in the high 90’s, for the negative category was 0.84, and for the probable category was 0.63. We outline the main sources of errors and suggest improvements. PMID:21347073
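
    The sensitivity, specificity, and predictive values reported in the record above follow directly from a 2x2 confusion matrix per category. A generic sketch (the counts below are made up for illustration, not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard case-detection metrics from confusion-matrix counts."""
    return {
        'sensitivity': tp / (tp + fn),          # true positive rate
        'specificity': tn / (tn + fp),          # true negative rate
        'ppv': tp / (tp + fp),                  # positive predictive value
        'npv': tn / (tn + fn),                  # negative predictive value
        'accuracy': (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for one category of a case-detection task
m = binary_metrics(tp=90, fp=10, tn=80, fn=20)
```

    In a multi-category evaluation like the one above, each category (positive, negative, probable, unknown) is scored one-vs-rest with its own set of counts.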

  5. L2 Word Recognition Research: A Critical Review.

    ERIC Educational Resources Information Center

    Koda, Keiko

    1996-01-01

    Explores conceptual syntheses advancing second language (L2) word recognition research and uncovers agendas relating to cross-linguistic examinations of L2 processing in a cohort of undergraduate students in France. Describes connections between word recognition and reading, overviews the connectionist construct, and illustrates cross-linguistic…

  6. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  7. Data-driven approach to human motion modeling with Lua and gesture description language

    NASA Astrophysics Data System (ADS)

    Hachaj, Tomasz; Koptyra, Katarzyna; Ogiela, Marek R.

    2017-03-01

    The aim of this paper is to present a novel human motion modelling and recognition approach that enables real-time MoCap signal evaluation. By motion (action) recognition we mean classification. The role of this approach is to propose a syntactic description procedure that can be easily understood, learnt and used in various motion modelling and recognition tasks in all MoCap systems, whether vision-based or wearable-sensor-based. To do so, we have prepared an extension of the Gesture Description Language (GDL) methodology that enables movement description and real-time recognition, so that it can use not only the positional coordinates of body joints but virtually any type of discretely sampled MoCap output signal, such as accelerometer, magnetometer or gyroscope data. We have also prepared and evaluated a cross-platform implementation of this approach using the Lua scripting language and JAVA technology, called Data-Driven GDL (DD-GDL). In the tested scenarios the average execution speed is above 100 frames per second, which matches the acquisition rate of many popular MoCap solutions.
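
    The general idea of rule-based motion recognition over MoCap joint streams can be loosely illustrated in plain Python (standing in for GDL/Lua; the joint names, y-up axis convention, and run-length rule are all assumptions for the sketch):

```python
def hands_above_head(frame):
    """Rule: both hand joints higher than the head joint (y-up convention)."""
    # frame: dict mapping joint name -> (x, y, z) coordinates
    return (frame['left_hand'][1] > frame['head'][1] and
            frame['right_hand'][1] > frame['head'][1])

def recognize_sequence(frames, rule, min_consecutive=3):
    """Fire when the rule holds over a run of consecutive MoCap frames."""
    run = 0
    for f in frames:
        run = run + 1 if rule(f) else 0
        if run >= min_consecutive:
            return True
    return False
```

    Requiring a minimum run of frames is a common way to make such rules robust against single-frame sensor noise.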

  8. Episodic memory retrieval in adolescents with and without developmental language disorder (DLD).

    PubMed

    Lee, Joanna C

    2018-03-01

    Two reasons may explain the discrepant findings regarding declarative memory in developmental language disorder (DLD) in the literature. First, standardized tests are one of the primary tools used to assess declarative memory in previous studies, and they may not be sensitive enough to detect subtle memory impairment. Second, the system underlying declarative memory is complex, so results may vary depending on the types of encoding and retrieval processes measured (e.g., item-specific or relational) and/or task demands (e.g., recall or recognition during memory retrieval). This study adopted an experimental paradigm to examine episodic memory functioning in adolescents with and without DLD, focusing on memory recognition of item-specific and relational information. Two groups of adolescents, one with DLD (n = 23; mean age = 16.73 years) and the other without (n = 23; mean age = 16.75 years), participated in the study. The Relational and Item-Specific Encoding (RISE) paradigm was used to assess the effect of different encoding processes on episodic memory retrieval in DLD; its advantage is that both item-specific and relational encoding/retrieval can be examined within the same learning paradigm. Adolescents with DLD and those with typical language development showed comparable engagement during the encoding phase. The DLD group showed significantly poorer item recognition than the comparison group. Associative recognition was not significantly different between the two groups; however, there was a non-significant trend for it to be poorer in the DLD group than in the comparison group, suggesting a possible impairment in associative recognition in individuals with DLD, but of a lesser magnitude. These results indicate that adolescents with DLD have difficulty with episodic memory retrieval when stimuli are encoded and retrieved without support from contextual information. Associative recognition is relatively less affected than item recognition in adolescents with DLD. © 2017 Royal College of Speech and Language Therapists.

  9. Effective Prediction of Errors by Non-native Speakers Using Decision Tree for Speech Recognition-Based CALL System

    NASA Astrophysics Data System (ADS)

    Wang, Hongcui; Kawahara, Tatsuya

    CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or to the ASR grammar network. However, this approach quickly runs into a trade-off between error coverage and increased perplexity. To solve the problem, we propose a method based on a decision tree that learns to predict the errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, that achieves both better error coverage and lower perplexity, resulting in a significant improvement in ASR accuracy.
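
    Decision-tree learning of the kind described above chooses, at each node, the feature that maximally reduces label entropy. A generic information-gain sketch (the feature name and toy learner data are hypothetical, not taken from the paper):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label multiset."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    """Expected entropy reduction from splitting on one categorical feature."""
    splits = {}
    for row, lab in zip(rows, labels):
        splits.setdefault(row[feature], []).append(lab)
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in splits.values())
    return entropy(labels) - remainder

# Toy utterance contexts: does the target syllable contain a long vowel?
rows = [{'long_vowel': 'y'}, {'long_vowel': 'y'},
        {'long_vowel': 'n'}, {'long_vowel': 'n'}]
labels = ['error', 'error', 'correct', 'correct']
gain = information_gain(rows, labels, 'long_vowel')
```

    A tree built by greedily selecting the highest-gain feature at each node yields compact rules about which contexts are error-prone, which is what keeps the generated grammar network's perplexity low.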

  10. Legal Pathways to the Recognition of Sign Languages: A Comparison of the Catalan and Spanish Sign Language Acts

    ERIC Educational Resources Information Center

    Quer, Josep

    2012-01-01

    Despite being minority languages like many others, sign languages have traditionally remained absent from the agendas of policy makers and language planning and policies. In the past two decades, though, this situation has started to change at different paces and to different degrees in several countries. In this article, the author describes the…

  11. Language Development in Children with Language Disorders: An Introduction to Skinner's Verbal Behavior and the Techniques for Initial Language Acquisition

    ERIC Educational Resources Information Center

    Casey, Laura Baylot; Bicard, David F.

    2009-01-01

    Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…

  12. The Promise of NLP and Speech Processing Technologies in Language Assessment

    ERIC Educational Resources Information Center

    Chapelle, Carol A.; Chung, Yoo-Ree

    2010-01-01

    Advances in natural language processing (NLP) and automatic speech recognition and processing technologies offer new opportunities for language testing. Despite their potential uses on a range of language test item types, relatively little work has been done in this area, and it is therefore not well understood by test developers, researchers or…

  13. Representations of the language recognition problem for a theorem prover

    NASA Technical Reports Server (NTRS)

    Minker, J.; Vanderbrug, G. J.

    1972-01-01

    Two representations of the language recognition problem for a theorem prover in first order logic are presented and contrasted. One of the representations is based on the familiar method of generating sentential forms of the language, and the other is based on the Cocke parsing algorithm. An augmented theorem prover is described which permits recognition of recursive languages. The state-transformation method developed by Cordell Green to construct problem solutions in resolution-based systems can be used to obtain the parse tree. In particular, the end-order traversal of the parse tree is derived in one of the representations. An inference system, termed the cycle inference system, is defined which makes it possible for the theorem prover to model the method on which the representation is based. The general applicability of the cycle inference system to state space problems is discussed. Given an unsatisfiable set S, where each clause has at most one positive literal, it is shown that there exists an input proof. The clauses for the two representations satisfy these conditions, as do many state space problems.
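
    The Cocke parsing algorithm referenced above (now usually called CYK) decides membership in a context-free language, given a grammar in Chomsky normal form, by filling a triangular table of nonterminals over spans. A minimal recognizer sketch with a toy grammar (the grammar itself is an invented example, not from the paper):

```python
def cyk_recognize(words, lexicon, rules, start='S'):
    """CYK recognition for a CNF grammar.

    lexicon: terminal -> set of nonterminals A for rules A -> terminal
    rules:   (B, C)   -> set of nonterminals A for rules A -> B C
    """
    n = len(words)
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        table[i][i + 1] = set(lexicon.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):              # split point
                for B in table[i][k]:
                    for C in table[k][j]:
                        table[i][j] |= rules.get((B, C), set())
    return start in table[0][n]

# Toy CNF grammar: S -> NP VP, NP -> Det N, plus lexical rules
lexicon = {'the': {'Det'}, 'cat': {'N'}, 'sleeps': {'VP'}}
rules = {('Det', 'N'): {'NP'}, ('NP', 'VP'): {'S'}}
```

    Encoding each filled table cell as a clause is one way such a tabular representation can be handed to a resolution theorem prover, in the spirit of the second representation described above.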

  14. Implementation of the Intelligent Voice System for Kazakh

    NASA Astrophysics Data System (ADS)

    Yessenbayev, Zh; Saparkhojayev, N.; Tibeyev, T.

    2014-04-01

    Modern speech technologies are highly advanced and widely used in day-to-day applications. However, this mostly concerns the languages of well-developed countries, such as English, German, Japanese and Russian. For Kazakh the situation is less prominent, and research in this field is only starting to evolve. In this research- and application-oriented project, we introduce an intelligent voice system for the fast deployment of call centers and information desks supporting Kazakh speech. The demand for such a system is obvious given the country's large size and small population: landline and cell phones are often the only means of communication for distant villages and suburbs. The system features Kazakh speech recognition and synthesis modules as well as a web GUI for efficient dialog management. For speech recognition we use the CMU Sphinx engine, and for speech synthesis, MaryTTS. The web GUI is implemented in Java, enabling operators to quickly create and manage dialogs in a user-friendly graphical environment. Call routines are handled by Asterisk PBX and JBoss Application Server. The system supports technologies and protocols such as VoIP, VoiceXML, FastAGI, Java SpeechAPI and J2EE. For the speech recognition experiments we compiled and used the first Kazakh speech corpus, with utterances from 169 native speakers. The performance of the speech recognizer is 4.1% WER on an isolated word recognition task and 6.9% WER on a clean continuous speech recognition task. The speech synthesis experiments include the training of male and female voices.
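
    The WER figures quoted above are the standard word-level edit (Levenshtein) distance between the recognizer output and a reference transcript, normalized by the reference length. A minimal sketch:

```python
def word_error_rate(ref, hyp):
    """WER = (substitutions + deletions + insertions) / reference length."""
    r, h = ref.split(), hyp.split()
    # Dynamic-programming table for word-level Levenshtein distance
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                       # delete all reference words
    for j in range(len(h) + 1):
        d[0][j] = j                       # insert all hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match or substitution
    return d[len(r)][len(h)] / len(r)
```

    For example, against the reference "a b c d", the hypothesis "a x c" has one substitution and one deletion, giving a WER of 0.5.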

  15. Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing

    PubMed Central

    McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Objectives Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HA) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children’s auditory experience on parent-report auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Design Parent ratings on auditory development questionnaires and children’s speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children Rating Scale, and an adaptation of the Speech, Spatial and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open and Closed set task, Early Speech Perception Test, Lexical Neighborhood Test, and Phonetically-balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared to peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children’s speech recognition were also examined. Results Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. 
Children with greater aided audibility through their HAs, more hours of HA use, and better language abilities generally had higher parent ratings of auditory skills and better speech recognition abilities in quiet and in noise than peers with less audibility, more limited HA use, or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Conclusions Children who are hard of hearing continue to experience delays in auditory skill development and speech recognition abilities compared to peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported prior to the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech recognition abilities, and may also enhance these skills by improving language abilities in children who are hard of hearing. A greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children’s speech recognition. PMID:26731160

  16. The Temporal Structure of Spoken Language Understanding.

    ERIC Educational Resources Information Center

    Marslen-Wilson, William; Tyler, Lorraine Komisarjevsky

    1980-01-01

    An investigation of word-by-word time-course of spoken language understanding focused on word recognition and structural and interpretative processes. Results supported an online interactive language processing theory, in which lexical, structural, and interpretative knowledge sources communicate and interact during processing efficiently and…

  17. Changes in Visual Object Recognition Precede the Shape Bias in Early Noun Learning

    PubMed Central

    Yee, Meagan; Jones, Susan S.; Smith, Linda B.

    2012-01-01

    Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large sample cross-sectional study and a smaller sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic-level categories from sparse structural representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research on artificial noun learning tasks shows that during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and that it is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning. PMID:23227015

  18. The Influence of Linguistic Proficiency on Masked Text Recognition Performance in Adults With and Without Congenital Hearing Impairment.

    PubMed

    Huysmans, Elke; Bolk, Elske; Zekveld, Adriana A; Festen, Joost M; de Groot, Annette M B; Goverts, S Theo

    2016-01-01

    The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing. In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups. Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, while no differences were observed between the AHI and NH groups. This outcome pattern persisted when comparisons were restricted to subgroups of AHI and CHI adults, matched for current auditory speech reception abilities. The data yielded no differences between groups in performance in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Also, no difference between groups was found in the sensitivity to morphosyntactic distortions when processing short masked sentences, presented visually. These data showed that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities. 
However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded sentences. Aspects of linguistic proficiency that are affected by CHI thus do not seem to play a role in masked sentence recognition in the visual modality.

  19. [The contribution of different cochlear insertion region to Mandarin speech perception in users of cochlear implant].

    PubMed

    Qi, Beier; Liu, Bo; Liu, Sha; Liu, Haihong; Dong, Ruijuan; Zhang, Ning; Gong, Shusheng

    2011-05-01

    To study the effect of cochlear electrode coverage and of different insertion regions on speech recognition, especially tone perception, in cochlear implant (CI) users whose native language is Mandarin Chinese. Seven test conditions were set with the fitting software; each condition was created by switching the respective channels on or off to simulate different insertion positions. Mandarin CI users then received four speech tests: the Vowel Identification test, the Consonant Identification test, the Tone Identification test (male speaker), and the Mandarin HINT test (SRS) in quiet and in noise. Across test conditions, the average vowel identification score differed significantly, ranging from 56% to 91% (rank sum test, P < 0.05). The average consonant identification score also differed significantly, ranging from 72% to 85% (ANOVA, P < 0.05). The average tone identification score did not differ significantly (ANOVA, P > 0.05); however, the more channels activated, the higher the scores obtained, from 68% to 81%. This study shows a correlation between insertion depth and speech recognition. Because all parts of the basilar membrane can help CI users improve their speech recognition ability, increasing insertion depth and actively stimulating the apical region of the cochlea are important for enhancing the verbal communication and social interaction abilities of CI users.

  20. Improving Science and Vocabulary Learning of English Language Learners. CREATE Brief

    ERIC Educational Resources Information Center

    August, Diane; Artzi, Lauren; Mazrum, Julie

    2010-01-01

    This brief reviews previous research related to the development of science knowledge and academic language in English language learners as well as the role of English language proficiency, learning in a second language, and first language knowledge in science learning. It also describes two successful CREATE interventions that build academic and…

  1. Translation and Foreign Language Teaching.

    ERIC Educational Resources Information Center

    Tinsley, Royal L., Jr.

    Translators and teachers of foreign languages need each other: translators need formal academic training and recognition and teachers of foreign languages need students. Unfortunately, translators know only too well that most FL teachers are not competent translators, and FL Departments generally consider translation as an activity beneath the…

  2. Word recognition and phonetic structure acquisition: Possible relations

    NASA Astrophysics Data System (ADS)

    Morgan, James

    2002-05-01

    Several accounts of possible relations between the emergence of the mental lexicon and acquisition of native language phonological structure have been propounded. In one view, acquisition of word meanings guides infants' attention toward those contrasts that are linguistically significant in their language. In the opposing view, native language phonological categories may be acquired from statistical patterns of input speech, prior to and independent of learning at the lexical level. Here, a more interactive account will be presented, in which phonological structure is modeled as emerging consequentially from the self-organization of perceptual space underlying word recognition. A key prediction of this model is that early native language phonological categories will be highly context specific. Data bearing on this prediction will be presented which provide clues to the nature of infants' statistical analysis of input.

  3. How vocabulary size in two languages relates to efficiency in spoken word recognition by young Spanish-English bilinguals

    PubMed Central

    Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda

    2010-01-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26; 2;6 yrs). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children’s facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children’s ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language. PMID:19726000

  4. An L2 Reader's Word-Recognition Strategies: Transferred or Developed

    ERIC Educational Resources Information Center

    Alco, Bonnie

    2010-01-01

    Transfer of reading strategies from the first language (L1) to the second language (L2) has long puzzled educators, but what happens if the L1 is an alphabet language and the second is not, or if there is a mismatch in the languages' grapheme-phoneme connection? Although some students readily adjust to reading and writing in their second language,…

  5. A Kinect-Based Sign Language Hand Gesture Recognition System for Hearing- and Speech-Impaired: A Pilot Study of Pakistani Sign Language.

    PubMed

    Halim, Zahid; Abbas, Ghulam

    2015-01-01

    Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of society. Unfortunately, sign language is not understood by most people. A gadget based on image processing and pattern recognition can therefore provide a vital aid for detecting and translating sign language into a vocal language. This work presents a system for detecting and understanding sign language gestures with a custom-built software tool and then translating the gestures into a vocal language. To recognize a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm; an off-the-shelf software tool is employed for vocal language generation. Microsoft® Kinect is the primary tool used to capture the video stream of a user. The proposed method successfully detects gestures stored in the dictionary with an accuracy of 91%. The proposed system also has the ability to define and add custom-made gestures. Based on an experiment in which 10 individuals with impairments used the system to communicate with 5 people with no disability, 87% agreed that the system was useful.
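The DTW matching step can be sketched generically over 1-D feature sequences; this is a minimal illustration of the algorithm, not the study's implementation (real Kinect gesture features would be multi-dimensional joint trajectories):

```python
# Dynamic Time Warping distance between two 1-D feature sequences.
# Smaller distances mean more similar gestures regardless of speed.

def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame of a
                                 cost[i][j - 1],      # skip a frame of b
                                 cost[i - 1][j - 1])  # align both frames
    return cost[n][m]

# Warping tolerates the same gesture performed at different speeds:
print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0
print(dtw([1, 2, 3], [2, 4, 6]))     # 5.0
```

Recognition then amounts to computing `dtw` against every template in the gesture dictionary and picking the nearest one.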

  6. Media, Information Technology, and Language Planning: What Can Endangered Language Communities Learn from Created Language Communities?

    ERIC Educational Resources Information Center

    Schreyer, Christine

    2011-01-01

    The languages of Klingon and Na'vi, both created for media, are also languages that have garnered much media attention throughout the course of their existence. Speakers of these languages also utilize social media and information technologies, specifically websites, in order to learn the languages and then put them into practice. While teaching a…

  7. Neural speech recognition: continuous phoneme decoding using spatiotemporal representations of human cortical activity

    NASA Astrophysics Data System (ADS)

    Moses, David A.; Mesgarani, Nima; Leonard, Matthew K.; Chang, Edward F.

    2016-10-01

    Objective. The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography. Approach. The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model. Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder. Main results. The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system. Significance. These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.
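The decoder structure described above, per-frame phoneme likelihoods combined with n-gram transition probabilities under a Viterbi search, can be sketched generically. The states, transition probabilities, and frame likelihoods below are invented toy values, not the paper's model:

```python
# Viterbi decoding over phoneme states using per-frame log-likelihoods
# (here hypothetical stand-ins for the LDA estimates) and bigram
# transition log-probabilities (stand-ins for the phonemic language model).
import math

def viterbi(frames, states, log_trans, start="<s>"):
    """frames: list of dicts mapping state -> log-likelihood for that frame."""
    best = {s: log_trans.get((start, s), -math.inf) + frames[0][s] for s in states}
    back = []
    for obs in frames[1:]:
        prev, best, ptr = best, {}, {}
        for s in states:
            p, arg = max(((prev[q] + log_trans.get((q, s), -math.inf), q)
                          for q in states), key=lambda t: t[0])
            best[s] = p + obs[s]
            ptr[s] = arg
        back.append(ptr)
    state = max(best, key=best.get)      # best final state
    path = [state]
    for ptr in reversed(back):           # trace back the best path
        state = ptr[state]
        path.append(state)
    return path[::-1]

lt = math.log
states = ["a", "b"]
log_trans = {("<s>", "a"): lt(0.6), ("<s>", "b"): lt(0.4),
             ("a", "a"): lt(0.7), ("a", "b"): lt(0.3),
             ("b", "a"): lt(0.4), ("b", "b"): lt(0.6)}
frames = [{"a": lt(0.9), "b": lt(0.1)},
          {"a": lt(0.2), "b": lt(0.8)},
          {"a": lt(0.1), "b": lt(0.9)}]
print(viterbi(frames, states, log_trans))  # ['a', 'b', 'b']
```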

  8. An Alignment of the Canadian Language Benchmarks to the BC ESL Articulation Levels. Final Report - January 2007

    ERIC Educational Resources Information Center

    Barbour, Ross; Ostler, Catherine; Templeman, Elizabeth; West, Elizabeth

    2007-01-01

    The British Columbia (BC) English as a Second Language (ESL) Articulation Committee's Canadian Language Benchmarks project was precipitated by ESL instructors' desire to address transfer difficulties of ESL students within the BC transfer system and to respond to the recognition that the Canadian Language Benchmarks, a descriptive scale of ESL…

  9. A Human Mirror Neuron System for Language: Perspectives from Signed Languages of the Deaf

    ERIC Educational Resources Information Center

    Knapp, Heather Patterson; Corina, David P.

    2010-01-01

    Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib M.A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). "Behavioral and Brain Sciences, 28", 105-167; Arbib…

  10. Multilingual Data Selection for Low Resource Speech Recognition

    DTIC Science & Technology

    2016-09-12

    Figure 1: Identification of language clusters using scores from an LID system ... training languages used in the Base and OP1 evaluation periods of the Babel ... the posterior scores over frames. For a set of languages that are used to train the language identification (LID) network, pairs of languages that ... which are combined during test time to produce 10-dimensional language ... Figure 3: Identification of language clusters using scores from individually

  11. A segmentation-free approach to Arabic and Urdu OCR

    NASA Astrophysics Data System (ADS)

    Sabbour, Nazly; Shafait, Faisal

    2013-01-01

    In this paper, we present a generic Optical Character Recognition system for Arabic script languages called Nabocr. Nabocr uses OCR approaches specific to Arabic script recognition. Performing recognition on Arabic script text is relatively more difficult than on Latin text due to the nature of Arabic script, which is cursive and context-sensitive. Moreover, Arabic script has different writing styles that vary in complexity. Nabocr is initially trained to recognize both Urdu Nastaleeq and Arabic Naskh fonts; however, it can be trained by users for other Arabic script languages. We have evaluated our system's performance for both Urdu and Arabic. To evaluate Urdu recognition, we generated a dataset of Urdu text called UPTI (Urdu Printed Text Image Database), which measures different aspects of a recognition system. The performance of our system on clean Urdu text is 91%; on clean Arabic text, it is 86%. Moreover, we have compared our system against Tesseract's newly released Arabic recognition; the performance of both systems on clean images is almost the same.

  12. The Role of Native-Language Knowledge in the Perception of Casual Speech in a Second Language

    PubMed Central

    Mitterer, Holger; Tuinman, Annelie

    2012-01-01

    Casual speech processes, such as /t/-reduction, make word recognition harder. Additionally, word recognition is also harder in a second language (L2). Combining these challenges, we investigated whether L2 learners have recourse to knowledge from their native language (L1) when dealing with casual speech processes in their L2. In three experiments, production and perception of /t/-reduction was investigated. An initial production experiment showed that /t/-reduction occurred in both languages and patterned similarly in proper nouns but differed when /t/ was a verbal inflection. Two perception experiments compared the performance of German learners of Dutch with that of native speakers for nouns and verbs. Mirroring the production patterns, German learners’ performance strongly resembled that of native Dutch listeners when the reduced /t/ was part of a word stem, but deviated where /t/ was a verbal inflection. These results suggest that a casual speech process in a second language is problematic for learners when the process is not known from the learner’s native language, similar to what has been observed for phoneme contrasts. PMID:22811675

  13. Cognitive aging and hearing acuity: modeling spoken language comprehension.

    PubMed

    Wingfield, Arthur; Amichetti, Nicole M; Lash, Amanda

    2015-01-01

    The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model; where it is strong and where there are gaps to be filled.

  14. AutoMap User’s Guide

    DTIC Science & Technology

    2006-10-01

    Hierarchy of Pre-Processing Techniques 3. NLP (Natural Language Processing) Utilities 3.1 Named-Entity Recognition 3.1.1 Example for Named-Entity Recognition 3.2 Symbol Removal ... N-Gram Identification: Bi-Grams 4. Stemming 4.1 Stemming Example 5. Delete List 5.1 Open a Delete List 5.1.1 Small... iterative and involves several key processes: • Named-Entity Recognition is an AutoMap feature that allows you to

  15. Characterising receptive language processing in schizophrenia using word and sentence tasks.

    PubMed

    Tan, Eric J; Yelland, Gregory W; Rossell, Susan L

    2016-01-01

    Language dysfunction is proposed to relate to the speech disturbances in schizophrenia, which are more commonly referred to as formal thought disorder (FTD). Presently, language production deficits in schizophrenia are better characterised than language comprehension difficulties. This study thus aimed to examine three aspects of language comprehension in schizophrenia: (1) the role of lexical processing, (2) meaning attribution for words and sentences, and (3) the relationship between comprehension and production. Fifty-seven schizophrenia/schizoaffective disorder patients and 48 healthy controls completed a clinical assessment and three language tasks assessing word recognition, synonym identification, and sentence comprehension. Poorer patient performance was expected on the latter two tasks. Recognition of word form was not impaired in schizophrenia, indicating intact lexical processing. Whereas single-word synonym identification was not significantly impaired, there was a tendency to attribute word meanings based on phonological similarity with increasing FTD severity. Importantly, there was a significant sentence comprehension deficit for processing deep structure, which correlated with FTD severity. These findings established a receptive language deficit in schizophrenia at the syntactic level. There was also evidence for a relationship between some aspects of language comprehension and speech production/FTD. Apart from indicating language as another mechanism in FTD aetiology, the data also suggest that remediating language comprehension problems may be an avenue to pursue in alleviating FTD symptomatology.

  16. Texture for script identification.

    PubMed

    Busch, Andrew; Boles, Wageeh W; Sridharan, Sridha

    2005-11-01

    The problem of determining the script and language of a document image has a number of important applications in the field of document analysis, such as indexing and sorting of large collections of such images, or as a precursor to optical character recognition (OCR). In this paper, we investigate the use of texture as a tool for determining the script of a document image, based on the observation that text has a distinct visual texture. An experimental evaluation of a number of commonly used texture features is conducted on a newly created script database, providing a qualitative measure of which features are most appropriate for this task. Strategies for improving classification results in situations with limited training data and multiple font types are also proposed.
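One commonly used family of texture features of the kind such evaluations compare is based on the gray-level co-occurrence matrix (GLCM). The sketch below computes the GLCM contrast statistic for a quantized grayscale image; it is a generic illustration, not the specific feature set evaluated in the paper:

```python
# Gray-level co-occurrence matrix (GLCM) contrast for a small quantized
# image given as nested lists. (dx, dy) is the co-occurrence offset.

def glcm_contrast(image, levels, dx=1, dy=0):
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(len(image)):
        for x in range(len(image[0])):
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(image) and 0 <= nx < len(image[0]):
                counts[image[y][x]][image[ny][nx]] += 1
                total += 1
    # contrast = sum over (i, j) of P(i, j) * (i - j)^2
    return sum(counts[i][j] / total * (i - j) ** 2
               for i in range(levels) for j in range(levels))

smooth = [[0, 0, 0], [0, 0, 0]]       # uniform region: no contrast
stripes = [[0, 1, 0], [1, 0, 1]]      # alternating pixels: high contrast
print(glcm_contrast(smooth, levels=2))   # 0.0
print(glcm_contrast(stripes, levels=2))  # 1.0
```

A script classifier would compute several such statistics at multiple offsets over text blocks and feed them to a standard classifier.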

  17. Linguistic Corpora and Language Teaching.

    ERIC Educational Resources Information Center

    Murison-Bowie, Simon

    1996-01-01

    Examines issues raised by corpus linguistics concerning the description of language. The article argues that it is necessary to start from correct descriptions of linguistic units and the contexts in which they occur. Corpus linguistics has joined with language teaching by sharing a recognition of the importance of a larger, schematic view of…

  18. Our Perception of Woman as Determined by Language.

    ERIC Educational Resources Information Center

    Ayim, Maryann

    Recognition of gender as a significant factor in the social parameters of language is a very recent phenomenon. The external aspects of language as they relate to sexism have social and political ramifications. Using Peirce's definition of sign, which encompasses the representation, the object, and its interpretation, sexually stereotypic language…

  19. Early Sign Language Exposure and Cochlear Implantation Benefits.

    PubMed

    Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S

    2017-07-01

    Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.

  20. Kinect-based sign language recognition of static and dynamic hand movements

    NASA Astrophysics Data System (ADS)

    Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.

    2017-02-01

    A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human-Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was used to test both static and dynamic fingerspelling gestures. The researchers also describe factors they encountered that caused some misclassification of signs.
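The normalized correlation score used for template matching can be sketched in pure Python; this is a generic illustration of the matching measure over small grayscale patches, not the study's MATLAB implementation:

```python
# Normalized (Pearson) correlation between an image patch and a template,
# both given as equal-sized nested lists of grayscale values.
# Scores near 1.0 indicate a strong template match.

def normalized_correlation(patch, template):
    xs = [v for row in patch for v in row]
    ys = [v for row in template for v in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

a = [[10, 20], [30, 40]]
print(normalized_correlation(a, a))                     # 1.0 (perfect match)
print(normalized_correlation(a, [[40, 30], [20, 10]]))  # -1.0 (inverted)
```

Recognition then picks the database template with the highest score against the captured frame; subtracting the means makes the score robust to overall brightness differences.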

  1. Sign language recognition and translation: a multidisciplined approach from the field of artificial intelligence.

    PubMed

    Parton, Becky Sue

    2006-01-01

    In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based projects such as the CopyCat interactive American Sign Language game (computer vision), and sign recognition software (Hidden Markov Modeling and neural network systems). Avatars such as "Tessa" (Text and Sign Support Assistant; three-dimensional imaging) and spoken language to sign language translation systems such as Poland's project entitled "THETOS" (Text into Sign Language Automatic Translator, which operates in Polish; natural language processing) are addressed. The application of this research to education is also explored. The "ICICLE" (Interactive Computer Identification and Correction of Language Errors) project, for example, uses intelligent computer-aided instruction to build a tutorial system for deaf or hard-of-hearing children that analyzes their English writing and makes tailored lessons and recommendations. Finally, the article considers synthesized sign, which is being added to educational material and has the potential to be developed by students themselves.

  2. Influence of oxytocin on emotion recognition from body language: A randomized placebo-controlled trial.

    PubMed

    Bernaerts, Sylvie; Berra, Emmely; Wenderoth, Nicole; Alaerts, Kaat

    2016-10-01

    The neuropeptide 'oxytocin' (OT) is known to play a pivotal role in a variety of complex social behaviors by promoting a prosocial attitude and interpersonal bonding. One mechanism by which OT is hypothesized to promote prosocial behavior is by enhancing the processing of socially relevant information from the environment. With the present study, we explored to what extent OT can alter the 'reading' of emotional body language as presented by impoverished biological motion point light displays (PLDs). To do so, a double-blind between-subjects randomized placebo-controlled trial was conducted, assessing performance on a bodily emotion recognition task in healthy adult males before and after a single dose of intranasal OT (24 IU). Overall, a single dose of OT had a significant effect of medium size on emotion recognition from body language. OT-induced improvements in emotion recognition were not differentially modulated by the emotional valence of the presented stimuli (positive versus negative), and the overall tendency to label an observed emotional state as 'happy' (positive) or 'angry' (negative) was not modified by the administration of OT. Albeit moderate, the present findings of OT-induced improvements in bodily emotion recognition from whole-body PLDs provide further support for a link between OT and the processing of socio-communicative cues originating from the bodies of others. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. How much does language proficiency by non-native listeners influence speech audiometric tests in noise?

    PubMed

    Warzybok, Anna; Brand, Thomas; Wagener, Kirsten C; Kollmeier, Birger

    2015-01-01

    The current study investigates the extent to which the linguistic complexity of three commonly employed speech recognition tests and second language proficiency influence speech recognition thresholds (SRTs) in noise in non-native listeners. SRTs were measured for non-natives and natives using three German speech recognition tests: the digit triplet test (DTT), the Oldenburg sentence test (OLSA), and the Göttingen sentence test (GÖSA). Sixty-four non-native and eight native listeners participated. Non-natives can show native-like SRTs in noise only for the linguistically easy speech material (DTT). Furthermore, the limitation of phonemic-acoustical cues in digit triplets affects speech recognition to the same extent in non-natives and natives. For more complex and less familiar speech materials, non-natives, ranging from basic to advanced proficiency in German, require on average 3-dB better signal-to-noise ratio for the OLSA and 6-dB for the GÖSA to obtain 50% speech recognition compared to native listeners. In clinical audiology, SRT measurements with a closed-set speech test (i.e. DTT for screening or OLSA test for clinical purposes) should be used with non-native listeners rather than open-set speech tests (such as the GÖSA or HINT), especially if a closed-set version in the patient's own native language is available.

  4. The influence of lexical characteristics and talker accent on the recognition of English words by speakers of Japanese.

    PubMed

    Yoneyama, Kiyoko; Munson, Benjamin

    2017-02-01

    This study examined whether the influence of listeners' language proficiency on L2 speech recognition is affected by the structure of the lexicon. The experiment examined the effect of word frequency (WF) and phonological neighborhood density (PND) on word recognition in native speakers of English and second-language (L2) speakers of English whose first language was Japanese. The stimuli included English words produced by a native speaker of English and English words produced by a native speaker of Japanese (i.e., with Japanese-accented English). The experiment was inspired by the finding of Imai, Flege, and Walley [(2005). J. Acoust. Soc. Am. 117, 896-907] that the influence of talker accent on speech intelligibility for L2 learners of English whose L1 is Spanish varies as a function of words' PND. In the current study, significant interactions between stimulus accentedness and listener group on the accuracy and speed of spoken word recognition were found, as were significant effects of PND and WF on word-recognition accuracy. However, no significant three-way interaction among stimulus talker, listener group, and PND was found on either measure. Results are discussed in light of recent findings on cross-linguistic differences in the nature of the effects of PND on L2 phonological and lexical processing.

  5. Aberrant neural networks for the recognition memory of socially relevant information in patients with schizophrenia.

    PubMed

    Oh, Jooyoung; Chun, Ji-Won; Kim, Eunseong; Park, Hae-Jeong; Lee, Boreom; Kim, Jae-Jin

    2017-01-01

    Patients with schizophrenia exhibit several cognitive deficits, including memory impairment. Problems with recognition memory can hinder socially adaptive behavior. Previous investigations have suggested that altered activation of the frontotemporal area plays an important role in recognition memory impairment. However, the cerebral networks related to these deficits are not known. The aim of this study was to elucidate the brain networks required for recognizing socially relevant information in patients with schizophrenia performing an old-new recognition task. Sixteen patients with schizophrenia and 16 controls participated in this study. First, the subjects performed the theme-identification task during functional magnetic resonance imaging. In this task, pictures depicting social situations were presented with three words, and the subjects were asked to select the best theme word for each picture. The subjects then performed an old-new recognition task in which they were asked to discriminate whether the presented words were old or new. Task performance and neural responses in the old-new recognition task were compared between the subject groups. An independent component analysis of the functional connectivity was performed. The patients with schizophrenia exhibited decreased discriminability and increased activation of the right superior temporal gyrus compared with the controls during correct responses. Furthermore, aberrant network activities were found in the frontopolar and language comprehension networks in the patients. The functional connectivity analysis showed aberrant connectivity in the frontopolar and language comprehension networks in the patients with schizophrenia, and these aberrations possibly contribute to their low recognition performance and social dysfunction. These results suggest that the frontopolar and language comprehension networks are potential therapeutic targets in patients with schizophrenia.

  6. Creating an Authentic Learning Environment in the Foreign Language Classroom

    ERIC Educational Resources Information Center

    Nikitina, Larisa

    2011-01-01

    Theatrical activities are widely used by language educators to promote and facilitate language learning. Involving students in the production of their own video or a short movie in the target language allows a seamless fusion of language learning, art, and popular culture. The activity is also conducive to creating an authentic learning situation…

  7. Automatic speech recognition (ASR) based approach for speech therapy of aphasic patients: A review

    NASA Astrophysics Data System (ADS)

    Jamal, Norezmi; Shanta, Shahnoor; Mahmud, Farhanahani; Sha'abani, MNAH

    2017-09-01

    This paper reviews state-of-the-art automatic speech recognition (ASR) based approaches for the speech therapy of aphasic patients. Aphasia is a condition in which the affected person suffers from a speech and language disorder resulting from a stroke or brain injury. Since there is a growing body of evidence indicating the possibility of improving the symptoms at an early stage, ASR based solutions are increasingly being researched for speech and language therapy. ASR is a technology that transcribes human speech into text by matching it against the system's library. This is particularly useful in speech rehabilitation therapies, as it provides accurate, real-time evaluation of speech input from an individual with a speech disorder. ASR based approaches for speech therapy recognize the speech input from the aphasic patient and provide real-time feedback on their mistakes. However, the accuracy of ASR depends on many factors, such as phoneme recognition, speech continuity, speaker and environmental differences, as well as the depth of our knowledge of human language understanding. Hence, the review examines recent developments in ASR technologies and their performance for individuals with speech and language disorders.

  8. Use of Authentic-Speech Technique for Teaching Sound Recognition to EFL Students

    ERIC Educational Resources Information Center

    Sersen, William J.

    2011-01-01

    The main objective of this research was to test an authentic-speech technique for improving the sound-recognition skills of EFL (English as a foreign language) students at Roi-Et Rajabhat University. The secondary objective was to determine the correlation, if any, between students' self-evaluation of sound-recognition progress and the actual…

  9. Leveraging Automatic Speech Recognition Errors to Detect Challenging Speech Segments in TED Talks

    ERIC Educational Resources Information Center

    Mirzaei, Maryam Sadat; Meshgi, Kourosh; Kawahara, Tatsuya

    2016-01-01

    This study investigates the use of Automatic Speech Recognition (ASR) systems to epitomize second language (L2) listeners' problems in perception of TED talks. ASR-generated transcripts of videos often involve recognition errors, which may indicate difficult segments for L2 listeners. This paper aims to discover the root-causes of the ASR errors…

  10. Developmental Spelling and Word Recognition: A Validation of Ehri's Model of Word Recognition Development

    ERIC Educational Resources Information Center

    Ebert, Ashlee A.

    2009-01-01

    Ehri's developmental model of word recognition outlines early reading development that spans from the use of logos to advanced knowledge of oral and written language to read words. Henderson's developmental spelling theory presents stages of word knowledge that progress in a similar manner to Ehri's phases. The purpose of this research study was…

  11. Featuring Old/New Recognition: The Two Faces of the Pseudoword Effect

    ERIC Educational Resources Information Center

    Joordens, Steve; Ozubko, Jason D.; Niewiadomski, Marty W.

    2008-01-01

    In his analysis of the pseudoword effect, [Greene, R.L. (2004). Recognition memory for pseudowords. "Journal of Memory and Language," 50, 259-267.] suggests nonwords can feel more familiar than words in a recognition context if the orthographic features of the nonword match well with the features of the items presented at study. One possible…

  12. Representation, Classification and Information Fusion for Robust and Efficient Multimodal Human States Recognition

    ERIC Educational Resources Information Center

    Li, Ming

    2013-01-01

    The goal of this work is to enhance the robustness and efficiency of the multimodal human states recognition task. Human states recognition can be considered a joint term for identifying/verifying various kinds of human-related states, such as biometric identity, language spoken, age, gender, emotion, intoxication level, physical activity, vocal…

  13. Wearable Spiral Passive Electromagnetic Sensor (SPES) glove for sign language recognition of alphabet letters and numbers: a preliminary study

    NASA Astrophysics Data System (ADS)

    Iervolino, Onorio; Meo, Michele

    2017-04-01

    Sign language is a method of communication for deaf-mute people with articulated gestures and postures of hands and fingers to represent alphabet letters or complete words. Recognizing gestures is a difficult task, due to intrapersonal and interpersonal variations in performing them. This paper investigates the use of a Spiral Passive Electromagnetic Sensor (SPES) as a motion recognition tool. An instrumented glove integrated with wearable multi-SPES sensors was developed to encode data and provide a unique response for each hand gesture. The device can be used for gesture recognition, motion control, and well-defined gesture sets such as sign languages. Each specific gesture was associated with a unique sensor response. The gloves encode data regarding the gesture directly in the frequency spectrum response of the SPES. The absence of a chip or complex electronic circuitry makes the gloves light and comfortable to wear. Results were encouraging for the use of SPES in wearable applications.

  14. Emotion through locomotion: gender impact.

    PubMed

    Krüger, Samuel; Sokolov, Alexander N; Enck, Paul; Krägeloh-Mann, Ingeborg; Pavlova, Marina A

    2013-01-01

    Body language reading is of significance for daily life social cognition and successful social interaction, and constitutes a core component of social competence. Yet it is unclear whether our ability for body language reading is gender specific. In the present work, female and male observers had to visually recognize emotions through point-light human locomotion performed by female and male actors with different emotional expressions. For subtle emotional expressions only, males surpass females in recognition accuracy and readiness to respond to happy walking portrayed by female actors, whereas females exhibit a tendency to be better in recognition of hostile angry locomotion expressed by male actors. In contrast to widespread beliefs about female superiority in social cognition, the findings suggest that gender effects in recognition of emotions from human locomotion are modulated by emotional content of actions and opposite actor gender. In a nutshell, the study makes a further step in elucidation of gender impact on body language reading and on neurodevelopmental and psychiatric deficits in visual social cognition.

  15. Evaluating Effects of Divided Hemispheric Processing on Word Recognition in Foveal and Extrafoveal Displays: The Evidence from Arabic

    PubMed Central

    Almabruk, Abubaker A. A.; Paterson, Kevin B.; McGowan, Victoria; Jordan, Timothy R.

    2011-01-01

    Background Previous studies have claimed that a precise split at the vertical midline of each fovea causes all words to the left and right of fixation to project to the opposite, contralateral hemisphere, and this division in hemispheric processing has considerable consequences for foveal word recognition. However, research in this area is dominated by the use of stimuli from Latinate languages, which may induce specific effects on performance. Consequently, we report two experiments using stimuli from a fundamentally different, non-Latinate language (Arabic) that offers an alternative way of revealing effects of split-foveal processing, if they exist. Methods and Findings Words (and pseudowords) were presented to the left or right of fixation, either close to fixation and entirely within foveal vision, or further from fixation and entirely within extrafoveal vision. Fixation location and stimulus presentations were carefully controlled using an eye-tracker linked to a fixation-contingent display. To assess word recognition, Experiment 1 used the Reicher-Wheeler task and Experiment 2 used the lexical decision task. Results Performance in both experiments indicated a functional division in hemispheric processing for words in extrafoveal locations (in recognition accuracy in Experiment 1 and in reaction times and error rates in Experiment 2) but no such division for words in foveal locations. Conclusions These findings from a non-Latinate language provide new evidence that although a functional division in hemispheric processing exists for word recognition outside the fovea, this division does not extend up to the point of fixation. Some implications for word recognition and reading are discussed. PMID:21559084

  16. A language-familiarity effect for speaker discrimination without comprehension.

    PubMed

    Fleming, David; Giordano, Bruno L; Caldara, Roberto; Belin, Pascal

    2014-09-23

    The influence of language familiarity upon speaker identification is well established, to such an extent that it has been argued that "Human voice recognition depends on language ability" [Perrachione TK, Del Tufo SN, Gabrieli JDE (2011) Science 333(6042):595]. However, 7-mo-old infants discriminate speakers of their mother tongue better than they do foreign speakers [Johnson EK, Westrek E, Nazzi T, Cutler A (2011) Dev Sci 14(5):1002-1011] despite their limited speech comprehension abilities, suggesting that speaker discrimination may rely on familiarity with the sound structure of one's native language rather than the ability to comprehend speech. To test this hypothesis, we asked Chinese and English adult participants to rate speaker dissimilarity in pairs of sentences in English or Mandarin that were first time-reversed to render them unintelligible. Even in these conditions a language-familiarity effect was observed: Both Chinese and English listeners rated pairs of native-language speakers as more dissimilar than foreign-language speakers, despite their inability to understand the material. Our data indicate that the language familiarity effect is not based on comprehension but rather on familiarity with the phonology of one's native language. This effect may stem from a mechanism analogous to the "other-race" effect in face recognition.
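
The time-reversal manipulation used to render the sentences unintelligible is straightforward to reproduce: the waveform is flipped in time, which leaves the magnitude spectrum unchanged (so much of the sound structure listeners could rely on survives) while destroying intelligibility. A sketch, with illustrative array names:

```python
import numpy as np

def time_reverse(signal):
    """Return the time-reversed waveform.

    Reversal preserves the magnitude spectrum of the signal, so overall
    spectral (phonological/voice-quality) cues survive even though the
    speech is no longer comprehensible.
    """
    return signal[::-1].copy()
```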

  17. Discriminative exemplar coding for sign language recognition with Kinect.

    PubMed

    Sun, Chao; Zhang, Tianzhu; Bao, Bing-Kun; Xu, Changsheng; Mei, Tao

    2013-10-01

    Sign language recognition is a growing research area in the field of computer vision. A challenge within it is to model various signs, which vary in time resolution, visual manual appearance, and so on. In this paper, we propose a discriminative exemplar coding (DEC) approach, utilizing a Kinect sensor, to model various signs. The proposed DEC method can be summarized in three steps. First, a number of class-specific candidate exemplars are learned from sign language videos in each sign category by considering their discrimination. Then, every video of all signs is described as a set of similarities between the frames within it and the candidate exemplars. Instead of simply using a heuristic distance measure, the similarities are decided by a set of exemplar-based classifiers through multiple instance learning, in which a positive (or negative) video is treated as a positive (or negative) bag and those frames similar to the given exemplar in Euclidean space as instances. Finally, we formulate the selection of the most discriminative exemplars into a framework and simultaneously produce a sign video classifier to recognize signs. To evaluate our method, we collected an American Sign Language dataset, which includes approximately 2000 phrases, with each phrase captured by the Kinect sensor with color, depth, and skeleton information. Experimental results on our dataset demonstrate the feasibility and effectiveness of the proposed approach for sign language recognition.
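
The video-encoding step of DEC, describing a video by its similarities to candidate exemplars under a bag-of-frames (multiple instance) view, can be sketched as follows. Negative Euclidean distance stands in here for the exemplar-based classifiers the paper actually learns:

```python
import numpy as np

def encode_video(frames, exemplars):
    """Describe a sign video as a vector of similarities to exemplars.

    The video is treated as a bag of frame instances: its similarity to
    an exemplar is that of the best-matching frame (a max over instances,
    in the multiple-instance-learning spirit of the paper).
    """
    code = np.empty(len(exemplars))
    for j, exemplar in enumerate(exemplars):
        dists = [np.linalg.norm(f - exemplar) for f in frames]
        code[j] = -min(dists)   # best-matching frame defines the similarity
    return code
```

The resulting fixed-length code vector can then be fed to any standard classifier to assign the video a sign label.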

  18. English Word-Level Decoding and Oral Language Factors as Predictors of Third and Fifth Grade English Language Learners' Reading Comprehension Performance

    ERIC Educational Resources Information Center

    Landon, Laura L.

    2017-01-01

    This study examines the application of the Simple View of Reading (SVR), a reading comprehension theory focusing on word recognition and linguistic comprehension, to English Language Learners' (ELLs') English reading development. This study examines the concurrent and predictive validity of two components of the SVR, oral language and word-level…

  19. Can a Novel Word Repetition Task Be a Language-Neutral Assessment Tool? Evidence from Welsh-English Bilingual Children

    ERIC Educational Resources Information Center

    Sharp, Kathryn M; Gathercole, Virginia C. Mueller

    2013-01-01

    In recent years, there has been growing recognition of a need for a general, non-language-specific assessment tool that could be used to evaluate general speech and language abilities in children, especially to assist in identifying atypical development in bilingual children who speak a language unfamiliar to the assessor. It has been suggested…

  20. Ethnic Contestation and Language Policy in a Plural Society: The Chinese Language Movement in Malaysia, 1952-1967

    ERIC Educational Resources Information Center

    Yao Sua, Tan; Hooi See, Teoh

    2014-01-01

    The Chinese language movement was launched by the Chinese educationists to demand the recognition of Chinese as an official language to legitimise the status of Chinese education in the national education system in Malaysia. It began in 1952 as a response to the British attempt to establish national primary schools teaching in English and Malay to…

  1. Interactive Processing of Words in Connected Speech in L1 and L2.

    ERIC Educational Resources Information Center

    Hayashi, Takuo

    1991-01-01

    A study exploring the differences between first- and second-language word recognition strategies revealed that second-language listeners used more higher-level information than native-language listeners when access to higher-level information was not hindered by a competence-ceiling effect, indicating that word processing strategy is a function…

  2. Symbolic Play Connects to Language through Visual Object Recognition

    ERIC Educational Resources Information Center

    Smith, Linda B.; Jones, Susan S.

    2011-01-01

    Object substitutions in play (e.g. using a box as a car) are strongly linked to language learning and their absence is a diagnostic marker of language delay. Classic accounts posit a symbolic function that underlies both words and object substitutions. Here we show that object substitutions depend on developmental changes in visual object…

  3. Individual Differences in Language Ability Are Related to Variation in Word Recognition, Not Speech Perception: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    McMurray, Bob; Munson, Cheyenne; Tomblin, J. Bruce

    2014-01-01

    Purpose: The authors examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological, or lexical accounts by asking whether lexical competition is differentially sensitive to fine-grained acoustic variation. Method: Adolescents with a range of language abilities (N = 74, including…

  4. Clarifying the Associations between Language and Social Development in Autism: A Study of Non-Native Phoneme Recognition

    ERIC Educational Resources Information Center

    Constantino, John N.; Yang, Dan; Gray, Teddi L.; Gross, Maggie M.; Abbacchi, Anna M.; Smith, Sarah C.; Kohn, Catherine E.; Kuhl, Patricia K.

    2007-01-01

    Autism spectrum disorders (ASDs) are characterized by correlated deficiencies in social and language development. This study explored a fundamental aspect of auditory information processing (AIP) that is dependent on social experience and critical to early language development: the ability to compartmentalize close-sounding speech sounds into…

  5. Codeswitching in the Multilingual English First Language Classroom

    ERIC Educational Resources Information Center

    Moodley, Visvaganthie

    2007-01-01

    This paper focuses on the role of codeswitching (CS) by isiZulu (Zulu) native language (NL) junior secondary learners in English first language (EL1) multilingual classrooms in South Africa. In spite of the educational transformation in South Africa, and the recognition of CS (by education policy documents) as a means of fulfilling pedagogical…

  6. Hemispheric Differences in Bilingual Word and Language Recognition.

    ERIC Educational Resources Information Center

    Roberts, William T.; And Others

    The linguistic role of the right hemisphere in bilingual language processing was examined. Ten right-handed Spanish-English bilinguals were tachistoscopically presented with mixed lists of Spanish and English words to either the right or left visual field and asked to identify the language and the word presented. Five of the subjects identified…

  7. Modern Languages in the United Kingdom

    ERIC Educational Resources Information Center

    Coleman, James A.

    2011-01-01

    The article supplies an overview of UK modern languages education at school and university level. It attends particularly to trends over recent years, with regard both to numbers and to social elitism, and reflects on perceptions of language learning in the wider culture and the importance of gaining wider recognition of the value of languages…

  8. Memory for Nonsemantic Attributes of American Sign Language Signs and English Words

    ERIC Educational Resources Information Center

    Siple, Patricia

    1977-01-01

    Two recognition memory experiments were used to study the retention of language and modality of input. A bilingual list of American Sign Language signs and English words was presented to two deaf and two hearing groups, one instructed to remember mode of input, and one hearing group. Findings are analyzed. (CHK)

  9. The Bimodal Bilingual Brain: Effects of Sign Language Experience

    ERIC Educational Resources Information Center

    Emmorey, Karen; McCullough, Stephen

    2009-01-01

    Bimodal bilinguals are hearing individuals who know both a signed and a spoken language. Effects of bimodal bilingualism on behavior and brain organization are reviewed, and an fMRI investigation of the recognition of facial expressions by ASL-English bilinguals is reported. The fMRI results reveal separate effects of sign language and spoken…

  10. Leadership Practices to Support Teaching and Learning for English Language Learners

    ERIC Educational Resources Information Center

    McGee, Alyson; Haworth, Penny; MacIntyre, Lesieli

    2015-01-01

    With a substantial increase in the numbers of English language learners in schools, particularly in countries where English is the predominant first language, it is vital that educators are able to meet the needs of ethnically and linguistically changing and challenging classrooms. However, despite the recognition of the importance of effective…

  11. On-Line Orthographic Influences on Spoken Language in a Semantic Task

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Perre, Laetitia; Dufau, Stephane; Ziegler, Johannes C.

    2009-01-01

    Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to meta-phonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a…

  12. Building words on actions: verb enactment and verb recognition in children with specific language impairment.

    PubMed

    Levi, Gabriel; Colonnello, Valentina; Giacchè, Roberta; Piredda, Maria Letizia; Sogos, Carla

    2014-05-01

    Recent studies have shown that language processing is grounded in actions. Multiple independent research findings indicate that children with specific language impairment (SLI) show subtle difficulties beyond the language domain. Uncertainties remain on possible association between body-mediated, non-linguistic expression of verbs and early manifestation of SLI during verb acquisition. The present study was conducted to determine whether verb production through non-linguistic modalities is impaired in children with SLI. Children with SLI (mean age 41 months) and typically developing children (mean age 40 months) were asked to recognize target verbs while viewing video clips showing the action associated with the verb (verb-recognition task) and to enact the action corresponding to the verb (verb-enacting task). Children with SLI performed more poorly than control children in both tasks. The present study demonstrates that early language impairment emerges at the bodily level. These findings are consistent with the embodied theories of cognition and underscore the role of action-based representations during language development. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    PubMed

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for communication between the deaf and the external world. This paper proposed a component-based vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of the five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject, and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study, and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures in the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for the 110 words respectively, and the average recognition accuracy climbed to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
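
The code matching step can be sketched as a lookup over five-component codes: the unknown gesture's component labels are compared against each code-table entry, and the word agreeing on the most components wins. The component labels and table entries below are illustrative, not the paper's actual code table:

```python
def match_code(observed, code_table):
    """Classify a gesture from its five-component code.

    observed: a 5-tuple of discrete labels (hand shape, axis, orientation,
    rotation, trajectory), as produced by per-component classifiers.
    code_table: maps each sign word to its reference 5-component code.
    Returns the word whose code agrees with the observation on the most
    components.
    """
    def agreement(word):
        return sum(a == b for a, b in zip(observed, code_table[word]))
    return max(code_table, key=agreement)
```

Because classification reduces to comparing component codes, new vocabulary can be added by entering its code in the table rather than retraining whole-word classifiers, which is the extensibility the framework aims for.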

  14. A Novel Approach towards Medical Entity Recognition in Chinese Clinical Text

    PubMed Central

    Yu, Jian

    2017-01-01

    Medical entity recognition, a basic task in the language processing of clinical data, has been extensively studied in analyzing admission notes in alphabetic languages such as English. However, much less work has been done on nonstructural texts that are written in Chinese, or in the setting of differentiation of Chinese drug names between traditional Chinese medicine and Western medicine. Here, we propose a novel cascade-type Chinese medication entity recognition approach that aims at integrating the sentence category classifier from a support vector machine and the conditional random field-based medication entity recognition. We hypothesized that this approach could avoid the side effects of abundant negative samples and improve the performance of the named entity recognition from admission notes written in Chinese. Therefore, we applied this approach to a test set of 324 Chinese-written admission notes with manual annotation by medical experts. Our data demonstrated that this approach had a score of 94.2% in precision, 92.8% in recall, and 93.5% in F-measure for the recognition of traditional Chinese medicine drug names and 91.2% in precision, 92.6% in recall, and 91.7% F-measure for the recognition of Western medicine drug names. The differences in F-measure were significant compared with those in the baseline systems. PMID:29065612
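
The reported precision, recall, and F-measure are linked by the standard harmonic-mean relation, which the traditional-Chinese-medicine triple illustrates:

```python
def f_measure(precision, recall):
    """F-measure (F1): the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```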

  15. Speech Processing and Recognition (SPaRe)

    DTIC Science & Technology

    2011-01-01

    results in the areas of automatic speech recognition (ASR), speech processing, machine translation (MT), natural language processing (NLP), and ... Natural Language Processing (NLP), Information Retrieval (IR) ... Figure 9, the IOC was only expected to provide document submission and search; automatic speech recognition (ASR) for English, Spanish, Arabic, and

  16. Multi-font printed Mongolian document recognition system

    NASA Astrophysics Data System (ADS)

    Peng, Liangrui; Liu, Changsong; Ding, Xiaoqing; Wang, Hua; Jin, Jianming

    2009-01-01

    Mongolian is one of the major ethnic languages in China. Large amounts of printed Mongolian documents need to be digitized for digital libraries and various other applications. Traditional Mongolian script has a unique writing style and multi-font-type variations, which bring challenges to Mongolian OCR research. Because of characteristics of traditional Mongolian script, for example, that one character may form part of another character, we define the character set for recognition according to the segmented components, and the components are combined into characters by a rule-based post-processing module. For character recognition, a method based on visual directional features and multi-level classifiers is presented. For character segmentation, a scheme is used to find the segmentation points by analyzing the properties of projections and connected components. As Mongolian font-types are categorized into two major groups, the segmentation parameters are adjusted for each group, and a font-type classification method for the two groups is introduced. For recognition of Mongolian text mixed with Chinese and English, language identification and the relevant character recognition kernels are integrated. Experiments show that the presented methods are effective: the text recognition rate is 96.9% on test samples from practical documents with multiple font-types and mixed scripts.
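
    The projection-analysis part of such a segmentation scheme can be illustrated on a synthetic binary image; the image and threshold below are invented, and a real system would combine this with the connected-component analysis the abstract mentions. Since traditional Mongolian runs vertically, the projection here is taken across rows.

```python
import numpy as np

# Sketch of projection-profile segmentation on a synthetic binary
# "text image": gaps in the ink profile mark segmentation points.

def segment_by_projection(img, gap_thresh=0):
    """Split a binary image (1 = ink) into row ranges separated by
    rows whose ink count is <= gap_thresh."""
    profile = img.sum(axis=1)            # ink per row
    segments, start = [], None
    for i, v in enumerate(profile):
        if v > gap_thresh and start is None:
            start = i                    # segment begins
        elif v <= gap_thresh and start is not None:
            segments.append((start, i))  # segment ends at a gap
            start = None
    if start is not None:
        segments.append((start, len(profile)))
    return segments

# Two ink blobs separated by blank rows -> two segments.
img = np.zeros((10, 5), dtype=int)
img[1:4, :] = 1
img[6:9, :] = 1
print(segment_by_projection(img))        # prints: [(1, 4), (6, 9)]
```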

  17. Psycholinguistically Oriented Second Language Research.

    ERIC Educational Resources Information Center

    Juffs, Alan

    2001-01-01

    Reviews recent research that investigates second language performance from the perspective of sentence processing (on-line comprehension studies) and word recognition. Concentrates on describing methods that employ reaction time measures as correlates of processing difficulty or knowledge representation. (Author/VWL)

  18. Plan recognition and generalization in command languages with application to telerobotics

    NASA Technical Reports Server (NTRS)

    Yared, Wael I.; Sheridan, Thomas B.

    1991-01-01

    A method for pragmatic inference as a necessary accompaniment to command languages is proposed. The approach taken focuses on the modeling and recognition of the human operator's intent, which relates sequences of domain actions ('plans') to changes in some model of the task environment. The salient feature of this module is that it captures some of the physical and linguistic contextual aspects of an instruction. This provides a basis for generalization and reinterpretation of the instruction in different task environments. The theoretical development is founded on previous work in computational linguistics and some recent models in the theory of action and intention. To illustrate these ideas, an experimental command language to a telerobot is implemented. The program consists of three different components: a robot graphic simulation, the command language itself, and the domain-independent pragmatic inference module. Examples of task instruction processes are provided to demonstrate the benefits of this approach.

  19. The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters.

    PubMed

    Rempel, David; Camilleri, Matt J; Lee, David L

    2015-10-01

    The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input.
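
    The kind of logistic-regression association reported above can be sketched as follows. The data are synthetic: "high discomfort" is defined here by a made-up rule (at least two strained postures present), and plain gradient descent stands in for the statistical package used in the study.

```python
import numpy as np

# Illustrative logistic regression relating binary posture features
# (flexed wrist, discordant adjacent fingers, extended fingers) to a
# high-discomfort outcome. Synthetic data, not the interpreters' ratings.

X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
             dtype=float)
y = (X.sum(axis=1) >= 2).astype(float)   # "high discomfort" rule (invented)

w, b = np.zeros(3), 0.0
for _ in range(5000):                    # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

# Each posture feature ends up with a positive weight: its presence
# raises the predicted odds of high discomfort.
print(np.round(w, 2))
```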

  20. Building intelligent communication systems for handicapped aphasiacs.

    PubMed

    Fu, Yu-Fen; Ho, Cheng-Seen

    2010-01-01

    This paper presents an intelligent system that allows handicapped aphasiacs to perform basic communication tasks. It has three key features: (1) a 6-sensor data glove measures the finger gestures of a patient in terms of the bending degrees of the fingers; (2) a finger language recognition subsystem recognizes language components from the finger gestures, employing multiple regression analysis to automatically extract proper finger features so that the recognition model can be quickly and correctly constructed by a radial basis function neural network; and (3) a coordinate-indexed virtual keyboard allows users to directly access the letters on the keyboard at a practical speed. The system serves as a viable tool for natural and affordable communication by handicapped aphasiacs through continuous finger language input.
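
    A radial basis function (RBF) classifier of the kind described can be sketched as below: Gaussian units centered on training gestures with a linear readout solved by least squares. The 6-dimensional "bending degree" vectors and the two signs are made up for illustration.

```python
import numpy as np

# Sketch of an RBF-network gesture classifier: Gaussian features
# centered on the training gestures, linear readout via least squares.
# The glove readings below are invented, not the paper's data.

def rbf_features(X, centers, width=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Two hypothetical finger signs measured by a 6-sensor data glove.
train = np.array([[0.9, 0.8, 0.9, 0.8, 0.9, 0.1],   # "fist-like" sign
                  [0.1, 0.1, 0.2, 0.1, 0.1, 0.0]])  # "open-hand" sign
labels = np.array([[1, 0], [0, 1]], dtype=float)    # one-hot classes

Phi = rbf_features(train, train)                    # hidden-layer outputs
W, *_ = np.linalg.lstsq(Phi, labels, rcond=None)    # readout weights

def predict(x):
    return int(np.argmax(rbf_features(x[None, :], train) @ W))

print(predict(np.array([0.85, 0.75, 0.9, 0.8, 0.85, 0.05])))  # prints: 0
```

    Because the readout is a single linear solve, adding or changing training gestures rebuilds the model quickly, which matches the paper's motivation for the RBF choice.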

  1. Speech perception and reading: two parallel modes of understanding language and implications for acquiring literacy naturally.

    PubMed

    Massaro, Dominic W

    2012-01-01

    I review 2 seminal research reports published in this journal during its second decade more than a century ago. Given psychology's subdisciplines, they would not normally be reviewed together because one involves reading and the other speech perception. The small amount of interaction between these domains might have limited research and theoretical progress. In fact, the 2 early research reports revealed common processes involved in these 2 forms of language processing. Their illustration of the role of Wundt's apperceptive process in reading and speech perception anticipated descriptions of contemporary theories of pattern recognition, such as the fuzzy logical model of perception. Based on the commonalities between reading and listening, one can question why they have been viewed so differently. It is commonly believed that learning to read requires formal instruction and schooling, whereas spoken language is acquired from birth onward through natural interactions with people who talk. Most researchers and educators believe that spoken language is acquired naturally from birth onward and even prenatally. Learning to read, on the other hand, is not possible until the child has acquired spoken language, reaches school age, and receives formal instruction. If an appropriate form of written text is made available early in a child's life, however, the current hypothesis is that reading will also be learned inductively and emerge naturally, with no significant negative consequences. If this proposal is true, it should soon be possible to create an interactive system, Technology Assisted Reading Acquisition, to allow children to acquire literacy naturally.

  2. Speech recognition training for enhancing written language generation by a traumatic brain injury survivor.

    PubMed

    Manasse, N J; Hux, K; Rankin-Erickson, J L

    2000-11-01

    Impairments in motor functioning, language processing, and cognitive status may impact the written language performance of traumatic brain injury (TBI) survivors. One strategy to minimize the impact of these impairments is to use a speech recognition system. The purpose of this study was to explore the effect of mild dysarthria and mild cognitive-communication deficits secondary to TBI on a 19-year-old survivor's mastery and use of such a system, specifically Dragon NaturallySpeaking. Data included the percentage of the participant's words accurately perceived by the system over time, the participant's accuracy over time in using commands for navigation and error correction, and quantitative and qualitative changes in the participant's written texts generated with and without the speech recognition system. Results showed that Dragon NaturallySpeaking was approximately 80% accurate in perceiving words spoken by the participant, and the participant quickly and easily mastered all navigation and error correction commands presented. Quantitatively, the participant produced a greater amount of text using traditional word processing and a standard keyboard than using the speech recognition system. Minimal qualitative differences appeared between the writing samples. Factors that may have contributed to these results and that may affect the generalization of the findings to other TBI survivors are discussed.

  3. Relative contributions of acoustic temporal fine structure and envelope cues for lexical tone perception in noise

    PubMed Central

    Qi, Beier; Mao, Yitao; Liu, Jiaxing; Liu, Bo; Xu, Li

    2017-01-01

    Previous studies have shown that lexical tone perception in quiet relies on the acoustic temporal fine structure (TFS) but not on the envelope (E) cues. The contributions of TFS to speech recognition in noise are under debate. In the present study, Mandarin tone tokens were mixed with speech-shaped noise (SSN) or two-talker babble (TTB) at five signal-to-noise ratios (SNRs; −18 to +6 dB). The TFS and E were then extracted from each of the 30 bands using the Hilbert transform. Twenty-five combinations of TFS and E from the sound mixtures of the same tone tokens at various SNRs were created. Twenty normal-hearing, native-Mandarin-speaking listeners participated in the tone-recognition test. Results showed that tone-recognition performance improved as the SNRs in either TFS or E increased. The masking effects on tone perception for the TTB were weaker than those for the SSN. For both types of masker, the perceptual weights of TFS and E in tone perception in noise were nearly equivalent, with E playing a slightly greater role than TFS. Thus, the relative contributions of TFS and E cues to lexical tone perception in noise or in competing-talker maskers differ from those in quiet and from those in speech perception of non-tonal languages. PMID:28599529
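
    The Hilbert-transform decomposition into envelope (E) and temporal fine structure (TFS) can be sketched for a single band-limited signal; the carrier and modulation rates below are illustrative, not the study's stimuli.

```python
import numpy as np
from scipy.signal import hilbert

# Sketch of the E/TFS decomposition applied per frequency band in
# such studies, shown here for one band-limited toy signal.

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
# A 500 Hz "band" carrier with a slow 4 Hz amplitude modulation.
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.cos(2 * np.pi * 500 * t)

analytic = hilbert(x)                # analytic signal
envelope = np.abs(analytic)          # E: the slow amplitude modulation
tfs = np.cos(np.angle(analytic))     # TFS: unit-amplitude fine structure
```

    Chimeric stimuli of the kind the study uses pair the E of one sound mixture with the TFS of another, e.g. `chimera = envelope_a * tfs_b`, to weigh their separate contributions to tone recognition.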

  4. A Multimodal Dialog System for Language Assessment: Current State and Future Directions. Research Report. ETS RR-17-21

    ERIC Educational Resources Information Center

    Suendermann-Oeft, David; Ramanarayanan, Vikram; Yu, Zhou; Qian, Yao; Evanini, Keelan; Lange, Patrick; Wang, Xinhao; Zechner, Klaus

    2017-01-01

    We present work in progress on a multimodal dialog system for English language assessment using a modular cloud-based architecture adhering to open industry standards. Among the modules being developed for the system, multiple modules heavily exploit machine learning techniques, including speech recognition, spoken language proficiency rating,…

  5. Semantic Radical Knowledge and Word Recognition in Chinese for Chinese as Foreign Language Learners

    ERIC Educational Resources Information Center

    Su, Xiaoxiang; Kim, Young-Suk

    2014-01-01

    In the present study, we examined the relation of knowledge of semantic radicals to students' language proficiency and word reading for adult Chinese-as-a-foreign language students. Ninety-seven college students rated their proficiency in speaking, listening, reading, and writing in Chinese, and were administered measures of receptive and…

  6. Yaounde French Speech Corpus

    DTIC Science & Technology

    2017-03-01

    the Center for Technology Enhanced Language Learning (CTELL), a research cell in the Department of Foreign Languages, United States Military Academy ... models for automatic speech recognition (ASR), and to, thereby, investigate the utility of ASR in pedagogical technology. The corpus is a sample of ... lexical resources, language technology

  7. American or British? L2 Speakers' Recognition and Evaluations of Accent Features in English

    ERIC Educational Resources Information Center

    Carrie, Erin; McKenzie, Robert M.

    2018-01-01

    Recent language attitude research has attended to the processes involved in identifying and evaluating spoken language varieties. This article investigates the ability of second-language learners of English in Spain (N = 71) to identify Received Pronunciation (RP) and General American (GenAm) speech and their perceptions of linguistic variation…

  8. Bilingual Language Representation and Cognitive Processes in Translation

    ERIC Educational Resources Information Center

    Hatzidaki, Anna; Pothos, Emmanuel M.

    2008-01-01

    A "text"-translation task and a recognition task investigated the hypothesis that "semantic memory" principally mediates translation from a bilingual's native first language (L1) to her second language (L2), whereas "lexical memory" mediates translation from L2 to L1. This has been held for word translation by the revised hierarchical model (RHM)…

  9. Language Nonselective Lexical Access in Bilingual Toddlers

    ERIC Educational Resources Information Center

    Von Holzen, Katie; Mani, Nivedita

    2012-01-01

    We examined how words from bilingual toddlers' second language (L2) primed recognition of related target words in their first language (L1). On critical trials, prime-target word pairs were either (a) phonologically related, with L2 primes overlapped phonologically with L1 target words [e.g., "slide" (L2 prime)-"Kleid" (L1 target, "dress")], or…

  10. Microcomputers and Preschoolers.

    ERIC Educational Resources Information Center

    Evans, Dina

    Preschool children can benefit by working with microcomputers. Thinking skills are enhanced by software games that focus on logic, memory, problem solving, and pattern recognition. Counting, sequencing, and matching games develop mathematics skills, and word games focusing on basic letter symbol and word recognition develop language skills.…

  11. Neural-Network Object-Recognition Program

    NASA Technical Reports Server (NTRS)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.
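
    The invariance idea behind a third-order network can be illustrated in a small sketch: features are built from triples of active pixels, and the interior angles of each triple's triangle are unchanged by translation, scaling, and in-plane rotation, so one training view covers those transforms. This is an illustrative reconstruction in Python, not the original C program.

```python
import numpy as np
from itertools import combinations

# Sketch of third-order (triple-of-pixels) invariant features:
# histogram the interior angles of every 3-point subset. Angles are
# invariant to translation, scaling, and in-plane rotation.

def triangle_angle_histogram(points, bins=6):
    """Histogram of interior angles over all 3-point subsets."""
    hist = np.zeros(bins)
    for tri in combinations(points, 3):
        a, b, c = (np.asarray(p, dtype=float) for p in tri)
        for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
            u, v = q - p, r - p
            cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            ang = np.arccos(np.clip(cosang, -1.0, 1.0))
            hist[min(int(ang / np.pi * bins), bins - 1)] += 1
    return hist

# A toy "object" and two transformed views of it.
shape = [(0, 0), (4, 0), (0, 3), (2, 2)]
shifted = [(x + 10, y - 5) for x, y in shape]      # translated view
rotated = [(-y, x) for x, y in shape]              # rotated 90 degrees

assert np.array_equal(triangle_angle_histogram(shape),
                      triangle_angle_histogram(shifted))
assert np.array_equal(triangle_angle_histogram(shape),
                      triangle_angle_histogram(rotated))
```

    A network whose units respond to such triple-wise relations needs only one view per object for translation-, scale-, and rotation-invariant recognition, which is the property the abstract describes.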

  12. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    PubMed

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. The research was administered as a cross-sectional study with a sample of 60 Persian-speaking 5-7-year-old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two conditions: auditory-only and audiovisual presentation. The test was a closed set of 30 words presented orally by a speech-language pathologist. The scores for audiovisual word perception were significantly higher than in the auditory-only condition for the children with normal hearing (P<0.01) and cochlear implants (P<0.05); however, for the children with hearing aids, there was no significant difference between word perception scores in the auditory-only and audiovisual presentation conditions (P>0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe-to-profound hearing loss in order to find out whether a cochlear implant or hearing aid has been efficient for them; i.e., if a child with hearing impairment who uses a CI or HA obtains higher scores in audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately thanks to an effective CI or HA, one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Perception of Filtered Speech by Children with Developmental Dyslexia and Children with Specific Language Impairments

    PubMed Central

    Goswami, Usha; Cumming, Ruth; Chait, Maria; Huss, Martina; Mead, Natasha; Wilson, Angela M.; Barnes, Lisa; Fosker, Tim

    2016-01-01

    Here we use two filtered speech tasks to investigate children’s processing of slow (<4 Hz) versus faster (∼33 Hz) temporal modulations in speech. We compare groups of children with either developmental dyslexia (Experiment 1) or speech and language impairments (SLIs, Experiment 2) to groups of typically-developing (TD) children age-matched to each disorder group. Ten nursery rhymes were filtered so that their modulation frequencies were either low-pass filtered (<4 Hz) or band-pass filtered (22-40 Hz). Recognition of the filtered nursery rhymes was tested in a picture recognition multiple choice paradigm. Children with dyslexia aged 10 years showed equivalent recognition overall to TD controls for both the low-pass and band-pass filtered stimuli, but showed significantly impaired acoustic learning during the experiment from low-pass filtered targets. Children with oral SLIs aged 9 years showed significantly poorer recognition of band-pass filtered targets compared to their TD controls, and showed comparable acoustic learning effects to TD children during the experiment. The SLI samples were also divided into children with and without phonological difficulties. The children with both SLI and phonological difficulties were impaired in recognizing both kinds of filtered speech. These data are suggestive of impaired temporal sampling of the speech signal at different modulation rates by children with different kinds of developmental language disorder. Both SLI and dyslexic samples showed impaired discrimination of amplitude rise times. Implications of these findings for a temporal sampling framework for understanding developmental language disorders are discussed. PMID:27303348
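
    The modulation-filtering manipulation can be sketched as follows: take a signal's amplitude envelope and keep either only slow (<4 Hz) or only faster (22-40 Hz) modulations. The toy signal, filter orders, and sampling rates are illustrative, not the study's exact stimulus processing.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

# Sketch of modulation filtering: extract the amplitude envelope,
# then low-pass (<4 Hz) or band-pass (22-40 Hz) its modulations.

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# Toy "speech": a 1 kHz carrier modulated at a slow (2 Hz) and a
# faster (30 Hz) rate.
modulation = 1 + 0.5 * np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
signal = modulation * np.cos(2 * np.pi * 1000 * t)

envelope = np.abs(hilbert(signal))[::80]   # Hilbert envelope, kept at 200 Hz
fs_env = 200

sos_lo = butter(4, 4, btype="low", fs=fs_env, output="sos")          # < 4 Hz
sos_bp = butter(2, [22, 40], btype="band", fs=fs_env, output="sos")  # 22-40 Hz

env_slow = sosfiltfilt(sos_lo, envelope)   # keeps only the 2 Hz modulation
env_fast = sosfiltfilt(sos_bp, envelope)   # keeps only the 30 Hz modulation
```

    In the study the filtered modulations are then re-imposed on the speech; here the two filtered envelopes simply show that the slow and faster modulation rates can be separated.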

  14. Early Postimplant Speech Perception and Language Skills Predict Long-Term Language and Neurocognitive Outcomes Following Pediatric Cochlear Implantation

    PubMed Central

    Kronenberger, William G.; Castellanos, Irina; Pisoni, David B.

    2017-01-01

    Purpose We sought to determine whether speech perception and language skills measured early after cochlear implantation in children who are deaf, and early postimplant growth in speech perception and language skills, predict long-term speech perception, language, and neurocognitive outcomes. Method Thirty-six long-term users of cochlear implants, implanted at an average age of 3.4 years, completed measures of speech perception, language, and executive functioning an average of 14.4 years postimplantation. Speech perception and language skills measured in the 1st and 2nd years postimplantation and open-set word recognition measured in the 3rd and 4th years postimplantation were obtained from a research database in order to assess predictive relations with long-term outcomes. Results Speech perception and language skills at 6 and 18 months postimplantation were correlated with long-term outcomes for language, verbal working memory, and parent-reported executive functioning. Open-set word recognition was correlated with early speech perception and language skills and long-term speech perception and language outcomes. Hierarchical regressions showed that early speech perception and language skills at 6 months postimplantation and growth in these skills from 6 to 18 months both accounted for substantial variance in long-term outcomes for language and verbal working memory that was not explained by conventional demographic and hearing factors. Conclusion Speech perception and language skills measured very early postimplantation, and early postimplant growth in speech perception and language, may be clinically relevant markers of long-term language and neurocognitive outcomes in users of cochlear implants. Supplemental materials https://doi.org/10.23641/asha.5216200 PMID:28724130

  15. The Mission Operations Planning Assistant

    NASA Technical Reports Server (NTRS)

    Schuetzle, James G.

    1987-01-01

    The Mission Operations Planning Assistant (MOPA) is a knowledge-based system developed to support the planning and scheduling of instrument activities on the Upper Atmospheric Research Satellite (UARS). The MOPA system represents and maintains instrument plans at two levels of abstraction in order to keep plans comprehensible to both UARS Principal Investigators and Command Management personnel. The hierarchical representation of plans also allows MOPA to automatically create detailed instrument activity plans from which spacecraft command loads may be generated. The MOPA system was developed on a Symbolics 3640 computer using the ZetaLisp and ART languages. MOPA's features include a textual and graphical interface for plan inspection and modification, recognition of instrument operational constraint violations during the planning process, and consistency maintenance between the different planning levels. This paper describes the current MOPA system.

  17. Long-term language levels and reading skills in mandarin-speaking prelingually deaf children with cochlear implants.

    PubMed

    Wu, Che-Ming; Chen, Yen-An; Chan, Kai-Chieh; Lee, Li-Ang; Hsu, Kuang-Hung; Lin, Bao-Guey; Liu, Tien-Chen

    2011-01-01

    The aim of this study was to document the receptive and expressive language levels and reading skills achieved by Mandarin-speaking children who had received cochlear implants (CIs) and used them for 4.75-7.42 years. The effects of possible associated factors were also analyzed. Standardized Mandarin language and reading tests were administered to 39 prelingually deaf children with Nucleus 24 devices. The Mandarin Chinese version of the Peabody Picture Vocabulary Test was used to assess their receptive vocabulary knowledge, and the Revised Primary School Language Assessment Test their receptive and expressive language skills. The Graded Chinese Character Recognition Test was used to test their written word recognition ability, and the Reading Comprehension Test their reading comprehension ability. Raw scores from both language and reading measurements were compared to normative data from normal-hearing children to obtain standard scores. The results showed that the mean standard score for the receptive vocabulary measurement and the mean T scores for the receptive language, expressive language, and total language measurements were all in the low-average range in comparison to the normative sample. In contrast, the mean T scores for word and text reading comprehension were almost the same as for their age-matched hearing counterparts. Among all children with CIs, 75.7% scored within or above the normal range of their age-matched hearing peers on the receptive vocabulary measurement. For total language, Chinese word recognition, and reading scores, 71.8, 77, and 82% of children with CIs were age appropriate, respectively. A strong correlation was found between language and reading skills. Age at implantation and sentence perception scores accounted for 37% of the variance in total language outcome. Sentence perception scores and preimplantation residual hearing were associated with the outcome of reading comprehension. We concluded that, by standard tests, the language development and reading skills of Mandarin-speaking children who use CIs from a young age appear to fall within the normal range of their hearing age mates, at least after 4.8-7.4 years of experience. However, to fully evaluate the fine linguistic skills of these subjects, a more detailed study and longer follow-up period are needed. Copyright © 2010 S. Karger AG, Basel.

  18. Creating Excitement and Involvement in the Foreign Language Classroom by Getting Away from the Textbook.

    ERIC Educational Resources Information Center

    Wattenmaker, Beverly; Lock, Joanne

    This paper presents a method for making foreign language learning meaningful and interesting through using language as a tool for communicating about the self and others. Drawing from techniques developed by behavioral scientists, the foreign language was used to develop students' self-awareness and confidence. Learning activities were created,…

  19. Speech Recognition for A Digital Video Library.

    ERIC Educational Resources Information Center

    Witbrock, Michael J.; Hauptmann, Alexander G.

    1998-01-01

    Production of the meta-data supporting the Informedia Digital Video Library interface is automated using techniques derived from artificial intelligence research. Speech recognition and natural-language processing, information retrieval, and image analysis are applied to produce an interface that helps users locate information and navigate more…

  20. Distributional structure in language: Contributions to noun–verb difficulty differences in infant word recognition

    PubMed Central

    Willits, Jon A.; Seidenberg, Mark S.; Saffran, Jenny R.

    2014-01-01

    What makes some words easy for infants to recognize, and other words difficult? We addressed this issue in the context of prior results suggesting that infants have difficulty recognizing verbs relative to nouns. In this work, we highlight the role played by the distributional contexts in which nouns and verbs occur. Distributional statistics predict that English nouns should generally be easier to recognize than verbs in fluent speech. However, there are situations in which distributional statistics provide similar support for verbs. The statistics for verbs that occur with the English morpheme –ing, for example, should facilitate verb recognition. In two experiments with 7.5- and 9.5-month-old infants, we tested the importance of distributional statistics for word recognition by varying the frequency of the contextual frames in which verbs occur. The results support the conclusion that distributional statistics are utilized by infant language learners and contribute to noun–verb differences in word recognition. PMID:24908342

  1. Conclusiveness of natural languages and recognition of images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojcik, Z.M.

    1983-01-01

    The conclusiveness is investigated using recognition processes and a one-one correspondence between expressions of a natural language and graphs representing events. The graphs, as conceived in psycholinguistics, are obtained as a result of perception processes. It is possible to generate and process the graphs automatically using computers, and then to convert the resulting graphs into expressions of a natural language. Correctness and conclusiveness of the graphs and sentences are investigated using the fundamental condition for event representation processes. Some consequences of the conclusiveness are discussed, e.g. undecidability of arithmetic, human brain asymmetry, and correctness of statistical calculations and operations research. It is suggested that group theory should be imposed on mathematical models of any real system. A proof of the fundamental condition is also presented. 14 references.

  2. Integration of Speech and Natural Language

    DTIC Science & Technology

    1988-04-01

    major activities: • Development of the syntax and semantics components for natural language processing. • Integration of the developed syntax and ... evaluating the performance of speech recognition algorithms developed under the Strategic Computing Program. Our work on natural language processing ... included the development of a grammar (syntax) that uses the Unification grammar formalism (an augmented context-free formalism). The Unification

  3. Speed of Word Recognition and Vocabulary Knowledge in Infancy Predict Cognitive and Language Outcomes in Later Childhood

    ERIC Educational Resources Information Center

    Marchman, Virginia A.; Fernald, Anne

    2008-01-01

    The nature of predictive relations between early language and later cognitive function is a fundamental question in research on human cognition. In a longitudinal study assessing speed of language processing in infancy, Fernald, Perfors and Marchman (2006) found that reaction time at 25 months was strongly related to lexical and grammatical…

  4. Auditory Word Recognition of Nouns and Verbs in Children with Specific Language Impairment (SLI)

    ERIC Educational Resources Information Center

    Andreu, Llorenc; Sanz-Torrent, Monica; Guardia-Olmos, Joan

    2012-01-01

    Nouns are fundamentally different from verbs semantically and syntactically, since verbs can specify one, two, or three nominal arguments. In this study, 25 children with Specific Language Impairment (age 5;3-8;2 years) and 50 typically developing children (3;3-8;2 years) participated in an eye-tracking experiment of spoken language comprehension…

  5. Visual Half-Field Experiments Are a Good Measure of Cerebral Language Dominance if Used Properly: Evidence from fMRI

    ERIC Educational Resources Information Center

    Hunter, Zoe R.; Brysbaert, Marc

    2008-01-01

    Traditional neuropsychology employs visual half-field (VHF) experiments to assess cerebral language dominance. This approach is based on the assumption that left cerebral dominance for language leads to faster and more accurate recognition of words in the right visual half-field (RVF) than in the left visual half-field (LVF) during tachistoscopic…

  6. Masked Translation Priming Effects in Visual Word Recognition by Trilinguals

    ERIC Educational Resources Information Center

    Aparicio, Xavier; Lavaur, Jean-Marc

    2016-01-01

    The present study aims to investigate how trilinguals process their two non-dominant languages and how those languages influence one another, as well as the relative importance of the dominant language on their processing. With this in mind, 24 French (L1)- English (L2)- and Spanish (L3)-unbalanced trilinguals, deemed equivalent in their L2 and L3…

  7. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    ERIC Educational Resources Information Center

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using auditory verbal memory test that was adapted according to the Rey Auditory Verbal Learning Test and developed in three versions: Pure spoken language version (SL), pure standard language version (SA), and…

  8. Perceptual Decoding Processes for Language in a Visual Mode and for Language in an Auditory Mode.

    ERIC Educational Resources Information Center

    Myerson, Rosemarie Farkas

    The purpose of this paper is to gain insight into the nature of the reading process through an understanding of the general nature of sensory processing mechanisms which reorganize and restructure input signals for central recognition, and an understanding of how the grammar of the language functions in defining the set of possible sentences in…

  9. ANTLR Tree Grammar Generator and Extensions

    NASA Technical Reports Server (NTRS)

    Craymer, Loring

    2005-01-01

    A computer program implements two extensions of ANTLR (Another Tool for Language Recognition), which is a set of software tools for translating source codes between different computing languages. ANTLR supports predicated-LL(k) lexer and parser grammars, a notation for annotating parser grammars to direct tree construction, and predicated tree grammars. [LL(k) signifies left-to-right scan with leftmost derivation and k tokens of look-ahead, referring to certain characteristics of a grammar.] One of the extensions is a syntax for tree transformations. The other extension is the generation of tree grammars from annotated parser or input tree grammars. These extensions can simplify the process of generating source-to-source language translators, and they make possible an approach, called "polyphase parsing," to translation between computing languages. The typical approach to translator development is to identify high-level semantic constructs such as "expressions," "declarations," and "definitions" as fundamental building blocks in the grammar specification used for language recognition. The polyphase approach is to lump ambiguous syntactic constructs together during parsing and then disambiguate the alternatives in subsequent tree-transformation passes. Polyphase parsing is believed to be useful for generating efficient recognizers for C++ and other languages that, like C++, have significant ambiguities.
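    The polyphase idea described above can be sketched in miniature (all names here are illustrative, not ANTLR's actual API): the parser lumps an ambiguous construct into a generic node, and a later tree-transformation pass disambiguates it once type information is available.

```python
# Minimal sketch of "polyphase parsing" (all names are illustrative, not
# ANTLR's API). Phase 1 lumps an ambiguous construct into a generic AMBIG
# node; phase 2 disambiguates it in a tree-transformation pass once type
# information is available.

def parse(tokens):
    # Phase 1: in C++, "a * b ;" is either a multiplication expression or
    # a pointer declaration; emit a lumped AMBIG node instead of deciding.
    if len(tokens) >= 3 and tokens[1] == "*":
        return ("AMBIG", tokens[0], tokens[2])
    raise ValueError("unsupported input in this sketch")

def disambiguate(tree, type_names):
    # Phase 2: rewrite AMBIG nodes using a symbol table of known types.
    tag, left, right = tree
    if tag != "AMBIG":
        return tree
    if left in type_names:                  # "T * b ;" -> pointer declaration
        return ("PTR_DECL", left, right)
    return ("MUL_EXPR", left, right)        # otherwise a multiplication

tree = parse(["a", "*", "b", ";"])
print(disambiguate(tree, {"T"}))                 # ('MUL_EXPR', 'a', 'b')
print(disambiguate(("AMBIG", "T", "b"), {"T"}))  # ('PTR_DECL', 'T', 'b')
```

    The point of deferring the decision is that the symbol table needed to resolve the ambiguity may not exist until after parsing is complete.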

  10. Are Emojis Creating a New or Old Visual Language for New Generations? A Socio-Semiotic Study

    ERIC Educational Resources Information Center

    Alshenqeeti, Hamza

    2016-01-01

    The increasing use of emojis, digital images that can represent a word or feeling in a text or email, and the fact that they can be strung together to create a sentence with real and full meaning raises the question of whether they are creating a new language amongst technologically savvy youth, or devaluing existing language. There is however a…

  11. Incorporating Speech Recognition into a Natural User Interface

    NASA Technical Reports Server (NTRS)

    Chapa, Nicholas

    2017-01-01

    The Augmented/Virtual Reality (AVR) Lab has been working to study the applicability of recent virtual and augmented reality hardware and software to KSC operations. This includes the Oculus Rift, HTC Vive, Microsoft HoloLens, and the Unity game engine. My project in this lab is to integrate voice recognition and voice commands into an easy-to-modify system that can be added to an existing portion of a Natural User Interface (NUI). A NUI is an intuitive, simple-to-use interface incorporating visual, touch, and speech recognition. The inclusion of speech recognition capability will allow users to perform actions or make inquiries using only their voice. Because users need only speak to control an on-screen object or enact a digital action, any user can quickly become accustomed to the system. Multiple programs were tested for use in a speech command and recognition system. Sphinx4 translates speech to text using a Hidden Markov Model (HMM) based language model, an acoustic model, and a word dictionary, running on Java. PocketSphinx had similar functionality to Sphinx4 but instead ran on C. However, neither of these programs was ideal, as building a Java or C wrapper slowed performance. The most suitable speech recognition system tested was the Unity Engine Grammar Recognizer. A Context-Free Grammar (CFG) structure is written in an XML file to specify the structure of phrases and words that will be recognized by the Unity Grammar Recognizer. Using Speech Recognition Grammar Specification (SRGS) 1.0 makes modifying the recognized combinations of words and phrases simple and quick. With SRGS 1.0, semantic information can also be added to the XML file, which allows even more control over how spoken words and phrases are interpreted by Unity. Additionally, using a CFG with SRGS 1.0 yields Finite State Machine (FSM) behavior, limiting the potential for incorrectly heard words or phrases.
The purpose of my project was to investigate options for a speech recognition system. To that end I attempted to integrate Sphinx4 into a user interface. Sphinx4 had great accuracy and is the only free program able to perform offline speech dictation. However, it had a limited dictionary of recognizable words, single-syllable words were almost impossible for it to hear, and since it ran on Java it could not be integrated into the Unity-based NUI. PocketSphinx ran much faster than Sphinx4, which would have made it ideal as a plugin to the Unity NUI; unfortunately, creating a C# wrapper for the C code made the program unusable with Unity because the wrapper slowed code execution and class files became unreachable. The Unity Grammar Recognizer is the most suitable speech recognition interface: it is flexible in recognizing multiple variations of the same command, and it is the most accurate program at recognizing speech because it uses an XML grammar to specify speech structure instead of relying solely on a dictionary and language model. The Unity Grammar Recognizer will be used with the NUI for these reasons, as well as being written in C#, which further simplifies integration.
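    The restriction that a CFG-based recognizer imposes can be illustrated with a toy grammar (the commands below are invented for the sketch and are not Unity or SRGS syntax): only word sequences the grammar generates are ever accepted, so a misheard phrase outside the grammar is rejected rather than matched.

```python
# Toy illustration of why a fixed phrase grammar limits recognition errors:
# the grammar below generates exactly four commands, and anything else is
# rejected. (Hypothetical grammar, not Unity's or SRGS's format.)
import itertools

GRAMMAR = {
    "command": [["action", "object"]],
    "action":  [["open"], ["close"]],
    "object":  [["door"], ["panel"]],
}

def expand(symbol):
    # Enumerate every phrase a symbol can produce; terminals yield themselves.
    if symbol not in GRAMMAR:
        return [[symbol]]
    phrases = []
    for rule in GRAMMAR[symbol]:
        parts = [expand(s) for s in rule]
        for combo in itertools.product(*parts):
            phrases.append([w for part in combo for w in part])
    return phrases

VALID = {tuple(p) for p in expand("command")}

print(tuple("open door".split()) in VALID)   # True
print(tuple("open sky".split()) in VALID)    # False: outside the grammar
```

    A dictation system with an open vocabulary has no such guarantee, which is the trade-off the abstract describes.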

  12. Parametric Representation of the Speaker's Lips for Multimodal Sign Language and Speech Recognition

    NASA Astrophysics Data System (ADS)

    Ryumin, D.; Karpov, A. A.

    2017-05-01

    In this article, we propose a new method for parametric representation of a human's lip region. The functional diagram of the method is described, and implementation details with an explanation of its key stages and features are given. The results of automatic detection of the regions of interest are illustrated. The speed of the method on several computers with different performance levels is reported. This universal method allows applying a parametric representation of the speaker's lips to the tasks of biometrics, computer vision, machine learning, and automatic recognition of faces, elements of sign languages, and audio-visual speech, including lip-reading.

  13. Improving speech-in-noise recognition for children with hearing loss: Potential effects of language abilities, binaural summation, and head shadow

    PubMed Central

    Nittrouer, Susan; Caldwell-Tarr, Amanda; Tarr, Eric; Lowenstein, Joanna H.; Rice, Caitlin; Moberly, Aaron C.

    2014-01-01

    Objective: This study examined speech recognition in noise for children with hearing loss, compared it to recognition for children with normal hearing, and examined mechanisms that might explain variance in children’s abilities to recognize speech in noise. Design: Word recognition was measured in two levels of noise, both when the speech and noise were co-located in front and when the noise came separately from one side. Four mechanisms were examined as factors possibly explaining variance: vocabulary knowledge, sensitivity to phonological structure, binaural summation, and head shadow. Study sample: Participants were 113 eight-year-old children. Forty-eight had normal hearing (NH) and 65 had hearing loss: 18 with hearing aids (HAs), 19 with one cochlear implant (CI), and 28 with two CIs. Results: Phonological sensitivity explained a significant amount of between-groups variance in speech-in-noise recognition. Little evidence of binaural summation was found. Head shadow was similar in magnitude for children with NH and with CIs, regardless of whether they wore one or two CIs. Children with HAs showed reduced head shadow effects. Conclusion: These outcomes suggest that in order to improve speech-in-noise recognition for children with hearing loss, intervention needs to be comprehensive, focusing on both language abilities and auditory mechanisms. PMID:23834373

  14. Spoken word recognition by Latino children learning Spanish as their first language*

    PubMed Central

    HURTADO, NEREYDA; MARCHMAN, VIRGINIA A.; FERNALD, ANNE

    2010-01-01

    Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157

  15. Design and performance of a large vocabulary discrete word recognition system. Volume 2: Appendixes. [flow charts and users manual

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The users manual for the word recognition computer program contains flow charts of the logical diagram, the memory map for templates, the speech analyzer card arrangement, minicomputer input/output routines, and assembly language program listings.

  16. Phonological Activation in Multi-Syllabic Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang H.

    2007-01-01

    Three experiments were conducted to test the phonological recoding hypothesis in visual word recognition. Most studies on this issue have been conducted using mono-syllabic words, eventually constructing various models of phonological processing. Yet in many languages including English, the majority of words are multi-syllabic words. English…

  17. Real-time processing of ASL signs: Delayed first language acquisition affects organization of the mental lexicon

    PubMed Central

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2014-01-01

    Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of linguistic input for deaf individuals are highly heterogeneous, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1), and in deaf individuals who were late-learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native signers demonstrated early and robust activation of sub-lexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not the sensory and motor modality through which the linguistic signal is sent and received. PMID:25528091

  18. Tips from the Classroom.

    ERIC Educational Resources Information Center

    Wilhelm, Kim Hughes; Leverett, Thomas; Barrett, Rob J.; Chur-Hansen, Anna; Dantas-Whitney, Maria; Zapata, Gabriela; Garcia, Juan Felix

    1998-01-01

    Five articles present tips for rallying English-as-a-Second-Language students to the enterprise of creating context, tools, and language itself. The articles focus on using original dramas created by students, teaching nonnative English-speaking medical students to comprehend their patients' colloquial language, conducting research with native…

  19. Processing voiceless vowels in Japanese: Effects of language-specific phonological knowledge

    NASA Astrophysics Data System (ADS)

    Ogasawara, Naomi

    2005-04-01

    There has been little research on processing allophonic variation in the field of psycholinguistics. This study focuses on processing the voiced/voiceless allophonic alternation of high vowels in Japanese. Three perception experiments were conducted to explore how listeners parse out vowels with the voicing alternation from other segments in the speech stream and how the different voicing statuses of the vowel affect listeners' word recognition process. The results from the three experiments show that listeners use phonological knowledge of their native language for phoneme processing and for word recognition. However, interactions of the phonological and acoustic effects are observed to be different in each process. The facilitatory phonological effect and the inhibitory acoustic effect cancel out one another in phoneme processing; while in word recognition, the facilitatory phonological effect overrides the inhibitory acoustic effect.

  20. Role des congeneres interlinguaux dans le developpement du vocabulaire receptif: Application au francais langue seconde (The Role of Interlingual Cognates in the Development of Receptive Vocabulary: Application to French as a Second Language).

    ERIC Educational Resources Information Center

    Treville, Marie-Claude

    This study investigated the effects of systematic use of similarities between the native and second languages on the lexical competence of second language learners. Subjects were 209 first- and second-year English-speaking university students in French language classes. The students were pre- and post-tested for their visual recognition of…

  1. Robust Real-Time and Rotation-Invariant American Sign Language Alphabet Recognition Using Range Camera

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2012-07-01

    The automatic interpretation of human gestures can be used for natural interaction with computers without the use of mechanical devices such as keyboards and mice. The recognition of hand postures has been studied for many years. However, most of the literature in this area has considered 2D images, which cannot provide a full description of the hand gestures. In addition, rotation-invariant identification remains an unsolved problem even with the use of 2D images. The objective of the current study is to design a rotation-invariant recognition process while using a 3D signature for classifying hand postures. A heuristic, voxel-based signature has been designed and implemented. The tracking of the hand motion is achieved with the Kalman filter. A unique training image per posture is used in the supervised classification. The designed recognition process and the tracking procedure have been successfully evaluated. This study has demonstrated the efficiency of the proposed rotation-invariant 3D hand posture signature, which leads to a 98.24% recognition rate after testing 12723 samples of 12 gestures taken from the alphabet of the American Sign Language.
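    The Kalman filter used for hand tracking can be sketched in its simplest scalar form (the noise parameters below are illustrative; the paper's tracker operates on 3D range-camera data, not 1-D positions):

```python
# Hedged sketch of the predict/update cycle behind Kalman tracking,
# reduced to a 1-D scalar filter (q, r are illustrative noise values).
def kalman_1d(measurements, q=0.01, r=0.5):
    x, p = measurements[0], 1.0   # initial state estimate and variance
    for z in measurements[1:]:
        p = p + q                 # predict: uncertainty grows between frames
        k = p / (p + r)           # Kalman gain: trust in the new measurement
        x = x + k * (z - x)       # update the estimate toward the measurement
        p = (1 - k) * p           # uncertainty shrinks after the update
    return x

# Noisy per-frame hand positions are smoothed into a stable estimate.
print(kalman_1d([1.0, 2.1, 2.9, 4.2]))
```

    A real hand tracker would carry a vector state (position and velocity in 3D) and matrix covariances, but the predict/update structure is the same.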

  2. The A2iA French handwriting recognition system at the Rimes-ICDAR2011 competition

    NASA Astrophysics Data System (ADS)

    Menasri, Farès; Louradour, Jérôme; Bianne-Bernard, Anne-Laure; Kermorvant, Christopher

    2012-01-01

    This paper describes the system for the recognition of French handwriting submitted by A2iA to the competition organized at ICDAR2011 using the Rimes database. This system is composed of several recognizers based on three different recognition technologies, combined using a novel combination method. A framework for multi-word recognition based on weighted finite-state transducers is presented, using an explicit word segmentation, a combination of isolated word recognizers, and a language model. The system was tested both for isolated word recognition and for multi-word line recognition and submitted to the RIMES-ICDAR2011 competition. This system outperformed all previously proposed systems on these tasks.
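    The combination of isolated word recognizers with a language model can be sketched as a best-path search over a word lattice (toy scores and a plain Viterbi loop stand in for the paper's weighted finite-state transducers; all words and costs below are invented):

```python
# Sketch (not A2iA's code): pick the best word sequence by combining
# per-word recognizer costs with bigram language-model costs.
# Costs are negative log probabilities, so lower is better.
lattice = [{"le": 0.2, "la": 0.3},          # candidates for word position 1
           {"chat": 0.1, "chot": 0.05}]     # candidates for word position 2
bigram = {("le", "chat"): 0.1, ("le", "chot"): 2.0,
          ("la", "chat"): 0.5, ("la", "chot"): 2.0}

def best_path(lattice, bigram, lm_weight=1.0):
    # Viterbi over the lattice: path cost = recognizer cost + weighted LM cost.
    paths = dict(lattice[0])
    back = {w: (w,) for w in lattice[0]}
    for frame in lattice[1:]:
        new_paths, new_back = {}, {}
        for w, ac in frame.items():
            cost, prev = min((paths[p] + ac
                              + lm_weight * bigram.get((p, w), 5.0), p)
                             for p in paths)
            new_paths[w] = cost
            new_back[w] = back[prev] + (w,)
        paths, back = new_paths, new_back
    return back[min(paths, key=paths.get)]

# "chot" is cheaper for the recognizer alone, but the LM makes "chat" win.
print(best_path(lattice, bigram))
```

    Real WFST decoders express the same search as transducer composition and shortest-path computation, which scales to full vocabularies and line-level recognition.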

  3. Use of Biometrics within Sub-Saharan Refugee Communities

    DTIC Science & Technology

    2013-12-01

    fingerprint patterns, iris pattern recognition, and facial recognition as a means of establishing an individual’s identity. Biometrics creates and...authentication because it identifies an individual based on mathematical analysis of the random pattern visible within the iris. Facial recognition is

  4. Speech Recognition Thresholds for Multilingual Populations.

    ERIC Educational Resources Information Center

    Ramkissoon, Ishara

    2001-01-01

    This article traces the development of speech audiometry in the United States and reports on the current status, focusing on the needs of a multilingual population in terms of measuring speech recognition threshold (SRT). It also discusses sociolinguistic considerations, alternative SRT stimuli for second language learners, and research on using…

  5. Music Education Intervention Improves Vocal Emotion Recognition

    ERIC Educational Resources Information Center

    Mualem, Orit; Lavidor, Michal

    2015-01-01

    The current study is an interdisciplinary examination of the interplay among music, language, and emotions. It consisted of two experiments designed to investigate the relationship between musical abilities and vocal emotional recognition. In experiment 1 (N = 24), we compared the influence of two short-term intervention programs--music and…

  6. Collaborative Efforts to Promote Emergent Literacy and Efficient Word Recognition Skills

    ERIC Educational Resources Information Center

    Roth, Froma P.; Troia, Gary A.

    2006-01-01

    In this article, 3 models of collaboration between speech-language pathologists and classroom teachers are discussed to promote emergent literacy and accurate and fluent word recognition. These models are demonstration lessons, team teaching, and consultation. A number of instructional principles are presented for emergent literacy and decoding…

  7. How Cross-Language Similarity and Task Demands Affect Cognate Recognition

    ERIC Educational Resources Information Center

    Dijkstra, Ton; Miwa, Koji; Brummelhuis, Bianca; Sappelli, Maya; Baayen, Harald

    2010-01-01

    This study examines how the cross-linguistic similarity of translation equivalents affects bilingual word recognition. Performing one of three tasks, Dutch-English bilinguals processed cognates with varying degrees of form overlap between their English and Dutch counterparts (e.g., "lamp-lamp" vs. "flood-vloed" vs. "song-lied"). In lexical…

  8. Language Education and Multilingualism in Colombia: Crossing the Divide

    ERIC Educational Resources Information Center

    de Mejía, Anne-Marie

    2017-01-01

    Despite Colombia's official recognition of its ethnic and cultural diversity, it has yet to develop in practice an inclusive educational vision involving the recognition of diversity, as well as promoting the country's insertion within the global market. Garcia et al. acknowledge the importance of "cultivating" students' diverse…

  9. Differences in Word Recognition between Early Bilinguals and Monolinguals: Behavioral and ERP Evidence

    ERIC Educational Resources Information Center

    Lehtonen, Minna; Hulten, Annika; Rodriguez-Fornells, Antoni; Cunillera, Toni; Tuomainen, Jyrki; Laine, Matti

    2012-01-01

    We investigated the behavioral and brain responses (ERPs) of bilingual word recognition to three fundamental psycholinguistic factors, frequency, morphology, and lexicality, in early bilinguals vs. monolinguals. Earlier behavioral studies have reported larger frequency effects in bilinguals' nondominant vs. dominant language and in some studies…

  10. Cognitive Development and Reading Processes. Developmental Program Report Number 76.

    ERIC Educational Resources Information Center

    West, Richard F.

    In discussing the relationship between cognitive development (perception, pattern recognition, and memory) and reading processes, this paper especially emphasizes developmental factors. After an overview of some issues that bear on how written language is processed, the paper presents a discussion of pattern recognition, including general pattern…

  11. 21 CFR 330.1 - General conditions for general recognition as safe, effective and not misbranded.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... exact language where exact language has been established and identified by quotation marks in an... “minimize”. (64) “Referred to as” or “of”. (65) “Sensation” or “feeling”. (66) “Solution” or “liquid”. (67...

  12. 21 CFR 330.1 - General conditions for general recognition as safe, effective and not misbranded.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... exact language where exact language has been established and identified by quotation marks in an... “minimize”. (64) “Referred to as” or “of”. (65) “Sensation” or “feeling”. (66) “Solution” or “liquid”. (67...

  13. 21 CFR 330.1 - General conditions for general recognition as safe, effective and not misbranded.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... exact language where exact language has been established and identified by quotation marks in an... “minimize”. (64) “Referred to as” or “of”. (65) “Sensation” or “feeling”. (66) “Solution” or “liquid”. (67...

  14. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    PubMed

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  15. Speech perception and spoken word recognition: past and present.

    PubMed

    Jusczyk, Peter W; Luce, Paul A

    2002-02-01

    The scientific study of the perception of spoken language has been an exciting, prolific, and productive area of research for more than 50 yr. We have learned much about infants' and adults' remarkable capacities for perceiving and understanding the sounds of their language, as evidenced by our increasingly sophisticated theories of acquisition, process, and representation. We present a selective but, we hope, representative review of the past half century of research on speech perception, paying particular attention to the historical and theoretical contexts within which this research was conducted. Our foci in this review fall on three principal topics: early work on the discrimination and categorization of speech sounds, more recent efforts to understand the processes and representations that subserve spoken word recognition, and research on how infants acquire the capacity to perceive their native language. Our intent is to provide the reader with a sense of the progress our field has experienced over the last half century in understanding the human's extraordinary capacity for the perception of spoken language.

  16. The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters

    PubMed Central

    Rempel, David; Camilleri, Matt J.; Lee, David L.

    2015-01-01

    The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input. PMID:26028955

  17. Second Language Ability and Emotional Prosody Perception

    PubMed Central

    Bhatara, Anjali; Laukka, Petri; Boll-Avetisyan, Natalie; Granjon, Lionel; Anger Elfenbein, Hillary; Bänziger, Tanja

    2016-01-01

    The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions. PMID:27253326

  18. Medical Named Entity Recognition for Indonesian Language Using Word Representations

    NASA Astrophysics Data System (ADS)

    Rahman, Arief

    2018-03-01

    Nowadays, Named Entity Recognition (NER) systems are used in medical texts to obtain important medical information, like diseases, symptoms, and drugs. While most NER systems are applied to formal medical texts, informal ones like those from social media (also called semi-formal texts) are starting to gain recognition as a gold mine for medical information. We propose a theoretical Named Entity Recognition (NER) model for semi-formal medical texts in our medical knowledge management system by comparing two kinds of word representations: cluster-based and distributed representations.
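    The contrast between the two representations can be illustrated with hypothetical data (the words, cluster IDs, and vectors below are invented for the sketch; "demam" is Indonesian for "fever"):

```python
# Illustrative contrast between the two word representations compared in
# the abstract, as NER features. All data here is made up for the sketch.
clusters = {"demam": "C12", "fever": "C12", "panadol": "C7"}    # cluster-based
embeddings = {"demam": [0.8, 0.1], "fever": [0.7, 0.2],         # distributed
              "panadol": [0.1, 0.9]}

def cluster_feature(word):
    # Symbolic feature: words in the same cluster share it exactly,
    # and an unseen word falls back to an out-of-vocabulary bucket.
    return {"cluster=" + clusters.get(word, "OOV"): 1}

def dense_feature(word):
    # Real-valued vector: similarity between words is graded rather
    # than all-or-nothing, at the cost of interpretability.
    return embeddings.get(word, [0.0, 0.0])

print(cluster_feature("demam") == cluster_feature("fever"))  # True
print(dense_feature("panadol"))                              # [0.1, 0.9]
```

    Either kind of feature can then be fed to a sequence labeler; the abstract's question is which representation transfers better to semi-formal text.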

  19. The U.S.-China E-Language Project: A Study of a Gaming Approach to English Language Learning for Middle School Students

    ERIC Educational Resources Information Center

    Green, Patricia J.; Sha, Mandy; Liu, Lu

    2011-01-01

    In 2001, the U.S. Department of Education and the Ministry of Education in China entered into a bilateral partnership to develop a technology-driven approach to foreign language learning that integrated gaming, immersion, voice recognition, problem-based learning tasks, and other features that made it a significant research and development pilot…

  20. The Role of Syllables in Intermediate-Depth Stress-Timed Languages: Masked Priming Evidence in European Portuguese

    ERIC Educational Resources Information Center

    Campos, Ana Duarte; Mendes Oliveira, Helena; Soares, Ana Paula

    2018-01-01

    The role of syllables as a sublexical unit in visual word recognition and reading is well established in deep and shallow syllable-timed languages such as French and Spanish, respectively. However, its role in intermediate stress-timed languages remains unclear. This paper aims to overcome this gap by studying for the first time the role of…

  1. Self-identity reprogrammed by a single residue switch in a cell surface receptor of a social bacterium.

    PubMed

    Cao, Pengbo; Wall, Daniel

    2017-04-04

    The ability to recognize close kin confers survival benefits on single-celled microbes that live in complex and changing environments. Microbial kinship detection relies on perceptible cues that reflect relatedness between individuals, although the mechanisms underlying recognition in natural populations remain poorly understood. In myxobacteria, cells identify related individuals through a polymorphic cell surface receptor, TraA. Recognition of compatible receptors leads to outer membrane exchange among clonemates and fitness consequences. Here, we investigated how a single receptor creates a diversity in recognition across myxobacterial populations. We first show that TraA requires its partner protein TraB to function in cell-cell adhesion. Recognition is shown to be traA allele-specific, where polymorphisms within TraA dictate binding selectivity. We reveal the malleability of TraA recognition, and seemingly minor changes to its variable region reprogram recognition outcomes. Strikingly, we identify a single residue (A/P205) as a molecular switch for TraA recognition. Substitutions at this position change the specificity of a diverse panel of environmental TraA receptors. In addition, we engineered a receptor with unique specificity by simply creating an A205P substitution, suggesting that modest changes in TraA can lead to diversification of new recognition groups in nature. We hypothesize that the malleable property of TraA has allowed it to evolve and create social barriers between myxobacterial populations and in turn avoid adverse interactions with relatives.

  2. Tension and Approximation in Poetic Translation

    ERIC Educational Resources Information Center

    Al-Shabab, Omar A. S.; Baka, Farida H.

    2015-01-01

    Simple observation reveals that each language and each culture enjoys specific linguistic features and rhetorical traditions. In poetry translation, difference and the resultant linguistic tension create a gap between Source Language and Target Language, a gap that needs to be bridged by creating an approximation processed through the translator's…

  3. The Language Grid: supporting intercultural collaboration

    NASA Astrophysics Data System (ADS)

    Ishida, T.

    2018-03-01

    A variety of language resources already exist online. Unfortunately, since many language resources have usage restrictions, it is virtually impossible for each user to negotiate with every language resource provider when combining several resources to achieve the intended purpose. To increase the accessibility and usability of language resources (dictionaries, parallel texts, part-of-speech taggers, machine translators, etc.), we proposed the Language Grid [1]; it wraps existing language resources as atomic services and enables users to create new services by combining the atomic services, and reduces the negotiation costs related to intellectual property rights [4]. Our slogan is “language services from language resources.” We believe that modularization with recombination is the key to creating a full range of customized language environments for various user communities.
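The "atomic services combined into composite services" design the abstract describes can be sketched as plain function composition; the service names, toy glossary, and signatures below are invented for illustration and are not the actual Language Grid API:

```python
# Sketch of "language services from language resources": wrap resources as
# atomic services, then compose them into new services. All names and the
# toy data are hypothetical, not the real Language Grid interfaces.

def dictionary_lookup(term: str) -> str:
    """Atomic service: a toy domain dictionary."""
    return {"grid": "a network of services"}.get(term, term)

def translate(text: str) -> str:
    """Atomic service: a toy word-for-word 'translator'."""
    glossary = {"hola": "hello", "mundo": "world"}
    return " ".join(glossary.get(w, w) for w in text.split())

def compose(*services):
    """Combine atomic services into a single composite service."""
    def composite(text):
        for service in services:
            text = service(text)
        return text
    return composite

# A composite service built from atomic ones, as the Language Grid proposes.
translate_then_lookup = compose(translate, dictionary_lookup)
print(translate_then_lookup("hola"))  # "hello"
```

The point of the design is that each resource is negotiated and wrapped once, after which users recombine services freely without renegotiating intellectual property rights per combination.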

  4. Recognition of Indian Sign Language in Live Video

    NASA Astrophysics Data System (ADS)

    Singha, Joyeeta; Das, Karen

    2013-05-01

    Sign Language Recognition has emerged as one of the important areas of research in Computer Vision. The difficulty faced by researchers is that the instances of signs vary with both motion and appearance. Thus, in this paper a novel approach for recognizing various alphabets of Indian Sign Language is proposed, where continuous video sequences of the signs have been considered. The proposed system comprises three stages: preprocessing, feature extraction, and classification. The preprocessing stage includes skin filtering and histogram matching. Eigenvalues and eigenvectors were considered for the feature extraction stage, and finally an eigenvalue-weighted Euclidean distance is used to recognize the sign. The system deals with bare hands, thus allowing the user to interact with it in a natural way. We have considered 24 different alphabets in the video sequences and attained a success rate of 96.25%.
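A minimal sketch of the matching step this abstract describes: nearest-template classification using a Euclidean distance in which each eigenspace component is weighted by its eigenvalue. The projection details, weighting scheme, and variable names are assumptions, not the authors' implementation:

```python
import numpy as np

# Hedged sketch of eigenvalue-weighted Euclidean distance matching.
# The feature vectors here stand in for whatever the preprocessing
# stage would extract from video frames.

def eigen_features(X):
    """PCA-style eigendecomposition of training feature vectors (rows of X)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    return eigvals[order], eigvecs[:, order], X.mean(axis=0)

def weighted_distance(a, b, eigvals):
    """Euclidean distance with each component weighted by its eigenvalue."""
    return np.sqrt(np.sum(eigvals * (a - b) ** 2))

def classify(test_vec, templates, eigvals, eigvecs, mean):
    """Assign the sign label whose template is nearest in eigenspace."""
    t = eigvecs.T @ (test_vec - mean)
    dists = {label: weighted_distance(t, eigvecs.T @ (v - mean), eigvals)
             for label, v in templates.items()}
    return min(dists, key=dists.get)
```

Weighting by eigenvalue emphasizes the directions along which the training signs vary most, so the comparison is dominated by the most discriminative components.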

  5. Implicit phonological priming during visual word recognition.

    PubMed

    Wilson, Lisa B; Tregellas, Jason R; Slason, Erin; Pasko, Bryce E; Rojas, Donald C

    2011-03-15

    Phonology is a lower-level structural aspect of language involving the sounds of a language and their organization in that language. Numerous behavioral studies utilizing priming, which refers to an increased sensitivity to a stimulus following prior experience with that or a related stimulus, have provided evidence for the role of phonology in visual word recognition. However, most language studies utilizing priming in conjunction with functional magnetic resonance imaging (fMRI) have focused on lexical-semantic aspects of language processing. The aim of the present study was to investigate the neurobiological substrates of the automatic, implicit stages of phonological processing. While undergoing fMRI, eighteen individuals performed a lexical decision task (LDT) on prime-target pairs including word-word homophone and pseudoword-word pseudohomophone pairs with a prime presentation below perceptual threshold. Whole-brain analyses revealed several cortical regions exhibiting hemodynamic response suppression due to phonological priming including bilateral superior temporal gyri (STG), middle temporal gyri (MTG), and angular gyri (AG) with additional region of interest (ROI) analyses revealing response suppression in the left lateralized supramarginal gyrus (SMG). Homophone and pseudohomophone priming also resulted in different patterns of hemodynamic responses relative to one another. These results suggest that phonological processing plays a key role in visual word recognition. Furthermore, enhanced hemodynamic responses for unrelated stimuli relative to primed stimuli were observed in midline cortical regions corresponding to the default-mode network (DMN) suggesting that DMN activity can be modulated by task requirements within the context of an implicit task. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Relationships among Constructs of L2 Chinese Reading and Language Background

    ERIC Educational Resources Information Center

    Hsu, Wei-Li

    2016-01-01

    Extensive research has been conducted on the relationships of Chinese-character recognition to reading development; strategic competence to reading comprehension; and home linguistic exposure to heritage language acquisition. However, studies of these relationships have been marked by widely divergent theoretical underpinnings, and their results…

  7. Implementing ICAO Language Proficiency Requirements in the Versant Aviation English Test

    ERIC Educational Resources Information Center

    Van Moere, Alistair; Suzuki, Masanori; Downey, Ryan; Cheng, Jian

    2009-01-01

    This paper discusses the development of an assessment to satisfy the International Civil Aviation Organization (ICAO) Language Proficiency Requirements. The Versant Aviation English Test utilizes speech recognition technology and a computerized testing platform, such that test administration and scoring are fully automated. Developed in…

  8. The Role of Phonetics in the Teaching of Foreign Languages in India

    ERIC Educational Resources Information Center

    Bansal, R. K.

    1974-01-01

    Oral work is considered the most effective way of laying the foundations for language proficiency. Recognition and production of vowels and consonants, use of a pronouncing dictionary, and practice in accent rhythm and intonation should all be included in a pronunciation course. (SC)

  9. Modern & Classical Languages: K-12 Program Evaluation 1988-89.

    ERIC Educational Resources Information Center

    Martinez, Margaret Perea

    This evaluation of the modern and classical languages programs, K-12, in the Albuquerque (New Mexico) public school system provides general information on the program's history, philosophy, recognition, curriculum development, teachers, and activities. Specific information is offered on the different program components, namely, the elementary…

  10. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    PubMed

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

    This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLDs.

  11. Language effects in second-language learners: A longitudinal electrophysiological study of Spanish classroom learning.

    PubMed

    Soskey, Laura; Holcomb, Phillip J; Midgley, Katherine J

    2016-09-01

    How do the neural mechanisms involved in word recognition evolve over the course of word learning in adult learners of a new second language? The current study sought to closely track language effects, which are differences in electrophysiological indices of word processing between one's native and second languages, in beginning university learners over the course of a single semester of learning. Monolingual L1 English-speakers enrolled in introductory Spanish were first trained on a list of 228 Spanish words chosen from the vocabulary to be learned in class. Behavioral data from the training session and the following experimental sessions spaced over the course of the semester showed expected learning effects. In the three laboratory sessions participants read words in three lists (English, Spanish and mixed) while performing a go/no-go lexical decision task in which event-related potentials (ERPs) were recorded. As observed in previous studies there were ERP language effects with larger N400s to native than second language words. Importantly, this difference declined over the course of L2 learning, with N400 amplitude increasing for new second language words. These results suggest that, even over a single semester of learning, new second language words are rapidly incorporated into the word recognition system and begin to take on lexical and semantic properties similar to native language words. Moreover, the results suggest that electrophysiological measures can be used as sensitive measures for tracking the acquisition of new linguistic knowledge. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Being Through Doing: The Self-Immolation of an Asylum Seeker in Switzerland

    PubMed Central

    Womersley, Gail; Kloetzer, Laure

    2018-01-01

    In April 2016, Armin, an asylum seeker in a village of Switzerland, set himself alight in the public square of the town, one of a few cases reported across Europe. He performed the act following a denied request for asylum and was saved by bystanders. We present the results of two qualitative interviews conducted with Armin, his translator and his roommate following the incident. The act is theorized through the lens of a dialogical analysis focusing on the concept of social recognition. The notion of trauma is considered as a key mediating mechanism, theorized as creating ruptures in time, memory, language, and social connections to an Other. We conclude this communicative act to represent both “being-toward-death” and a relational striving toward life; a “destruction as the cause of coming into being.” PMID:29686628

  13. Exploring the Relation Between Memory, Gestural Communication, and the Emergence of Language in Infancy: A Longitudinal Study

    PubMed Central

    Heimann, Mikael; Strid, Karin; Smith, Lars; Tjus, Tomas; Ulvund, Stein Erik; Meltzoff, Andrew N.

    2006-01-01

    The relationship between recall memory, visual recognition memory, social communication, and the emergence of language skills was measured in a longitudinal study. Thirty typically developing Swedish children were tested at 6, 9 and 14 months. The result showed that, in combination, visual recognition memory at 6 months, deferred imitation at 9 months and turn-taking skills at 14 months could explain 41% of the variance in the infants’ production of communicative gestures as measured by a Swedish variant of the MacArthur Communicative Development Inventories (CDI). In this statistical model, deferred imitation stood out as the strongest predictor. PMID:16886041

  14. A predictive study of reading comprehension in third-grade Spanish students.

    PubMed

    López-Escribano, Carmen; Elosúa de Juan, María Rosa; Gómez-Veiga, Isabel; García-Madruga, Juan Antonio

    2013-01-01

    The study of the contribution of language and cognitive skills to reading comprehension is an important goal of current reading research. However, reading comprehension is not easily assessed by a single instrument, as different comprehension tests vary in the type of tasks used and in the cognitive demands required. This study examines the contribution of basic language and cognitive skills (decoding, word recognition, reading speed, verbal and nonverbal intelligence and working memory) to reading comprehension, assessed by two tests utilizing various tasks that require different skill sets in third-grade Spanish-speaking students. Linguistic and cognitive abilities predicted reading comprehension. A measure of reading speed (the reading time of pseudo-words) was the best predictor of reading comprehension when assessed by the PROLEC-R test. However, measures of word recognition (the orthographic choice task) and verbal working memory were the best predictors of reading comprehension when assessed by means of the DARC test. These results show, on the one hand, that reading speed and word recognition are better predictors of Spanish language comprehension than reading accuracy. On the other hand, the reading comprehension test applied here serves as a critical variable when analyzing and interpreting results regarding this topic.

  15. Recognition of oral spelling is diagnostic of the central reading processes.

    PubMed

    Schubert, Teresa; McCloskey, Michael

    2015-01-01

    The task of recognition of oral spelling (stimulus: "C-A-T", response: "cat") is often administered to individuals with acquired written language disorders, yet there is no consensus about the underlying cognitive processes. We adjudicate between two existing hypotheses: Recognition of oral spelling uses central reading processes, or recognition of oral spelling uses central spelling processes in reverse. We tested the recognition of oral spelling and spelling to dictation abilities of a single individual with acquired dyslexia and dysgraphia. She was impaired relative to matched controls in spelling to dictation but unimpaired in recognition of oral spelling. Recognition of oral spelling for exception words (e.g., colonel) and pronounceable nonwords (e.g., larth) was intact. Our results were predicted by the hypothesis that recognition of oral spelling involves the central reading processes. We conclude that recognition of oral spelling is a useful tool for probing the integrity of the central reading processes.

  16. Exploring the Neural Representation of Novel Words Learned through Enactment in a Word Recognition Task

    PubMed Central

    Macedonia, Manuela; Mueller, Karsten

    2016-01-01

    Vocabulary learning in a second language is enhanced if learners enrich the learning experience with self-performed iconic gestures. This learning strategy is called enactment. Here we explore how enacted words are functionally represented in the brain and which brain regions contribute to enhance retention. After an enactment training lasting 4 days, participants performed a word recognition task in the functional Magnetic Resonance Imaging (fMRI) scanner. Data analysis suggests the participation of different and partially intertwined networks that are engaged in higher cognitive processes, i.e., enhanced attention and word recognition. Also, an experience-related network seems to map word representation. Besides core language regions, this latter network includes sensory and motor cortices, the basal ganglia, and the cerebellum. On the basis of its complexity and the involvement of the motor system, this sensorimotor network might explain superior retention for enactment. PMID:27445918

  17. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.
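The shared idea behind both techniques, scoring candidate words by how well the speaker's phonetic confusion matrix, combined with a language model, explains the recognized phonemes, can be sketched roughly as follows. The lexicon, confusion probabilities, and unigram prior are toy values invented for illustration, not the authors' models:

```python
import math

# Hedged sketch of confusion-matrix error correction for ASR output:
# pick the vocabulary word maximizing P(word) * prod P(recognized | intended).
# All probabilities and the tiny lexicon are illustrative.

CONFUSION = {  # P(recognized phoneme | intended phoneme)
    "b": {"b": 0.6, "p": 0.4},   # this speaker often produces "p" for "b"
    "p": {"p": 0.7, "b": 0.3},
    "a": {"a": 0.9, "e": 0.1},
    "t": {"t": 0.8, "d": 0.2},
    "d": {"d": 0.9, "t": 0.1},
}
LEXICON = {"bat": ["b", "a", "t"], "pad": ["p", "a", "d"]}
PRIOR = {"bat": 0.5, "pad": 0.5}  # stand-in for the language model

def best_word(recognized):
    """Return the word that best explains the recognized phoneme string."""
    def score(word):
        phones = LEXICON[word]
        if len(phones) != len(recognized):
            return float("-inf")   # real systems also handle length mismatches
        s = math.log(PRIOR[word])
        for intended, rec in zip(phones, recognized):
            s += math.log(CONFUSION[intended].get(rec, 1e-9))
        return s
    return max(LEXICON, key=score)
```

With this confusion matrix, a recognized "p a t" is corrected to "bat": the speaker's known b→p confusion makes "bat" the more plausible intended word.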

  18. Evaluating the Motivational Impact of CALL Systems: Current Practices and Future Directions

    ERIC Educational Resources Information Center

    Bodnar, Stephen; Cucchiarini, Catia; Strik, Helmer; van Hout, Roeland

    2016-01-01

    A major aim of computer-assisted language learning (CALL) is to create computer environments that facilitate students' second language (L2) acquisition. To achieve this aim, CALL employs technological innovations to create novel types of language practice. Evaluations of the new practice types serve the important role of distinguishing effective…

  19. Language Policy, Language Ideology, and Visual Art Education for Emergent Bilingual Students

    ERIC Educational Resources Information Center

    Thomas, Beth A.

    2017-01-01

    In 1968 the Bilingual Education Act marked the first comprehensive federal intervention in the schooling of language minoritized students by creating financial incentives for bilingual education in an effort to address social and educational inequities created by poverty and linguistic isolation in schools. Since that time federal education…

  20. Creating Success for Students with Learning Disabilities in Postsecondary Foreign Language Courses

    ERIC Educational Resources Information Center

    Skinner, Michael E.; Smith, Allison T.

    2011-01-01

    The number of students with learning disabilities (LD) attending postsecondary institutions has increased steadily over the past two decades. Many of these students have language-based learning difficulties that create barriers to success in foreign language (FL) courses. Many institutions have responded by providing these students with exemptions…

  1. Automatic Speech Recognition: Reliability and Pedagogical Implications for Teaching Pronunciation

    ERIC Educational Resources Information Center

    Kim, In-Seok

    2006-01-01

    This study examines the reliability of automatic speech recognition (ASR) software used to teach English pronunciation, focusing on one particular piece of software, "FluSpeak," as a typical example. Thirty-six Korean English as a Foreign Language (EFL) college students participated in an experiment in which they listened to 15 sentences…

  2. A Normed Study of Face Recognition in Autism and Related Disorders.

    ERIC Educational Resources Information Center

    Klin, Ami; Sparrow, Sara S.; de Bildt, Annelies; Cicchetti, Domenic V.; Cohen, Donald J.; Volkmar, Fred R.

    1999-01-01

    This study used a well-normed task of face recognition with 102 young children with autism, pervasive developmental disorder (PDD) not otherwise specified, and non-PDD disorders (mental retardation and language disorders) matched for chronological age and either verbal or nonverbal mental age. Autistic subjects exhibited pronounced deficits in…

  3. Cortical Reorganization in Dyslexic Children after Phonological Training: Evidence from Early Evoked Potentials

    ERIC Educational Resources Information Center

    Spironelli, Chiara; Penolazzi, Barbara; Vio, Claudio; Angrilli, Alessandro

    2010-01-01

    Brain plasticity was investigated in 14 Italian children affected by developmental dyslexia after 6 months of phonological training. The means used to measure language reorganization was the recognition potential, an early wave, also called N150, elicited by automatic word recognition. This component peaks over the left temporo-occipital cortex…

  4. Improving Tone Recognition with Nucleus Modeling and Sequential Learning

    ERIC Educational Resources Information Center

    Wang, Siwei

    2010-01-01

    Mandarin Chinese and many other tonal languages use tones that are defined as specific pitch patterns to distinguish syllables otherwise ambiguous. It had been shown that tones carry at least as much information as vowels in Mandarin Chinese [Surendran et al., 2005]. Surprisingly, though, many speech recognition systems for Mandarin Chinese have…

  5. Novel dynamic Bayesian networks for facial action element recognition and understanding

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
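A rough sketch of the first stage described here, a Gabor filter bank followed by principal component analysis. Kernel sizes, orientations, and parameters are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

# Hedged sketch: convolve an image patch with a small Gabor filter bank,
# then reduce the stacked responses with PCA. Parameters are assumptions.

def gabor_kernel(size, theta, wavelength=4.0, sigma=2.0):
    """Real part of a Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(patch, orientations=4, size=7):
    """Stack 'valid' filter responses over several orientations into one vector."""
    feats = []
    for k in range(orientations):
        kern = gabor_kernel(size, theta=k * np.pi / orientations)
        # sliding-window correlation, avoiding a SciPy dependency
        windows = np.lib.stride_tricks.sliding_window_view(patch, kern.shape)
        feats.append((windows * kern).sum(axis=(-2, -1)).ravel())
    return np.concatenate(feats)

def pca_project(F, n_components=2):
    """Project rows of feature matrix F onto the top principal components."""
    Fc = F - F.mean(axis=0)
    _, _, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:n_components].T
```

The dimensionality reduction matters here because a filter bank over many orientations and positions produces far more response values than there are labeled training faces.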

  6. Literacy in the Workplace: A Whole Language Approach.

    ERIC Educational Resources Information Center

    Carr, Kathryn S.

    The personnel director of a local industry requested reading help from Central Missouri State University for several employees. After several meetings, a workplace literacy program that used the whole language approach supplemented by direct instruction in word recognition skills was developed. Two types of tests were written. One, a vocabulary…

  7. Security Enhancement of Littoral Combat Ship Class Utilizing an Autonomous Mustering and Pier Monitoring System

    DTIC Science & Technology

    2010-03-01

    allows the programmer to use the English language in an expressive manner while still maintaining the logical structure of a programming language (Pressman) …and Choudhury, Tanzeem. 2000. Face Recognition for Smart Environments, IEEE Computer, pp. 50–55. Pressman, Roger. 2010. Software Engineering A

  8. Developmental Differences in Speech Act Recognition: A Pragmatic Awareness Study

    ERIC Educational Resources Information Center

    Garcia, Paula

    2004-01-01

    With the growing acknowledgement of the importance of pragmatic competence in second language (L2) learning, language researchers have identified the comprehension of speech acts as they occur in natural conversation as essential to communicative competence (e.g. Bardovi-Harlig, 2001; Thomas, 1983). Nonconventional indirect speech acts are formed…

  9. Expert Knowledge, Distinctiveness, and Levels of Processing in Language Learning

    ERIC Educational Resources Information Center

    Bird, Steve

    2012-01-01

    The foreign language vocabulary learning research literature often attributes strong mnemonic potency to the cognitive processing of meaning when learning words. Routinely cited as support for this idea are experiments by Craik and Tulving (C&T) demonstrating superior recognition and recall of studied words following semantic tasks ("deep"…

  10. Methodological Note: Analyzing Signs for Recognition & Feature Salience.

    ERIC Educational Resources Information Center

    Shyan, Melissa R.

    1985-01-01

    Presents a method to determine how signs in American Sign Language are recognized by signers. The method uses natural settings and avoids common artificialities found in prior work. A pilot study is described involving language research with Atlantic Bottlenose Dolphins in which the method was successfully used. (SED)

  11. CANTAB object recognition and language tests to detect aging cognitive decline: an exploratory comparative study

    PubMed Central

    Cabral Soares, Fernanda; de Oliveira, Thaís Cristina Galdino; de Macedo, Liliane Dias e Dias; Tomás, Alessandra Mendonça; Picanço-Diniz, Domingos Luiz Wanderley; Bento-Torres, João; Bento-Torres, Natáli Valim Oliver; Picanço-Diniz, Cristovam Wanderley

    2015-01-01

    Objective: The recognition of the limits between normal and pathological aging is essential to start preventive actions. The aim of this paper is to compare the Cambridge Neuropsychological Test Automated Battery (CANTAB) and language tests to distinguish subtle differences in cognitive performance in two different age groups, namely young adults and elderly cognitively normal subjects. Method: We selected 29 young adults (29.9±1.06 years) and 31 older adults (74.1±1.15 years) matched by educational level (years of schooling). All subjects underwent a general assessment and a battery of neuropsychological tests, including the Mini Mental State Examination, visuospatial learning, and memory tasks from CANTAB and language tests. Cluster and discriminant analysis were applied to all neuropsychological test results to distinguish possible subgroups inside each age group. Results: Significant differences in the performance of aged and young adults were detected in both language and visuospatial memory tests. Intragroup cluster and discriminant analysis revealed that CANTAB, as compared to language tests, was able to detect subtle but significant differences between the subjects. Conclusion: Based on these findings, we concluded that, as compared to language tests, large-scale application of automated visuospatial tests to assess learning and memory might increase our ability to discern the limits between normal and pathological aging. PMID:25565785

  12. On the Development of Speech Resources for the Mixtec Language

    PubMed Central

    2013-01-01

    The Mixtec language is one of the main native languages in Mexico. In general, due to urbanization, discrimination, and limited attempts to promote the culture, the native languages are disappearing. Most of the information available about the Mixtec language is in written form as in dictionaries which, although including examples about how to pronounce the Mixtec words, are not as reliable as listening to the correct pronunciation from a native speaker. Formal acoustic resources, such as speech corpora, are almost non-existent for Mixtec, and no speech technologies are known to have been developed for it. This paper presents the development of the following resources for the Mixtec language: (1) a speech database of traditional narratives of the Mixtec culture spoken by a native speaker (labelled at the phonetic and orthographic levels by means of spectral analysis) and (2) a native speaker-adaptive automatic speech recognition (ASR) system (trained with the speech database) integrated with a Mixtec-to-Spanish/Spanish-to-Mixtec text translator. The speech database, although small and limited to a single variant, was reliable enough to build the multiuser speech application, which presented a mean recognition/translation performance of up to 94.36% in experiments with non-native speakers (the target users). PMID:23710134

  13. The role of tone and segmental information in visual-word recognition in Thai.

    PubMed

    Winskel, Heather; Ratitamkul, Theeraporn; Charoensit, Akira

    2017-07-01

    Tone languages represent a large proportion of the spoken languages of the world and yet lexical tone is understudied. Thai offers a unique opportunity to investigate the role of lexical tone processing during visual-word recognition, as tone is explicitly expressed in its script. We used colour words and their orthographic neighbours as stimuli to investigate facilitation (Experiment 1) and interference (Experiment 2) Stroop effects. Five experimental conditions were created: (a) the colour word (e.g., ขาว /kʰã:w/ [white]), (b) tone different word (e.g., ข่าว /kʰà:w/ [news]), (c) initial consonant phonologically same word (e.g., คาว /kʰa:w/ [fishy]), where the initial consonant of the word was phonologically the same but orthographically different, (d) initial consonant different, tone same word (e.g., หาว /hã:w/ [yawn]), where the initial consonant was orthographically different but the tone of the word was the same, and (e) initial consonant different, tone different word (e.g., กาว /ka:w/ [glue]), where the initial consonant was orthographically different, and the tone was different. In order to examine whether tone information per se had a facilitative effect, we also included a colour congruent word condition where the segmental (S) information was different but the tone (T) matched the colour word (S-T+) in Experiment 2. Facilitation/interference effects were found for all five conditions when compared with a neutral control word. Results of the critical comparisons revealed that tone information comes into play at a later stage in lexical processing, and orthographic information contributes more than phonological information.

  14. Composite Artistry Meets Facial Recognition Technology: Exploring the Use of Facial Recognition Technology to Identify Composite Images

    DTIC Science & Technology

    2011-09-01

    be submitted into a facial recognition program for comparison with millions of possible matches, offering abundant opportunities to identify the...to leverage the robust number of comparative opportunities associated with facial recognition programs. This research investigates the efficacy of...combining composite forensic artistry with facial recognition technology to create a viable investigative tool to identify suspects, as well as better

  15. Language Measures of the NIH Toolbox Cognition Battery

    PubMed Central

    Gershon, Richard C.; Cook, Karon F.; Mungas, Dan; Manly, Jennifer J.; Slotkin, Jerry; Beaumont, Jennifer L.; Weintraub, Sandra

    2015-01-01

    Language facilitates communication and efficient encoding of thought and experience. Because of its essential role in early childhood development, in educational achievement and in subsequent life adaptation, language was included as one of the subdomains in the NIH Toolbox for the Assessment of Neurological and Behavioral Function Cognition Battery (NIHTB-CB). There are many different components of language functioning, including syntactic processing (i.e., morphology and grammar) and lexical semantics. For purposes of the NIHTB-CB, two tests of language—a picture vocabulary test and a reading recognition test—were selected by consensus based on literature reviews, iterative expert input, and a desire to assess in English and Spanish. NIHTB-CB’s picture vocabulary and reading recognition tests are administered using computer adaptive testing and scored using item response theory. Data are presented from the validation of the English versions in a sample of adults ages 20–85 years (Spanish results will be presented in a future publication). Both tests demonstrated high test–retest reliability and good construct validity compared to corresponding gold-standard measures. Scores on the NIH Toolbox measures were consistent with age-related expectations, namely, growth in language during early development, with relative stabilization into late adulthood. PMID:24960128
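The scoring machinery mentioned here, computer adaptive testing scored with item response theory, can be sketched with the common two-parameter-logistic (2PL) model: each item has a discrimination a and difficulty b, and the adaptive rule administers the unused item with maximum Fisher information at the current ability estimate. Item names and parameter values below are invented, not NIHTB-CB content:

```python
import math

# Minimal 2PL item-response-theory sketch of computer adaptive testing.
# Item parameters are illustrative assumptions.

def p_correct(theta, a, b):
    """2PL model: probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items, administered):
    """Adaptive rule: choose the unused item most informative at theta."""
    candidates = [i for i in items if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *items[i]))

ITEMS = {  # item -> (discrimination a, difficulty b); invented values
    "cat": (1.2, -2.0),
    "house": (1.0, -0.5),
    "colonel": (1.5, 1.0),
    "perspicacious": (1.3, 2.5),
}
```

Information peaks where difficulty matches ability (p = 0.5), which is why an adaptive vocabulary or reading test quickly converges on words near the examinee's level rather than administering a fixed list.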

  16. Speech and language development in cognitively delayed children with cochlear implants.

    PubMed

    Holt, Rachael Frush; Kirk, Karen Iler

    2005-04-01

    The primary goals of this investigation were to examine the speech and language development of deaf children with cochlear implants and mild cognitive delay and to compare their gains with those of children with cochlear implants who do not have this additional impairment. We retrospectively examined the speech and language development of 69 children with pre-lingual deafness. The experimental group consisted of 19 children with cognitive delays and no other disabilities (mean age at implantation = 38 months). The control group consisted of 50 children who did not have cognitive delays or any other identified disability. The control group was stratified by primary communication mode: half used total communication (mean age at implantation = 32 months) and the other half used oral communication (mean age at implantation = 26 months). Children were tested on a variety of standard speech and language measures and one test of auditory skill development at 6-month intervals. The results from each test were collapsed from blocks of two consecutive 6-month intervals to calculate group mean scores before implantation and at 1-year intervals after implantation. The children with cognitive delays and those without such delays demonstrated significant improvement in their speech and language skills over time on every test administered. Children with cognitive delays had significantly lower scores than typically developing children on two of the three measures of receptive and expressive language and had significantly slower rates of auditory-only sentence recognition development. Finally, there were no significant group differences in auditory skill development based on parental reports or in auditory-only or multimodal word recognition. The results suggest that deaf children with mild cognitive impairments benefit from cochlear implantation. Specifically, improvements are evident in their ability to perceive speech and in their reception and use of language. 
However, this benefit may be reduced relative to that of their typically developing peers with cochlear implants, particularly in domains that require higher-level skills, such as sentence recognition and receptive and expressive language. These findings suggest that children with mild cognitive deficits should be considered for cochlear implantation with less trepidation than has been the case in the past. Although their speech and language gains may be tempered by their cognitive abilities, these limitations do not appear to preclude benefit from cochlear implant stimulation, as assessed by traditional measures of speech and language development.

  17. How Does the Linguistic Distance Between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances During Verbal Memory Examination.

    PubMed

    Taha, Haitham

    2017-06-01

The current research examined how Arabic diglossia affects verbal learning and memory. Thirty native Arab college students were tested using an auditory verbal memory test that was adapted from the Rey Auditory Verbal Learning Test and developed in three versions: a pure spoken language version (SL), a pure standard language version (SA), and a phonologically similar version (PS). The results showed that for immediate free recall, performance was better in the SL and PS conditions than in the SA one. However, for delayed recall and recognition, the results did not reveal any significant consistent effect of diglossia. Accordingly, it was suggested that diglossia has a significant effect on storage and short-term memory functions but not on long-term memory functions. The results are discussed in light of different approaches in the field of bilingual memory.

  18. Orthographic effects in spoken word recognition: Evidence from Chinese.

    PubMed

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  19. Internship Abstract and Final Reflection

    NASA Technical Reports Server (NTRS)

    Sandor, Edward

    2016-01-01

The primary objective of this internship was the evaluation of an embedded natural language processor (NLP) as a way to introduce voice control into future space suits. An embedded natural language processor would provide an astronaut hands-free control for making adjustments to the space suit environment and checking the status of consumables, procedures, and navigation. Additionally, the use of an embedded NLP could potentially reduce crew fatigue, increase the crewmember's situational awareness during extravehicular activity (EVA), and improve the ability to focus on mission-critical details. The use of an embedded NLP may be valuable for other human spaceflight applications desiring hands-free control as well. An embedded NLP is unique because it is a small device that performs language tasks, including speech recognition, which normally require powerful processors. The dedicated device could perform speech recognition locally with a smaller form factor and lower power consumption than traditional methods.

  20. Creating Subjects: The Language of the Stage 6 English Syllabus

    ERIC Educational Resources Information Center

    Anson, Daniel W. J.

    2016-01-01

    This paper investigates the language of the 2009 NSW Stage 6 English Syllabus. I argue that the language of the syllabus aims to create two distinct subjects: Subject English, that is, what students learn; and the subject position of its students, that is, what students are expected to become. Analysis reveals themes of personal development and…

  1. Child Bilingualism in an Immigrant Society: Implications of Borrowing in the Hebrew 'Language of Games.'

    ERIC Educational Resources Information Center

    Bar-Adon, Aaron

The first waves of immigrants arriving in Palestine were faced with the problem of forming a new culture and creating a new language (actually reviving Hebrew, an ancient language). The children were faced with creating their own traditions, games, and folklore; in so doing, through straight borrowing, spontaneous translation (loan translation),…

  2. Creative Strategies for Teaching Language Arts to Gifted Students (K-8). ERIC Digest E612.

    ERIC Educational Resources Information Center

    Smutny, Joan Franklin

This digest paper presents strategies and activities that can be used to encourage gifted students to develop their individual talents in the language arts. Suggestions for exploring poetic language, especially free verse, include ideas for creating group poems and catalysts for creating individual poems. Suggestions for exploring the elements of…

  3. What Really Makes Students like a Web Site? What Are the Implications for Designing Web-Based Language Learning

    ERIC Educational Resources Information Center

    Hughes, Jane; McAvinia, Claire; King, Terry

    2004-01-01

    Faced with reduced numbers choosing to study foreign languages (as in England and Wales), strategies to create and maintain student interest need to be explored. One such strategy is to create "taster" courses in languages, for potential university applicants. The findings presented arise from exploratory research, undertaken to inform…

  4. Second Language Idiom Learning in a Paired-Associate Paradigm: Effects of Direction of Learning, Direction of Testing, Idiom Imageability, and Idiom Transparency

    ERIC Educational Resources Information Center

    Steinel, Margarita P.; Hulstijn, Jan H.; Steinel, Wolfgang

    2007-01-01

    In a paired-associate learning (PAL) task, Dutch university students (n = 129) learned 20 English second language (L2) idioms either receptively or productively (i.e., L2-first language [L1] or L1-L2) and were tested in two directions (i.e., recognition or production) immediately after learning and 3 weeks later. Receptive and productive…

  5. Recognition of Langue des Signes Quebecoise in Eastern Canada

    ERIC Educational Resources Information Center

    Parisot, Anne-Marie; Rinfret, Julie

    2012-01-01

    This article presents a portrait of two community-level and legal efforts in Canada to obtain official recognition of ASL and LSQ (Langue des signes quebecoise), both of which are recognized as official languages by the Canadian Association of the Deaf (CAD). In order to situate this issue in the Canadian linguistic context, the authors first…

  6. Orthographic Neighborhood Effects in Recognition and Recall Tasks in a Transparent Orthography

    ERIC Educational Resources Information Center

    Justi, Francis R. R.; Jaeger, Antonio

    2017-01-01

    The number of orthographic neighbors of a word influences its probability of being retrieved in recognition and free recall memory tests. Even though this phenomenon is well demonstrated for English words, it has yet to be demonstrated for languages with more predictable grapheme-phoneme mappings than English. To address this issue, 4 experiments…

  7. What Type of Vocabulary Knowledge Predicts Reading Comprehension: Word Meaning Recall or Word Meaning Recognition?

    ERIC Educational Resources Information Center

    Laufer, Batia; Aviad-Levitzky, Tami

    2017-01-01

    This study examined how well second language (L2) recall and recognition vocabulary tests correlated with a reading test, how well each vocabulary test discriminated between reading proficiency levels, and how accurate each test was in predicting reading proficiency when compared with corpus studies. A total of 116 college-level learners of…

  8. Recognition of Immaturity and Emotional Expressions in Blended Faces by Children with Autism and Other Developmental Disabilities

    ERIC Educational Resources Information Center

    Gross, Thomas F.

    2008-01-01

    The recognition of facial immaturity and emotional expression by children with autism, language disorders, mental retardation, and non-disabled controls was studied in two experiments. Children identified immaturity and expression in upright and inverted faces. The autism group identified fewer immature faces and expressions than control (Exp. 1 &…

  9. The Effects of Textual Enhancement Type on L2 Form Recognition and Reading Comprehension in Spanish

    ERIC Educational Resources Information Center

    LaBrozzi, Ryan M.

    2016-01-01

    Previous research investigating the effectiveness of textual enhancement as a tool to draw adult second language (L2) learners' attention to the targeted linguistic form has consistently produced mixed results. This article examines how L2 form recognition and reading comprehension are affected by different types of textual enhancement.…

  10. Investigating an Innovative Computer Application to Improve L2 Word Recognition from Speech

    ERIC Educational Resources Information Center

    Matthews, Joshua; O'Toole, John Mitchell

    2015-01-01

    The ability to recognise words from the aural modality is a critical aspect of successful second language (L2) listening comprehension. However, little research has been reported on computer-mediated development of L2 word recognition from speech in L2 learning contexts. This report describes the development of an innovative computer application…

  11. Letter Names: Effect on Letter Saying, Spelling, and Word Recognition in Hebrew.

    ERIC Educational Resources Information Center

    Levin, Iris; Patel, Sigal; Margalit, Tamar; Barad, Noa

    2002-01-01

    Examined whether letter names, which bridge the gap between oral and written language among English speaking children, have a similar function in Hebrew. In findings from studies of Israeli kindergartners and first graders, children were found to rely on letter names in performing a number of letter saying, spelling, and word recognition tasks.…

  12. Foreign Language Analysis and Recognition (FLARe)

    DTIC Science & Technology

    2016-10-08

Character Error Rates (CERs) were obtained with each feature set: (1) 19.2%, (2) 17.3%, and (3) 15.3%. Based on these results, a GMM-HMM speech recognition system...These systems were evaluated on the HUB4 and HKUST test partitions. Table 7 shows the CER obtained on each test set. Whereas including the HKUST data

  13. From Numbers to Letters: Feedback Regularization in Visual Word Recognition

    ERIC Educational Resources Information Center

    Molinaro, Nicola; Dunabeitia, Jon Andoni; Marin-Gutierrez, Alejandro; Carreiras, Manuel

    2010-01-01

    Word reading in alphabetic languages involves letter identification, independently of the format in which these letters are written. This process of letter "regularization" is sensitive to word context, leading to the recognition of a word even when numbers that resemble letters are inserted among other real letters (e.g., M4TERI4L). The present…

  14. ASL Handshape Stories, Word Recognition and Signing Deaf Readers: An Exploratory Study

    ERIC Educational Resources Information Center

    Gietz, Merrilee R.

    2013-01-01

    The effectiveness of using American Sign Language (ASL) handshape stories to teach word recognition in whole stories using a descriptive case study approach was explored. Four profoundly deaf children ages 7 to 8, enrolled in a self-contained deaf education classroom in a public school in the south participated in the story time five-week…

  15. A Suggested Automated Branch Program for Foreign Languages.

    ERIC Educational Resources Information Center

    Barrutia, Richard

    1964-01-01

    Completely automated and operated by student feedback, this program teaches and tests foreign language recognition and retention, gives repeated audiolingual practice on model structures, and allows the student to tailor the program to his individual needs. The program is recorded on four tape tracks (track 1 for the most correct answer, etc.).…

  16. Finding Words in a Language that Allows Words without Vowels

    ERIC Educational Resources Information Center

    El Aissati, Abder; McQueen, James M.; Cutler, Anne

    2012-01-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring "win" in "twin" because "t" cannot be a word). However, the constraint would be counter-productive in…

  17. Genetic and Environmental Overlap between Chinese and English Reading-Related Skills in Chinese Children

    ERIC Educational Resources Information Center

    Wong, Simpson W. L.; Chow, Bonnie Wing-Yin; Ho, Connie Suk-Han; Waye, Mary M. Y.; Bishop, Dorothy V. M.

    2014-01-01

    This twin study examined the relative contributions of genes and environment on 2nd language reading acquisition of Chinese-speaking children learning English. We examined whether specific skills-visual word recognition, receptive vocabulary, phonological awareness, phonological memory, and speech discrimination-in the 1st and 2nd languages have…

  18. Actes des Journees de linguistique (Proceedings of the Linguistics Conference) (9th, 1995).

    ERIC Educational Resources Information Center

    Audette, Julie, Ed.; And Others

    Papers (entirely in French) presented at the conference on linguistics include these topics: language used in the legislature of New Brunswick; cohesion in the text of Arabic-speaking language learners; automatic adverb recognition; logic of machine translation in teaching revision; expansion in physics texts; discourse analysis and the syntax of…

  19. Spoken Grammar Practice and Feedback in an ASR-Based CALL System

    ERIC Educational Resources Information Center

    de Vries, Bart Penning; Cucchiarini, Catia; Bodnar, Stephen; Strik, Helmer; van Hout, Roeland

    2015-01-01

    Speaking practice is important for learners of a second language. Computer assisted language learning (CALL) systems can provide attractive opportunities for speaking practice when combined with automatic speech recognition (ASR) technology. In this paper, we present a CALL system that offers spoken practice of word order, an important aspect of…

  20. The English Language of the Nigeria Police

    ERIC Educational Resources Information Center

    Chinwe, Udo Victoria

    2015-01-01

In present-day Nigeria, the quality of the English language spoken by Nigerians is perceived to have been deteriorating and needs urgent attention. The proliferation of books and articles in recent years can be seen as a natural outgrowth of the attention and recognition the topic has received as a matter of discourse. Evidently, every profession,…

  1. Language Names and Norms in Bosnia and Herzegovina

    ERIC Educational Resources Information Center

    Swagman, Kirstin J.

    2011-01-01

    The institutionalization of separate standard varieties for Bosnian, Croatian, and Serbian in the 1990s was hailed by many Bosnians as the long-denied recognition of the Bosnian idiom as distinct from the Serbian and Croatian varieties it had so often been subordinated under. Yet the accompanying codification of Bosnian standard language forms has…

  2. Complementary Schools in Action: Networking for Language Development in East London

    ERIC Educational Resources Information Center

    Sneddon, Raymonde

    2014-01-01

    In a challenging economic and political context, complementary schools in East London are mentoring each other and forming networks across communities to gain recognition and status for community languages in education and the wider community. As issues of power and status impact in different ways on differently situated communities, complementary…

  3. Reading in EFL: Facts and Fictions.

    ERIC Educational Resources Information Center

    Paran, Amos

    1996-01-01

    Examines the representation of the reading process in English as a Foreign Language (EFL) texts. The article argues that many of these representations are dated and based on a theory that was never a mainstream theory of first-language reading. Suggestions for exercises to strengthen automatic word recognition in EFL readers are provided. (33…

  4. Perceiving and Remembering Events Cross-Linguistically: Evidence from Dual-Task Paradigms

    ERIC Educational Resources Information Center

    Trueswell, John C.; Papafragou, Anna

    2010-01-01

    What role does language play during attention allocation in perceiving and remembering events? We recorded adults' eye movements as they studied animated motion events for a later recognition task. We compared native speakers of two languages that use different means of expressing motion (Greek and English). In Experiment 1, eye movements revealed…

  5. Relationships between Lexical Processing Speed, Language Skills, and Autistic Traits in Children

    ERIC Educational Resources Information Center

    Abrigo, Erin

    2012-01-01

    According to current models of spoken word recognition listeners understand speech as it unfolds over time. Eye tracking provides a non-invasive, on-line method to monitor attention, providing insight into the processing of spoken language. In the current project a spoken lexical processing assessment (LPA) confirmed current theories of spoken…

  6. Games in Language Learning: Opportunities and Challenges

    ERIC Educational Resources Information Center

    Godwin-Jones, Robert

    2014-01-01

    There has been a substantial increase in recent years in the interest in using digital games for language learning. This coincides with the explosive growth in multiplayer online gaming and with the proliferation of mobile games for smart phones. It also reflects the growing recognition among educators of the importance of extramural, informal…

  7. Preserved Visual Language Identification Despite Severe Alexia

    ERIC Educational Resources Information Center

    Di Pietro, Marie; Ptak, Radek; Schnider, Armin

    2012-01-01

    Patients with letter-by-letter alexia may have residual access to lexical or semantic representations of words despite severely impaired overt word recognition (reading). Here, we report a multilingual patient with severe letter-by-letter alexia who rapidly identified the language of written words and sentences in French and English while he had…

  8. Cognate and Word Class Ambiguity Effects in Noun and Verb Processing

    ERIC Educational Resources Information Center

    Bultena, Sybrine; Dijkstra, Ton; van Hell, Janet G.

    2013-01-01

    This study examined how noun and verb processing in bilingual visual word recognition are affected by within and between-language overlap. We investigated how word class ambiguous noun and verb cognates are processed by bilinguals, to see if co-activation of overlapping word forms between languages benefits from additional overlap within a…

  9. Theory of mind and emotion recognition skills in children with specific language impairment, autism spectrum disorder and typical development: group differences and connection to knowledge of grammatical morphology, word-finding abilities and verbal working memory.

    PubMed

    Loukusa, Soile; Mäkinen, Leena; Kuusikko-Gauffin, Sanna; Ebeling, Hanna; Moilanen, Irma

    2014-01-01

Social perception skills, such as understanding the mind and emotions of others, affect children's communication abilities in real-life situations. In addition to autism spectrum disorder (ASD), there is increasing evidence that children with specific language impairment (SLI) also demonstrate difficulties in their social perception abilities. To compare the performance of children with SLI, ASD and typical development (TD) in social perception tasks measuring Theory of Mind (ToM) and emotion recognition. In addition, to evaluate the association between social perception tasks and language tests measuring word-finding abilities, knowledge of grammatical morphology and verbal working memory. Children with SLI (n = 18), ASD (n = 14) and TD (n = 25) completed two NEPSY-II subtests measuring social perception abilities: (1) Affect Recognition and (2) ToM (includes Verbal and non-verbal Contextual tasks). In addition, children's word-finding abilities were measured with the TWF-2, grammatical morphology by using the Grammatical Closure subtest of the ITPA, and verbal working memory by using the Sentence Repetition or Word List Interference subtest (chosen according to the child's age) of the NEPSY-II. Children with ASD scored significantly lower than children with SLI or TD on the NEPSY-II Affect Recognition subtest. Both SLI and ASD groups scored significantly lower than TD children on Verbal tasks of the ToM subtest of the NEPSY-II. However, there were no significant group differences on non-verbal Contextual tasks of the ToM subtest of the NEPSY-II. Verbal tasks of the ToM subtest were correlated with the Grammatical Closure subtest and the TWF-2 in children with SLI. In children with ASD, the correlation between the TWF-2 and ToM Verbal tasks was moderate, almost achieving statistical significance, but no other correlations were found. Both SLI and ASD groups showed difficulties in tasks measuring verbal ToM, but differences were not found in tasks measuring non-verbal Contextual ToM. 
The association between Verbal ToM tasks and language tests was stronger in children with SLI than in children with ASD. There is a need for further studies in order to understand the interaction between different areas of language and cognitive development. © 2014 Royal College of Speech and Language Therapists.

  10. Language-based communication strategies that support person-centered communication with persons with dementia.

    PubMed

    Savundranayagam, Marie Y; Moore-Nielsen, Kelsey

    2015-10-01

There are many recommended language-based strategies for effective communication with persons with dementia. What is unknown is whether effective language-based strategies are also person centered. Accordingly, the objective of this study was to examine whether language-based strategies for effective communication with persons with dementia overlapped with the following indicators of person-centered communication: recognition, negotiation, facilitation, and validation. Conversations (N = 46) between staff-resident dyads were audio-recorded during routine care tasks over 12 weeks. Staff utterances were coded twice, using language-based and person-centered categories. There were 21 language-based categories and 4 person-centered categories. There were 5,800 utterances transcribed: 2,409 without indicators, 1,699 coded as language-based or person-centered, and 1,692 overlapping utterances. For recognition, 26% of utterances were greetings, 21% were affirmations, 13% were questions (yes/no and open-ended), and 15% involved rephrasing. Questions (yes/no, choice, and open-ended) comprised 74% of utterances coded as negotiation. A similar pattern was observed for facilitation, where 51% of utterances were yes/no, open-ended, and choice questions; 21% were affirmations; and 13% involved rephrasing. Finally, 89% of utterances coded as validation were affirmations. The findings identify specific language-based strategies that support person-centered communication. However, between 1 and 4 of a possible 21 language-based strategies overlapped with at least 10% of utterances coded as each person-centered indicator. This finding suggests that staff need training to use more diverse language strategies that support the personhood of residents with dementia.

  11. Impaired recognition of scary music following unilateral temporal lobe excision.

    PubMed

    Gosselin, Nathalie; Peretz, Isabelle; Noulhiane, Marion; Hasboun, Dominique; Beckett, Christine; Baulac, Michel; Samson, Séverine

    2005-03-01

    Music constitutes an ideal means to create a sense of suspense in films. However, there has been minimal investigation into the underlying cerebral organization for perceiving danger created by music. In comparison, the amygdala's role in recognition of fear in non-musical contexts has been well established. The present study sought to fill this gap in exploring how patients with amygdala resection recognize emotional expression in music. To this aim, we tested 16 patients with left (LTR; n = 8) or right (RTR; n = 8) medial temporal resection (including amygdala) for the relief of medically intractable seizures and 16 matched controls in an emotion recognition task involving instrumental music. The musical selections were purposely created to induce fear, peacefulness, happiness and sadness. Participants were asked to rate to what extent each musical passage expressed these four emotions on 10-point scales. In order to check for the presence of a perceptual problem, the same musical selections were presented to the participants in an error detection task. None of the patients was found to perform below controls in the perceptual task. In contrast, both LTR and RTR patients were found to be impaired in the recognition of scary music. Recognition of happy and sad music was normal. These findings suggest that the anteromedial temporal lobe (including the amygdala) plays a role in the recognition of danger in a musical context.

  12. HWDA: A coherence recognition and resolution algorithm for hybrid web data aggregation

    NASA Astrophysics Data System (ADS)

    Guo, Shuhang; Wang, Jian; Wang, Tong

    2017-09-01

To address the problem of object conflict recognition and resolution in hybrid distributed data stream aggregation, an object coherence technology for distributed data streams is proposed. First, a framework for object coherence conflict recognition and resolution, named HWDA, is defined. Second, an object coherence recognition technique is proposed based on formal description logic and hierarchical dependency relationships between logic rules. Third, a conflict traversal recognition algorithm is proposed based on the defined dependency graph. Next, a conflict resolution technique based on resolution pattern matching is presented, including definitions of the three types of conflict, conflict resolution matching patterns, and an arbitration resolution method. Finally, experiments on two kinds of web test data sets validate the effectiveness of HWDA's conflict recognition and resolution technology.

  13. Phoneme Error Pattern by Heritage Speakers of Spanish on an English Word Recognition Test.

    PubMed

    Shi, Lu-Feng

    2017-04-01

    Heritage speakers acquire their native language from home use in their early childhood. As the native language is typically a minority language in the society, these individuals receive their formal education in the majority language and eventually develop greater competency with the majority than their native language. To date, there have not been specific research attempts to understand word recognition by heritage speakers. It is not clear if and to what degree we may infer from evidence based on bilingual listeners in general. This preliminary study investigated how heritage speakers of Spanish perform on an English word recognition test and analyzed their phoneme errors. A prospective, cross-sectional, observational design was employed. Twelve normal-hearing adult Spanish heritage speakers (four men, eight women, 20-38 yr old) participated in the study. Their language background was obtained through the Language Experience and Proficiency Questionnaire. Nine English monolingual listeners (three men, six women, 20-41 yr old) were also included for comparison purposes. Listeners were presented with 200 Northwestern University Auditory Test No. 6 words in quiet. They repeated each word orally and in writing. Their responses were scored by word, word-initial consonant, vowel, and word-final consonant. Performance was compared between groups with Student's t test or analysis of variance. Group-specific error patterns were primarily descriptive, but intergroup comparisons were made using 95% or 99% confidence intervals for proportional data. The two groups of listeners yielded comparable scores when their responses were examined by word, vowel, and final consonant. However, heritage speakers of Spanish misidentified significantly more word-initial consonants and had significantly more difficulty with initial /p, b, h/ than their monolingual peers. 
The two groups yielded similar patterns for vowel and word-final consonants, but heritage speakers made significantly fewer errors with /e/ and more errors with word-final /p, k/. Data reported in the present study lead to a twofold conclusion. On the one hand, normal-hearing heritage speakers of Spanish may misidentify English phonemes in patterns different from those of English monolingual listeners. Not all phoneme errors can be readily understood by comparing Spanish and English phonology, suggesting that Spanish heritage speakers differ in performance from other Spanish-English bilingual listeners. On the other hand, the absolute number of errors and the error pattern of most phonemes were comparable between English monolingual listeners and Spanish heritage speakers, suggesting that audiologists may assess word recognition in quiet in the same way for these two groups of listeners, if diagnosis is based on words, not phonemes. American Academy of Audiology
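    The intergroup comparisons above used 95% or 99% confidence intervals for proportional data. Below is a minimal sketch of a normal-approximation (Wald) interval for an error proportion; the study does not specify its exact CI formula, so this is only an illustrative stand-in, and the error counts are hypothetical:

```python
import math

def proportion_ci(errors, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion.
    z = 1.96 gives a 95% interval; z = 2.576 gives a 99% interval."""
    p = errors / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical example: 30 word-initial consonant errors out of 200 items.
lo, hi = proportion_ci(30, 200)   # ≈ (0.10, 0.20)
```

    Two groups would then be judged to differ on a phoneme category when their intervals do not overlap.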

  14. Relative Weighting of Semantic and Syntactic Cues in Native and Non-Native Listeners' Recognition of English Sentences.

    PubMed

    Shi, Lu-Feng; Koenig, Laura L

    2016-01-01

    Non-native listeners do not recognize English sentences as effectively as native listeners, especially in noise. It is not entirely clear to what extent such group differences arise from differences in the relative weighting of semantic versus syntactic cues. This study quantified the use and weighting of these contextual cues via Boothroyd and Nittrouer's j and k factors. The j factor relates whole-sentence recognition probability to individual-word recognition probability (the effective number of independent parts in a sentence), whereas the k factor quantifies the degree to which context improves recognition performance. Four groups of 13 normal-hearing young adult listeners participated. One group consisted of native English monolingual (EMN) listeners, whereas the other three consisted of non-native listeners contrasting in their language dominance and first language: English-dominant Russian-English, Russian-dominant Russian-English, and Spanish-dominant Spanish-English bilinguals. All listeners were presented with three sets of four-word sentences: high-predictability sentences included both semantic and syntactic cues, low-predictability sentences included syntactic cues only, and zero-predictability sentences included neither semantic nor syntactic cues. Sentences were presented at 65 dB SPL binaurally in the presence of speech-spectrum noise at +3 dB SNR. Listeners orally repeated each sentence, and recognition was calculated for individual words as well as for the sentence as a whole. Comparable j values across groups for high-predictability, low-predictability, and zero-predictability sentences suggested that all listeners, native and non-native, utilized contextual cues to recognize English sentences. Analysis of the k factor indicated that non-native listeners took advantage of syntax as effectively as EMN listeners. However, only English-dominant bilinguals utilized semantics to the same extent as EMN listeners; semantics did not provide a significant benefit for the two non-English-dominant groups.
When combined, semantics and syntax benefitted EMN listeners significantly more than all three non-native groups of listeners. Language background influenced the use and weighting of semantic and syntactic cues in a complex manner. A native language advantage existed in the effective use of both cues combined. A language-dominance effect was seen in the use of semantics. No first-language effect was present for the use of either or both cues. For all non-native listeners, syntax contributed significantly more to sentence recognition than semantics, possibly due to the fact that semantics develops more gradually than syntax in second-language acquisition. The present study provides evidence that Boothroyd and Nittrouer's j and k factors can be successfully used to quantify the effectiveness of contextual cue use in clinically relevant, linguistically diverse populations.
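
    The j and k factors above are commonly stated (following Boothroyd and Nittrouer, 1988) as exponents: whole-sentence probability p_s = p_w^j for word probability p_w, and with-context error (1 - p_c) = (1 - p_i)^k for no-context score p_i. A sketch with invented scores:

```python
import math

def j_factor(p_sentence, p_word):
    """Effective number of independent parts: p_sentence = p_word ** j."""
    return math.log(p_sentence) / math.log(p_word)

def k_factor(p_with_context, p_without_context):
    """Context benefit: (1 - p_with) = (1 - p_without) ** k."""
    return math.log(1 - p_with_context) / math.log(1 - p_without_context)

# Invented scores: 70% of words correct, 50% of whole sentences correct;
# 80% of words correct with context vs. 70% without
j = j_factor(0.50, 0.70)   # about 1.94 effective independent words
k = k_factor(0.80, 0.70)   # about 1.34, i.e. context provides a benefit
```

    A j near the number of words in a sentence means the words behave independently (little context use); k > 1 indicates that context improves recognition.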

  15. Action and Emotion Recognition from Point Light Displays: An Investigation of Gender Differences

    PubMed Central

    Alaerts, Kaat; Nackaerts, Evelien; Meyns, Pieter; Swinnen, Stephan P.; Wenderoth, Nicole

    2011-01-01

    Folk psychology advocates the existence of gender differences in socio-cognitive functions such as ‘reading’ the mental states of others or discerning subtle differences in body-language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks, involving the recognition of distinct features from point light displays (PLDs) depicting bodily movements of a male and female actor. Although recognition scores were considerably high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly higher for males compared to females on PLD recognition tasks involving (i) the general recognition of ‘biological motion’ versus ‘non-biological’ (or ‘scrambled’ motion); or (ii) the recognition of the ‘emotional state’ of the PLD-figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) and for recognizing the gender of the PLD-figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the ‘Reading the Mind in the Eyes Test’ (Baron-Cohen, 2001)) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs versus facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions are relatively generalized across facial and bodily emotion perception. 
    Moreover, the tight correlation between a subject's ability to discern subtle emotional cues from PLDs and the subject's ability to discriminate biological from non-biological motion indicates that differences in emotion recognition may, at least to some degree, be related to more basic differences in processing biological motion per se. PMID:21695266

  17. "Dieu a cree la femelle, l'homme a fait la femme." En rekognoscering i dansk og undenlandsk konssprogsforskning ("God Created the Female, Man Created Woman." A Reconnaissance in Danish and Foreign Research on Sex Differences and Language). ROLIG papir 32.

    ERIC Educational Resources Information Center

    Bennicke, Annette

    Research on sex differences and language includes the following (many titles are English translations): "Language--The Child, the Women, the Family"; "Woman and Man"; "In Society's Words"; "The Life of Words"; "Verbs and Women"; "Lines from a Ladies Luncheon"; "The History of the Danish Language"; "How Sex Roles Are Represented and Conserved in…

  18. A Multidimensional Approach to the Study of Emotion Recognition in Autism Spectrum Disorders

    PubMed Central

    Xavier, Jean; Vignaud, Violaine; Ruggiero, Rosa; Bodeau, Nicolas; Cohen, David; Chaby, Laurence

    2015-01-01

    Although deficits in emotion recognition have been widely reported in autism spectrum disorder (ASD), experiments have been restricted to either facial or vocal expressions. Here, we explored multimodal emotion processing in children with ASD (N = 19) and with typical development (TD, N = 19), considering unimodal (faces and voices separately) and multimodal (faces and voices simultaneously) stimuli and developmental comorbidities (neuro-visual, language and motor impairments). Compared to TD controls, children with ASD had rather high and heterogeneous emotion recognition scores but also showed several significant differences: lower emotion recognition scores for visual stimuli and for the neutral emotion, and a greater number of saccades during the visual task. Multivariate analyses showed that: (1) the difficulties they experienced with visual stimuli were partially alleviated with multimodal stimuli. (2) Developmental age was significantly associated with emotion recognition in TD children, whereas this was the case only for the multimodal task in children with ASD. (3) Language impairments tended to be associated with the emotion recognition scores of children with ASD in the auditory modality. Conversely, in the visual or bimodal (visuo-auditory) tasks, no impact of developmental coordination disorder or neuro-visual impairments was found. We conclude that impaired emotion processing constitutes a dimension to explore in the field of ASD, as research has the potential to define more homogeneous subgroups and tailored interventions. However, it is clear that developmental age, the nature of the stimuli, and other developmental comorbidities must also be taken into account when studying this dimension. PMID:26733928

  19. Automatic translation among spoken languages

    NASA Technical Reports Server (NTRS)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  20. Concepts and Objectives for Learning the Structure of English in Grades 7, 8, and 9. Report from the Project on Individually Guided Instruction in English Language, Composition, and Literature.

    ERIC Educational Resources Information Center

    Pooley, Robert C.; Golub, Lester S.

    Emphasizing the behavioral and social aspects of language as a foundation for instruction, 16 concepts for learning the structure of English in grades 7-9 are outlined in an attempt to set down in logical order the basic concepts involved in the understanding of the English language. The concepts begin with a recognition of the social purposes of…

  1. Happy faces, sad faces: Emotion understanding in toddlers and preschoolers with language impairments.

    PubMed

    Rieffe, Carolien; Wiefferink, Carin H

    2017-03-01

    The capacity for emotion recognition and understanding is crucial for daily social functioning. We examined to what extent this capacity is impaired in young children with a Language Impairment (LI). In typical development, children learn to recognize emotions in faces and situations through social experiences and social learning. Children with LI have less access to these experiences and are therefore expected to fall behind their peers without LI. In this study, 89 preschool children with LI and 202 children without LI (mean age 3 years and 10 months in both groups) were tested on three indices for facial emotion recognition (discrimination, identification, and attribution in emotion evoking situations). Parents reported on their children's emotion vocabulary and ability to talk about their own emotions. Preschoolers with and without LI performed similarly on the non-verbal task for emotion discrimination. Children with LI fell behind their peers without LI on both other tasks for emotion recognition that involved labelling the four basic emotions (happy, sad, angry, fear). The outcomes of these two tasks were also related to children's level of emotion language. These outcomes emphasize the importance of 'emotion talk' at the youngest age possible for children with LI. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Early Sign Language Experience Goes Along with an Increased Cross-modal Gain for Affective Prosodic Recognition in Congenitally Deaf CI Users.

    PubMed

    Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte

    2018-04-01

    It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history of sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and rely more on the facial cues of audio-visual emotional stimuli. Two groups of young adult CD CI users, early signers (ES CI users; n = 11) and late signers (LS CI users; n = 10), and a group of hearing, non-signing, age-matched controls (n = 12) performed an emotion recognition task with auditory, visual, and cross-modal emotionally congruent and incongruent speech stimuli. On different trials, participants categorized either the facial or the vocal expressions. The ES CI users more accurately recognized affective prosody than the LS CI users in the presence of congruent facial information. Furthermore, the ES CI users, but not the LS CI users, gained more than the controls from congruent visual stimuli when recognizing affective prosody. Both CI groups performed overall worse than the controls in recognizing affective prosody. These results suggest that early sign language experience affects multisensory emotion perception in CD CI users.

  3. Pancreatic Exocrine Tumors

    MedlinePlus

  4. Northeast Artificial Intelligence Consortium annual report. Volume 2. 1988. Discussing, using, and recognizing plans (NLP). Interim report, January-December 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, S.C.; Woolf, B.

    The Northeast Artificial Intelligence Consortium (NAIC) was created by the Air Force Systems Command, Rome Air Development Center, and the Office of Scientific Research. Its purpose is to conduct pertinent research in artificial intelligence and to perform activities ancillary to this research. This report describes progress that has been made in the fourth year of the existence of the NAIC on the technical research tasks undertaken at the member universities. The topics covered in general are: versatile expert system for equipment maintenance, distributed AI for communications system control, automatic photointerpretation, time-oriented problem solving, speech understanding systems, knowledge base maintenance, hardware architectures for very large systems, knowledge-based reasoning and planning, and a knowledge acquisition, assistance, and explanation system. The specific topic for this volume is the recognition of plans expressed in natural language, followed by their discussion and use.

  5. Training Joint, Interagency, Intergovernmental, and Multinational (JIIM) Participants for Stability Operations

    DTIC Science & Technology

    2012-09-01

    boxes) using a third-party commercial software component. When creating version 1, it was necessary to enter raw Hypertext Markup Language (HTML) tags...Markup Language (HTML) web page. Figure 12. Authors create procedures using the Procedure Editor. Users run procedures using the...step presents instructions to the user using formatted text and graphics specified using the Hypertext Markup Language (HTML). Instructions can

  6. Background feature descriptor for offline handwritten numeral recognition

    NASA Astrophysics Data System (ADS)

    Ming, Delie; Wang, Hao; Tian, Tian; Jie, Feiran; Lei, Bo

    2011-11-01

    This paper puts forward an offline handwritten numeral recognition method based on a background structural descriptor (a sixteen-value numerical background expression). By encoding the background pixels of the image according to a fixed rule, 16 distinct values are generated; these reflect the background configuration around each digit and hence the structural features of the digits. Describing images with these features in a pattern language enables automatic segmentation of overlapping digits and numeral recognition. The method is characterized by strong resistance to deformation, high recognition speed, and easy implementation. Finally, experimental results on datasets from various practical application fields are presented, showing that the method achieves a good recognition effect.
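
    The abstract does not give the exact encoding rule. One plausible reading of the "sixteen-value numerical background expression", assumed here purely for illustration, assigns each background pixel a 4-bit code (16 possible values) recording whether foreground ink lies in each of the four axis directions:

```python
import numpy as np

def background_codes(img):
    """For each background pixel (0), build a 4-bit code whose bits mark
    whether any foreground pixel (1) lies above, below, left of, or right
    of it; foreground pixels are marked -1. This is a guess at the
    'sixteen-value' background descriptor, not the paper's exact rule."""
    h, w = img.shape
    codes = np.full((h, w), -1, dtype=int)
    for y in range(h):
        for x in range(w):
            if img[y, x]:
                continue
            code = 0
            if img[:y, x].any():
                code |= 1   # ink above
            if img[y + 1:, x].any():
                code |= 2   # ink below
            if img[y, :x].any():
                code |= 4   # ink to the left
            if img[y, x + 1:].any():
                code |= 8   # ink to the right
            codes[y, x] = code
    return codes

digit = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 1],
                  [0, 1, 1, 0]])
codes = background_codes(digit)
hist = np.bincount(codes[digit == 0], minlength=16)  # 16-bin structural feature
```

    A histogram of these codes yields a fixed-length feature vector per digit image, which is consistent with the claimed speed and deformation tolerance.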

  7. Comparative implementation of Handwritten and Machine written Gurmukhi text utilizing appropriate parameters

    NASA Astrophysics Data System (ADS)

    Kaur, Jaswinder; Jagdev, Gagandeep, Dr.

    2018-01-01

    Optical character recognition is concerned with the recognition of optically processed characters. The recognition is done offline after the writing or printing has been completed, unlike online recognition, where the computer has to recognize the characters instantly as they are drawn. The performance of character recognition depends upon the quality of the scanned documents. The preprocessing steps are used for removing low-frequency background noise and normalizing the intensity of individual scanned documents. Several filters are used for reducing certain image details and enabling an easier or faster evaluation. The primary aim of the research work is to recognize handwritten and machine-written characters and differentiate them. The language opted for in the research work is Punjabi Gurmukhi, and the tool utilized is Matlab.

  8. Liberated Learning: Analysis of University Students' Perceptions and Experiences with Continuous Automated Speech Recognition

    ERIC Educational Resources Information Center

    Ryba, Ken; McIvor, Tom; Shakir, Maha; Paez, Di

    2006-01-01

    This study examined continuous automated speech recognition in the university lecture theatre. The participants were both native speakers of English (L1) and English as a second language students (L2) enrolled in an information systems course (Total N=160). After an initial training period, an L2 lecturer in information systems delivered three…

  9. On the Suitability of Mobile Cloud Computing at the Tactical Edge

    DTIC Science & Technology

    2014-04-23

    geolocation; Facial recognition (photo identification/classification); Intelligence, Surveillance, and Reconnaissance (ISR); and Fusion of Electronic...could benefit most from MCC are those with large processing overhead, low bandwidth requirements, and a need for large database support (e.g., facial recognition, language translation). The effect—specifically on the communication links—of supporting these applications at the tactical edge

  10. A novel probabilistic framework for event-based speech recognition

    NASA Astrophysics Data System (ADS)

    Juneja, Amit; Espy-Wilson, Carol

    2003-10-01

    One of the reasons for the unsatisfactory performance of state-of-the-art automatic speech recognition (ASR) systems is the inferior acoustic modeling of low-level acoustic-phonetic information in the speech signal. An acoustic-phonetic approach to ASR, on the other hand, explicitly targets linguistic information in the speech signal, but such a system for continuous speech recognition (CSR) is not known to exist. A probabilistic and statistical framework for CSR is proposed, based on the representation of speech sounds by bundles of binary-valued articulatory phonetic features. Multiple probabilistic sequences of linguistically motivated landmarks are obtained using binary classifiers of manner phonetic features (syllabic, sonorant, and continuant) and the knowledge-based acoustic parameters (APs) that are acoustic correlates of those features. The landmarks are then used for the extraction of knowledge-based APs for source and place phonetic features and their binary classification. Probabilistic landmark sequences are constrained using manner-class language models for isolated or connected word recognition. The proposed method could overcome the disadvantages encountered by the early acoustic-phonetic knowledge-based systems that led the ASR community to switch to systems highly dependent on statistical pattern analysis methods and probabilistic language or grammar models.
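
    The landmark step can be sketched minimally: frame-level posteriors from one binary manner-feature classifier (e.g., sonorant) yield landmarks where the binary decision flips. This is a toy stand-in for the probabilistic landmark sequences described above; the posteriors and threshold are invented:

```python
def landmarks(posteriors, threshold=0.5):
    """Frame indices where the binary manner-feature decision changes,
    i.e. candidate acoustic landmarks along the utterance."""
    decisions = [p >= threshold for p in posteriors]
    return [i for i in range(1, len(decisions))
            if decisions[i] != decisions[i - 1]]

# A sonorant region flanked by obstruent frames: landmarks at its edges
marks = landmarks([0.1, 0.2, 0.9, 0.95, 0.8, 0.2, 0.1])  # -> [2, 5]
```

    In the full framework such sequences would remain probabilistic and be rescored against manner-class language models rather than hard-thresholded.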

  11. Mandarin Chinese Tone Identification in Cochlear Implants: Predictions from Acoustic Models

    PubMed Central

    Morton, Kenneth D.; Torrione, Peter A.; Throckmorton, Chandra S.; Collins, Leslie M.

    2015-01-01

    It has been established that current cochlear implants do not supply adequate spectral information for perception of tonal languages. Comprehension of a tonal language, such as Mandarin Chinese, requires recognition of lexical tones. New strategies of cochlear stimulation such as variable stimulation rate and current steering may provide the means of delivering more spectral information and thus may provide the auditory fine structure required for tone recognition. Several cochlear implant signal processing strategies are examined in this study, the continuous interleaved sampling (CIS) algorithm, the frequency amplitude modulation encoding (FAME) algorithm, and the multiple carrier frequency algorithm (MCFA). These strategies provide different types and amounts of spectral information. Pattern recognition techniques can be applied to data from Mandarin Chinese tone recognition tasks using acoustic models as a means of testing the abilities of these algorithms to transmit the changes in fundamental frequency indicative of the four lexical tones. The ability of processed Mandarin Chinese tones to be correctly classified may predict trends in the effectiveness of different signal processing algorithms in cochlear implants. The proposed techniques can predict trends in performance of the signal processing techniques in quiet conditions but fail to do so in noise. PMID:18706497
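
    The four Mandarin lexical tones are distinguished chiefly by their fundamental-frequency (F0) contours: Tone 1 high level, Tone 2 rising, Tone 3 dipping, Tone 4 falling. A toy contour-shape classifier (not any of the cited algorithms; thresholds and example contours are invented) illustrates the kind of pattern recognition applied to the processed tokens:

```python
def classify_tone(f0):
    """Classify a Mandarin tone from an F0 contour (Hz per frame) by its
    gross shape: dip -> Tone 3, rise -> Tone 2, fall -> Tone 4, level -> Tone 1."""
    start, low, end = f0[0], min(f0), f0[-1]
    if start - low > 10 and end - low > 10:
        return 3                 # dips in the middle
    slope = end - start
    if slope > 10:
        return 2                 # rising
    if slope < -10:
        return 4                 # falling
    return 1                     # high level

tones = [classify_tone(c) for c in ([220, 221, 220, 222],   # level
                                    [180, 190, 210, 230],   # rising
                                    [200, 170, 160, 185],   # dipping
                                    [250, 230, 200, 180])]  # falling
```

    How well such contour cues survive CIS-, FAME-, or MCFA-processed stimuli is exactly what the acoustic-model classification experiments probe.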

  12. CWSRF 2017 PISCES Recognition Program Compendium

    EPA Pesticide Factsheets

    The Clean Water State Revolving Fund’s Performance and Innovation in the SRF Creating Environmental Success (PISCES) program allows assistance recipients to gain national recognition for exceptional projects funded by the CWSRF.

  13. Pancreatic Neuroendocrine Tumors (PNETs)

    MedlinePlus

  14. Quantity Recognition among Speakers of an Anumeric Language

    ERIC Educational Resources Information Center

    Everett, Caleb; Madora, Keren

    2012-01-01

    Recent research has suggested that the Piraha, an Amazonian tribe with a number-less language, are able to match quantities greater than 3 if the matching task does not require recall or spatial transposition. This finding contravenes previous work among the Piraha. In this study, we re-tested the Pirahas' performance in the crucial one-to-one…

  15. The Role of Teachers' Classroom Discipline in Their Teaching Effectiveness and Students' Language Learning Motivation and Achievement: A Path Method

    ERIC Educational Resources Information Center

    Rahimi, Mehrak; Karkami, Fatemeh Hosseini

    2015-01-01

    This study investigated the role of EFL teachers' classroom discipline strategies in their teaching effectiveness and their students' motivation and achievement in learning English as a foreign language. 1408 junior high-school students expressed their perceptions of the strategies their English teachers used (punishment, recognition/reward,…

  16. A Classroom Teacher's Guide to Reading Improvement in Middle School Language Arts. Revised Edition. Resource Monograph No. 18.

    ERIC Educational Resources Information Center

    Guttinger, Hellen I., Ed.

    The reading improvement activities in this handbook are intended for use by middle school language arts teachers. Focusing on study skills, vocabulary development, and comprehension development, the activities include (1) surveying literary materials, (2) outlining, (3) spelling, (4) syllabication, (5) word recognition, (6) using synonyms, (7)…

  17. EFL Learners' Production of Questions over Time: Linguistic, Usage-Based, and Developmental Features

    ERIC Educational Resources Information Center

    Nekrasova-Beker, Tatiana M.

    2011-01-01

    The recognition of second language (L2) development as a dynamic process has led to different claims about how language development unfolds, what represents a learner's linguistic system (i.e., interlanguage) at a certain point in time, and how that system changes over time (Verspoor, de Bot, & Lowie, 2011). Responding to de Bot and…

  18. Categories for Names or Names of Categories? The Interplay Between Domain-Specific Conceptual Structure and Language.

    ERIC Educational Resources Information Center

    Diesendruck, Gil

    2003-01-01

    Drawing on the notion of the domain-specificity of recognition, reviews evidence on the effect of language in classification of and reasoning about categories from different domains. Looks at anthropological infant classification, and preschool categorization literature. Suggests the causal nature and indicative power of animal categories seem to…

  19. Reading Comprehension in Autism Spectrum Disorders: The Role of Oral Language and Social Functioning

    ERIC Educational Resources Information Center

    Ricketts, Jessie; Jones, Catherine R. G.; Happe, Francesca; Charman, Tony

    2013-01-01

    Reading comprehension is an area of difficulty for many individuals with autism spectrum disorders (ASD). According to the Simple View of Reading, word recognition and oral language are both important determinants of reading comprehension ability. We provide a novel test of this model in 100 adolescents with ASD of varying intellectual ability.…

  20. Federal Recognition of the Rights of Minority Language Groups.

    ERIC Educational Resources Information Center

    Leibowitz, Arnold H.

    Federal laws, policies, and court decisions pertaining to the civil rights of minority language groups are reviewed, with an emphasis on political, legal, economic, and educational access. Areas in which progress has been made and those in which access is still limited are identified. It is argued that a continuing federal role is necessary to…

  1. Making a Case for Exact Language as an Aspect of Rigour in Initial Teacher Education Mathematics Programmes

    ERIC Educational Resources Information Center

    van Jaarsveld, Pieter

    2016-01-01

    Pre-service secondary mathematics teachers have a poor command of the exact language of mathematics as evidenced in assignments, micro-lessons and practicums. The unrelenting notorious annual South African National Senior Certificate outcomes in mathematics and the recognition by the Department of Basic Education (DBE) that the correct use of…

  2. Predictive Coding Accelerates Word Recognition and Learning in the Early Stages of Language Development

    ERIC Educational Resources Information Center

    Ylinen, Sari; Bosseler, Alexis; Junttila, Katja; Huotilainen, Minna

    2017-01-01

    The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month olds' electrophysiological brain responses to heard…

  3. Neural Correlates of Morphological Decomposition in a Morphologically Rich Language: An fMRI Study

    ERIC Educational Resources Information Center

    Lehtonen, Minna; Vorobyev, Victor A.; Hugdahl, Kenneth; Tuokkola, Terhi; Laine, Matti

    2006-01-01

    By employing visual lexical decision and functional MRI, we studied the neural correlates of morphological decomposition in a highly inflected language (Finnish) where most inflected noun forms elicit a consistent processing cost during word recognition. This behavioral effect could reflect suffix stripping at the visual word form level and/or…

  4. Costs and Effects of Dual-Language Immersion in the Portland Public Schools

    ERIC Educational Resources Information Center

    Steele, Jennifer L.; Slater, Robert; Li, Jennifer; Zamarro, Gema; Miller, Trey

    2015-01-01

    Though it is estimated that about half of the world's population is bilingual, the estimate for the United States is well below 20% (Grosjean, 2010). Amid growing recognition of the need for second language skills to facilitate international commerce and national security and to enhance learning opportunities for non-native speakers of English,…

  5. Sign Language Recognition and Translation: A Multidisciplined Approach from the Field of Artificial Intelligence

    ERIC Educational Resources Information Center

    Parton, Becky Sue

    2006-01-01

    In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based…

  6. Checklist for Early Recognition and Treatment of Acute Illness (CERTAIN): evolution of a content management system for point-of-care clinical decision support.

    PubMed

    Barwise, Amelia; Garcia-Arguello, Lisbeth; Dong, Yue; Hulyalkar, Manasi; Vukoja, Marija; Schultz, Marcus J; Adhikari, Neill K J; Bonneton, Benjamin; Kilickaya, Oguz; Kashyap, Rahul; Gajic, Ognjen; Schmickl, Christopher N

    2016-10-03

    The Checklist for Early Recognition and Treatment of Acute Illness (CERTAIN) is an international collaborative project with the overall objective of standardizing the approach to the evaluation and treatment of critically ill patients world-wide, in accordance with best-practice principles. One of CERTAIN's key features is clinical decision support providing point-of-care information about common acute illness syndromes, procedures, and medications in an index card format. This paper describes 1) the process of developing and validating the content for point-of-care decision support, and 2) the content management system that facilitates frequent peer review and allows rapid updates of content across different platforms (CERTAIN software, mobile apps, pdf-booklet) and different languages. Content was created based on survey results of acute care providers and validated using an open peer-review process. Over a 3-year period, CERTAIN content expanded to include 67 syndrome cards, 30 procedure cards, and 117 medication cards; 127 (59%) cards have been peer-reviewed so far. Initially, MS Word® and Dropbox® were used to create, store, and share content for peer review. Recently, Google Docs® was used to make the peer-review process more efficient. However, neither of these approaches met our security requirements nor had the capacity to instantly update the different CERTAIN platforms. Although we were able to successfully develop and validate a large inventory of clinical decision support cards in a short period of time, commercially available software solutions for content management are suboptimal. Novel custom solutions are necessary for efficient global point-of-care content management.

  7. Natural Language Processing (NLP), Machine Learning (ML), and Semantics in Polar Science

    NASA Astrophysics Data System (ADS)

    Duerr, R.; Ramdeen, S.

    2017-12-01

    One of the interesting features of Polar Science is that it historically has been extremely interdisciplinary, encompassing all of the physical and social sciences. Given the ubiquity of specialized terminology in each field, enabling researchers to find, understand, and use all of the heterogeneous data needed for polar research continues to be a bottleneck. Within the informatics community, semantics has been broadly accepted as a solution to these problems, yet progress in developing reusable semantic resources has been slow. The NSF-funded ClearEarth project has been adapting the methods and tools from other communities, such as biomedicine, to the Earth sciences with the goal of enhancing progress and the rate at which the needed semantic resources can be created. One of the outcomes of the project has been a better understanding of the differences in the way linguists and physical scientists understand disciplinary text. One example of these differences is the tendency for each discipline, and often disciplinary subfields, to expend effort in creating discipline-specific glossaries where individual terms often comprise more than one word (e.g., first-year sea ice). Often each term in a glossary is imbued with substantial contextual or physical meaning; meanings which are rarely explicitly called out within disciplinary texts, which are therefore not immediately accessible to those outside that discipline or subfield, and which can often be represented semantically. Here we show how recognition of these differences and the use of glossaries can be used to speed up the annotation processes endemic to NLP and enable inter-community recognition and possible reconciliation of terminology differences. A number of processes and tools will be described, as will progress towards semi-automated generation of ontology structures.
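
    The glossary-driven annotation idea can be made concrete with a toy example. The sketch below is not ClearEarth code; it is a minimal, hypothetical greedy longest-match tagger showing how multi-word glossary terms (e.g., "first-year sea ice") can be pre-annotated in text to speed up NLP annotation:

```python
import re

def tag_glossary_terms(text, glossary):
    """Greedily mark the longest glossary term starting at each token position."""
    tokens = re.findall(r"[\w-]+", text.lower())
    max_len = max(len(term.split()) for term in glossary)
    annotations, i = [], 0
    while i < len(tokens):
        match = None
        # Try the longest candidate span first so multi-word terms win.
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            candidate = " ".join(tokens[i:i + n])
            if candidate in glossary:
                match = candidate
                break
        if match:
            annotations.append(match)
            i += len(match.split())
        else:
            i += 1
    return annotations

# Hypothetical mini-glossary of sea-ice terms.
glossary = {"first-year sea ice", "sea ice", "ice shelf"}
print(tag_glossary_terms("Thick first-year sea ice met the ice shelf.", glossary))
# → ['first-year sea ice', 'ice shelf']
```

    Note that longest-match resolution prefers "first-year sea ice" over the shorter embedded term "sea ice", which is exactly the disambiguation that glossary structure provides.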

  8. Unconscious improvement in foreign language learning using mismatch negativity neurofeedback: A preliminary study.

    PubMed

    Chang, Ming; Iizuka, Hiroyuki; Kashioka, Hideki; Naruse, Yasushi; Furukawa, Masahiro; Ando, Hideyuki; Maeda, Taro

    2017-01-01

    When people learn foreign languages, they find it difficult to perceive speech sounds that are nonexistent in their native language, and extensive training is consequently necessary. Our previous studies have shown that by using neurofeedback based on the mismatch negativity event-related brain potential, participants could unconsciously achieve learning in the auditory discrimination of pure tones that could not be consciously discriminated without the neurofeedback. Here, we examined whether mismatch negativity neurofeedback is effective for helping someone to perceive new speech sounds in foreign language learning. We developed a task for training native Japanese speakers to discriminate between 'l' and 'r' sounds in English, as they usually cannot discriminate between these two sounds. Without participants attending to auditory stimuli or being aware of the nature of the experiment, neurofeedback training helped them to achieve significant improvement in unconscious auditory discrimination and recognition of the target words 'light' and 'right'. There was also improvement in the recognition of other words containing 'l' and 'r' (e.g., 'blight' and 'bright'), even though these words had not been presented during training. This method could be used to facilitate foreign language learning and can be extended to other fields of auditory and clinical research and even other senses.

  9. Unconscious improvement in foreign language learning using mismatch negativity neurofeedback: A preliminary study

    PubMed Central

    Iizuka, Hiroyuki; Kashioka, Hideki; Naruse, Yasushi; Furukawa, Masahiro; Ando, Hideyuki; Maeda, Taro

    2017-01-01

    When people learn foreign languages, they find it difficult to perceive speech sounds that are nonexistent in their native language, and extensive training is consequently necessary. Our previous studies have shown that by using neurofeedback based on the mismatch negativity event-related brain potential, participants could unconsciously achieve learning in the auditory discrimination of pure tones that could not be consciously discriminated without the neurofeedback. Here, we examined whether mismatch negativity neurofeedback is effective for helping someone to perceive new speech sounds in foreign language learning. We developed a task for training native Japanese speakers to discriminate between ‘l’ and ‘r’ sounds in English, as they usually cannot discriminate between these two sounds. Without participants attending to auditory stimuli or being aware of the nature of the experiment, neurofeedback training helped them to achieve significant improvement in unconscious auditory discrimination and recognition of the target words ‘light’ and ‘right’. There was also improvement in the recognition of other words containing ‘l’ and ‘r’ (e.g., ‘blight’ and ‘bright’), even though these words had not been presented during training. This method could be used to facilitate foreign language learning and can be extended to other fields of auditory and clinical research and even other senses. PMID:28617861

  10. Application of a model of the auditory primal sketch to cross-linguistic differences in speech rhythm: Implications for the acquisition and recognition of speech

    NASA Astrophysics Data System (ADS)

    Todd, Neil P. M.; Lee, Christopher S.

    2002-05-01

    It has long been noted that the world's languages vary considerably in their rhythmic organization. Different languages seem to privilege different phonological units as their basic rhythmic unit, and there is now a large body of evidence that such differences have important consequences for crucial aspects of language acquisition and processing. The most fundamental finding is that the rhythmic structure of a language strongly influences the process of spoken-word recognition. This finding, together with evidence that infants are sensitive from birth to rhythmic differences between languages and exploit rhythmic cues to segmentation at an earlier developmental stage than other cues, prompted the claim that rhythm is the key which allows infants to begin building a lexicon and then go on to acquire syntax. It is therefore of interest to determine how differences in rhythmic organization arise at the acoustic/auditory level. In this paper, it is shown how an auditory model of the primitive representation of sound provides just such an account of rhythmic differences. Its performance is evaluated on a data set of French and English sentences and compared with the results yielded by the phonetic accounts of Franck Ramus and his colleagues and Esther Grabe and her colleagues.

  11. Arabic sign language recognition based on HOG descriptor

    NASA Astrophysics Data System (ADS)

    Ben Jmaa, Ahmed; Mahdi, Walid; Ben Jemaa, Yousra; Ben Hamadou, Abdelmajid

    2017-02-01

    We present in this paper a new approach for Arabic sign language (ArSL) alphabet recognition using hand gesture analysis. This analysis consists in extracting histogram of oriented gradients (HOG) features from a hand image and using them to train an SVM model, which is then used to recognize the ArSL alphabet in real time from hand gestures captured with a Microsoft Kinect camera. Our approach involves three steps: (i) hand detection and localization using a Microsoft Kinect camera, (ii) hand segmentation, and (iii) feature extraction and Arabic alphabet recognition. On each input image obtained from the depth sensor, we apply our method based on hand anatomy to segment the hand and eliminate erroneous pixels. This approach is invariant to scale, rotation, and translation of the hand. Experimental results show the effectiveness of our new approach: the proposed ArSL system is able to recognize the ArSL alphabet with an accuracy of 90.12%.
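
    To illustrate the feature-extraction step, here is a minimal pure-Python sketch of a single HOG cell (an orientation histogram of gradient magnitudes). It is illustrative only and not the authors' implementation, which pairs full multi-cell HOG descriptors with a trained SVM classifier:

```python
import math

def hog_descriptor(img, bins=9):
    """Orientation histogram of gradients over a 2D grayscale image (one HOG cell)."""
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang / (180.0 / bins)) % bins] += mag
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]   # L2-normalized descriptor

# A vertical edge: gradients point horizontally, so all energy lands in the 0° bin.
img = [[0, 0, 10, 10]] * 4
desc = hog_descriptor(img)
print(desc[0])  # → 1.0
```

    In a full pipeline, descriptors like this would be computed over many cells, concatenated, and fed to an SVM for classification.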

  12. Hierarchical singleton-type recurrent neural fuzzy networks for noisy speech recognition.

    PubMed

    Juang, Chia-Feng; Chiou, Chyi-Tian; Lai, Chun-Lung

    2007-05-01

    This paper proposes noisy speech recognition using hierarchical singleton-type recurrent neural fuzzy networks (HSRNFNs). The proposed HSRNFN is a hierarchical connection of two singleton-type recurrent neural fuzzy networks (SRNFNs), where one is used for noise filtering and the other for recognition. The SRNFN is constructed from recurrent fuzzy if-then rules with fuzzy singletons in the consequences, and its recurrent properties make it suitable for processing speech patterns with temporal characteristics. For n-word recognition, n SRNFNs are created to model the n words, where each SRNFN receives the current frame feature and predicts the next frame of the word it models. The prediction error of each SRNFN is used as the recognition criterion. In filtering, one SRNFN is created, and each SRNFN recognizer is connected to the same SRNFN filter, which filters noisy speech patterns in the feature domain before feeding them to the SRNFN recognizer. Experiments with Mandarin word recognition under different types of noise are performed. Other recognizers, including multilayer perceptrons (MLPs), time-delay neural networks (TDNNs), and hidden Markov models (HMMs), are also tested and compared. These experiments and comparisons demonstrate good results with HSRNFNs for noisy speech recognition tasks.
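
    The prediction-error recognition criterion can be sketched in a few lines. The toy below is hypothetical (scalar features and hand-written predictors stand in for trained SRNFNs): the word whose model predicts the upcoming frames with the smallest cumulative error wins:

```python
def recognize(frames, models):
    """Pick the word whose model best predicts each next frame (minimum total error)."""
    errors = {}
    for word, predict in models.items():
        err = 0.0
        for t in range(len(frames) - 1):
            err += (predict(frames[t]) - frames[t + 1]) ** 2
        errors[word] = err
    return min(errors, key=errors.get)

# Toy one-dimensional "feature" sequences: word A rises by 1 per frame, word B falls by 1.
models = {"A": lambda f: f + 1.0, "B": lambda f: f - 1.0}
print(recognize([0.0, 1.0, 2.0, 3.0], models))  # → A
```

    In the paper's setting each predictor is itself a recurrent fuzzy network operating on multidimensional frame features, but the decision rule is the same argmin over accumulated prediction error.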

  13. Children creating language: how Nicaraguan sign language acquired a spatial grammar.

    PubMed

    Senghas, A; Coppola, M

    2001-07-01

    It has long been postulated that language is not purely learned, but arises from an interaction between environmental exposure and innate abilities. The innate component becomes more evident in rare situations in which the environment is markedly impoverished. The present study investigated the language production of a generation of deaf Nicaraguans who had not been exposed to a developed language. We examined the changing use of early linguistic structures (specifically, spatial modulations) in a sign language that has emerged since the Nicaraguan group first came together: in under two decades, sequential cohorts of learners systematized the grammar of this new sign language. We examined whether the systematicity being added to the language stems from children or adults; our results indicate that such changes originate in children aged 10 and younger. Thus, sequential cohorts of interacting young children collectively possess the capacity not only to learn, but also to create, language.

  14. Storytelling, behavior planning, and language evolution in context.

    PubMed

    McBride, Glen

    2014-01-01

    An attempt is made to specify the structure of the hominin bands that began the steps toward language. Storytelling could evolve without the need for language, yet be strongly subject to natural selection, and could provide a major feedback process in evolving language. A storytelling model is examined, including its effects on the evolution of consciousness and the possible timing of language evolution. Behavior planning is presented as a model of language evolution from storytelling. The behavior programming mechanism, in both directions, provides a model of creating and understanding behavior and language. Culture began with societies, then family evolution and family life in troops, but storytelling created a culture of experiences, a final step in the long process of achieving experienced adults by natural selection. Most language evolution occurred in conversations, where evolving non-verbal feedback ensured mutual agreement on understanding. Natural language evolved in conversations with feedback providing understanding of changes.

  15. Storytelling, behavior planning, and language evolution in context

    PubMed Central

    McBride, Glen

    2014-01-01

    An attempt is made to specify the structure of the hominin bands that began the steps toward language. Storytelling could evolve without the need for language, yet be strongly subject to natural selection, and could provide a major feedback process in evolving language. A storytelling model is examined, including its effects on the evolution of consciousness and the possible timing of language evolution. Behavior planning is presented as a model of language evolution from storytelling. The behavior programming mechanism, in both directions, provides a model of creating and understanding behavior and language. Culture began with societies, then family evolution and family life in troops, but storytelling created a culture of experiences, a final step in the long process of achieving experienced adults by natural selection. Most language evolution occurred in conversations, where evolving non-verbal feedback ensured mutual agreement on understanding. Natural language evolved in conversations with feedback providing understanding of changes. PMID:25360123

  16. Optical character recognition of camera-captured images based on phase features

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images. For this reason many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts, and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadow, and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains a great deal of important information independently of the Fourier magnitude. So, in this work we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.

  17. Tracking speech comprehension in space and time.

    PubMed

    Pulvermüller, Friedemann; Shtyrov, Yury; Ilmoniemi, Risto J; Marslen-Wilson, William D

    2006-07-01

    A fundamental challenge for the cognitive neuroscience of language is to capture the spatio-temporal patterns of brain activity that underlie critical functional components of the language comprehension process. We combine here psycholinguistic analysis, whole-head magnetoencephalography (MEG), the Mismatch Negativity (MMN) paradigm, and state-of-the-art source localization techniques (Equivalent Current Dipole and L1 Minimum-Norm Current Estimates) to locate the process of spoken word recognition at a specific moment in space and time. The magnetic MMN to words presented as rare "deviant stimuli" in an oddball paradigm among repetitive "standard" speech stimuli peaked 100-150 ms after the information in the acoustic input was sufficient for word recognition. The latency with which words were recognized corresponded to that of an MMN source in the left superior temporal cortex. There was a significant correlation (r = 0.7) of latency measures of word recognition in individual study participants with the latency of the activity peak of the superior temporal source. These results demonstrate a correspondence between the behaviorally determined recognition point for spoken words and the cortical activation in left posterior superior temporal areas. Both the MMN calculated in the classic manner, obtained by subtracting the standard from the deviant stimulus response recorded in the same experiment, and the identity MMN (iMMN), defined as the difference between the neuromagnetic responses to the same stimulus presented as standard and as deviant, showed the same significant correlation with word recognition processes.

  18. Programmable molecular recognition based on the geometry of DNA nanostructures.

    PubMed

    Woo, Sungwook; Rothemund, Paul W K

    2011-07-10

    From ligand-receptor binding to DNA hybridization, molecular recognition plays a central role in biology. Over the past several decades, chemists have successfully reproduced the exquisite specificity of biomolecular interactions. However, engineering multiple specific interactions in synthetic systems remains difficult. DNA retains its position as the best medium with which to create orthogonal, isoenergetic interactions, based on the complementarity of Watson-Crick binding. Here we show that DNA can be used to create diverse bonds using an entirely different principle: the geometric arrangement of blunt-end stacking interactions. We show that both binary codes and shape complementarity can serve as a basis for such stacking bonds, and explore their specificity, thermodynamics and binding rules. Orthogonal stacking bonds were used to connect five distinct DNA origami. This work, which demonstrates how a single attractive interaction can be developed to create diverse bonds, may guide strategies for molecular recognition in systems beyond DNA nanostructures.

  19. We Call It "Our Language": A Children's Swahili Pidgin Transforms Social and Symbolic Order on a Remote Hillside in Up-Country Kenya

    ERIC Educational Resources Information Center

    Gilmore, Perry

    2011-01-01

    This study describes a rare Swahili pidgin created by two five-year-old boys, one American and one African. The discussion examines the linguistic and social factors affecting the "origins, maintenance, change and loss" (Hymes 1971) of their language and the place it created for their friendship. This place, constructed by and through language,…

  20. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  1. Semantic Ambiguity Effects in L2 Word Recognition.

    PubMed

    Ishida, Tomomi

    2018-06-01

    The present study examined the ambiguity effects in second language (L2) word recognition. Previous studies on first language (L1) lexical processing have observed that ambiguous words are recognized faster and more accurately than unambiguous words on lexical decision tasks. In this research, L1 and L2 speakers of English were asked whether a letter string on a computer screen was an English word or not. An ambiguity advantage was found for both groups and greater ambiguity effects were found for the non-native speaker group when compared to the native speaker group. The findings imply that the larger ambiguity advantage for L2 processing is due to their slower response time in producing adequate feedback activation from the semantic level to the orthographic level.

  2. Speech and language delay in two children: an unusual presentation of hyperthyroidism.

    PubMed

    Sohal, Aman P S; Dasarathi, Madhuri; Lodh, Rajib; Cheetham, Tim; Devlin, Anita M

    2013-01-01

    Hyperthyroidism is rare in pre-school children. Untreated, it can have a profound effect on normal growth and development, particularly in the first 2 years of life. Although neurological manifestations of dysthyroid states are well known, specific expressive speech and language disorder as a presentation of hyperthyroidism is rarely documented. Case reports of two children with hyperthyroidism presenting with speech and language delay. We report two pre-school children with hyperthyroidism, who presented with expressive speech and language delay, and demonstrated a significant improvement in their language skills following treatment with anti-thyroid medication. Hyperthyroidism must be considered in all children presenting with speech and language difficulties, particularly expressive speech delay. Prompt recognition and early treatment are likely to improve outcome.

  3. Does input influence uptake? Links between maternal talk, processing speed and vocabulary size in Spanish-learning children

    PubMed Central

    Hurtado, Nereyda; Marchman, Virginia A.; Fernald, Anne

    2010-01-01

    It is well established that variation in caregivers' speech is associated with language outcomes, yet little is known about the learning principles that mediate these effects. This longitudinal study (n = 27) explores whether Spanish-learning children's early experiences with language predict efficiency in real-time comprehension and vocabulary learning. Measures of mothers' speech at 18 months were examined in relation to children's speech processing efficiency and reported vocabulary at 18 and 24 months. Children of mothers who provided more input at 18 months knew more words and were faster in word recognition at 24 months. Moreover, multiple regression analyses indicated that the influences of caregiver speech on speed of word recognition and vocabulary were largely overlapping. This study provides the first evidence that input shapes children's lexical processing efficiency and that vocabulary growth and increasing facility in spoken word comprehension work together to support the uptake of the information that rich input affords the young language learner. PMID:19046145

  4. Structuring Broadcast Audio for Information Access

    NASA Astrophysics Data System (ADS)

    Gauvain, Jean-Luc; Lamel, Lori

    2003-12-01

    One rapidly expanding application area for state-of-the-art speech recognition technology is the automatic processing of broadcast audiovisual data for information access. Since much of the linguistic information is found in the audio channel, speech recognition is a key enabling technology which, when combined with information retrieval techniques, can be used for searching large audiovisual document collections. Audio indexing must take into account the specificities of audio data such as needing to deal with the continuous data stream and an imperfect word transcription. Other important considerations are dealing with language specificities and facilitating language portability. At Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI), broadcast news transcription systems have been developed for seven languages: English, French, German, Mandarin, Portuguese, Spanish, and Arabic. The transcription systems have been integrated into prototype demonstrators for several application areas such as audio data mining, structuring audiovisual archives, selective dissemination of information, and topic tracking for media monitoring. As examples, this paper addresses the spoken document retrieval and topic tracking tasks.

  5. When does Iconicity in Sign Language Matter?

    PubMed Central

    Baus, Cristina; Carreiras, Manuel; Emmorey, Karen

    2012-01-01

    We examined whether iconicity in American Sign Language (ASL) enhances translation performance for new learners and proficient signers. Fifteen hearing nonsigners and 15 proficient ASL-English bilinguals performed a translation recognition task and a production translation task. Nonsigners were taught 28 ASL verbs (14 iconic; 14 non-iconic) prior to performing these tasks. Only new learners benefited from sign iconicity, recognizing iconic translations faster and more accurately and exhibiting faster forward (English-ASL) and backward (ASL-English) translation times for iconic signs. In contrast, proficient ASL-English bilinguals exhibited slower recognition and translation times for iconic signs. We suggest iconicity aids memorization in the early stages of adult sign language learning, but for fluent L2 signers, iconicity interacts with other variables that slow translation (specifically, the iconic signs had more translation equivalents than the non-iconic signs). Iconicity may also have slowed translation performance by forcing conceptual mediation for iconic signs, which is slower than translating via direct lexical links. PMID:23543899

  6. Recognizing Chinese characters in digital ink from non-native language writers using hierarchical models

    NASA Astrophysics Data System (ADS)

    Bai, Hao; Zhang, Xi-wen

    2017-06-01

    When Chinese is learned as a second language, its characters are taught step by step, from strokes to components and radicals, together with their complex relations. Chinese characters in digital ink from non-native writers are seriously deformed, so global recognition approaches perform poorly. A progressive bottom-up approach is therefore presented based on hierarchical models. Hierarchical information includes strokes and hierarchical components, and each Chinese character is modeled as a hierarchical tree. Strokes in a Chinese character in digital ink are classified with Hidden Markov Models and concatenated into a stroke symbol sequence. The structure of components in the ink character is then extracted. According to the extraction result and the stroke symbol sequence, candidate characters are traversed and scored. Finally, the recognition candidates are listed in descending order of score. The method is validated on 19,815 handwritten Chinese characters written by foreign students.
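
    The candidate traversal and scoring step can be illustrated with a simple stroke-sequence distance. The sketch below is hypothetical (the paper scores candidates against hierarchical trees, not plain edit distance), but it shows the idea of ranking lexicon characters by how well their canonical stroke symbol sequence matches the recognized one:

```python
def edit_distance(a, b):
    """Levenshtein distance between two stroke-symbol sequences."""
    prev = list(range(len(b) + 1))
    for i, sa in enumerate(a, 1):
        cur = [i]
        for j, sb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (sa != sb)))  # substitution
        prev = cur
    return prev[-1]

def rank_candidates(observed, lexicon):
    """Order candidate characters by stroke-sequence distance, best first."""
    return sorted(lexicon, key=lambda ch: edit_distance(observed, lexicon[ch]))

# Hypothetical stroke symbols: H = horizontal, V = vertical.
lexicon = {"十": ["H", "V"], "王": ["H", "H", "V", "H"]}
print(rank_candidates(["H", "V"], lexicon))  # "十" ranks first
```

    A deformed input that garbles one stroke still ranks the intended character highly, which is why sequence-level scoring is more forgiving than exact global matching.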

  7. Impact of Article Language in Multi-Language Medical Journals - a Bibliometric Analysis of Self-Citations and Impact Factor

    PubMed Central

    Diekhoff, Torsten; Schlattmann, Peter; Dewey, Marc

    2013-01-01

    Background In times of globalization there is an increasing use of English in the medical literature. The aim of this study was to analyze the influence of English-language articles in multi-language medical journals on their international recognition, as measured by a lower rate of self-citations and a higher impact factor (IF). Methods and Findings We analyzed publications in multi-language journals in 2008 and 2009 using the Web of Science (WoS) of Thomson Reuters (formerly the Institute for Scientific Information) and PubMed as sources of information. The proportion of English-language articles during the period was compared with both the share of self-citations in the year 2010 and the IF with and without self-citations. Multivariable linear regression analysis was performed to analyze these factors as well as the influence of the journals' countries of origin and of the other language(s) used in publications besides English. We identified 168 multi-language journals that were listed in WoS as well as in PubMed and met our criteria. We found a significant positive correlation of the share of English articles in 2008 and 2009 with the IF calculated without self-citations (Pearson r = 0.56, p < 0.0001), a correlation with the overall IF (Pearson r = 0.47, p < 0.0001) and with the citations in the years used for IF calculation (Pearson r = 0.34, p < 0.0001), and a weak negative correlation with the share of self-citations (Pearson r = -0.2, p = 0.009). The IF without self-citations also correlated with the journal's country of origin: North American journals had a higher IF compared to Middle and South American or European journals. Conclusion Our findings suggest that a larger share of English articles in multi-language medical journals is associated with greater international recognition. Fewer self-citations were found in multi-language journals with a greater share of original articles in English. PMID:24146929
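
    The reported correlations are standard Pearson product-moment coefficients, which can be computed directly. The data below are made up purely for illustration; only the formula mirrors the analysis:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: share of English articles vs. IF without self-citations.
english_share = [0.2, 0.5, 0.7, 0.9]
impact_factor = [1.0, 1.8, 2.1, 3.0]
print(round(pearson_r(english_share, impact_factor), 2))  # → 0.98
```

    A coefficient near +1 indicates the strong positive association the study reports between the English-article share and the impact factor.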

  8. Computational Modeling of Emotions and Affect in Social-Cultural Interaction

    DTIC Science & Technology

    2013-10-02

    ...acoustic and textual information sources. Second, a cross-lingual study was performed that shed light on how human perception and automatic recognition ... speech is produced, a speaker's pitch and intonational pattern, and word usage. Better feature representation and advanced approaches were used to ... recognition performance, and improved our understanding of language/cultural impact on human perception of emotion and automatic classification.

  9. The Effects of Linguistic Context on Word Recognition in Noise by Elderly Listeners Using Spanish Sentence Lists (SSL)

    ERIC Educational Resources Information Center

    Cervera, Teresa; Rosell, Vicente

    2015-01-01

    This study evaluated the effects of the linguistic context on the recognition of words in noise in older listeners using the Spanish Sentence Lists. These sentences were developed based on the approach of the SPIN test for the English language, which contains high and low predictability (HP and LP) sentences. In addition, the relative contribution…

  10. Foreign Language Analysis and Recognition (FLARE) Progress

    DTIC Science & Technology

    2015-02-01

    Report AFRL-RH-WP-TR-2015-0007. Subject terms: automatic speech recognition (ASR), information retrieval (IR). ...to the Haystack Multilingual Multimedia Information Extraction and Retrieval (MMIER) system that was initially developed under a prior work unit.

  11. Effects of An Integrated Format for Reading Instruction on the Comprehension and Word-Recognition Performance of Fourth- and Fifth-Grade Students Who Exhibit Severe Reading Problems.

    ERIC Educational Resources Information Center

    Parmer, Lavada Jacumin; Thames, Dana G.; Kazelskis, Richard

    A study examined the effectiveness of an integrated language arts instructional format for teaching reading compared with the effectiveness of the typical traditional reading program. The study investigated the effectiveness of approaches that are representative of both viewpoints of the reading process (i.e., word recognition and the construction…

  12. The Development and Evaluation of a Teleprocessed Computer-Assisted Instruction Course in the Recognition of Malarial Parasites. Final Report; May 1, 1967 - June 30, 1968.

    ERIC Educational Resources Information Center

    Mitzel, Harold E.

    A computer-assisted instruction course in the recognition of malarial parasites was developed and evaluated. The course includes stage discrimination, species discrimination, and case histories. Segments developed use COURSEWRITER as an author language and are presented via a display terminal that permits two-way communication with an IBM computer…

  13. Effects of Three Forms of Reading-Based Output Activity on L2 Vocabulary Learning

    ERIC Educational Resources Information Center

    Rassaei, Ehsan

    2017-01-01

    The current study investigated the effects of three forms of output activity on EFL learners' recognition and recall of second language (L2) vocabulary. To this end, three groups of learners of English as a foreign language (EFL) were instructed to employ the following three output activities after reading two narrative texts: (1) summarizing the…

  14. Beyond the "Cultural Turn": The Politics of Recognition versus the Politics of Redistribution in the Field of Intercultural Communication

    ERIC Educational Resources Information Center

    Zotzmann, Karin; Hernández-Zamora, Gregorio

    2013-01-01

    Since the 1980s the field of language teaching and learning has emphasised the interplay between language, culture and identity and promotes both communicative and intercultural competencies. This mirrors a general trend in the social sciences after the so-called "cultural turn" which brought about a concentration on culture, identity…

  15. ELF on a Mushroom: The Overnight Growth in English as a Lingua Franca

    ERIC Educational Resources Information Center

    Sowden, Colin

    2012-01-01

    In an effort to curtail native-speaker dominance of global English, and in recognition of the growing role of the language among non-native speakers from different first-language backgrounds, some academics have been urging the teaching of English as a Lingua Franca (ELF). Although at first this proposal seems to offer a plausible alternative to…

  16. Social Media and Language Processing: How Facebook and Twitter Provide the Best Frequency Estimates for Studying Word Recognition

    ERIC Educational Resources Information Center

    Herdagdelen, Amaç; Marelli, Marco

    2017-01-01

    Corpus-based word frequencies are one of the most important predictors in language processing tasks. Frequencies based on conversational corpora (such as movie subtitles) are shown to better capture the variance in lexical decision tasks compared to traditional corpora. In this study, we show that frequencies computed from social media are…
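The frequency norms discussed in this record are, at their core, token counts converted to a standard scale. A minimal sketch of computing a Zipf-style value (log10 frequency per billion tokens) from a toy corpus; the add-one smoothing here is an illustrative choice, not necessarily the authors' exact formula:

```python
from collections import Counter
import math

def zipf_values(counts, n_tokens):
    """Zipf-style scale: log10 of frequency per billion tokens.
    Add-one smoothing keeps unseen/rare words finite (illustrative choice)."""
    return {w: math.log10((c + 1) / (n_tokens + 2) * 1e9)
            for w, c in counts.items()}

# toy "social media" corpus standing in for tweets or status updates
posts = ["the cat sat", "the dog ran", "the cat ran fast"]
tokens = [t for post in posts for t in post.split()]
counts = Counter(tokens)
zipf = zipf_values(counts, len(tokens))
print(counts["the"], round(zipf["the"], 2))  # 3 8.52
```

In practice the corpus would be millions of posts, but the scaling step is the same.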

  17. Assessing Children's Home Language Environments Using Automatic Speech Recognition Technology

    ERIC Educational Resources Information Center

    Greenwood, Charles R.; Thiemann-Bourque, Kathy; Walker, Dale; Buzhardt, Jay; Gilkerson, Jill

    2011-01-01

The purpose of this research was to replicate and extend some of the findings of Hart and Risley using automatic speech processing instead of human transcription of language samples. The long-term goal of this work is to make this approach to speech processing accessible to researchers and clinicians working on a daily basis with families and…

  18. Language Processing in Children with Cochlear Implants: A Preliminary Report on Lexical Access for Production and Comprehension

    ERIC Educational Resources Information Center

    Schwartz, Richard G.; Steinman, Susan; Ying, Elizabeth; Mystal, Elana Ying; Houston, Derek M.

    2013-01-01

    In this plenary paper, we present a review of language research in children with cochlear implants along with an outline of a 5-year project designed to examine the lexical access for production and recognition. The project will use auditory priming, picture naming with auditory or visual interfering stimuli (Picture-Word Interference and…

  19. Beyond Language: Strategies for Promoting Academic Excellence among Immigrant Students. The Claremont Letter. Volume 2, Issue 2

    ERIC Educational Resources Information Center

    Perez, William

    2007-01-01

    In this issue of "The Claremont Letter," the author argues that efforts to educate immigrant youth need to expand beyond a sole focus on language. Focusing solely on getting immigrant students to become English Proficient is short-sighted and inadequate. Efforts to educate immigrant youth need to include recognition of the various…

  20. Thai Automatic Speech Recognition

    DTIC Science & Technology

    2005-01-01

    used in an external DARPA evaluation involving medical scenarios between an American Doctor and a naïve monolingual Thai patient. 2. Thai Language... dictionary generation more challenging, and (3) the lack of word segmentation, which calls for automatic segmentation approaches to make n-gram language...requires a dictionary and provides various segmentation algorithms to automatically select suitable segmentations. Here we used a maximal matching
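The maximal-matching segmentation the snippet refers to is a greedy longest-first dictionary match. A minimal sketch, using a Latin-script lexicon purely for illustration (the actual system operates on unsegmented Thai text):

```python
def max_match(text, lexicon):
    """Greedy maximal matching: at each position take the longest
    lexicon entry that matches; fall back to a single character."""
    words, i = [], 0
    while i < len(text):
        match = next((text[i:j] for j in range(len(text), i, -1)
                      if text[i:j] in lexicon), text[i])
        words.append(match)
        i += len(match)
    return words

lexicon = {"speech", "recognition", "system"}
print(max_match("speechrecognitionsystem", lexicon))
# ['speech', 'recognition', 'system']
```

Greedy matching can segment wrongly when a longer entry hides a better split, which is why systems like the one described evaluate several candidate segmentations rather than committing to the first.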

  1. Language Learning and Control in Monolinguals and Bilinguals

    PubMed Central

    Bartolotti, James; Marian, Viorica

    2012-01-01

    Parallel language activation in bilinguals leads to competition between languages. Experience managing this interference may aid novel language learning by improving the ability to suppress competition from known languages. To investigate the effect of bilingualism on the ability to control native-language interference, monolinguals and bilinguals were taught an artificial language designed to elicit between-language competition. Partial activation of interlingual competitors was assessed with eye-tracking and mouse-tracking during a word recognition task in the novel language. Eye-tracking results showed that monolinguals looked at competitors more than bilinguals, and for a longer duration of time. Mouse-tracking results showed that monolinguals’ mouse-movements were attracted to native-language competitors, while bilinguals overcame competitor interference by increasing activation of target items. Results suggest that bilinguals manage cross-linguistic interference more effectively than monolinguals. We conclude that language interference can affect lexical retrieval, but bilingualism may reduce this interference by facilitating access to a newly-learned language. PMID:22462514

  2. NATIONAL PREPAREDNESS: Technologies to Secure Federal Buildings

    DTIC Science & Technology

    2002-04-25

    Medium, some resistance based on sensitivity of eye Facial recognition Facial features are captured and compared Dependent on lighting, positioning...two primary types of facial recognition technology used to create templates: 1. Local feature analysis—Dozens of images from regions of the face are...an adjacent feature. Attachment I—Access Control Technologies: Biometrics Facial Recognition How the technology works

  3. Neural associative memories for the integration of language, vision and action in an autonomous agent.

    PubMed

    Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G

    2009-03-01

Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulty. This paper presents a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded in a robot in order to demonstrate correct semantic understanding of the input sentences by letting the robot perform the corresponding actions. For that purpose, a simple neural action-planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.

  4. Preserved recognition in a case of developmental amnesia: implications for the acquisition of semantic memory?

    PubMed

    Baddeley, A; Vargha-Khadem, F; Mishkin, M

    2001-04-01

    We report the performance on recognition memory tests of Jon, who, despite amnesia from early childhood, has developed normal levels of performance on tests of intelligence, language, and general knowledge. Despite impaired recall, he performed within the normal range on each of six recognition tests, but he appears to lack the recollective phenomenological experience normally associated with episodic memory. His recall of previously unfamiliar newsreel events was impaired, but gained substantially from repetition over a 2-day period. Our results are consistent with the hypothesis that the recollective process of episodic memory is not necessary either for recognition or for the acquisition of semantic knowledge.

  5. Post interaural neural net-based vowel recognition

    NASA Astrophysics Data System (ADS)

    Jouny, Ismail I.

    2001-10-01

Interaural head-related transfer functions are used to process speech signatures prior to neural net-based recognition. Data representing the head-related transfer function of a dummy were collected at MIT and made available on the Internet. These data are used to pre-process vowel signatures to mimic the effects of the human ear on speech perception. Signatures representing various vowels of the English language are then presented to a multi-layer perceptron, trained using the back-propagation algorithm, for recognition purposes. The focus of this paper is to assess the effects of the human interaural system on vowel recognition performance, particularly when using a classification system that mimics the human brain, such as a neural net.

  6. Finger tips detection for two handed gesture recognition

    NASA Astrophysics Data System (ADS)

    Bhuyan, M. K.; Kar, Mithun Kumar; Neog, Debanga Raj

    2011-10-01

In this paper, a novel algorithm is proposed for fingertip detection in view of two-handed static hand pose recognition. In our method, the fingertips of both hands are detected after detecting the hand regions by skin color-based segmentation. At first, the face is removed from the image using a Haar classifier, and subsequently the regions corresponding to the gesturing hands are isolated by a region-labeling technique. Next, the key geometric features characterizing the gesturing hands are extracted for the two hands. Finally, for all possible/allowable finger movements, a probabilistic model is developed for pose recognition. The proposed method can be employed in a variety of applications such as sign language recognition and human-robot interaction.
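The skin color-based segmentation step can be sketched with a classic explicit RGB rule; the thresholds below follow a commonly cited heuristic and are not necessarily those used by the authors:

```python
def is_skin(r, g, b):
    """Explicit RGB skin rule (a common daylight heuristic):
    bright, red-dominant pixels with enough colour spread."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and r - g > 15 and r > b)

def skin_mask(image):
    """image: rows of (r, g, b) tuples -> rows of booleans."""
    return [[is_skin(*px) for px in row] for row in image]

# toy 1x2 image: one skin-like pixel, one blue pixel
image = [[(200, 120, 90), (30, 60, 200)]]
print(skin_mask(image))  # [[True, False]]
```

A real pipeline would then run connected-component labeling on the mask to isolate the hand regions, as the abstract describes.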

  7. Conversational sensemaking

    NASA Astrophysics Data System (ADS)

    Preece, Alun; Webberley, Will; Braines, Dave

    2015-05-01

Recent advances in natural language question-answering systems and context-aware mobile apps create opportunities for improved sensemaking in a tactical setting. Users equipped with mobile devices act as both sensors (able to acquire information) and effectors (able to act in situ), operating alone or in collectives. The currently dominant technical approaches follow either a pull model (e.g. Apple's Siri or IBM's Watson which respond to users' natural language queries) or a push model (e.g. Google's Now which sends notifications to a user based on their context). There is growing recognition that users need more flexible styles of conversational interaction, where they are able to freely ask or tell, be asked or told, seek explanations and clarifications. Ideally such conversations should involve a mix of human and machine agents, able to collaborate in collective sensemaking activities with as few barriers as possible. Desirable capabilities include adding new knowledge, collaboratively building models, invoking specific services, and drawing inferences. As a step towards this goal, we collect evidence from a number of recent pilot studies including natural experiments (e.g. situation awareness in the context of organised protests) and synthetic experiments (e.g. human and machine agents collaborating in information seeking and spot reporting). We identify some principles and areas of future research for "conversational sensemaking".

  8. Theory, practice, and history in critical GIS: Reports on an AAG panel session

    USGS Publications Warehouse

    Wilson, M.W.; Poore, B.S.

    2009-01-01

Extending a special session held at the 2008 annual meeting of the Association of American Geographers in Boston, this commentary collection highlights elements of the critical GIS research agenda that are particularly pressing. Responding to a Progress report on critical GIS written by David O'Sullivan in 2006, these six commentaries discuss how different interpretations of 'critical' are traced through critical GIS research. Participants in the panel session discussed the need for a continued discussion of a code of ethics in GIS use in the context of ongoing efforts to alter or remake the software and its associated practices, of neo-geographies and volunteered geographies. There were continued calls for hope and practical ways to actualize this hope, and a recognition that critical GIS needs to remain relevant to the technology. This 'relevance' can be variously defined, and in doing so, researchers should consider their positioning vis-à-vis the technology. Throughout the commentaries collected here, a question remains as to what kind of work disciplinary sub-fields such as critical GIS and GIScience perform. This is a question about language, specifically the distance that language can create among practitioners and theoreticians, both in the case of critical GIS and more broadly throughout GIScience.

  9. Letter-transposition effects are not universal: The impact of transposing letters in Hebrew

    PubMed Central

    Velan, Hadas; Frost, Ram

    2009-01-01

We examined the effects of letter transposition in Hebrew in three masked-priming experiments. Hebrew, like English, has an alphabetic orthography in which sequential and contiguous letter strings represent phonemes. However, being a Semitic language, it has a non-concatenated morphology that is based on root derivations. Experiment 1 showed that transposed-letter (TL) root primes inhibited responses to targets derived from the non-transposed root letters, and that this inhibition was unrelated to relative root frequency. Experiment 2 replicated this result and showed that if the transposed letters of the root created a nonsense-root that had no lexical representation, then no inhibition and no facilitation were obtained. Finally, Experiment 3 demonstrated that in contrast to English, French, or Spanish, TL nonword primes did not facilitate recognition of targets, and when the root letters embedded in them consisted of a legal root morpheme, they produced inhibition. These results suggest that lexical space in alphabetic orthographies may be structured very differently in different languages if their morphological structure diverges qualitatively. In Hebrew, lexical space is organized according to root families rather than simple orthographic structure, so that all words derived from the same root are interconnected or clustered together, independent of overall orthographic similarity. PMID:20161017
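For readers unfamiliar with the paradigm, transposed-letter (TL) primes are simply the strings obtained by swapping one pair of adjacent letters in a target word. A sketch of generating them, with an English example word chosen purely for illustration:

```python
def tl_primes(word):
    """All strings formed by transposing one pair of adjacent letters."""
    return [word[:i] + word[i + 1] + word[i] + word[i + 2:]
            for i in range(len(word) - 1)]

print(tl_primes("judge"))  # ['ujdge', 'jduge', 'jugde', 'judeg']
```

In Hebrew stimuli the interesting cases are transpositions within the root consonants, which can turn one legal root into another or into a nonsense root, as the abstract describes.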

  10. Creating Inclusive Learning Communities through English Language Arts: From "Chanclas" to "Canicas."

    ERIC Educational Resources Information Center

    Franquiz, Maria E.; Reyes, Maria de la Luz

    1998-01-01

    Offers examples of exemplary classroom practice to address the issue of whether English language arts teachers can teach effectively if they are not fluent in the languages students speak. Discusses acts of inclusion (such as raising language status, and having flexible language boundaries), codeswitching as a resource, language choice and…

  11. Gestalt Imagery: A Critical Factor in Language Comprehension.

    ERIC Educational Resources Information Center

    Bell, Nanci

    1991-01-01

    Lack of gestalt imagery (the ability to create imaged wholes) can contribute to language comprehension disorder characterized by weak reading comprehension, weak oral language comprehension, weak oral language expression, weak written language expression, difficulty following directions, and a weak sense of humor. Sequential stimulation using an…

  12. Towards a Sign Language Synthesizer: a Bridge to Communication Gap of the Hearing/Speech Impaired Community

    NASA Astrophysics Data System (ADS)

    Maarif, H. A.; Akmeliawati, R.; Gunawan, T. S.; Shafie, A. A.

    2013-12-01

A sign language synthesizer is a method to visualize sign language movements from spoken language. Sign language (SL) is one of the means used by HSI people to communicate with hearing people, but unfortunately the number of people, including HSI people, who are familiar with sign language is very limited. This causes difficulties in communication between hearing people and HSI people. Sign language consists not only of hand movements but also of facial expressions, and the two elements complement each other: the hand movement conveys the meaning of each sign, while the facial expression conveys the signer's emotion. Generally, a sign language synthesizer recognizes the spoken language using speech recognition, performs the grammatical processing with a context-free grammar, and renders the result with a 3D synthesizer driven by a recorded avatar. This paper analyzes and compares the existing techniques for developing a sign language synthesizer, leading to the IIUM Sign Language Synthesizer.

  13. Performance-intensity functions of Mandarin word recognition tests in noise: test dialect and listener language effects.

    PubMed

    Liu, Danzheng; Shi, Lu-Feng

    2013-06-01

    This study established the performance-intensity function for Beijing and Taiwan Mandarin bisyllabic word recognition tests in noise in native speakers of Wu Chinese. Effects of the test dialect and listeners' first language on psychometric variables (i.e., slope and 50%-correct threshold) were analyzed. Thirty-two normal-hearing Wu-speaking adults who used Mandarin since early childhood were compared to 16 native Mandarin-speaking adults. Both Beijing and Taiwan bisyllabic word recognition tests were presented at 8 signal-to-noise ratios (SNRs) in 4-dB steps (-12 dB to +16 dB). At each SNR, a half list (25 words) was presented in speech-spectrum noise to listeners' right ear. The order of the test, SNR, and half list was randomized across listeners. Listeners responded orally and in writing. Overall, the Wu-speaking listeners performed comparably to the Mandarin-speaking listeners on both tests. Compared to the Taiwan test, the Beijing test yielded a significantly lower threshold for both the Mandarin- and Wu-speaking listeners, as well as a significantly steeper slope for the Wu-speaking listeners. Both Mandarin tests can be used to evaluate Wu-speaking listeners. Of the 2, the Taiwan Mandarin test results in more comparable functions across listener groups. Differences in the performance-intensity function between listener groups and between tests indicate a first language and dialectal effect, respectively.

  14. Towards a Universal Model of Reading

    PubMed Central

    Frost, Ram

    2013-01-01

In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing that offer various forms of noisy position or context-sensitive coding have revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter-order. Underlying the current revolution is the theoretical assumption that the insensitivity of readers to letter order reflects the special way in which the human brain encodes the position of letters in printed words. The present paper discusses the theoretical shortcomings and misconceptions of this approach to visual word recognition. A systematic review of data obtained from a variety of languages demonstrates that letter-order insensitivity is neither a general property of the cognitive system nor a property of the brain in encoding letters. Rather, it is a variant and idiosyncratic characteristic of some languages, mostly European, reflecting a strategy of optimizing encoding resources, given the specific structure of words. Since the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies, an alternative approach to model visual word recognition is offered. The dimensions of a possible universal model of reading, which outlines the common cognitive operations involved in orthographic processing in all writing systems, are discussed. PMID:22929057

  15. Towards a universal model of reading.

    PubMed

    Frost, Ram

    2012-10-01

    In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing that offer various forms of noisy position or context-sensitive coding have revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter order. Underlying the current revolution is the theoretical assumption that the insensitivity of readers to letter order reflects the special way in which the human brain encodes the position of letters in printed words. The present article discusses the theoretical shortcomings and misconceptions of this approach to visual word recognition. A systematic review of data obtained from a variety of languages demonstrates that letter-order insensitivity is neither a general property of the cognitive system nor a property of the brain in encoding letters. Rather, it is a variant and idiosyncratic characteristic of some languages, mostly European, reflecting a strategy of optimizing encoding resources, given the specific structure of words. Since the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies, an alternative approach to model visual word recognition is offered. The dimensions of a possible universal model of reading, which outlines the common cognitive operations involved in orthographic processing in all writing systems, are discussed.

  16. Reconstructing the Past? Low German and the Creating of Regional Identity in Public Language Display

    ERIC Educational Resources Information Center

    Reershemius, Gertrud

    2011-01-01

    This article deals with language contact between a dominant standard language--German--and a lesser-used variety--Low German--in a situation in which the minoritised language is threatened by language shift and language loss. It analyses the application of Low German in forms of public language display and the self-presentation of the community in…

  17. Saving a Language with Computers, Tape Recorders, and Radio.

    ERIC Educational Resources Information Center

    Bennett, Ruth

    This paper discusses the use of technology in instruction. It begins by examining research on technology and indigenous languages, focusing on the use of technology to get community attention for an indigenous language, improve the quantity of quality language, document spoken language, create sociocultural learning contexts, improve study skills,…

  18. Neural Insights into the Relation between Language and Communication

    PubMed Central

    Willems, Roel M.; Varley, Rosemary

    2010-01-01

    The human capacity to communicate has been hypothesized to be causally dependent upon language. Intuitively this seems plausible since most communication relies on language. Moreover, intention recognition abilities (as a necessary prerequisite for communication) and language development seem to co-develop. Here we review evidence from neuroimaging as well as from neuropsychology to evaluate the relationship between communicative and linguistic abilities. Our review indicates that communicative abilities are best considered as neurally distinct from language abilities. This conclusion is based upon evidence showing that humans rely on different cortical systems when designing a communicative message for someone else as compared to when performing core linguistic tasks, as well as upon observations of individuals with severe language loss after extensive lesions to the language system, who are still able to perform tasks involving intention understanding. PMID:21151364

  19. Describing, using 'recognition cones'. [parallel-series model with English-like computer program

    NASA Technical Reports Server (NTRS)

    Uhr, L.

    1973-01-01

    A parallel-serial 'recognition cone' model is examined, taking into account the model's ability to describe scenes of objects. An actual program is presented in an English-like language. The concept of a 'description' is discussed together with possible types of descriptive information. Questions regarding the level and the variety of detail are considered along with approaches for improving the serial representations of parallel systems.

  20. Eyes and ears: Using eye tracking and pupillometry to understand challenges to speech recognition.

    PubMed

    Van Engen, Kristin J; McLaughlin, Drew J

    2018-05-04

    Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), the environment (e.g. noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners. Noisy conditions, for example, can lead to poorer recall for speech, even when it has been correctly understood. Speech intelligibility measures, memory tasks, and subjective reports of listener difficulty all provide critical information about the effects of such challenges on speech recognition. Eye tracking and pupillometry complement these methods by providing objective physiological measures of online cognitive processing during listening. Eye tracking records the moment-to-moment direction of listeners' visual attention, which is closely time-locked to unfolding speech signals, and pupillometry measures the moment-to-moment size of listeners' pupils, which dilate in response to increased cognitive load. In this paper, we review the uses of these two methods for studying challenges to speech recognition. Copyright © 2018. Published by Elsevier B.V.

  1. The Impact of Early Bilingualism on Face Recognition Processes.

    PubMed

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker's face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals' face processing abilities differ from monolinguals'. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation.

  2. Get rich quick: the signal to respond procedure reveals the time course of semantic richness effects during visual word recognition.

    PubMed

    Hargreaves, Ian S; Pexman, Penny M

    2014-05-01

    According to several current frameworks, semantic processing involves an early influence of language-based information followed by later influences of object-based information (e.g., situated simulations; Santos, Chaigneau, Simmons, & Barsalou, 2011). In the present study we examined whether these predictions extend to the influence of semantic variables in visual word recognition. We investigated the time course of semantic richness effects in visual word recognition using a signal-to-respond (STR) paradigm fitted to a lexical decision (LDT) and a semantic categorization (SCT) task. We used linear mixed effects to examine the relative contributions of language-based (number of senses, ARC) and object-based (imageability, number of features, body-object interaction ratings) descriptions of semantic richness at four STR durations (75, 100, 200, and 400ms). Results showed an early influence of number of senses and ARC in the SCT. In both LDT and SCT, object-based effects were the last to influence participants' decision latencies. We interpret our results within a framework in which semantic processes are available to influence word recognition as a function of their availability over time, and of their relevance to task-specific demands. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Spoken Word Recognition in Toddlers Who Use Cochlear Implants

    PubMed Central

    Grieco-Calub, Tina M.; Saffran, Jenny R.; Litovsky, Ruth Y.

    2010-01-01

    Purpose The purpose of this study was to assess the time course of spoken word recognition in 2-year-old children who use cochlear implants (CIs) in quiet and in the presence of speech competitors. Method Children who use CIs and age-matched peers with normal acoustic hearing listened to familiar auditory labels, in quiet or in the presence of speech competitors, while their eye movements to target objects were digitally recorded. Word recognition performance was quantified by measuring each child’s reaction time (i.e., the latency between the spoken auditory label and the first look at the target object) and accuracy (i.e., the amount of time that children looked at target objects within 367 ms to 2,000 ms after the label onset). Results Children with CIs were less accurate and took longer to fixate target objects than did age-matched children without hearing loss. Both groups of children showed reduced performance in the presence of the speech competitors, although many children continued to recognize labels at above-chance levels. Conclusion The results suggest that the unique auditory experience of young CI users slows the time course of spoken word recognition abilities. In addition, real-world listening environments may slow language processing in young language learners, regardless of their hearing status. PMID:19951921

  4. Feature Selection for Speech Emotion Recognition in Spanish and Basque: On the Use of Machine Learning to Improve Human-Computer Interaction

    PubMed Central

    Arruti, Andoni; Cearreta, Idoia; Álvarez, Aitor; Lazkano, Elena; Sierra, Basilio

    2014-01-01

Study of emotions in human–computer interaction is a growing research area. This paper shows an attempt to select the most significant features for emotion recognition in the spoken Basque and Spanish languages using different methods for feature selection. The RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek the most relevant feature subset. The three-phase approach was selected to check the validity of the proposed approach. Achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm in automatic emotion recognition, with all different feature sets, obtaining a mean emotion recognition rate of 80.05% in Basque and 74.82% in Spanish. In order to check the goodness of the proposed process, a greedy searching approach (FSS-Forward) has been applied and a comparison between the two is provided. Based on the achieved results, a set of the most relevant non-speaker-dependent features is proposed for both languages and new perspectives are suggested. PMID:25279686
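The greedy FSS-Forward search mentioned in this record adds, at each step, the single feature whose addition most improves the evaluation score, stopping when no addition helps. A minimal sketch with a toy scoring function; the feature names are hypothetical, not those of the RekEmozio feature set:

```python
def forward_select(features, score, k):
    """FSS-Forward: greedily add the feature whose addition most
    improves the score; stop when nothing improves or k is reached."""
    selected = []
    while len(selected) < k:
        remaining = [f for f in features if f not in selected]
        if not remaining:
            break
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break
        selected.append(best)
    return selected

# toy score: rewards the (hypothetical) optimal subset {"pitch", "energy"}
def toy_score(subset):
    s = set(subset)
    return len(s & {"pitch", "energy"}) - 0.1 * len(s - {"pitch", "energy"})

print(forward_select(["mfcc", "pitch", "jitter", "energy"], toy_score, 3))
# ['pitch', 'energy']
```

In the paper the score would be a classifier's cross-validated recognition rate rather than a closed-form function.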

  5. Kannada character recognition system using neural network

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.

    2013-03-01

    Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, including reading aids for the blind, bank cheque processing, and conversion of any handwritten document into structured text form. Comparatively little work has been done on Indian-language character recognition, especially for Kannada, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. Each handwritten Kannada character is resized to 20×30 pixels, and the resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters are calculated and compared. The results show that the proposed system yields good recognition accuracy rates, comparable to those of other handwritten character recognition systems.
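    The pipeline this record describes (a flattened 20×30 image fed through a single-hidden-layer feed-forward network, class taken as the highest output) can be sketched with a bare forward pass. This is an illustrative stdlib-only sketch with random weights, not the authors' trained system; the class count of 49 is an assumption for illustration:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(pixels, w_hidden, w_out):
    """One forward pass of a single-hidden-layer feed-forward net.
    `pixels` is a flattened 20x30 binary image (600 values)."""
    hidden = [sigmoid(sum(p * w for p, w in zip(pixels, ws))) for ws in w_hidden]
    scores = [sum(h * w for h, w in zip(hidden, ws)) for ws in w_out]
    return scores.index(max(scores))  # predicted class index

random.seed(0)
n_in, n_hidden, n_classes = 20 * 30, 32, 49  # 49 classes: illustrative assumption
w_hidden = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_hidden)]
w_out = [[random.uniform(-0.1, 0.1) for _ in range(n_hidden)] for _ in range(n_classes)]
image = [random.randint(0, 1) for _ in range(n_in)]
pred = forward(image, w_hidden, w_out)
```

Training (backpropagation) and the hidden-layer size sweep reported in the paper are omitted for brevity.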

  6. Vocabulary Knowledge Predicts Lexical Processing: Evidence from a Group of Participants with Diverse Educational Backgrounds

    PubMed Central

    Mainz, Nina; Shao, Zeshu; Brysbaert, Marc; Meyer, Antje S.

    2017-01-01

    Vocabulary knowledge is central to a speaker's command of their language. In previous research, greater vocabulary knowledge has been associated with advantages in language processing. In this study, we examined the relationship between individual differences in vocabulary and language processing performance more closely by (i) using a battery of vocabulary tests instead of just one test, and (ii) testing not only university students (Experiment 1) but young adults from a broader range of educational backgrounds (Experiment 2). Five vocabulary tests were developed, including multiple-choice and open antonym and synonym tests and a definition test, and administered together with two established measures of vocabulary. Language processing performance was measured using a lexical decision task. In Experiment 1, vocabulary and word frequency were found to predict word recognition speed while we did not observe an interaction between the effects. In Experiment 2, word recognition performance was predicted by word frequency and the interaction between word frequency and vocabulary, with high-vocabulary individuals showing smaller frequency effects. While overall the individual vocabulary tests were correlated and showed similar relationships with language processing as compared to a composite measure of all tests, they appeared to share less variance in Experiment 2 than in Experiment 1. Implications of our findings concerning the assessment of vocabulary size in individual differences studies and the investigation of individuals from more varied backgrounds are discussed. PMID:28751871

  7. Creating Official Language Policy from Local Practice: The Example of the Native American Languages Act 1990/1992

    ERIC Educational Resources Information Center

    Warhol, Larisa

    2012-01-01

    This research explores the development of landmark federal language policy in the United States: the Native American Languages Act of 1990/1992 (NALA). Overturning more than two centuries of United States American Indian policy, NALA established the federal role in preserving and protecting Native American languages. Indigenous languages in the…

  8. The Effects of Response Modes and Cues on Language Learning, Cognitive Load and Self-Efficacy Beliefs in Web-Based Learning

    ERIC Educational Resources Information Center

    Chen, Ching-Huei; Huang, Kun

    2014-01-01

    An experiment was conducted to examine how different response modes for practice questions and the presence or absence of cues influenced students' self-efficacy beliefs, perceived cognitive load, and performance in language recall and recognition tasks. One hundred fifty-seven 6th grade students were randomly assigned to one of four conditions:…

  9. Flexible letter-position coding is unlikely to hold for morphologically rich languages.

    PubMed

    Hyönä, Jukka; Bertram, Raymond

    2012-10-01

    We agree with Frost that flexible letter-position coding is unlikely to be a universal property of word recognition across different orthographies. We argue that it is particularly unlikely in morphologically rich languages like Finnish. We also argue that dual-route models are not overly flexible and that they are well equipped to adapt to the linguistic environment at hand.

  10. Journeys and Destinations: The Challenge of Change. A Language Arts Unit for Grades 2-3.

    ERIC Educational Resources Information Center

    Cawley, Carol; And Others

    This unit of study for high-ability language arts students in grades 2-3 uses an inquiry-based approach to investigate literature in an interdisciplinary curriculum. The guiding theme of the unit is the recognition of change as a concept that affects people and their relationships as well as the world around them. The unit provides the vehicle for…

  11. Repeated Reading for Developing Reading Fluency and Reading Comprehension: The Case of EFL Learners in Vietnam

    ERIC Educational Resources Information Center

    Gorsuch, Greta; Taguchi, Etsuo

    2008-01-01

    Reading in a foreign or second language is frequently a laborious process, often caused by underdeveloped word recognition skills, among other things, of second and foreign language readers. Developing fluency in L2/FL reading has become an important pedagogical issue in L2 settings and one major component of reading fluency is fast and accurate word…

  12. Language Context Effects on Interlingual Homograph Recognition: Evidence from Event-Related Potentials and Response Times in Semantic Priming.

    ERIC Educational Resources Information Center

    de Bruijn, Ellen R. A.; Dijkstra, Ton; Chwilla, Dorothee J.; Schriefers, Herbert J.

    2001-01-01

    Dutch-English bilinguals performed a generalized lexical decision task on triplets of items, responding with "yes" if all items were correct Dutch and/or English words, and with "no" if one or more of the items was not a word in either language. Semantic priming effects were found in on-line response times. Event-related…

  13. Spanish as the Second National Language of the United States: Fact, Future, Fiction, or Hope?

    ERIC Educational Resources Information Center

    Macías, Reynaldo F.

    2014-01-01

    The status of a language is very often described and measured by different factors, including the length of time it has been in use in a particular territory, the official recognition it has been given by governmental units, and the number and proportion of speakers. Spanish has a unique history and, some argue, a unique status in the contemporary…

  14. Korean letter handwritten recognition using deep convolutional neural network on android platform

    NASA Astrophysics Data System (ADS)

    Purnamawati, S.; Rachmawati, D.; Lumanauw, G.; Rahmat, R. F.; Taqyuddin, R.

    2018-03-01

    Currently, the popularity of Korean culture attracts many people to learn everything about Korea, particularly its language. To acquire the Korean language, every learner needs to be able to understand Korean non-Latin characters. A digital approach is needed in order to make the Korean learning process easier. This study uses a Deep Convolutional Neural Network (DCNN), which performs recognition on the image based on a previously trained model such as the Inception-v3 model. Subsequently, a retraining process using a transfer learning technique with the trained and retrained model values is carried out in order to develop a new model with better performance without any specific systematic errors. The testing accuracy of this research is 86.9%.
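    Transfer learning as described here keeps the pretrained base network frozen and trains only a new classification layer on its feature vectors. A stdlib-only sketch of that idea, with tiny 2-D toy vectors standing in for Inception-v3 activations (everything below is illustrative, not the study's code):

```python
import math

def train_head(features, labels, n_classes, lr=0.5, epochs=200):
    """Train only a softmax 'head' on fixed feature vectors, as in
    transfer learning over a frozen base network (illustrative sketch)."""
    dim = len(features[0])
    w = [[0.0] * dim for _ in range(n_classes)]
    for _ in range(epochs):
        for x, y in zip(features, labels):
            scores = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
            m = max(scores)                       # stabilize the softmax
            exps = [math.exp(s - m) for s in scores]
            z = sum(exps)
            probs = [e / z for e in exps]
            for c in range(n_classes):            # cross-entropy gradient step
                grad = probs[c] - (1.0 if c == y else 0.0)
                for i in range(dim):
                    w[c][i] -= lr * grad * x[i]
    return w

def predict(w, x):
    scores = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
    return scores.index(max(scores))

# Toy "feature vectors" for two character classes.
feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
labels = [0, 0, 1, 1]
w = train_head(feats, labels, n_classes=2)
```

In the actual study, the feature vectors would be the frozen Inception-v3 bottleneck activations rather than hand-made toy inputs.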

  15. Ambulance Clinical Triage for Acute Stroke Treatment: Paramedic Triage Algorithm for Large Vessel Occlusion.

    PubMed

    Zhao, Henry; Pesavento, Lauren; Coote, Skye; Rodrigues, Edrich; Salvaris, Patrick; Smith, Karen; Bernard, Stephen; Stephenson, Michael; Churilov, Leonid; Yassi, Nawaf; Davis, Stephen M; Campbell, Bruce C V

    2018-04-01

    Clinical triage scales for prehospital recognition of large vessel occlusion (LVO) are limited by low specificity when applied by paramedics. We created the 3-step ambulance clinical triage for acute stroke treatment (ACT-FAST) as the first algorithmic LVO identification tool, designed to improve specificity by recognizing only severe clinical syndromes and optimizing paramedic usability and reliability. The ACT-FAST algorithm consists of (1) unilateral arm drift to stretcher <10 seconds, (2) severe language deficit (if right arm is weak) or gaze deviation/hemineglect assessed by simple shoulder tap test (if left arm is weak), and (3) eligibility and stroke mimic screen. ACT-FAST examination steps were retrospectively validated, and then prospectively validated by paramedics transporting culturally and linguistically diverse patients with suspected stroke in the emergency department, for the identification of internal carotid or proximal middle cerebral artery occlusion. The diagnostic performance of the full ACT-FAST algorithm was then validated for patients accepted for thrombectomy. In retrospective (n=565) and prospective paramedic (n=104) validation, ACT-FAST displayed higher overall accuracy and specificity, when compared with existing LVO triage scales. Agreement of ACT-FAST between paramedics and doctors was excellent (κ=0.91; 95% confidence interval, 0.79-1.0). The full ACT-FAST algorithm (n=60) assessed by paramedics showed high overall accuracy (91.7%), sensitivity (85.7%), specificity (93.5%), and positive predictive value (80%) for recognition of endovascular-eligible LVO. The 3-step ACT-FAST algorithm shows higher specificity and reliability than existing scales for clinical LVO recognition, despite requiring just 2 examination steps. The inclusion of an eligibility step allowed recognition of endovascular-eligible patients with high accuracy. Using a sequential algorithmic approach eliminates scoring confusion and reduces assessment time. 
Future studies will test whether field application of ACT-FAST by paramedics to bypass suspected patients with LVO directly to endovascular-capable centers can reduce delays to endovascular thrombectomy. © 2018 American Heart Association, Inc.
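    The three ACT-FAST steps quoted above form a short decision sequence that can be expressed as a branching function. The sketch below is illustrative only; the parameter names and boolean encodings are assumptions, not the published clinical instrument:

```python
def act_fast(arm_drift_under_10s, weak_side, severe_language_deficit,
             gaze_deviation_or_neglect, passes_eligibility_screen):
    """Hypothetical sketch of the 3-step ACT-FAST decision sequence."""
    # Step 1: unilateral arm drift to stretcher in under 10 seconds.
    if not arm_drift_under_10s:
        return False
    # Step 2: severe language deficit (right arm weak) or gaze
    # deviation / hemineglect via shoulder-tap test (left arm weak).
    if weak_side == "right":
        if not severe_language_deficit:
            return False
    elif weak_side == "left":
        if not gaze_deviation_or_neglect:
            return False
    else:
        return False
    # Step 3: eligibility and stroke-mimic screen.
    return passes_eligibility_screen
```

A sequential structure like this is what lets the algorithm avoid the score-summing confusion of point-based LVO scales.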

  16. Combining functional neuroimaging with off-line brain stimulation: modulation of task-related activity in language areas.

    PubMed

    Andoh, Jamila; Paus, Tomás

    2011-02-01

    Repetitive TMS (rTMS) provides a noninvasive tool for modulating neural activity in the human brain. In healthy participants, rTMS applied over the language-related areas in the left hemisphere, including the left posterior temporal area of Wernicke (LTMP) and inferior frontal area of Broca, have been shown to affect performance on word recognition tasks. To investigate the neural substrate of these behavioral effects, off-line rTMS was combined with fMRI acquired during the performance of a word recognition task. Twenty right-handed healthy men underwent fMRI scans before and after a session of 10-Hz rTMS applied outside the magnetic resonance scanner. Functional magnetic resonance images were acquired during the performance of a word recognition task that used English or foreign-language words. rTMS was applied over the LTMP in one group of 10 participants (LTMP group), whereas the homologue region in the right hemisphere was stimulated in another group of 10 participants (RTMP group). Changes in task-related fMRI response (English minus foreign languages) and task performances (response time and accuracy) were measured in both groups and compared between pre-rTMS and post-rTMS. Our results showed that rTMS increased task-related fMRI response in the homologue areas contralateral to the stimulated sites. We also found an effect of rTMS on response time for the LTMP group only. These findings provide insights into changes in neural activity in cortical regions connected to the stimulated site and are consistent with a hypothesis raised in a previous review about the role of the homologue areas in the contralateral hemisphere for preserving behavior after neural interference.

  17. Novel images and novel locations of familiar images as sensitive translational cognitive tests in humans.

    PubMed

    Raber, Jacob

    2015-05-15

    Object recognition is a sensitive cognitive test to detect effects of genetic and environmental factors on cognition in rodents. There are various versions of object recognition that have been used since the original test was reported by Ennaceur and Delacour in 1988. There are nonhuman primate and human versions of object recognition as well, allowing cross-species comparisons. As no language is required for test performance, object recognition is a very valuable test for human research studies in distinct parts of the world, including areas where there might be fewer years of formal education. The main focus of this review is to illustrate how object recognition can be used to assess cognition in humans under normal physiological and neurological conditions. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. The 'Reading the Mind in Films' Task [child version]: complex emotion and mental state recognition in children with and without autism spectrum conditions.

    PubMed

    Golan, Ofer; Baron-Cohen, Simon; Golan, Yael

    2008-09-01

    Children with autism spectrum conditions (ASC) have difficulties recognizing others' emotions. Research has mostly focused on basic emotion recognition, devoid of context. This study reports the results of a new task, assessing recognition of complex emotions and mental states in social contexts. An ASC group (n = 23) was compared to a general population control group (n = 24). Children with ASC performed lower than controls on the task. Using task scores, more than 87% of the participants were allocated to their group. This new test quantifies complex emotion and mental state recognition in life-like situations. Our findings reveal that children with ASC have residual difficulties in this aspect of empathy. The use of language-based compensatory strategies for emotion recognition is discussed.

  19. Speaker recognition with temporal cues in acoustic and electric hearing

    NASA Astrophysics Data System (ADS)

    Vongphoe, Michael; Zeng, Fan-Gang

    2005-08-01

    Natural spoken language processing includes not only speech recognition but also identification of the speaker's gender, age, and emotional and social status. Our purpose in this study is to evaluate whether temporal cues are sufficient to support both speech and speaker recognition. Ten cochlear-implant and six normal-hearing subjects were presented with vowel tokens spoken by three men, three women, two boys, and two girls. In one condition, the subject was asked to recognize the vowel. In the other condition, the subject was asked to identify the speaker. Extensive training was provided for the speaker recognition task. Normal-hearing subjects achieved nearly perfect performance in both tasks. Cochlear-implant subjects achieved good performance in vowel recognition but poor performance in speaker recognition. The level of the cochlear implant performance was functionally equivalent to normal performance with eight spectral bands for vowel recognition but only to one band for speaker recognition. These results show a dissociation between speech and speaker recognition with primarily temporal cues, highlighting the limitation of current speech processing strategies in cochlear implants. Several methods, including explicit encoding of fundamental frequency and frequency modulation, are proposed to improve speaker recognition for current cochlear implant users.

  20. Mirror neurons, birdsong, and human language: a hypothesis.

    PubMed

    Levy, Florence

    2011-01-01

    The mirror system hypothesis and investigations of birdsong are reviewed in relation to the significance for the development of human symbolic and language capacity, in terms of three fundamental forms of cognitive reference: iconic, indexical, and symbolic. Mirror systems are initially iconic but can progress to indexical reference when produced without the need for concurrent stimuli. Developmental stages in birdsong are also explored with reference to juvenile subsong vs complex stereotyped adult syllables, as an analogy with human language development. While birdsong remains at an indexical reference stage, human language benefits from the capacity for symbolic reference. During a pre-linguistic "babbling" stage, recognition of native phonemic categories is established, allowing further development of subsequent prefrontal and linguistic circuits for sequential language capacity.

  1. Mirror Neurons, Birdsong, and Human Language: A Hypothesis

    PubMed Central

    Levy, Florence

    2012-01-01

    The mirror system hypothesis and investigations of birdsong are reviewed in relation to the significance for the development of human symbolic and language capacity, in terms of three fundamental forms of cognitive reference: iconic, indexical, and symbolic. Mirror systems are initially iconic but can progress to indexical reference when produced without the need for concurrent stimuli. Developmental stages in birdsong are also explored with reference to juvenile subsong vs complex stereotyped adult syllables, as an analogy with human language development. While birdsong remains at an indexical reference stage, human language benefits from the capacity for symbolic reference. During a pre-linguistic “babbling” stage, recognition of native phonemic categories is established, allowing further development of subsequent prefrontal and linguistic circuits for sequential language capacity. PMID:22287950

  2. Human and animal sounds influence recognition of body language.

    PubMed

    Van den Stock, Jan; Grèzes, Julie; de Gelder, Beatrice

    2008-11-25

    In naturalistic settings emotional events have multiple correlates and are simultaneously perceived by several sensory systems. Recent studies have shown that recognition of facial expressions is biased towards the emotion expressed by a simultaneously presented emotional expression in the voice, even if attention is directed to the face only. So far, no study has examined whether this phenomenon also applies to whole body expressions, although there is no obvious reason why this crossmodal influence would be specific to faces. Here we investigated whether perception of emotions expressed in whole body movements is influenced by affective information provided by human and by animal vocalizations. Participants were instructed to attend to the action displayed by the body and to categorize the expressed emotion. The results indicate that recognition of body language is biased towards the emotion expressed by the simultaneously presented auditory information, whether it consists of human or animal sounds. Our results show that a crossmodal influence from auditory to visual emotional information obtains for whole body video images with the facial expression blanked, and includes human as well as animal sounds.

  3. Diminutives facilitate word segmentation in natural speech: cross-linguistic evidence.

    PubMed

    Kempe, Vera; Brooks, Patricia J; Gillis, Steven; Samson, Graham

    2007-06-01

    Final-syllable invariance is characteristic of diminutives (e.g., doggie), which are a pervasive feature of the child-directed speech registers of many languages. Invariance in word endings has been shown to facilitate word segmentation (Kempe, Brooks, & Gillis, 2005) in an incidental-learning paradigm in which synthesized Dutch pseudonouns were used. To broaden the cross-linguistic evidence for this invariance effect and to increase its ecological validity, adult English speakers (n=276) were exposed to naturally spoken Dutch or Russian pseudonouns presented in sentence contexts. A forced choice test was given to assess target recognition, with foils comprising unfamiliar syllable combinations in Experiments 1 and 2 and syllable combinations straddling word boundaries in Experiment 3. A control group (n=210) received the recognition test with no prior exposure to targets. Recognition performance improved with increasing final-syllable rhyme invariance, with larger increases for the experimental group. This confirms that word ending invariance is a valid segmentation cue in artificial, as well as naturalistic, speech and that diminutives may aid segmentation in a number of languages.

  4. Research review: reading comprehension in developmental disorders of language and communication.

    PubMed

    Ricketts, Jessie

    2011-11-01

    Deficits in reading comprehension have been reported in developmental disorders of language and communication, including specific language impairment (SLI), Down syndrome (DS) and autism spectrum disorders (ASD). In this review (based on a search of the ISI Web of Knowledge database to 2011), the Simple View of Reading is used as a framework for considering reading comprehension in these groups. There is substantial evidence for reading comprehension impairments in SLI and growing evidence that weaknesses in this domain are common in DS and ASD. Further, in these groups reading comprehension is typically more impaired than word recognition. However, there is also evidence that some children and adolescents with DS, ASD and a history of SLI develop reading comprehension and word recognition skills at or above the age-appropriate level. This review of the literature indicates that factors including word recognition, oral language, nonverbal ability and working memory may explain reading comprehension difficulties in SLI, DS and ASD. In addition, it highlights methodological issues, implications of poor reading comprehension and fruitful areas for future research. © 2011 The Author. Journal of Child Psychology and Psychiatry © 2011 Association for Child and Adolescent Mental Health.

  5. How Does Anxiety Affect Second Language Learning? A Reply to Sparks and Ganschow.

    ERIC Educational Resources Information Center

    MacIntyre, Peter D.

    1995-01-01

    Advocates that language anxiety can play a significant causal role in creating individual differences in both language learning and communication. This paper studies the role of anxiety in the language learning process and concludes that the linguistic coding deficit hypothesis errs in assigning epiphenomenal status to language anxiety. (57…

  6. A Typology of Language-Brokering Events in Dual-Language Immersion Classrooms

    ERIC Educational Resources Information Center

    Coyoca, Anne Marie; Lee, Jin Sook

    2009-01-01

    This paper examines language-brokering events to better understand how children utilize their linguistic resources to create spaces where the coexistence of two languages can enable or restrict understanding and learning of academic content for themselves and others. An analysis of the structure of language-brokering events reveals that different…

  7. Accuracy of Teachers' Predictions of Language Minority and Majority Students' Language Comprehension

    ERIC Educational Resources Information Center

    Wold, Astri Heen

    2013-01-01

    The classroom is an important context of second language learning for language minority children. Teachers are responsible for creating good learning opportunities by securing language use well adjusted to the children's level of comprehension. Thus, teachers should have realistic expectations of this. The purpose of the research is to increase…

  8. Teachers' English Proficiency and Classroom Language Use: A Conversation Analysis Study

    ERIC Educational Resources Information Center

    Van Canh, Le; Renandya, Willy A.

    2017-01-01

    How does teachers' target language proficiency correlate with their ability to use the target language effectively in order to provide optimal learning opportunities in the language classroom? Adopting a conversation analysis approach, this study examines the extent to which teachers' use of the target language in the classroom creates learning…

  9. HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition.

    PubMed

    Lagorce, Xavier; Orchard, Garrick; Galluppi, Francesco; Shi, Bertram E; Benosman, Ryad B

    2017-07-01

    This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy.
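    The core time-surface construction, exponentially decaying the timestamp of the most recent event at each pixel of a local neighborhood, can be sketched in a few lines. Parameter names and the event-map representation below are assumptions for illustration, not the authors' implementation:

```python
import math

def time_surface(last_times, t_now, cx, cy, radius=2, tau=50e-3):
    """Time-surface at (cx, cy): exponentially decayed timestamps of the
    most recent events in a (2*radius+1)^2 neighborhood.
    `last_times` maps (x, y) -> time in seconds of the latest event there."""
    surface = []
    for dy in range(-radius, radius + 1):
        row = []
        for dx in range(-radius, radius + 1):
            t = last_times.get((cx + dx, cy + dy))
            # exp(-(age)/tau) gives 1.0 for a just-fired pixel, ~0 for stale ones
            row.append(math.exp(-(t_now - t) / tau) if t is not None else 0.0)
        surface.append(row)
    return surface

# Example: center pixel fired now, left neighbor fired one time-constant ago.
s = time_surface({(5, 5): 0.10, (4, 5): 0.05}, t_now=0.10, cx=5, cy=5)
```

In the full hierarchy, these surfaces become the inputs that successive layers cluster into increasingly abstract features.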

  10. [The role of external letter positions in visual word recognition].

    PubMed

    Perea, Manuel; Lupker, Stephen J

    2007-11-01

    A key issue for any computational model of visual word recognition is the choice of an input coding schema, which is responsible for assigning letter positions. Such a schema must reflect the fact that, according to recent research, nonwords created by transposing letters (e.g., caniso for CASINO) typically appear to be more similar to the word than nonwords created by replacing letters (e.g., caviro). In the present research, we initially carried out a computational analysis examining the degree to which the position of the transposition influences transposed-letter similarity effects. We next conducted a masked priming experiment with the lexical decision task to determine whether a transposed-letter priming advantage occurs when the first letter position is involved. Primes were created by either transposing the first and third letters (démula-MEDULA) or replacing the first and third letters (bérula-MEDULA). Results showed that there was no transposed-letter priming advantage in this situation. We discuss the implications of these results for models of visual word recognition.
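    The two nonword-construction operations this record contrasts are simple string manipulations. A sketch (0-based positions; the function names are illustrative):

```python
def transpose(word, i, j):
    """Nonword formed by swapping the letters at positions i and j."""
    chars = list(word)
    chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

def replace(word, i, j, a, b):
    """Nonword formed by replacing the letters at positions i and j."""
    chars = list(word)
    chars[i], chars[j] = a, b
    return "".join(chars)
```

For example, swapping positions 2 and 4 of "casino" yields the transposition nonword "caniso", while substituting new letters at the same positions yields a replacement nonword such as "caviro".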

  11. Computer-Assisted Update of a Consumer Health Vocabulary Through Mining of Social Network Data

    PubMed Central

    2011-01-01

    Background Consumer health vocabularies (CHVs) have been developed to aid consumer health informatics applications. This purpose is best served if the vocabulary evolves with consumers’ language. Objective Our objective was to create a computer assisted update (CAU) system that works with live corpora to identify new candidate terms for inclusion in the open access and collaborative (OAC) CHV. Methods The CAU system consisted of three main parts: a Web crawler and an HTML parser, a candidate term filter that utilizes natural language processing tools including term recognition methods, and a human review interface. In evaluation, the CAU system was applied to the health-related social network website PatientsLikeMe.com. The system’s utility was assessed by comparing the candidate term list it generated to a list of valid terms hand extracted from the text of the crawled webpages. Results The CAU system identified 88,994 unique terms 1- to 7-grams (“n-grams” are n consecutive words within a sentence) in 300 crawled PatientsLikeMe.com webpages. The manual review of the crawled webpages identified 651 valid terms not yet included in the OAC CHV or the Unified Medical Language System (UMLS) Metathesaurus, a collection of vocabularies amalgamated to form an ontology of medical terms, (ie, 1 valid term per 136.7 candidate n-grams). The term filter selected 774 candidate terms, of which 237 were valid terms, that is, 1 valid term among every 3 or 4 candidates reviewed. Conclusion The CAU system is effective for generating a list of candidate terms for human review during CHV development. PMID:21586386
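    The candidate-generation step described here, collecting every run of 1 to 7 consecutive words ("n-grams") from the crawled text, can be sketched as follows; the function name and the lowercasing/whitespace tokenization are illustrative assumptions, not the CAU system's actual tokenizer:

```python
def extract_ngrams(sentence, max_n=7):
    """Return the set of all 1- to `max_n`-grams (runs of n consecutive
    words) in a sentence, as candidate vocabulary terms."""
    words = sentence.lower().split()
    grams = set()
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            grams.add(" ".join(words[i:i + n]))
    return grams

grams = extract_ngrams("shortness of breath after walking")
```

In the CAU pipeline these raw n-grams would then pass through the term-recognition filter before reaching human review.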

  12. Computer-assisted update of a consumer health vocabulary through mining of social network data.

    PubMed

    Doing-Harris, Kristina M; Zeng-Treitler, Qing

    2011-05-17

    Consumer health vocabularies (CHVs) have been developed to aid consumer health informatics applications. This purpose is best served if the vocabulary evolves with consumers' language. Our objective was to create a computer assisted update (CAU) system that works with live corpora to identify new candidate terms for inclusion in the open access and collaborative (OAC) CHV. The CAU system consisted of three main parts: a Web crawler and an HTML parser, a candidate term filter that utilizes natural language processing tools including term recognition methods, and a human review interface. In evaluation, the CAU system was applied to the health-related social network website PatientsLikeMe.com. The system's utility was assessed by comparing the candidate term list it generated to a list of valid terms hand extracted from the text of the crawled webpages. The CAU system identified 88,994 unique terms 1- to 7-grams ("n-grams" are n consecutive words within a sentence) in 300 crawled PatientsLikeMe.com webpages. The manual review of the crawled webpages identified 651 valid terms not yet included in the OAC CHV or the Unified Medical Language System (UMLS) Metathesaurus, a collection of vocabularies amalgamated to form an ontology of medical terms, (ie, 1 valid term per 136.7 candidate n-grams). The term filter selected 774 candidate terms, of which 237 were valid terms, that is, 1 valid term among every 3 or 4 candidates reviewed. The CAU system is effective for generating a list of candidate terms for human review during CHV development.

  13. NEREC, an effective brain mapping protocol for combined language and long-term memory functions.

    PubMed

    Perrone-Bertolotti, Marcela; Girard, Cléa; Cousin, Emilie; Vidal, Juan Ricardo; Pichat, Cédric; Kahane, Philippe; Baciu, Monica

    2015-12-01

    Temporal lobe epilepsy (TLE) can induce functional plasticity in temporoparietal networks involved in language and long-term memory processing. Previous studies in healthy subjects have revealed the relative difficulty for this network to respond effectively across different experimental designs, as compared to more reactive regions such as the frontal lobes. For a protocol to be optimal for clinical use, it first has to show robust effects in a healthy cohort. In this study, we developed a novel experimental paradigm entitled NEREC, which is able to reveal the robust participation of temporoparietal networks in a uniquely combined language and memory task, validated in an fMRI study with healthy subjects. Concretely, NEREC is composed of two runs: (a) an intermixed language-memory task (confrontation naming associated with encoding of nonverbal items, NE) to map language (i.e., word retrieval and lexico-semantic processes) combined with simultaneous long-term verbal memory encoding (NE items named but also explicitly memorized) and (b) a memory retrieval task of items encoded during NE (word recognition, REC) intermixed with new items. Word recognition is based on both perceptual-semantic familiarity (a feeling of 'knowing') and accessing stored memory representations (remembering). In order to maximize remembering and recruitment of medial temporal lobe structures, we increased REC difficulty by changing the modality of stimulus presentation (from nonverbal during NE to verbal during REC). We report that (a) temporoparietal activation during NE was attributable to both lexico-semantic (language) and memory (episodic encoding and semantic retrieval) processes; that (b) encoding activated the left hippocampus, bilateral fusiform, and bilateral inferior temporal gyri; and that (c) recognition (recollection) activated the right hippocampus and the bilateral but predominantly left fusiform gyrus.
The novelty of this protocol consists of (a) combining two tasks in one (language and long-term memory encoding/recall) instead of applying isolated tasks to map temporoparietal regions, (b) analyzing NE data based on performances recorded during REC, (c) double-mapping of the networks involved in naming and in long-term memory encoding and retrieval, (d) capturing remembering through hippocampal activation and familiarity judgment through lateral temporal cortex activation, and (e) its short examination time and feasibility. These aspects are of particular interest in patients with TLE, who frequently show impairment of these cognitive functions. Here, we show that the novel protocol is suited for this clinical evaluation. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. A Database-Based and Web-Based Meta-CASE System

    NASA Astrophysics Data System (ADS)

    Eessaar, Erki; Sgirka, Rünno

    Each Computer Aided Software Engineering (CASE) system provides support to a software process or to specific tasks or activities that are part of a software process. Each meta-CASE system allows us to create new CASE systems. The creators of a new CASE system have to specify the abstract syntax of the language that is used in the system, as well as the functionality and non-functional properties of the new system. Many meta-CASE systems record their data directly in files. In this paper, we introduce a meta-CASE system whose enabling technology is an object-relational database management system (ORDBMS). The system allows users to manage specifications of languages and to create models by using these languages. The system has a web-based, form-based user interface. We have created a proof-of-concept prototype of the system by using the PostgreSQL ORDBMS and the PHP scripting language.
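    The core idea, keeping both language specifications (abstract syntax) and the models built with them in one database, can be sketched with a toy relational schema. The table and column names below are invented for illustration and the paper's actual PostgreSQL schema may differ; SQLite is used here only as a convenient stand-in for an ORDBMS.

```python
import sqlite3

# A toy meta-CASE repository: languages, their element types (abstract
# syntax), models, and model elements all live in the same database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE language (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE element_type (
    id INTEGER PRIMARY KEY,
    language_id INTEGER NOT NULL REFERENCES language(id),
    name TEXT NOT NULL);
CREATE TABLE model (
    id INTEGER PRIMARY KEY,
    language_id INTEGER NOT NULL REFERENCES language(id),
    name TEXT NOT NULL);
CREATE TABLE model_element (
    id INTEGER PRIMARY KEY,
    model_id INTEGER NOT NULL REFERENCES model(id),
    element_type_id INTEGER NOT NULL REFERENCES element_type(id),
    label TEXT);
""")

# Define a small modelling language and one model that conforms to it.
conn.execute("INSERT INTO language (id, name) VALUES (1, 'MiniER')")
conn.executemany(
    "INSERT INTO element_type (language_id, name) VALUES (1, ?)",
    [("Entity",), ("Relationship",)])
conn.execute("INSERT INTO model (id, language_id, name) VALUES (1, 1, 'Shop')")
conn.execute(
    "INSERT INTO model_element (model_id, element_type_id, label) "
    "SELECT 1, id, 'Customer' FROM element_type WHERE name = 'Entity'")

# Foreign keys tie each model element to a type defined by its language.
row = conn.execute("""
    SELECT et.name, me.label
    FROM model_element me JOIN element_type et ON me.element_type_id = et.id
""").fetchone()
```

    Keeping specifications and models in the same database, rather than in files, is what lets such a system enforce conformance between a model and its language with ordinary referential-integrity constraints.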

  15. Creating a Global Cultural Consciousness in a Japanese EFL Classroom

    ERIC Educational Resources Information Center

    Aubrey, Scott

    2009-01-01

    Recently, culture has taken an important role in language education. In this view, creating a global cultural consciousness among second language (L2) students can help bridge the gap between linguistic ability and functional intercultural communication. This paper, which makes reference to Japanese adult EFL learners, justifies the body of…

  16. A Tutorial in Creating Web-Enabled Databases with Inmagic DB/TextWorks through ODBC.

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2000-01-01

    Explains how to create Web-enabled databases. Highlights include Inmagic's DB/Text WebPublisher product called DB/TextWorks; ODBC (Open Database Connectivity) drivers; Perl programming language; HTML coding; Structured Query Language (SQL); Common Gateway Interface (CGI) programming; and examples of HTML pages and Perl scripts. (LRW)
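    The pattern this tutorial describes, a CGI-style gateway that takes a request, queries a database, and renders the rows as HTML, can be sketched in a few lines. This is a generic illustration, not Inmagic's or the article's code: SQLite and parameterized queries stand in for the ODBC layer, and the table and data are invented.

```python
import html
import sqlite3

def render_results(conn, sql, params=()):
    """Run a query and render the result set as an HTML table -- the core
    job a CGI script performs between the browser and the database.
    Parameterized queries avoid SQL injection from user input."""
    cur = conn.execute(sql, params)
    head = "".join(f"<th>{html.escape(c[0])}</th>" for c in cur.description)
    rows = "".join(
        "<tr>" + "".join(f"<td>{html.escape(str(v))}</td>" for v in row) + "</tr>"
        for row in cur)
    return f"<table><tr>{head}</tr>{rows}</table>"

# Hypothetical catalogue data for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (title TEXT, year INTEGER)")
conn.execute("INSERT INTO book VALUES ('Perl and CGI', 1999)")
page = render_results(conn, "SELECT title, year FROM book WHERE year = ?", (1999,))
```

    In the article's setup the same flow runs through a Perl CGI script talking to DB/TextWorks over ODBC, but the request-query-render shape is identical.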

  17. Esperanto Studies: An Overview. Esperanto Document 43A.

    ERIC Educational Resources Information Center

    Tonkin, Humphrey; Fettes, Mark

    The history and development of research on Esperanto, a language created for universal communication, are reviewed. Discussion begins with the early context and intellectual tradition of efforts to create a universal language and proceeds to the creation of an Esperanto community and the context in which it operates currently. Expansion of the…

  18. Complex Adaptive Systems Based Data Integration: Theory and Applications

    ERIC Educational Resources Information Center

    Rohn, Eliahu

    2008-01-01

    Data Definition Languages (DDLs) have been created and used to represent data in programming languages and in database dictionaries. This representation includes descriptions in the form of data fields and relations in the form of a hierarchy, with the common exception of relational databases where relations are flat. Network computing created an…

  19. Khmer Rap Boys, X-Men, Asia's Fruits, and Dragonball Z: Creating Multilingual and Multimodal Classroom Contexts

    ERIC Educational Resources Information Center

    McGinnis, Theresa Ann

    2007-01-01

    In our rapidly changing global culture, students' social worlds are becoming increasingly multilingual and multimodal, yet school practices do not often reflect the complexities or diversity of students' literacy and language practices. Valuing students' experiences with language and culture is important in creating supportive learning…

  20. Children's Agency in Creating and Maintaining Language Policy in Practice in Two "Language Profile" Preschools in Sweden

    ERIC Educational Resources Information Center

    Boyd, Sally; Huss, Leena; Ottesjö, Cajsa

    2017-01-01

    This paper presents results from an ethnographic study of language policy as it is enacted in everyday interaction in two language profile preschools in Sweden with explicit monolingual language policies: English and Finnish, respectively. However, in both preschools, children are free to choose which language to use or to alternate between codes. The study shows how…
