Real-time speech-driven animation of expressive talking faces
NASA Astrophysics Data System (ADS)
Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli
2011-05-01
In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the lower-layer classification at the sub-phonemic level models the relationship between acoustic features of frames and audio labels in phonemes. Under certain constraints, the predicted emotion labels of speech are adjusted to obtain facial expression labels, which are combined with sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visual synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classification, and that the synthesized facial sequences reach a comparatively convincing quality.
ERIC Educational Resources Information Center
Treurniet, William
A study applied artificial neural networks, trained with the back-propagation learning algorithm, to modelling phonemes extracted from the DARPA TIMIT multi-speaker, continuous speech data base. A number of proposed network architectures were applied to the phoneme classification task, ranging from the simple feedforward multilayer network to more…
Automated Classification of Phonological Errors in Aphasic Language
Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.
1984-01-01
Using heuristically guided state-space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represents a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification, provides a prototype simulation tool for neurolinguistic research, and forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.
Effects of emotion on different phoneme classes
NASA Astrophysics Data System (ADS)
Lee, Chul Min; Yildirim, Serdar; Bulut, Murtaza; Busso, Carlos; Kazemzadeh, Abe; Lee, Sungbok; Narayanan, Shrikanth
2004-10-01
This study investigates the effects of emotion on different phoneme classes using short-term spectral features. Research on emotion in speech has mostly focused on prosodic features. In this study, based on the hypothesis that different emotions have varying effects on the properties of different speech sounds, we investigate the usefulness of phoneme-class level acoustic modeling for automatic emotion classification. Hidden Markov models (HMMs) based on short-term spectral features are built for five broad phonetic classes, using data obtained from recordings of two actresses. Each speaker produced 211 sentences with four different emotions (neutral, sad, angry, happy). Using this speech material, we trained and compared the performances of two sets of HMM classifiers: a generic set of ``emotional speech'' HMMs (one for each emotion) and a set of broad phonetic-class based HMMs (vowel, glide, nasal, stop, fricative) for each emotion considered. Comparison of classification results indicates that different phoneme classes were affected differently by emotional change and that vowel sounds are the most important indicator of emotion in speech. Detailed results and their implications for the underlying speech articulation are discussed.
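The per-emotion, per-phoneme-class comparison described above can be illustrated with a much-simplified stand-in: here each (emotion, phonetic-class) pair gets a single 1-D Gaussian over a scalar frame feature instead of a spectral-feature HMM, and an utterance is assigned the emotion whose models give its frames the highest total log-likelihood. All names and data are illustrative, not from the study.

```python
import math
from collections import defaultdict

EMOTIONS = ["neutral", "sad", "angry", "happy"]

def gaussian_logpdf(x, mean, var):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def train(frames):
    """frames: list of (emotion, phone_class, feature) tuples.
    Fit one 1-D Gaussian per (emotion, phone_class) pair."""
    stats = defaultdict(list)
    for emo, cls, x in frames:
        stats[(emo, cls)].append(x)
    models = {}
    for key, xs in stats.items():
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        models[key] = (mean, max(var, 1e-6))  # floor variance for stability
    return models

def classify(models, utterance):
    """utterance: list of (phone_class, feature) frames.
    Return the emotion with the highest summed frame log-likelihood."""
    best, best_ll = None, -math.inf
    for emo in EMOTIONS:
        ll = 0.0
        for cls, x in utterance:
            mean, var = models[(emo, cls)]
            ll += gaussian_logpdf(x, mean, var)
        if ll > best_ll:
            best, best_ll = emo, ll
    return best
```

Replacing the single Gaussians with per-class HMM likelihoods recovers the structure of the paper's phonetic-class classifiers.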
Framewise phoneme classification with bidirectional LSTM and other neural network architectures.
Graves, Alex; Schmidhuber, Jürgen
2005-01-01
In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.
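Framewise classification with bidirectional context can be sketched without a full LSTM: the toy recurrence below runs a scalar tanh RNN over the frame sequence in both directions and feeds the concatenated hidden states to a per-frame softmax, so every frame's label distribution depends on both past and future context. The weights and dimensions are placeholders, not the paper's architecture.

```python
import math

def rnn_pass(frames, w_in, w_rec, reverse=False):
    """One direction of a scalar tanh recurrence over a feature sequence."""
    order = reversed(range(len(frames))) if reverse else range(len(frames))
    h, out = 0.0, [0.0] * len(frames)
    for t in order:
        h = math.tanh(w_in * frames[t] + w_rec * h)
        out[t] = h
    return out

def softmax(zs):
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def framewise_probs(frames, params, n_classes):
    """Concatenate forward and backward hidden states, then a linear
    softmax layer gives one class distribution per frame."""
    fwd = rnn_pass(frames, params["wi_f"], params["wr_f"])
    bwd = rnn_pass(frames, params["wi_b"], params["wr_b"], reverse=True)
    probs = []
    for hf, hb in zip(fwd, bwd):
        logits = [wf * hf + wb * hb + b
                  for wf, wb, b in params["out"][:n_classes]]
        probs.append(softmax(logits))
    return probs
```

A bidirectional LSTM keeps the same two-pass layout but replaces the tanh update with gated memory cells, which is what gives BLSTM its advantage on long contexts.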
NASA Astrophysics Data System (ADS)
Fredouille, Corinne; Pouchoulin, Gilles; Ghio, Alain; Revis, Joana; Bonastre, Jean-François; Giovanni, Antoine
2009-12-01
This paper addresses voice disorder assessment. It proposes an original back-and-forth methodology involving an automatic classification system as well as the knowledge of human experts (machine learning experts, phoneticians, and pathologists). The goal of this methodology is to bring a better understanding of the acoustic phenomena related to dysphonia. The automatic system was validated on a dysphonic corpus (80 female voices) rated according to the GRBAS perceptual scale by an expert jury. First, focusing on the frequency domain, the classification system showed the relevance of the 0-3000 Hz frequency band for the classification task based on the GRBAS scale. Later, an automatic phonemic analysis underlined the significance of consonants, and more surprisingly of unvoiced consonants, for the same classification task. Submitted to the human experts, these observations led to a manual analysis of unvoiced plosives, which highlighted a lengthening of voice onset time (VOT) with dysphonia severity, validated by a preliminary statistical analysis.
Choi, Ja Young; Hu, Elly R; Perrachione, Tyler K
2018-04-01
The nondeterministic relationship between speech acoustics and abstract phonemic representations imposes a challenge for listeners to maintain perceptual constancy despite the highly variable acoustic realization of speech. Talker normalization facilitates speech processing by reducing the degrees of freedom for mapping between encountered speech and phonemic representations. While this process has been proposed to facilitate the perception of ambiguous speech sounds, it is currently unknown whether talker normalization is affected by the degree of potential ambiguity in acoustic-phonemic mapping. We explored the effects of talker normalization on speech processing in a series of speeded classification paradigms, parametrically manipulating the potential for inconsistent acoustic-phonemic relationships across talkers for both consonants and vowels. Listeners identified words with varying potential acoustic-phonemic ambiguity across talkers (e.g., beet/boat vs. boot/boat) spoken by single or mixed talkers. Auditory categorization of words was always slower when listening to mixed talkers compared to a single talker, even when there was no potential acoustic ambiguity between target sounds. Moreover, the processing cost imposed by mixed talkers was greatest when words had the most potential acoustic-phonemic overlap across talkers. Models of acoustic dissimilarity between target speech sounds did not account for the pattern of results. These results suggest (a) that talker normalization incurs the greatest processing cost when disambiguating highly confusable sounds and (b) that talker normalization appears to be an obligatory component of speech perception, taking place even when the acoustic-phonemic relationships across sounds are unambiguous.
Revis, J; Galant, C; Fredouille, C; Ghio, A; Giovanni, A
2012-01-01
Widely studied in terms of perception, acoustics, and aerodynamics, dysphonia nevertheless remains a speech phenomenon, closely related to the phonetic composition of the message conveyed by the voice. In this paper, we present a series of three studies aimed at understanding the phonetic manifestation of dysphonia. Our first study proposes a new approach to the perceptual analysis of dysphonia (phonetic labeling), whose principle is to listen to and evaluate each phoneme in a sentence separately. This study confirms Laver's hypothesis that dysphonia is not a constant noise added to the speech signal but a discontinuous phenomenon, occurring on certain phonemes depending on the phonetic context. However, the burden of executing this task led us to look to the techniques of automatic speaker recognition (ASR) to automate the procedure. In collaboration with the LIA, we developed a system for automatic classification of dysphonia based on ASR techniques; this is the subject of our second study. The first results obtained with this system suggest that unvoiced consonants are the most informative in the automatic classification of dysphonia. This result is surprising, since it is often assumed that dysphonia affects only laryngeal vibration. We began looking for explanations of this phenomenon, and we present our hypotheses and experiments in the third study.
ERIC Educational Resources Information Center
Yang, Hui-Jen; Lay, Yun-Long
2005-01-01
A computer-aided Mandarin phonemes training (CAMPT) system was developed and evaluated for training hearing-impaired students in their pronunciation of Mandarin phonemes. Deaf or hearing-impaired people have difficulty hearing their own voice, hence most of them cannot learn how to speak. Phonemes are the basis for learning to read and speak in…
Tongue corticospinal modulation during attended verbal stimuli: priming and coarticulation effects.
D'Ausilio, Alessandro; Jarmolowska, Joanna; Busan, Pierpaolo; Bufalari, Ilaria; Craighero, Laila
2011-11-01
Humans perceive speech as continuous even when interruptions or brief noise bursts cancel entire phonemes. This robust phenomenon has classically been associated with mechanisms of perceptual restoration. In parallel, recent experimental evidence suggests that the motor system may actively participate in speech perception, even contributing to phoneme discrimination. In the present study we sought to verify whether the motor system also has a specific role in speech perceptual restoration. To this aim we recorded tongue corticospinal excitability during phoneme expectation induced by contextual information. Results showed that phoneme expectation engages the portion of the individual's motor system specifically implicated in the production of the attended phoneme, exactly as happens during actual listening to that phoneme, suggesting the presence of a speech imagery-like process. Interestingly, this motoric phoneme expectation is also modulated by subtle coarticulation cues of which the listener is not consciously aware. The present data indicate that the rehearsal of a specific phoneme requires the contribution of the motor system, exactly as happens during the rehearsal of actions executed by the limbs, and that this process is abolished when an incongruent phonemic cue is presented, similarly to what occurs during observation of anomalous hand actions. We propose that, taken together, these effects indicate that during speech listening an attentional-like mechanism driven by the motor system, based on a feed-forward anticipatory process that constantly verifies incoming information, operates to allow perceptual restoration. Copyright © 2011 Elsevier Ltd. All rights reserved.
Can Explicit Training in Cued Speech Improve Phoneme Identification?
ERIC Educational Resources Information Center
Rees, R.; Fitzpatrick, C.; Foulkes, J.; Peterson, H.; Newton, C.
2017-01-01
When identifying phonemes in new spoken words, lipreading is an important source of information for many deaf people. Because many groups of phonemes are virtually indistinguishable by sight, deaf people are able to identify only about 30% of phonemes when lipreading non-words. Cued speech (CS) is a system of hand shapes and hand positions used…
Disorders of Articulation. Prentice-Hall Foundations of Speech Pathology Series.
ERIC Educational Resources Information Center
Carrell, James A.
Designed for students of speech pathology and audiology and for practicing clinicians, the text considers the nature of the articulation process, criteria for diagnosis, and classification and etiology of disorders. Also discussed are phonetic characteristics, including phonemic errors and configurational and contextual defects; and functional…
Statistical properties of Chinese phonemic networks
NASA Astrophysics Data System (ADS)
Yu, Shuiyuan; Liu, Haitao; Xu, Chunshan
2011-04-01
The study of the properties of speech sound systems is of great significance in understanding the human cognitive mechanism and the working principles of speech sound systems. Some properties of speech sound systems, such as the listener-oriented feature and the talker-oriented feature, have been unveiled through statistical studies of phonemes in human languages and research on the interrelations between human articulatory gestures and the corresponding acoustic parameters. Treating all the phonemes of a speech sound system as a coherent whole, our research, which focuses on the dynamic properties of speech sound systems in operation, investigates statistical parameters of Chinese phoneme networks based on real text and dictionaries. The findings are as follows: phonemic networks have high connectivity degrees and short average distances; the degrees follow a normal distribution and the weighted degrees follow a power-law distribution; vowels enjoy higher priority than consonants in the actual operation of speech sound systems; and the phonemic networks are highly robust against targeted attacks and random errors. In addition, to investigate the structural properties of a speech sound system, a statistical study of dictionaries is conducted, which shows the higher frequency of shorter words and syllables and the tendency that the longer a word is, the shorter the syllables composing it are. From these structural and dynamic properties one can draw the following conclusion: the static structure of a speech sound system tends to promote communication efficiency and save articulation effort, while the dynamic operation of this system gives preference to reliable transmission and easy recognition. In short, a speech sound system is an effective, efficient and reliable communication system optimized in many respects.
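A minimal version of the kind of phoneme-network statistics described above, assuming that adjacency within a word defines an edge (the paper's actual construction from Chinese text and dictionaries may differ):

```python
from collections import defaultdict, deque

def build_network(words):
    """Undirected phoneme network: nodes are phonemes, an edge links two
    phonemes that are adjacent in some word; edge weights count co-occurrences."""
    weight = defaultdict(int)
    adj = defaultdict(set)
    for word in words:
        for a, b in zip(word, word[1:]):
            if a != b:
                weight[tuple(sorted((a, b)))] += 1
                adj[a].add(b)
                adj[b].add(a)
    return adj, weight

def degrees(adj):
    """Unweighted degree of every node."""
    return {node: len(nbrs) for node, nbrs in adj.items()}

def average_distance(adj):
    """Mean shortest-path length over connected ordered node pairs
    (breadth-first search from each node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for node, d in dist.items():
            if node != src:
                total += d
                pairs += 1
    return total / pairs if pairs else 0.0
```

Degree and weighted-degree histograms from such a network are what the distributional claims (normal vs. power-law) would be fitted against.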
[Intervention in dyslexic disorders: phonological awareness training].
Etchepareborda, M C
2003-02-01
Taking into account the systems for the treatment of brain information when drawing up a work plan allows us to recreate processing routines that run from multisensory perception to motor, oral and cognitive production, the step prior to the executive levels of thought, through bottom-up and top-down processing systems. In recent years, the use of phonological methods to prevent or resolve reading disorders has become the fundamental mainstay in the treatment of dyslexia. The work is mainly based on phonological proficiency, which enables the patient to detect phonemes (input), to think about them (performance) and to use them to build words (output). Daily work with rhymes, the capacity to listen, the identification of phrases and words, and the handling of syllables and phonemes allows us to perform a preventive intervention that enhances the capacity to identify letters, phonological analysis and the reading of single words. We present the different therapeutic models that are most frequently employed. Fast ForWord (FFW) training helps make progress in phonematic awareness and other linguistic skills, such as phonological awareness, semantics, syntax, grammar, working memory and event sequencing. With Deco-Fon, a programme for training phonological decoding, work is carried out on the auditory discrimination of pure tones, letters and consonant clusters, auditory processing speed, auditory and phonematic memory, and graphophonological processing, which is fundamental for speech, language and reading-writing disorders. Hamlet is a programme based on categorisation activities for working on phonological conceptualisation. It attempts to encourage the analysis of the segments of words, syllables or phonemes, and the classification of a certain segment as belonging or not to a particular phonological or orthographical category.
Therapeutic approaches in the early phases of reading are oriented towards two poles based on the basic mechanisms underlying the process of learning to read, the grapheme phoneme transformation process and global word recognition. The interventionalist strategies used at school are focused on the use of cognitive strategy techniques. The purpose of these techniques is to teach pupils practical strategies or resources aimed at overcoming specific deficiencies.
Automatic Analysis of Pronunciations for Children with Speech Sound Disorders.
Dudy, Shiran; Bedrick, Steven; Asgari, Meysam; Kain, Alexander
2018-07-01
Computer-Assisted Pronunciation Training (CAPT) systems aim to help a child learn the correct pronunciations of words. However, while there are many commercial CAPT apps online, there is no consensus among Speech Language Therapists (SLPs) or non-professionals about which CAPT systems, if any, work well. The prevailing assumption is that practicing with such programs is unreliable and thus does not provide the feedback necessary for children to improve their performance. The most common method for assessing pronunciation performance is the Goodness of Pronunciation (GOP) technique. Our paper proposes two new GOP techniques. We have found that pronunciation models that use explicit knowledge about erroneous pronunciation patterns can lead to more accurate classification of whether a phoneme was correctly pronounced. We evaluate the proposed pronunciation assessment methods against a baseline state-of-the-art GOP approach, and show that the proposed techniques lead to classification performance closer to that of a human expert.
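For reference, the classic GOP score used as a baseline in this line of work is commonly computed as the average log-ratio between the canonical phoneme's score and the best competing phoneme's score over the aligned frames. A sketch over per-frame posteriors (the paper's proposed error-pattern models are not reproduced here):

```python
import math

def goodness_of_pronunciation(frame_posteriors, target):
    """Classic GOP: average log-ratio between the posterior of the canonical
    (target) phoneme and the best-scoring phoneme, over the frames aligned
    to the target. Values near 0 mean the target was the top hypothesis;
    strongly negative values suggest a mispronunciation.

    frame_posteriors: list of dicts {phoneme: posterior}, one per frame."""
    score = 0.0
    for post in frame_posteriors:
        best = max(post.values())
        score += math.log(post[target] / best)
    return score / len(frame_posteriors)

def is_correct(frame_posteriors, target, threshold=-1.0):
    """Threshold the GOP score to get a correct/incorrect decision.
    The threshold value here is illustrative; in practice it is tuned."""
    return goodness_of_pronunciation(frame_posteriors, target) >= threshold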
Lexical Classification and Spelling: Do People Use Atypical Spellings for Atypical Pseudowords?
ERIC Educational Resources Information Center
Kemp, Nenagh; Treiman, Rebecca; Blackley, Hollie; Svoboda, Imogen; Kessler, Brett
2015-01-01
Many English phonemes have more than one possible spelling. People's choices among the options may be influenced by sublexical patterns, such as the identity of neighboring sounds within the word. However, little research has explored the possible role of lexical conditioning. Three experiments examined the potential effects of one such factor:…
An Introduction to Descriptive Linguistics. Revised Edition.
ERIC Educational Resources Information Center
Gleason, H.A., Jr.
Beginning chapters of this volume define language and describe the sound, stress, and intonation systems of English. The body of the text explores extensively morphology, phonetics, phonemics, and the process of communication. Individual chapters detail such topics as morphemes, syntactic devices, grammatical systems, phonemic problems in language…
Introduction and Overview of the Vicens-Reddy Speech Recognition System.
ERIC Educational Resources Information Center
Kameny, Iris; Ritea, H.
The Vicens-Reddy System is unique in the sense that it approaches the problem of speech recognition as a whole, rather than treating particular aspects of the problems as in previous attempts. For example, where earlier systems treated only segmentation of speech into phoneme groups, or detected phonemes in a given context, the Vicens-Reddy System…
Voice intelligibility in satellite mobile communications
NASA Technical Reports Server (NTRS)
Wishna, S.
1973-01-01
An amplitude control technique is reported that equalizes low-level phonemes in a satellite narrow-band FM voice communication system over channels having low carrier-to-noise ratios. This method presents equal-amplitude phonemes at the transmitter so that the low-level phonemes, when transmitted over the noisy channel, remain above the noise and contribute to output intelligibility. The amplitude control technique also provides for squelching of noise when speech is not being transmitted.
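The idea of presenting equal-amplitude phonemes at the transmitter can be sketched as per-segment gain normalization with a simple squelch; the actual NASA technique and its parameters are not specified in the abstract, so the target level and silence floor below are illustrative:

```python
import math

def rms(samples):
    """Root-mean-square level of a sample list."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def equalize_segments(segments, target_rms=0.25, floor=1e-4):
    """Scale each segment (e.g. one phoneme's samples) so its RMS matches
    target_rms. Segments below the floor are treated as silence and zeroed,
    a crude squelch so no noise is transmitted between utterances."""
    out = []
    for seg in segments:
        level = rms(seg)
        if level < floor:
            out.append([0.0] * len(seg))  # squelch: pass no signal
        else:
            gain = target_rms / level
            out.append([s * gain for s in seg])
    return out
```

After this step every voiced segment reaches the channel at the same level, so weak phonemes are no longer lost under the channel noise.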
Mimological Reveries? Disconfirming the Hypothesis of Phono-Emotional Iconicity in Poetry
Kraxenberger, Maria; Menninghaus, Winfried
2016-01-01
The present study retested previously reported empirical evidence suggesting an iconic relation between sound and emotional meaning in poetry. To this end, we analyzed the frequency of certain phoneme classes in 48 German poems and correlated them with ratings for emotional classification. Our analyses provide evidence for a link between the emotional classification of poems (joyful vs. sad) and the perception of tonal contrast as reflected in the attribution of phenomenological sound qualia (bright vs. dark). However, we could not confirm any of the previous hypotheses and findings regarding either a connection between the frequencies of occurrence of specific vowel classes and the perception of tonal contrast, or a relation between the frequencies of occurrence of consonant classes and emotional classification. PMID:27895614
First Language Grapheme-Phoneme Transparency Effects in Adult Second Language Learning
ERIC Educational Resources Information Center
Ijalba, Elizabeth; Obler, Loraine K.
2015-01-01
The Spanish writing system has consistent grapheme-to-phoneme correspondences (GPC), rendering it more transparent than English. We compared first-language (L1) orthographic transparency on how monolingual English- and Spanish-readers learned a novel writing system with a 1:1 (LT) and a 1:2 (LO) GPC. Our dependent variables were learning time,…
(abstract) Synthesis of Speaker Facial Movements to Match Selected Speech Sequences
NASA Technical Reports Server (NTRS)
Scott, Kenneth C.
1994-01-01
We are developing a system for synthesizing image sequences that simulate the facial motion of a speaker. To perform this synthesis, we are pursuing two major areas of effort. First, we are developing the computer graphics technology needed to synthesize a realistic image sequence of a person speaking selected speech sequences. Second, we are developing a model that expresses the relation between spoken phonemes and face/mouth shape. A subject is videotaped speaking an arbitrary text that contains the full list of desired database phonemes. The subject is videotaped from the front speaking normally, recording audio and video simultaneously. Using the audio track, we identify the specific video frames on the tape corresponding to each spoken phoneme. From this range we digitize the video frame that represents the extreme of mouth motion/shape. Thus, we construct a database of face/mouth shapes related to spoken phonemes. A selected audio speech sequence, which need not come from the speaker used to construct the database, is recorded as the basis for synthesizing a matching video sequence. The audio sequence is analyzed to determine the spoken phoneme sequence and the relative timing of the enunciation of those phonemes. Synthesizing an image sequence corresponding to the spoken phoneme sequence is accomplished using a graphics technique known as morphing. The image sequence keyframes necessary for this processing are chosen based on the spoken phoneme sequence and timing. We have been successful in synthesizing the facial motion of a native English speaker for a small set of arbitrary speech segments. Our future work will focus on advancing the face shape/phoneme model and on independent control of facial features.
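The phoneme-timed keyframe step can be sketched as a cross-dissolve between mouth-shape keyframes (true morphing also warps geometry, which this sketch omits); image representation, frame rate and timings are illustrative:

```python
def morph(frame_a, frame_b, alpha):
    """Cross-dissolve between two keyframe images (flat pixel lists).
    alpha=0 gives frame_a, alpha=1 gives frame_b."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(frame_a, frame_b)]

def synthesize(keyframes, timeline, fps=25):
    """keyframes: {phoneme: image}; timeline: list of (phoneme, start_sec)
    from the audio analysis. Emit one blended frame per video frame between
    consecutive phoneme keyframes, ending on the final keyframe."""
    frames = []
    for (p0, t0), (p1, t1) in zip(timeline, timeline[1:]):
        n = max(1, round((t1 - t0) * fps))
        for i in range(n):
            frames.append(morph(keyframes[p0], keyframes[p1], i / n))
    frames.append(keyframes[timeline[-1][0]])
    return frames
```

The phoneme sequence and start times recovered from the audio track drive which keyframes are blended and for how many video frames.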
ERIC Educational Resources Information Center
LUELSDORFF, PHILIP A.
The languages of Okinawan may be divided into three mutually unintelligible regional dialects, corresponding geographically to the three groups of islands of the Ryuukyuu archipelago. As a representative model of the regional dialects, Agena-guchi is analyzed with respect to phonemic systems, Okinawan morphophonemics, and Okinawan syllable structure…
Grigos, Maria I.; Kolenda, Nicole
2010-01-01
Jaw movement patterns were examined longitudinally in a 3-year-old male with childhood apraxia of speech (CAS) and compared with a typically developing control group. The child with CAS was followed for 8 months, until he began accurately and consistently producing the bilabial phonemes /p/, /b/, and /m/. A movement tracking system was used to study jaw duration, displacement, velocity, and stability. A transcription analysis determined the percentage of phoneme errors and consistency. Results showed phoneme-specific changes which included increases in jaw velocity and stability over time, as well as decreases in duration. Kinematic parameters became more similar to patterns seen in the controls during final sessions where tokens were produced most accurately and consistently. Closing velocity and stability, however, were the only measures to fall within a 95% confidence interval established for the controls across all three target phonemes. These findings suggest that motor processes may differ between children with CAS and their typically developing peers. PMID:20030551
Conditioned allophony in speech perception: an ERP study.
Miglietta, Sandra; Grimaldi, Mirko; Calabrese, Andrea
2013-09-01
A Mismatch Negativity (MMN) study was performed to investigate whether pre-attentive vowel perception is influenced by phonological status. We compared the MMN response to the acoustic distinction between the allophonic variation [ε-e] and phonemic contrast [e-i] present in a Southern-Italian variety (Tricase dialect). Clear MMNs were elicited for both the phonemic and allophonic conditions. Interestingly, a shorter latency was observed for the phonemic pair, but no significant amplitude difference was observed between the two conditions. Together, these results suggest that for isolated vowels, the phonological status of a vowel category is reflected in the latency of the MMN peak. The earlier latency of the phonemic condition argues for an easier parsing and encoding of phonemic contrasts in memory representations. Thus, neural computations mapping auditory inputs into higher perceptual representations seem 'sensitive' to the contrastive/non-contrastive status of the sounds as determined by the listeners' knowledge of the own phonological system. Copyright © 2013 Elsevier Inc. All rights reserved.
Real-time classification of auditory sentences using evoked cortical activity in humans
NASA Astrophysics Data System (ADS)
Moses, David A.; Leonard, Matthew K.; Chang, Edward F.
2018-06-01
Objective. Recent research has characterized the anatomical and functional basis of speech perception in the human auditory cortex. These advances have made it possible to decode speech information from activity in brain regions like the superior temporal gyrus, but no published work has demonstrated this ability in real-time, which is necessary for neuroprosthetic brain-computer interfaces. Approach. Here, we introduce a real-time neural speech recognition (rtNSR) software package, which was used to classify spoken input from high-resolution electrocorticography signals in real-time. We tested the system with two human subjects implanted with electrode arrays over the lateral brain surface. Subjects listened to multiple repetitions of ten sentences, and rtNSR classified what was heard in real-time from neural activity patterns using direct sentence-level and HMM-based phoneme-level classification schemes. Main results. We observed single-trial sentence classification accuracies of 90% or higher for each subject with less than 7 minutes of training data, demonstrating the ability of rtNSR to use cortical recordings to perform accurate real-time speech decoding in a limited vocabulary setting. Significance. Further development and testing of the package with different speech paradigms could influence the design of future speech neuroprosthetic applications.
Phonemes: Lexical access and beyond.
Kazanina, Nina; Bowers, Jeffrey S; Idsardi, William
2018-04-01
Phonemes play a central role in traditional theories as units of speech perception and access codes to lexical representations. Phonemes have two essential properties: they are 'segment-sized' (the size of a consonant or vowel) and abstract (a single phoneme may have different acoustic realisations). Nevertheless, there is a long history of challenging the phoneme hypothesis, with some theorists arguing for differently sized phonological units (e.g. features or syllables) and others rejecting abstract codes in favour of representations that encode detailed acoustic properties of the stimulus. The phoneme hypothesis is the minority view today. We defend the phoneme hypothesis in two complementary ways. First, we show that rejection of phonemes is based on a flawed interpretation of empirical findings. For example, it is commonly argued that the failure to find acoustic invariances for phonemes rules out phonemes. However, the lack of invariance is only a problem on the assumption that speech perception is a bottom-up process. If learned sublexical codes are modified by top-down constraints (which they are), then this argument loses all force. Second, we provide strong positive evidence for phonemes on the basis of linguistic data. Almost all findings that are taken (incorrectly) as evidence against phonemes are based on psycholinguistic studies of single words. However, phonemes were first introduced in linguistics, and the best evidence for phonemes comes from linguistic analyses of complex word forms and sentences. In short, the rejection of phonemes is based on a false analysis and a too-narrow consideration of the relevant data.
NASA Astrophysics Data System (ADS)
Moses, David A.; Mesgarani, Nima; Leonard, Matthew K.; Chang, Edward F.
2016-10-01
Objective. The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography. Approach. The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model. Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder. Main results. The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system. Significance. These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.
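The decoder described above, per-frame phoneme likelihoods combined with n-gram transition probabilities, is a standard Viterbi search. A minimal sketch with a bigram phoneme language model (the NSR system's LDA likelihood model and grid-searched parameters are not reproduced here):

```python
def viterbi(emission_logprobs, bigram_logprobs, phonemes):
    """emission_logprobs: one dict {phoneme: log P(frame | phoneme)} per frame
    (in the NSR setting these would come from the neural likelihood model);
    bigram_logprobs: {(prev, cur): log P(cur | prev)}, with prev '<s>' at the
    start of the sequence. Returns the best phoneme path and its log-score."""
    # initialise with start-of-sequence transitions
    scores = {p: bigram_logprobs[("<s>", p)] + emission_logprobs[0][p]
              for p in phonemes}
    back = [dict.fromkeys(phonemes)]
    for em in emission_logprobs[1:]:
        new_scores, ptr = {}, {}
        for cur in phonemes:
            # best predecessor under the combined acoustic + LM score
            prev = max(phonemes,
                       key=lambda q: scores[q] + bigram_logprobs[(q, cur)])
            new_scores[cur] = scores[prev] + bigram_logprobs[(prev, cur)] + em[cur]
            ptr[cur] = prev
        scores, back = new_scores, back + [ptr]
    # trace the best path backwards through the pointers
    last = max(scores, key=scores.get)
    path = [last]
    for ptr in reversed(back[1:]):
        path.append(ptr[path[-1]])
    return list(reversed(path)), scores[last]
```

Swapping the bigram table for a higher-order n-gram model changes only the transition lookup, which is why language-model order can be grid-searched independently of the acoustic model.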
ERIC Educational Resources Information Center
Sáez, Leilani; Irvin, P. Shawn; Alonzo, Julie; Tindal, Gerald
2012-01-01
In 2006, the easyCBM reading assessment system was developed to support the progress monitoring of phoneme segmenting, letter names and sounds recognition, word reading, passage reading fluency, and comprehension skill development in elementary schools. More recently, the Common Core Standards in English Language Arts have been introduced as a…
Semantic and Phonemic Verbal Fluency in Blinds
ERIC Educational Resources Information Center
Nejati, Vahid; Asadi, Anoosh
2010-01-01
A person who has suffered the total loss of a sensory system has, indirectly, suffered a brain lesion. Semantic and phonologic verbal fluency are used to evaluate executive function and language. The aim of this study is to evaluate and compare phonemic and semantic verbal fluency in individuals with acquired blindness. We compare 137 blind and 124…
Do Phonological Constraints on the Spoken Word Affect Visual Lexical Decision?
ERIC Educational Resources Information Center
Lee, Yang; Moreno, Miguel A.; Carello, Claudia; Turvey, M. T.
2013-01-01
Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language's writing system and the assimilation of phonemes according to the language's constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes…
Causal Influence of Articulatory Motor Cortex on Comprehending Single Spoken Words: TMS Evidence.
Schomers, Malte R; Kirilina, Evgeniya; Weigand, Anne; Bajbouj, Malek; Pulvermüller, Friedemann
2015-10-01
Classic wisdom had been that motor and premotor cortex contribute to motor execution but not to higher cognition and language comprehension. In contrast, mounting evidence from neuroimaging, patient research, and transcranial magnetic stimulation (TMS) suggest sensorimotor interaction and, specifically, that the articulatory motor cortex is important for classifying meaningless speech sounds into phonemic categories. However, whether these findings speak to the comprehension issue is unclear, because language comprehension does not require explicit phonemic classification and previous results may therefore relate to factors alien to semantic understanding. We here used the standard psycholinguistic test of spoken word comprehension, the word-to-picture-matching task, and concordant TMS to articulatory motor cortex. TMS pulses were applied to primary motor cortex controlling either the lips or the tongue as subjects heard critical word stimuli starting with bilabial lip-related or alveolar tongue-related stop consonants (e.g., "pool" or "tool"). A significant cross-over interaction showed that articulatory motor cortex stimulation delayed comprehension responses for phonologically incongruent words relative to congruous ones (i.e., lip area TMS delayed "tool" relative to "pool" responses). As local TMS to articulatory motor areas differentially delays the comprehension of phonologically incongruous spoken words, we conclude that motor systems can take a causal role in semantic comprehension and, hence, higher cognition. © The Author 2014. Published by Oxford University Press.
Strategic Deployment of Orthographic Knowledge in Phoneme Detection
ERIC Educational Resources Information Center
Cutler, Anne; Treiman, Rebecca; van Ooijen, Brit
2010-01-01
The phoneme detection task is widely used in spoken-word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realized. Listeners detected…
The Development of the Speaker Independent ARM Continuous Speech Recognition System
1992-01-01
spoken airborne reconnaissance reports using a speech recognition system based on phoneme-level hidden Markov models (HMMs). Previous versions of the ARM...will involve automatic selection from multiple model sets, corresponding to different speaker types, and that the most rudimentary partition of a...The vocabulary size for the ARM task is 497 words. These words are related to the phoneme-level symbols corresponding to the models in the model set
Marshall, D F; Strutt, A M; Williams, A E; Simpson, R K; Jankovic, J; York, M K
2012-12-01
Despite common occurrences of verbal fluency declines following bilateral subthalamic nucleus deep brain stimulation (STN-DBS) for the treatment of Parkinson's disease (PD), alternating fluency measures using cued and uncued paradigms have not been evaluated. Twenty-three STN-DBS patients were compared with 20 non-surgical PD patients on a comprehensive neuropsychological assessment, including cued and uncued intradimensional (phonemic/phonemic and semantic/semantic) and extradimensional (phonemic/semantic) alternating fluency measures at baseline and 6-month follow-up. STN-DBS patients demonstrated a greater decline on the cued phonemic/phonemic fluency and the uncued phonemic/semantic fluency tasks compared to the PD patients. For STN-DBS patients, verbal learning and information processing speed accounted for a significant proportion of the variance in declines in alternating phonemic/phonemic and phonemic/semantic fluency scores, respectively, whilst only naming was related to uncued phonemic/semantic performance for the PD patients. Both groups were aided by cueing for the extradimensional task at baseline and follow-up, and the PD patients were also aided by cueing for the phonemic/phonemic task on follow-up. These findings suggest that changes in alternating fluency are not related to disease progression alone as STN-DBS patients demonstrated greater declines over time than the PD patients, and this change was related to declines in information processing speed. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.
Bilingualism affects audiovisual phoneme identification
Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia
2014-01-01
We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience—i.e., the exposure to a double phonological code during childhood—affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically “deaf” and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation instead, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded quicker than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech. PMID:25374551
Recognition of speaker-dependent continuous speech with KEAL
NASA Astrophysics Data System (ADS)
Mercier, G.; Bigorgne, D.; Miclet, L.; Le Guennec, L.; Querre, M.
1989-04-01
A description of the speaker-dependent continuous speech recognition system KEAL is given. An unknown utterance is recognized by means of the following procedures: acoustic analysis, phonetic segmentation and identification, word and sentence analysis. The combination of feature-based, speaker-independent coarse phonetic segmentation with speaker-dependent statistical classification techniques is one of the main design features of the acoustic-phonetic decoder. The lexical access component is essentially based on a statistical dynamic programming technique which aims at matching a phonemic lexical entry, containing various phonological forms, against a phonetic lattice. Sentence recognition is achieved by use of a context-free grammar and a parsing algorithm derived from Earley's parser. A speaker adaptation module allows some of the system parameters to be adjusted by matching known utterances with their acoustical representation. The task to be performed, described by its vocabulary and its grammar, is given as a parameter of the system. Continuously spoken sentences extracted from a 'pseudo-Logo' language are analyzed and results are presented.
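The lexical access step, matching phonemic lexical entries (each with possibly several phonological variants) against the decoded phone material by dynamic programming, can be sketched as a weighted edit-distance search. This is a simplification, not KEAL's implementation: the real system matches against a phonetic lattice, and the costs below are invented.

```python
def match_cost(lexical, recognized, sub_cost=1.0, gap_cost=1.0):
    """Levenshtein-style alignment cost between a lexicon entry's
    phoneme string and a recognized phone sequence."""
    n, m = len(lexical), len(recognized)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * gap_cost          # all deletions
    for j in range(1, m + 1):
        d[0][j] = j * gap_cost          # all insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if lexical[i - 1] == recognized[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j - 1] + sub,       # substitution / match
                          d[i - 1][j] + gap_cost,      # deletion
                          d[i][j - 1] + gap_cost)      # insertion
    return d[n][m]

def best_entry(lexicon, recognized):
    """lexicon: dict mapping word -> list of phonological variant forms."""
    return min(lexicon,
               key=lambda w: min(match_cost(v, recognized) for v in lexicon[w]))
```

Listing several phonological forms per entry, as in the abstract, falls out naturally: the entry's score is the best score over its variants.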
Extrinsic cognitive load impairs low-level speech perception.
Mattys, Sven L; Barden, Katharine; Samuel, Arthur G
2014-06-01
Recent research has suggested that the extrinsic cognitive load generated by performing a nonlinguistic visual task while perceiving speech increases listeners' reliance on lexical knowledge and decreases their capacity to perceive phonetic detail. In the present study, we asked whether this effect is accounted for better at a lexical or a sublexical level. The former would imply that cognitive load directly affects lexical activation but not perceptual sensitivity; the latter would imply that increased lexical reliance under cognitive load is only a secondary consequence of imprecise or incomplete phonetic encoding. Using the phoneme restoration paradigm, we showed that perceptual sensitivity decreases (i.e., phoneme restoration increases) almost linearly with the effort involved in the concurrent visual task. However, cognitive load had only a minimal effect on the contribution of lexical information to phoneme restoration. We concluded that the locus of extrinsic cognitive load on the speech system is perceptual rather than lexical. Mechanisms by which cognitive load increases tolerance to acoustic imprecision and broadens phonemic categories were discussed.
Coping Strategies in Reading: Multi-Readers in the Norwegian General Education System
ERIC Educational Resources Information Center
Vik, Astrid Kristin; Fellenius, Kerstin
2007-01-01
Six primary school-aged braille students were taught to name 4 to 10 braille letters as phonemes and another 4 to 10 braille letters as graphemes (Study 1). They were then taught to name 10 braille words as onset-rimes and another 10 braille words as whole words (Study 2). Instruction in phonemes and onset rimes resulted in fewer trials and a…
A comparison of worldwide phonemic and genetic variation in human populations
Creanza, Nicole; Ruhlen, Merritt; Pemberton, Trevor J.; Rosenberg, Noah A.; Feldman, Marcus W.; Ramachandran, Sohini
2015-01-01
Worldwide patterns of genetic variation are driven by human demographic history. Here, we test whether this demographic history has left similar signatures on phonemes—sound units that distinguish meaning between words in languages—to those it has left on genes. We analyze, jointly and in parallel, phoneme inventories from 2,082 worldwide languages and microsatellite polymorphisms from 246 worldwide populations. On a global scale, both genetic distance and phonemic distance between populations are significantly correlated with geographic distance. Geographically close language pairs share significantly more phonemes than distant language pairs, whether or not the languages are closely related. The regional geographic axes of greatest phonemic differentiation correspond to axes of genetic differentiation, suggesting that there is a relationship between human dispersal and linguistic variation. However, the geographic distribution of phoneme inventory sizes does not follow the predictions of a serial founder effect during human expansion out of Africa. Furthermore, although geographically isolated populations lose genetic diversity via genetic drift, phonemes are not subject to drift in the same way: within a given geographic radius, languages that are relatively isolated exhibit more variance in number of phonemes than languages with many neighbors. This finding suggests that relatively isolated languages are more susceptible to phonemic change than languages with many neighbors. Within a language family, phoneme evolution along genetic, geographic, or cognate-based linguistic trees predicts similar ancestral phoneme states to those predicted from ancient sources. More genetic sampling could further elucidate the relative roles of vertical and horizontal transmission in phoneme evolution. PMID:25605893
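Correlations between phonemic, genetic, and geographic distance matrices of the kind reported above are conventionally assessed with a Mantel-style permutation test, since matrix entries are not independent. A minimal sketch (not the authors' analysis code):

```python
import random

def mantel(d1, d2, n_perm=999, seed=0):
    """Permutation test of the correlation between two square distance
    matrices: permute row/column labels of d2, recompute Pearson's r."""
    n = len(d1)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def corr(m1, m2, order):
        x = [m1[i][j] for i, j in pairs]
        y = [m2[order[i]][order[j]] for i, j in pairs]
        mx, my = sum(x) / len(x), sum(y) / len(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x)
               * sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den

    ident = list(range(n))
    r_obs = corr(d1, d2, ident)
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_perm)
               if corr(d1, d2, rng.sample(ident, n)) >= r_obs)
    return r_obs, (hits + 1) / (n_perm + 1)
```

The permutation preserves the internal structure of each matrix while breaking the pairing between populations, which is exactly the null hypothesis being tested.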
Varnet, Léo; Meunier, Fanny; Trollé, Gwendoline; Hoen, Michel
2016-01-01
A vast majority of dyslexic children exhibit a phonological deficit, particularly noticeable in phonemic identification or discrimination tasks. The gap in performance between dyslexic and normotypical listeners appears to decrease into adulthood, suggesting that some individuals with dyslexia develop compensatory strategies. Some dyslexic adults however remain impaired in more challenging listening situations such as in the presence of background noise. This paper addresses the question of the compensatory strategies employed, using the recently developed Auditory Classification Image (ACI) methodology. The results of 18 dyslexics taking part in a phoneme categorization task in noise were compared with those of 18 normotypical age-matched controls. By fitting a penalized Generalized Linear Model on the data of each participant, we obtained his/her ACI, a map of the time-frequency regions he/she relied on to perform the task. Even though dyslexics performed significantly less well than controls, we were unable to detect a robust difference between the mean ACIs of the two groups. This is partly due to the considerable heterogeneity in listening strategies among a subgroup of 7 low-performing dyslexics, as confirmed by a complementary analysis. When excluding these participants to restrict our comparison to the 11 dyslexics performing as well as their average-reading peers, we found a significant difference in the F3 onset of the first syllable, and a tendency of difference on the F4 onset, suggesting that these listeners can compensate for their deficit by relying upon additional allophonic cues. PMID:27100662
Investigating lexical competition and the cost of phonemic restoration.
Balling, Laura Winther; Morris, David Jackson; Tøndering, John
2017-12-01
Due to phonemic restoration, listeners can reliably perceive words when a phoneme is replaced with noise. The cost associated with this process was investigated along with the effect of lexical uniqueness on phonemic restoration, using data from a lexical decision experiment where noise replaced phonemes that were either uniqueness points (the phoneme at which a word deviates from all nonrelated words that share the same onset) or phonemes immediately prior to these. A baseline condition was also included with no noise-interrupted stimuli. Results showed a significant cost of phonemic restoration, with 100 ms longer word identification times and a 14% decrease in word identification accuracy for interrupted stimuli compared to the baseline. Regression analysis of response times from the interrupted conditions showed no effect of whether the interrupted phoneme was a uniqueness point, but significant effects for several temporal attributes of the stimuli, including the duration and position of the interrupted segment. These results indicate that uniqueness points are not distinct breakpoints in the cohort reduction that occurs during lexical processing, but that temporal properties of the interrupted stimuli are central to auditory word recognition. These results are interpreted in the context of models of speech perception.
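The uniqueness point used in this design, the phoneme at which a word deviates from all other words sharing its onset, is straightforward to compute from a phonemic lexicon. A minimal sketch over a toy lexicon (transcriptions illustrative):

```python
def uniqueness_point(word, lexicon):
    """Index of the phoneme at which `word` deviates from every other
    lexicon entry sharing its onset; None if no such point exists
    (e.g. the word is a prefix of another entry)."""
    others = [w for w in lexicon if w != word]
    for i in range(len(word)):
        prefix = word[:i + 1]
        if not any(o[:i + 1] == prefix for o in others):
            return i  # first position where the cohort shrinks to one
    return None
```

Interrupting the phoneme at this index, versus the one immediately before it, is the contrast the experiment manipulates.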
Carlisle, J F
1987-01-01
Currently popular systems for classification of spelling words or errors emphasize the learning of phoneme-grapheme correspondences and memorization of irregular words, but do not take into account the morphophonemic nature of the English language. This study is based on the premise that knowledge of the morphological rules of derivational morphology is acquired developmentally and is related to the spelling abilities of both normal and learning-disabled (LD) students. It addresses three issues: 1) how the learning of derivational morphology and the spelling of derived words by LD students compares to that of normal students; 2) whether LD students learn derived forms rulefully; and 3) the extent to which LD and normal students use knowledge of relationships between base and derived forms to spell derived words (e.g. "magic" and "magician"). The results showed that LD ninth graders' knowledge of derivational morphology was equivalent to that of normal sixth graders, following similar patterns of mastery of orthographic and phonological rules, but that their spelling of derived forms was equivalent to that of the fourth graders. Thus, they know more about derivational morphology than they use in spelling. In addition, they were significantly more apt to spell derived words as whole words, without regard for morphemic structure, than even the fourth graders. Nonetheless, most of the LD spelling errors were phonetically acceptable, suggesting that their misspellings cannot be attributed primarily to poor knowledge of phoneme-grapheme correspondences.
Auditory Phoneme Discrimination in Illiterates: Mismatch Negativity--A Question of Literacy?
ERIC Educational Resources Information Center
Schaadt, Gesa; Pannekamp, Ann; van der Meer, Elke
2013-01-01
These days, illiteracy is still a major problem. There is empirical evidence that auditory phoneme discrimination is one of the factors contributing to written language acquisition. The current study investigated auditory phoneme discrimination in participants who did not acquire written language sufficiently. Auditory phoneme discrimination was…
Homophone Dominance Modulates the Phonemic-Masking Effect.
ERIC Educational Resources Information Center
Berent, Iris; Van Orden, Guy C.
2000-01-01
Finds (1) positive phonemic-masking effects occurred for dominant homophones; (2) null phonemic-masking effects occurred for subordinate homophones; and (3) subordinate homophones were much more likely to be falsely identified as their dominant mate. Suggests that the source of these null phonemic-masking effects is itself a phonology effect. Concludes…
Receptive Vocabulary and Cross-Language Transfer of Phonemic Awareness in Kindergarten Children
ERIC Educational Resources Information Center
Atwill, Kim; Blanchard, Jay; Gorin, Joanna S.; Burstein, Karen
2007-01-01
The authors investigated the influence of language proficiency on the cross-language transfer (CLT) of phonemic awareness in Spanish-speaking kindergarten students and assessed Spanish and English receptive vocabulary and phonemic awareness abilities. Correlation results indicated positive correlations between phonemic awareness across languages;…
ERIC Educational Resources Information Center
Sayeski, Kristin L.; Earle, Gentry A.; Eslinger, R. Paige; Whitenton, Jessy N.
2017-01-01
Matching phonemes (speech sounds) to graphemes (letters and letter combinations) is an important aspect of decoding (translating print to speech) and encoding (translating speech to print). Yet, many teacher candidates do not receive explicit training in phoneme-grapheme correspondence. Difficulty with accurate phoneme production and/or lack of…
The Linguistic Affiliation Constraint and Phoneme Recognition in Diglossic Arabic
ERIC Educational Resources Information Center
Saiegh-Haddad, Elinor; Levin, Iris; Hende, Nareman; Ziv, Margalit
2011-01-01
This study tested the effect of the phoneme's linguistic affiliation (Standard Arabic versus Spoken Arabic) on phoneme recognition among five-year-old Arabic native speaking kindergarteners (N=60). Using a picture selection task of words beginning with the same phoneme, and through careful manipulation of the phonological properties of target…
Zoubrinetzky, Rachel; Collet, Gregory; Serniclaes, Willy; Nguyen-Morel, Marie-Ange; Valdois, Sylviane
2016-01-01
We tested the hypothesis that the categorical perception deficit of speech sounds in developmental dyslexia is related to phoneme awareness skills, whereas a visual attention (VA) span deficit constitutes an independent deficit. Phoneme awareness tasks, VA span tasks and categorical perception tasks of phoneme identification and discrimination using a d/t voicing continuum were administered to 63 dyslexic children and 63 control children matched on chronological age. Results showed significant differences in categorical perception between the dyslexic and control children. Significant correlations were found between categorical perception skills, phoneme awareness and reading. Although VA span correlated with reading, no significant correlations were found between either categorical perception or phoneme awareness and VA span. Mediation analyses performed on the whole dyslexic sample suggested that the effect of categorical perception on reading might be mediated by phoneme awareness. This relationship was independent of the participants' VA span abilities. Two groups of dyslexic children with a single phoneme awareness or a single VA span deficit were then identified. The phonologically impaired group showed lower categorical perception skills than the control group but categorical perception was similar in the VA span impaired dyslexic and control children. The overall findings suggest that the link between categorical perception, phoneme awareness and reading is independent from VA span skills. These findings provide new insights on the heterogeneity of developmental dyslexia. They suggest that phonological processes and VA span independently affect reading acquisition.
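The mediation logic reported here (the effect of categorical perception on reading operating through phoneme awareness) is often screened by comparing a raw correlation with a partial correlation that controls for the candidate mediator. A sketch with synthetic data; this is not the authors' analysis, which used formal mediation models.

```python
def pearson(a, b):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def partial_corr(x, y, m):
    """Correlation between x and y after partialling out the mediator m."""
    rxy, rxm, rmy = pearson(x, y), pearson(x, m), pearson(m, y)
    return (rxy - rxm * rmy) / (((1 - rxm ** 2) * (1 - rmy ** 2)) ** 0.5)
```

When the raw x-y correlation is strong but the partial correlation controlling for m collapses toward zero, the data are consistent with m mediating the effect, which is the pattern the mediation analyses above probe.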
[Phoneme analysis and phoneme discrimination of juvenile speech therapy school students].
Franz, S; Rosanowski, F; Eysholdt, U; Hoppe, U
2011-05-01
Phoneme analysis and phoneme discrimination, important factors in acquiring spoken and written language, have been evaluated in juvenile speech therapy school students. The results have been correlated with the results of a school achievement test. The following questions were of interest: Do students in the lower verbal skill segment show pathological phoneme analysis and phoneme discrimination skills? Do the results of the school achievement test differ from the results of students attending German "Hauptschule"? How does phoneme analysis and phoneme discrimination performance correlate with other tested parameters? 74 students of a speech therapy school ranging from 7th to 9th grade were examined (ages 12;10-17;04) with the Heidelberg Phoneme Discrimination Test H-LAD and the school achievement test "Prüfsystem für Schul- und Bildungsberatung PSB-R 6-13". Compared to 4th graders the juvenile speech therapy school students showed worse results in the H-LAD test with good differentiation in the lower measuring range. In the PSB-R 6-13 test the examined students did worse compared to students attending German "Hauptschule" for all grades except 9th grade. Comparing H-LAD and PSB-R 6-13 shows a significant correlation for the sub-tests covering language competence and intelligence but not for the concentration tests. Pathological phoneme analysis and phoneme discrimination skills suggest elevated need for counseling, but this needs to be corroborated through additional linguistic parameters and measuring non-verbal intelligence. Further trials are needed in order to clarify whether the results can lead to sophisticated therapy algorithms for educational purposes. © Georg Thieme Verlag KG Stuttgart · New York.
Dynamic Encoding of Acoustic Features in Neural Responses to Continuous Speech.
Khalighinejad, Bahar; Cruzatto da Silva, Guilherme; Mesgarani, Nima
2017-02-22
Humans are unique in their ability to communicate using spoken language. However, it remains unclear how the speech signal is transformed and represented in the brain at different stages of the auditory pathway. In this study, we characterized electroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme instances (phoneme-related potential). We showed that responses to different phoneme categories are organized by phonetic features. We found that each instance of a phoneme in continuous speech produces multiple distinguishable neural responses occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Comparing the patterns of phoneme similarity in the neural responses and the acoustic signals confirms a repetitive appearance of acoustic distinctions of phonemes in the neural data. Analysis of the phonetic and speaker information in neural activations revealed that different time intervals jointly encode the acoustic similarity of both phonetic and speaker categories. These findings provide evidence for a dynamic neural transformation of low-level speech features as they propagate along the auditory pathway, and form an empirical framework to study the representational changes in learning, attention, and speech disorders. SIGNIFICANCE STATEMENT We characterized the properties of evoked neural responses to phoneme instances in continuous speech. We show that each instance of a phoneme in continuous speech produces several observable neural responses at different times occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Each temporal event explicitly encodes the acoustic similarity of phonemes, and linguistic and nonlinguistic information are best represented at different time intervals. Finally, we show a joint encoding of phonetic and speaker information, where the neural representation of speakers is dependent on phoneme category. 
These findings provide compelling new evidence for dynamic processing of speech sounds in the auditory pathway. Copyright © 2017 Khalighinejad et al.
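The phoneme-related potential described above is, at its core, a time-locked epoch average around phoneme onsets. A single-channel sketch (illustrative only, not the authors' EEG pipeline):

```python
def phoneme_related_potential(eeg, onsets, pre, post):
    """Average EEG epochs time-locked to phoneme onsets.

    eeg:    1-D list of samples from one channel
    onsets: sample indices of phoneme instances in continuous speech
    pre:    samples kept before each onset
    post:   samples kept after each onset
    """
    epochs = [eeg[t - pre:t + post] for t in onsets
              if t - pre >= 0 and t + post <= len(eeg)]
    n = len(epochs)
    # pointwise average across epochs: activity phase-locked to the
    # phoneme onset survives, uncorrelated activity averages out
    return [sum(e[k] for e in epochs) / n for k in range(pre + post)]
```

Grouping the onsets by phoneme category before averaging yields the per-category responses whose similarity structure the study analyzes.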
Sleep underpins the plasticity of language production.
Gaskell, M Gareth; Warker, Jill; Lindsay, Shane; Frost, Rebecca; Guest, James; Snowdon, Reza; Stackhouse, Abigail
2014-07-01
The constraints that govern acceptable phoneme combinations in speech perception and production have considerable plasticity. We addressed whether sleep influences the acquisition of new constraints and their integration into the speech-production system. Participants repeated sequences of syllables in which two phonemes were artificially restricted to syllable onset or syllable coda, depending on the vowel in that sequence. After 48 sequences, participants either had a 90-min nap or remained awake. Participants then repeated 96 sequences so implicit constraint learning could be examined, and then were tested for constraint generalization in a forced-choice task. The sleep group, but not the wake group, produced speech errors at test that were consistent with restrictions on the placement of phonemes in training. Furthermore, only the sleep group generalized their learning to new materials. Polysomnography data showed that implicit constraint learning was associated with slow-wave sleep. These results show that sleep facilitates the integration of new linguistic knowledge with existing production constraints. These data have relevance for systems-consolidation models of sleep. © The Author(s) 2014.
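The artificial constraints used in this paradigm, a phoneme legal only in syllable onset or only in coda depending on the vowel, can be represented as a simple lookup table for scoring produced syllables. The restrictions below are invented for illustration.

```python
def violations(syllable, constraints):
    """Count constraint violations in a CVC syllable.

    syllable:    (onset, vowel, coda) tuple of phoneme symbols
    constraints: dict mapping (vowel, phoneme) -> 'onset' or 'coda',
                 the only position that phoneme may occupy with that vowel;
                 unlisted combinations are unrestricted.
    """
    onset, vowel, coda = syllable
    bad = 0
    if constraints.get((vowel, onset), 'onset') != 'onset':
        bad += 1  # phoneme produced in onset but restricted to coda
    if constraints.get((vowel, coda), 'coda') != 'coda':
        bad += 1  # phoneme produced in coda but restricted to onset
    return bad
```

Scoring participants' productions against such a table is one way to quantify whether speech errors at test respect the trained, vowel-contingent restrictions.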
Hardy, Chris J D; Agustus, Jennifer L; Marshall, Charles R; Clark, Camilla N; Russell, Lucy L; Bond, Rebecca L; Brotherhood, Emilie V; Thomas, David L; Crutch, Sebastian J; Rohrer, Jonathan D; Warren, Jason D
2017-07-27
Non-verbal auditory impairment is increasingly recognised in the primary progressive aphasias (PPAs) but its relationship to speech processing and brain substrates has not been defined. Here we addressed these issues in patients representing the non-fluent variant (nfvPPA) and semantic variant (svPPA) syndromes of PPA. We studied 19 patients with PPA in relation to 19 healthy older individuals. We manipulated three key auditory parameters in sequences of spoken syllables: temporal regularity, phonemic spectral structure and prosodic predictability (an index of fundamental information content, or entropy). The ability of participants to process these parameters was assessed using two-alternative, forced-choice tasks and neuroanatomical associations of task performance were assessed using voxel-based morphometry of patients' brain magnetic resonance images. Relative to healthy controls, both the nfvPPA and svPPA groups had impaired processing of phonemic spectral structure and signal predictability while the nfvPPA group additionally had impaired processing of temporal regularity in speech signals. Task performance correlated with standard disease severity and neurolinguistic measures. Across the patient cohort, performance on the temporal regularity task was associated with grey matter in the left supplementary motor area and right caudate, performance on the phoneme processing task was associated with grey matter in the left supramarginal gyrus, and performance on the prosodic predictability task was associated with grey matter in the right putamen. Our findings suggest that PPA syndromes may be underpinned by more generic deficits of auditory signal analysis, with a distributed cortico-subcortical neuroanatomical substrate extending beyond the canonical language network. This has implications for syndrome classification and biomarker development.
Initial Insights into Phoneme Awareness Intervention for Children with Complex Communication Needs
ERIC Educational Resources Information Center
Clendon, Sally; Gillon, Gail; Yoder, David
2005-01-01
This study provides insights into the benefits of phoneme awareness intervention for children with complex communication needs (CCN). The specific aims of the study were: (1) to determine whether phoneme awareness skills can be successfully trained in children with CCN; and (2) to observe any transfer effects to phoneme awareness tasks not…
Cousin, Emilie; Perrone, Marcela; Baciu, Monica
2009-04-01
This behavioral study aimed at assessing the effect of two variables on the degree of hemispheric specialization for language. One was grapho-phonemic translation (letter-sound mapping) and the other was the participants' gender. The experiment was conducted with healthy volunteers. A divided visual field procedure was used to perform a phoneme detection task involving either regular (transparent) grapho-phonemic translation (letter-sound mapping consistency) or irregular (non-transparent) grapho-phonemic translation (letter-sound mapping inconsistency). Our results reveal a significant effect of grapho-phonemic translation on the degree of hemispheric dominance for language. The phoneme detection on items with transparent translation (TT) was performed faster than phoneme detection on items with non-transparent translation (NTT). This effect seems to be due to faster identification of TT than NTT when the items were presented in the left visual field (LVF)-right hemisphere (RH). There was no difference between TT and NTT for stimuli presented in the right visual field (RVF)-left hemisphere (LH). This result suggests that grapho-phonemic translation or the degree of transparency can affect the degree of hemispheric specialization, by modulating the right hemisphere activity. With respect to gender, male participants were significantly more lateralized than female participants but no interaction was observed between gender and degree of transparency.
When Variability Matters More than Meaning: The Effect of Lexical Forms on Use of Phonemic Contrasts
ERIC Educational Resources Information Center
Thiessen, Erik D.
2011-01-01
During the first half of the 2nd year of life, infants struggle to use phonemic distinctions in label-object association tasks. Prior experiments have demonstrated that exposure to the phonemes in distinct lexical forms (e.g., /d/ and /t/ in "daddy" and "tiger", respectively) facilitates infants' use of phonemic contrasts but also that they…
ERIC Educational Resources Information Center
Saiegh-Haddad, Elinor; Kogan, Nadya; Walters, Joel
2010-01-01
The study tested phonemic awareness in the two languages of Russian (L1)-Hebrew (L2) sequential bilingual children (N = 20) using phoneme deletion tasks where the phoneme to be deleted occurred word initial, word final, as a singleton, or part of a cluster, in long and short words and stressed and unstressed syllables. The experiments were…
Speaker-independent phoneme recognition with a binaural auditory image model
NASA Astrophysics Data System (ADS)
Francis, Keith Ivan
1997-09-01
This dissertation presents phoneme recognition techniques based on a binaural fusion of outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from the auditory image model of Patterson is used for azimuth-selective phoneme recognition. The method is tested in various levels of noise and the results are reported in contrast to binaural fusion methods based on various forms of correlation to demonstrate the potential of coincidence-based binaural phoneme recognition. This method overcomes smearing of fine speech detail typical of correlation based methods. Nevertheless, coincidence is able to measure similarity of left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.
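Coincidence detection of the kind described can be sketched as a Jeffress-style delay line: count left/right spike coincidences at each candidate interaural delay and take the delay with the most coincidences as the azimuth estimate. A toy illustration on binary spike trains (the dissertation's auditory-image front end is not modelled; names and units are illustrative):

```python
import numpy as np

def coincidences(left, right, d):
    """Count samples where left[i] and right[i + d] both spike."""
    if d >= 0:
        return int(np.sum(left[: len(left) - d] * right[d:]))
    return int(np.sum(left[-d:] * right[: len(right) + d]))

def best_delay(left, right, max_lag):
    """Return the interaural delay (in samples, within +/- max_lag)
    that maximizes the coincidence count."""
    lags = list(range(-max_lag, max_lag + 1))
    counts = [coincidences(left, right, d) for d in lags]
    return lags[int(np.argmax(counts))]
```

In a full model, one such coincidence array would be computed per frequency channel of the auditory image, and the counts at the selected azimuth would form the feature vector passed to the phoneme recognizer.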
Influences of spoken word planning on speech recognition.
Roelofs, Ardi; Ozdemir, Rebecca; Levelt, Willem J M
2007-09-01
In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway. 2007 APA
Acquisition of Malay word recognition skills: lessons from low-progress early readers.
Lee, Lay Wah; Wheldall, Kevin
2011-02-01
Malay is a consistent alphabetic orthography with complex syllable structures. The focus of this research was to investigate word recognition performance in order to inform reading interventions for low-progress early readers. Forty-six Grade 1 students were sampled and 11 were identified as low-progress readers. The results indicated that both syllable awareness and phoneme blending were significant predictors of word recognition, suggesting that both syllable and phonemic grain-sizes are important in Malay word recognition. Item analysis revealed a hierarchical pattern of difficulty based on the syllable and the phonic structure of the words. Error analysis identified the sources of errors to be errors due to inefficient syllable segmentation, oversimplification of syllables, insufficient grapheme-phoneme knowledge and inefficient phonemic code assembly. Evidence also suggests that direct instruction in syllable segmentation, phonemic awareness and grapheme-phoneme correspondence is necessary for low-progress readers to acquire word recognition skills. Finally, a logical sequence to teach grapheme-phoneme decoding in Malay is suggested. Copyright © 2010 John Wiley & Sons, Ltd.
Loui, Psyche; Kroog, Kenneth; Zuk, Jennifer; Winner, Ellen; Schlaug, Gottfried
2011-01-01
Language and music are complex cognitive and neural functions that rely on awareness of one's own sound productions. Information on the awareness of vocal pitch, and its relation to phonemic awareness which is crucial for learning to read, will be important for understanding the relationship between tone-deafness and developmental language disorders such as dyslexia. Here we show that phonemic awareness skills are positively correlated with pitch perception–production skills in children. Children between the ages of seven and nine were tested on pitch perception and production, phonemic awareness, and IQ. Results showed a significant positive correlation between pitch perception–production and phonemic awareness, suggesting that the relationship between musical and linguistic sound processing is intimately linked to awareness at the level of pitch and phonemes. Since tone-deafness is a pitch-related impairment and dyslexia is a deficit of phonemic awareness, we suggest that dyslexia and tone-deafness may have a shared and/or common neural basis. PMID:21687467
Secure Recognition of Voice-Less Commands Using Videos
NASA Astrophysics Data System (ADS)
Yau, Wai Chee; Kumar, Dinesh Kant; Weghorn, Hans
Interest in voice recognition technologies for internet applications is growing due to the flexibility of speech-based communication. The major drawback with the use of sound for internet access with computers is that the commands will be audible to other people in the vicinity. This paper examines a secure and voice-less method for recognition of speech-based commands using video without evaluating sound signals. The proposed approach represents mouth movements in the video data using 2D spatio-temporal templates (STT). Zernike moments (ZM) are computed from STT and fed into support vector machines (SVM) to be classified into one of the utterances. The experimental results demonstrate that the proposed technique produces a high accuracy of 98% in a phoneme classification task. The proposed technique is demonstrated to be invariant to global variations of illumination level. Such a system is useful for securely interpreting user commands for internet applications on mobile devices.
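The template-plus-classifier pipeline above can be sketched as follows. The motion-history-style template is a simplified stand-in for the paper's STT, and raw template pixels stand in for the Zernike-moment features (which could be computed with, e.g., mahotas.features.zernike_moments); function names and parameters are illustrative, not the authors' implementation:

```python
import numpy as np
from sklearn.svm import SVC

def spatio_temporal_template(frames, thresh=0.1):
    """Collapse a mouth-region clip (T, H, W) into one 2-D template:
    each pixel holds the normalised time of its last significant
    intensity change, a simple motion-history rendering of the utterance."""
    T = len(frames)
    stt = np.zeros(frames[0].shape)
    for t in range(1, T):
        moving = np.abs(frames[t] - frames[t - 1]) > thresh
        stt[moving] = t / (T - 1)  # later motion -> brighter
    return stt

def train_classifier(templates, labels):
    """Fit an SVM on (flattened) templates, one label per utterance class."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit([s.ravel() for s in templates], labels)
    return clf
```

At recognition time a new clip is reduced to its template, featurized the same way, and passed to clf.predict, so no audio signal is ever evaluated.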
Reconsidering the role of temporal order in spoken word recognition.
Toscano, Joseph C; Anderson, Nathaniel D; McMurray, Bob
2013-10-01
Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.
ERIC Educational Resources Information Center
Ryder, Janice F.; Tunmer, William E.; Greaney, Keith T.
2008-01-01
The aim of this study was to determine whether explicit instruction in phonemic awareness and phonemically based decoding skills would be an effective intervention strategy for children with early reading difficulties in a whole language instructional environment. Twenty-four 6- and 7-year-old struggling readers were randomly assigned to an…
NASA Astrophysics Data System (ADS)
Lindsay, D.
1985-02-01
Research on the automatic computer analysis of intonation using linguistic knowledge is described. The use of computer programs to analyze and classify fundamental frequency (F0) contours, and work on the psychophysics of British English intonation and on the phonetics of F0 contours are described. Results suggest that F0 can be conveniently tracked to represent intonation through time, which can be subsequently used by a computer program as the basis for analysis. Nuclear intonation was studied: the intonational nucleus is the region of auditory prominence, or information focus, found in all spoken sentences. The main mechanism behind such prominence is the perception of an extensive F0 movement on the nuclear syllable. A classification of the nuclear contour shape is a classification of the sentence type, often into categories that cannot be readily determined from only the segmental phonemes of the utterance.
Caballero-Morales, Santiago-Omar
2013-01-01
An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists of phoneme-level acoustic modelling of emotion-specific vowels. For this, a standard phoneme-based Automatic Speech Recognition (ASR) system was built with Hidden Markov Models (HMMs), where different phoneme HMMs were built for the consonants and emotion-specific vowels associated with four emotional states (anger, happiness, neutral, sadness). Then, estimation of the emotional state from a spoken sentence is performed by counting the number of emotion-specific vowels found in the ASR's output for the sentence. With this approach, accuracy of 87–100% was achieved for the recognition of emotional state of Mexican Spanish speech. PMID:23935410
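The decision rule in this approach, counting emotion-specific vowel models in the recognizer's output and taking the majority, is simple enough to sketch directly. The "_emotion" tag scheme below is invented for illustration; the actual HMM naming convention is not specified in the abstract:

```python
from collections import Counter

EMOTIONS = ("anger", "happiness", "neutral", "sadness")

def estimate_emotion(asr_phonemes):
    """Pick the emotion whose emotion-specific vowel models appear most
    often in the ASR output for one sentence. Vowels are assumed tagged
    with the emotion of the HMM that recognized them, e.g. 'a_anger';
    untagged symbols are consonants and cast no vote."""
    votes = Counter(p.split("_", 1)[1] for p in asr_phonemes if "_" in p)
    return max(EMOTIONS, key=lambda e: votes.get(e, 0))
```

Because every vowel token votes, longer sentences give more stable estimates, which is consistent with the sentence-level evaluation reported.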
Manoiloff, Laura; Segui, Juan; Hallé, Pierre
2016-01-01
In this research, we combine a cross-form word-picture visual masked priming procedure with an internal phoneme monitoring task to examine repetition priming effects. In this paradigm, participants have to respond to pictures whose names begin with a prespecified target phoneme. This task unambiguously requires retrieving the word-form of the target picture's name and implicitly orients participants' attention towards a phonological level of representation. The experiments were conducted within Spanish, whose highly transparent orthography presumably promotes fast and automatic phonological recoding of subliminal, masked visual word primes. Experiments 1 and 2 show that repetition primes speed up internal phoneme monitoring in the target, compared to primes beginning with a different phoneme from the target, or sharing only their first phoneme with the target. This suggests that repetition primes preactivate the phonological code of the entire target picture's name, hereby speeding up internal monitoring, which is necessarily based on such a code. To further qualify the nature of the phonological code underlying internal phoneme monitoring, a concurrent articulation task was used in Experiment 3. This task did not affect the repetition priming effect. We propose that internal phoneme monitoring is based on an abstract phonological code, prior to its translation into articulation.
Fully optimized discrimination of physiological responses to auditory stimuli
Kruglikov, Stepan Y; Chari, Sharmila; Rapp, Paul E; Weinstein, Steven L; Given, Barbara K; Schiff, Steven J
2008-01-01
The use of multivariate measurements to characterize brain activity (electrical, magnetic, optical) is widespread. The most common approaches to reduce the complexity of such observations include principal and independent component analyses (PCA and ICA), which are not well suited for discrimination tasks. We addressed two questions: first, how do the neurophysiological responses to elongated phonemes relate to tone and phoneme responses in normal children, and, second, how discriminable are these responses. We employed fully optimized linear discrimination analysis to maximally separate the multi-electrode responses to tones and phonemes, and classified the response to elongated phonemes. We find that discrimination between tones and phonemes is dependent upon responses from associative regions of the brain apparently distinct from the primary sensory cortices typically emphasized by PCA or ICA, and that the neuronal correlates corresponding to elongated phonemes are highly variable in normal children (about half respond with neural correlates of tones and half as phonemes). Our approach is made feasible by the increase in computational power of ordinary personal computers and has significant advantages for a wide range of neuronal imaging modalities. PMID:18430975
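Fully optimized linear discrimination of two response classes is, in its basic two-class form, Fisher's linear discriminant. A minimal numpy sketch (the ridge term and function names are illustrative conveniences, not the authors' exact procedure):

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Direction w = Sw^{-1} (m1 - m0) that maximally separates two
    classes of multichannel responses (rows = trials, cols = features)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    # small ridge keeps Sw invertible for high-dimensional EEG features
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    threshold = w @ (m0 + m1) / 2
    return w, threshold

def classify(X, w, threshold):
    """Project trials onto w and threshold at the class midpoint."""
    return (X @ w > threshold).astype(int)
```

Unlike PCA or ICA, the projection w is chosen to separate the classes rather than to explain variance, which is why it can weight associative electrodes that the variance-dominant components ignore.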
NASA Astrophysics Data System (ADS)
Minagawa-Kawai, Yasuyo; Mori, Koichi; Furuya, Izumi; Hayashi, Ryoko; Sato, Yutaka
2002-05-01
The present study examined cerebral responses to phoneme categories, using near-infrared spectroscopy (NIRS) by measuring the concentration and oxygenation of hemoglobin accompanying local brain activities. The target phonemes were Japanese long and short vowel categories, realized only by durational differences. Results of the NIRS and behavioral tests revealed that NIRS could capture phoneme-specific information. The left side of the auditory area showed large hemodynamic changes only for contrasting stimuli between which a phonemic boundary was estimated (across-category condition), but not for stimuli differing by an equal duration but belonging to the same phoneme category (within-category condition). Left dominance in phoneme processing was also confirmed for the across-category stimuli. These findings indicate that the Japanese vowel contrast based only on duration is dealt with in the same language-dominant hemisphere as the other phonemic categories as studied with MEG and PET, and that the cortical activities related to its processing can be detected with NIRS. [Work supported by Japan Society for Promotion of Science (No. 8484) and a grant from Ministry of Health and Welfare of Japan.]
Mele, Sonia; Ghirardi, Valentina; Craighero, Laila
2017-12-01
A long-term debate concerns whether the sensorimotor coding carried out during transitive actions observation reflects the low-level movement implementation details or the movement goals. By contrast, phonemes and emotional facial expressions are intransitive actions that do not fall into this debate. The investigation of phonemes discrimination has proven to be a good model to demonstrate that the sensorimotor system plays a role in understanding actions acoustically presented. In the present study, we adapted the experimental paradigms already used in phonemes discrimination during face posture manipulation to the discrimination of emotional facial expressions. We submitted participants to a lower or to an upper face posture manipulation during the execution of a four-alternative labelling task of pictures randomly taken from four morphed continua between two emotional facial expressions. The results showed that the implementation of low-level movement details influences the discrimination of ambiguous facial expressions differing in the specific involvement of those movement details. These findings indicate that facial expressions discrimination is a good model to test the role of the sensorimotor system in the perception of actions visually presented.
Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel
2015-01-01
Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.
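The core of a classification-image analysis is a trial-by-trial regression of the listener's categorization responses on the noise added to each stimulus. The sketch below substitutes plain L2-penalized logistic regression for the smoothness-penalized GLM used in the published method; array shapes and names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classification_image(noise_fields, responses):
    """Estimate a classification image by regressing binary
    categorization responses on the trial-wise time-frequency noise.

    noise_fields : (n_trials, n_freq, n_time) noise added per trial
    responses    : (n_trials,) 0/1 category reported by the listener
    Returns a weight map of shape (n_freq, n_time); large-magnitude
    cells are the time-frequency regions driving the decision.
    """
    X = noise_fields.reshape(len(noise_fields), -1)
    glm = LogisticRegression(C=0.1, max_iter=2000).fit(X, responses)
    return glm.coef_.reshape(noise_fields.shape[1:])
```

Pooling trials across listeners before fitting yields the group-level image; significance of individual regions would then be assessed with a permutation or cluster-based test, as in the paper.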
Functions of graphemic and phonemic codes in visual word-recognition.
Meyer, D E; Schvaneveldt, R W; Ruddy, M G
1974-03-01
Previous investigators have argued that printed words are recognized directly from visual representations and/or phonological representations obtained through phonemic recoding. The present research tested these hypotheses by manipulating graphemic and phonemic relations within various pairs of letter strings. Ss in two experiments classified the pairs as words or nonwords. Reaction times and error rates were relatively small for word pairs (e.g., BRIBE-TRIBE) that were both graphemically and phonemically similar. Graphemic similarity alone inhibited performance on other word pairs (e.g., COUCH-TOUCH). These and other results suggest that phonological representations play a significant role in visual word recognition and that there is a dependence between successive phonemic-encoding operations. An encoding-bias model is proposed to explain the data.
Assessing Specific Grapho-Phonemic Skills in Elementary Students
ERIC Educational Resources Information Center
Robbins, Kelly P.; Hosp, John L.; Hosp, Michelle K.; Flynn, Lindsay J.
2010-01-01
This study examines the relation between decoding and spelling performance on tasks that represent identical specific grapho-phonemic patterns. Elementary students (N = 206) were administered a 597 pseudoword decoding inventory representing 12 specific grapho-phonemic patterns and a 104 real-word spelling inventory representing identical…
Analyzing Distributional Learning of Phonemic Categories in Unsupervised Deep Neural Networks
Räsänen, Okko; Nagamine, Tasha; Mesgarani, Nima
2017-01-01
Infants’ speech perception adapts to the phonemic categories of their native language, a process assumed to be driven by the distributional properties of speech. This study investigates whether deep neural networks (DNNs), the current state-of-the-art in distributional feature learning, are capable of learning phoneme-like representations of speech in an unsupervised manner. We trained DNNs with unlabeled and labeled speech and analyzed the activations of each layer with respect to the phones in the input segments. The analyses reveal that the emergence of phonemic invariance in DNNs is dependent on the availability of phonemic labeling of the input during the training. No increased phonemic selectivity of the hidden layers was observed in the purely unsupervised networks despite successful learning of low-dimensional representations for speech. This suggests that additional learning constraints or more sophisticated models are needed to account for the emergence of phone-like categories in distributional learning operating on natural speech. PMID:29359204
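Layer-wise phonemic selectivity of the kind analyzed here is commonly quantified with a linear probe: how accurately can phone labels be decoded linearly from a layer's activations? Comparing probe scores across layers tracks where (if anywhere) phonemic structure emerges. A minimal sketch, not necessarily the authors' exact analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def phonemic_selectivity(layer_activations, phone_labels):
    """Cross-validated accuracy of a linear probe decoding phone
    identity from one layer's activations.

    layer_activations : (n_segments, n_units) activations for one layer
    phone_labels      : (n_segments,) phone label of each input segment
    """
    probe = LogisticRegression(max_iter=1000)
    return cross_val_score(probe, layer_activations, phone_labels, cv=3).mean()
```

Under the paper's finding, supervised networks would show selectivity rising through the hidden layers, while the purely unsupervised networks would show no such increase despite learning compact representations.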
Nurturing Phonemic Awareness and Alphabetic Knowledge in Pre-Kindergartners.
ERIC Educational Resources Information Center
Steinhaus, Patricia L.
Reading research continues to identify phonemic awareness and knowledge of the alphabetic principle as key factors in the literacy acquisition process and to indicate that they greatly facilitate decoding efforts. While research indicates that phonemic awareness and alphabetic knowledge are necessary to literacy acquisition, many early childhood…
What Does the Right Hemisphere Know about Phoneme Categories?
ERIC Educational Resources Information Center
Wolmetz, Michael; Poeppel, David; Rapp, Brenda
2011-01-01
Innate auditory sensitivities and familiarity with the sounds of language give rise to clear influences of phonemic categories on adult perception of speech. With few exceptions, current models endorse highly left-hemisphere-lateralized mechanisms responsible for the influence of phonemic category on speech perception, based primarily on results…
Fee, Fie, Phonemic Awareness: 130 Prereading Activities for Preschoolers.
ERIC Educational Resources Information Center
Hohmann, Mary
Noting that phonemic awareness has been identified as an essential skill that prepares children for reading, this book contains 130 phonemic awareness activities suitable for small-group learning in preschools, prekindergarten programs, Head Start programs, child care centers, and home-based programs. Reflecting the teaching strategies of the…
The Nature of Phoneme Representation in Spoken Word Recognition
ERIC Educational Resources Information Center
Gaskell, M. Gareth; Quinlan, Philip T.; Tamminen, Jakke; Cleland, Alexandra A.
2008-01-01
Four experiments used the psychological refractory period logic to examine whether integration of multiple sources of phonemic information has a decisional locus. All experiments made use of a dual-task paradigm in which participants made forced-choice color categorization (Task 1) and phoneme categorization (Task 2) decisions at varying stimulus…
Sayeski, Kristin L; Earle, Gentry A; Eslinger, R Paige; Whitenton, Jessy N
2017-04-01
Matching phonemes (speech sounds) to graphemes (letters and letter combinations) is an important aspect of decoding (translating print to speech) and encoding (translating speech to print). Yet, many teacher candidates do not receive explicit training in phoneme-grapheme correspondence. Difficulty with accurate phoneme production and/or lack of understanding of sound-symbol correspondence can make it challenging for teachers to (a) identify student errors on common assessments and (b) serve as a model for students when teaching beginning reading or providing remedial reading instruction. For students with dyslexia, lack of teacher proficiency in this area is particularly problematic. This study examined differences between two learning conditions (massed and distributed practice) on teacher candidates' development of phoneme-grapheme correspondence knowledge and skills. An experimental, pretest-posttest-delayed test design was employed with teacher candidates (n = 52) to compare a massed practice condition (one 60-min session) to a distributed practice condition (four 15-min sessions distributed over 4 weeks) for learning phonemes associated with letters and letter combinations. Participants in the distributed practice condition significantly outperformed participants in the massed practice condition on their ability to correctly produce phonemes associated with different letters and letter combinations. Implications for teacher preparation are discussed.
2014-01-01
Background: The processing of verbal fluency tasks relies on the coordinated activity of a number of brain areas, particularly in the frontal and temporal lobes of the left hemisphere. Recent studies using functional magnetic resonance imaging (fMRI) to study the neural networks subserving verbal fluency functions have yielded divergent results especially with respect to a parcellation of the inferior frontal gyrus for phonemic and semantic verbal fluency. We conducted a coordinate-based activation likelihood estimation (ALE) meta-analysis on brain activation during the processing of phonemic and semantic verbal fluency tasks involving 28 individual studies with 490 healthy volunteers. Results: For phonemic as well as for semantic verbal fluency, the most prominent clusters of brain activation were found in the left inferior/middle frontal gyrus (LIFG/MIFG) and the anterior cingulate gyrus. BA 44 was only involved in the processing of phonemic verbal fluency tasks, BA 45 and 47 in the processing of phonemic and semantic fluency tasks. Conclusions: Our comparison of brain activation during the execution of either phonemic or semantic verbal fluency tasks revealed evidence for spatially different activation in BA 44, but not in other regions of the LIFG/MIFG (BA 9, 45, 47), during phonemic and semantic verbal fluency processing. PMID:24456150
Phonetic basis of phonemic paraphasias in aphasia: Evidence for cascading activation.
Kurowski, Kathleen; Blumstein, Sheila E
2016-02-01
Phonemic paraphasias are a common presenting symptom in aphasia and are thought to reflect a deficit in which selecting an incorrect phonemic segment results in the clear-cut substitution of one phonemic segment for another. The current study re-examines the basis of these paraphasias. Seven left hemisphere-damaged aphasics with a range of left hemisphere lesions and clinical diagnoses including Broca's, Conduction, and Wernicke's aphasia, were asked to produce syllable-initial voiced and voiceless fricative consonants, [z] and [s], in CV syllables followed by one of five vowels [i e a o u] in isolation and in a carrier phrase. Acoustic analyses were conducted focusing on two acoustic parameters signaling voicing in fricative consonants: duration and amplitude properties of the fricative noise. Results show that for all participants, regardless of clinical diagnosis or lesion site, phonemic paraphasias leave an acoustic trace of the original target in the error production. These findings challenge the view that phonemic paraphasias arise from a mis-selection of phonemic units followed by its correct implementation, as traditionally proposed. Rather, they appear to derive from a common mechanism with speech errors reflecting the co-activation of a target and competitor resulting in speech output that has some phonetic properties of both segments. Copyright © 2015 Elsevier Ltd. All rights reserved.
Strategic deployment of orthographic knowledge in phoneme detection.
Cutler, Anne; Treiman, Rebecca; van Ooijen, Brit
2010-01-01
The phoneme detection task is widely used in spoken-word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realized. Listeners detected the target sounds [b, m, t, f, s, k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b, m, t], which have consistent word-initial spelling, than to the targets [f, s, k], which are inconsistently spelled, but only when spelling was rendered salient by the presence in the experiment of many irregularly spelled filler words. Within the inconsistent targets [f, s, k], there was no significant difference between responses to targets in words with more usual (foam, seed, cattle) versus less usual (phone, cede, kettle) spellings. Phoneme detection is thus not necessarily sensitive to orthographic effects; knowledge of spelling stored in the lexical representations of words does not automatically become available as word candidates are activated. However, salient orthographic manipulations in experimental input can induce such sensitivity. We attribute this to listeners' experience of the value of spelling in everyday situations that encourage phonemic decisions (such as learning new names).
Effect of speech-intrinsic variations on human and automatic recognition of spoken phonemes.
Meyer, Bernd T; Brand, Thomas; Kollmeier, Birger
2011-01-01
The aim of this study is to quantify the gap between the recognition performance of human listeners and an automatic speech recognition (ASR) system, with special focus on intrinsic variations of speech such as speaking rate and effort, altered pitch, and the presence of dialect and accent. Second, we investigate whether the most common ASR features contain all the information required to recognize speech in noisy environments, by using resynthesized ASR features in listening experiments. For the phoneme recognition task, the ASR system achieved the human performance level only when the signal-to-noise ratio (SNR) was increased by 15 dB, which serves as an estimate of the human-machine gap in terms of SNR. The major part of this gap is attributed to the feature extraction stage, since human listeners achieve comparable recognition scores when the SNR difference between unaltered and resynthesized utterances is 10 dB. Intrinsic variabilities result in strong increases in error rates, both in human speech recognition (HSR) and ASR (with a relative increase of up to 120%). An analysis of phoneme duration and recognition rates indicates that human listeners identify temporal cues better than the machine at low SNRs, which suggests incorporating information about the temporal dynamics of speech into ASR systems.
ERIC Educational Resources Information Center
Gross, Jo-Anne
The Remediation Plus System for reading, spelling, and writing is based on phonemic awareness training, linguistic gymnastics, and Orton Gillingham methodology. It employs multisensory, systematic phonics and "exhaustively thorough" lesson plans. The system contains a training manual, a testing manual, three training videos, a…
ERIC Educational Resources Information Center
Moore, D.R.; Rosenberg, J.F.; Coleman, J.S.
2005-01-01
Auditory perceptual learning has been proposed as effective for remediating impaired language and for enhancing normal language development. We examined the effect of phonemic contrast discrimination training on the discrimination of whole words and on phonological awareness in 8- to 10-year-old mainstream school children. Eleven phonemic contrast…
Phonemic Code Dependence Varies with Previous Exposure to Words.
ERIC Educational Resources Information Center
Rabin, Jeffrey L.; Zecker, Steven G.
Reading researchers and theorists are sharply divided as to how meaning is obtained from the printed word. Three current explanations are that (1) meaning is accessed directly, without any intermediate processes; (2) meaning is accessed only through an intermediate phonemic stage; and (3) both direct access and phonemic mediation can occur. To…
A Reference Grammar of Adamawa Fulani. African Language Monograph Number 8.
ERIC Educational Resources Information Center
Stennes, Leslie H.
This reference work is a structural grammar of the Adamawa dialect of Fulani as spoken in Nigeria and Cameroun. It is primarily written for linguists and those who already know Fulani. The grammar is divided into three parts: (1) phonemics and morphophonemics, discussing segmental and suprasegmental phonemes, permitted sequences of phonemes,…
Learning Phonemes with a Proto-Lexicon
ERIC Educational Resources Information Center
Martin, Andrew; Peperkamp, Sharon; Dupoux, Emmanuel
2013-01-01
Before the end of the first year of life, infants begin to lose the ability to perceive distinctions between sounds that are not phonemic in their native language. It is typically assumed that this developmental change reflects the construction of language-specific phoneme categories, but how these categories are learned largely remains a mystery.…
ERIC Educational Resources Information Center
Edwards, Oliver W.; Taub, Gordon E.
2016-01-01
Research indicates the primary difference between strong and weak readers is their phonemic awareness skills. However, there is no consensus regarding which specific components of phonemic awareness contribute most robustly to reading comprehension. In this study, the relationship among sound blending, sound segmentation, and reading comprehension…
ERIC Educational Resources Information Center
Davidson, Lisa
2011-01-01
Previous research indicates that multiple levels of linguistic information play a role in the perception and discrimination of non-native phonemes. This study examines the interaction of phonetic, phonemic and phonological factors in the discrimination of non-native phonotactic contrasts. Listeners of Catalan, English, and Russian are presented…
NASA Astrophysics Data System (ADS)
Modegi, Toshio
Using our previously developed audio-to-MIDI code converter tool “Auto-F”, we can create MIDI data from given vocal acoustic signals, enabling playback of voice-like signals on a standard MIDI synthesizer. Applying this tool, we are constructing a MIDI database consisting of simple harmonic-structured MIDI codes converted from a set of 71 recorded Japanese male and female syllable signals. We are also developing a novel voice-synthesizing system based on harmonically synthesizing musical sounds, which can generate MIDI data and play back voice signals on a MIDI synthesizer from plain Japanese (kana) text, by reference to the syllable MIDI code database. In this paper, we propose an improved MIDI converter tool that produces MIDI codes with higher temporal resolution. We then propose an algorithm that separates a set of 20 consonant and vowel phoneme MIDI codes from the 71 converted syllable MIDI codes in order to construct a voice-synthesizing system. Finally, we present evaluation results comparing voice-synthesis quality between the separated phoneme MIDI codes and their original syllable MIDI codes, using our 4-syllable word listening tests.
Can a linguistic serial founder effect originating in Africa explain the worldwide phonemic cline?
2016-01-01
It has been proposed that a serial founder effect could have caused the present observed pattern of global phonemic diversity. Here we present a model that simulates the human range expansion out of Africa and the subsequent spatial linguistic dynamics until today. It does not assume copying errors, Darwinian competition, reduced contrastive possibilities or any other specific linguistic mechanism. We show that the decrease of linguistic diversity with distance (from the presumed origin of the expansion) arises under three assumptions, previously introduced by other authors: (i) an accumulation rate for phonemes; (ii) small phonemic inventories for the languages spoken before the out-of-Africa dispersal; (iii) an increase in the phonemic accumulation rate with the number of speakers per unit area. Numerical simulations show that the predictions of the model agree with the observed decrease of linguistic diversity with increasing distance from the most likely origin of the out-of-Africa dispersal. Thus, the proposal that a serial founder effect could have caused the present observed pattern of global phonemic diversity is viable, if three strong assumptions are satisfied. PMID:27122180
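The model's three assumptions can be illustrated with a toy one-dimensional simulation (our own sketch; the function name and all parameter values are arbitrary illustrations, not the paper's):

```python
import numpy as np

def simulate_cline(n_sites=50, t_total=2000, v=0.05, r=0.01,
                   p0=15.0, a=0.005):
    """Toy 1-D version of the three assumptions: sites are colonized
    by a wave of advance at speed v, population at each colonized site
    grows logistically (rate r), and the local phoneme inventory grows
    at a rate proportional (a) to population density."""
    colonized_at = np.arange(n_sites) / v        # colonization times
    pop = np.zeros(n_sites)
    phonemes = np.full(n_sites, p0)              # small initial inventories
    for t in range(t_total):
        active = colonized_at <= t
        pop[active & (pop == 0)] = 0.01          # founding population
        pop[active] += r * pop[active] * (1 - pop[active])
        phonemes[active] += a * pop[active]
    return phonemes

inv = simulate_cline()
# Inventory size decreases with distance from the origin of the expansion:
print(inv[0] > inv[25] > inv[49])
```

Because later-colonized sites have spent less time at high population density, their inventories have accumulated fewer phonemes, reproducing the qualitative cline without any copying errors or Darwinian competition.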
Faes, Jolien; Gillis, Joris; Gillis, Steven
2016-01-01
Phonemic accuracy of children with cochlear implants (CI) is often reported to be lower than that of normally hearing (NH) age-matched children. In this study, we compare phonemic accuracy development in the spontaneous speech of Dutch-speaking children with CI and NH age-matched peers. A dynamic cost model of Levenshtein distance is used to compute the accuracy of each word token. We set up a longitudinal design with monthly data for comparisons up to age two and a cross-sectional design with yearly data between three and five years of age. The main finding is that phonemic accuracy steadily increases throughout the period studied. The accuracy of children with CI is lower than that of their NH age-matched peers, but this difference is not statistically significant in the earliest stages of lexical development. However, the accuracy of children with CI initially improves significantly less steeply than that of NH peers. Furthermore, the number of syllables in the target word and the target word's complexity influence children's accuracy: longer and more complex target words are produced less accurately. Up to age four, children with CI are significantly less accurate than NH children as word length and word complexity increase. This difference has disappeared by age five. Finally, hearing age is shown to influence the accuracy development of children with CI, whereas age of implant activation is not. This article informs the reader about phonemic accuracy development in children. The reader will be able to (a) discuss different metrics for measuring phonemic accuracy development, (b) discuss the phonemic accuracy of children with CI up to five years of age and compare it with that of NH children, (c) discuss the influence of a target word's complexity and syllable length on phonemic accuracy, and (d) discuss the influence of hearing experience and age of implantation on the phonemic accuracy of children with CI. Copyright © 2015 Elsevier Inc. All rights reserved.
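The Levenshtein-distance accuracy metric described above can be sketched minimally in Python (with uniform edit costs rather than the dynamic cost model used in the study; the function names and the normalization are our own illustration):

```python
def levenshtein(target, actual):
    """Minimum number of phoneme insertions, deletions and
    substitutions needed to turn `actual` into `target`."""
    m, n = len(target), len(actual)
    # dp[i][j] = distance between target[:i] and actual[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if target[i - 1] == actual[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[m][n]

def accuracy(target, actual):
    """Normalize distance to a 0-1 accuracy score per word token."""
    return 1 - levenshtein(target, actual) / max(len(target), len(actual))

# A child realizing a five-phoneme target with the first two phonemes missing:
print(accuracy(list("banan"), list("nan")))  # 0.6
```

Normalizing by target length lets tokens of different word lengths be compared on a common scale, which is the property that makes such a metric usable for tracking development over time.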
Voice Response Systems Technology.
ERIC Educational Resources Information Center
Gerald, Jeanette
1984-01-01
Examines two methods of generating synthetic speech in voice response systems, which allow computers to communicate in human terms (speech), using human interface devices (ears): phoneme and reconstructed voice systems. Considerations prior to implementation, current and potential applications, glossary, directory, and introduction to Input Output…
Magnuson, James S.
2015-01-01
Grossberg and Kazerounian [(2011). J. Acoust. Soc. Am. 130, 440–460] present a model of sequence representation for spoken word recognition, the cARTWORD model, which simulates essential aspects of phoneme restoration. Grossberg and Kazerounian also include simulations with the TRACE model presented by McClelland and Elman [(1986). Cognit. Psychol. 18, 1–86] that seem to indicate that TRACE cannot simulate phoneme restoration. Grossberg and Kazerounian also claim cARTWORD should be preferred to TRACE because of TRACE's implausible approach to sequence representation (reduplication of time-specific units) and use of non-modulatory feedback (i.e., without position-specific bottom-up support). This paper responds to Grossberg and Kazerounian first with TRACE simulations that account for phoneme restoration when appropriately constructed noise is used (and with minor changes to TRACE phoneme definitions), then reviews the case for reduplicated units and feedback as implemented in TRACE, as well as TRACE's broad and deep coverage of empirical data. Finally, it is argued that cARTWORD is not comparable to TRACE because cARTWORD cannot represent sequences with repeated elements, has only been implemented with small phoneme and lexical inventories, and has been applied to only one phenomenon (phoneme restoration). Without evidence that cARTWORD captures a similar range and detail of human spoken language processing as alternative models, it is premature to prefer cARTWORD to TRACE. PMID:25786959
Harciarek, Michał; Williamson, John B; Biedunkiewicz, Bogdan; Lichodziejewska-Niemierko, Monika; Dębska-Ślizień, Alicja; Rutkowski, Bolesław
2012-01-01
Although dialyzed patients often have cognitive problems, little is known about the nature of these deficits. We hypothesized that, in contrast to semantic fluency, which relies mainly on the temporal lobes, phonemic fluency, which depends preferentially on frontal-subcortical systems, would be particularly sensitive to the constellation of pathophysiological processes associated with end-stage renal disease and dialysis. Therefore, we longitudinally compared phonemic and semantic fluency performance between 49 dialyzed patients and 30 controls. Overall, patients performed below controls only on the phonemic fluency task. Furthermore, their performance on this task declined over time, whereas there was no change in semantic fluency. Moreover, this decline was related to the presence of hypertension and higher blood urea nitrogen. We suggest that these findings may be due to a combination of vascular and toxic effects that impact fronto-subcortical networks more than temporal lobe networks, but this speculation requires direct confirmation.
Influence of Eye Movements, Auditory Perception, and Phonemic Awareness in the Reading Process
ERIC Educational Resources Information Center
Megino-Elvira, Laura; Martín-Lobo, Pilar; Vergara-Moragues, Esperanza
2016-01-01
The authors' aim was to analyze the relationship of eye movements, auditory perception, and phonemic awareness with the reading process. The instruments used were the King-Devick Test (saccade eye movements), the PAF test (auditory perception), the PFC (phonemic awareness), the PROLEC-R (lexical process), the Canals reading speed test, and the…
The Use of Handheld Devices for Improved Phonemic Awareness in a Traditional Kindergarten Classroom
ERIC Educational Resources Information Center
Magagna-McBee, Cristy Ann
2010-01-01
Effective teaching strategies that improve the development of phonemic awareness are important to ensure students are fluent readers by third grade. The use of handheld devices to improve phonemic awareness with kindergarten students may be such a strategy, but no research exists that evaluates the use of these devices. This study explored the…
Phonemic Awareness: A Step by Step Approach for Success in Early Reading
ERIC Educational Resources Information Center
Perez, Idalia Rodriguez
2008-01-01
This guide will help teach phonemic awareness to Pre K-3 students. It presents phonemic awareness as a sophisticated branch of phonological awareness through interactive activities that allows the student to succeed in learning the sounds represented by the letters of the alphabet. The book is designed to provide easy-to-follow suggestions for:…
ERIC Educational Resources Information Center
Kelley, Michael F.; Roe, Mary; Blanchard, Jay; Atwill, Kim
2015-01-01
This investigation examined the influence of varying levels of Spanish receptive vocabulary and phonemic awareness ability on beginning English vocabulary, phonemic awareness, word reading fluency, and reading comprehension development across kindergarten through second grade. The 80 respondents were Spanish speaking children with no English…
ERIC Educational Resources Information Center
Werfel, Krystal L.
2017-01-01
The purpose of this study was to evaluate the effects of phonetic transcription training on the explicit phonemic awareness of adults. Fifty undergraduate students enrolled in a phonetic transcription course and 107 control undergraduate students completed a paper-and-pencil measure of explicit phonemic awareness on the first and last days of…
ERIC Educational Resources Information Center
Cardoso-Martins, Claudia; Michalick, Mirelle Franca; Pollo, Tatiana Cury
2002-01-01
Investigates sensitivity to rhyme and phoneme among readers and nonreaders with Down Syndrome (DS) and normally developing children. Evaluates a rhyme detection task and initial and middle phoneme detection tasks. Concludes the rhyme detection task was the easiest for nonreaders without DS and most difficult for readers with DS. (PM)
Tucker Signing as a Phonics Instruction Tool to Develop Phonemic Awareness in Children
ERIC Educational Resources Information Center
Valbuena, Amanda Carolina
2014-01-01
To develop reading acquisition in an effective way, it is necessary to take into account three goals during the process: automatic word recognition, or development of phonemic awareness, reading comprehension, and a desire for reading. This article focuses on promoting phonemic awareness in English as a second language through a program called…
Synaptic Depression in Deep Neural Networks for Speech Processing.
Zhang, Wenhao; Li, Hanyu; Yang, Minda; Mesgarani, Nima
2016-03-01
A characteristic property of biological neurons is their ability to dynamically change synaptic efficacy in response to variable input conditions. This mechanism, known as synaptic depression, contributes significantly to the formation of normalized representations of speech features. Synaptic depression also contributes to the robust performance of biological systems. In this paper, we describe how synaptic depression can be modeled and incorporated into deep neural network architectures to improve their generalization ability. We observed that when synaptic depression is added to the hidden layers of a neural network, it reduces the effect of changing background activity on the node activations. In addition, we show that when synaptic depression is included in a deep neural network trained for phoneme classification, the performance of the network improves under noisy conditions not included in the training phase. Our results suggest that more complete neuron models may further reduce the gap between biological and artificial computing performance, resulting in networks that generalize better to novel signal conditions.
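One simple way such depression dynamics could be attached to a layer's activations is sketched below (our illustration of the general mechanism, not the paper's formulation; the recovery constant `tau_r` and depletion fraction `u` are arbitrary):

```python
import numpy as np

def depressing_layer(x, tau_r=10.0, u=0.2):
    """Apply a simple synaptic-depression model to a sequence of
    activations x (time x units): each unit's efficacy s recovers
    toward 1 with time constant tau_r and is depleted in proportion
    to its current efficacy and its input."""
    s = np.ones(x.shape[1])            # synaptic efficacy per unit
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        out[t] = s * x[t]              # depressed (normalized) activation
        s += (1.0 - s) / tau_r - u * s * x[t]
        s = np.clip(s, 0.0, 1.0)
    return out

# A steady high "background" input is progressively attenuated,
# while its onset passes through at full strength:
steady = np.full((50, 1), 1.0)
resp = depressing_layer(steady)
print(resp[0, 0], resp[-1, 0])  # 1.0 at onset, settling near 1/3
```

This onset-emphasizing, background-suppressing behavior is what makes the mechanism plausible as a normalizer of node activations under changing background activity.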
Neurophysiological evidence of efference copies to inner speech
Jack, Bradley N; Pearson, Daniel; Griffiths, Oren; Luque, David; Harris, Anthony WF; Spencer, Kevin M; Le Pelley, Mike E
2017-01-01
Efference copies refer to internal duplicates of movement-producing neural signals. Their primary function is to predict, and often suppress, the sensory consequences of willed movements. Efference copies have been almost exclusively investigated in the context of overt movements. The current electrophysiological study employed a novel design to show that inner speech – the silent production of words in one’s mind – is also associated with an efference copy. Participants produced an inner phoneme at a precisely specified time, at which an audible phoneme was concurrently presented. The production of the inner phoneme resulted in electrophysiological suppression, but only if the content of the inner phoneme matched the content of the audible phoneme. These results demonstrate that inner speech – a purely mental action – is associated with an efference copy with detailed auditory properties. These findings suggest that inner speech may ultimately reflect a special type of overt speech. PMID:29199947
The speech perception skills of children with and without speech sound disorder.
Hearnshaw, Stephanie; Baker, Elise; Munro, Natalie
To investigate whether Australian-English speaking children with and without speech sound disorder (SSD) differ in their overall speech perception accuracy. Additionally, to investigate differences in the perception of specific phonemes and the association between speech perception and speech production skills. Twenty-five Australian-English speaking children aged 48-60 months participated in this study. The SSD group included 12 children and the typically developing (TD) group included 13 children. Children completed routine speech and language assessments in addition to an experimental Australian-English lexical and phonetic judgement task based on Rvachew's Speech Assessment and Interactive Learning System (SAILS) program (Rvachew, 2009). This task included eight words across four word-initial phonemes: /k, ɹ, ʃ, s/. Children with SSD showed significantly poorer perceptual accuracy on the lexical and phonetic judgement task compared with TD peers. The phonemes /ɹ/ and /s/ were most frequently perceived in error across both groups. Additionally, the phoneme /ɹ/ was most commonly produced in error. There was also a positive correlation between overall speech perception and speech production scores. Children with SSD perceived speech less accurately than their typically developing peers. The findings suggest that an Australian-English variation of a lexical and phonetic judgement task similar to the SAILS program is promising and worthy of a larger-scale study. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Noguchi, Masaki; Hudson Kam, Carla L.
2018-01-01
In human languages, different speech sounds can be contextual variants of a single phoneme, called allophones. Learning which sounds are allophones is an integral part of the acquisition of phonemes. Whether given sounds are separate phonemes or allophones in a listener's language affects speech perception. Listeners tend to be less sensitive to…
Tell Me about Fred's Fat Foot Again: Four Tips for Successful PA Lessons
ERIC Educational Resources Information Center
Murray, Bruce A.
2012-01-01
This teaching tip applies research on phoneme awareness (PA) to propose an instructional model for teaching PA. Research suggests children need to learn the identifying features of phonemes to recognize them in spoken words. In the model, teachers focus on one phoneme at a time; make it memorable to children through sound analogies supported by…
Phoneme Restoration Methods Reveal Prosodic Influences on Syntactic Parsing: Data from Bulgarian
ERIC Educational Resources Information Center
Stoyneshka-Raleva, Iglika
2013-01-01
This dissertation introduces and evaluates a new methodology for studying aspects of human language processing and the factors to which it is sensitive. It makes use of the phoneme restoration illusion (Warren, 1970). A small portion of a spoken sentence is replaced by a burst of noise. Listeners typically mentally restore the missing phoneme(s),…
The Effect of Phoneme Awareness Instruction on Students in Small Group and Whole Class Settings
ERIC Educational Resources Information Center
VanBoden, Angelique Fleurette
2011-01-01
Phoneme awareness instruction plays a crucial role in reading acquisition for young children. While this early literacy topic has been studied for over 30 years, and cited by the National Reading Panel Report (2000) as an important area for further research, no reports to date explore the influence of instructional group size on phoneme awareness…
ERIC Educational Resources Information Center
Jefferies, Elizabeth; Grogan, John; Mapelli, Cristina; Isella, Valeria
2012-01-01
Patients with semantic dementia (SD) show deficits in phoneme binding in immediate serial recall: when attempting to reproduce a sequence of words that they no longer fully understand, they show frequent migrations of phonemes between items (e.g., cap, frog recalled as "frap, cog"). This suggests that verbal short-term memory emerges directly from…
ERIC Educational Resources Information Center
Warmington, Meesha; Hulme, Charles
2012-01-01
This study examines the concurrent relationships between phoneme awareness, visual-verbal paired-associate learning, rapid automatized naming (RAN), and reading skills in 7- to 11-year-old children. Path analyses showed that visual-verbal paired-associate learning and RAN, but not phoneme awareness, were unique predictors of word recognition,…
The Role of Phoneme and Onset-Rime Awareness in Second Language Reading Acquisition
ERIC Educational Resources Information Center
Haigh, Corinne A.; Savage, Robert; Erdos, Caroline; Genesee, Fred
2011-01-01
This study investigated the link between phoneme and onset-rime awareness and reading outcomes in children learning to read in a second language (L2). Closely matched phoneme and onset-rime awareness tasks were administered in English and French in the spring of kindergarten to English-dominant children in French immersion programmes (n=98).…
Hogan, Tiffany P.
2010-01-01
In this study, we examined the influence of word-level phonological and lexical characteristics on early phoneme awareness. Typically-developing children, ages 61–78 months, completed a phoneme-based, odd-one-out task that included consonant-vowel-consonant word sets (e.g., “chair-chain-ship”) that varied orthogonally by a phonological characteristic, sound-contrast similarity (similar vs. dissimilar), and a lexical characteristic, neighborhood density (dense vs. sparse). In a subsample of the participants – those with the highest vocabularies – results were in line with a predicted interactive effect of phonological and lexical characteristics on phoneme awareness performance: word sets contrasting similar sounds were less likely to yield correct responses in words from sparse neighborhoods than words from dense neighborhoods. Word sets contrasting dissimilar sounds were most likely to yield correct responses regardless of the words’ neighborhood density. Based on these findings, theories of early phoneme awareness development should consider both word-level (e.g., phonological and lexical characteristics) and child-level (e.g., vocabulary knowledge) influences on phoneme awareness performance. Attention to these word-level item influences is predicted to result in more sensitive and specific measures of reading risk. PMID:20574064
Jürgens, Tim; Brand, Thomas
2009-11-01
This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with the performance of normal-hearing listeners. "Microscopic" is defined for this model in two ways. First, the speech recognition rate is predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary parts of human auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of a psychoacoustically and physiologically motivated preprocessing stage and a simple dynamic-time-warp speech recognizer. The model is evaluated by presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training of the recognizer and testing. The best model performance is yielded by distance measures that focus mainly on small perceptual distances and neglect outliers.
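A generic dynamic-time-warp recognizer of the kind mentioned can be sketched as a nearest-template classifier over feature sequences (this is our own simplification, not the paper's auditory-preprocessing implementation; names and toy data are illustrative):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warp distance between two feature sequences
    (frames x features), allowing nonlinear temporal alignment."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            D[i, j] = cost + min(D[i - 1, j],       # stretch sequence b
                                 D[i, j - 1],       # stretch sequence a
                                 D[i - 1, j - 1])   # advance both
    return D[n, m]

def recognize(test_seq, templates):
    """Label a test sequence with its nearest training template."""
    return min(templates, key=lambda label: dtw_distance(test_seq, templates[label]))

# Toy templates: rising vs falling one-dimensional "feature" tracks.
templates = {"rise": np.linspace(0, 1, 5)[:, None],
             "fall": np.linspace(1, 0, 5)[:, None]}
# A slower rendition of the rising track is still recognized as "rise":
print(recognize(np.linspace(0, 1, 9)[:, None], templates))
```

The warping step is what lets a template match a slower or faster rendition of the same phoneme, which is why DTW was a natural fit for a model whose training and test waveforms may differ in tempo.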
Discrimination of phoneme length differences in word and sentence contexts
NASA Astrophysics Data System (ADS)
Kawai, Norimune; Carrell, Thomas
2005-09-01
The ability of listeners to discriminate phoneme duration differences within word and sentence contexts was measured. This investigation was part of a series of studies examining the audibility and perceptual importance of speech modifications produced by stuttering intervention techniques. Just noticeable differences (jnd's) of phoneme length were measured via the parameter estimation by sequential testing (PEST) task, an adaptive tracking procedure. The target phonemes were digitally manipulated to vary from normal (130 ms) to prolonged (210 ms) duration in 2-ms increments. In the first condition the phonemes were embedded in words. In the second condition the phonemes were embedded within words, which were further embedded in sentences. A four-interval forced-choice (4IAX) task was employed on each trial, and the PEST procedure determined the duration at which each listener correctly detected a difference between the normal duration and the test duration 71% of the time. The results revealed that listeners were able to reliably discriminate approximately 15-ms differences in the word context and 10-ms differences in the sentence context. An independent t-test showed the difference in discriminability between word and sentence contexts to be significant. These results indicate that duration differences were better perceived within a sentence context.
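PEST itself uses adaptive step-size rules; a simpler transformed staircase conveys the same idea of homing in on a jnd by tracking reversals (our illustrative sketch, not the procedure used in the study):

```python
def staircase_jnd(detects, start=80.0, step=2.0, n_reversals=8):
    """2-down/1-up staircase: the tested duration difference (in ms)
    shrinks after two consecutive detections and grows after a miss;
    the jnd estimate is the mean difference at the reversal points."""
    diff, streak, last_dir, reversals = start, 0, None, []
    while len(reversals) < n_reversals:
        if detects(diff):
            streak += 1
            if streak < 2:
                continue
            streak, direction = 0, "down"
            diff = max(diff - step, step)
        else:
            streak, direction = 0, "up"
            diff += step
        if last_dir is not None and direction != last_dir:
            reversals.append(diff)      # a reversal of direction
        last_dir = direction
    return sum(reversals) / len(reversals)

# A hypothetical listener who reliably detects differences of 15 ms or more:
jnd = staircase_jnd(lambda d: d >= 15.0)
print(jnd)  # the staircase oscillates between 14 and 16, so the estimate is 15.0
```

A 2-down/1-up rule converges near the 70.7% correct point, close to the 71% criterion the PEST procedure tracked in this study.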
Phonological Feature Repetition Suppression in the Left Inferior Frontal Gyrus.
Okada, Kayoko; Matchin, William; Hickok, Gregory
2018-06-07
Models of speech production posit a role for the motor system, predominantly the posterior inferior frontal gyrus, in encoding complex phonological representations for speech production, at the phonemic, syllable, and word levels [Roelofs, A. A dorsal-pathway account of aphasic language production: The WEAVER++/ARC model. Cortex, 59(Suppl. C), 33-48, 2014; Hickok, G. Computational neuroanatomy of speech production. Nature Reviews Neuroscience, 13, 135-145, 2012; Guenther, F. H. Cortical interactions underlying the production of speech sounds. Journal of Communication Disorders, 39, 350-365, 2006]. However, phonological theory posits subphonemic units of representation, namely phonological features [Chomsky, N., & Halle, M. The sound pattern of English, 1968; Jakobson, R., Fant, G., & Halle, M. Preliminaries to speech analysis. The distinctive features and their correlates. Cambridge, MA: MIT Press, 1951], that specify independent articulatory parameters of speech sounds, such as place and manner of articulation. Therefore, motor brain systems may also incorporate phonological features into speech production planning units. Here, we add support for such a role with an fMRI experiment of word sequence production using a phonemic similarity manipulation. We adapted and modified the experimental paradigm of Oppenheim and Dell [Oppenheim, G. M., & Dell, G. S. Inner speech slips exhibit lexical bias, but not the phonemic similarity effect. Cognition, 106, 528-537, 2008; Oppenheim, G. M., & Dell, G. S. Motor movement matters: The flexible abstractness of inner speech. Memory & Cognition, 38, 1147-1160, 2010]. Participants silently articulated words cued by sequential visual presentation that varied in degree of phonological feature overlap in consonant onset position: high overlap (two shared phonological features; e.g., /r/ and /l/) or low overlap (one shared phonological feature, e.g., /r/ and /b/). 
We found a significant repetition suppression effect in the left posterior inferior frontal gyrus, with increased activation for phonologically dissimilar words compared with similar words. These results suggest that phonemes, particularly phonological features, are part of the planning units of the motor speech system.
ERIC Educational Resources Information Center
International Reading Association, Newark, DE.
This position paper considers the complex relation between phonemic awareness and reading. The paper seeks to define phonemic awareness (although there is no single definition), stating that it is typically described as an insight about oral language and in particular about the segmentation of sounds that are used in speech communication. It also…
ERIC Educational Resources Information Center
Kyle, Fiona; Kujala, Janne; Richardson, Ulla; Lyytinen, Heikki; Goswami, Usha
2013-01-01
We report an empirical comparison of the effectiveness of two theoretically motivated computer-assisted reading interventions (CARI) based on the Finnish GraphoGame CARI: English GraphoGame Rime (GG Rime) and English GraphoGame Phoneme (GG Phoneme). Participants were 6-7-year-old students who had been identified by their teachers as being…
Analysis of Phonemes, Graphemes, Onset-Rimes, and Words with Braille-Learning Children
ERIC Educational Resources Information Center
Crawford, Shauna; Elliott, Robert T.
2007-01-01
Six primary school-aged braille students were taught to name 4 to 10 braille letters as phonemes and another 4 to 10 braille letters as graphemes (Study 1). They were then taught to name 10 braille words as onset-rimes and another 10 braille words as whole words (Study 2). Instruction in phonemes and onset rimes resulted in fewer trials and a…
ERIC Educational Resources Information Center
Isakson, Lisa; Marchand-Martella, Nancy; Martella, Ronald C.
2011-01-01
This study assessed the effects of "McGraw Hill Phonemic Awareness" on the phonemic awareness skills of 5 preschool children with developmental delays. The children received 60 of the 110 lessons included in this program over 5 months. They were pre- and posttested using the kindergarten level Initial Sound Fluency and Phoneme…
ERIC Educational Resources Information Center
Ukrainetz, Teresa A.; Ross, Catherine L.; Harm, Heide M.
2009-01-01
Purpose: This study examined 2 schedules of treatment for phonemic awareness. Method: Forty-one 5- to 6-year-old kindergartners, including 22 English learners, with low letter-name and first-sound knowledge received 11 hr of phonemic awareness treatment: concentrated (CP, 3x/wk to December), dispersed (DP, 1x/wk to March), and dispersed vocabulary…
How musical expertise shapes speech perception: evidence from auditory classification images.
Varnet, Léo; Wang, Tianyun; Peter, Chloe; Meunier, Fanny; Hoen, Michel
2015-09-24
It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians' higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues: the first-formant onset and the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.
Onomatopoeias: a new perspective around space, image schemas and phoneme clusters.
Catricalà, Maria; Guidi, Annarita
2015-09-01
Onomatopoeias (…)
How do associative and phonemic overlap interact to boost illusory recollection?
Hutchison, Keith A; Meade, Michelle L; Williams, Nikolas S; Manley, Krista D; McNabb, Jaimie C
2018-05-01
This project investigated the underlying mechanisms that boost false remember responses when participants receive study words that are both semantically and phonologically similar to a critical lure. Participants completed a memory task in which they were presented with a list of words all associated with a critical lure. Included within the list of semantic associates was a target that was either semantically associated (e.g., yawn) to the critical lure (e.g., sleep) or shared the initial (e.g., slam) or final (e.g., beep) phoneme(s) with the critical lure. After hearing the list, participants recalled each list item and indicated whether they just knew it was on the list or if they instead recollected specific contextual details of that item's presentation. We found that inserting an initial phonemic overlap target boosted experiences of recollection, but only when semantically related associates were presented beforehand. The results are consistent with models of spoken word recognition and show that established semantic context plus initial phonemic overlap play important roles in boosting false recollection.
Do students with and without lexical retrieval weaknesses respond differently to instruction?
Allor, J H; Fuchs, D; Mathes, P G
2001-01-01
Deficits in phonological processing are theorized to be responsible for at least some reading disabilities. A considerable amount of research demonstrates that many students can be taught one of these phonological processes: phonemic awareness. However, not all students have responded favorably to this instruction. Research has suggested that these nonresponders may be unable to retrieve phonological codes quickly from long-term memory. The purpose of this study was to examine whether such a deficiency, which we refer to as lexical retrieval weakness, blunts the effectiveness of combined phonemic awareness and decoding training. To this end, we compared the effectiveness of phonemic awareness and decoding training for students with and without severe lexical retrieval weaknesses. All students in both groups demonstrated poor phonemic awareness. The results suggested that students with relatively strong lexical retrieval skill responded more favorably to beginning reading instruction than did students with weak lexical retrieval skill. In other words, lexical retrieval weakness may influence reading development independently of the effects of phonemic awareness. Implications for instruction are discussed.
Martinussen, Rhonda; Grimbos, Teresa; Ferrari, Julia L. S.
2014-01-01
This study investigated the contribution of naming speed and phonemic awareness to teacher inattention ratings and word-level reading proficiency in 79 first grade children (43 boys, 36 girls). Participants completed the cognitive and reading measures midway through the school year. Teacher ratings of inattention were obtained for each child at the same time point. A path analysis revealed that behavioral inattention had a significant direct effect on word reading proficiency as well as significant indirect effects through phonemic awareness and naming speed. For pseudoword reading proficiency, the effects of inattention were indirect only through phonemic awareness and naming speed. A regression analysis indicated that naming speed, but not phonemic awareness, was significantly associated with teacher inattention ratings controlling for word reading proficiency. The findings highlight the need to better understand the role of behavioral inattention in the development of emergent literacy skills and reading proficiency. PMID:25178628
Adapted cuing technique: facilitating sequential phoneme production.
Klick, S L
1994-09-01
ACT is a visual cuing technique designed to facilitate dyspraxic speech by highlighting the sequential production of phonemes. In using ACT, cues are presented in such a way as to suggest sequential, coarticulatory movement in an overall pattern of motion. While using ACT, the facilitator's hand moves forward and back along the side of her (or his) own face. Finger movements signal specific speech sounds in formations loosely based on the manual alphabet for the hearing impaired. The best movements suggest the flowing, interactive nature of coarticulated phonemes. The synergistic nature of speech is suggested by coordinated hand motions which tighten and relax, move quickly or slowly, reflecting the motions of the vocal tract at various points during production of phonemic sequences. General principles involved in using ACT include a primary focus on speech-in-motion, the monitoring and fading of cues, and the presentation of stimuli based on motor-task analysis of phonemic sequences. Phonemic sequences are cued along three dimensions: place, manner, and vowel-related mandibular motion. Cuing vowels is a central feature of ACT. Two parameters of vowel production, focal point of resonance and mandibular closure, are cued. The facilitator's hand motions reflect the changing shape of the vocal tract and the trajectory of the tongue that result from the coarticulation of vowels and consonants. Rigid presentation of the phonemes is secondary to the facilitator's primary focus on presenting the overall sequential movement. The facilitator's goal is to self-tailor ACT in response to the changing needs and abilities of the client.(ABSTRACT TRUNCATED AT 250 WORDS)
Is the Sensorimotor Cortex Relevant for Speech Perception and Understanding? An Integrative Review
Schomers, Malte R.; Pulvermüller, Friedemann
2016-01-01
In the neuroscience of language, phonemes are frequently described as multimodal units whose neuronal representations are distributed across perisylvian cortical regions, including auditory and sensorimotor areas. A different position views phonemes primarily as acoustic entities with posterior temporal localization, which are functionally independent from frontoparietal articulatory programs. To address this current controversy, we here discuss experimental results from functional magnetic resonance imaging (fMRI) as well as transcranial magnetic stimulation (TMS) studies. At first glance, a mixed picture emerges, with earlier research documenting neurofunctional distinctions between phonemes in both temporal and frontoparietal sensorimotor systems, but some recent work seemingly failing to replicate the latter. Detailed analysis of methodological differences between studies reveals that the way experiments are set up explains whether sensorimotor cortex maps phonological information during speech perception or not. In particular, acoustic noise during the experiment and ‘motor noise’ caused by button press tasks work against the frontoparietal manifestation of phonemes. We highlight recent studies using sparse imaging and passive speech perception tasks along with multivariate pattern analysis (MVPA) and especially representational similarity analysis (RSA), which succeeded in separating acoustic-phonological from general-acoustic processes and in mapping specific phonological information on temporal and frontoparietal regions. The question about a causal role of sensorimotor cortex in speech perception and understanding is addressed by reviewing recent TMS studies. We conclude that frontoparietal cortices, including ventral motor and somatosensory areas, reflect phonological information during speech perception and exert a causal influence on language understanding. PMID:27708566
Automatic Recognition of Phonemes Using a Syntactic Processor for Error Correction.
1980-12-01
AFIT/GE/EE/80D-45. Automatic Recognition of Phonemes Using a Syntactic Processor for Error Correction. Thesis by Robert B. Taylor, 2Lt, USAF. Approved for public release; distribution unlimited. Contents include hypothesis testing, the Bayes decision rule for minimum error, the Bayes decision rule for minimum risk, and the minimax test.
Stochastic Model for Phonemes Uncovers an Author-Dependency of Their Usage.
Deng, Weibing; Allahverdyan, Armen E
2016-01-01
We study rank-frequency relations for phonemes, the minimal units that still relate to linguistic meaning. We show that these relations can be described by the Dirichlet distribution, a direct analogue of the ideal-gas model in statistical mechanics. This description allows us to demonstrate that the rank-frequency relations for the phonemes of a text do depend on its author. The author-dependency effect is not caused by the author's vocabulary (common words used in different texts), and is confirmed by several alternative means. This suggests that it can be directly related to phonemes. These features contrast with rank-frequency relations for words, which are both author- and text-independent and are governed by Zipf's law.
Vonberg, Isabelle; Ehlen, Felicitas; Fromm, Ortwin; Klostermann, Fabian
2014-01-01
For word production, we may consciously pursue semantic or phonological search strategies, but it is uncertain whether we can retrieve the different aspects of lexical information independently from each other. We therefore studied the spread of semantic information into words produced under exclusively phonemic task demands. 42 subjects participated in a letter verbal fluency task, demanding the production of as many s-words as possible in two minutes. Based on curve fittings for the time courses of word production, output spurts (temporal clusters), considered to reflect rapid lexical retrieval based on automatic activation spread, were identified. Semantic and phonemic word relatedness within versus between these clusters was assessed by respective scores (0 meaning no relation, 4 maximum relation). Subjects produced 27.5 (±9.4) words belonging to 6.7 (±2.4) clusters. Words were more related, both phonemically and semantically, within clusters than between clusters (phon: 0.33±0.22 vs. 0.19±0.17, p<.01; sem: 0.65±0.29 vs. 0.37±0.29, p<.01). Whereas the extent of phonemic relatedness correlated with high task performance, the contrary was the case for the extent of semantic relatedness. The results indicate that semantic information spread occurs even if the consciously pursued word search strategy is purely phonological. This, together with the negative correlation between semantic relatedness and verbal output, fits the idea of a semantic default mode of lexical search, acting against rapid task performance in the given scenario of phonemic verbal fluency. The simultaneity of enhanced semantic and phonemic word relatedness within the same temporal cluster boundaries suggests an interaction between content- and sound-related information whenever a new semantic field has been opened.
Dittinger, Eva; D'Imperio, Mariapaola; Besson, Mireille
2018-05-12
Based on growing evidence suggesting that professional music training facilitates foreign language perception and learning, we examined the impact of musical expertise on the categorisation of syllables including phonemes that did (/p/, /b/) or did not (/pʰ/) belong to the French repertoire, by analysing both behaviour (error rates and reaction times) and event-related brain potentials (N200 and P300 components). Professional musicians and nonmusicians categorised syllables either as /ba/ or /pa/ (voicing task), or as /pa/ or /pʰa/, with /pʰ/ being a nonnative phoneme for French speakers (aspiration task). In line with our hypotheses, results showed that musicians outperformed nonmusicians in the aspiration task but not in the voicing task. Moreover, the difference between the native (/p/) and the nonnative phoneme (/pʰ/), as reflected in N200 and P300 amplitudes, was larger in musicians than in nonmusicians in the aspiration task but not in the voicing task. These results show that behaviour and brain activity associated with nonnative phoneme perception are influenced by musical expertise and that these effects are task-dependent. The implications of these findings for current models of phoneme perception and for understanding the qualitative and quantitative differences found on the N200 and P300 components are discussed. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification
Samal, Ashok; Rong, Panying; Green, Jordan R.
2016-01-01
Purpose: The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method: The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of words, and a set of short phrases during the recording. We used a machine-learning classifier (support-vector machine) to classify the speech stimuli on the basis of articulatory movements. We then compared classification accuracies of the flesh-point combinations to determine an optimal set of sensors. Results: When data from the 4 sensors (T1: the vicinity between the tongue tip and tongue blade; T4: the tongue-body back; UL: the upper lip; and LL: the lower lip) were combined, phoneme and word classifications were most accurate and were comparable with the full set, which adds the tongue-body sensors T2 and T3. Conclusion: We identified a 4-sensor set (T1, T4, UL, LL) that yielded a classification accuracy (91%–95%) equivalent to that using all 6 sensors. These findings provide an empirical basis for selecting sensors and their locations for scientific and emerging clinical applications that incorporate articulatory movements. PMID:26564030
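The classification setup described above (labelled articulatory feature vectors mapped to phonemes) can be sketched in a few lines. The study used a support-vector machine; to stay self-contained, this sketch substitutes a simple nearest-centroid classifier, and the two-dimensional feature vectors and phoneme labels below are invented for illustration only.

```python
def train_centroids(samples):
    """samples: {phoneme_label: [feature_vector, ...]} -> per-label mean vectors."""
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        # Average each feature dimension across that label's training vectors.
        centroids[label] = [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]
    return centroids

def classify(centroids, vec):
    """Return the label whose centroid is closest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(centroids, key=lambda label: dist(centroids[label]))
```

In the study, the feature vectors would instead be sensor-position trajectories from the articulograph, and dropping sensors corresponds to removing dimensions from the vectors before training.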
Emotion to emotion speech conversion in phoneme level
NASA Astrophysics Data System (ADS)
Bulut, Murtaza; Yildirim, Serdar; Busso, Carlos; Lee, Chul Min; Kazemzadeh, Ebrahim; Lee, Sungbok; Narayanan, Shrikanth
2004-10-01
Having an ability to synthesize emotional speech can make human-machine interaction more natural in spoken dialogue management. This study investigates the effectiveness of prosodic and spectral modification at the phoneme level on emotion-to-emotion speech conversion. The prosody modification is performed with the TD-PSOLA algorithm (Moulines and Charpentier, 1990). We also transform the spectral envelopes of source phonemes to match those of target phonemes using an LPC-based spectral transformation approach (Kain, 2001). Prosodic speech parameters (F0, duration, and energy) for target phonemes are estimated from the statistics obtained from the analysis of an emotional speech database of happy, angry, sad, and neutral utterances collected from actors. Listening experiments conducted with native American English speakers indicate that modification of prosody only or spectrum only is not sufficient to elicit the targeted emotions. Simultaneous modification of both prosody and spectrum results in higher acceptance rates of the target emotions, suggesting that modeling the spectral patterns that reflect the underlying articulation is as important as modeling speech prosody for synthesizing emotional speech of good quality. We are investigating suprasegmental-level modifications for further improvement in speech quality and expressiveness.
Zipf’s Law and the Frequency of Kazak Phonemes in Word Formation
NASA Astrophysics Data System (ADS)
Xin, Ruiqing; Li, Yonghong; Yu, Hongzhi
2018-03-01
Zipf’s law underlies the principle of least effort and applies widely across natural domains. The frequency of each phoneme across all Kazak words was counted to test whether Zipf’s law holds in Kazak. Owing to the limited sample size, some deviation is unavoidable, but the overall results indicate that the frequency and reciprocal rank of each phoneme in Kazak word formation follow Zipf’s distribution.
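The frequency-versus-reciprocal-rank relation tested above is easy to check computationally: under Zipf's law, rank times frequency is roughly constant. A minimal sketch (the phoneme stream in the test is a made-up example, not Kazak data):

```python
from collections import Counter

def rank_frequency(phoneme_stream):
    # Count each phoneme, then order the counts by descending frequency,
    # so index 0 is rank 1, index 1 is rank 2, and so on.
    counts = Counter(phoneme_stream)
    return [freq for _, freq in counts.most_common()]

def zipf_products(freqs):
    # Under Zipf's law, rank * frequency is roughly constant across ranks.
    return [rank * freq for rank, freq in enumerate(freqs, start=1)]
```

For a perfectly Zipfian stream the products are flat; real phoneme counts show deviations like those the study attributes to sample-size limits.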
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aimthikul, Y.
This thesis reviews the essential aspects of speech synthesis and distinguishes between the two prevailing techniques: compressed digital speech and phonemic synthesis. It then presents the hardware details of the five speech modules evaluated. FORTRAN programs were written to facilitate message creation and retrieval with four of the modules driven by a PDP-11 minicomputer. The fifth module was driven directly by a computer terminal. The compressed digital speech modules (T.I. 990/306, T.S.I. Series 3D and N.S. Digitalker) each contain a limited vocabulary produced by the manufacturers while both the phonemic synthesizers made by Votrax permit an almost unlimited set of sounds and words. A text-to-phoneme rules program was adapted for the PDP-11 (running under the RSX-11M operating system) to drive the Votrax Speech Pac module. However, the Votrax Type'N Talk unit has its own built-in translator. Comparison of these modules revealed that the compressed digital speech modules were superior in pronouncing words on an individual basis but lacked the inflection capability that permitted the phonemic synthesizers to generate more coherent phrases. These findings were necessarily highly subjective and dependent on the specific words and phrases studied. In addition, the rapid introduction of new modules by manufacturers will necessitate new comparisons. However, the results of this research verified that all of the modules studied do possess reasonable quality of speech that is suitable for man-machine applications. Furthermore, the development tools are now in place to permit the addition of computer speech output in such applications.
Wilson Reading System[R]. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2007
2007-01-01
Wilson Reading System[R] is a supplemental reading and writing curriculum designed to promote reading accuracy (decoding) and spelling (encoding) skills for students with word-level deficits. The program is designed to teach phonemic awareness, alphabetic principles (sound-symbol relationship), word study, spelling, sight word instruction,…
Dating the Origin of Language Using Phonemic Diversity
2012-01-01
Language is a key adaptation of our species, yet we do not know when it evolved. Here, we use data on language phonemic diversity to estimate a minimum date for the origin of language. We take advantage of the fact that phonemic diversity evolves slowly and use it as a clock to calculate how long the oldest African languages would have to have been around in order to accumulate the number of phonemes they possess today. We use a natural experiment, the colonization of Southeast Asia and Andaman Islands, to estimate the rate at which phonemic diversity increases through time. Using this rate, we estimate that present-day languages date back to the Middle Stone Age in Africa. Our analysis is consistent with the archaeological evidence suggesting that complex human behavior evolved during the Middle Stone Age in Africa, and does not support the view that language is a recent adaptation that has sparked the dispersal of humans out of Africa. While some of our assumptions require testing and our results rely at present on a single case-study, our analysis constitutes the first estimate of when language evolved that is directly based on linguistic data. PMID:22558135
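The clock logic in the abstract reduces to a simple back-calculation: given a rate at which phonemic diversity accumulates, divide the diversity accumulated since a founder population by that rate. A sketch of the arithmetic only, with entirely hypothetical numbers (the paper's actual rate estimates and phoneme counts are not reproduced here):

```python
def minimum_age_years(current_phonemes, founder_phonemes, gain_per_millennium):
    # Phonemes accumulated since the founder inventory, divided by the
    # accumulation rate (phonemes per 1,000 years), gives a minimum age.
    return (current_phonemes - founder_phonemes) / gain_per_millennium * 1000.0

# Hypothetical: 140 phonemes today, 20 in a founder inventory,
# gaining 1 phoneme per millennium -> a 120,000-year minimum age.
```

The paper's contribution is estimating the rate term from a natural experiment (the colonization of Southeast Asia and the Andaman Islands); the division itself is this simple.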
The Downside of Greater Lexical Influences: Selectively Poorer Speech Perception in Noise
Xie, Zilong; Tessmer, Rachel; Chandrasekaran, Bharath
2017-01-01
Purpose: Although lexical information influences phoneme perception, the extent to which reliance on lexical information enhances speech processing in challenging listening environments is unclear. We examined the extent to which individual differences in lexical influences on phonemic processing impact speech processing in maskers containing varying degrees of linguistic information (2-talker babble or pink noise). Method: Twenty-nine monolingual English speakers were instructed to ignore the lexical status of spoken syllables (e.g., gift vs. kift) and to only categorize the initial phonemes (/g/ vs. /k/). The same participants then performed speech recognition tasks in the presence of 2-talker babble or pink noise in audio-only and audiovisual conditions. Results: Individuals who demonstrated greater lexical influences on phonemic processing experienced greater speech processing difficulties in 2-talker babble than in pink noise. These selective difficulties were present across audio-only and audiovisual conditions. Conclusion: Individuals with greater reliance on lexical processes during speech perception exhibit impaired speech recognition in listening conditions in which competing talkers introduce audible linguistic interferences. Future studies should examine the locus of lexical influences/interferences on phonemic processing and speech-in-speech processing. PMID:28586824
Christiansen, Morten H.; Onnis, Luca; Hockema, Stephen A.
2009-01-01
When learning language young children are faced with many seemingly formidable challenges, including discovering words embedded in a continuous stream of sounds and determining what role these words play in syntactic constructions. We suggest that knowledge of phoneme distributions may play a crucial part in helping children segment words and determine their lexical category, and propose an integrated model of how children might go from unsegmented speech to lexical categories. We corroborated this theoretical model using a two-stage computational analysis of a large corpus of English child-directed speech. First, we used transition probabilities between phonemes to find words in unsegmented speech. Second, we used distributional information about word edges—the beginning and ending phonemes of words—to predict whether the segmented words from the first stage were nouns, verbs, or something else. The results indicate that discovering lexical units and their associated syntactic category in child-directed speech is possible by attending to the statistics of single phoneme transitions and word-initial and final phonemes. Thus, we suggest that a core computational principle in language acquisition is that the same source of information is used to learn about different aspects of linguistic structure. PMID:19371361
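The first stage described above, using low transition probability between phonemes as a word-boundary signal, can be sketched in a few lines. A minimal sketch with letters standing in for phonemes and an invented toy lexicon; this is not the authors' model or their corpus:

```python
from collections import defaultdict

def transition_probs(utterances):
    """Estimate P(next phoneme | current phoneme) from unsegmented speech."""
    counts = defaultdict(lambda: defaultdict(int))
    # Concatenate utterances with no boundaries, as a child would hear them.
    stream = [p for utt in utterances for p in utt]
    for a, b in zip(stream, stream[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def segment(stream, probs, threshold):
    """Insert a word boundary wherever transition probability dips below threshold."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if probs.get(a, {}).get(b, 0.0) < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words
```

Within-word transitions (b-a, d-u, k-i in the toy lexicon) occur reliably and score high, while transitions across word boundaries are spread over several successors and score low, so thresholding recovers the word boundaries. The second stage of the model would then inspect the initial and final phonemes of the recovered words to predict lexical category.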
ERIC Educational Resources Information Center
Lafontaine, Helene; Chetail, Fabienne; Colin, Cecile; Kolinsky, Regine; Pattamadilok, Chotiga
2012-01-01
Acquiring literacy establishes connections between the spoken and written system and modifies the functioning of the spoken system. As most evidence comes from on-line speech recognition tasks, it is still a matter of debate when and how these two systems interact in metaphonological tasks. The present event-related potentials study investigated…
Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies
2015-12-01
Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.
Fishman, Keera N; Ashbaugh, Andrea R; Lanctôt, Krista L; Cayley, Megan L; Herrmann, Nathan; Murray, Brian J; Sicard, Michelle; Lien, Karen; Sahlas, Demetrios J; Swartz, Richard H
2018-06-01
This study examined the relationship between apathy and cognition in patients with cerebrovascular disease. Apathy may result from damage to frontal subcortical circuits causing dysexecutive syndromes, but apathy is also related to depression. We assessed the ability of apathy to predict phonemic fluency and semantic fluency performance after controlling for depressive symptoms in 282 individuals with stroke and/or transient ischemic attack. Participants (N = 282) completed the Phonemic Fluency Test, Semantic Fluency Test, Center for Epidemiologic Studies Depression Scale, and Apathy Evaluation Scale. A cross-sectional correlational design was utilized. Using hierarchical linear regressions, apathy scores significantly predicted semantic fluency performance (β = -.159, p = .020), but not phonemic fluency performance (β = -.112, p = .129) after scaling scores by age and years of education and controlling for depressive symptoms. Depressive symptoms entered into the first step of both hierarchical linear regressions did not predict semantic fluency (β = -.035, p = .554) or phonemic fluency (β = -.081, p = .173). Apathy and depressive symptoms were moderately correlated, r(280) = .58, p < .001. The results of this study are consistent with research supporting a differentiation between phonemic and semantic fluency tasks, whereby phonemic fluency tasks primarily involve frontal regions, and semantic fluency tasks involve recruitment of more extended networks. The results also highlight a distinction between apathy and depressive symptoms and suggest that apathy may be a more reliable predictor of cognitive deficits than measures of mood in individuals with cerebrovascular disease. Apathy may also be more related to cognition due to overlapping motivational and cognitive frontal subcortical circuitry. Future research should explore whether treatments for apathy could be a novel target for improving cognitive outcomes after stroke.
Smirni, Daniela; Turriziani, Patrizia; Mangano, Giuseppa Renata; Bracco, Martina; Oliveri, Massimiliano; Cipolotti, Lisa
2017-07-28
A growing body of evidence has suggested that non-invasive brain stimulation techniques, such as transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS), can improve the performance of aphasic patients in language tasks. For example, application of inhibitory rTMS or tDCS over the right frontal lobe of dysphasic patients resulted in improved naming abilities. Several studies have also reported that in healthy controls (HC), tDCS application over the left prefrontal cortex (PFC) improves performance in naming and semantic fluency tasks. The aim of this study was to investigate in HC, for the first time, the effects of inhibitory repetitive TMS (rTMS) over the left and right lateral frontal cortex (BA 47) on two phonemic fluency tasks (FAS or FPL). 44 right-handed HCs were administered rTMS or sham over the left or right lateral frontal cortex in two separate testing sessions, with a 24-h interval, followed by the two phonemic fluency tasks. To account for possible practice effects, an additional 22 HCs were tested on only the phonemic fluency task across two sessions with no stimulation. We found that rTMS inhibition over the left lateral frontal cortex significantly worsened phonemic fluency performance when compared to sham. In contrast, rTMS inhibition over the right lateral frontal cortex significantly improved phonemic fluency performance when compared to sham. These results were not accounted for by practice effects. We speculated that rTMS over the right lateral frontal cortex may induce plastic neural changes in the left lateral frontal cortex by suppressing interhemispheric inhibitory interactions. This resulted in increased excitability (disinhibition) of the contralateral, unstimulated left lateral frontal cortex, consequently enhancing phonemic fluency performance.
Conversely, application of rTMS over the left lateral frontal cortex may induce a temporary, virtual lesion, with effects similar to those reported in left frontal patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hayes-Harb, Rachel; Cheng, Hui-Wen
2016-01-01
The role of written input in second language (L2) phonological and lexical acquisition has received increased attention in recent years. Here we investigated the influence of two factors that may moderate the influence of orthography on L2 word form learning: (i) whether the writing system is shared by the native language and the L2, and (ii) if the writing system is shared, whether the relevant grapheme-phoneme correspondences are also shared. The acquisition of Mandarin via the Pinyin and Zhuyin writing systems provides an ecologically valid opportunity to explore these factors. We first asked whether there is a difference in native English speakers' ability to learn Pinyin and Zhuyin grapheme-phoneme correspondences. In Experiment 1, native English speakers assigned to either Pinyin or Zhuyin groups were exposed to Mandarin words belonging to one of two conditions: in the “congruent” condition, the Pinyin forms are possible English spellings for the auditory words (e.g., <nai> for [nai]); in the “incongruent” condition, the Pinyin forms involve a familiar grapheme representing a novel phoneme (e.g., <xiu> for [ɕiou]). At test, participants were asked to indicate whether auditory and written forms matched; in the crucial trials, the written forms from training (e.g., <xiu>) were paired with possible English pronunciations of the Pinyin written forms (e.g., [ziou]). Experiment 2 was identical to Experiment 1 except that participants additionally saw pictures depicting word meanings during the exposure phase, and at test were asked to match auditory forms with the pictures. In both experiments the Zhuyin group outperformed the Pinyin group due to the Pinyin group's difficulty with “incongruent” items.
A third experiment confirmed that the groups did not differ in their ability to perceptually distinguish the relevant Mandarin consonants (e.g., [ɕ]) from the foils (e.g., [z]), suggesting that the findings of Experiments 1 and 2 can be attributed to the effects of orthographic input. We thus conclude that despite the familiarity of Pinyin graphemes to native English speakers, the need to suppress native language grapheme-phoneme correspondences in favor of new ones can lead to less target-like knowledge of newly learned words' forms than does learning Zhuyin's entirely novel graphemes. PMID:27375506
A Multimedia English Learning System Using HMMs to Improve Phonemic Awareness for English Learning
ERIC Educational Resources Information Center
Lai, Yen-Shou; Tsai, Hung-Hsu; Yu, Pao-Ta
2009-01-01
This paper proposes a multimedia English learning (MEL) system, based on Hidden Markov Models (HMMs) and mastery theory strategy, for teaching students with the aim of enhancing their English phonetic awareness and pronunciation. It can analyze phonetic structures, identify and capture pronunciation errors to provide students with targeted advice…
Phoneme Similarity and Confusability
ERIC Educational Resources Information Center
Bailey, T.M.; Hahn, U.
2005-01-01
Similarity between component speech sounds influences language processing in numerous ways. Explanation and detailed prediction of linguistic performance consequently requires an understanding of these basic similarities. The research reported in this paper contrasts two broad classes of approach to the issue of phoneme similarity-theoretically…
Ultrasound visual feedback in articulation therapy following partial glossectomy.
Blyth, Katrina M; Mccabe, Patricia; Madill, Catherine; Ballard, Kirrie J
2016-01-01
Disordered speech is common following treatment for tongue cancer; however, there is insufficient high-quality evidence to guide clinical decision making about treatment. This study investigated the use of ultrasound tongue imaging as a visual feedback tool to guide tongue placement during articulation therapy with two participants following partial glossectomy. A Phase I multiple baseline design across behaviors was used to investigate the therapeutic effect of ultrasound visual feedback during speech rehabilitation. Percent consonants correct and speech intelligibility at sentence level were used to measure acquisition, generalization and maintenance of speech skills for treated and untreated related phonemes, while unrelated phonemes were tested to demonstrate experimental control. Swallowing and oromotor measures were also taken to monitor change. Sentence intelligibility was not a sensitive measure of speech change, but both participants demonstrated significant change in percent consonants correct for treated phonemes. One participant also demonstrated generalization to non-treated phonemes. Control phonemes along with swallow and oromotor measures remained stable throughout the study. This study establishes the therapeutic benefit of ultrasound visual feedback in speech rehabilitation following partial glossectomy. Readers will be able to explain why and how tongue cancer surgery impacts articulation precision. Readers will also be able to explain the acquisition, generalization and maintenance effects in the study. Copyright © 2016. Published by Elsevier Inc.
Schumann, Annette; Serman, Maja; Gefeller, Olaf; Hoppe, Ulrich
2015-03-01
Specific computer-based auditory training may be a useful complement to the rehabilitation process for cochlear implant (CI) listeners seeking to achieve sufficient speech intelligibility. This study evaluated the effectiveness of a computerized, phoneme-discrimination training programme. The study employed a pretest-post-test design; participants were randomly assigned to the training or control group. Over a period of three weeks, the training group was instructed to train in phoneme discrimination via computer, twice a week. Sentence recognition in different noise conditions (moderate to difficult) was tested pre- and post-training, and six months after the training was completed. The control group was tested and retested within one month. Twenty-seven adult CI listeners who had been using cochlear implants for more than two years participated in the programme: 15 adults in the training group, 12 adults in the control group. Besides significant improvements on the trained phoneme-identification task, a generalized training effect was noted via significantly improved sentence recognition in moderate noise. No significant changes were noted in the difficult noise conditions. Improved performance was maintained over an extended period. Phoneme-discrimination training improves experienced CI listeners' speech perception in noise. Additional research is needed to optimize auditory training for individual benefit.
Shi, Lu-Feng; Morozova, Natalia
2012-08-01
Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating the vowel contrasts /i-ɪ/, /æ-ɛ/, and /ɑ-ʌ/, the word-initial consonant contrasts /p-h/ and /b-f/, and the word-final contrast /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.
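The dual scoring described here — crediting a response at the whole-word level and separately at each of three phoneme positions — can be illustrated with a toy function. The transcriptions are hypothetical CVC examples, not the NU-6 materials:

```python
def score_response(target, response):
    """Score a spoken-word response at the word level and at three phoneme
    positions (word-initial consonant, vowel, word-final consonant).
    target/response are 3-tuples of broad phonemic transcriptions."""
    word_correct = int(target == response)
    positions = ("initial", "vowel", "final")
    phoneme_scores = {pos: int(t == r)
                      for pos, t, r in zip(positions, target, response)}
    return word_correct, phoneme_scores

# e.g. target /bit/ ("beat") heard as /bɪt/ ("bit"): vowel error only,
# so the word is wrong but two of three phonemes still earn credit
word, phonemes = score_response(("b", "i", "t"), ("b", "ɪ", "t"))
print(word, phonemes)  # 0 {'initial': 1, 'vowel': 0, 'final': 1}
```

Scoring phonemes separately is what lets error patterns (e.g. an /i-ɪ/ confusion) be localized to specific contrasts rather than lost in an all-or-nothing word score.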
Kawase, Saya; Hannah, Beverly; Wang, Yue
2014-09-01
This study examines how visual speech information affects native judgments of the intelligibility of speech sounds produced by non-native (L2) speakers. Native Canadian English perceivers as judges perceived three English phonemic contrasts (/b-v, θ-s, l-ɹ/) produced by native Japanese speakers as well as native Canadian English speakers as controls. These stimuli were presented under audio-visual (AV, with speaker voice and face), audio-only (AO), and visual-only (VO) conditions. The results showed that, across conditions, the overall intelligibility of Japanese productions of the native (Japanese)-like phonemes (/b, s, l/) was significantly higher than the non-Japanese phonemes (/v, θ, ɹ/). In terms of visual effects, the more visually salient non-Japanese phonemes /v, θ/ were perceived as significantly more intelligible when presented in the AV compared to the AO condition, indicating enhanced intelligibility when visual speech information is available. However, the non-Japanese phoneme /ɹ/ was perceived as less intelligible in the AV compared to the AO condition. Further analysis revealed that, unlike the native English productions, the Japanese speakers produced /ɹ/ without visible lip-rounding, indicating that non-native speakers' incorrect articulatory configurations may decrease the degree of intelligibility. These results suggest that visual speech information may either positively or negatively affect L2 speech intelligibility.
The serial nature of the masked onset priming effect revisited.
Mousikou, Petroula; Coltheart, Max
2014-01-01
Reading aloud is faster when target words/nonwords are preceded by masked prime words/nonwords that share their first sound with the target (e.g., save-SINK) compared to when primes and targets are unrelated to each other (e.g., farm-SINK). This empirical phenomenon is the masked onset priming effect (MOPE) and is known to be due to serial left-to-right processing of the prime by a sublexical reading mechanism. However, the literature in this domain lacks a critical experiment. It is possible that when primes are real words their orthographic/phonological representations are activated in parallel and holistically during prime presentation, so any phoneme overlap between primes and targets (and not just initial-phoneme overlap) could facilitate target reading aloud. This is the prediction made by the only computational models of reading aloud that are able to simulate the MOPE, namely the DRC1.2.1, CDP+, and CDP++ models. We tested this prediction in the present study and found that initial-phoneme overlap (blip-BEST), but not end-phoneme overlap (flat-BEST), facilitated target reading aloud compared to no phoneme overlap (junk-BEST). These results provide support for a reading mechanism that operates serially and from left to right, yet are inconsistent with all existing computational models of single-word reading aloud.
Ullrich, Susann; Kotz, Sonja A.; Schmidtke, David S.; Aryani, Arash; Conrad, Markus
2016-01-01
While linguistic theory posits an arbitrary relation between signifiers and the signified (de Saussure, 1916), our analysis of a large-scale German database containing affective ratings of words revealed that certain phoneme clusters occur more often in words denoting concepts with negative and arousing meaning. Here, we investigate how such phoneme clusters, which potentially serve as sublexical markers of affect, can influence language processing. We registered the EEG signal during a lexical decision task with a novel manipulation of the words' putative sublexical affective potential: the means of valence and arousal values for single phoneme clusters, each computed as a function of the respective values of the database words these phoneme clusters occur in. Our experimental manipulations also investigate potential contributions of formal salience to the sublexical affective potential: typically, negative high-arousing phonological segments—based on our calculations—tend to be less frequent and more structurally complex than neutral ones. We thus constructed two experimental sets, one involving this natural confound and one controlling for it. A negative high-arousing sublexical affective potential in the strictly controlled stimulus set yielded an early posterior negativity (EPN), in similar ways as an independent manipulation of lexical affective content did. When other potentially salient formal features at the sublexical level were not controlled for, the effect of the sublexical affective potential was strengthened and prolonged (250–650 ms), presumably because formal salience helps make specific phoneme clusters efficient sublexical markers of negative high-arousing affective meaning. These neurophysiological data support the assumption that the organization of a language's vocabulary involves systematic sound-to-meaning correspondences at the phonemic level that influence the way we process language. PMID:27588008
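The "sublexical affective potential" computation — averaging the valence and arousal ratings of every rated word a phoneme cluster occurs in — can be sketched as a simple aggregation. The mini-database and the onset-extraction rule below are invented for illustration, not the German norms the study used:

```python
from collections import defaultdict

def sublexical_affect(ratings, clusters_of):
    """Mean (valence, arousal) per phoneme cluster, computed over all
    rated words the cluster occurs in."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])   # cluster -> [sum_v, sum_a, count]
    for word, (valence, arousal) in ratings.items():
        for cluster in clusters_of(word):
            s = sums[cluster]
            s[0] += valence; s[1] += arousal; s[2] += 1
    return {c: (v / n, a / n) for c, (v, a, n) in sums.items()}

# hypothetical mini-database: word -> (valence, arousal) ratings
ratings = {"knall": (-2.0, 4.0), "knirschen": (-1.0, 3.0), "blume": (2.0, 1.0)}
onset = lambda w: [w[:2]]        # crude stand-in: "cluster" = first two letters
potential = sublexical_affect(ratings, onset)
print(potential["kn"])  # (-1.5, 3.5)
```

Each cluster's value is thus a word-frequency-weighted summary of the affective company it keeps, which is exactly what the EEG manipulation then varies.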
Parietotemporal Stimulation Affects Acquisition of Novel Grapheme-Phoneme Mappings in Adult Readers
Younger, Jessica W.; Booth, James R.
2018-01-01
Neuroimaging work from developmental and reading intervention research has suggested a cause of reading failure may be lack of engagement of parietotemporal cortex during initial acquisition of grapheme-phoneme (letter-sound) mappings. Parietotemporal activation increases following grapheme-phoneme learning and successful reading intervention. Further, stimulation of parietotemporal cortex improves reading skill in lower ability adults. However, it is unclear whether these improvements following stimulation are due to enhanced grapheme-phoneme mapping abilities. To test this hypothesis, we used transcranial direct current stimulation (tDCS) to manipulate parietotemporal function in adult readers as they learned a novel artificial orthography with new grapheme-phoneme mappings. Participants received real or sham stimulation to the left inferior parietal lobe (L IPL) for 20 min before training. They received explicit training over the course of 3 days on 10 novel words each day. Learning of the artificial orthography was assessed at a pre-training baseline session, the end of each of the three training sessions, an immediate post-training session and a delayed post-training session about 4 weeks after training. Stimulation interacted with baseline reading skill to affect learning of trained words and transfer to untrained words. Lower skill readers showed better acquisition, whereas higher skill readers showed worse acquisition, when training was paired with real stimulation, as compared to readers who received sham stimulation. However, readers of all skill levels showed better maintenance of trained material following parietotemporal stimulation, indicating a differential effect of stimulation on initial learning and consolidation. Overall, these results indicate that parietotemporal stimulation can enhance learning of new grapheme-phoneme relationships in readers with lower reading skill. 
Yet, while parietotemporal function is critical to new learning, its role in continued reading improvement likely changes as readers progress in skill. PMID:29628882
De Vos, Astrid; Vanvooren, Sophie; Vanderauwera, Jolijn; Ghesquière, Pol; Wouters, Jan
2017-08-01
Recent evidence suggests that a fundamental deficit in the synchronization of neural oscillations to temporal information in speech may underlie phonological processing problems in dyslexia. Since previous studies were performed cross-sectionally in school-aged children or adults, developmental aspects of neural auditory processing in relation to reading acquisition and dyslexia remain to be investigated. The present longitudinal study followed 68 children during development from pre-reader (5 years old) to beginning reader (7 years old) and more advanced reader (9 years old). Thirty-six children had a family risk for dyslexia and 14 children eventually developed dyslexia. EEG recordings of auditory steady-state responses to 4 and 20 Hz modulations, corresponding to syllable and phoneme rates, were collected at each point in time. Our results demonstrate an increase in neural synchronization to phoneme-rate modulations around the onset of reading acquisition. This effect was negatively correlated with later reading and phonological skills, indicating that children who exhibit the largest increase in neural synchronization to phoneme rates develop the poorest reading and phonological skills. Accordingly, neural synchronization to phoneme-rate modulations was found to be significantly higher in beginning and more advanced readers with dyslexia. We found no developmental effects regarding neural synchronization to syllable rates, nor any effects of a family risk for dyslexia. Altogether, our findings suggest that the onset of reading instruction coincides with an increase in neural responsiveness to phoneme-rate modulations, and that the extent of this increase is related to (the outcome of) reading development. Moreover, dyslexic children persistently demonstrate atypically high neural synchronization to phoneme rates from the beginning of reading acquisition onwards. Copyright © 2017 Elsevier Ltd. All rights reserved.
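An auditory steady-state response is, at its simplest, the strength of the recorded signal at the stimulus modulation frequency — here 4 Hz (syllable rate) and 20 Hz (phoneme rate). A minimal sketch of that readout on a synthetic signal (invented parameters, not EEG data or the authors' analysis pipeline):

```python
import numpy as np

fs = 256                          # sampling rate in Hz (hypothetical setup)
t = np.arange(fs * 4) / fs        # one 4 s epoch
# toy "recording": components locked to the 4 Hz and 20 Hz modulations, plus noise
rng = np.random.default_rng(1)
signal = (1.0 * np.sin(2 * np.pi * 4 * t)
          + 0.5 * np.sin(2 * np.pi * 20 * t)
          + 0.2 * rng.normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal)) / t.size   # normalized amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amplitude_at(f_hz):
    """Spectral amplitude at the bin nearest the modulation frequency."""
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

print(f"4 Hz: {amplitude_at(4):.2f}, 20 Hz: {amplitude_at(20):.2f}")
```

Comparing these amplitudes across ages and groups is what lets the study track phoneme-rate versus syllable-rate synchronization separately.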
Arabic Script and the Rise of Arabic Calligraphy
ERIC Educational Resources Information Center
Alshahrani, Ali A.
2008-01-01
The aim of this paper is to present a concise, coherent literature review of the Arabic language script system as one of the oldest living Semitic languages in the world. The article first discusses in depth Arabic script as a phonemic, sound-based writing system of twenty-eight letters, a right-to-left cursive script in which letterforms are shaped by their…
Roles of Position, Stress, and Proficiency in L2 Children's Spelling: A Developmental Perspective
ERIC Educational Resources Information Center
Hong, Su Chin; Chen, Shu Hui
2011-01-01
This study investigated the roles of phoneme position, stress, and proficiency in L2 spelling development by Taiwanese students learning English as a Foreign Language (EFL), an alphabetic writing system typologically different from the learners' L1 logographic system. Structured nonword spelling tests were administered to EFL sixth-graders with…
ERIC Educational Resources Information Center
McNeill, Brigid C.
2018-01-01
Few studies have examined the effectiveness of methods to develop preservice teachers' phonemic, morphological and orthographic awareness for spelling instruction. Preservice teachers (n = 86) participated in 10 hours of metalinguistic coursework. The coursework focused on: phonological awareness, orthographic awareness, morphological awareness…
Clayton, Francina J; Sears, Claire; Davis, Alice; Hulme, Charles
2018-07-01
Paired-associate learning (PAL) tasks measure the ability to form a novel association between a stimulus and a response. Performance on such tasks is strongly associated with reading ability, and there is increasing evidence that verbal task demands may be critical in explaining this relationship. The current study investigated the relationships between different forms of PAL and reading ability. A total of 97 children aged 8-10 years completed a battery of reading assessments and six different PAL tasks (phoneme-phoneme, visual-phoneme, nonverbal-nonverbal, visual-nonverbal, nonword-nonword, and visual-nonword) involving both familiar phonemes and unfamiliar nonwords. A latent variable path model showed that PAL ability is captured by two correlated latent variables: auditory-articulatory and visual-articulatory. The auditory-articulatory latent variable was the stronger predictor of reading ability, providing support for a verbal account of the PAL-reading relationship. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Conant, Lisa L; Liebenthal, Einat; Desai, Anjali; Binder, Jeffrey R
2017-08-01
Relationships between maternal education (ME) and both behavioral performances and brain activation during the discrimination of phonemic and nonphonemic sounds were examined using fMRI in children with different levels of phoneme categorization proficiency (CP). Significant relationships were found between ME and intellectual functioning and vocabulary, with a trend for phonological awareness. A significant interaction between CP and ME was seen for nonverbal reasoning abilities. In addition, fMRI analyses revealed a significant interaction between CP and ME for phonemic discrimination in left prefrontal cortex. Thus, ME was associated with differential patterns of both neuropsychological performance and brain activation contingent on the level of CP. These results highlight the importance of examining SES effects at different proficiency levels. The pattern of results may suggest the presence of neurobiological differences in the children with low CP that affect the nature of relationships with ME. Copyright © 2017 Elsevier Inc. All rights reserved.
Lukatela, G; Turvey, M T
1998-09-01
Many speakers of Serbo-Croatian read the language in two phonemically precise and partially overlapping alphabets. Twenty years of experiments directed toward this ability have led to deeper understandings of the role of speech-related processes in reading and the contrasts and similarities among the world's alphabetic writing systems.
Reading faces: investigating the use of a novel face-based orthography in acquired alexia.
Moore, Michelle W; Brendel, Paul C; Fiez, Julie A
2014-02-01
Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic "FaceFont" orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a "linguistic bridge" into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. Copyright © 2013 Elsevier Inc. All rights reserved.
Verbal and Nonverbal Predictors of Spelling Performance
ERIC Educational Resources Information Center
Sadoski, Mark; Willson, Victor L.; Holcomb, Angelia; Boulware-Gooden, Regina
2005-01-01
Verbal and nonverbal predictors of spelling performance in Grades 1-12 were investigated using the national norming data from a standardized spelling test. Verbal variables included number of letters, phonemes, syllables, digraphs, blends, silent markers, r-controlled vowels, and the proportion of grapheme-phoneme correspondence. The nonverbal…
ERIC Educational Resources Information Center
Gates, Louis
2018-01-01
The accompanying article introduces highly transparent grapheme-phoneme relationships embodied within a Periodic table of decoding cells, which arguably presents the quintessential transparent decoding elements. The study then folds these cells into one highly transparent but simply stated singularity generalization--this generalization unifies…
The Categorical Perception Deficit in Dyslexia: A Meta-Analysis
ERIC Educational Resources Information Center
Noordenbos, Mark W.; Serniclaes, Willy
2015-01-01
Speech perception in dyslexia is characterized by a categorical perception (CP) deficit, demonstrated by weaker discrimination of acoustic differences between phonemic categories in conjunction with better discrimination of acoustic differences within phonemic categories. We performed a meta-analysis of studies that examined the reliability of the…
Smith-Spark, James H; Henry, Lucy A; Messer, David J; Zięcik, Adam P
2017-08-01
The executive function of fluency describes the ability to generate items according to specific rules. Production of words beginning with a certain letter (phonemic fluency) is impaired in dyslexia, while generation of words belonging to a certain semantic category (semantic fluency) is typically unimpaired. However, in dyslexia, verbal fluency has generally been studied only in terms of overall words produced. Furthermore, performance of adults with dyslexia on non-verbal design fluency tasks has not been explored but would indicate whether deficits could be explained by executive control, rather than phonological processing, difficulties. Phonemic, semantic and design fluency tasks were presented to adults with dyslexia and without dyslexia, using fine-grained performance measures and controlling for IQ. Hierarchical regressions indicated that dyslexia predicted lower phonemic fluency, but not semantic or design fluency. At the fine-grained level, dyslexia predicted a smaller number of switches between subcategories on phonemic fluency, while dyslexia did not predict the size of phonemically related clusters of items. Overall, the results suggested that phonological processing problems were at the root of dyslexia-related fluency deficits; however, executive control difficulties could not be completely ruled out as an alternative explanation. Developments in research methodology, equating executive demands across fluency tasks, may resolve this issue. Copyright © 2017 John Wiley & Sons, Ltd.
What a Nonnative Speaker of English Needs to Learn through Listening.
ERIC Educational Resources Information Center
Bohlken, Robert; Macias, Lori
Teaching nonnative speakers of English to listen for the discriminating nuances of the language is an important but neglected aspect of American English language training. A discriminating listening process follows a sequence of distinguishing phonemes, suprasegmental phonemes, morphemes, and syntax. Certain phonetic differences can be noted…
The Downside of Greater Lexical Influences: Selectively Poorer Speech Perception in Noise
ERIC Educational Resources Information Center
Lam, Boji P. W.; Xie, Zilong; Tessmer, Rachel; Chandrasekaran, Bharath
2017-01-01
Purpose: Although lexical information influences phoneme perception, the extent to which reliance on lexical information enhances speech processing in challenging listening environments is unclear. We examined the extent to which individual differences in lexical influences on phonemic processing impact speech processing in maskers containing…
Changes in Articulator Movement Variability during Phonemic Development: A Longitudinal Study
ERIC Educational Resources Information Center
Grigos, Maria I.
2009-01-01
Purpose: The present study explored articulator movement variability during voicing contrast acquisition. The purpose was to examine whether oral articulator movement trajectories associated with the production of voiced/voiceless bilabial phonemes in children became less variable over time. Method: Jaw, lower lip, and upper lip movements were…
First-Year Teacher Knowledge of Phonemic Awareness and Its Instruction
ERIC Educational Resources Information Center
Cheesman, Elaine A.; McGuire, Joan M.; Shankweiler, Donald; Coyne, Michael
2009-01-01
Converging evidence has identified phonemic awareness (PA) as one of five essential components of beginning reading instruction. Evidence suggests that many teachers do not have the recommended knowledge or skills sufficient to provide effective PA instruction within the context of scientifically validated reading education. This study examines…
Techniques for decoding speech phonemes and sounds: A concept
NASA Technical Reports Server (NTRS)
Lokerson, D. C.; Holby, H. G.
1975-01-01
Techniques studied involve conversion of speech sounds into machine-compatible pulse trains. (1) Voltage-level quantizer produces number of output pulses proportional to amplitude characteristics of vowel-type phoneme waveforms. (2) Pulses produced by quantizer of first speech formants are compared with pulses produced by second formants.
Evolution of a Rapidly Learned Representation for Speech.
ERIC Educational Resources Information Center
Nakisa, Ramin Charles; Plunkett, Kim
1998-01-01
Describes a connectionist model accounting for newborn infants' ability to finely discriminate almost all human speech contrasts and the fact that their phonemic category boundaries are identical, even for phonemes outside their target language. The model posits an innately guided learning in which an artificial neural network is stored in a…
ERIC Educational Resources Information Center
McGarr, Nancy S.; Whitehead, Robert
1992-01-01
This paper on physiologic correlates of speech production in children and youth with hearing impairments focuses specifically on the production of phonemes and includes data on respiration for speech production, phonation, speech aerodynamics, articulation, and acoustic analyses of speech by hearing-impaired persons. (Author/DB)
Preschool Teacher Knowledge and Skills: Phonemic Awareness and Instruction
ERIC Educational Resources Information Center
Billow, Cecilia
2017-01-01
The extent of phonemic awareness knowledge and skills early childhood teachers bring to beginning literacy instruction lays the foundation upon which reading success is built for preschool children in their care. A significant number of preschool children receive their first literacy instruction in community-based or Head Start preschools.…
Does Vowel Inventory Density Affect Vowel-to-Vowel Coarticulation?
ERIC Educational Resources Information Center
Mok, Peggy P. K.
2013-01-01
This study tests the output constraints hypothesis that languages with a crowded phonemic vowel space would allow less vowel-to-vowel coarticulation than languages with a sparser vowel space to avoid perceptual confusion. Mandarin has fewer vowel phonemes than Cantonese, but their allophonic vowel spaces are similarly crowded. The hypothesis…
A Phonological Exploration of Oral Reading Errors.
ERIC Educational Resources Information Center
Moscicki, Eve K.; Tallal, Paula
1981-01-01
Presents study exploring oral reading errors of normally developing readers to determine any developmental differences in learning phoneme-grapheme units; to discover if the grapheme representations of some phonemes are more difficult to read than others; and to replicate results reported by Fowler, et. al. Findings show most oral reading errors…
Phonological Treatment Efficacy and Developmental Norms.
ERIC Educational Resources Information Center
Gierut, Judith A.; And Others
1996-01-01
Two studies, one within subjects and the other across subjects, evaluated the efficacy of teaching sounds in developmental sequence to nine young children (ages three to five). Treatment of later-acquired phonemes led to systemwide changes in untreated sound classes, whereas treatment of early-acquired phonemes did not. Findings suggest…
Computer Processing of Esperanto Text.
ERIC Educational Resources Information Center
Sherwood, Bruce
1981-01-01
Basic aspects of computer processing of Esperanto are considered in relation to orthography and computer representation, phonetics, morphology, one-syllable and multisyllable words, lexicon, semantics, and syntax. There are 28 phonemes in Esperanto, each represented in orthography by a single letter. The PLATO system handles diacritics by using a…
Multilingual Phoneme Models for Rapid Speech Processing System Development
2006-09-01
processes are used to develop an Arabic speech recognition system starting from monolingual English models, International Phonetic Association (IPA...clusters. It was found that multilingual bootstrapping methods outperform monolingual English bootstrapping methods on the Arabic evaluation data initially...
Memory for pictures and words as a function of level of processing: Depth or dual coding?
D'Agostino, P R; O'Neill, B J; Paivio, A
1977-03-01
The experiment was designed to test differential predictions derived from dual-coding and depth-of-processing hypotheses. Subjects under incidental memory instructions free recalled a list of 36 test events, each presented twice. Within the list, an equal number of events were assigned to structural, phonemic, and semantic processing conditions. Separate groups of subjects were tested with a list of pictures, concrete words, or abstract words. Results indicated that retention of concrete words increased as a direct function of the processing-task variable (structural < phonemic…
Computer game as a tool for training the identification of phonemic length.
Pennala, Riitta; Richardson, Ulla; Ylinen, Sari; Lyytinen, Heikki; Martin, Maisa
2014-12-01
Computer-assisted training of Finnish phonemic length was conducted with 7-year-old Russian-speaking second-language learners of Finnish. Phonemic length plays a different role in these two languages. The training included game activities with two- and three-syllable word and pseudo-word minimal pairs with prototypical vowel durations. The lowest accuracy scores were recorded for two-syllable words. Accuracy scores were higher for the minimal pairs with larger rather than smaller differences in duration. Accuracy scores were lower for long duration than for short duration. The ability to identify quantity degree was generalized to stimuli used in the identification test in two of the children. Ideas for improving the game are introduced.
Cortical oscillations related to processing congruent and incongruent grapheme-phoneme pairs.
Herdman, Anthony T; Fujioka, Takako; Chau, Wilkin; Ross, Bernhard; Pantev, Christo; Picton, Terence W
2006-05-15
In this study, we investigated changes in cortical oscillations following congruent and incongruent grapheme-phoneme stimuli. Hiragana graphemes and phonemes were simultaneously presented as congruent or incongruent audiovisual stimuli to native Japanese-speaking participants. The discriminative reaction time was 57 ms shorter for congruent than incongruent stimuli. Analysis of MEG responses using synthetic aperture magnetometry (SAM) revealed that congruent stimuli evoked larger 2-10 Hz activity in the left auditory cortex within the first 250 ms after stimulus onset, and smaller 2-16 Hz activity in bilateral visual cortices between 250 and 500 ms. These results indicate that congruent visual input can modify cortical activity in the left auditory cortex.
Discrimination of Phonemic Vowel Length by Japanese Infants
ERIC Educational Resources Information Center
Sato, Yutaka; Sogabe, Yuko; Mazuka, Reiko
2010-01-01
Japanese has a vowel duration contrast as one component of its language-specific phonemic repertory to distinguish word meanings. It is not clear, however, how a sensitivity to vowel duration can develop in a linguistic context. In the present study, using the visual habituation-dishabituation method, the authors evaluated infants' abilities to…
The Tonal Function of a Task-Irrelevant Chord Modulates Speed of Visual Processing
ERIC Educational Resources Information Center
Escoffier, N.; Tillmann, B.
2008-01-01
Harmonic priming studies have provided evidence that musical expectations influence sung phoneme monitoring, with facilitated processing for phonemes sung on tonally related (expected) chords in comparison to less-related (less-expected) chords [Bigand, Tillmann, Poulin, D'Adamo, and Madurell (2001). "The effect of harmonic context on phoneme…
The Status of the Concept of "Phoneme" in Psycholinguistics
ERIC Educational Resources Information Center
Uppstad, Per Henning; Tonnessen, Finn Egil
2010-01-01
The notion of the phoneme counts as a break-through of modern theoretical linguistics in the early twentieth century. It paved the way for descriptions of distinctive features at different levels in linguistics. Although it has since then had a turbulent existence across altering theoretical positions, it remains a powerful concept of a…
ERIC Educational Resources Information Center
Haley, Katarina L.; Jacks, Adam; Cunningham, Kevin T.
2013-01-01
Purpose: This study was conducted to evaluate the clinical utility of error variability for differentiating between apraxia of speech (AOS) and aphasia with phonemic paraphasia. Method: Participants were 32 individuals with aphasia after left cerebral injury. Diagnostic groups were formed on the basis of operationalized measures of recognized…
Perception of Vowel Length by Japanese- and English-Learning Infants
ERIC Educational Resources Information Center
Mugitani, Ryoko; Pons, Ferran; Fais, Laurel; Dietrich, Christiane; Werker, Janet F.; Amano, Shigeaki
2009-01-01
This study investigated vowel length discrimination in infants from 2 language backgrounds, Japanese and English, in which vowel length is either phonemic or nonphonemic. Experiment 1 revealed that English 18-month-olds discriminate short and long vowels although vowel length is not phonemically contrastive in English. Experiments 2 and 3 revealed…
Error Biases in Inner and Overt Speech: Evidence from Tongue Twisters
ERIC Educational Resources Information Center
Corley, Martin; Brocklehurst, Paul H.; Moat, H. Susannah
2011-01-01
To compare the properties of inner and overt speech, Oppenheim and Dell (2008) counted participants' self-reported speech errors when reciting tongue twisters either overtly or silently and found a bias toward substituting phonemes that resulted in words in both conditions, but a bias toward substituting similar phonemes only when speech was…
Neural Correlates in the Processing of Phoneme-Level Complexity in Vowel Production
ERIC Educational Resources Information Center
Park, Haeil; Iverson, Gregory K.; Park, Hae-Jeong
2011-01-01
We investigated how articulatory complexity at the phoneme level is manifested neurobiologically in an overt production task. fMRI images were acquired from young Korean-speaking adults as they pronounced bisyllabic pseudowords in which we manipulated phonological complexity defined in terms of vowel duration and instability (viz., COMPLEX:…
Teaching Phonemic Awareness through Children's Literature and Experiences
ERIC Educational Resources Information Center
Jurenka, Nancy
2006-01-01
Teaching phonemic awareness can be boring and repetitive in the hands of a teacher who wishes to just use a workbook approach. This delightful book packs loads of fun into 75 lesson plans, providing educators with myriad creative strategies for integrating word study with children's picture books. Each lesson includes a read-aloud book…
Training Phoneme Blending Skills in Children with Down Syndrome
ERIC Educational Resources Information Center
Burgoyne, Kelly; Duff, Fiona; Snowling, Maggie; Buckley, Sue; Hulme, Charles
2013-01-01
This article reports the evaluation of a 6-week programme of teaching designed to support the development of phoneme blending skills in children with Down syndrome (DS). Teaching assistants (TAs) were trained to deliver the intervention to individual children in daily 10-15-minute sessions, within a broader context of reading and language…
A Brief Critique of Chomsky's Challenge to Classical Phonemic Phonology.
ERIC Educational Resources Information Center
Liu, Ngar-Fun
1994-01-01
Phonemic phonology became important because it provided a descriptive account of dialects and languages that had never been transcribed before, and it derives its greatest strength from its practical orientation, which has proved beneficial to language teaching and learning. Noam Chomsky's criticisms of it are largely unjust because he has not…
Dynamic Assessment of Phonological Awareness for Children with Speech Sound Disorders
ERIC Educational Resources Information Center
Gillam, Sandra Laing; Ford, Mikenzi Bentley
2012-01-01
The current study was designed to examine the relationships between performance on a nonverbal phoneme deletion task administered in a dynamic assessment format with performance on measures of phoneme deletion, word-level reading, and speech sound production that required verbal responses for school-age children with speech sound disorders (SSDs).…
Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users
ERIC Educational Resources Information Center
Jaekel, Brittany N.; Newman, Rochelle S.; Goupell, Matthew J.
2017-01-01
Purpose: Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate…
Early Speech Production of Children with Cleft Palate.
ERIC Educational Resources Information Center
Estrem, Theresa; Broen, Patricia A.
1989-01-01
The study comparing word-initial target phonemes and phoneme production of five toddlers with cleft palate and five normal toddlers found that the cleft palate children tended to target more words with word-initial nasals, approximants, and vowels and fewer words with word-initial stops, fricatives, and affricates than normal children. (Author/DB)
How Important Is Teaching Phonemic Awareness to Children Learning to Read in Spanish?
ERIC Educational Resources Information Center
Goldenberg, Claude; Tolar, Tammy D.; Reese, Leslie; Francis, David J.; Bazán, Antonio Ray; Mejía-Arauz, Rebeca
2014-01-01
This comparative study examines relationships between phonemic awareness and Spanish reading skill acquisition among three groups of Spanish-speaking first and second graders: children in Mexico receiving reading instruction in Spanish and children in the United States receiving reading instruction in either Spanish or English. Children were…
On Sources of the Word Length Effect in Young Readers
ERIC Educational Resources Information Center
Gagl, Benjamin; Hawelka, Stefan; Wimmer, Heinz
2015-01-01
We investigated how letter length, phoneme length, and consonant clusters contribute to the word length effect in 2nd- and 4th-grade children. They read words from three different conditions: In one condition, letter length increased but phoneme length did not due to multiletter graphemes ("Haus"-"Bauch"-"Schach"). In…
Early Orthographic Influences on Phonemic Awareness Tasks: Evidence from a Preschool Training Study
ERIC Educational Resources Information Center
Castles, Anne; Wilson, Katherine; Coltheart, Max
2011-01-01
Experienced readers show influences of orthographic knowledge on tasks ostensibly tapping phonemic awareness. Here we draw on data from an experimental training study to demonstrate that even preschoolers show influences of their emerging orthographic abilities in such tasks. A total of 40 children were taught some letter-sound correspondences but…
ERIC Educational Resources Information Center
Tupak, Sara V.; Badewien, Meike; Dresler, Thomas; Hahn, Tim; Ernst, Lena H.; Herrmann, Martin J.; Fallgatter, Andreas J.; Ehlis, Ann-Christine
2012-01-01
Movement artifacts are still considered a problematic issue for imaging research on overt language production. This motion-sensitivity can be overcome by functional near-infrared spectroscopy (fNIRS). In the present study, 50 healthy subjects performed a combined phonemic and semantic overt verbal fluency task while frontal and temporal cortex…
DECODAGE DE LA CHAINE PARLEE ET APPRENTISSAGE DES LANGUES (SPEECH DECODING AND LANGUAGE LEARNING).
ERIC Educational Resources Information Center
COMPANYS, EMMANUEL
THIS PAPER, WRITTEN IN FRENCH, PRESENTS A HYPOTHESIS CONCERNING THE DECODING OF SPEECH IN SECOND LANGUAGE LEARNING. THE THEORETICAL BACKGROUND OF THE DISCUSSION CONSISTS OF WIDELY ACCEPTED LINGUISTIC CONCEPTS SUCH AS THE PHONEME, DISTINCTIVE FEATURES, NEUTRALIZATION, LINGUISTIC LEVELS, FORM AND SUBSTANCE, EXPRESSION AND CONTENT, SOUNDS, PHONEMES,…
Listeners Retune Phoneme Categories across Languages
ERIC Educational Resources Information Center
Reinisch, Eva; Weber, Andrea; Mitterer, Holger
2013-01-01
Native listeners adapt to noncanonically produced speech by retuning phoneme boundaries by means of lexical knowledge. We asked whether a second language lexicon can also guide category retuning and whether perceptual learning transfers from a second language (L2) to the native language (L1). During a Dutch lexical-decision task, German and Dutch…
Phoneme Awareness, Vocabulary and Word Decoding in Monolingual and Bilingual Dutch Children
ERIC Educational Resources Information Center
Janssen, Marije; Bosman, Anna M. T.; Leseman, Paul P. M.
2013-01-01
The aim of this study was to investigate whether bilingually raised children in the Netherlands, who receive literacy instruction in their second language only, show an advantage on Dutch phoneme-awareness tasks compared with monolingual Dutch-speaking children. Language performance of a group of 47 immigrant first-grade children with various…
ERIC Educational Resources Information Center
Anthony, Jason L.; Lonigan, Christopher J.; Burgess, Stephen R.; Driscoll, Kimberly; Phillips, Beth M.; Cantor, Brenlee G.
2002-01-01
This study examined relations among sensitivity to words, syllables, rhymes, and phonemes in older and younger preschoolers. Confirmatory factor analyses found that a one-factor model best explained the data from both groups of children. Only variance common to all phonological sensitivity skills was related to print knowledge and rudimentary…
Phonemic Awareness and Middle-Ear Disease among Bedouin Arabs in Israel.
ERIC Educational Resources Information Center
Abu-Rabia, Salim
2002-01-01
Investigates the effect of middle-ear infections on the phonemic awareness of first-grade Bedouin Arab elementary school children in northern Israel. Divides 49 children who were screened according to their infant medical records into two groups: one with repeated middle-ear infection and one without. Indicates a nonsignificant effect of middle-ear…
The Effect of Adaptive Nonlinear Frequency Compression on Phoneme Perception.
Glista, Danielle; Hawkins, Marianne; Bohnert, Andrea; Rehmann, Julia; Wolfe, Jace; Scollie, Susan
2017-12-12
This study implemented a fitting method, developed for use with frequency lowering hearing aids, across multiple testing sites, participants, and hearing aid conditions to evaluate speech perception with a novel type of frequency lowering. A total of 8 participants, including children and young adults, participated in real-world hearing aid trials. A blinded crossover design, including posttrial withdrawal testing, was used to assess aided phoneme perception. The hearing aid conditions included adaptive nonlinear frequency compression (NFC), static NFC, and conventional processing. Enabling either adaptive NFC or static NFC improved group-level detection and recognition results for some high-frequency phonemes, when compared with conventional processing. Mean results for the distinction component of the Phoneme Perception Test (Schmitt, Winkler, Boretzki, & Holube, 2016) were similar to those obtained with conventional processing. Findings suggest that both types of NFC tested in this study provided a similar amount of speech perception benefit, when compared with group-level performance with conventional hearing aid technology. Individual-level results are presented with discussion around patterns of results that differ from the group average.
Torgesen, Joseph K; Wagner, Richard K; Rashotte, Carol A; Herron, Jeannine; Lindamood, Patricia
2010-06-01
The relative effectiveness of two computer-assisted instructional programs designed to provide instruction and practice in foundational reading skills was examined. First-grade students at risk for reading disabilities received approximately 80 h of small-group instruction in four 50-min sessions per week from October through May. Approximately half of the instruction was delivered by specially trained teachers to prepare students for their work on the computer, and half was delivered by the computer programs. At the end of first grade, there were no differences in student reading performance between students assigned to the different intervention conditions, but the combined-intervention students performed significantly better than control students who had been exposed to their school's normal reading program. Significant differences were obtained for phonemic awareness, phonemic decoding, reading accuracy, rapid automatic naming, and reading comprehension. A follow-up test at the end of second grade showed a similar pattern of differences, although only differences in phonemic awareness, phonemic decoding, and rapid naming remained statistically reliable.
The Spelling Sensitivity Score: Noting Developmental Changes in Spelling Knowledge
ERIC Educational Resources Information Center
Masterson, Julie J.; Apel, Kenn
2010-01-01
Spelling is a language skill supported by several linguistic knowledge sources, including phonemic, orthographic, and morphological knowledge. Typically, however, spelling assessment procedures do not capture the development and use of these linguistic knowledge sources. The purpose of this article is to describe a new assessment system, the…
The Role of Orthography in Oral Vocabulary Learning in Chinese Children
ERIC Educational Resources Information Center
Li, Hong; Zhang, Jie; Ehri, Linnea; Chen, Yu; Ruan, Xiaotong; Dong, Qiong
2016-01-01
Previous research has shown that the presence of English word spellings facilitates children's oral vocabulary learning. Whether a similar orthographic facilitation effect may exist in Chinese is interesting but not intuitively obvious due to the character writing system representing morphosyllabic but not phoneme-size information, and the more…
Process Deficits in Learning Disabled Children and Implications for Reading.
ERIC Educational Resources Information Center
Johnson, Doris J.
An exploration of specific deficits of learning disabled children, especially in the auditory system, is presented in this paper. Disorders of attention, perception, phonemic and visual discrimination, memory, and symbolization and conceptualization are considered. The paper develops several questions for teachers of learning disabled children to…
Tile Test: A Hands-On Approach for Assessing Phonics in the Early Grades
ERIC Educational Resources Information Center
Norman, Kimberly A.; Calfee, Robert C.
2004-01-01
An instrument for assessing young students' understanding of the English orthographic system is presented. The Tile Test measures understanding of phoneme awareness, letter-sound correspondences, decoding and spelling of words, sight-word reading, and the application of decoding and spelling in sentences. Metalinguistic questions embedded within…
Hemispheric asymmetry in auditory processing of speech envelope modulations in prereading children.
Vanvooren, Sophie; Poelmans, Hanne; Hofmann, Michael; Ghesquière, Pol; Wouters, Jan
2014-01-22
The temporal envelope of speech is an important cue contributing to speech intelligibility. Theories about the neural foundations of speech perception postulate that the left and right auditory cortices are functionally specialized in analyzing speech envelope information at different time scales: the right hemisphere is thought to be specialized in processing syllable rate modulations, whereas a bilateral or left hemispheric specialization is assumed for phoneme rate modulations. Recently, it has been found that this functional hemispheric asymmetry is different in individuals with language-related disorders such as dyslexia. Most studies were, however, performed in adults and school-aged children, and only a little is known about how neural auditory processing at these specific rates manifests and develops in very young children before reading acquisition. Yet, studying hemispheric specialization for processing syllable and phoneme rate modulations in preliterate children may reveal early neural markers for dyslexia. In the present study, human cortical evoked potentials to syllable and phoneme rate modulations were measured in 5-year-old children at high and low hereditary risk for dyslexia. The results demonstrate a right hemispheric preference for processing syllable rate modulations and a symmetric pattern for phoneme rate modulations, regardless of hereditary risk for dyslexia. These results suggest that, while hemispheric specialization for processing syllable rate modulations seems to be mature in prereading children, hemispheric specialization for phoneme rate modulation processing may still be developing. These findings could have important implications for the development of phonological and reading skills.
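The syllable- and phoneme-rate modulations probed in studies like this one are commonly realized as amplitude-modulated noise. The sketch below is an illustrative NumPy reconstruction of such stimuli, not the study's actual paradigm; the 4 Hz (syllable-rate) and 20 Hz (phoneme-rate) values and the `am_noise` helper are assumptions chosen to match the typical rates named in this literature.

```python
import numpy as np

def am_noise(rate_hz, dur_s=2.0, fs=16000, depth=1.0, seed=0):
    """Sinusoidally amplitude-modulated white noise, a common stimulus
    for measuring cortical envelope-following responses at a given rate."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    carrier = rng.standard_normal(t.size)
    # Modulation envelope in [0, 1] at the target rate
    # (illustrative: ~4 Hz approximates syllable rate, ~20 Hz phoneme rate)
    envelope = (1.0 + depth * np.sin(2 * np.pi * rate_hz * t)) / 2.0
    return envelope * carrier

syllable_stim = am_noise(4.0)    # syllable-rate modulated noise
phoneme_stim = am_noise(20.0)    # phoneme-rate modulated noise
```

Hemispheric asymmetry is then assessed by comparing evoked responses (e.g., envelope-following potentials) to the two stimulus classes over left and right auditory cortex.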
A real-time phoneme counting algorithm and application for speech rate monitoring.
Aharonson, Vered; Aharonson, Eran; Raichlin-Levi, Katia; Sotzianu, Aviv; Amir, Ofer; Ovadia-Blechman, Zehava
2017-03-01
Adults who stutter can learn to control and improve their speech fluency by modifying their speaking rate. Existing speech therapy technologies can assist this practice by monitoring speaking rate and providing feedback to the patient, but cannot provide an accurate, quantitative measurement of speaking rate. Moreover, most technologies are too complex and costly to be used for home practice. We developed an algorithm and a smartphone application that monitor a patient's speaking rate in real time and provide user-friendly feedback to both patient and therapist. Our speaking rate computation is performed by a phoneme counting algorithm which implements spectral transition measure extraction to estimate phoneme boundaries. The algorithm is implemented in real time in a mobile application that presents its results in a user-friendly interface. The application incorporates two modes: one provides the patient with visual feedback of his/her speech rate for self-practice and another provides the speech therapist with recordings, speech rate analysis and tools to manage the patient's practice. The algorithm's phoneme counting accuracy was validated on ten healthy subjects who read a paragraph at slow, normal and fast paces, and was compared to manual counting of speech experts. Test-retest and intra-counter reliability were assessed. Preliminary results indicate differences of -4% to 11% between automatic and human phoneme counting. Differences were largest for slow speech. The application can thus provide reliable, user-friendly, real-time feedback for speaking rate control practice.
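The abstract names spectral transition measure (STM) extraction as the basis of its phoneme counting but gives no implementation. The following is a minimal NumPy sketch of the general technique, assuming a Furui-style STM (local regression slope of the log-magnitude spectrum over a short frame context) and simple peak picking; the frame sizes, context width, and mean-based threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def phoneme_rate(signal, fs, frame_ms=25, hop_ms=10, win=3):
    """Estimate speaking rate (phonemes/s) by counting spectral-transition peaks.

    Frames the signal, takes log-magnitude spectra, scores each frame by how
    fast the spectrum is changing (STM), and counts local STM maxima above
    the mean score as estimated phoneme boundaries.
    """
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    n = 1 + max(0, (len(signal) - frame) // hop)
    window = np.hanning(frame)
    spec = np.array([
        np.log(np.abs(np.fft.rfft(window * signal[i*hop:i*hop+frame])) + 1e-8)
        for i in range(n)
    ])
    # STM: mean squared regression slope of each spectral bin
    # over a +/- win frame context
    k = np.arange(-win, win + 1)
    stm = np.zeros(n)
    for t in range(win, n - win):
        slope = (k[:, None] * spec[t-win:t+win+1]).sum(axis=0) / (k**2).sum()
        stm[t] = np.mean(slope**2)
    # Local maxima above the mean STM are taken as phoneme boundaries
    peaks = [t for t in range(1, n - 1)
             if stm[t] > stm[t-1] and stm[t] >= stm[t+1] and stm[t] > stm.mean()]
    return len(peaks) / (len(signal) / fs)
```

On a signal that alternates between two steady spectra (e.g., two tones switching every 100 ms), the STM spikes at each switch, so the counted peaks track the segment rate; real deployments would add silence gating and a calibrated threshold.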
Three DIBELS Tasks vs. Three Informal Reading/Spelling Tasks: A Comparison of Predictive Validity
ERIC Educational Resources Information Center
Morris, Darrell; Trathen, Woodrow; Perney, Jan; Gill, Tom; Schlagal, Robert; Ward, Devery; Frye, Elizabeth M.
2017-01-01
Within a developmental framework, this study compared the predictive validity of three DIBELS tasks (phoneme segmentation fluency [PSF], nonsense word fluency [NWF], and oral reading fluency [ORF]) with that of three alternative tasks drawn from the field of reading (phonemic spelling [phSPEL], word recognition-timed [WR-t], and graded passage…
ERIC Educational Resources Information Center
Pieretti, Robert A.; Kaul, Sandra D.; Zarchy, Razi M.; O'Hanlon, Laureen M.
2015-01-01
The primary focus of this research study was to examine the benefit of a using a multimodal approach to speech sound correction with preschool children. The approach uses the auditory, tactile, and kinesthetic modalities and includes a unique, interactive visual focus that attempts to provide a visual representation of a phonemic category. The…
Musical Structure Modulates Semantic Priming in Vocal Music
ERIC Educational Resources Information Center
Poulin-Charronnat, Benedicte; Bigand, Emmanuel; Madurell, Francois; Peereman, Ronald
2005-01-01
It has been shown that harmonic structure may influence the processing of phonemes whatever the extent of participants' musical expertise [Bigand, E., Tillmann, B., Poulin, B., D'Adamo, D. A., & Madurell, F. (2001). The effect of harmonic context on phoneme monitoring in vocal music. "Cognition," 81, B11-B20]. The present study goes a step further…
ERIC Educational Resources Information Center
Brunelliere, Angele; Dufour, Sophie; Nguyen, Noel; Frauenfelder, Ulrich Hans
2009-01-01
This event-related potential (ERP) study examined the impact of phonological variation resulting from a vowel merger on phoneme perception. The perception of the /e/-/ɛ/ contrast which does not exist in Southern French-speaking regions, and which is in the process of merging in Northern French-speaking regions, was compared to the…
The Perception of Second Language Sounds in Early Bilinguals: New Evidence from an Implicit Measure
ERIC Educational Resources Information Center
Navarra, Jordi; Sebastian-Galles, Nuria; Soto-Faraco, Salvador
2005-01-01
Previous studies have suggested that nonnative (L2) linguistic sounds are accommodated to native language (L1) phonemic categories. However, this conclusion may be compromised by the use of explicit discrimination tests. The present study provides an implicit measure of L2 phoneme discrimination in early bilinguals (Catalan and Spanish).…
The Role of Hypercorrection in the Acquisition of L2 Phonemic Contrasts
ERIC Educational Resources Information Center
Eckman, Fred R.; Iverson, Gregory K.; Song, Jae Yung
2013-01-01
This article reports empirical findings from an ongoing investigation into the acquisition of second-language (L2) phonemic contrasts. Specifically, we consider the status and role of the phenomenon of hypercorrection in the various stages through which L2 learners develop and internalize a target language (TL) contrast. We adopt the prevailing…
Lindamood Phonemic Sequencing (LiPS) [R]. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2008
2008-01-01
The Lindamood Phonemic Sequencing (LiPS)[R] program (formerly called the Auditory Discrimination in Depth[R] [ADD] program) is designed to teach students skills to decode words and to identify individual sounds and blends in words. The program is individualized to meet student needs and is often used with students who have learning disabilities or…
African American English Dialect and Performance on Nonword Spelling and Phonemic Awareness Tasks
ERIC Educational Resources Information Center
Kohler, Candida T.; Bahr, Ruth Huntley; Silliman, Elaine R.; Bryant, Judith Becker; Apel, Kenn; Wilkinson, Louise C.
2007-01-01
Purpose: To evaluate the role of dialect on phonemic awareness and nonword spelling tasks. These tasks were selected for their reliance on phonological and orthographic processing, which may be influenced by dialect use. Method: Eighty typically developing African American children in Grades 1 and 3 were first screened for dialect use and then…
Fluency Training in Phoneme Blending: A Preliminary Study of Generalized Effects
ERIC Educational Resources Information Center
Martens, Brian K.; Werder, Candace S.; Hier, Bridget O.; Koenig, Elizabeth A.
2013-01-01
We examined the generalized effects of training children to fluently blend phonemes of words containing target vowel teams on their reading of trained and untrained words in lists and passages. Three second-grade students participated. A subset of words containing each of 3 target vowel teams ("aw," "oi," and "au") was trained in lists, and…
Interaction between Phonemic Abilities and Syllable Congruency Effect in Young Readers
ERIC Educational Resources Information Center
Chetail, Fabienne; Mathey, Stephanie
2013-01-01
This study investigated whether and to what extent phonemic abilities of young readers (Grade 5) influence syllabic effects in reading. More precisely, the syllable congruency effect was tested in the lexical decision task combined with masked priming in eleven-year-old children. Target words were preceded by a pseudo-word prime sharing the first…
Poor Phonemic Discrimination Does Not Underlie Poor Verbal Short-Term Memory in Down Syndrome
ERIC Educational Resources Information Center
Purser, Harry R. M.; Jarrold, Christopher
2013-01-01
Individuals with Down syndrome tend to have a marked impairment of verbal short-term memory. The chief aim of this study was to investigate whether phonemic discrimination contributes to this deficit. The secondary aim was to investigate whether phonological representations are degraded in verbal short-term memory in people with Down syndrome…
ERIC Educational Resources Information Center
Gelfand, Jessica T.; Christie, Robert E.; Gelfand, Stanley A.
2014-01-01
Purpose: Speech recognition may be analyzed in terms of recognition probabilities for perceptual wholes (e.g., words) and parts (e.g., phonemes), where j (the j-factor) reveals the number of independent perceptual units required for recognition of the whole (Boothroyd, 1968b; Boothroyd & Nittrouer, 1988; Nittrouer & Boothroyd, 1990). For…
ERIC Educational Resources Information Center
Cousin, Emilie; Perrone, Marcela; Baciu, Monica
2009-01-01
This behavioral study aimed at assessing the effect of two variables on the degree of hemispheric specialization for language. One of them was the "grapho-phonemic translation (transformation)" (letter-sound mapping) and the other was the participants' "gender". The experiment was conducted with healthy volunteers. A divided visual field procedure…
ERIC Educational Resources Information Center
Rakhlin, Natalia; Cardoso-Martins, Cláudia; Grigorenko, Elena L.
2014-01-01
We studied the relationship between rapid serial naming (RSN) and orthographic processing in Russian, an asymmetrically transparent orthography. Ninety-six students (M age = 13.73) completed tests of word and pseudoword reading fluency, spelling, orthographic choice, phonological choice, phoneme awareness (PA), and RSN. PA was a better predictor…
Music and Phonemic Awareness: The Kindergarten Connection
ERIC Educational Resources Information Center
Newland, Cheyrl M.
2013-01-01
With the passage of No Child Left Behind (NCLB, 2001), schools have become aware of the consequences of successfully teaching children to read. A major building block in early childhood education includes the decoding of phonemes, rhymes, and the rhythm of spoken and written word. As reading is crucial to success in any subject area or career…
Teaching Adults to Read Braille Using Phonological Methods: Single-Case Studies
ERIC Educational Resources Information Center
Crawford, Shauna; Elliott, Robert T.
2009-01-01
Four women with visual impairments were taught 13 braille letters as phonemes and another 13 braille letters as graphemes and then were taught 10 braille words as onset-rime and another 10 braille words as whole words. Phoneme and onset-rime instruction resulted in faster and more accurate performance. (Contains 1 table and 2 figures.)
ERIC Educational Resources Information Center
Cohen-Goldberg, Ariel M.
2012-01-01
Theories of spoken production have not specifically addressed whether the phonemes of a word compete with each other for selection during phonological encoding (e.g., whether /t/ competes with /k/ in cat). Spoken production theories were evaluated and found to fall into three classes, theories positing (1) no competition, (2) competition among…
Learning of a Formation Principle for the Secondary Phonemic Function of a Syllabic Orthography
ERIC Educational Resources Information Center
Fletcher-Flinn, Claire M.; Thompson, G. Brian; Yamada, Megumi; Meissel, Kane
2014-01-01
It has been observed in Japanese children learning to read that there is an early and rapid shift from exclusive reading of hiragana as syllabograms to the dual-use convention in which some hiragana also represent phonemic elements. Such rapid initial learning appears contrary to the standard theories of reading acquisition that require…
Prosodic and Phonemic Awareness in Children's Reading of Long and Short Words
ERIC Educational Resources Information Center
Wade-Woolley, Lesly
2016-01-01
Phonemic and prosodic awareness are both phonological processes that operate at different levels: the former at the level of the individual sound segment and the latter at the suprasegmental level across syllables. Both have been shown to be related to word reading in young readers. In this study we examine how these processes are differentially…
ERIC Educational Resources Information Center
Nimmo, Lisa M.; Roodenrys, Steven
2004-01-01
The aim of the present research was to determine whether the effect that phonological similarity has on immediate serial recall is influenced by the consistency and position of phonemes within words. In comparison to phonologically dissimilar lists, when the stimulus lists rhyme there is a facilitative effect on the recall of item information and…
ERIC Educational Resources Information Center
Moberly, Aaron C.; Lowenstein, Joanna H.; Tarr, Eric; Caldwell-Tarr, Amanda; Welling, D. Bradley; Shahin, Antoine J.; Nittrouer, Susan
2014-01-01
Purpose: Several acoustic cues specify any single phonemic contrast. Nonetheless, adult, native speakers of a language share weighting strategies, showing preferential attention to some properties over others. Cochlear implant (CI) signal processing disrupts the salience of some cues: In general, amplitude structure remains readily available, but…
ERIC Educational Resources Information Center
Garcia-Sierra, Adrian; Ramirez-Esparza, Nairan; Silva-Pereyra, Juan; Siard, Jennifer; Champlin, Craig A.
2012-01-01
Event Related Potentials (ERPs) were recorded from Spanish-English bilinguals (N = 10) to test pre-attentive speech discrimination in two language contexts. ERPs were recorded while participants silently read magazines in English or Spanish. Two speech contrast conditions were recorded in each language context. In the "phonemic in English"…
Tuning of Human Modulation Filters Is Carrier-Frequency Dependent
Simpson, Andrew J. R.; Reiss, Joshua D.; McAlpine, David
2013-01-01
Recent studies employing speech stimuli to investigate ‘cocktail-party’ listening have focused on entrainment of cortical activity to modulations at syllabic (5 Hz) and phonemic (20 Hz) rates. The data suggest that cortical modulation filters (CMFs) are dependent on the sound-frequency channel in which modulations are conveyed, potentially underpinning a strategy for separating speech from background noise. Here, we characterize modulation filters in human listeners using a novel behavioral method. Within an ‘inverted’ adaptive forced-choice increment detection task, listening level was varied whilst contrast was held constant for ramped increments with effective modulation rates between 0.5 and 33 Hz. Our data suggest that modulation filters are tonotopically organized (i.e., vary along the primary, frequency-organized, dimension). This suggests that the human auditory system is optimized to track rapid (phonemic) modulations at high sound-frequencies and slow (prosodic/syllabic) modulations at low frequencies. PMID:24009759
Learning Novel Musical Pitch via Distributional Learning
ERIC Educational Resources Information Center
Ong, Jia Hoong; Burnham, Denis; Stevens, Catherine J.
2017-01-01
Because different musical scales use different sets of intervals and, hence, different musical pitches, how do music listeners learn those that are in their native musical system? One possibility is that musical pitches are acquired in the same way as phonemes, that is, via distributional learning, in which learners infer knowledge from the…
Cued American English: A Variety in the Visual Mode
ERIC Educational Resources Information Center
Portolano, Marlana
2008-01-01
Cued American English (CAE) is a visual variety of English derived from a mode of communication called Cued Speech (CS). CS, or cueing, is a system of communication for use with the deaf, which consists of hand shapes, hand placements, and mouth shapes that signify the phonemic information conventionally conveyed through speech in spoken…
A Mediating Role of the Premotor Cortex in Phoneme Segmentation
ERIC Educational Resources Information Center
Sato, Marc; Tremblay, Pascale; Gracco, Vincent L.
2009-01-01
Consistent with a functional role of the motor system in speech perception, disturbing the activity of the left ventral premotor cortex by means of repetitive transcranial magnetic stimulation (rTMS) has been shown to impair auditory identification of syllables that were masked with white noise. However, whether this region is crucial for speech…
Application of a Multitiered System of Support with English Language Learners
ERIC Educational Resources Information Center
Vanderwood, Mike L.; Tung, Catherine; Arellano, Elizabeth
2014-01-01
This study examined the effects of a phonological awareness (PA) intervention on the phonological and alphabetic principle skills of first-grade English language learners (ELLs). Nine first-grade classrooms in two large elementary schools were screened with DIBELS Phoneme Segmentation Fluency (PSF) and Nonsense Word Fluency (NWF) in the fall and…
Psychoacoustic Assessment of Speech Communication Systems. The Diagnostic Discrimination Test.
ERIC Educational Resources Information Center
Grether, Craig Blaine
The present report traces the rationale, development and experimental evaluation of the Diagnostic Discrimination Test (DDT). The DDT is a three-choice test of consonant discriminability of the perceptual/acoustic dimensions of consonant phonemes within specific vowel contexts. The DDT was created and developed in an attempt to provide a…
Pinyin Invented Spelling in Mandarin Chinese-Speaking Children with and without Reading Difficulties
ERIC Educational Resources Information Center
Ding, Yi; Liu, Ru-De; McBride, Catherine; Zhang, Dake
2015-01-01
This study examined analytical pinyin (a phonological coding system for teaching pronunciation and lexical tones of Chinese characters) skills in 54 Mandarin-speaking fourth graders by using an invented spelling instrument that tapped into syllable awareness, phoneme awareness, lexical tones, and tone sandhi in Chinese. Pinyin invented spelling…
ERIC Educational Resources Information Center
Wang, Ling; Blackwell, Aleka Akoyunoglou
2015-01-01
Native speakers of alphabetic languages, which use letters governed by grapheme-phoneme correspondence rules, often find it particularly challenging to learn a logographic language whose writing system employs symbols with no direct sound-to-spelling connection but links to the visual and semantic information. The visuospatial properties of…
ERIC Educational Resources Information Center
Ghoneim, Nahed Mohammed Mahmoud; Elghotmy, Heba Elsayed Abdelsalam
2015-01-01
The current study investigates the effect of a suggested multisensory phonics program on developing kindergarten pre-service teachers' EFL reading accuracy and phonemic awareness. A total of 40 fourth year kindergarten pre-service teachers, Faculty of Education, participated in the study that involved one group experimental design. Pre-post tests…
ERIC Educational Resources Information Center
Melby-Lervag, Monica
2012-01-01
The acknowledgement that educational achievement is highly dependent on successful reading development has led to extensive research on its underlying factors. Evidence clearly suggests that the relation between reading skills, phoneme awareness, rhyme awareness, and verbal short-term memory is more than a mere association. A strong argument has…
ERIC Educational Resources Information Center
Hesketh, Anne; Dima, Evgenia; Nelson, Veronica
2007-01-01
Background: Awareness of individual phonemes in words is a late-acquired level of phonological awareness that usually develops in the early school years. It is generally agreed to have a close relationship with early literacy development, but its role in speech change is less well understood. Speech and language therapy for children with speech…
ERIC Educational Resources Information Center
Nittrouer, Susan
1996-01-01
This study of 41 children (ages 7 and 8) examined the effects of low socioeconomic status (SES) and chronic otitis media (OM) on speech perception and phonemic awareness. Findings indicated that children with low SES did poorly on both kinds of tasks whether or not they had chronic OM. (CR)
ERIC Educational Resources Information Center
Hill, Susan
Since one way to study a non-reading primary child's phonics knowledge is to examine his/her invented spelling, a researcher's quandary led to a quasi-experimental design study, employed to answer three questions: (1) Do primary non-readers possess phonics knowledge? (2) Do primary non-readers possess phonemic awareness? and (3) Do primary…
Learning to Read and Spell in Persian: A Cross-Sectional Study from Grades 1 to 4
ERIC Educational Resources Information Center
Rahbari, Noriyeh; Senechal, Monique
2010-01-01
We investigated the reading and spelling development of 140 Persian children attending Grades 1-4 in Iran. Persian has very consistent letter-sound correspondences, but it varies in transparency because 3 of its 6 vowel phonemes are not marked with letters. Persian also varies in spelling consistency because 6 phonemes have more than one…
ERIC Educational Resources Information Center
Liao, Chen-Huei; Kuo, Bor-Chen; Deenang, Exkarach; Mok, Magdalena Mo Ching
2016-01-01
This study aimed to investigate the structure and the validity of the cognitive components of reading in Thai, which is a language with a high degree of grapheme-phoneme correspondence. The participants were 1181 fourth-grade students in 29 schools in Thailand, divided into two subsamples for data analysis. Phoneme isolation, rapid colour naming,…
ERIC Educational Resources Information Center
Hoffman, Paul; Jefferies, Elizabeth; Ehsan, Sheeba; Jones, Roy W.; Lambon Ralph, Matthew A.
2009-01-01
Patients with semantic dementia (SD) make numerous phoneme migration errors when recalling lists of words they no longer fully understand, suggesting that word meaning makes a critical contribution to phoneme binding in verbal short-term memory. Healthy individuals make errors that appear similar when recalling lists of nonwords, which also lack…
ERIC Educational Resources Information Center
Chen, Victoria; Savage, Robert S.
2014-01-01
This study examines the effects of teaching common complex grapheme-to-phoneme correspondences (GPCs) on reading and reading motivation for at-risk readers using a randomised control trial design with taught intervention and control conditions. One reading programme taught children complex GPCs ordered by their frequency of occurrence in…
ERIC Educational Resources Information Center
Faes, Jolien; Gillis, Joris; Gillis, Steven
2017-01-01
The frequency of occurrence of words and sounds has a pervasive influence on typically developing children's language acquisition. For instance, highly frequent words appear earliest in a child's lexicon, and highly frequent phonemes are produced more accurately. This study evaluates (a) whether word frequency influences word accuracy and (b)…
Catch Up® Literacy: Evaluation Report and Executive Summary
ERIC Educational Resources Information Center
Rutt, Simon; Kettlewell, Kelly; Bernardinelli, Daniele
2015-01-01
Catch Up® Literacy is a structured one-to-one literacy intervention for pupils between the ages of 6 and 14 who are struggling to learn to read. It teaches pupils to blend phonemes (combine letter sounds into words), segment phonemes (separate words into letter sounds), and memorise particular words so they can be understood without needing to use…
Age and Schooling Effects on Early Literacy and Phoneme Awareness
ERIC Educational Resources Information Center
Cunningham, Anna; Carroll, Julia
2011-01-01
Previous research on age and schooling effects is largely restricted to studies of children who begin formal schooling at 6 years of age, and the measures of phoneme awareness used have typically lacked sensitivity for beginning readers. Our study addresses these issues by testing 4 to 6 year-olds (first 2 years of formal schooling in the United…
Phonological Processing of Second Language Phonemes: A Selective Deficit in a Bilingual Aphasic.
ERIC Educational Resources Information Center
Eviatar, Zohar; Leikin, Mark; Ibrahim, Raphiq
1999-01-01
A case study of a Russian-Hebrew bilingual woman with transcortical sensory aphasia showed that overall, aphasic symptoms were similar in the two languages, with Hebrew somewhat more impaired. The woman revealed a difference in her ability to perceive phonemes in the context of Hebrew words that depended on whether they were presented in a Russian…
Strategy Choice Mediates the Link between Auditory Processing and Spelling
Kwong, Tru E.; Brachman, Kyle J.
2014-01-01
Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities. PMID:25198787
Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users.
Jaekel, Brittany N; Newman, Rochelle S; Goupell, Matthew J
2017-05-24
Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes. Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates. Twenty-three CI and 29 NH participants performed a phoneme identification task. NH participants heard the same unprocessed stimuli as the CI participants or stimuli degraded by a sine vocoder, simulating aspects of CI processing. CI participants showed larger rate normalization effects (6.6 ms) than the NH participants (3.7 ms) and had shallower (less reliable) category boundary slopes. NH participants showed similarly shallow slopes when presented acoustically degraded vocoded signals, but an equal or smaller rate effect in response to reductions in available spectral and temporal information. CI participants can rate normalize, despite their degraded speech input, and show a larger rate effect compared to NH participants. CI participants may particularly rely on rate normalization to better maintain perceptual constancy of the speech signal.
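The category-boundary and slope measures described above can be estimated by fitting a logistic function to identification responses along the VOT continuum: the boundary is the VOT where the two phoneme labels are equally likely, and the slope reflects how reliably listeners categorize near it. A minimal sketch with hypothetical data (the VOT steps and response proportions are illustrative, not the study's):

```python
import math

# Hypothetical identification data along a voice-onset-time continuum:
# proportion of "t" responses at each VOT step (ms).
vots = [0, 10, 20, 30, 40, 50, 60]
p_t = [0.02, 0.05, 0.20, 0.50, 0.80, 0.95, 0.98]

# Fit p = 1 / (1 + exp(-(a + b*vot))) by gradient ascent on the
# Bernoulli log-likelihood (small, fixed per-parameter step sizes).
a, b = 0.0, 0.0
for _ in range(20000):
    ga = gb = 0.0
    for x, y in zip(vots, p_t):
        p = 1.0 / (1.0 + math.exp(-(a + b * x)))
        ga += (y - p)        # gradient w.r.t. intercept a
        gb += (y - p) * x    # gradient w.r.t. slope b
    a += 0.01 * ga
    b += 0.0001 * gb

boundary = -a / b            # VOT (ms) at which p("t") = 0.5
print(f"boundary = {boundary:.1f} ms, slope = {b:.3f}")
```

With these symmetric toy data the boundary comes out near 30 ms; a shallower fitted slope would correspond to the less reliable category boundaries reported for the CI and vocoded conditions.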
Lexical statistics of competition in L2 versus L1 listening
NASA Astrophysics Data System (ADS)
Cutler, Anne
2005-09-01
Spoken-word recognition involves multiple activation of alternative word candidates and competition between these alternatives. Phonemic confusions in L2 listening increase the number of potentially active words, thus slowing word recognition by adding competitors. This study used a 70,000-word English lexicon backed by frequency statistics from a 17,900,000-word corpus to assess the competition increase resulting from two representative phonemic confusions, one vocalic (ae/E) and one consonantal (r/l), in L2 versus L1 listening. The first analysis involved word embedding. Embedded words (cat in cattle, rib in ribbon) cause competition, which phonemic confusion can increase (cat in kettle, rib in liberty); the average increase in the number of embedded words was 59.6% and 48.3% for the two confusions, respectively. The second analysis concerned temporary ambiguity. Even when no embeddings are present, multiple alternatives are possible: para- can become parrot, paradise, etc., but also pallet, palace given /r/-/l/ confusion. Phoneme confusions (vowel or consonant) in first or second position in the word approximately doubled the number of activated candidates; confusions later in the word increased activation by on average 53% and 42%. Both confusions significantly increase competition for L2 compared with L1 listeners.
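The embedding analysis above can be sketched as a substring count over a lexicon in which the two confused phonemes are treated as interchangeable. A toy illustration (hypothetical mini-lexicon, with letters standing in for phonemes):

```python
def embedded_words(carrier: str, lexicon: set, merge: tuple = ()) -> set:
    """Return lexicon words that occur as substrings of `carrier`,
    optionally treating the symbols in `merge` as interchangeable
    (simulating an L2 listener's phonemic confusion)."""
    def norm(s):
        # Collapse all confusable symbols onto one representative.
        return "".join(merge[0] if c in merge else c for c in s)
    c = norm(carrier)
    return {w for w in lexicon if norm(w) in c}

# Toy lexicon; real analyses would use a phonemically transcribed lexicon.
lexicon = {"cat", "rib", "lib", "pal", "par", "rot", "lot"}
print(embedded_words("liberty", lexicon))                    # only 'lib'
print(embedded_words("liberty", lexicon, merge=("r", "l")))  # 'rib' now embeds too
```

Comparing the two counts across a full lexicon is the competition-increase measure the abstract reports.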
Depth and elaboration of processing in relation to age.
Simon, E
1979-03-01
Processing at encoding and retrieval was jointly manipulated, and then the retrieval effectiveness of different cues was directly compared to uncover the relative pattern of deep and elaborate processing in relation to both age and different experimental manipulations. In Experiment 1, phonemic and semantic cues were effective retrieval aids for to-be-remembered words in the youngest group; with increasing age, semantic cues decreased in effectiveness more than phonemic cues. These data showed phonemic features to have an importance that is not recognized in the data generated by the typical levels paradigm. When elaboration of the words was induced in Experiment 2 by presenting them in sentences, semantic and context cues were most effective in the youngest group whereas phonemic cues were most effective in the oldest group. Since the pattern of cue effectiveness in the elderly was similar to that in Experiment 1, where the same words were presented alone, it was concluded that aging results in poor elaboration, in particular, in inefficient integration of word events with the context of presentation. These age effects were mimicked in young subjects in Experiment 3 by experimentally restricting encoding time. The present approach uses somewhat modified views of depth and elaboration.
Effects of lips and hands on auditory learning of second-language speech sounds.
Hirata, Yukari; Kelly, Spencer D
2010-04-01
Previous research has found that auditory training helps native English speakers to perceive phonemic vowel length contrasts in Japanese, but their performance did not reach native levels after training. Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the authors examined whether multimodal input helps to improve native English speakers' ability to perceive Japanese vowel length contrasts. Sixty native English speakers participated in 1 of 4 types of training: (a) audio-only; (b) audio-mouth; (c) audio-hands; and (d) audio-mouth-hands. Before and after training, participants were given phoneme perception tests that measured their ability to identify short and long vowels in Japanese (e.g., /kato/ vs. /katoː/). Although all 4 groups improved from pre- to posttest (replicating previous research), the participants in the audio-mouth condition improved more than those in the audio-only condition, whereas the 2 conditions involving hand gestures did not. Seeing lip movements during training significantly helps learners to perceive difficult second-language phonemic contrasts, but seeing hand gestures does not. The authors discuss possible benefits and limitations of using multimodal information in second-language phoneme learning.
Implementation of Three Text to Speech Systems for Kurdish Language
NASA Astrophysics Data System (ADS)
Bahrampour, Anvar; Barkhoda, Wafa; Azami, Bahram Zahir
Nowadays, the concatenative method is used in most modern TTS systems to produce artificial speech. The most important challenge in this method is choosing an appropriate unit for creating the database. This unit must guarantee smooth, high-quality speech, and creating a database for it must be reasonable and inexpensive. For example, the syllable, phoneme, allophone, and diphone are appropriate units for all-purpose systems. In this paper, we implemented three synthesis systems for the Kurdish language based on the syllable, allophone, and diphone, and compared their quality using subjective testing.
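As an illustration of the diphone unit discussed above: a diphone spans from the midpoint of one phone to the midpoint of the next, so enumerating the units a concatenative synthesizer needs for a word amounts to pairing adjacent phones, with silence added at the word edges. A minimal sketch (the phoneme symbols are illustrative, not the paper's Kurdish inventory):

```python
def diphones(phones: list) -> list:
    """List the diphone units covering an utterance. A diphone runs from
    the middle of one phone to the middle of the next, so n phones need
    n + 1 units once silence ("_") boundaries are included."""
    padded = ["_"] + phones + ["_"]  # silence at both edges
    return [padded[i] + "-" + padded[i + 1] for i in range(len(padded) - 1)]

print(diphones(["k", "u", "r", "d"]))
# ['_-k', 'k-u', 'u-r', 'r-d', 'd-_']
```

The trade-off the abstract alludes to is visible here: diphones capture transitions between sounds (where concatenation artifacts are worst) but require a database covering every attested phone pair, whereas allophone or syllable inventories are smaller but join units at less forgiving points.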
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
2016-02-15
The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals.
ERIC Educational Resources Information Center
Carro, Dorothy J.
The purpose of this study was to evaluate the effect of increased phonemic awareness instruction on the writing ability of At Risk first graders. Twenty-three students from a suburban first grade classroom in Central New Jersey were involved in the study. Twelve at risk students were divided into two groups, each of which received one half hour of…
ERIC Educational Resources Information Center
Melby-Lervag, Monica
2012-01-01
The acknowledgement that educational achievement is highly dependent on successful reading development has led to extensive research on its underlying factors. A strong argument has been made for a causal relationship between reading and phoneme awareness; similarly, causal relations have been suggested for reading with short-term memory and rhyme…
ERIC Educational Resources Information Center
Boyer, Nancy; Ehri, Linnea C.
2011-01-01
English-speaking preschoolers who knew letters but were nonreaders (M = 4 years 9 months; n = 60) were taught to segment consonant-vowel (CV), VC, and CVC words into phonemes either with letters and pictures of articulatory gestures (the LPA condition) or with letters only (the LO condition). A control group received no treatment. Both trained…
ERIC Educational Resources Information Center
Carlisle, Abigail A.; Thomas, Cathy Newman; McCathren, Rebecca B.
2016-01-01
The purpose of this study was to examine the effects of using a content acquisition podcast (CAP) to teach phonological awareness, phonemic awareness, and phonics (PA) to preservice special education teachers. Fifty undergraduate preservice special education teachers over 2 years were randomly assigned to either the CAP group or a comparison group…
ERIC Educational Resources Information Center
Walter, Nancy
2010-01-01
Students entering school with little knowledge of English do not have the foundation in place to develop reading skills. This lack of foundation puts English Learners at a disadvantage that they struggle to overcome. The purpose of the quantitative study was twofold: (a) to determine whether measures of phonemic awareness are predictive of end of…
Nonword Repetition and Phoneme Elision in Adults Who Do and Do Not Stutter
ERIC Educational Resources Information Center
Byrd, Courtney T.; Vallely, Megann; Anderson, Julie D.; Sussman, Harvey
2012-01-01
The purpose of the present study was to explore the phonological working memory of adults who stutter through the use of a non-word repetition and a phoneme elision task. Participants were 14 adults who stutter (M = 28 years) and 14 age/gender matched adults who do not stutter (M = 28 years). For the non-word repetition task, the participants had…
ERIC Educational Resources Information Center
Tyler, Ann A.; Gillon, Gail; Macrae, Toby; Johnson, Roberta L.
2011-01-01
Aim: The purpose of this study was to examine the effects of an integrated phoneme awareness/speech intervention in comparison to an alternating speech/morphosyntax intervention for specific areas targeted by the different interventions, as well as the extent of indirect gains in nontargeted areas. Method: A total of 30 children with co-occurring…
ERIC Educational Resources Information Center
Thornton, Linda H.; Vinzant, Rebecca S.
A study investigated the effect of phonemic awareness instruction on the reading ability of first and second grade students. Participants were 100 second graders who had been in 5 first grades at Westside Elementary in Searcy, Arkansas. Using a posttest only control group design and a t test for independent samples, it was found that second grade…
Phoneme Monitoring in Silent Naming and Perception in Adults Who Stutter
ERIC Educational Resources Information Center
Sasisekaran, Jayanthi; De Nil, Luc F.
2006-01-01
The present study investigated phonological encoding skills in persons who stutter (PWS). Participants were 10 PWS (M=31.8 years, S.D.=5.9) matched for age, gender, and handedness with 12 persons who do not stutter (PNS) (M=24.3 years, S.D.=4.3). The groups were compared in a phoneme monitoring task performed during silent picture naming. The…
ERIC Educational Resources Information Center
Brazendale, Allison; Adlof, Suzanne; Klusek, Jessica; Roberts, Jane
2015-01-01
Clinical Question: Would a child with fragile X syndrome benefit more from phonemic awareness and phonics instruction or whole-word training to increase reading skills? Method: Systematic review. Study Sources: PsycINFO. Search Terms: fragile X OR Down syndrome OR cognitive impairment OR cognitive deficit OR cognitive disability OR intellectual…
A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition
ERIC Educational Resources Information Center
Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko
2015-01-01
When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…
Beyond Phonics: The Case for Teaching Children the Logic of the English Spelling System
ERIC Educational Resources Information Center
Bowers, Jeffrey S.; Bowers, Peter N.
2017-01-01
A large body of research supports the conclusion that early reading instruction in English should emphasize phonics, that is, the teaching of grapheme-phoneme correspondences. By contrast, we argue that instruction should be designed to make sense of spellings by teaching children that spellings are organized around the interrelation of…
Combining the Bourne-Shell, sed and awk in the UNIX Environment for Language Analysis.
ERIC Educational Resources Information Center
Schmitt, Lothar M.; Christianson, Kiel T.
This document describes how to construct tools for language analysis in research and teaching using the Bourne-shell, sed, and awk, three search tools, in the UNIX operating system. Applications include: searches for words, phrases, grammatical patterns, and phonemic patterns in text; statistical analysis of text in regard to such searches,…
Spanish Rhotics: More Evidence of Gradience in the System
ERIC Educational Resources Information Center
Shelton, Michael
2013-01-01
The present study examines the nature of the Spanish rhotics experimentally, investigating the possibilities that the trill is best understood as deriving from an underlying geminate or that the tap and trill may be separate phonemes. In order to explore these two theories, a behavioral task was designed exploiting the absence of native words in…
ERIC Educational Resources Information Center
Trescases, Pierre
A computer system developed as a database access facilitator for the blind is found to have application to foreign language instruction, specifically in teaching French to speakers of English. The computer is programmed to translate symbols from the International Phonetic Alphabet (IPA) into appropriate phonemes for whatever language is being…
Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.
Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T
2017-07-01
Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Sunami, Kishiko; Ishii, Akira; Takano, Sakurako; Yamamoto, Hidefumi; Sakashita, Tetsushi; Tanaka, Masaaki; Watanabe, Yasuyoshi; Yamane, Hideo
2013-11-06
In daily communication, even when spoken words are masked by background noise, we can usually still hear them as if they had not been masked and can comprehend the speech. This phenomenon is known as phonemic restoration. Since little is known about the neural mechanisms underlying phonemic restoration for speech comprehension, we aimed to identify the neural mechanisms using magnetoencephalography (MEG). Twelve healthy male volunteers with normal hearing participated in the study. Participants were requested to carefully listen to and understand recorded spoken Japanese stories, which were either played forward (forward condition) or in reverse (reverse condition), with their eyes closed. Several syllables of spoken words were replaced by 300-ms white-noise stimuli with an inter-stimulus interval of 1.6-20.3 s. We compared MEG responses to white-noise stimuli during the forward condition with those during the reverse condition using time-frequency analyses. Increased 3-5 Hz band power in the forward condition compared with the reverse condition was continuously observed in the left inferior frontal gyrus [Brodmann's areas (BAs) 45, 46, and 47], and decreased 18-22 Hz band powers caused by white-noise stimuli were seen in the left transverse temporal gyrus (BA 42) and superior temporal gyrus (BA 22). These results suggest that the left inferior frontal gyrus and left transverse and superior temporal gyri are involved in phonemic restoration for speech comprehension. Our findings may help clarify the neural mechanisms of phonemic restoration as well as develop innovative treatment methods for individuals suffering from impaired speech comprehension, particularly in noisy environments. © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Increased subcortical neural activity among HIV+ individuals during a lexical retrieval task.
Thames, April D; Sayegh, Philip; Terashima, Kevin; Foley, Jessica M; Cho, Andrew; Arentoft, Alyssa; Hinkin, Charles H; Bookheimer, Susan Y
2016-08-01
Deficits in lexical retrieval, present in approximately 40% of HIV+ patients, are thought to reflect disruptions to frontal-striatal functions and may worsen with immunosuppression. Coupling frontal-striatal tasks such as lexical retrieval with functional neuroimaging may help delineate the pathophysiologic mechanisms underlying HIV-associated neurological dysfunction. We examined whether HIV infection confers brain functional changes during lexical access and retrieval. It was expected that HIV+ individuals would demonstrate greater brain activity in frontal-subcortical regions despite minimal differences between groups on neuropsychological testing. Within the HIV+ sample, we examined associations between indices of immunosuppression (recent and nadir CD4+ count) and task-related signal change in frontostriatal structures. Method: 16 HIV+ participants and 12 HIV- controls underwent fMRI while engaged in phonemic/letter and semantic fluency tasks. Participants also completed standardized measures of verbal fluency. HIV status groups performed similarly on phonemic and semantic fluency tasks prior to being scanned. fMRI results demonstrated activation differences during the phonemic fluency task as a function of HIV status, with HIV+ individuals demonstrating significantly greater activation in basal ganglia (BG) structures than HIV- individuals. There were no significant differences in frontal brain activation between HIV status groups during the phonemic fluency task, nor were there significant brain activation differences during the semantic fluency task. Within the HIV+ group, current CD4+ count, though not nadir, was positively correlated with increased activity in the inferior frontal gyrus and basal ganglia. During phonemic fluency performance, HIV+ patients recruit subcortical structures to a greater degree than HIV- controls despite similar task performances, suggesting that fMRI may be sensitive to neurocompromise before overt cognitive declines can be detected.
Among HIV+ individuals, reduced activity in the frontal-subcortical structures was associated with lower CD4+ count. Copyright © 2015 Elsevier Inc. All rights reserved.
Language context modulates reading route: an electrical neuroimaging study
Buetler, Karin A.; de León Rodríguez, Diego; Laganaro, Marina; Müri, René; Spierer, Lucas; Annoni, Jean-Marie
2014-01-01
Introduction: The orthographic depth hypothesis (Katz and Feldman, 1983) posits that different reading routes are engaged depending on the type of grapheme/phoneme correspondence of the language being read. Shallow orthographies with consistent grapheme/phoneme correspondences favor encoding via non-lexical pathways, where each grapheme is sequentially mapped to its corresponding phoneme. In contrast, deep orthographies with inconsistent grapheme/phoneme correspondences favor lexical pathways, where phonemes are retrieved from specialized memory structures. This hypothesis, however, lacks compelling empirical support. The aim of the present study was to investigate the impact of orthographic depth on reading route selection using a within-subject design. Method: We presented the same pseudowords (PWs) to highly proficient bilinguals and manipulated the orthographic depth of PW reading by embedding them among two separated German or French language contexts, implicating, respectively, shallow or deep orthography. High-density electroencephalography was recorded during the task. Results: The topography of the ERPs to identical PWs differed 300–360 ms post-stimulus onset when the PWs were read in different orthographic depth contexts, indicating distinct brain networks engaged in reading during this time window. The brain sources underlying these topographic effects were located within left inferior frontal (German > French), parietal (French > German) and cingulate areas (German > French). Conclusion: Reading in a shallow context favors non-lexical pathways, reflected in a stronger engagement of frontal phonological areas in the shallow versus the deep orthographic context. In contrast, reading PWs in a deep orthographic context recruits less routine non-lexical pathways, reflected in a stronger engagement of visuo-attentional parietal areas in the deep versus shallow orthographic context.
These collective results support a modulation of reading route by orthographic depth. PMID:24600377
Learning to perceive and recognize a second language: the L2LP model revised.
van Leussen, Jan-Willem; Escudero, Paola
2015-01-01
We present a test of a revised version of the Second Language Linguistic Perception (L2LP) model, a computational model of the acquisition of second language (L2) speech perception and recognition. The model draws on phonetic, phonological, and psycholinguistic constructs to explain a number of L2 learning scenarios. However, a recent computational implementation failed to validate a theoretical proposal for a learning scenario where the L2 has fewer phonemic categories than the native language (L1) along a given acoustic continuum. According to the L2LP, learners faced with this learning scenario must not only shift their old L1 phoneme boundaries but also reduce the number of categories employed in perception. Our proposed revision to L2LP successfully accounts for this updating of the number of perceptual categories as a process driven by the meaning of lexical items, rather than by the learners' awareness of the number and type of phonemes that are relevant in their new language, as the previous version of L2LP assumed. Results of our simulations show that meaning-driven learning correctly predicts the developmental path of L2 phoneme perception seen in empirical studies. Additionally, and to contribute to a long-standing debate in psycholinguistics, we test two versions of the model, with the stages of phonemic perception and lexical recognition being either sequential or interactive. Both versions succeed in learning to recognize minimal pairs in the new L2, but make diverging predictions about learners' resulting phonological representations. In sum, the proposed revision to the L2LP model contributes to our understanding of L2 acquisition, with implications for speech processing in general.
Frisch, Stefan A.; Pisoni, David B.
2012-01-01
Objective: Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design: A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results: Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Conclusions: Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition.
Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users as for children and adults with normal hearing. PMID:11132784
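The two lexical-access simulations contrasted in this abstract (committing to phoneme decisions early versus delaying them until lexical access) can be sketched in miniature. The confusion matrix and four-word lexicon below are toy values for illustration, not the authors' implementation:

```python
# Toy P(heard | spoken) confusion matrix over a tiny phoneme set (made-up values)
confusion = {
    'b': {'b': 0.7, 'p': 0.3},
    'p': {'b': 0.4, 'p': 0.6},
    'a': {'a': 1.0},
    't': {'t': 0.8, 'd': 0.2},
    'd': {'t': 0.3, 'd': 0.7},
}
lexicon = ['bat', 'pat', 'bad', 'pad']

def word_likelihood(word, heard):
    """P(heard sequence | spoken word), assuming independent segments."""
    p = 1.0
    for spoken, h in zip(word, heard):
        p *= confusion[spoken].get(h, 0.0)
    return p

def delayed_decision(heard):
    """Keep graded phoneme evidence until lexical access (the better-fitting model)."""
    return max(lexicon, key=lambda w: word_likelihood(w, heard))

def early_decision(heard):
    """Commit to the single most likely phoneme per position, then match the lexicon."""
    committed = ''.join(max(confusion, key=lambda s: confusion[s].get(h, 0.0))
                        for h in heard)
    return max(lexicon, key=lambda w: sum(a == b for a, b in zip(w, committed)))

print(delayed_decision('bat'), early_decision('bat'))  # -> bat bat
```

The delayed model preserves within-phoneme uncertainty, which is why in the study it tracked open-set word recognition better than a model that discards that uncertainty early.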
Modelling acquired dyslexia: a software tool for developing grapheme-phoneme correspondences.
D'Autrechy, C. L.; Reggia, J. A.; Berndt, R. S.
1991-01-01
In extending a computer model of acquired dyslexia, it has become necessary to develop a way to group printed characters in a word so that the character groups essentially have a one-to-one correspondence with the word's phonemes (speech sounds). This requires deriving a set of correspondences (legal character groupings, legal associations of character groups with phonemes, etc.) that yield a single grouping or "segmentation" of characters when applied to any English word. To facilitate and partially automate this task, a segmentation program has been developed that uses an interchangeable set of correspondences. The program segments words according to these correspondences and tabulates their success over large sets of words. The program has been used successfully to segment a 20,000 word corpus, demonstrating that this approach can be used effectively and efficiently. PMID:1807611
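The grouping task this abstract describes can be sketched in a few lines: given a set of legal character groupings, segment a word so each group maps to one phoneme. This is a hypothetical illustration with a made-up mini correspondence table, not the authors' program or rule set:

```python
def segment(word, correspondences):
    """Segment a word into character groups drawn from a correspondence set,
    trying longer groups first and backtracking; None if no full segmentation."""
    if not word:
        return []
    for size in range(min(4, len(word)), 0, -1):
        group = word[:size]
        if group in correspondences:
            rest = segment(word[size:], correspondences)
            if rest is not None:
                return [group] + rest
    return None

# Hypothetical mini correspondence table (character group -> phoneme label)
correspondences = {'sh': 'S', 'ee': 'i', 'p': 'p', 's': 's', 'i': 'I', 't': 't'}
print(segment('sheep', correspondences))  # -> ['sh', 'ee', 'p']
print(segment('sit', correspondences))    # -> ['s', 'i', 't']
```

Running such a segmenter over a large word list and tabulating failures is one way to evaluate a candidate correspondence set, as the paper does over its 20,000-word corpus.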
ERIC Educational Resources Information Center
Gelfand, Stanley A.; Gelfand, Jessica T.
2012-01-01
Method: Complete psychometric functions for phoneme and word recognition scores at 8 signal-to-noise ratios from -15 dB to 20 dB were generated for the first 10, 20, and 25, as well as all 50, three-word presentations of the Tri-Word or Computer Assisted Speech Recognition Assessment (CASRA) Test (Gelfand, 1998) based on the results of 12…
ERIC Educational Resources Information Center
Martinussen, Rhonda; Ferrari, Julia; Aitken, Madison; Willows, Dale
2015-01-01
This study examined the relations among perceived and actual knowledge of phonemic awareness (PA), exposure to PA instruction during practicum, and self-efficacy for teaching PA in a sample of 54 teacher candidates (TCs) enrolled in a 1-year Bachelor of Education program in a Canadian university. It also assessed the effects of a brief…
Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users
Newman, Rochelle S.; Goupell, Matthew J.
2017-01-01
Purpose: Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown whether adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes. Method: Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates. Twenty-three CI and 29 NH participants performed a phoneme identification task. NH participants heard the same unprocessed stimuli as the CI participants or stimuli degraded by a sine vocoder, simulating aspects of CI processing. Results: CI participants showed larger rate normalization effects (6.6 ms) than the NH participants (3.7 ms) and had shallower (less reliable) category boundary slopes. NH participants showed similarly shallow slopes when presented with acoustically degraded vocoded signals, but an equal or smaller rate effect in response to reductions in available spectral and temporal information. Conclusion: CI participants can rate normalize, despite their degraded speech input, and show a larger rate effect compared to NH participants. CI participants may particularly rely on rate normalization to better maintain perceptual constancy of the speech signal. PMID:28395319
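The boundary shift measured in such studies can be illustrated with a small sketch: estimate each identification function's 50% crossover and compare boundaries across speech rates. The identification proportions below are made up for illustration, not the study's data:

```python
def category_boundary(vots, prop_voiceless):
    """Estimate the phonemic category boundary: the VOT (ms) at which the
    identification function crosses 50%, by linear interpolation."""
    points = list(zip(vots, prop_voiceless))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 <= 0.5 <= y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    return None

# Hypothetical proportions of voiceless ("t") responses at each VOT step
slow_rate = category_boundary([10, 20, 30, 40, 50], [0.02, 0.10, 0.55, 0.90, 0.99])
fast_rate = category_boundary([10, 20, 30, 40, 50], [0.05, 0.40, 0.85, 0.95, 0.99])
print(slow_rate, fast_rate)  # boundary sits at shorter VOTs in the fast-rate context
```

The difference between the two boundaries (here a few milliseconds) is the rate normalization effect the paper quantifies, and the steepness of each function near the crossover corresponds to the category boundary slope.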
Audiovisual perceptual learning with multiple speakers.
Mitchel, Aaron D; Gerfen, Chip; Weiss, Daniel J
2016-05-01
One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g. the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome obstacles they face due to this variability during speech perception. Here we test whether visual contextual cues to speaker identity may facilitate the formation and maintenance of distributional representations for individual speakers, allowing listeners to adjust phoneme boundaries in a speaker-specific manner. We familiarized participants with an audiovisual continuum between /aba/ and /ada/. During familiarization, the "B-face" mouthed /aba/ when an ambiguous token was played, while the "D-face" mouthed /ada/. At test, the same ambiguous token was more likely to be identified as /aba/ when paired with a still image of the "B-face" than with an image of the "D-face." This was not the case in the control condition when the two faces were paired equally with the ambiguous token. Together, these results suggest that listeners may form speaker-specific phonemic representations using facial identity cues.
Lexical frequency effects on articulation: a comparison of picture naming and reading aloud
Mousikou, Petroula; Rastle, Kathleen
2015-01-01
The present study investigated whether lexical frequency, a variable that is known to affect the time taken to utter a verbal response, may also influence articulation. Pairs of words that differed in terms of their relative frequency, but were matched on their onset, vowel, and number of phonemes (e.g., map vs. mat, where the former is more frequent than the latter) were used in a picture naming and a reading aloud task. Low-frequency items yielded slower response latencies than high-frequency items in both tasks, with the frequency effect being significantly larger in picture naming compared to reading aloud. Also, initial-phoneme durations were longer for low-frequency items than for high-frequency items. The frequency effect on initial-phoneme durations was slightly more prominent in picture naming than in reading aloud, yet its size was very small, thus preventing us from concluding that lexical frequency exerts an influence on articulation. Additionally, initial-phoneme and whole-word durations were significantly longer in reading aloud compared to picture naming. We discuss our findings in the context of current theories of reading aloud and speech production, and the approaches they adopt in relation to the nature of information flow (staged vs. cascaded) between cognitive and articulatory levels of processing. PMID:26528223
Emergent Literacy in Thai Preschoolers: A Preliminary Study.
Yampratoom, Ramorn; Aroonyadech, Nawarat; Ruangdaraganon, Nichara; Roongpraiwan, Rawiwan; Kositprapa, Jariya
The aims of this study were to investigate emergent literacy skills in Thai preschoolers, including phonological awareness as assessed with an initial phoneme-matching task and letter knowledge as assessed with a letter-naming task, and to identify key factors associated with those skills. Four hundred twelve typically developing children in their final kindergarten year were enrolled in this study. Their emergent reading skills were measured by initial phoneme-matching and letter-naming tasks. Determinant variables, such as parents' education and teachers' perception, were collected by self-report questionnaires. The mean score of the initial phoneme-matching task was 4.5 (45% of a total of 10 scores). The mean score of the letter-naming task without a picture representing the target letter name was 30.2 (68.6% of a total of 44 scores), which increased to 38.8 (88.2% of a total of 44 scores) in the letter-naming task when a picture representing the target letter name was provided. Both initial phoneme-matching and letter-naming abilities were associated with the mother's education and household income. Letter-naming ability was also influenced by home reading activities and gender. This was a preliminary study into emergent literacy skills of Thai preschoolers. The findings supported the importance of focusing on phonological awareness and phonics, especially in the socioeconomically disadvantaged group.
Cattaneo, Z; Pisoni, A; Papagno, C
2011-06-02
Previous studies have demonstrated that transcranial direct current stimulation (tDCS) can be proficiently used to modulate attentional and cognitive functions. For instance, in the language domain there is evidence that tDCS can speed up picture naming in both healthy individuals and aphasic patients, or improve grammar learning. In this study, we investigated whether tDCS can be used to increase healthy subjects' performance in phonemic and semantic fluency tasks, which are typically used in clinical assessment of language. Ten healthy individuals performed a semantic and a phonemic fluency task following anodal tDCS applied over Broca's region. Each participant underwent a real and a sham tDCS session. Participants were found to produce more words following real anodal tDCS in both the phonemic and the semantic fluency tasks. Control experiments ascertained that this finding did not depend upon nonspecific effects of tDCS over levels of general arousal or attention or upon participants' expectations. These data confirm the efficacy of tDCS in transiently improving language functions by showing that anodal stimulation of Broca's region can enhance verbal fluency. Implications of these results for the treatment of language functions in aphasia are considered. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Basirat, Anahita
2017-01-01
Cochlear implant (CI) users frequently achieve good speech understanding based on phoneme and word recognition. However, there is a significant variability between CI users in processing prosody. The aim of this study was to examine the abilities of an excellent CI user to segment continuous speech using intonational cues. A post-lingually deafened adult CI user and 22 normal hearing (NH) subjects segmented phonemically identical and prosodically different sequences in French such as 'l'affiche' (the poster) versus 'la fiche' (the sheet), both [lafiʃ]. All participants also completed a minimal pair discrimination task. Stimuli were presented in auditory-only and audiovisual presentation modalities. The performance of the CI user in the minimal pair discrimination task was 97% in the auditory-only and 100% in the audiovisual condition. In the segmentation task, contrary to the NH participants, the performance of the CI user did not differ from the chance level. Visual speech did not improve word segmentation. This result suggests that word segmentation based on intonational cues is challenging when using CIs even when phoneme/word recognition is very well rehabilitated. This finding points to the importance of the assessment of CI users' skills in prosody processing and the need for specific interventions focusing on this aspect of speech communication.
Richardson-Klavehn, A; Gardiner, J M
1998-05-01
Depth-of-processing effects on incidental perceptual memory tests could reflect (a) contamination by voluntary retrieval, (b) sensitivity of involuntary retrieval to prior conceptual processing, or (c) a deficit in lexical processing during graphemic study tasks that affects involuntary retrieval. The authors devised an extension of incidental test methodology--making conjunctive predictions about response times as well as response proportions--to discriminate among these alternatives. They used graphemic, phonemic, and semantic study tasks, and a word-stem completion test with incidental, intentional, and inclusion instructions. Semantic study processing was superior to phonemic study processing in the intentional and inclusion tests, but semantic and phonemic study processing produced equal priming in the incidental test, showing that priming was uncontaminated by voluntary retrieval--a conclusion reinforced by the response-time data--and that priming was insensitive to prior conceptual processing. The incidental test nevertheless showed a priming deficit following graphemic study processing, supporting the lexical-processing hypothesis. Adding a lexical decision to the 3 study tasks eliminated the priming deficit following graphemic study processing, but did not influence priming following phonemic and semantic processing. The results provide the first clear evidence that depth-of-processing effects on perceptual priming can reflect lexical processes, rather than voluntary contamination or conceptual processes.
A proposed mechanism for rapid adaptation to spectrally distorted speech.
Azadpour, Mahan; Balaban, Evan
2015-07-01
The mechanisms underlying perceptual adaptation to severely spectrally-distorted speech were studied by training participants to comprehend spectrally-rotated speech, which is obtained by inverting the speech spectrum. Spectral-rotation produces severe distortion confined to the spectral domain while preserving temporal trajectories. During five 1-hour training sessions, pairs of participants attempted to extract spoken messages from the spectrally-rotated speech of their training partner. Data on training-induced changes in comprehension of spectrally-rotated sentences and identification/discrimination of spectrally-rotated phonemes were used to evaluate the plausibility of three different classes of underlying perceptual mechanisms: (1) phonemic remapping (the formation of new phonemic categories that specifically incorporate spectrally-rotated acoustic information); (2) experience-dependent generation of a perceptual "inverse-transform" that compensates for spectral-rotation; and (3) changes in cue weighting (the identification of sets of acoustic cues least affected by spectral-rotation, followed by a rapid shift in perceptual emphasis to favour those cues, combined with the recruitment of the same type of "perceptual filling-in" mechanisms used to disambiguate speech-in-noise). Results exclusively support the third mechanism, which is the only one predicting that learning would specifically target temporally-dynamic cues that were transmitting phonetic information most stably in spite of spectral-distortion. No support was found for phonemic remapping or for inverse-transform generation.
Hagan-Burke, Shanna; Coyne, Michael D; Kwok, Oi-Man; Simmons, Deborah C; Kim, Minjung; Simmons, Leslie E; Skidmore, Susan T; Hernandez, Caitlin L; McSparran Ruby, Maureen
2013-01-01
This exploratory study examined the influences of student, teacher, and setting characteristics on kindergarteners' early reading outcomes and investigated whether those relations were moderated by type of intervention. Participants included 206 kindergarteners identified as at risk for reading difficulties and randomly assigned to one of two supplemental interventions: (a) an experimental explicit, systematic, code-based program or (b) their schools' typical kindergarten reading intervention. Results from separate multilevel structural equation models indicated that among student variables, entry-level alphabet knowledge was positively associated with phonemic and decoding outcomes in both conditions. Entry-level rapid automatized naming also positively influenced decoding outcomes in both conditions. However, its effect on phonemic outcomes was statistically significant only among children in the typical practice comparison condition. Regarding teacher variables, the quality of instruction was associated with significantly higher decoding outcomes in the typical reading intervention condition but had no statistically significant influence on phonemic outcomes in either condition. Among setting variables, instruction in smaller group sizes was associated with better phonemic outcomes in the comparison condition but had no statistically significant influence on outcomes of children in the intervention group. Mode of delivery (i.e., pullout vs. in class) had no statistically significant influence on either outcome variable.
Responsiveness to Intervention in Children with Dyslexia.
Tilanus, Elisabeth A T; Segers, Eliane; Verhoeven, Ludo
2016-08-01
We examined the responsiveness to a 12-week phonics intervention in 54 second-grade Dutch children with dyslexia, and compared their reading and spelling gains to a control group of 61 typical readers. The intervention aimed to train grapheme-phoneme correspondences (GPCs), and word reading and spelling by using phonics instruction. We examined the accuracy and efficiency of grapheme-phoneme correspondences, decoding words and pseudowords, as well as the accuracy of spelling words before and after the intervention. Moreover, responsiveness to intervention was examined by studying to what extent scores at posttest could directly or indirectly be predicted from precursor measures. Results showed that the children with dyslexia were significantly behind in all reading and spelling measures at pretest. During the intervention, the children with dyslexia made more progress on GPC, (pseudo)word decoding accuracy and efficiency, and spelling accuracy than the typical reading group. Furthermore, we found a direct effect of the precursor measures rapid automatized naming, verbal working memory and phoneme deletion on the dyslexic children's progress in GPC speed, and indirect effects of rapid automatized naming and phoneme deletion on word and pseudoword efficiency and word decoding accuracy via the scores at pretest. Copyright © 2016 John Wiley & Sons, Ltd.
Wolff, Ulrika
2014-07-01
Although phonemic awareness is a well-known factor predicting early reading development, there is also evidence that Rapid Automatized Naming (RAN) is an independent factor that contributes to early reading. The aim of this study is to examine phonemic awareness and RAN as predictors of reading speed, reading comprehension and spelling for children with reading difficulties. It also investigates a possible reciprocal relationship between RAN and reading skills, and the possibility of enhancing RAN by intervention. These issues are addressed by examining longitudinal data from a randomised reading intervention study carried out in Sweden for 9-year-old children with reading difficulties (N = 112). The intervention comprised three main elements: training of phonics, reading comprehension strategies and reading speed. The analysis of the data was carried out using structural equation modelling. The results demonstrated that after controlling for autoregressive effects and non-verbal IQ, RAN predicts reading speed whereas phonemic awareness predicts reading comprehension and spelling. RAN was significantly enhanced by training and a reciprocal relationship between reading speed and RAN was found. These findings contribute to support the view that both phonemic awareness and RAN independently influence early phases of reading, and that both are possible to enhance by training.
Nakamura, Miyoko; Kolinsky, Régine
2014-12-01
We explored the functional units of speech segmentation in Japanese using dichotic presentation and a detection task requiring no intentional sublexical analysis. Indeed, illusory perception of a target word might result from preattentive migration of phonemes, morae, or syllables from one ear to the other. In Experiment 1, Japanese listeners detected targets presented in hiragana and/or kanji. Phoneme migrations did occur, suggesting that orthography-independent sublexical constituents play some role in segmentation. However, syllable and especially mora migrations were more numerous. This pattern of results was not observed in French speakers (Experiment 2), suggesting that it reflects native segmentation in Japanese. To control for the intervention of kanji representations (many words are written in kanji, and one kanji often corresponds to one syllable), in Experiment 3, Japanese listeners were presented with target loanwords that can be written only in katakana. Again, phoneme migrations occurred, while the first mora and syllable led to similar rates of illusory percepts. No migration occurred for the second, "special" mora (/J/ or /N/), probably because this constitutes the latter part of a heavy syllable. Overall, these findings suggest that multiple units, such as morae, syllables, and even phonemes, function independently of orthographic knowledge in Japanese preattentive speech segmentation.
Text-to-phonemic transcription and parsing into mono-syllables of English text
NASA Astrophysics Data System (ADS)
Jusgir Mullick, Yugal; Agrawal, S. S.; Tayal, Smita; Goswami, Manisha
2004-05-01
The present paper describes a program that converts English text (entered through the normal computer keyboard) into its phonemic representation and then parses it into mono-syllables. For every letter, a set of context-based rules is defined in lexical order. A default rule is also defined separately for each letter. Beginning from the first letter of the word, the rules are checked and the most appropriate rule is applied to the letter to find its phonemic representation. If no matching rule is found, then the default rule is applied. The current rule sets the next position to be analyzed. Proceeding in the same manner, the phonemic representation of each word can be found. For example, "reading" is represented as "rEdiNX" by applying the following rules: r --> r (move 1 position ahead); ead --> Ed (move 3 positions ahead); i --> i (move 1 position ahead); ng --> NX (move 2 positions ahead, i.e., end of word). The phonemic representations obtained from the above procedure are parsed to get mono-syllabic representations for various combinations such as CVC, CVCC, CV, CVCVC, etc. For example, the above phonemic representation will be parsed as rEdiNX --> /rE/ /diNX/. This study is a part of developing TTS for Indian English.
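The rule-driven conversion described in this abstract (per-letter context rules tried most-specific first, a per-letter default fallback, and an advance count set by the matched rule) can be sketched as follows. This is a minimal illustrative sketch, not the paper's program: the rule table contains only the four rules quoted for "reading", and the fallback simply copies the letter, standing in for the paper's per-letter default rules.

```python
# Sketch of the rule-driven text-to-phoneme conversion described above.
# The RULES table holds only the four example rules quoted for "reading";
# the paper's full context-based rule sets are not reproduced here.
# Each rule is (letter pattern, phonemic output, positions to advance).

RULES = {
    "r": [("r", "r", 1)],
    "e": [("ead", "Ed", 3), ("e", "e", 1)],
    "i": [("i", "i", 1)],
    "n": [("ng", "NX", 2), ("n", "n", 1)],
}

def to_phonemes(word: str) -> str:
    out, pos = [], 0
    while pos < len(word):
        # Rules for the current letter are tried in order, most specific first.
        for pattern, phones, advance in RULES.get(word[pos], []):
            if word.startswith(pattern, pos):
                out.append(phones)
                pos += advance  # the matched rule sets the next position
                break
        else:
            # Stand-in for the per-letter default rule: copy the letter.
            out.append(word[pos])
            pos += 1
    return "".join(out)

print(to_phonemes("reading"))  # -> rEdiNX
```

Listing the most specific (longest) pattern first within each letter's rule list is what makes "ead" win over the single-letter "e" rule, mirroring the worked example in the abstract.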
The voiced pronunciation of initial phonemes predicts the gender of names.
Slepian, Michael L; Galinsky, Adam D
2016-04-01
Although it is known that certain names gain popularity within a culture because of historical events, it is unknown how names become associated with different social categories in the first place. We propose that vocal cord vibration during the pronunciation of an initial phoneme plays a critical role in explaining which names are assigned to males versus females. This produces a voiced gendered name effect, whereby voiced phonemes (vibration of the vocal cords) are more associated with male names, and unvoiced phonemes (no vibration of the vocal cords) are more associated with female names. Eleven studies test this association between voiced names and gender (a) using 270 million names (more than 80,000 unique names) given to children over 75 years, (b) names across 2 cultures (the U.S. and India), and (c) hundreds of novel names. The voiced gendered name effect was mediated through how hard or soft names sounded, and moderated by gender stereotype endorsement. Although extensive work has demonstrated morphological and physical cues to gender (e.g., facial, bodily, vocal), this work provides a systematic account of name-based cues to gender. Overall, the current research extends work on sound symbolism to names; the way in which a name sounds can be symbolically related to stereotypes associated with its social category.
Frontal and temporal lobe involvement on verbal fluency measures in amyotrophic lateral sclerosis.
Lepow, Lauren; Van Sweringen, James; Strutt, Adriana M; Jawaid, Ali; MacAdam, Claire; Harati, Yadollah; Schulz, Paul E; York, Michele K
2010-11-01
Amyotrophic lateral sclerosis (ALS) has been associated with changes in frontal and temporal lobe-mediated cognitive and behavioral functions. Verbal fluency, a sensitive measure to these changes, was utilized to investigate phonemic and semantic abilities in 49 ALS patients and 25 healthy controls (HCs). A subset of the ALS patients was classified as ALS-intact, ALS with mild cognitive impairments (ALS-mild), and ALS with fronto-temporal dementia (ALS-FTD) based on a comprehensive neuropsychological evaluation. Clustering and switching, the underlying component processes of verbal fluency, were analyzed using Troyer's (Troyer, Moscovitch, & Winocur, 1997) and Abwender's (Abwender, Swan, Bowerman, & Connolly, 2001) scoring systems. ALS patients exhibited decreased fluency versus HCs. For phonemic fluency, the intact ALS sample generated fewer clusters and more switches than the ALS-mild and ALS-FTD patients using both scoring systems. This suggests temporal involvement in ALS patients, with increasing frontal lobe involvement in patients with greater cognitive dysfunction. For semantic fluency, similar results were obtained with a greater emphasis on declines in clustering or increased temporal lobe dysfunction. These results suggest that verbal fluency measures identify frontal and temporal lobe involvement in the cognitive decline associated with ALS, particularly when the component processes are evaluated. The clinical utility of these scoring systems with ALS patients is also discussed.
Is the orthographic/phonological onset a single unit in reading aloud?
Mousikou, Petroula; Coltheart, Max; Saunders, Steven; Yen, Lisa
2010-02-01
Two main theories of visual word recognition have been developed regarding the way orthographic units in printed words map onto phonological units in spoken words. One theory suggests that a string of single letters or letter clusters corresponds to a string of phonemes (Coltheart, 1978; Venezky, 1970), while the other suggests that a string of single letters or letter clusters corresponds to coarser phonological units, for example, onsets and rimes (Treiman & Chafetz, 1987). These theoretical assumptions were critical for the development of coding schemes in prominent computational models of word recognition and reading aloud. In a reading-aloud study, we tested whether the human reading system represents the orthographic/phonological onset of printed words and nonwords as single units or as separate letters/phonemes. Our results, which favored a letter and not an onset-coding scheme, were successfully simulated by the dual-route cascaded (DRC) model (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001). A separate experiment was carried out to further adjudicate between 2 versions of the DRC model.
ERIC Educational Resources Information Center
Suortti, Outi; Lipponen, Lasse
2014-01-01
The present study is the first part of a longitudinal research project investigating whether children become more aware of phonemes or rhyming when they learn letters or letter sounds or even begin to read, and if so how. For the present paper, the phonological awareness of young children aged 2-6 years was analyzed, particularly their auditory…
Verbal fluency in bilingual Spanish/English Alzheimer's disease patients.
Salvatierra, Judy; Rosselli, Monica; Acevedo, Amarilis; Duara, Ranjan
2007-01-01
Studies have demonstrated that in verbal fluency tests, monolinguals with Alzheimer's disease (AD) show greater difficulties retrieving words based on semantic rather than phonemic rules. The present study aimed to determine whether this difficulty was reproduced in both languages of Spanish/English bilinguals with mild to moderate AD whose primary language was Spanish. Performance on semantic and phonemic verbal fluency of 11 bilingual AD patients was compared to the performance of 11 cognitively normal, elderly bilingual individuals matched for gender, age, level of education, and degree of bilingualism. Cognitively normal subjects retrieved significantly more items under the semantic condition compared to the phonemic, whereas the performance of AD patients was similar under both conditions, suggesting greater decline in semantic verbal fluency tests. This pattern was produced in both languages, implying a related semantic decline in both languages. Results from this study should be considered preliminary because of the small sample size.
Whissell, Cynthia
2003-06-01
56 samples (n > half a million phonemes) of names (e.g., men's, women's, jets'), song lyrics (e.g., Paul Simon's, rap, Beatles'), poems (frequently anthologized English poems), and children's materials (books directed at children ages 3-10 years) were used to study a proposed new measure of English language samples--Pronounceability--based on children's mastery of some phonemes in advance of others. This measure was provisionally equated with greater "youthfulness" and "playfulness" in language samples and with less "maturity." Findings include the facts that women's names were less pronounceable than men's and that poetry was less pronounceable than song lyrics or children's materials. In a supplementary study, 13 university student volunteers' assessments of the youth of randomly constructed names were linearly related to how pronounceable each name was (eta = .8), providing construct validity for the interpretation of Pronounceability as a measure of Youthfulness.
The status of the concept of 'phoneme' in psycholinguistics.
Uppstad, Per Henning; Tønnessen, Finn Egil
2010-10-01
The notion of the phoneme counts as a breakthrough of modern theoretical linguistics in the early twentieth century. It paved the way for descriptions of distinctive features at different levels in linguistics. Although it has since had a turbulent existence across shifting theoretical positions, it remains a powerful concept of a fundamental unit in spoken language. At the same time, its conceptual status remains highly unclear. The present article aims to clarify the status of the concept of 'phoneme' in psycholinguistics, based on the scientific concepts of description, understanding and explanation. Theoretical linguistics has provided mainly descriptions. The ideas underlying this article are, first, that these descriptions may not be directly relevant to psycholinguistics and, second, that psycholinguistics in this sense is not a sub-discipline of theoretical linguistics. Rather, these two disciplines operate with different sets of features and with different orientations when it comes to the scientific concepts of description, understanding and explanation.
Alveolar and Velarized Laterals in Albanian and in the Viennese Dialect.
Moosmüller, Sylvia; Schmid, Carolin; Kasess, Christian H
2016-12-01
A comparison of alveolar and velarized lateral realizations in two language varieties, Albanian and the Viennese dialect, has been performed. Albanian distinguishes the two laterals phonemically, whereas in the Viennese dialect, the velarized lateral was introduced by language contact with Czech immigrants. A categorical distinction between the two lateral phonemes is fully maintained in Albanian. Results are not as straightforward in the Viennese dialect. Most prominently, female speakers, if at all, realize the velarized lateral in word-final position, thus indicating the application of a phonetically motivated process. The realization of the velarized lateral by male speakers, on the other hand, indicates that the velarized lateral replaced the former alveolar lateral phoneme. Alveolar laterals are either realized in perceptually salient positions, thus governed by an input-switch rule, or in front vowel contexts, thus subject to coarticulatory influences. Our results illustrate the subtle interplay of phonology, phonetics and sociolinguistics.
The cognitive foundations of reading and arithmetic skills in 7- to 10-year-olds.
Durand, Marianne; Hulme, Charles; Larkin, Rebecca; Snowling, Margaret
2005-06-01
A range of possible predictors of arithmetic and reading were assessed in a large sample (N=162) of children between ages 7 years 5 months and 10 years 4 months. A confirmatory factor analysis of the predictors revealed a good fit to a model consisting of four latent variables (verbal ability, nonverbal ability, search speed, and phonological memory) and two manifest variables (digit comparison and phoneme deletion). A path analysis showed that digit comparison and verbal ability were unique predictors of variations in arithmetic skills, whereas phoneme deletion and verbal ability were unique predictors of variations in reading skills. These results confirm earlier findings that phoneme deletion ability appears to be a critical foundation for learning to read (decode). In addition, variations in the speed of accessing numerical quantity information appear to be a critical foundation for the development of arithmetic skills.
Levels of Phonology Related to Reading and Writing in Middle Childhood
ERIC Educational Resources Information Center
Del Campo, Roxana; Buchanan, William R.; Abbott, Robert D.; Berninger, Virginia W.
2015-01-01
The relationships of different levels of phonological processing (sounds in heard and spoken words for whole words, syllables, phonemes, and rimes) to multi-leveled functional reading or writing systems were studied. Participants in this cross-sectional study were students in fourth-grade (n = 119, mean age 116.5 months) and sixth-grade (n = 105,…
ERIC Educational Resources Information Center
Woore, Robert
2010-01-01
"Decoding"--converting the written symbols (or graphemes) of an alphabetical writing system into the sounds (or phonemes) they represent, using knowledge of the language's symbol/sound correspondences--has been argued to be an important but neglected skill in the teaching of second language (L2) French in English secondary schools.…
Expressive facial animation synthesis by learning speech coarticulation and expression spaces.
Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth
2006-01-01
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
Ferguson, Melanie A; Henshaw, Helen; Clark, Daniel P A; Moore, David R
2014-01-01
The aims of this study were to (i) evaluate the efficacy of phoneme discrimination training for hearing and cognitive abilities of adults aged 50 to 74 years with mild sensorineural hearing loss who were not users of hearing aids, and to (ii) determine participant compliance with a self-administered, computer-delivered, home- and game-based auditory training program. This study was a randomized controlled trial with repeated measures and crossover design. Participants were trained and tested over an 8- to 12-week period. One group (Immediate Training) trained between weeks 1 and 4. A second waitlist group (Delayed Training) did no training between weeks 1 and 4, but then trained between weeks 5 and 8. On-task (phoneme discrimination) and transferable outcome measures (speech perception, cognition, self-report of hearing disability) for both groups were obtained at weeks 0, 4, and 8, and for the Delayed Training group only at week 12. Robust phoneme discrimination learning was found for both groups, with the largest improvements in threshold shown for those with the poorest initial thresholds. Between weeks 1 and 4, the Immediate Training group showed moderate, significant improvements on self-report of hearing disability, divided attention, and working memory, specifically for conditions or situations that were more complex and therefore more challenging. Training did not result in consistent improvements in speech perception in noise. There was no evidence of any test-retest effects between weeks 1 and 4 for the Delayed Training group. Retention of benefit at 4 weeks post-training was shown for phoneme discrimination, divided attention, working memory, and self-report of hearing disability. Improved divided attention and reduced self-reported hearing difficulties were highly correlated. It was observed that phoneme discrimination training benefits some but not all people with mild hearing loss.
Evidence presented here, together with that of other studies that used different training stimuli, suggests that auditory training may facilitate cognitive skills that index executive function and the self-perception of hearing difficulty in challenging situations. The development of cognitive skills may be more important than the development of sensory skills for improving communication and speech perception in everyday life. However, improvements were modest. Outcome measures need to be appropriately challenging to be sensitive to the effects of the relatively small amount of training performed.
Finding Acoustic Regularities in Speech: Applications to Phonetic Recognition
1988-12-01
In this approach, the speech signal is first transformed into acoustic segments which are described completely in acoustic terms. Next, these acoustic segments are related to the phonemes by a grammar which is determined using automated procedures operating on a set of training data.
[Improvement in Phoneme Discrimination in Noise in Normal Hearing Adults].
Schumann, A; Garea Garcia, L; Hoppe, U
2017-02-01
Objective: The aim of the study was to examine whether phoneme discrimination in noise can be trained in normal-hearing adults, and whether such training improves speech recognition in noise. A specific computerised training program, consisting of nonsense syllables presented in background noise, was used to train the participants' discrimination ability. Material and Methods: 46 normal-hearing subjects took part in this study, 28 as training group participants and 18 as control group participants. Only the training group subjects trained, twice a week for an hour over a period of 3 weeks, with the computer-based training program. Speech recognition in noise was measured pre- and post-training for the training group subjects with the Freiburger Einsilber Test. The control group subjects completed test and retest measures separated by a 2-3 week interval. For the training group, follow-up speech recognition was measured 2-3 months after the end of the training. Results: The majority of training group subjects improved their phoneme discrimination significantly. In addition, their speech recognition in noise improved significantly during the training compared to the control group, and remained stable over time. Conclusions: Phoneme discrimination in noise can be trained by normal-hearing adults. The improvements have a positive effect on speech recognition in noise that persists over a longer period of time.
Preservice and inservice teachers' knowledge of language constructs in Finland.
Aro, Mikko; Björn, Piia Maria
2016-04-01
The aim of the study was to explore the Finnish preservice and inservice teachers' knowledge of language constructs relevant for literacy acquisition. A total of 150 preservice teachers and 74 inservice teachers participated in the study by filling out a questionnaire that assessed self-perceived expertise in reading instruction, knowledge of phonology and phonics, and knowledge of morphology. The inservice teachers outperformed the preservice teachers in knowledge of phonology and phonics, as well as morphology. Both groups' knowledge of morphology was markedly lower than their knowledge of phonology and phonics. Because early reading instruction does not focus on the morphological level of language but is phonics-based, this result was expected. However, the findings also revealed a lack of explicit knowledge of basic phonological constructs and less-than-optimal phonemic awareness skills in both groups. Problems in phonemic skills manifested mostly as responding to the phonological tasks based on orthographic knowledge, which reflects an overreliance on the one-to-one correspondence between graphemes and phonemes. The preservice teachers' perceptions of expertise were weakly related to their knowledge and skills. Among the inservice teachers, perceived expertise and knowledge of language constructs were completely unrelated. Although the study was exploratory, these findings suggest that within the Finnish teacher education there is a need to focus more on explicit content studies for language structures and the concepts relevant for literacy instruction, as well as phonological and phonemic skills.
Impact of Cyrillic on Native English Speakers' Phono-lexical Acquisition of Russian.
Showalter, Catherine E
2018-03-01
We investigated the influence of grapheme familiarity and native language grapheme-phoneme correspondences during second language lexical learning. Native English speakers learned Russian-like words via auditory presentations containing only familiar first language phones, pictured meanings, and exposure to either Cyrillic orthographic forms (Orthography condition) or the sequence
Blair, Christopher David; Berryhill, Marian E.
2013-01-01
Grapheme-color synesthetes experience color, not physically present, when viewing symbols. Synesthetes cannot remember learning these associations. Must synesthetic percepts be formed during a sensitive period? Can they form later and be consistent? What determines their nature? We tested grapheme-color synesthete MC2 before, during, and after she studied Hindi abroad. We investigated whether novel graphemes elicited synesthetic percepts, changed with familiarity, and/or benefited from phonemic information. MC2 reported color percepts for novel Devanagari and Hebrew graphemes. MC2 monitored these percepts over 6 months in a Hindi-speaking environment. MC2 and synesthete DN reported synesthetic percepts for Armenian graphemes, or Cyrillic graphemes + phonemes over time. Synesthetes, not controls, reported color percepts for novel graphemes that gained consistency over time. Phonemic information did not enhance consistency. Thus, synesthetes can form and consolidate percepts to novel graphemes as adults. These percepts may depend on pre-existing grapheme-color relationships but they can flexibly shift with familiarity.
Teki, Sundeep; Barnes, Gareth R; Penny, William D; Iverson, Paul; Woodhead, Zoe V J; Griffiths, Timothy D; Leff, Alexander P
2013-06-01
In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired.
Early Bimodal Stimulation Benefits Language Acquisition for Children With Cochlear Implants.
Moberly, Aaron C; Lowenstein, Joanna H; Nittrouer, Susan
2016-01-01
Adding a low-frequency acoustic signal to the cochlear implant (CI) signal (i.e., bimodal stimulation) for a period of time early in life improves language acquisition. Children must acquire sensitivity to the phonemic units of language to develop most language-related skills, including expressive vocabulary, working memory, and reading. Acquiring sensitivity to phonemic structure depends largely on having refined spectral (frequency) representations available in the signal, which does not happen with CIs alone. Combining the low-frequency acoustic signal available through hearing aids with the CI signal can enhance signal quality. A period with this bimodal stimulation has been shown to improve language skills in very young children. This study examined whether these benefits persist into childhood. Data were examined for 48 children with CIs implanted under age 3 years, participating in a longitudinal study. All children wore hearing aids before receiving a CI, but upon receiving a first CI, 24 children had at least 1 year of bimodal stimulation (Bimodal group), and 24 children had only electric stimulation subsequent to implantation (CI-only group). Measures of phonemic awareness were obtained at second and fourth grades, along with measures of expressive vocabulary, working memory, and reading. Children in the Bimodal group generally performed better on measures of phonemic awareness, and that advantage was reflected in other language measures. Having even a brief period of time early in life with combined electric-acoustic input provides benefits to language learning into childhood, likely because of the enhancement in spectral representations provided.
Moulin, Annie; Bernard, André; Tordella, Laurent; Vergne, Judith; Gisbert, Annie; Martin, Christian; Richard, Céline
2017-05-01
Speech perception scores are widely used to assess patients' functional hearing, yet most linguistic material used in these audiometric tests dates from before the availability of large computerized linguistic databases. In an ENT clinic population of 120 patients with median hearing loss of 43-dB HL, we quantified the variability and the sensitivity of speech perception scores to hearing loss, measured using disyllabic word lists, as a function of both the number of ten-word lists and the type of scoring used (word, syllables or phonemes). The mean word recognition scores varied significantly across lists, from 54 to 68%. The median of the variability of the word recognition score ranged from 30% for one ten-word list down to 20% for three ten-word lists. Syllabic and phonemic scores showed much less variability, with standard deviations decreasing by 1.15 with the use of syllabic scores and by 1.45 with phonemic scores. The sensitivity of each list to hearing loss and distortions varied significantly. There was an increase in the minimum effect size that could be detected for syllabic scores compared to word scores, with no significant further improvement with phonemic scores. The use of at least two ten-word lists, scored in syllables rather than in whole words, contributed to a large decrease in variability and an increase in sensitivity to hearing loss. However, these results emphasize the need for updated linguistic material in clinical speech score assessments.
What's in a Name? Sound Symbolism and Gender in First Names.
Sidhu, David M; Pexman, Penny M
2015-01-01
Although the arbitrariness of language has been considered one of its defining features, studies have demonstrated that certain phonemes tend to be associated with certain kinds of meaning. A well-known example is the Bouba/Kiki effect, in which nonwords like bouba are associated with round shapes while nonwords like kiki are associated with sharp shapes. These sound symbolic associations have thus far been limited to nonwords. Here we tested whether or not the Bouba/Kiki effect extends to existing lexical stimuli; in particular, real first names. We found that the roundness/sharpness of the phonemes in first names impacted whether the names were associated with round or sharp shapes in the form of character silhouettes (Experiments 1a and 1b). We also observed an association between femaleness and round shapes, and maleness and sharp shapes. We next investigated whether this association would extend to the features of language and found the proportion of round-sounding phonemes was related to name gender (Analysis of Category Norms). Finally, we investigated whether sound symbolic associations for first names would be observed for other abstract properties; in particular, personality traits (Experiment 2). We found that adjectives previously judged to be either descriptive of a figuratively 'round' or a 'sharp' personality were associated with names containing either round- or sharp-sounding phonemes, respectively. These results demonstrate that sound symbolic associations extend to existing lexical stimuli, providing a new example of non-arbitrary mappings between form and meaning.
Phonological skills and their role in learning to read: a meta-analytic review.
Melby-Lervåg, Monica; Lyster, Solveig-Alma Halaas; Hulme, Charles
2012-03-01
The authors report a systematic meta-analytic review of the relationships among 3 of the most widely studied measures of children's phonological skills (phonemic awareness, rime awareness, and verbal short-term memory) and children's word reading skills. The review included both extreme group studies and correlational studies with unselected samples (235 studies were included, and 995 effect sizes were calculated). Results from extreme group comparisons indicated that children with dyslexia show a large deficit on phonemic awareness in relation to typically developing children of the same age (pooled effect size estimate: -1.37) and children matched on reading level (pooled effect size estimate: -0.57). There were significantly smaller group deficits on both rime awareness and verbal short-term memory (pooled effect size estimates: rime skills in relation to age-matched controls, -0.93, and reading-level controls, -0.37; verbal short-term memory skills in relation to age-matched controls, -0.71, and reading-level controls, -0.09). Analyses of studies of unselected samples showed that phonemic awareness was the strongest correlate of individual differences in word reading ability and that this effect remained reliable after controlling for variations in both verbal short-term memory and rime awareness. These findings support the pivotal role of phonemic awareness as a predictor of individual differences in reading development. We discuss whether such a relationship is a causal one and the implications of research in this area for current approaches to the teaching of reading and interventions for children with reading difficulties.
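The pooled effect size estimates quoted above (e.g. -1.37 for phonemic awareness) are weighted averages of per-study standardized mean differences, conventionally computed with inverse-variance weights. A minimal fixed-effect sketch of that pooling step (the effect sizes and variances below are invented, and the review's actual meta-analytic model was more elaborate than this toy):

```python
# Illustrative fixed-effect pooling of standardized mean differences
# via inverse-variance weighting. The inputs are made-up values, not
# data from the review itself.

def pooled_effect(effects, variances):
    """Return the inverse-variance weighted mean effect and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# three hypothetical dyslexia-vs-control phonemic awareness deficits
d, v = pooled_effect([-1.2, -1.5, -1.4], [0.04, 0.09, 0.06])
```

A random-effects analysis would additionally fold between-study heterogeneity (a tau-squared term) into each weight; the aggregation logic is otherwise the same.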
Hemispheric asymmetry of auditory steady-state responses to monaural and diotic stimulation.
Poelmans, Hanne; Luts, Heleen; Vandermosten, Maaike; Ghesquière, Pol; Wouters, Jan
2012-12-01
Amplitude modulations in the speech envelope are crucial elements for speech perception. These modulations comprise the processing rate at which syllabic (~3-7 Hz), and phonemic transitions occur in speech. Theories about speech perception hypothesize that each hemisphere in the auditory cortex is specialized in analyzing modulations at different timescales, and that phonemic-rate modulations of the speech envelope lateralize to the left hemisphere, whereas right lateralization occurs for slow, syllabic-rate modulations. In the present study, neural processing of phonemic- and syllabic-rate modulations was investigated with auditory steady-state responses (ASSRs). ASSRs to speech-weighted noise stimuli, amplitude modulated at 4, 20, and 80 Hz, were recorded in 30 normal-hearing adults. The 80 Hz ASSR is primarily generated by the brainstem, whereas 20 and 4 Hz ASSRs are mainly cortically evoked and relate to speech perception. Stimuli were presented diotically (same signal to both ears) and monaurally (one signal to the left or right ear). For 80 Hz, diotic ASSRs were larger than monaural responses. This binaural advantage decreased with decreasing modulation frequency. For 20 Hz, diotic ASSRs were equal to monaural responses, while for 4 Hz, diotic responses were smaller than monaural responses. Comparison of left and right ear stimulation demonstrated that, with decreasing modulation rate, a gradual change from ipsilateral to right lateralization occurred. Together, these results (1) suggest that ASSR enhancement to binaural stimulation decreases in the ascending auditory system and (2) indicate that right lateralization is more prominent for low-frequency ASSRs. These findings may have important consequences for electrode placement in clinical settings, as well as for the understanding of low-frequency ASSR generation.
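Hemispheric asymmetry of the kind described above is commonly summarized with a normalized laterality index over response amplitudes. A minimal sketch (the index definition is a standard convention, not necessarily the exact measure used in this study, and the amplitude values are invented):

```python
# Toy laterality index over left- and right-hemisphere response
# amplitudes; positive values indicate right-hemisphere dominance.
# The amplitudes here are illustrative, not ASSR data from the study.

def laterality_index(left_amp, right_amp):
    """(R - L) / (R + L), bounded in [-1, 1]."""
    return (right_amp - left_amp) / (right_amp + left_amp)

li_4hz = laterality_index(0.8, 1.2)  # a right-lateralized low-rate response
```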
Combining Multiple Knowledge Sources for Speech Recognition
1988-09-15
[OCR-garbled abstract; only fragments are recoverable. These mention selecting a best adaptation sentence, rapid adaptation sentences and spelled phrases, a speaker-dependent database of resource-management sentences, and the BYBLOS system combining smoothed phoneme models with detailed context models, tested on a standard database.]
Semiotic systems with duality of patterning and the issue of cultural replicators.
Schaden, Gerhard; Patin, Cédric
2017-11-14
Two major works in recent evolutionary biology have in different ways touched upon the issue of cultural replicators in language, namely Dawkins' Selfish Gene and Maynard Smith and Szathmáry's Major Transitions in Evolution. In the latter, the emergence of language is referred to as the last major transition in evolution (for the time being), a claim we argue to be derived from a crucial property of language, called Duality of Patterning. Prima facie, this property makes natural language look like a structural equivalent to DNA, and its peer in terms of expressive power. We will argue that, if one takes seriously Maynard Smith and Szathmáry's outlook and examines what has been proposed as linguistic replicators, amongst others phonemes and words, the analogy meme-gene becomes problematic. A key issue is the fact that genes and memes are assumed to carry and transmit information, while what has been described as the best candidate for replicatorhood in language, i.e. the phoneme, does by definition not carry meaning. We will argue that semiotic systems with Duality of Patterning (like natural languages) force us to reconsider either the analogy between replicators in the biological and the cultural domain, or what it is to be a replicator in linguistics.
Phoneme Error Pattern by Heritage Speakers of Spanish on an English Word Recognition Test.
Shi, Lu-Feng
2017-04-01
Heritage speakers acquire their native language from home use in their early childhood. As the native language is typically a minority language in the society, these individuals receive their formal education in the majority language and eventually develop greater competency with the majority than their native language. To date, there have not been specific research attempts to understand word recognition by heritage speakers. It is not clear if and to what degree we may infer from evidence based on bilingual listeners in general. This preliminary study investigated how heritage speakers of Spanish perform on an English word recognition test and analyzed their phoneme errors. A prospective, cross-sectional, observational design was employed. Twelve normal-hearing adult Spanish heritage speakers (four men, eight women, 20-38 yr old) participated in the study. Their language background was obtained through the Language Experience and Proficiency Questionnaire. Nine English monolingual listeners (three men, six women, 20-41 yr old) were also included for comparison purposes. Listeners were presented with 200 Northwestern University Auditory Test No. 6 words in quiet. They repeated each word orally and in writing. Their responses were scored by word, word-initial consonant, vowel, and word-final consonant. Performance was compared between groups with Student's t test or analysis of variance. Group-specific error patterns were primarily descriptive, but intergroup comparisons were made using 95% or 99% confidence intervals for proportional data. The two groups of listeners yielded comparable scores when their responses were examined by word, vowel, and final consonant. However, heritage speakers of Spanish misidentified significantly more word-initial consonants and had significantly more difficulty with initial /p, b, h/ than their monolingual peers. 
The two groups yielded similar patterns for vowel and word-final consonants, but heritage speakers made significantly fewer errors with /e/ and more errors with word-final /p, k/. Data reported in the present study lead to a twofold conclusion. On the one hand, normal-hearing heritage speakers of Spanish may misidentify English phonemes in patterns different from those of English monolingual listeners. Not all phoneme errors can be readily understood by comparing Spanish and English phonology, suggesting that Spanish heritage speakers differ in performance from other Spanish-English bilingual listeners. On the other hand, the absolute number of errors and the error pattern of most phonemes were comparable between English monolingual listeners and Spanish heritage speakers, suggesting that audiologists may assess word recognition in quiet in the same way for these two groups of listeners, if diagnosis is based on words, not phonemes. American Academy of Audiology
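The intergroup comparisons above rest on 95% or 99% confidence intervals for proportional data. A minimal sketch of a normal-approximation interval for a phoneme error proportion (the counts are invented; the abstract does not specify the study's exact interval construction):

```python
import math

# Normal-approximation (Wald) confidence interval for a proportion.
# Counts are hypothetical, for illustration only.

def proportion_ci(successes, n, z):
    """CI for successes/n at the z-value's coverage, clipped to [0, 1]."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# z = 1.96 for 95% coverage, 2.576 for 99%
lo95, hi95 = proportion_ci(30, 200, 1.96)  # e.g. 30 initial-consonant errors in 200 trials
```

Non-overlap of two such intervals is a conservative screen for a group difference; a Wilson interval behaves better for proportions near 0 or 1.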
Ding, Yi; Liu, Ru-De; McBride, Catherine A; Fan, Chung-Hau; Xu, Le; Wang, Jia
2018-05-07
This study examined pinyin (the official phonetic system that transcribes the lexical tones and pronunciation of Chinese characters) invented spelling and English invented spelling in 72 Mandarin-speaking 6th graders who learned English as their second language. The pinyin invented spelling task measured segmental-level awareness including syllable and phoneme awareness, and suprasegmental-level awareness including lexical tones and tone sandhi in Chinese Mandarin. The English invented spelling task manipulated segmental-level awareness including syllable awareness and phoneme awareness, and suprasegmental-level awareness including word stress. This pinyin task outperformed a traditional phonological awareness task that only measured segmental-level awareness and may have optimal utility to measure unique phonological and linguistic features in Chinese reading. The pinyin invented spelling uniquely explained variance in Chinese conventional spelling and word reading in both languages. The English invented spelling uniquely explained variance in conventional spelling and word reading in both languages. Our findings appear to support the role of phonological activation in Chinese reading. Our experimental linguistic manipulations altered the phonological awareness item difficulties.
N400 elicited by incongruent ending words of Chinese idioms in healthy adults.
Chen, Xing-shi; Tang, Yun-xiang; Xiao, Ze-ping; Wang, Ji-jun; Zhang, Ming-dao; Zhang, Zai-fu; Hu, Zhen-yu; Lou, Fei-ying; Chen, Chong; Zhang, Tian-hong
2010-03-20
Prior N400 research has been based mainly on English stimuli, while the cognitive processing of Chinese characters remains unclear. The aim of the present study was to further investigate the semantic processing of Chinese idioms. The event-related potential (ERP) component N400 was elicited by 38 pairs of Chinese idioms with matching (congruent) and mismatching (incongruent) ending words: words with the same phoneme but different shape and meaning (sPdSdM), with similar shape but different phoneme and meaning (sSdPdM), with the same meaning but different phoneme and shape (sMdPdS), and with different phoneme, shape and meaning (dPdSdM), recorded with Guangzhou Runjie WJ-1 ERP instruments. In 62 right-handed healthy adults (aged 19-50 years), N400 amplitudes and latencies were compared between matching and mismatching conditions at Fz, Cz and Pz. N400 showed a midline distribution and could be elicited at electrodes Fz, Cz and Pz. Mean N400 latencies and amplitudes were obtained for matching and mismatching ending words. Significant differences were found between matching and mismatching ending-word idioms (P < 0.05): compared with matching idioms, N400 latencies were prolonged and amplitudes increased in mismatching ones. Different stimulus types elicited N400s with different latencies and amplitudes; the longest latency and largest amplitude were elicited by ending words with dPdSdM. No gender difference in N400 latency or amplitude was found (P > 0.05). Compared with English stimuli, Chinese ideographic words may provide more flexible stimuli for N400 research because the words vary along three dimensions: phoneme, shape and meaning. The features of N400 elicited by matching and mismatching ending words in Chinese idioms are determined mainly by the meaning of the word. 
Some issues of N400 elicited by Chinese characters deserve further research.
Vertical interincisal trespass assessment in children with speech disorders.
Sahad, Marcelo de Gouveia; Nahás, Ana Carla Raphaelli; Scavone-Junior, Helio; Jabur, Luciana Badra; Guedes-Pinto, Eduardo
2008-01-01
Through a cross-sectional epidemiological study conducted with 333 Brazilian children, 157 males and 176 females, aged 3 to 6 years, enrolled in a public preschool, this study aimed to evaluate the prevalence of the different types of vertical interincisal trespass (VIT) and the relationship between these occlusal aspects and anterior lisping and/or anterior tongue thrust in the articulation of the lingua-alveolar phonemes /t/, /d/, /n/ and /l/. All children were submitted to a VIT examination and to a speech evaluation. Statistical significance was analyzed with the Chi-square test at a significance level of 0.05 (95% confidence limit). The quantitative analysis of the data demonstrated the following prevalences: 1 - the different types of VIT: 48.3% for normal overbite (NO), 22.5% for deep overbite (DO), 9.3% for edge-to-edge (ETE) and 19.8% for open bite (OB); 2 - interdental lisping in relation to the different types of VIT: 42% for NO, 12.5% for DO, 12.5% for ETE, 32.9% for OB; and 3 - children with anterior tongue thrust in the articulation of lingua-alveolar phonemes in relation to the different types of VIT: 42.1% for NO, 14% for DO, 10.5% for ETE, 33.3% for OB. The results demonstrated a significant relationship between open bite and anterior lisping and/or anterior tongue thrust in the articulation of the lingua-alveolar phonemes /t/, /d/, /n/ and /l/, and a significant relationship between deep overbite and the absence of anterior lisping and anterior tongue thrust in the articulation of these phonemes.
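The occlusion-versus-speech associations above were tested with a chi-square test at a 0.05 significance level. A minimal hand-rolled chi-square test of independence on a 2x2 table (the counts below are invented for illustration, not the study's data, and no continuity correction is applied):

```python
# Pearson chi-square statistic for a contingency table:
# sum over cells of (observed - expected)^2 / expected,
# with expected counts from the row/column margins.

def chi_square(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

# hypothetical table: rows = open bite yes/no, cols = lisping yes/no
stat = chi_square([[22, 44], [51, 216]])
# a 2x2 table has 1 degree of freedom; the 0.05 critical value is 3.841
significant = stat > 3.841
```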
Study of Cognitive Impairments Following Clipping of Ruptured Anterior Circulation Aneurysms.
Mohanty, Manju; Dhandapani, Sivashanmugam; Gupta, Sunil Kumar; Shahid, Adnan Hussain; Patra, Debi Prasad; Sharma, Anchal; Mathuriya, Suresh Narayan
2018-06-16
The cognitive impairments following treatment of ruptured aneurysms have often been underestimated. This study aimed to assess their prevalence and analyze various associated factors. Patients who were operated on for ruptured anterior circulation aneurysms and discharged with a Glasgow Outcome Scale score of 4-5 were studied at 3 months for various cognitive impairments. Continuous scales of memory (recent, remote, verbal, visual and overall memory), verbal fluency (phonemic and category fluency) and others were studied in relation to various factors. Univariate and multivariate analyses were performed using SPSS 21. A total of 87 patients were included in our study. Phonemic fluency was the most affected domain, impaired in 66% of patients. While 56% had some memory-related impairment, 13 (15%) and 6 (7%) had moderate and severe deficits in recent memory, and 19 (22%) and 12 (14%) had moderate and severe deficits in remote memory, respectively. Patients operated on for anterior cerebral artery (ACA) aneurysms had significantly greater impairments in recent (34% vs 8%) and remote memory (43% vs 28%) compared to the rest, in both univariate (P values 0.01 and 0.002, respectively) and multivariate analyses (P values 0.01 and 0.03, respectively). ACA-related aneurysms were also associated with significantly greater independent impairments in phonemic fluency (P value 0.04) compared to others. Clinical grade had a significant independent impact only on remote memory (P value 0.01). Cognitive impairments are frequent following treatment of ruptured anterior circulation aneurysms. Impairments in recent memory, remote memory, and phonemic fluency are significantly greater following treatment of ACA-related aneurysms, compared to others, independent of other factors. Copyright © 2018. Published by Elsevier Inc.
Error biases in inner and overt speech: evidence from tongue twisters.
Corley, Martin; Brocklehurst, Paul H; Moat, H Susannah
2011-01-01
To compare the properties of inner and overt speech, Oppenheim and Dell (2008) counted participants' self-reported speech errors when reciting tongue twisters either overtly or silently and found a bias toward substituting phonemes that resulted in words in both conditions, but a bias toward substituting similar phonemes only when speech was overt. Here, we report 3 experiments revisiting their conclusion that inner speech remains underspecified at the subphonemic level, which they simulated within an activation-feedback framework. In 2 experiments, participants recited tongue twisters that could result in the errorful substitutions of similar or dissimilar phonemes to form real words or nonwords. Both experiments included an auditory masking condition, to gauge the possible impact of loss of auditory feedback on the accuracy of self-reporting of speech errors. In Experiment 1, the stimuli were composed entirely from real words, whereas, in Experiment 2, half the tokens used were nonwords. Although masking did not have any effects, participants were more likely to report substitutions of similar phonemes in both experiments, in inner as well as overt speech. This pattern of results was confirmed in a 3rd experiment using the real-word materials from Oppenheim and Dell (in press). In addition to these findings, a lexical bias effect found in Experiments 1 and 3 disappeared in Experiment 2. Our findings support a view in which plans for inner speech are indeed specified at the feature level, even when there is no intention to articulate words overtly, and in which editing of the plan for errors is implicated. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Functional Lateralization of Speech Processing in Adults and Children Who Stutter
Sato, Yutaka; Mori, Koichi; Koizumi, Toshizo; Minagawa-Kawai, Yasuyo; Tanaka, Akihiro; Ozawa, Emi; Wakaba, Yoko; Mazuka, Reiko
2011-01-01
Developmental stuttering is a speech disorder in fluency characterized by repetitions, prolongations, and silent blocks, especially in the initial parts of utterances. Although their symptoms are motor related, people who stutter show abnormal patterns of cerebral hemispheric dominance in both anterior and posterior language areas. It is unknown whether the abnormal functional lateralization in the posterior language area starts during childhood or emerges as a consequence of many years of stuttering. In order to address this issue, we measured the lateralization of hemodynamic responses in the auditory cortex during auditory speech processing in adults and children who stutter, including preschoolers, with near-infrared spectroscopy. We used the analysis–resynthesis technique to prepare two types of stimuli: (i) a phonemic contrast embedded in Japanese spoken words (/itta/ vs. /itte/) and (ii) a prosodic contrast (/itta/ vs. /itta?/). In the baseline blocks, only /itta/ tokens were presented. In phonemic contrast blocks, /itta/ and /itte/ tokens were presented pseudo-randomly, and /itta/ and /itta?/ tokens in prosodic contrast blocks. In adults and children who do not stutter, there was a clear left-hemispheric advantage for the phonemic contrast compared to the prosodic contrast. Adults and children who stutter, however, showed no significant difference between the two stimulus conditions. A subject-by-subject analysis revealed that not a single subject who stutters showed a left advantage in the phonemic contrast over the prosodic contrast condition. These results indicate that the functional lateralization for auditory speech processing is in disarray among those who stutter, even at preschool age. These results shed light on the neural pathophysiology of developmental stuttering. PMID:21687442
Coppens-Hofman, Marjolein C.; Terband, Hayo; Snik, Ad F.M.; Maassen, Ben A.M.
2017-01-01
Purpose: Adults with intellectual disabilities (ID) often show reduced speech intelligibility, which affects their social interaction skills. This study aims to establish the main predictors of this reduced intelligibility in order to ultimately optimise management. Method: Spontaneous speech and picture naming tasks were recorded in 36 adults with mild or moderate ID. Twenty-five naïve listeners rated the intelligibility of the spontaneous speech samples. Performance on the picture-naming task was analysed by means of a phonological error analysis based on expert transcriptions. Results: The transcription analyses showed that the phonemic and syllabic inventories of the speakers were complete. However, multiple errors at the phonemic and syllabic level were found. The frequencies of specific types of errors were related to intelligibility and quality ratings. Conclusions: The development of the phonemic and syllabic repertoire appears to be completed in adults with mild-to-moderate ID. The charted speech difficulties can be interpreted to indicate speech motor control and planning difficulties. These findings may aid the development of diagnostic tests and speech therapies aimed at improving speech intelligibility in this specific group. PMID:28118637
Word and Person Effects on Decoding Accuracy: A New Look at an Old Question
Gilbert, Jennifer K.; Compton, Donald L.; Kearns, Devin M.
2011-01-01
The purpose of this study was to extend the literature on decoding by bringing together two lines of research, namely person and word factors that affect decoding, using a crossed random-effects model. The sample was comprised of 196 English-speaking grade 1 students. A researcher-developed pseudoword list was used as the primary outcome measure. Because grapheme-phoneme correspondence (GPC) knowledge was treated as person and word specific, we are able to conclude that it is neither necessary nor sufficient for a student to know all GPCs in a word before accurately decoding the word. And controlling for word-specific GPC knowledge, students with lower phonemic awareness and slower rapid naming skill have lower predicted probabilities of correct decoding than counterparts with superior skills. By assessing a person-by-word interaction, we found that students with lower phonemic awareness have more difficulty applying knowledge of complex vowel graphemes compared to complex consonant graphemes when decoding unfamiliar words. Implications of the methodology and results are discussed in light of future research. PMID:21743750
Podhajski, Blanche; Mather, Nancy; Nathan, Jane; Sammons, Janice
2009-01-01
This article reviews the literature and presents data from a study that examined the effects of professional development in scientifically based reading instruction on teacher knowledge and student reading outcomes. The experimental group consisted of four first- and second-grade teachers and their students (n = 33). Three control teachers and their students (n = 14), from a community of significantly higher socioeconomic demographics, were also followed. Experimental teachers participated in a 35-hour course on instruction of phonemic awareness, phonics, and fluency and were coached by professional mentors for a year. Although teacher knowledge in the experimental group was initially lower than that of the controls, their scores surpassed the controls on the posttest. First-grade experimental students' growth exceeded the controls in letter name fluency, phonemic segmentation, nonsense word fluency, and oral reading. Second-grade experimental students exceeded controls in phonemic segmentation. Although the teacher sample was small, findings suggest that teachers can improve their knowledge concerning explicit reading instruction and that this new knowledge may contribute to student growth in reading.
Poor phonemic discrimination does not underlie poor verbal short-term memory in Down syndrome.
Purser, Harry R M; Jarrold, Christopher
2013-05-01
Individuals with Down syndrome tend to have a marked impairment of verbal short-term memory. The chief aim of this study was to investigate whether phonemic discrimination contributes to this deficit. The secondary aim was to investigate whether phonological representations are degraded in verbal short-term memory in people with Down syndrome relative to control participants. To answer these questions, two tasks were used: a discrimination task, in which memory load was as low as possible, and a short-term recognition task that used the same stimulus items. Individuals with Down syndrome were found to perform significantly better than a nonverbal-matched typically developing group on the discrimination task, but they performed significantly more poorly than that group on the recognition task. The Down syndrome group was outperformed by an additional vocabulary-matched control group on the discrimination task but was outperformed to a markedly greater extent on the recognition task. Taken together, the results strongly indicate that phonemic discrimination ability is not central to the verbal short-term memory deficit associated with Down syndrome. Copyright © 2013 Elsevier Inc. All rights reserved.
2015-01-01
Several competing aetiologies of developmental dyslexia suggest that the problems with acquiring literacy skills are causally entailed by low-level auditory and/or speech perception processes. The purpose of this study is to evaluate the diverging claims about the specific deficient perceptual processes under conditions of strong inference. Theoretically relevant acoustic features were extracted from a set of artificial speech stimuli that lie on a /bAk/-/dAk/ continuum. The features were tested on their ability to enable a simple classifier (Quadratic Discriminant Analysis) to reproduce the observed classification performance of average and dyslexic readers in a speech perception experiment. The ‘classical’ features examined were based on component process accounts of developmental dyslexia such as the supposed deficit in Envelope Rise Time detection and the deficit in the detection of rapid changes in the distribution of energy in the frequency spectrum (formant transitions). Studies examining these temporal processing deficit hypotheses do not employ measures that quantify the temporal dynamics of stimuli. It is shown that measures based on quantification of the dynamics of complex, interaction-dominant systems (Recurrence Quantification Analysis and the multifractal spectrum) enable QDA to classify the stimuli almost identically as observed in dyslexic and average reading participants. It seems unlikely that participants used any of the features that are traditionally associated with accounts of (impaired) speech perception. The nature of the variables quantifying the temporal dynamics of the speech stimuli implies that the classification of speech stimuli cannot be regarded as a linear aggregate of component processes that each parse the acoustic signal independent of one another, as is assumed by the ‘classical’ aetiologies of developmental dyslexia. 
It is suggested that the results imply that the differences in speech perception performance between average and dyslexic readers represent a scaled continuum rather than being caused by a specific deficient component. PMID:25834769
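The classifier named in the abstract above, Quadratic Discriminant Analysis, fits one Gaussian (mean, covariance, prior) per class and labels a stimulus by the highest log-posterior score. A self-contained toy version on synthetic two-dimensional data (standing in for the acoustic features; this is a sketch of the technique, not the study's pipeline):

```python
import numpy as np

# Minimal QDA: per-class Gaussian fit, then argmax of
# log prior - 0.5*log|Sigma_k| - 0.5*(x-mu_k)' Sigma_k^-1 (x-mu_k).

def fit_qda(X, y):
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        params[k] = (Xk.mean(axis=0), np.cov(Xk.T), len(Xk) / len(X))
    return params

def predict_qda(params, x):
    best, best_score = None, -np.inf
    for k, (mu, cov, prior) in params.items():
        diff = x - mu
        score = (np.log(prior) - 0.5 * np.log(np.linalg.det(cov))
                 - 0.5 * diff @ np.linalg.inv(cov) @ diff)
        if score > best_score:
            best, best_score = k, score
    return best

# two synthetic "feature clouds" standing in for two stimulus categories
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = fit_qda(X, y)
label = predict_qda(model, np.array([3.0, 3.0]))
```

Because each class gets its own covariance, the decision boundary is quadratic; with a shared covariance this would reduce to linear discriminant analysis.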
The Recognition of Words from Phonemes in Continuous Speech.
1981-12-01
[OCR-garbled scan of the report front matter; no abstract is recoverable. Legible fragments identify the document as thesis AFIT/GE/EE/81D-9, "The Recognition of Words from Phonemes in Continuous Speech," by Captain Claude A. Baker, USAF, followed by part of a numbered phoneme-symbol key with example words (e.g. SH "she", UX "foot").]
Speech evaluation in children with temporomandibular disorders.
Pizolato, Raquel Aparecida; Fernandes, Frederico Silva de Freitas; Gavião, Maria Beatriz Duarte
2011-10-01
The aims of this study were to evaluate the influence of temporomandibular disorders (TMD) on speech in children and to verify the influence of occlusal characteristics. Speech and dental occlusal characteristics were assessed in 152 Brazilian children (78 boys and 74 girls), aged 8 to 12 years (mean age 10.05 ± 1.39 years), with or without TMD signs and symptoms. Clinical signs were evaluated using the Research Diagnostic Criteria for TMD (RDC/TMD) (axis I) and symptoms were evaluated using a questionnaire. The following groups were formed: Group TMD (n=40), TMD signs and symptoms (Group S and S, n=68), TMD signs or symptoms (Group S or S, n=33), and without signs and symptoms (Group N, n=11). Articulatory speech disorders were diagnosed during spontaneous speech and repetition of words using the "Phonological Assessment of Child Speech" for the Portuguese language. A list of 40 phonologically balanced words, read by the speech pathologist and repeated by the children, was also applied. Data were analyzed by descriptive statistics and Fisher's exact or Chi-square tests (α=0.05). A slight prevalence of articulatory disturbances, such as substitutions, omissions and distortions of the sibilants /s/ and /z/, and no deviations in jaw lateral movements were observed. Reduction of vertical amplitude was found in 10 children, the prevalence being greater in children with TMD signs and symptoms than in normal children. Tongue protrusion in the phonemes /t/, /d/, /n/ and /l/ and frontal lip positioning in the phonemes /s/ and /z/ were the most prevalent visual alterations. There was a high percentage of dental occlusal alterations. There was no association between TMD and speech disorders. Occlusal alterations may be influencing factors, allowing distortions and frontal lisp in the phonemes /s/ and /z/ and inadequate tongue position in the phonemes /t/, /d/, /n/ and /l/.
The differing roles of the frontal cortex in fluency tests
Shallice, Tim; Bozzali, Marco; Cipolotti, Lisa
2012-01-01
Fluency tasks have been widely used to tap the voluntary generation of responses. The anatomical correlates of fluency tasks and their sensitivity and specificity have been hotly debated. However, investigation of the cognitive processes involved in voluntary generation of responses and whether generation is supported by a common, general process (e.g. fluid intelligence) or specific cognitive processes underpinned by particular frontal regions has rarely been addressed. This study investigates a range of verbal and non-verbal fluency tasks in patients with unselected focal frontal (n = 47) and posterior (n = 20) lesions. Patients and controls (n = 35) matched for education, age and sex were administered fluency tasks including word (phonemic/semantic), design, gesture and ideational fluency as well as background cognitive tests. Lesions were analysed by standard anterior/posterior and left/right frontal subdivisions as well as a finer-grained frontal localization method. Thus, patients with right and left lateral lesions were compared to patients with superior medial lesions. The results show that all eight fluency tasks are sensitive to frontal lobe damage although only the phonemic word and design fluency tasks were specific to the frontal region. Superior medial patients were the only group to be impaired on all eight fluency tasks, relative to controls, consistent with an energization deficit. The most marked fluency deficits for lateral patients were along material specific lines (i.e. left—phonemic and right—design). Phonemic word fluency that requires greater selection was most severely impaired following left inferior frontal damage. Overall, our results support the notion that frontal functions comprise a set of specialized cognitive processes, supported by distinct frontal regions. PMID:22669082
Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne
2015-01-01
Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, vs. 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interactions between group and decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders, whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%), and only 6 children (1.5%) had impaired reading comprehension skills despite unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on “poor comprehenders” by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills. PMID:25793519
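One standard way to obtain unique-contribution percentages like those reported above is incremental R²: the drop in explained variance when a predictor is removed from the full model. A sketch on synthetic data (all coefficients and samples below are invented, not the study's):

```python
# Variance decomposition via incremental R^2 on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 392
decoding = rng.normal(size=n)
listening = rng.normal(size=n)
vocab = rng.normal(size=n)
phonemic = rng.normal(size=n)
# Synthetic outcome: decoding carries the largest weight by construction.
comprehension = (0.6 * decoding + 0.4 * listening
                 + 0.3 * vocab + 0.2 * phonemic
                 + rng.normal(scale=0.8, size=n))

def r_squared(y, *predictors):
    """R^2 of an OLS fit of y on the given predictors plus an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

full = r_squared(comprehension, decoding, listening, vocab, phonemic)
without_decoding = r_squared(comprehension, listening, vocab, phonemic)
print(f"total R^2 = {full:.3f}")
print(f"unique contribution of decoding = {full - without_decoding:.3f}")
```

Repeating the second computation with each predictor held out in turn yields the per-predictor percentages.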
Speech profile of patients undergoing primary palatoplasty.
Menegueti, Katia Ignacio; Mangilli, Laura Davison; Alonso, Nivaldo; Andrade, Claudia Regina Furquim de
2017-10-26
To characterize the profile and speech characteristics of patients undergoing primary palatoplasty in a Brazilian university hospital, considering the time of intervention (early, before two years of age; late, after two years of age). Participants were 97 patients of both genders with cleft palate and/or cleft lip and palate, assigned to the Speech-language Pathology Department, who had been submitted to primary palatoplasty and presented no prior history of speech-language therapy. Patients were divided into two groups: early intervention group (EIG) - 43 patients undergoing primary palatoplasty before 2 years of age and late intervention group (LIG) - 54 patients undergoing primary palatoplasty after 2 years of age. All patients underwent speech-language pathology assessment. The following parameters were assessed: resonance classification, presence of nasal turbulence, presence of weak intraoral air pressure, presence of audible nasal air emission, speech understandability, and compensatory articulation disorder (CAD). At a statistical significance level of 5% (p≤0.05), no significant difference was observed between the groups in the following parameters: resonance classification (p=0.067); level of hypernasality (p=0.113), presence of nasal turbulence (p=0.179); presence of weak intraoral air pressure (p=0.152); presence of nasal air emission (p=0.369), and speech understandability (p=0.113). The groups differed with respect to the presence of compensatory articulation disorders (p=0.020), with the LIG presenting a higher occurrence of altered phonemes. It was possible to assess the general profile and speech characteristics of the study participants. Patients submitted to early primary palatoplasty presented a better speech profile.
The Proximate Phonological Unit of Chinese-English Bilinguals: Proficiency Matters
Verdonschot, Rinus Gerardus; Nakayama, Mariko; Zhang, Qingfang; Tamaoka, Katsuo; Schiller, Niels Olaf
2013-01-01
An essential step to create phonology according to the language production model by Levelt, Roelofs and Meyer is to assemble phonemes into a metrical frame. However, recently, it has been proposed that different languages may rely on different grain sizes of phonological units to construct phonology. For instance, it has been proposed that, instead of phonemes, Mandarin Chinese uses syllables and Japanese uses moras to fill the metrical frame. In this study, we used a masked priming-naming task to investigate how bilinguals assemble their phonology for each language when the two languages differ in grain size. Highly proficient Mandarin Chinese-English bilinguals showed a significant masked onset priming effect in English (L2), and a significant masked syllabic priming effect in Mandarin Chinese (L1). These results suggest that their proximate unit is phonemic in L2 (English), and that bilinguals may use different phonological units depending on the language that is being processed. Additionally, under some conditions, a significant sub-syllabic priming effect was observed even in Mandarin Chinese, which indicates that L2 phonology exerts influences on L1 target processing as a consequence of having a good command of English. PMID:23646107
Evolution of phonemic word fluency performance in post-stroke aphasia.
Sarno, Martha Taylor; Postman, Whitney Anne; Cho, Young Susan; Norman, Robert G
2005-01-01
In this longitudinal study, quantitative and qualitative changes in responses of people with aphasia were examined on a phonemic fluency task. Eighteen patients were tested at 3-month intervals on the letters F-A-S while they received comprehensive, intensive treatment from 3 to 12 months post-stroke. They returned for a follow-up evaluation at an average of 10 months post-intervention. Mean group scores improved significantly from beginning to end of treatment, but declined post-intervention. Patients produced a significantly greater number and proportion of modifiers (adjectives and adverbs) between the beginning and end of treatment, with no decline afterwards, implying that they had access to a wider range of grammatical categories over time. Moreover, patients used significantly more phonemic clusters in generating word lists by the end of treatment. These gains may be attributed to the combined effects of time since onset and the linguistic and cognitive stimulation that patients received in therapy. Readers of this paper should (1) gain a better understanding of verbal fluency performance in the assessment of aphasia, (2) recognize the importance of analyzing qualitative aspects of single word production in aphasia, and (3) sharpen their clinical judgment of long-term improvement in aphasia.
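The phonemic-cluster measure mentioned above can be approximated with a simple scoring rule. The convention below (consecutive words sharing their first two letters, roughly following Troyer and colleagues' clustering criteria) is an assumption for illustration, not necessarily the study's exact scoring:

```python
# Approximate phonemic-cluster count for an F-A-S word list:
# a cluster is a run of >= 2 consecutive words sharing their first two letters.
def count_clusters(words):
    """Count runs of at least two consecutive words with the same initial bigram."""
    clusters = 0
    run = 1
    for prev, cur in zip(words, words[1:]):
        if prev[:2].lower() == cur[:2].lower():
            run += 1
        else:
            if run >= 2:
                clusters += 1
            run = 1
    if run >= 2:
        clusters += 1
    return clusters

# "fan, fat, father" share 'fa'; "flow, flower" share 'fl' -> two clusters.
print(count_clusters(["fan", "fat", "father", "fish", "flow", "flower"]))  # → 2
```

Real clustering schemes also credit rhymes and homophones; this bigram rule is the minimal version.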
Keeping it together: Semantic coherence stabilizes phonological sequences in short-term memory.
Savill, Nicola; Ellis, Rachel; Brooke, Emma; Koa, Tiffany; Ferguson, Suzie; Rojas-Rodriguez, Elena; Arnold, Dominic; Smallwood, Jonathan; Jefferies, Elizabeth
2018-04-01
Our ability to hold a sequence of speech sounds in mind, in the correct configuration, supports many aspects of communication, but the contribution of conceptual information to this basic phonological capacity remains controversial. Previous research has shown modest and inconsistent benefits of meaning on phonological stability in short-term memory, but these studies were based on sets of unrelated words. Using a novel design, we examined the immediate recall of sentence-like sequences with coherent meaning, alongside both standard word lists and mixed lists containing words and nonwords. We found, and replicated, substantial effects of coherent meaning on phoneme-level accuracy: The phonemes of both words and nonwords within conceptually coherent sequences were more likely to be produced together and in the correct order. Since nonwords do not exist as items in long-term memory, the semantic enhancement of phoneme-level recall for both item types cannot be explained by a lexically based item reconstruction process employed at the point of retrieval ("redintegration"). Instead, our data show, for naturalistic input, that when meaning emerges from the combination of words, the phonological traces that support language are reinforced by a semantic-binding process that has been largely overlooked by past short-term memory research.
Detection of target phonemes in spontaneous and read speech.
Mehta, G; Cutler, A
1988-01-01
Although spontaneous speech occurs more frequently in most listeners' experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalise to the recognition of spontaneous speech. In the present study listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than in unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support claims from previous work that listeners pay great attention to prosodic information in the process of recognising speech.
Human phoneme recognition depending on speech-intrinsic variability.
Meyer, Bernd T; Jürgens, Tim; Wesker, Thorsten; Brand, Thomas; Kollmeier, Birger
2010-11-01
The influence of different sources of speech-intrinsic variation (speaking rate, effort, style and dialect or accent) on human speech perception was investigated. In listening experiments with 16 listeners, confusions of consonant-vowel-consonant (CVC) and vowel-consonant-vowel (VCV) sounds in speech-weighted noise were analyzed. Experiments were based on the OLLO logatome speech database, which was designed for a man-machine comparison. It contains utterances spoken by 50 speakers from five dialect/accent regions and covers several intrinsic variations. By comparing results depending on intrinsic and extrinsic variations (i.e., different levels of masking noise), the degradation induced by variabilities can be expressed in terms of the SNR. The spectral level distance between the respective speech segment and the long-term spectrum of the masking noise was found to be a good predictor for recognition rates, while phoneme confusions were influenced by the distance to spectrally close phonemes. An analysis based on transmitted information of articulatory features showed that voicing and manner of articulation are comparatively robust cues in the presence of intrinsic variations, whereas the coding of place is more degraded. The database and detailed results have been made available for comparisons between human speech recognition (HSR) and automatic speech recognizers (ASR).
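The "transmitted information" analysis of articulatory features mentioned above is, at its core, the mutual information between presented and reported categories computed from a confusion matrix (as in classic feature analyses of consonant confusions). A minimal sketch, using an invented voicing confusion matrix:

```python
# Transmitted information (mutual information, in bits) from a
# stimulus-by-response confusion count matrix. The counts are invented.
import numpy as np

def transmitted_information(confusions):
    """Mutual information (bits) from a stimulus-by-response count matrix."""
    p = confusions / confusions.sum()          # joint probabilities
    px = p.sum(axis=1, keepdims=True)          # stimulus marginals
    py = p.sum(axis=0, keepdims=True)          # response marginals
    nz = p > 0                                 # avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Rows: presented voiced/voiceless; columns: reported voiced/voiceless.
voicing = np.array([[90, 10],
                    [15, 85]])
print(f"{transmitted_information(voicing):.3f} bits")
```

A feature is "robust" in this sense when its transmitted information stays close to the 1-bit ceiling (for a binary feature) as noise or intrinsic variation increases.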
Neural correlates in the processing of phoneme-level complexity in vowel production.
Park, Haeil; Iverson, Gregory K; Park, Hae-Jeong
2011-12-01
We investigated how articulatory complexity at the phoneme level is manifested neurobiologically in an overt production task. fMRI images were acquired from young Korean-speaking adults as they pronounced bisyllabic pseudowords in which we manipulated phonological complexity defined in terms of vowel duration and instability (viz., COMPLEX: /tiɯi/ > MID-COMPLEX: /tiye/ > SIMPLE: /tii/). Increased activity in the left inferior frontal gyrus (Brodmann Areas (BA) 44 and 47), supplementary motor area and anterior insula was observed for the articulation of COMPLEX sequences relative to MID-COMPLEX; the same held for MID-COMPLEX relative to SIMPLE, except that the pars orbitalis (BA 47) was the dominant region identified within Broca's area. The differentiation indicates that phonological complexity is reflected in the neural processing of distinct phonemic representations, both by recruiting brain regions associated with retrieval of phonological information from memory and via articulatory rehearsal for the production of COMPLEX vowels. In addition, the finding that increased complexity engages greater areas of the brain suggests that brain activation can be a neurobiological measure of articulo-phonological complexity, complementing, if not substituting for, biomechanical measurements of speech motor activity. 2011 Elsevier Inc. All rights reserved.
Reading aloud in Persian: ERP evidence for an early locus of the masked onset priming effect.
Timmer, Kalinka; Vahid-Gharavi, Narges; Schiller, Niels O
2012-07-01
The current study investigates reading aloud words in Persian, a language that does not mark all its vowels in the script. Behaviorally, a masked onset priming effect (MOPE) was revealed for transparent words, with faster speech onset latencies in the phoneme-matching condition (i.e. phonological prime and target onset overlap; e.g. [symbol: see text] /sɒːl/; 'year' - [symbol: see text] /sot/; 'voice') than in the phoneme-mismatching condition (e.g. [symbol: see text] /tɒːb/; 'swing' - [symbol: see text] /sot/; 'voice'). For opaque target words (e.g. [symbol: see text] /solh/; 'peace'), no such effect was found. However, event-related potentials (ERPs) did reveal an amplitude difference between the two prime conditions in the 80-160 ms time window for transparent as well as opaque words. Only for the former did this effect continue into the 300-480 ms time window. This finding constrains the time course of the MOPE and suggests the simultaneous activation of both the non-lexical grapheme-to-phoneme and the lexical route in the dual-route cascaded (DRC) model. Copyright © 2012 Elsevier Inc. All rights reserved.
Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.
Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W
2004-11-30
Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster, by 57 ms, than reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.
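Event-related (de)synchronization, the quantity contrasted above, is conventionally expressed as the percent change in band power after a stimulus relative to a pre-stimulus baseline (negative values = desynchronization). SAM adds beamformer source localization on top of this; the toy sketch below covers only the band-power step, with an invented alpha-band signal:

```python
# Percent band-power change (ERD/ERS) between a baseline and a
# post-stimulus epoch. Signal, sample rate, and band edges are invented.
import numpy as np

def band_power(x, fs, lo, hi):
    """Power in [lo, hi] Hz via the periodogram (squared FFT magnitude)."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum()

fs = 250
t = np.arange(fs) / fs                      # 1-second epochs
baseline = np.sin(2 * np.pi * 10 * t)       # strong 10 Hz rhythm pre-stimulus
post = 0.4 * np.sin(2 * np.pi * 10 * t)     # rhythm suppressed post-stimulus

erd = 100 * (band_power(post, fs, 8, 16) - band_power(baseline, fs, 8, 16)) \
      / band_power(baseline, fs, 8, 16)
print(f"ERD in 8-16 Hz band: {erd:.0f}%")   # → -84%
```

Since power scales with amplitude squared, a 0.4x amplitude drop yields a 0.16x power ratio, i.e. an 84% desynchronization.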
Yoncheva, Yuliya N; Wise, Jessica; McCandliss, Bruce
2015-01-01
Selective attention to grapheme-phoneme mappings during learning can impact the circuitry subsequently recruited during reading. Here we trained literate adults to read two novel scripts of glyph words containing embedded letters under different instructions. For one script, learners linked each embedded letter to its corresponding sound within the word (grapheme-phoneme focus); for the other, decoding was prevented so entire words had to be memorized. Post-training, ERPs were recorded during a reading task on the trained words within each condition and on untrained but decodable (transfer) words. Within this condition, reaction-time patterns suggested both trained and transfer words were accessed via sublexical units, yet a left-lateralized, late ERP response showed an enhanced left lateralization for transfer words relative to trained words, potentially reflecting effortful decoding. Collectively, these findings show that selective attention to grapheme-phoneme mappings during learning drives the lateralization of circuitry that supports later word recognition. This study thus provides a model example of how different instructional approaches to the same material may impact changes in brain circuitry. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
The Impact of Feedback Frequency on Performance in a Novel Speech Motor Learning Task.
Lowe, Mara Steinberg; Buchwald, Adam
2017-06-22
This study investigated whether whole nonword accuracy, phoneme accuracy, and acoustic duration measures were influenced by the amount of feedback speakers without impairment received during a novel speech motor learning task. Thirty-two native English speakers completed a nonword production task across 3 time points: practice, short-term retention, and long-term retention. During practice, participants received knowledge of results feedback according to a randomly assigned schedule (100%, 50%, 20%, or 0%). Changes in nonword accuracy, phoneme accuracy, nonword duration, and initial-cluster duration were compared among feedback groups, sessions, and stimulus properties. All participants improved phoneme and whole nonword accuracy at short-term and long-term retention time points. Participants also refined productions of nonwords, as indicated by a decrease in nonword duration across sessions. The 50% group exhibited the largest reduction in duration between practice and long-term retention for nonwords with native and nonnative clusters. All speakers, regardless of feedback schedule, learned new speech motor behaviors quickly with a high degree of accuracy and refined their speech motor skills for perceptually accurate productions. Acoustic measurements may capture more subtle, subperceptual changes that may occur during speech motor learning. https://doi.org/10.23641/asha.5116324.
Is there still a TRACE of trace?
NASA Astrophysics Data System (ADS)
McClelland, James; Mirman, Daniel; Holt, Lori
2003-04-01
According to the TRACE model [McClelland and Elman, Cogn. Psychol. 18, 1-86 (1986)], speech recognition is an interactive activation process involving the integrated use of top-down (lexical) and bottom-up (acoustic) information. Although it is widely accepted that there are lexical influences on speech perception, there has been a disagreement over their exact nature. Two contested predictions of TRACE are that (a) lexical influences should delay or inhibit recognition of phonemes not consistent with lexical information and (b) a lexical influence on the identification of one phoneme can trigger compensation for co-articulation, affecting the identification of other phonemes. Others [Norris, McQueen, and Cutler, BBS 23, 299-370 (2000)] have argued that the predicted effects do not occur, taking this to support an alternative to the TRACE model in which lexical influences do not affect perception, but only a post-perceptual identification process. We re-examine the evidence on these points along with the recent finding that lexical information may lead to a lasting adjustment of category boundaries [McQueen, Norris, and Cutler, Psychonomics Abstract 255 (2001)]. Our analysis indicates that the existing evidence is completely consistent with TRACE, and we suggest additional research that will be necessary to resolve unanswered questions.
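The interactive-activation principle at issue (lexical units feeding activation back down to phoneme units) can be caricatured in a few lines. The two-word network, update rule, and all weights below are illustrative assumptions, not TRACE's actual architecture or parameters:

```python
# Toy interactive-activation loop: lexical feedback nudges an
# ambiguous phoneme toward the lexically consistent interpretation.
import numpy as np

phonemes = ["b", "p", "a", "t"]
words = {"bat": ["b", "a", "t"], "pat": ["p", "a", "t"]}

# Ambiguous bottom-up input between /b/ and /p/, slightly /b/-like.
phon_act = np.array([0.55, 0.45, 1.0, 1.0])

for _ in range(10):
    # Bottom-up: a word's activation is the mean of its phonemes'.
    word_act = {w: np.mean([phon_act[phonemes.index(p)] for p in ps])
                for w, ps in words.items()}
    # Top-down: each word feeds a little activation back to its phonemes.
    feedback = np.zeros(len(phonemes))
    for w, ps in words.items():
        for p in ps:
            feedback[phonemes.index(p)] += 0.1 * word_act[w]
    phon_act = np.clip(phon_act + feedback - 0.1, 0.0, 1.0)  # decay + clip

print(f"/b/ = {phon_act[0]:.2f}, /p/ = {phon_act[1]:.2f}")
```

Because "bat" starts slightly ahead, its feedback keeps /b/ above /p/; this is the kind of lexical influence on phoneme identification that the debate summarized above turns on.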
Psychophysics of complex auditory and speech stimuli
NASA Astrophysics Data System (ADS)
Pastore, Richard E.
1993-10-01
A major focus of the primary project is the use of different procedures to provide converging evidence on the nature of perceptual spaces for speech categories. Completed research examined initial voiced consonants, with results providing strong evidence that different stimulus properties may cue a phoneme category in different vowel contexts. Thus, /b/ is cued by a rising second formant (F2) with the vowel /a/, requires both F2 and F3 to be rising with /i/, and is independent of the release burst for these vowels. Furthermore, cues for phonetic contrasts are not necessarily symmetric, and the strong dependence of prior speech research on classification procedures may have led to errors. Thus, the opposite (falling F2 and F3) transitions lead to somewhat ambiguous percepts (i.e., not /b/) which may be labeled consistently (as /d/ or /g/) but require a release burst to achieve high category quality and similarity to category exemplars. Ongoing research is examining cues in other vowel contexts and using procedures to evaluate the nature of interaction between cues for categories of both speech and music.
Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems
NASA Technical Reports Server (NTRS)
Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan
2010-01-01
A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exist in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will improve both crewmember usability and operational efficiency. It offers a fast rate of data/text entry and a small, lightweight overall package. In addition, this design will free the hands and eyes of a suited crewmember. The system components and steps include beamforming/multi-channel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select proper tasks when faced with constraints in computational resources.
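The first pipeline stage listed above, beamforming/multi-channel noise reduction, can be sketched in its simplest form as a delay-and-sum beamformer: align channels to the speech's arrival direction, then average, so coherent speech adds up while spatially uncorrelated noise cancels. The array geometry, delays, and signals below are all invented for illustration:

```python
# Delay-and-sum beamformer sketch on a simulated 4-microphone array.
import numpy as np

def delay_and_sum(channels, delays):
    """Advance each channel by its arrival delay (in samples), then average."""
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    return np.mean([ch[d:d + n] for ch, d in zip(channels, delays)], axis=0)

rng = np.random.default_rng(1)
t = np.arange(1600)
speech = np.sin(2 * np.pi * 0.01 * t)      # stand-in for the speech signal

# Four microphones: the same speech arrives a few samples later at each
# mic (simulated by front padding), plus independent sensor noise.
true_delays = [0, 3, 6, 9]
mics = [np.pad(speech, (d, 9 - d)) + rng.normal(scale=1.0, size=1609)
        for d in true_delays]

# Steering with the correct delays realigns the speech across channels,
# so it adds coherently while the noise averages down.
out = delay_and_sum(mics, true_delays)
print(f"single-mic noise rms = {np.std(mics[0][:1600] - speech[:1600]):.2f}")
print(f"beamformed noise rms = {np.std(out - speech[:1600]):.2f}")
```

With M independent-noise channels the residual noise power drops by roughly a factor of M, which is the SNR gain the ASR front end exploits.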
Do deep dyslexia, dysphasia and dysgraphia share a common phonological impairment?
Jefferies, Elizabeth; Sage, Karen; Lambon Ralph, Matthew A.
2007-01-01
This study directly compared four patients who, to varying degrees, showed the characteristics of deep dyslexia, dysphasia and/or dysgraphia – i.e., they made semantic errors in oral reading, repetition and/or spelling to dictation. The “primary systems” hypothesis proposes that these different conditions result from severe impairment to a common phonological system, rather than damage to task-specific mechanisms (i.e. grapheme-phoneme conversion). By this view, deep dyslexic/dysphasic patients should show overlapping deficits but previous studies have not directly compared them. All four patients in the current study showed poor phonological production across different tasks, including repetition, reading aloud and spoken picture naming, in line with the primary systems hypothesis. They also showed severe deficits in tasks that required the manipulation of phonology, such as phoneme addition and deletion. Some of the characteristics of the deep syndromes – namely lexicality and imageability effects – were typically observed in all of the tasks, regardless of whether semantic errors occurred or not, suggesting that the patients’ phonological deficits impacted on repetition, reading aloud and spelling to dictation in similar ways. Differences between the syndromes were accounted for by variation in other primary systems – particularly auditory processing. Deep dysphasic symptoms occurred when the impact of phonological input on spoken output was disrupted or reduced, either as a result of auditory/phonological impairment, or for patients with good phonological input analysis, when repetition was delayed. ‘Deep’ disorders of reading aloud, repetition and spelling can therefore be explained in terms of damage to interacting primary systems such as phonology, semantics and vision, with phonology playing a critical role. PMID:17227679
The effect of word length in short-term memory: Is rehearsal necessary?
Campoy, Guillermo
2008-05-01
Three experiments investigated the effect of word length on a serial recognition task when rehearsal was prevented by a high presentation rate with no delay between study and test lists. Results showed that lists of short four-phoneme words were better recognized than lists of long six-phoneme words. Moreover, this effect was equivalent to that observed in conditions in which there was a delay between lists, thereby making rehearsal possible in the interval. These findings imply that rehearsal does not play a central role in the origin of the word length effect. An alternative explanation based on differences in the degree of retroactive interference generated by long and short words is proposed.
Hemispheric association and dissociation of voice and speech information processing in stroke.
Jones, Anna B; Farrall, Andrew J; Belin, Pascal; Pernet, Cyril R
2015-10-01
As we listen to someone speaking, we extract both linguistic and non-linguistic information. Knowing how these two sets of information are processed in the brain is fundamental for the general understanding of social communication, speech recognition and therapy of language impairments. We investigated the pattern of performances in phoneme versus gender categorization in left and right hemisphere stroke patients, and found an anatomo-functional dissociation in the right frontal cortex, establishing a new syndrome in voice discrimination abilities. In addition, phoneme and gender performances were more often associated than dissociated in the left hemisphere patients, suggesting common neural underpinnings. Copyright © 2015 Elsevier Ltd. All rights reserved.
What is extinguished in auditory extinction?
Deouell, L Y; Soroker, N
2000-09-11
Extinction is a frequent sequel of brain damage, whereupon patients disregard (extinguish) a contralesional stimulus, and report only the more ipsilesional stimulus, of a pair of stimuli presented simultaneously. We investigated the possibility of a dissociation between the detection and the identification of extinguished phonemes. Fourteen right hemisphere damaged patients with severe auditory extinction were examined using a paradigm that separated the localization of stimuli and the identification of their phonetic content. Patients reported the identity of left-sided phonemes, while extinguishing them at the same time, in the traditional sense of the term. This dissociation suggests that auditory extinction is more about acknowledging the existence of a stimulus in the contralesional hemispace than about the actual processing of the stimulus.
Syntactic error modeling and scoring normalization in speech recognition
NASA Technical Reports Server (NTRS)
Olorenshaw, Lex
1991-01-01
The objective was to develop the speech recognition system to be able to detect speech which is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Research was performed in the following areas: (1) syntactic error modeling; (2) score normalization; and (3) phoneme error modeling. The study into the types of errors that a reader makes will provide the basis for creating tests which will approximate the use of the system in the real world. NASA-Johnson will develop this technology into a 'Literacy Tutor' in order to bring innovative concepts to the task of teaching adults to read.
Modeling phoneme perception. II: A model of stop consonant discrimination.
van Hessen, A J; Schouten, M E
1992-10-01
Combining elements from two existing theories of speech sound discrimination, dual process theory (DPT) and trace context theory (TCT), a new theory, called phoneme perception theory, is proposed, consisting of a long-term phoneme memory, a context-coding memory, and a trace memory, each with its own time constants. This theory is tested by means of stop-consonant discrimination data in which interstimulus interval (ISI; values of 100, 300, and 2000 ms) is an important variable. It is shown that discrimination in which labeling plays an important part (2IFC and AX between category) benefits from increased ISI, whereas discrimination in which only sensory traces are compared (AX within category), decreases with increasing ISI. The theory is also tested on speech discrimination data from the literature in which ISI is a variable [Pisoni, J. Acoust. Soc. Am. 36, 277-282 (1964); Cowan and Morse, J. Acoust. Soc. Am. 79, 500-507 (1986)]. It is concluded that the number of parameters in trace context theory is not sufficient to account for most speech-sound discrimination data and that a few additional assumptions are needed, such as a form of sublabeling, in which subjects encode the quality of a stimulus as a member of a category, and which requires processing time.
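The qualitative ISI pattern this theory must capture is that within-category discrimination, carried by a decaying sensory trace, falls with ISI, while between-category discrimination, supported by durable category labels, holds up or improves. That pattern can be sketched with illustrative exponentials; the functional forms and all constants below are assumptions, not the theory's fitted parameters:

```python
# Illustrative trace-vs-label model of discrimination across ISI.
import numpy as np

def within_category_d(isi_ms, trace_tau=500.0):
    """Within-category discrimination relies only on the decaying trace."""
    return 2.0 * np.exp(-isi_ms / trace_tau)

def between_category_d(isi_ms, trace_tau=500.0):
    """Between-category discrimination adds a durable label component,
    which can even benefit from extra processing time at longer ISIs."""
    label = 1.5 * (1 - np.exp(-isi_ms / 300.0))
    return 1.0 * np.exp(-isi_ms / trace_tau) + label

for isi in (100, 300, 2000):
    print(f"ISI {isi:4d} ms: within = {within_category_d(isi):.2f}, "
          f"between = {between_category_d(isi):.2f}")
```

At the study's ISIs (100, 300, 2000 ms) the within-category curve falls while the between-category curve rises, matching the reported crossover between trace-based and labeling-based discrimination.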
Mädebach, Andreas; Markuske, Anna-Maria; Jescheniak, Jörg D
2018-05-22
Picture naming takes longer in the presence of socially inappropriate (taboo) distractor words compared with neutral distractor words. Previous studies have attributed this taboo interference effect to increased attentional capture by taboo words or to verbal self-monitoring, that is, control processes scrutinizing verbal responses before articulation. In this study, we investigated the cause and locus of the taboo interference effect by contrasting three tasks that used the same target pictures but systematically differed with respect to the processing stages involved: picture naming (requiring conceptual processing, lexical processing, and articulation), phoneme decision (requiring conceptual and lexical processing), and natural size decision (requiring conceptual processing only). We observed taboo interference in picture naming and phoneme decision. In size decision, taboo interference was not reliably observed under the same task conditions in which the effect arose in picture naming and phoneme decision, but it emerged when the difficulty of the size decision task was increased by visually degrading the target pictures. Overall, these results suggest that taboo interference cannot be exclusively attributed to verbal self-monitoring operating over articulatory responses. Instead, taboo interference appears to arise already prior to articulatory preparation, during lexical processing and, at least with sufficiently high task difficulty, during prelexical processing stages.
Performance of Brazilian children on phonemic and semantic verbal fluency tasks
Charchat-Fichman, Helenice; Oliveira, Rosinda Martins; da Silva, Andreza Morais
2011-01-01
The most used verbal fluency paradigms are semantic and letter fluency tasks. Studies suggest that these paradigms access semantic memory and executive function and are sensitive to frontal lobe disturbances. There are few studies of these paradigms in Brazilian samples. Objective The present study investigated performance, and the effects of age, on verbal fluency tasks in Brazilian children. The results were compared with those of other studies, and the consistency of the scoring criteria is reported. Methods A sample of 119 children (7 to 10 years old) performed three phonemic fluency (F, A, M) tasks and three semantic fluency (animals, clothes, fruits) tasks. The results of thirty subjects were scored by two independent examiners. Results A significant positive correlation was found between the scores calculated by the two independent examiners. Significant positive correlations were found between performance on the semantic fluency task and the phonemic fluency task. The effect of age was significant for both tasks, and a significant difference was found between the 7- and 9-year-old subjects and between the 7- and 10-year-old subjects. The 8-year-old group did not differ from any of the other age groups. Conclusion The pattern of results was similar to that observed in previous Brazilian and international studies. PMID:29213727
Li, Jin-rang; Sun, Yan-yan; Xu, Wen
2010-09-01
To design a speech voice sample text containing all phonemes in Mandarin for subjective auditory perceptual evaluation of voice disorders. The design principles were that the short text should include the 21 initials and 39 finals, thereby covering all the phonemes in Mandarin, and that it should be meaningful. A short text of 155 Chinese words was composed; it included 21 initials and 38 finals (the final ê was excluded because it is rarely used in Mandarin). The text also covered 17 light tones and one "Erhua". The constituent ratios of the initials and finals in this short text were statistically similar to those in Mandarin, according to the method of similarity of the sample and population (r = 0.742, P < 0.001 and r = 0.844, P < 0.001, respectively). The constituent ratios of the tones in this short text were not statistically similar to those in Mandarin (r = 0.731, P > 0.05). A speech voice sample text with all phonemes in Mandarin was thus produced, with constituent ratios of initials and finals similar to those in Mandarin. Its value for subjective auditory perceptual evaluation of voice disorders needs further study.
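The similarity analysis in the abstract above compares the constituent ratios of initials (and finals) in the sample text with those of Mandarin via a correlation coefficient. A minimal sketch of that kind of comparison, using a hypothetical six-initial mini-inventory and invented frequency data (none of these numbers come from the study):

```python
from collections import Counter
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def constituent_ratios(units, inventory):
    """Relative frequency of each inventory unit among the observed units."""
    counts = Counter(units)
    total = sum(counts.values()) or 1
    return [counts[u] / total for u in inventory]

# hypothetical mini-inventory and frequency data, for illustration only
inventory = ["b", "p", "m", "f", "d", "t"]
sample_initials = list("bbpmfddttbmd")          # initials tallied from a text
reference = [0.20, 0.10, 0.15, 0.08, 0.25, 0.22]  # assumed corpus-wide ratios

r = pearson(constituent_ratios(sample_initials, inventory), reference)
print(round(r, 3))
```

With real data, the inventory would contain all 21 initials (or 38 finals) and the reference ratios would come from a Mandarin corpus; the study's r values (0.742, 0.844) were computed over those full inventories.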
Phonetically Irregular Word Pronunciation and Cortical Thickness in the Adult Brain
Blackmon, Karen; Barr, William B.; Kuzniecky, Ruben; DuBois, Jonathan; Carlson, Chad; Quinn, Brian T.; Blumberg, Mark; Halgren, Eric; Hagler, Donald J.; Mikhly, Mark; Devinsky, Orrin; McDonald, Carrie R.; Dale, Anders M.; Thesen, Thomas
2010-01-01
Accurate pronunciation of phonetically irregular words (exception words) requires prior exposure to unique relationships between orthographic and phonemic features. Whether such word knowledge is accompanied by structural variation in areas associated with orthographic-to-phonemic transformations has not been investigated. We used high resolution MRI to determine whether performance on a visual word-reading test composed of phonetically irregular words, the Wechsler Test of Adult Reading (WTAR), is associated with regional variations in cortical structure. A sample of 60 right-handed, neurologically intact individuals was administered the WTAR and underwent 3T volumetric MRI. Using quantitative, surface-based image analysis, cortical thickness was estimated at each vertex on the cortical mantle and correlated with WTAR scores while controlling for age. Higher scores on the WTAR were associated with thicker cortex in bilateral anterior superior temporal gyrus, bilateral angular gyrus/posterior superior temporal gyrus, and left hemisphere intraparietal sulcus. Higher scores were also associated with thinner cortex in left hemisphere posterior fusiform gyrus and central sulcus, bilateral inferior frontal gyrus, and right hemisphere lingual gyrus and supramarginal gyrus. These results suggest that the ability to correctly pronounce phonetically irregular words is associated with structural variations in cortical areas that are commonly activated in functional neuroimaging studies of word reading, including areas associated with grapheme-to-phoneme conversion. PMID:20302944
Männel, Claudia; Schaadt, Gesa; Illner, Franziska K; van der Meer, Elke; Friederici, Angela D
2017-02-01
Intact phonological processing is crucial for successful literacy acquisition. While individuals with difficulties in reading and spelling (i.e., developmental dyslexia) are known to experience deficient phoneme discrimination (i.e., segmental phonology), findings concerning their prosodic processing (i.e., suprasegmental phonology) are controversial. Because there are no behavior-independent studies on the underlying neural correlates of prosodic processing in dyslexia, these controversial findings might be explained by different task demands. To provide an objective behavior-independent picture of segmental and suprasegmental phonological processing in impaired literacy acquisition, we investigated event-related brain potentials during passive listening in typically and poor-spelling German school children. For segmental phonology, we analyzed the Mismatch Negativity (MMN) during vowel length discrimination, capturing automatic auditory deviancy detection in repetitive contexts. For suprasegmental phonology, we analyzed the Closure Positive Shift (CPS) that automatically occurs in response to prosodic boundaries. Our results revealed spelling group differences for the MMN, but not for the CPS, indicating deficient segmental, but intact suprasegmental phonological processing in poor spellers. The present findings point towards a differential role of segmental and suprasegmental phonology in literacy disorders and call for interventions that invigorate impaired literacy by utilizing intact prosody in addition to training deficient phonemic awareness. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Lallier, Marie; Valdois, Sylviane; Lassus-Sangosse, Delphine; Prado, Chloé; Kandel, Sonia
2014-05-01
The present study aimed to quantify cross-linguistic modulations of the contribution of phonemic awareness skills and visual attention span (VA Span) skills (the number of visual elements that can be processed simultaneously) to reading speed and accuracy in 18 Spanish-French balanced bilingual children with and without developmental dyslexia. The children were administered two similar reading batteries in French and Spanish. The deficits of the dyslexic children in reading accuracy were mainly visible in their opaque orthography (French), whereas difficulties indexed by reading speed were observed in both their opaque and transparent orthographies. Dyslexic children did not exhibit any phonemic awareness problems in French or in Spanish, but showed poor VA Span skills compared to their control peers. VA Span skills correlated with reading accuracy and speed measures in both Spanish and French, whereas phonemic awareness correlated with reading accuracy only. Overall, the present results show that the VA Span is tightly related to reading speed regardless of orthographic transparency, and that it accounts for differences in reading performance between good and poor readers across languages. The present findings further suggest that VA Span skills may play a particularly important role in building up specific word knowledge, which is critical for lexical reading strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.
Abnormal Brain Dynamics Underlie Speech Production in Children with Autism Spectrum Disorder.
Pang, Elizabeth W; Valica, Tatiana; MacDonald, Matt J; Taylor, Margot J; Brian, Jessica; Lerch, Jason P; Anagnostou, Evdokia
2016-02-01
A large proportion of children with autism spectrum disorder (ASD) have speech and/or language difficulties. While a number of structural and functional neuroimaging methods have been used to explore the brain differences in ASD with regards to speech and language comprehension and production, the neurobiology of basic speech function in ASD has not been examined. Magnetoencephalography (MEG) is a neuroimaging modality with high spatial and temporal resolution that can be applied to the examination of brain dynamics underlying speech as it can capture the fast responses fundamental to this function. We acquired MEG from 21 children with high-functioning autism (mean age: 11.43 years) and 21 age- and sex-matched controls as they performed a simple oromotor task, a phoneme production task and a phonemic sequencing task. Results showed significant differences in activation magnitude and peak latencies in primary motor cortex (Brodmann Area 4), motor planning areas (BA 6), temporal sequencing and sensorimotor integration areas (BA 22/13) and executive control areas (BA 9). Our findings of significant functional brain differences between these two groups on these simple oromotor and phonemic tasks suggest that these deficits may be foundational and could underlie the language deficits seen in ASD. © 2015 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research.
Balthazar, Marcio Luiz Figueredo; Cendes, Fernando; Damasceno, Benito Pereira
2008-11-01
Naming difficulty is common in Alzheimer's disease (AD), but the nature of this problem is not well established. The authors investigated the presence of semantic breakdown and the pattern of general and semantic errors in patients with mild AD, patients with amnestic mild cognitive impairment (aMCI), and normal controls by examining their spontaneous answers on the Boston Naming Test (BNT) and verifying whether they needed or benefited from semantic and phonemic cues. The errors in spontaneous answers were classified into four mutually exclusive categories (semantic errors, visual paragnosia, phonological errors, and omission errors), and the semantic errors were further subclassified as coordinate, superordinate, and circumlocutory. Patients with aMCI performed normally on the BNT and needed fewer semantic and phonemic cues than patients with mild AD. After semantic cues, subjects with aMCI and control subjects gave more correct answers than patients with mild AD, but after phonemic cues, there was no difference between the three groups, suggesting that the low performance of patients with AD cannot be completely explained by semantic breakdown. Patterns of spontaneous naming errors and subtypes of semantic errors were similar in the three groups, with decreasing error frequency from coordinate to superordinate to circumlocutory subtypes.
Sex-biased sound symbolism in english-language first names.
Pitcher, Benjamin J; Mesoudi, Alex; McElligott, Alan G
2013-01-01
Sexual selection has resulted in sex-based size dimorphism in many mammals, including humans. In Western societies, average to taller stature men and comparatively shorter, slimmer women have higher reproductive success and are typically considered more attractive. This size dimorphism also extends to vocalisations in many species, again including humans, with larger individuals exhibiting lower formant frequencies than smaller individuals. Further, across many languages there are associations between phonemes and the expression of size (e.g. large /a, o/, small /i, e/), consistent with the frequency-size relationship in vocalisations. We suggest that naming preferences are a product of this frequency-size relationship, driving male names to sound larger and female names smaller, through sound symbolism. In a 10-year dataset of the most popular British, Australian and American names we show that male names are significantly more likely to contain larger sounding phonemes (e.g. "Thomas"), while female names are significantly more likely to contain smaller phonemes (e.g. "Emily"). The desire of parents to have comparatively larger, more masculine sons, and smaller, more feminine daughters, and the increased social success that accompanies more sex-stereotyped names, is likely to be driving English-language first names to exploit sound symbolism of size in line with sexual body size dimorphism.
Functional connectivity in resting state as a phonemic fluency ability measure.
Miró-Padilla, Anna; Bueichekú, Elisenda; Ventura-Campos, Noelia; Palomar-García, María-Ángeles; Ávila, César
2017-03-01
There is some evidence that functional connectivity (FC) measures obtained at rest may reflect individual differences in cognitive capabilities. We tested this possibility by using the FAS test as a measure of phonemic fluency. Seed regions of the main brain areas involved in this task were extracted from meta-analysis results (Wagner et al., 2014) and used for pairwise resting-state FC analysis. Ninety-three undergraduates completed the FAS test outside the scanner. A correlation analysis was conducted between the F-A-S scores (behavioral testing) and the pairwise FC pattern of verbal fluency regions of interest. Results showed that higher FC between the thalamus and the cerebellum, and lower FCs between the left inferior frontal gyrus and the right insula and between the supplementary motor area and the right insula, were associated with better performance on the FAS test. Regression analyses revealed that the first two FCs contributed independently to this better phonemic fluency, reflecting a more general attentional factor (FC between the thalamus and the cerebellum) and a more specific fluency factor (FC between the left inferior frontal gyrus and the right insula). The results support the Spontaneous Trait Reactivation hypothesis, which explains how resting-state derived measures may reflect individual differences in cognitive abilities. Copyright © 2017 Elsevier Ltd. All rights reserved.
Batty, Rachel; Francis, Andrew; Thomas, Neil; Hopwood, Malcolm; Ponsford, Jennie; Johnston, Lisa; Rossell, Susan
2015-06-30
Verbal fluency in patients with psychosis following traumatic brain injury (PFTBI) has been reported as comparable to that of healthy participants. This finding is counterintuitive given the prominent fluency impairments demonstrated after traumatic brain injury (TBI) and in psychotic disorders such as schizophrenia. We investigated phonemic (executive) fluency (three letters: 'F', 'A' and 'S') and semantic fluency (one category: fruits and/or vegetables) in four matched groups: PFTBI (N=10), TBI (N=10), schizophrenia (N=23), and healthy controls (N=23). Words produced (minus perseverations and errors), and clustering and switching scores, were compared for the two fluency types across the groups. The results confirmed that PFTBI patients do show impaired fluency, in line with existing evidence in TBI and schizophrenia. PFTBI patients produced the fewest words on the phonemic fluency ('A') trial and in total, and demonstrated reduced switching on both phonemic and semantic tasks. No significant differences in clustering performance were found. Importantly, the pattern of results suggested that PFTBI patients share deficits with their brain-injured (primarily executive) and psychotic (executive and semantic) counterparts, and that these are exacerbated by their dual diagnosis. These findings add to a very limited literature by providing novel evidence of the nature of fluency impairments in dually-diagnosed PFTBI. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Lallier, Marie; Acha, Joana; Carreiras, Manuel
2016-01-01
This study investigates whether the orthographic consistency and transparency of languages have an impact on the development of reading strategies and reading sub-skills (i.e. phonemic awareness and visual attention span) in bilingual children. We evaluated 21 French (opaque)-Basque (transparent) bilingual children and 21 Spanish (transparent)-Basque (transparent) bilingual children at Grade 2, and 16 additional children of each group at Grade 5. All of them were assessed in their common language (i.e. Basque) on tasks measuring word and pseudoword reading, phonemic awareness and visual attention span skills. The Spanish-speaking groups showed better Basque pseudoword reading and better phonemic awareness abilities than their French-speaking peers, but only in the most difficult conditions of the tasks. However, on the visual attention span task, the French-Basque bilinguals showed the most efficient visual processing strategies. Therefore, learning to read in an additional language affected Basque literacy skills differently, depending on whether the additional orthography was opaque (e.g. French) or transparent (e.g. Spanish). Moreover, we showed that the most noteworthy effects of Spanish and French orthographic transparency on Basque performance were related to the size of the phonological and visual grain used to perform the tasks. © 2015 John Wiley & Sons Ltd.
Ding, Yi; Liu, Ru-De; McBride, Catherine; Zhang, Dake
2015-01-01
This study examined analytical pinyin (a phonological coding system for teaching pronunciation and lexical tones of Chinese characters) skills in 54 Mandarin-speaking fourth graders by using an invented spelling instrument that tapped into syllable awareness, phoneme awareness, lexical tones, and tone sandhi in Chinese. Pinyin invented spelling was significantly correlated with Chinese character recognition and Chinese phonological awareness (i.e., syllable deletion and phoneme deletion). In comparison to good and average readers, poor readers performed significantly worse on the invented spelling task, and a difference was also found between average and good readers. To differentiate readers at different levels, the pinyin invented spelling task, which examined both segmental and suprasegmental elements, was superior to the typical phonological awareness task, which examined segments only. Within this new task, items involving tone sandhi (Chinese language changes in which the tones of words alter according to predetermined rules) were more difficult to manipulate than were those without tone sandhi. The findings suggest that this newly developed task may be optimal for tapping unique phonological and linguistic features in reading of Chinese and examining particular tonal difficulties in struggling Chinese readers. In addition, the results suggest that phonics manipulations within tasks of phonological and tonal awareness can alter their difficulty levels. © Hammill Institute on Disabilities 2014.
Barrios, Shannon L.; Namyst, Anna M.; Lau, Ellen F.; Feldman, Naomi H.; Idsardi, William J.
2016-01-01
To attain native-like competence, second language (L2) learners must establish mappings between familiar speech sounds and new phoneme categories. For example, Spanish learners of English must learn that [d] and [ð], which are allophones of the same phoneme in Spanish, can distinguish meaning in English (i.e., /deɪ/ “day” and /ðeɪ/ “they”). Because adult listeners are less sensitive to allophonic than phonemic contrasts in their native language (L1), novel target language contrasts between L1 allophones may pose special difficulty for L2 learners. We investigate whether advanced Spanish late-learners of English overcome native language mappings to establish new phonological relations between familiar phones. We report behavioral and magnetoencephalographic (MEG) evidence from two experiments that measured the sensitivity and pre-attentive processing of three listener groups (L1 English, L1 Spanish, and advanced Spanish late-learners of English) to differences between three nonword stimulus pairs ([idi]-[iði], [idi]-[iɾi], and [iði]-[iɾi]) which differ in phones that play a different functional role in Spanish and English. Spanish and English listeners demonstrated greater sensitivity (larger d' scores) for nonword pairs distinguished by phonemic than by allophonic contrasts, mirroring previous findings. Spanish late-learners demonstrated sensitivity (large d' scores and MMN responses) to all three contrasts, suggesting that these L2 learners may have established a novel [d]-[ð] contrast despite the phonological relatedness of these sounds in the L1. Our results suggest that phonological relatedness influences perceived similarity, as evidenced by the results of the native speaker groups, but may not cause persistent difficulty for advanced L2 learners. Instead, L2 learners are able to use cues that are present in their input to establish new mappings between familiar phones. PMID:27445949
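The sensitivity measure reported above, d', comes from signal detection theory: d' = z(hit rate) - z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch with invented trial counts (the study's actual counts and correction procedure are not given in the abstract):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(H) - z(F), with a simple correction
    (add 0.5 to counts) so rates of exactly 0 or 1 stay finite."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(h) - z(f)

# invented counts for one listener in an AX (same-different) task:
# 45 hits / 5 misses on "different" trials,
# 10 false alarms / 40 correct rejections on "same" trials
print(round(d_prime(45, 5, 10, 40), 2))
```

A listener responding at chance has equal hit and false-alarm rates, giving d' near 0; larger d' indicates better discrimination of the nonword pair.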
Blackford, Trevor; Holcomb, Phillip J.; Grainger, Jonathan; Kuperberg, Gina R.
2013-01-01
We measured Event-Related Potentials (ERPs) and naming times to picture targets preceded by masked words (stimulus onset asynchrony: 80 ms) that shared one of three different types of relationship with the names of the pictures: (1) Identity related, in which the prime was the name of the picture (“socks” –
Speech evaluation in children with temporomandibular disorders
PIZOLATO, Raquel Aparecida; FERNANDES, Frederico Silva de Freitas; GAVIÃO, Maria Beatriz Duarte
2011-01-01
Objectives The aims of this study were to evaluate the influence of temporomandibular disorders (TMD) on speech in children, and to verify the influence of occlusal characteristics. Material and methods Speech and dental occlusal characteristics were assessed in 152 Brazilian children (78 boys and 74 girls), aged 8 to 12 (mean age 10.05 ± 1.39 years), with or without TMD signs and symptoms. The clinical signs were evaluated using the Research Diagnostic Criteria for TMD (RDC/TMD) (axis I), and the symptoms were evaluated using a questionnaire. The following groups were formed: Group TMD (n=40), TMD signs and symptoms (Group S and S, n=68), TMD signs or symptoms (Group S or S, n=33), and without signs and symptoms (Group N, n=11). Articulatory speech disorders were diagnosed during spontaneous speech and repetition of words using the "Phonological Assessment of Child Speech" for the Portuguese language. A list of 40 phonologically balanced words, read by the speech pathologist and repeated by the children, was also applied. Data were analyzed by descriptive statistics and Fisher's exact or Chi-square tests (α=0.05). Results A slight prevalence of articulatory disturbances, such as substitutions, omissions and distortions of the sibilants /s/ and /z/, and no deviations in jaw lateral movements were observed. Reduction of vertical amplitude was found in 10 children, the prevalence being greater in children with TMD signs and symptoms than in normal children. Tongue protrusion in the phonemes /t/, /d/, /n/ and /l/, and frontal lip positioning in the phonemes /s/ and /z/, were the most prevalent visible alterations. There was a high percentage of dental occlusal alterations. Conclusions There was no association between TMD and speech disorders. Occlusal alterations may be influencing factors, allowing distortions and frontal lisping in the phonemes /s/ and /z/ and inadequate tongue position in the phonemes /t/, /d/, /n/ and /l/. PMID:21986655
Timing in audiovisual speech perception: A mini review and new psychophysical data.
Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory
2016-02-01
Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
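The classification procedure described above, relating trial-by-trial mask visibility to perceptual reports, is a form of reverse correlation. A toy sketch on simulated data (the actual study used spatiotemporal masks over video pixels and frames; here each trial is reduced to a 10-frame visibility vector, and frame indices, trial counts, and the response rule are all invented):

```python
import random

def classification_image(masks, responses):
    """Reverse correlation: mean frame visibility on trials where the
    visual phoneme influenced perception minus trials where it did not.
    masks: per-trial lists of frame visibilities in [0, 1];
    responses: 1 if the (visually driven) percept occurred, else 0."""
    yes = [m for m, r in zip(masks, responses) if r == 1]
    no = [m for m, r in zip(masks, responses) if r == 0]

    def mean_at(group, f):
        return sum(m[f] for m in group) / len(group)

    return [mean_at(yes, f) - mean_at(no, f) for f in range(len(masks[0]))]

# toy simulation: only frame 3 carries the informative visual cue
random.seed(0)
masks = [[random.random() for _ in range(10)] for _ in range(2000)]
responses = [1 if m[3] > 0.5 else 0 for m in masks]

ci = classification_image(masks, responses)
print(max(range(10), key=lambda f: ci[f]))  # frame with the strongest influence
```

Frames whose visibility covaries with the percept get large positive weights in the classification image; uninformative frames average out near zero, which is how the study localized perceptually relevant visual speech information in time.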
Speech motor development: Integrating muscles, movements, and linguistic units.
Smith, Anne
2006-01-01
A fundamental problem for those interested in human communication is to determine how ideas and the various units of language structure are communicated through speaking. The physiological concepts involved in the control of muscle contraction and movement are theoretically distant from the processing levels and units postulated to exist in language production models. A review of the literature on adult speakers suggests that they engage complex, parallel processes involving many units, including sentence, phrase, syllable, and phoneme levels. Infants must develop multilayered interactions among language and motor systems. This discussion describes recent studies of speech motor performance relative to varying linguistic goals during the childhood, teenage, and young adult years. Studies of the developing interactions between speech motor and language systems reveal both qualitative and quantitative differences between the developing and the mature systems. These studies provide an experimental basis for a more comprehensive theoretical account of how mappings between units of language and units of action are formed and how they function. Readers will be able to: (1) understand the theoretical differences between models of speech motor control and models of language processing, as well as the nature of the concepts used in the two different kinds of models, (2) explain the concept of coarticulation and state why this phenomenon has confounded attempts to determine the role of linguistic units, such as syllables and phonemes, in speech production, (3) describe the development of speech motor performance skills and specify quantitative and qualitative differences between speech motor performance in children and adults, and (4) describe experimental methods that allow scientists to study speech and limb motor control, as well as compare units of action used to study non-speech and speech movements.
Neuron recycling for learning the alphabetic principles.
Scliar-Cabral, Leonor
2014-01-01
The main purpose of this paper is to discuss an approach to the phonic method of learning-teaching early literacy development, namely that the visual neurons must be recycled to recognize the small differences among pertinent letter features. In addition to the challenge of segmenting the speech chain and the syllable for learning the alphabetic principles, neuroscience has demonstrated another major challenge: neurons in mammals are programmed to process visual signals symmetrically. In order to develop early literacy, visual neurons must be recycled to overcome this initial programming together with phonological awareness, expanding it with the ability to delimit words, including clitics, as well as assigning stress to words. To achieve this goal, Scliar's Early Literacy Development System was proposed and tested. Sixteen subjects (10 girls and 6 boys) comprised the experimental group (mean age 6.02 years), and 16 subjects (7 girls and 9 boys) formed the control group (mean age 6.10 years). The research instruments were a psychosociolinguistic questionnaire to reveal the subjects' profile and a post-test battery of tests. At the beginning of the experiment, the experimental group underwent an intervention program based on Scliar's Early Literacy Development System. One of the tests is discussed in this paper, the grapheme-phoneme test: subjects had to read aloud a pseudoword with 4 graphemes, signaled by the experimenter and designed to assess the subject's ability to convert a grapheme into its correspondent phoneme. The experimental group averaged 25.0 correct answers (SD = 11.4), while the control group averaged 14.3 correct answers (SD = 10.6); the difference was significant. The experimental results validate Scliar's Early Literacy Development System and indicate the need to redesign early literacy development methods. © 2014 S. Karger AG, Basel.
Hickok, G; Okada, K; Barr, W; Pa, J; Rogalsky, C; Donnelly, K; Barde, L; Grant, A
2008-12-01
Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension in acute left versus right hemisphere deactivation during Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word, and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.
Dutch home-based pre-reading intervention with children at familial risk of dyslexia.
van Otterloo, Sandra G; van der Leij, Aryan
2009-12-01
Children (5 and 6 years old, n = 30) at familial risk of dyslexia received a home-based intervention that focused on phoneme awareness and letter knowledge in the year prior to formal reading instruction. The children were compared to a no-training at-risk control group (n = 27), which was selected a year earlier. After training, we found a small effect on a composite score of phoneme awareness (d = 0.29) and a large effect on receptive letter knowledge (d = 0.88). In first grade, however, this did not result in beneficial effects for the experimental group in word reading and spelling. Results are compared to three former intervention studies in The Netherlands and comparable studies from Denmark and Australia.
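The training effects reported above (d = 0.29 for phoneme awareness, d = 0.88 for letter knowledge) are Cohen's d effect sizes, the standardized mean difference between the two groups. A minimal sketch of the computation; the group statistics below are made-up illustrative numbers, not the study's data:

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d: difference of group means divided by the pooled SD."""
    pooled_var = ((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Hypothetical group statistics for groups of n = 30 and n = 27 children
d = cohens_d(mean_a=12.0, mean_b=10.5, sd_a=5.0, sd_b=5.2, n_a=30, n_b=27)
print(round(d, 2))  # → 0.29, a "small" effect by the usual 0.2/0.5/0.8 benchmarks
```

By those conventional benchmarks, the letter-knowledge effect (0.88) counts as large and the phoneme-awareness effect (0.29) as small.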
A new VOX technique for reducing noise in voice communication systems. [voice operated keying]
NASA Technical Reports Server (NTRS)
Morris, C. F.; Morgan, W. C.; Shack, P. E.
1974-01-01
A VOX technique for reducing noise in voice communication systems is described which is based on the separation of voice signals into contiguous frequency-band components with the aid of an adaptive VOX in each band. It is shown that this processing scheme can effectively reduce both wideband and narrowband quasi-periodic noise since the threshold levels readjust themselves to suppress noise that exceeds speech components in each band. Results are reported for tests of the adaptive VOX, and it is noted that improvements can still be made in such areas as the elimination of noise pulses, phoneme reproduction at high-noise levels, and the elimination of distortion introduced by phase delay.
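A minimal sketch of the band-wise adaptive-threshold idea described above, not the NASA implementation: each band tracks its own noise floor and passes a frame only when its energy clearly exceeds that floor. The function name, the smoothing constant, and the margin are all assumptions made for illustration:

```python
import numpy as np

def multiband_vox(band_energies, alpha=0.95, margin=2.0):
    """Gate each frequency band independently with an adaptive threshold.

    band_energies: (n_frames, n_bands) array of per-frame band energies.
    Each band's noise floor rises slowly (leaky average) but snaps down
    fast (elementwise minimum), so steady noise is suppressed in that
    band while sudden speech bursts exceed margin * floor and pass."""
    floor = band_energies[0].astype(float).copy()
    gated = np.zeros_like(band_energies, dtype=float)
    for i, frame in enumerate(band_energies):
        floor = np.minimum(frame, alpha * floor + (1 - alpha) * frame)
        gated[i] = np.where(frame > margin * floor, frame, 0.0)
    return gated

# Band 0: steady noise at 1.0; band 1: quiet (0.1) with a speech burst (10.0)
frames = np.array([[1.0, 0.1]] * 5 + [[1.0, 10.0]] * 3 + [[1.0, 0.1]] * 2)
out = multiband_vox(frames)
print(out[:, 0].max(), out[5, 1])  # steady-noise band gated to 0.0; burst passes
```

Because the threshold is per-band, narrowband noise that would swamp a single global gate only desensitizes the one band it occupies.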
Effects of syllable structure in aphasic errors: implications for a new model of speech production.
Romani, Cristina; Galluzzi, Claudia; Bureca, Ivana; Olson, Andrew
2011-03-01
Current models of word production assume that words are stored as linear sequences of phonemes which are structured into syllables only at the moment of production. This is because syllable structure is always recoverable from the sequence of phonemes. In contrast, we present theoretical and empirical evidence that syllable structure is lexically represented. Storing syllable structure would have the advantage of making representations more stable and resistant to damage. On the other hand, re-syllabifications affect only a minimal part of phonological representations and occur only in some languages and depending on speech register. Evidence for these claims comes from analyses of aphasic errors which not only respect phonotactic constraints, but also avoid transformations which move the syllabic structure of the word further away from the original structure, even when equating for segmental complexity. This is true across tasks, types of errors, and, crucially, types of patients. The same syllabic effects are shown by apraxic patients and by phonological patients who have more central difficulties in retrieving phonological representations. If syllable structure were only computed after phoneme retrieval, it would have no way to influence the errors of phonological patients. Our results have implications for psycholinguistic and computational models of language as well as for clinical and educational practices. Copyright © 2010 Elsevier Inc. All rights reserved.
Strong and long: effects of word length on phonological binding in verbal short-term memory.
Jefferies, Elizabeth; Frankish, Clive; Noble, Katie
2011-02-01
This study examined the effects of item length on the contribution of linguistic knowledge to immediate serial recall (ISR). Long words are typically recalled more poorly than short words, reflecting the greater demands that they place on phonological encoding, rehearsal, and production. However, reverse word length effects--that is, better recall of long than short words--can also occur in situations in which phonological maintenance is difficult, suggesting that long words derive greater support from long-term lexical knowledge. In this study, long and short words and nonwords (containing one vs. three syllables) were presented for immediate serial recall in (a) pure lists and (b) unpredictable mixed lists of words and nonwords. The mixed-list paradigm is known to disrupt the phonological stability of words, encouraging their phonemes to recombine with the elements of other list items. In this situation, standard length effects were seen for nonwords, while length effects for words were absent or reversed. A detailed error analysis revealed that long words were more robust to the mixed-list manipulation than short words: Their phonemes were less likely to be omitted and to recombine with phonemes from other list items. These findings support an interactive view of short-term memory, in which long words derive greater benefits from lexical knowledge than short words, especially when their phonological integrity is challenged by the inclusion of nonwords in mixed lists.
Flanagan, Sheila; Goswami, Usha
2018-03-01
Recent models of the neural encoding of speech suggest a core role for amplitude modulation (AM) structure, particularly regarding AM phase alignment. Accordingly, speech tasks that measure linguistic development in children may exhibit systematic properties regarding AM structure. Here, the acoustic structure of spoken items in child phonological and morphological tasks, phoneme deletion and plural elicitation, was investigated. The phase synchronisation index (PSI), reflecting the degree of phase alignment between pairs of AMs, was computed for 3 AM bands (delta, theta, beta/low gamma; 0.9-2.5 Hz, 2.5-12 Hz, 12-40 Hz, respectively), for five spectral bands covering 100-7250 Hz. For phoneme deletion, data from 94 child participants with and without dyslexia was used to relate AM structure to behavioural performance. Results revealed that a significant change in magnitude of the phase synchronisation index (ΔPSI) of slower AMs (delta-theta) systematically accompanied both phoneme deletion and plural elicitation. Further, children with dyslexia made more linguistic errors as the delta-theta ΔPSI increased. Accordingly, ΔPSI between slower temporal modulations in the speech signal systematically distinguished test items from accurate responses and predicted task performance. This may suggest that sensitivity to slower AM information in speech is a core aspect of phonological and morphological development.
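A phase synchronisation index of the kind described above can be written as PSI = |mean(exp(i*(n*phi1 - m*phi2)))| for two phase series phi1, phi2 and an integer n:m locking ratio. A minimal sketch with synthetic phase series; the band extraction itself (filtering each AM band and taking instantaneous phase via the Hilbert transform) is omitted, and the frequencies below are illustrative, not the study's stimuli:

```python
import numpy as np

def phase_sync_index(phase_a, phase_b, n=1, m=1):
    """n:m phase synchronisation index between two phase time series.

    Returns a value in [0, 1]: 1 means a perfectly constant phase
    relationship, 0 means no consistent phase relationship."""
    return float(np.abs(np.mean(np.exp(1j * (n * phase_a - m * phase_b)))))

# Synthetic example: a 2 Hz "delta" AM phase perfectly 1:2 locked to a 4 Hz "theta" AM
t = np.linspace(0, 2, 4000, endpoint=False)
phi_delta = 2 * np.pi * 2 * t
phi_theta = 2 * np.pi * 4 * t
psi_locked = phase_sync_index(phi_delta, phi_theta, n=2, m=1)

rng = np.random.default_rng(0)
psi_random = phase_sync_index(phi_delta, rng.uniform(0, 2 * np.pi, t.size))
print(psi_locked, psi_random < 0.1)  # locked pair → 1.0; random phases → near 0
```

A change in this quantity between a test item and its correct response is the ΔPSI measure the study relates to task performance.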
Fauvel, Baptiste; Groussard, Mathilde; Mutlu, Justine; Arenaza-Urquijo, Eider M; Eustache, Francis; Desgranges, Béatrice; Platel, Hervé
2014-01-01
Because of permanent use-dependent brain plasticity, individuals' lifelong experiences are believed to influence the quality of cognitive aging. In older individuals, both former and current musical practices have been associated with better verbal skills, visual memory, processing speed, and planning function. This work looked for an interaction between musical practice and cognitive aging by comparing musician and non-musician individuals for two lifetime periods (middle and late adulthood). Long-term memory, auditory-verbal short-term memory, processing speed, non-verbal reasoning, and verbal fluencies were assessed. In Study 1, measures of processing speed and auditory-verbal short-term memory were significantly better performed by musicians compared with controls, but both groups displayed the same age-related differences. For verbal fluencies, musicians scored higher than controls and displayed different age effects. In Study 2, we found that lifetime period at training onset (childhood vs. adulthood) was associated with phonemic, but not semantic, fluency performances (musicians who had started to practice in adulthood did not perform better on phonemic fluency than non-musicians). Current frequency of training did not account for musicians' scores on either of these two measures. These patterns of results are discussed by setting the hypothesis of a transformative effect of musical practice against a non-causal explanation.
Screening Protocol for Early Identification of Brazilian Children at Risk for Dyslexia
Germano, Giseli D.; César, Alexandra B. P. de C.; Capellini, Simone A.
2017-01-01
Early identification of students at risk of dyslexia has been an educational challenge in the past years. This research had two main goals. First, we aimed to develop a screening protocol for early identification of Brazilian children at risk for dyslexia; second, we aimed to identify the predictive variables of this protocol using Principal Component Analysis. The major step involved in developing this protocol was the selection of variables, which were chosen based on the literature review and linguistic criteria. The screening protocol was composed of seven cognitive-linguistic skills: Letter Naming; Phonological Awareness (which comprises the following subtests: Rhyme Production, Rhyme Identification, Syllabic Segmentation, Production of Words from a Given Phoneme, Phonemic Synthesis, and Phonemic Analysis); Phonological Working Memory; Rapid Naming Speed; Silent Reading; Reading of Words and Non-words; and Auditory Comprehension of Sentences from Pictures. A total of 149 children, aged from 6 years to 6 years and 11 months, of both genders, who were enrolled in the 1st grade of elementary public schools, were submitted to the screening protocol. Principal Component Analysis revealed four factors, accounting for 64.45% of the variance of the Protocol variables: first factor (“pre-reading”), second factor (“decoding”), third factor (“Reading”), and fourth factor (“Auditory processing”). The factors found corroborate those reported in the national and international literature and have been described as early signs of dyslexia and reading problems. PMID:29163246
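Principal Component Analysis of a screening protocol like this reduces to an eigendecomposition of the correlation matrix of the subtest scores; each component's eigenvalue, divided by the sum of all eigenvalues, gives the proportion of variance that factor explains (64.45% across four factors in the study above). A minimal sketch; the score matrix below is random placeholder data, not the children's actual results:

```python
import numpy as np

def explained_variance_ratios(scores):
    """PCA on a subjects x measures matrix: z-score the columns, then return
    the fraction of total variance explained by each component (descending)."""
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
    eigvals = np.linalg.eigvalsh(np.cov(z, rowvar=False))[::-1]
    return eigvals / eigvals.sum()

# Placeholder: 149 subjects x 7 cognitive-linguistic measures
rng = np.random.default_rng(42)
scores = rng.normal(size=(149, 7))
ratios = explained_variance_ratios(scores)
print(ratios[:4].sum())  # proportion of variance the first four components capture
```

Deciding how many of these components to retain as "factors" (eigenvalue > 1, scree plot, or a target cumulative variance) is a separate judgment the sketch does not make.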
Mattys, Sven L; Scharenborg, Odette
2014-03-01
This study investigates the extent to which age-related language processing difficulties are due to a decline in sensory processes or to a deterioration of cognitive factors, specifically, attentional control. Two facets of attentional control were examined: inhibition of irrelevant information and divided attention. Younger and older adults were asked to categorize the initial phoneme of spoken syllables ("Was it m or n?"), trying to ignore the lexical status of the syllables. The phonemes were manipulated to range in eight steps from m to n. Participants also did a discrimination task on syllable pairs ("Were the initial sounds the same or different?"). Categorization and discrimination were performed under either divided attention (concurrent visual-search task) or focused attention (no visual task). The results showed that even when the younger and older adults were matched on their discrimination scores: (1) the older adults had more difficulty inhibiting lexical knowledge than did younger adults, (2) divided attention weakened lexical inhibition in both younger and older adults, and (3) divided attention impaired sound discrimination more in older than younger listeners. The results confirm the independent and combined contribution of sensory decline and deficit in attentional control to language processing difficulties associated with aging. The relative weight of these variables and their mechanisms of action are discussed in the context of theories of aging and language. (c) 2014 APA, all rights reserved.
Word naming times and psycholinguistic norms for Italian nouns.
Barca, Laura; Burani, Cristina; Arduino, Lisa S
2002-08-01
The present study describes normative measures for 626 Italian simple nouns. The database (LEXVAR.XLS) is freely available for downloading on the Web site http://wwwistc.ip.rm.cnr.it/materia/database/. For each of the 626 nouns, values for the following variables are reported: age of acquisition, familiarity, imageability, concreteness, adult written frequency, child written frequency, adult spoken frequency, number of orthographic neighbors, mean bigram frequency, length in syllables, and length in letters. A classification of lexical stress and of the type of word-initial phoneme is also provided. The intercorrelations among the variables, a factor analysis, and the effects of variables and of the extracted factors on word naming are reported. Naming latencies were affected primarily by a factor including word length and neighborhood size and by a word frequency factor. Neither a semantic factor including imageability, concreteness, and age of acquisition nor a factor defined by mean bigram frequency had significant effects on pronunciation times. These results hold for a language with shallow orthography, like Italian, for which lexical nonsemantic properties have been shown to affect reading aloud. These norms are useful in a variety of research areas involving the manipulation and control of stimulus attributes.
Brooks, P L; Frost, B J; Mason, J L; Gibson, D M
1987-03-01
The experiments described are part of an ongoing evaluation of the Queen's University Tactile Vocoder, a device that allows the acoustic waveform to be felt as a vibrational pattern on the skin. Two prelingually profoundly deaf teenagers reached criterion on a 50-word vocabulary (live voice, single speaker) using information obtained solely from the tactile vocoder with 28.5 and 24.0 hours of training. Immediately following word-learning experiments, subjects were asked to place 16 CVs into five phonemic categories (voiced & unvoiced stops, voiced & unvoiced fricatives, approximants). Average accuracy was 84.5%. Similar performance (89.6%) was obtained for placement of 12 VCs into four phonemic categories. Subjects were able to acquire some general rules about voicing and manner of articulation cues.
Age of acquisition effects on the functional organization of language in the adult brain.
Mayberry, Rachel I; Chen, Jen-Kai; Witcher, Pamela; Klein, Denise
2011-10-01
Using functional magnetic resonance imaging (fMRI), we neuroimaged deaf adults as they performed two linguistic tasks with sentences in American Sign Language, grammatical judgment and phonemic-hand judgment. Participants' age-onset of sign language acquisition ranged from birth to 14 years; length of sign language experience was substantial and did not vary in relation to age of acquisition. For both tasks, a more left lateralized pattern of activation was observed, with activity for grammatical judgment being more anterior than that observed for phonemic-hand judgment, which was more posterior by comparison. Age of acquisition was linearly and negatively related to activation levels in anterior language regions and positively related to activation levels in posterior visual regions for both tasks. Copyright © 2011 Elsevier Inc. All rights reserved.
Ertürk, Korhan Levent; Şengül, Gökhan
2012-01-01
We developed 3D simulation software for human organs/tissues, together with a database to store the related data, a data management system to manage the created data, and a metadata system for data management. This approach provides two benefits. First, the developed system does not need to keep the patient's/subject's medical images on the system, which reduces memory usage. Second, the system provides 3D simulation and modification options, which will help clinicians to use necessary tools for visualization and modification operations. The developed system is tested in a case study, in which a 3D human brain model is created and simulated from 2D MRI images of a human brain, and we extended the 3D model to include the spreading cortical depression (SCD) wave front, an electrical phenomenon that is believed to cause migraine. PMID:23258956
ERIC Educational Resources Information Center
Hwang, Shin Ja J., Ed.; Lommel, Arle R., Ed.
Papers from the conference include: "English and Human Morphology: 'Naturalness' in Counting and Measuring" (Sullivan); "Phonetic and Phonemic Change Revisited" (Lockwood); "Virtual Reality" (Langacker); "Path Directions in ASL Agreement Verbs are Predictable on Semantic Grounds" (Taub); "Temporal…
Processing voiceless vowels in Japanese: Effects of language-specific phonological knowledge
NASA Astrophysics Data System (ADS)
Ogasawara, Naomi
2005-04-01
There has been little research on processing allophonic variation in the field of psycholinguistics. This study focuses on processing the voiced/voiceless allophonic alternation of high vowels in Japanese. Three perception experiments were conducted to explore how listeners parse out vowels with the voicing alternation from other segments in the speech stream and how the different voicing statuses of the vowel affect listeners' word recognition process. The results from the three experiments show that listeners use phonological knowledge of their native language for phoneme processing and for word recognition. However, interactions of the phonological and acoustic effects are observed to be different in each process. The facilitatory phonological effect and the inhibitory acoustic effect cancel out one another in phoneme processing; while in word recognition, the facilitatory phonological effect overrides the inhibitory acoustic effect.
Phonemic carryover perseveration: word blends.
Buckingham, Hugh W; Christman, Sarah S
2004-11-01
This article will outline and describe the aphasic disorder of recurrent perseveration and will demonstrate how it interacts with the retrieval and production of spoken words in the language of fluent aphasic patients who have sustained damage to the left (dominant) posterior temporoparietal lobe. We will concentrate on the various kinds of sublexical segmental perseverations (the so-called phonemic carryovers of Santo Pietro and Rigrodsky) that most often play a role in the generation of word blendings. We will show how perseverative blends allow the clinician to better understand the dynamics of word and syllable production in fluent aphasia by scrutinizing the "onset/rime" and "onset/superrime" constituents of monosyllabic and polysyllabic words, respectively. We will demonstrate to the speech language pathologist the importance of the trochee stress pattern and the possibility that its metrical template may constitute a structural unit that can be perseverated.
Spatiotemporal differentiation in auditory and motor regions during auditory phoneme discrimination.
Aerts, Annelies; Strobbe, Gregor; van Mierlo, Pieter; Hartsuiker, Robert J; Corthals, Paul; Santens, Patrick; De Letter, Miet
2017-06-01
Auditory phoneme discrimination (APD) is supported by both auditory and motor regions through a sensorimotor interface embedded in a fronto-temporo-parietal cortical network. However, the specific spatiotemporal organization of this network during APD with respect to different types of phonemic contrasts is still unclear. Here, we use source reconstruction, applied to event-related potentials in a group of 47 participants, to uncover a potential spatiotemporal differentiation in these brain regions during a passive and active APD task with respect to place of articulation (PoA), voicing and manner of articulation (MoA). Results demonstrate that in an early stage (50-110 ms), auditory, motor and sensorimotor regions elicit more activation during the passive and active APD task with MoA and active APD task with voicing compared to PoA. In a later stage (130-175 ms), the same auditory and motor regions elicit more activation during the APD task with PoA compared to MoA and voicing, yet only in the active condition, implying important timing differences. Degree of attention influences a frontal network during the APD task with PoA, whereas auditory regions are more affected during the APD task with MoA and voicing. Based on these findings, it can be carefully suggested that APD is supported by the integration of early activation of auditory-acoustic properties in superior temporal regions, more pronounced for MoA and voicing, and later auditory-to-motor integration in sensorimotor areas, more pronounced for PoA.
Martinussen, Rhonda; Ferrari, Julia; Aitken, Madison; Willows, Dale
2015-10-01
This study examined the relations among perceived and actual knowledge of phonemic awareness (PA), exposure to PA instruction during practicum, and self-efficacy for teaching PA in a sample of 54 teacher candidates (TCs) enrolled in a 1-year Bachelor of Education program in a Canadian university. It also assessed the effects of a brief multimedia-enhanced lecture on TCs' actual knowledge of PA and efficacy ratings. Prior to the lecture, teacher candidates' scores on the PA assessment were relatively low with a mean percentage correct of 56.3%. Actual knowledge was not significantly correlated with perceived knowledge or self-efficacy ratings. Perceived knowledge was significantly and positively correlated with efficacy ratings and students' rating of their exposure to PA instruction during their practicum experience. A path analysis revealed that the relationship between exposure to PA instruction and self-efficacy beliefs was mediated by perceived knowledge controlling for actual knowledge and general prior experience working with young children. Analyses also revealed that TCs made significant gains in self-efficacy as well as actual knowledge when re-assessed after the lecture with a mean post-lecture score of 71.4%. Written feedback from the TCs indicated that the digital video clips included in the lecture provided clarity regarding the type of instructional practices that teachers could use to support phonemic awareness development in children. Implications for practice and future research on teacher preparation are discussed.
Developing the Persian version of the homophone meaning generation test
Ebrahimipour, Mona; Motamed, Mohammad Reza; Ashayeri, Hassan; Modarresi, Yahya; Kamali, Mohammad
2016-01-01
Background: Finding the right word is a necessity in communication, and its evaluation has always been a challenging clinical issue, suggesting the need for valid and reliable measurements. The Homophone Meaning Generation Test (HMGT) can measure the ability to switch between verbal concepts, which is required in word retrieval. The purpose of this study was to adapt and validate the Persian version of the HMGT. Methods: The first phase involved the adaptation of the HMGT to the Persian language. The second phase concerned the psychometric testing. The word-finding performance was assessed in 90 Persian-speaking healthy individuals (20-50 year old; 45 males and 45 females) through three naming tasks: Semantic Fluency, Phonemic Fluency, and Homophone Meaning Generation Test. The participants had no history of neurological or psychiatric diseases, alcohol abuse, severe depression, or history of speech, language, or learning problems. Results: The internal consistency coefficient was larger than 0.8 for all the items with a total Cronbach’s alpha of 0.80. Interrater and intrarater reliability were also excellent. The validity of all items was above 0.77, and the content validity index (0.99) was appropriate. The Persian HMGT had strong convergent validity with semantic and phonemic switching and adequate divergent validity with semantic and phonemic clustering. Conclusion: The Persian version of the Homophone Meaning Generation Test is an appropriate, valid, and reliable test to evaluate the ability to switch between verbal concepts in the assessment of word-finding performance. PMID:27390705
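The internal-consistency coefficient reported above is Cronbach's alpha: α = k/(k−1) · (1 − Σ item variances / variance of the total score) for k items. A minimal sketch of the computation; the toy score matrix is made up for illustration, not the study's data:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a subjects x items score matrix."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: three items that all track the same underlying ability perfectly
ability = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
items = np.column_stack([ability, ability + 0.5, ability - 0.5])
alpha = cronbach_alpha(items)
print(round(alpha, 6))  # → 1.0: perfectly consistent items
```

Items that covary strongly inflate the total-score variance relative to the item variances, which is what pushes alpha toward 1.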
Mirror neurons, birdsong, and human language: a hypothesis.
Levy, Florence
2011-01-01
The mirror system hypothesis and investigations of birdsong are reviewed in relation to the significance for the development of human symbolic and language capacity, in terms of three fundamental forms of cognitive reference: iconic, indexical, and symbolic. Mirror systems are initially iconic but can progress to indexical reference when produced without the need for concurrent stimuli. Developmental stages in birdsong are also explored with reference to juvenile subsong vs complex stereotyped adult syllables, as an analogy with human language development. While birdsong remains at an indexical reference stage, human language benefits from the capacity for symbolic reference. During a pre-linguistic "babbling" stage, recognition of native phonemic categories is established, allowing further development of subsequent prefrontal and linguistic circuits for sequential language capacity.
Creating a Canonical Scientific and Technical Information Classification System for NCSTRL+
NASA Technical Reports Server (NTRS)
Tiffany, Melissa E.; Nelson, Michael L.
1998-01-01
The purpose of this paper is to describe the new subject classification system for the NCSTRL+ project. NCSTRL+ is a canonical digital library (DL) based on the Networked Computer Science Technical Report Library (NCSTRL). The current NCSTRL+ classification system uses the NASA Scientific and Technical Information (STI) subject classifications, which have a bias towards the aerospace, aeronautics, and engineering disciplines. Examination of other scientific and technical information classification systems showed similar discipline-centric weaknesses. Traditional, library-oriented classification systems represented all disciplines, but were too generalized to serve the needs of a scientific and technically oriented digital library. The lack of a suitable existing classification system led to the creation of a lightweight, balanced, general classification system that allows the mapping of more specialized classification schemes into the new framework. The resulting classification system gives equal weight to all STI disciplines while remaining compact and lightweight.
Wyma, John M.; Herron, Timothy J.; Yund, E. William
2016-01-01
In verbal fluency (VF) tests, subjects articulate words in a specified category during a short test period (typically 60 s). Verbal fluency tests are widely used to study language development and to evaluate memory retrieval in neuropsychiatric disorders. Performance is usually measured as the total number of correct words retrieved. Here, we describe the properties of a computerized VF (C-VF) test that tallies correct words and repetitions while providing additional lexical measures of word frequency, syllable count, and typicality. In addition, the C-VF permits (1) the analysis of the rate of responding over time, and (2) the analysis of the semantic relationships between words using a new method, Explicit Semantic Analysis (ESA), as well as the established semantic clustering and switching measures developed by Troyer et al. (1997). In Experiment 1, we gathered normative data from 180 subjects ranging in age from 18 to 82 years in semantic (“animals”) and phonemic (letter “F”) conditions. The number of words retrieved in 90 s correlated with education and daily hours of computer-use. The rate of word production declined sharply over time during both tests. In semantic conditions, correct-word scores correlated strongly with the number of ESA and Troyer-defined semantic switches as well as with an ESA-defined semantic organization index (SOI). In phonemic conditions, ESA revealed significant semantic influences in the sequence of words retrieved. In Experiment 2, we examined the test-retest reliability of different measures across three weekly tests in 40 young subjects. Different categories were used for each semantic (“animals”, “parts of the body”, and “foods”) and phonemic (letters “F”, “A”, and “S”) condition. After regressing out the influences of education and computer-use, we found that correct-word z-scores in the first session did not differ from those of the subjects in Experiment 1. 
Word production was uniformly greater in semantic than phonemic conditions. Intraclass correlation coefficients (ICCs) of correct-word z-scores were higher for phonemic (0.91) than semantic (0.77) tests. In semantic conditions, good reliability was also seen for the SOI (ICC = 0.68) and ESA-defined switches in semantic categories (ICC = 0.62). In Experiment 3, we examined the performance of subjects from Experiment 2 when instructed to malinger: 38% showed abnormal (p< 0.05) performance in semantic conditions. Simulated malingerers with abnormal scores could be distinguished with 80% sensitivity and 89% specificity from subjects with abnormal scores in Experiment 1 using lexical, temporal, and semantic measures. In Experiment 4, we tested patients with mild and severe traumatic brain injury (mTBI and sTBI). Patients with mTBI performed within the normal range, while patients with sTBI showed significant impairments in correct-word z-scores and category shifts. The lexical, temporal, and semantic measures of the C-VF provide an automated and comprehensive description of verbal fluency performance. PMID:27936001
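The Troyer-style clustering-and-switching scoring mentioned above can be sketched as follows. This is a minimal sketch: the subcategory lexicon and the rule that unlisted words form singleton clusters are illustrative assumptions, not the published scoring norms.

```python
# Sketch of Troyer-style clustering-and-switching scoring for a
# semantic fluency response list. The subcategory lexicon is a toy
# assumption; real scoring uses published animal subcategories.
SUBCATEGORY = {
    "dog": "pets", "cat": "pets", "hamster": "pets",
    "lion": "african", "zebra": "african", "giraffe": "african",
    "salmon": "fish", "trout": "fish",
}

def cluster_and_switch(words):
    """Return (number of clusters, number of switches).

    A cluster is a maximal run of consecutive words sharing a
    subcategory; a switch is a transition between clusters.
    """
    # Unknown words act as their own label, i.e. singleton clusters.
    labels = [SUBCATEGORY.get(w, w) for w in words]
    clusters = 1
    for prev, cur in zip(labels, labels[1:]):
        if cur != prev:
            clusters += 1
    return clusters, clusters - 1

print(cluster_and_switch(["dog", "cat", "lion", "zebra", "salmon"]))  # (3, 2)
```

An ESA-based variant, as used by the C-VF, would replace the lexicon lookup with a similarity threshold between consecutive words' concept vectors.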
NASA Astrophysics Data System (ADS)
Lin, Y.; Chen, X.
2016-12-01
Land cover classification systems for remote sensing image data have been developed to meet the need to depict land cover in scientific investigations and policy decisions. However, accuracy assessments of numerous data sets demonstrate that, compared with the actual land surface, the thematic map produced under any specific land cover classification system contains unavoidable flaws and unintended deviations. This work proposes a web-based land cover classification system, an integrated prototype based on an ontology model of various classification systems, each of which is assigned the same weight in the final determination of land cover type. Ontology, a formal explication of concepts and their relations, is employed in this prototype to build connections among the different systems and resolve naming conflicts. The process begins by measuring the semantic similarity between the terminologies of the systems and the search key to produce a set of matching classifications; it then searches the predefined relations among the concepts of all classification systems to generate classification maps with the user-specified land cover type highlighted, based on probabilities calculated from the votes of data sets adopting different classification systems. The system is verified and validated by comparing its classification results with those of the most common systems. By fully representing each classification system in the ontology and exploiting the accessibility of the web, this preliminary model offers a flexible and extensible architecture for classification system integration and data fusion, providing a strong foundation for future work.
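The equal-weight voting step described above might be sketched as follows. The label mappings are invented examples, and `difflib` string similarity stands in for a real ontology-based semantic similarity measure:

```python
# Minimal sketch of equal-weight voting across classification systems:
# each system maps its own best-matching native label to a common land
# cover type, and every system gets one vote. The label mappings and
# the similarity measure are illustrative assumptions.
from difflib import SequenceMatcher
from collections import Counter

SYSTEMS = {                       # system -> {native label: common type}
    "IGBP":   {"evergreen needleleaf forest": "forest", "croplands": "cropland"},
    "GLC":    {"tree cover": "forest", "cultivated areas": "cropland"},
    "CORINE": {"coniferous forest": "forest", "arable land": "cropland"},
}

def best_match(system_labels, key):
    """Pick the native label most similar to the search key."""
    return max(system_labels, key=lambda lab: SequenceMatcher(None, lab, key).ratio())

def vote(key):
    """Each system votes for the common type of its best-matching label."""
    votes = Counter(labels[best_match(labels, key)] for labels in SYSTEMS.values())
    winner, count = votes.most_common(1)[0]
    return winner, count / len(SYSTEMS)   # land cover type and its probability

print(vote("forest"))
```

A production version would substitute an ontology reasoner for `best_match` and weight each vote by the accuracy of the underlying data set, but the equal-weight majority vote is the core of the integration step.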
Speech Analysis Based On Image Information from Lip Movement
NASA Astrophysics Data System (ADS)
Talha, Kamil S.; Wan, Khairunizam; Za'ba, S. K.; Mohamad Razlan, Zuradzman; B, Shahriman A.
2013-12-01
Deaf and hard of hearing people often have problems understanding and lip reading other people. They often feel left out of conversations and are sometimes simply ignored. There are a variety of ways a hearing-impaired person can communicate and gain access to information. Communication support includes both technical and human aids. Human aids include interpreters, lip-readers and note-takers; interpreters translate sign language and must therefore be qualified. In this paper, a vision system is used to track movements of the lips. In the experiment, the proposed system successfully differentiated 11 types of phonemes and classified each into its respective viseme group. Using the proposed system, hearing-impaired persons can practise pronunciation by themselves without support from an instructor.
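The final phoneme-to-viseme classification step can be illustrated with a simple lookup. The grouping below is a generic articulatory scheme for illustration, not the exact 11-phoneme set used in the paper:

```python
# Toy sketch of mapping recognized phonemes into viseme groups
# (visually distinct mouth shapes). Several phonemes share one
# viseme, which is why lip reading alone is ambiguous.
VISEME_GROUP = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar", "s": "alveolar",
    "k": "velar", "g": "velar",
    "a": "open-vowel",
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence to its viseme-group sequence."""
    return [VISEME_GROUP.get(p, "unknown") for p in phonemes]

print(phonemes_to_visemes(["b", "a", "t"]))  # ['bilabial', 'open-vowel', 'alveolar']
```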
Towards a psychology of literacy: on the relations between speech and writing.
Olson, D R
1996-07-01
A variety of graphic systems have been developed for preserving and communicating information, among them pictures, charts, graphs, flags, tartans and hallmarks. Writing systems, which constitute a species of these graphic systems, are distinctive in that they bear a direct relation to speech; in this paper it is argued that writing serves as a model for various properties of speech, including sentences, words and, for alphabets, phonemes. On this view, the history of writing and the acquisition of literacy are less a matter of learning how to transcribe speech than of learning to hear and think about one's own language in a new way. A number of lines of evidence are advanced to support the "model" view and the conclusion that literacy contributes to conceptual structure rather than merely reporting it.
Suggate, Sebastian P
2016-01-01
Much is known about the short-term effects of reading interventions but very little about their long-term effects. To rectify this, a detailed analysis of follow-up effects as a function of intervention, sample, and methodological variables was conducted. A total of 71 intervention-control groups were selected (N = 8,161 at posttest) from studies reporting posttest and follow-up data (M = 11.17 months) for previously established reading interventions. The posttest effect sizes indicated effects (dw = 0.37) that decreased by follow-up (dw = 0.22). Overall, comprehension and phonemic awareness interventions showed good maintenance of effect that transferred to nontargeted skills, whereas phonics and fluency interventions, and those for preschool and kindergarten children, tended not to. Several methodological features also related to effect sizes at follow-up, namely experimental design and dosage, and sample attrition, risk status, and gender balance. © Hammill Institute on Disabilities 2014.
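Weighted mean effect sizes such as the dw values above are conventionally computed with inverse-variance weights. A minimal sketch follows, with toy study data (d, treatment n, control n) that are not from the review:

```python
# Sketch of an inverse-variance weighted mean effect size (d_w).
# Each study's Cohen's d is weighted by the reciprocal of its
# approximate sampling variance, so larger studies count more.
def d_variance(d, n_t, n_c):
    """Approximate sampling variance of Cohen's d for two groups."""
    return (n_t + n_c) / (n_t * n_c) + d * d / (2 * (n_t + n_c))

def weighted_mean_d(studies):
    """Inverse-variance weighted mean of study-level effect sizes."""
    weights = [1.0 / d_variance(d, nt, nc) for d, nt, nc in studies]
    return sum(w * d for w, (d, _, _) in zip(weights, studies)) / sum(weights)

# Toy data: (d, n_treatment, n_control) per intervention-control group.
studies = [(0.45, 30, 30), (0.30, 60, 60), (0.20, 100, 100)]
print(round(weighted_mean_d(studies), 3))  # ≈ 0.27
```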
[Perception features of emotional intonation of short pseudowords].
Dmitrieva, E S; Gel'man, V Ia; Zaĭtseva, K A; Orlov, A M
2012-01-01
Reaction time and recognition accuracy for emotional speech intonations in short meaningless words differing in only one phoneme were studied, with and without background noise, in 49 adults aged 20-79 years. The results were compared with the same parameters for emotional intonations in intelligible speech utterances under similar conditions. Perception of emotional intonations at different linguistic levels (phonological and lexico-semantic) was found to have both common features and certain peculiarities. The recognition characteristics of emotional intonations, as a function of listener gender and age, appeared invariant with regard to the linguistic level of the speech stimuli. The phonemic composition of the pseudowords was found to influence emotional perception, especially against background noise. Under both experimental conditions, i.e. with and without background noise, the acoustic characteristic most responsible for the perception of emotional speech prosody in short meaningless words was the variation in fundamental frequency.
Phonemic awareness of English second language learners
2017-01-01
Background Phonemic awareness (PA) skills such as phonological blending and segmentation, together with auditory word discrimination, relate directly to literacy and may be weak in English second language (EL2) learners. In South Africa, literacy skills have been found to be poor, especially in EL2 learners. Objectives The purpose of this paper is to determine the effects of vowel perception and production intervention on the PA and literacy skills of Setswana first language (L1) learners who are EL2 learners in Grade 3. Method The present study employed a quasi-experimental, pre-test–post-test design. Results The finding of low literacy skill levels concurred with previous investigations. However, post-test results suggested that the PA intervention improved the literacy skills of EL2 learners. Conclusion PA skills should be a crucial part of the literacy curriculum in South Africa. PMID:28155282
Early home-based intervention in the Netherlands for children at familial risk of dyslexia.
van Otterloo, Sandra G; van der Leij, Aryan; Henrichs, Lotte F
2009-08-01
Dutch children at higher familial risk of reading disability received a home-based intervention programme before formal reading instruction started, to investigate whether this would reduce the risk of dyslexia. The experimental group (n=23) received specific training in phoneme awareness and letter knowledge. A control group (n=25) received non-specific training in morphology, syntax, and vocabulary. Both interventions were designed to take 10 min a day, 5 days a week, for 10 weeks. Most parents were able to work with the programme properly. At post-test the experimental group had gained more on phoneme awareness than the control group; the control group gained more on one of the morphology measures. On average, these specific training results did not lead to significant group differences on first-grade reading and spelling measures. However, fewer experimental children scored below the 10th percentile on word recognition. (c) 2008 John Wiley & Sons, Ltd.
Lexical access changes in patients with multiple sclerosis: a two-year follow-up study.
Sepulcre, Jorge; Peraita, Herminia; Goni, Joaquin; Arrondo, Gonzalo; Martincorena, Inigo; Duque, Beatriz; Velez de Mendizabal, Nieves; Masdeu, Joseph C; Villoslada, Pablo
2011-02-01
The aim of the study was to analyze lexical access strategies in patients with multiple sclerosis (MS) and their changes over time. We studied lexical access strategies during semantic and phonemic verbal fluency tests and also confrontation naming in a 2-year prospective cohort of 45 MS patients and 20 healthy controls. At baseline, switching lexical access strategy (both in semantic and in phonemic verbal fluency tests) and confrontation naming were significantly impaired in MS patients compared with controls. After 2 years follow-up, switching score decreased, and cluster size increased over time in semantic verbal fluency tasks, suggesting a failure in the retrieval of lexical information rather than an impairment of the lexical pool. In conclusion, these findings underline the significant presence of lexical access problems in patients with MS and could point out their key role in the alterations of high-level communications abilities in MS.
Schefft, Bruce K; Testa, S Marc; Dulay, Mario F; Privitera, Michael D; Yeh, Hwa-Shain
2003-04-01
The present study examined the diagnostic utility of confrontation naming tasks and phonemic paraphasia production in lateralizing the epileptogenic region in patients with temporal lobe epilepsy (TLE). Further, the role of intelligence in moderating the diagnostic utility of confrontation naming tasks was assessed. Eighty patients with medically intractable complex partial seizures (40 left TLE, 40 right TLE) received the Boston Naming Test (BNT) and the Visual Naming subtest (VNT) of the Multilingual Aphasia Examination. The BNT was diagnostically more sensitive than the VNT in identifying left TLE (77.5% vs 17.5%, respectively). The utility of BNT performance and paraphasias was maximal in patients with Full Scale IQs ≥ 90; those who produced phonemic paraphasias were 6.8 times more likely to have left TLE than patients without paraphasias. Preoperative assessment of confrontation naming ability and phonemic paraphasia production using the BNT provided diagnostically useful information in lateralizing the epileptogenic region in left TLE.
Spriet, Ann; Van Deun, Lieselot; Eftaxiadis, Kyriaky; Laneau, Johan; Moonen, Marc; van Dijk, Bas; van Wieringen, Astrid; Wouters, Jan
2007-02-01
This paper evaluates the benefit of the two-microphone adaptive beamformer BEAM in the Nucleus Freedom cochlear implant (CI) system for speech understanding in background noise by CI users. A double-blind evaluation of the two-microphone adaptive beamformer BEAM and a hardware directional microphone was carried out with five adult Nucleus CI users. The test procedure consisted of a pre- and post-test in the lab and a 2-wk trial period at home. In the pre- and post-test, the speech reception threshold (SRT) with sentences and the percentage correct phoneme scores for CVC words were measured in quiet and background noise at different signal-to-noise ratios. Performance was assessed for two different noise configurations (with a single noise source and with three noise sources) and two different noise materials (stationary speech-weighted noise and multitalker babble). During the 2-wk trial period at home, the CI users evaluated the noise reduction performance in different listening conditions by means of the SSQ questionnaire. In addition to the perceptual evaluation, the noise reduction performance of the beamformer was measured physically as a function of the direction of the noise source. Significant improvements of both the SRT in noise (average improvement of 5-16 dB) and the percentage correct phoneme scores (average improvement of 10-41%) were observed with BEAM compared to the standard hardware directional microphone. In addition, the SSQ questionnaire and subjective evaluation in controlled and real-life scenarios suggested a possible preference for the beamformer in noisy environments. The evaluation demonstrates that the adaptive noise reduction algorithm BEAM in the Nucleus Freedom CI-system may significantly increase the speech perception by cochlear implantees in noisy listening conditions. This is the first monolateral (adaptive) noise reduction strategy actually implemented in a mainstream commercial CI.
42 CFR 412.620 - Patient classification system.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 2 2010-10-01 2010-10-01 false Patient classification system. 412.620 Section 412... Inpatient Rehabilitation Hospitals and Rehabilitation Units § 412.620 Patient classification system. (a) Classification methodology. (1) A patient classification system is used to classify patients in inpatient...
42 CFR 412.620 - Patient classification system.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false Patient classification system. 412.620 Section 412... Inpatient Rehabilitation Hospitals and Rehabilitation Units § 412.620 Patient classification system. (a) Classification methodology. (1) A patient classification system is used to classify patients in inpatient...
The locus of taboo context effects in picture naming.
Hansen, Samuel J; McMahon, Katie L; Burt, Jennifer S; de Zubicaray, Greig I
2016-07-20
Speakers respond more slowly when naming pictures presented with taboo (i.e., offensive/embarrassing) than with neutral distractor words in the picture-word interference paradigm. Over four experiments, we attempted to localize the processing stage at which this effect occurs during word production and determine whether it reflects the socially offensive/embarrassing nature of the stimuli. Experiment 1 demonstrated taboo interference at early stimulus onset asynchronies of -150 ms and 0 ms although not at 150 ms. In Experiment 2, taboo distractors sharing initial phonemes with target picture names eliminated the interference effect. Using additive factors logic, Experiment 3 demonstrated that taboo interference and phonological facilitation effects do not interact, indicating that the two effects originate at different processing levels within the speech production system. In Experiment 4, interference was observed for masked taboo distractors, including those sharing initial phonemes with the target picture names, indicating that the effect cannot be attributed to a processing level involving responses in an output buffer. In two of the four experiments, the magnitude of the interference effect correlated significantly with arousal ratings of the taboo words. However, no significant correlations were found for either offensiveness or valence ratings. These findings are consistent with a locus for the taboo interference effect prior to the processing stage responsible for word form encoding. We propose a pre-lexical account in which taboo distractors capture attention at the expense of target picture processing due to their high arousal levels.
Intra- and Interobserver Reliability of Three Classification Systems for Hallux Rigidus.
Dillard, Sarita; Schilero, Christina; Chiang, Sharon; Pham, Peter
2018-04-18
There are over ten classification systems currently used in the staging of hallux rigidus, which results in confusion and inconsistency in radiographic interpretation and treatment. The reliability of hallux rigidus classification systems has not yet been tested. The purpose of this study was to evaluate intra- and interobserver reliability of three commonly used classifications for hallux rigidus. Twenty-one plain radiograph sets were presented to ten ACFAS board-certified foot and ankle surgeons. Each physician classified each radiograph, based on clinical experience and knowledge, according to the Regnauld, Roukis, and Hattrup and Johnson classification systems. The two-way mixed single-measure consistency intraclass correlation was used to calculate intra- and interrater reliability. The intrarater reliability of individual sets for the Roukis and the Hattrup and Johnson classification systems was "fair to good" (Roukis, 0.62 ± 0.19; Hattrup and Johnson, 0.62 ± 0.28), whereas the intrarater reliability of individual sets for the Regnauld system bordered between "fair to good" and "poor" (0.43 ± 0.24). The interrater reliability of the mean classification was "excellent" for all three classification systems. In conclusion, reliable and reproducible classification systems are essential for treatment and prognostic implications in hallux rigidus. In our study, the Roukis classification system had the best intrarater reliability. Although there are various classification systems for hallux rigidus, our results indicate that all three of these systems show reliability and reproducibility.
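The two-way mixed, single-measure, consistency intraclass correlation used above corresponds to ICC(3,1) in the Shrout and Fleiss scheme. A minimal sketch with a toy ratings matrix (rows are radiograph sets, columns are repeated ratings; the values are invented, not study data):

```python
# Sketch of ICC(3,1): two-way mixed, single-measure, consistency.
# ICC = (MS_rows - MS_error) / (MS_rows + (k - 1) * MS_error),
# where the rater (column) effect is removed from the error term.
def icc_3_1(ratings):
    n = len(ratings)           # subjects (radiograph sets)
    k = len(ratings[0])        # raters / repeated ratings
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_rows - ss_cols   # residual after row and column effects
    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

ratings = [[1, 1], [2, 2], [3, 2], [4, 4], [2, 3]]
print(round(icc_3_1(ratings), 2))  # 0.81
```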
Park, Myoung-Ok
2017-02-01
[Purpose] The purpose of this study was to determine the effects of Gross Motor Function Classification System and Manual Ability Classification System levels on the performance-based motor skills of children with spastic cerebral palsy. [Subjects and Methods] Twenty-three children with cerebral palsy were included. The Assessment of Motor and Process Skills was used to evaluate performance-based motor skills in daily life. Gross motor function was assessed using the Gross Motor Function Classification System, and manual function was measured using the Manual Ability Classification System. [Results] Motor skills in daily activities differed significantly by Gross Motor Function Classification System level and Manual Ability Classification System level. According to the results of multiple regression analysis, children categorized as Gross Motor Function Classification System level III scored lower on performance-based motor skills than level I children. Likewise, when analyzed with respect to Manual Ability Classification System level, level II scored lower than level I, and level III lower than level II, on performance-based motor skills. [Conclusion] The results of this study indicate that performance-based motor skills differ among children with cerebral palsy categorized by Gross Motor Function Classification System and Manual Ability Classification System levels.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
...-AM78 Prevailing Rate Systems; North American Industry Classification System Based Federal Wage System... 2007 North American Industry Classification System (NAICS) codes currently used in Federal Wage System... (OPM) issued a final rule (73 FR 45853) to update the 2002 North American Industry Classification...
Zając-Lamparska, Ludmiła; Wiłkość, Monika; Markowska, Anita; Laskowska-Levy, Ilona Paulina; Wróbel, Marek; Małkowski, Bogdan
2017-08-29
Functional neuroimaging of the brain is a widely used method to study cognitive functions. The aim of this study was to compare brain activity during performance of phonemic and semantic fluency tasks with the paced-overt technique, in terms of prolonged activation of the brain. The study included 17 patients aged 20-40 years who had been treated in the past for Hodgkin's lymphoma and were now in remission. The subjects were divided into two groups by task type: nine performed the phonemic fluency task and eight the semantic. Because of the disease, all subjects underwent neuropsychological assessment; a diagnosis of any cognitive impairment was an exclusion criterion. Neuroimaging was performed using the PET technique with the 18F-fluorodeoxyglucose (FDG) tracer. Performance of a verbal fluency test, regardless of the version of the task, was associated with greater activity of the left hemisphere of the brain. The frontal lobes were the most involved of the areas of key importance for the performance of verbal fluency tasks; increased activity of parietal structures was also shown. The study did not reveal differences in brain activity depending on the type of task. Performing either test for a prolonged period, under the increased cognitive control imposed by the test procedure, could produce a marked predominance of prefrontal lobe activity in both types of task, making it impossible to observe the processes specific to each of them.
Native language shapes automatic neural processing of speech.
Intartaglia, Bastien; White-Schwoch, Travis; Meunier, Christine; Roman, Stéphane; Kraus, Nina; Schön, Daniele
2016-08-01
The development of the phoneme inventory is driven by the acoustic-phonetic properties of one's native language. Neural representation of speech is known to be shaped by language experience, as indexed by cortical responses, and recent studies suggest that subcortical processing also exhibits this attunement to native language. However, most work to date has focused on the differences between non-tonal languages and tonal languages, which use pitch variations to convey phonemic categories. The aim of this cross-language study is to determine whether subcortical encoding of speech sounds is sensitive to language experience by comparing native speakers of two non-tonal languages (French and English). We hypothesized that neural representations would be more robust and fine-grained for speech sounds belonging to the native phonemic inventory of the listener, and especially for the dimensions that are phonetically relevant to the listener, such as high-frequency components. We recorded neural responses of American English and French native speakers listening to natural syllables of both languages. Results showed that, independently of the stimulus, American participants exhibited greater neural representation of the fundamental frequency than French participants, consistent with the importance of the fundamental frequency in conveying stress patterns in English. Furthermore, participants showed more robust encoding and more precise spectral representations of the first formant when listening to syllables of their native language as compared to the non-native language. These results align with the hypothesis that language experience shapes sensory processing of speech and that this plasticity occurs as a function of what is meaningful to a listener. Copyright © 2016 Elsevier Ltd. All rights reserved.
2018-01-01
Abstract In real-world environments, humans comprehend speech by actively integrating prior knowledge (P) and expectations with sensory input. Recent studies have revealed effects of prior information in temporal and frontal cortical areas and have suggested that these effects are underpinned by enhanced encoding of speech-specific features, rather than a broad enhancement or suppression of cortical activity. However, in terms of the specific hierarchical stages of processing involved in speech comprehension, the effects of integrating bottom-up sensory responses and top-down predictions are still unclear. In addition, it is unclear whether the predictability that comes with prior information may differentially affect speech encoding relative to the perceptual enhancement that comes with that prediction. One way to investigate these issues is through examining the impact of P on indices of cortical tracking of continuous speech features. Here, we did this by presenting participants with degraded speech sentences that either were or were not preceded by a clear recording of the same sentences while recording non-invasive electroencephalography (EEG). We assessed the impact of prior information on an isolated index of cortical tracking that reflected phoneme-level processing. Our findings suggest the possibility that prior information affects the early encoding of natural speech in a dual manner. Firstly, the availability of prior information, as hypothesized, enhanced the perceived clarity of degraded speech, which was positively correlated with changes in phoneme-level encoding across subjects. In addition, P induced an overall reduction of this cortical measure, which we interpret as resulting from the increase in predictability. PMID:29662947
Autophonic Loudness of Singers in Simulated Room Acoustic Environments.
Yadav, Manuj; Cabrera, Densil
2017-05-01
This paper aims to study the effect of room acoustics and phonemes on the perception of loudness of one's own voice (autophonic loudness) for a group of trained singers. For a set of five phonemes, 20 singers vocalized over several autophonic loudness ratios, while maintaining pitch constancy over extreme voice levels, within five simulated rooms. There were statistically significant differences in the slope of the autophonic loudness function (logarithm of autophonic loudness as a function of voice sound pressure level) for the five phonemes, with slopes ranging from 1.3 (/a:/) to 2.0 (/z/). There was no significant variation in the autophonic loudness function slopes with variations in room acoustics. The autophonic room response, which represents a systematic decrease in voice levels with increasing levels of room reflections, was also studied, with some evidence found in support. Overall, the average slope of the autophonic room response for the three corner vowels (/a:/, /i:/, and /u:/) was -1.4 for medium autophonic loudness. The findings relating to the slope of the autophonic loudness function are in agreement with the findings of previous studies where the sensorimotor mechanisms in regulating voice were shown to be more important in the perception of autophonic loudness than hearing of room acoustics. However, the role of room acoustics, in terms of the autophonic room response, is shown to be more complicated, requiring further inquiry. Overall, it is shown that autophonic loudness grows at more than twice the rate of loudness growth for sounds created outside the human body. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
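The slope of the autophonic loudness function described above (logarithm of autophonic loudness as a function of voice sound pressure level) is an ordinary least-squares slope. A minimal sketch with synthetic data in which loudness doubles every 5 dB; the magnitudes and units are illustrative, not the paper's:

```python
import math

def ols_slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

# Synthetic data: autophonic loudness ratio doubling per 5 dB step.
spl = [60, 65, 70, 75, 80]        # voice level, dB
loudness = [1, 2, 4, 8, 16]       # autophonic loudness ratio
slope = ols_slope(spl, [math.log10(L) for L in loudness])
print(round(slope, 3))            # log10-loudness per dB; 0.06 for these data
```

The same fit, applied per phoneme across the vocalized levels, yields the phoneme-specific slopes the study compares.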
Lohvansuu, Kaisa; Hämäläinen, Jarmo A; Tanskanen, Annika; Ervast, Leena; Heikkinen, Elisa; Lyytinen, Heikki; Leppänen, Paavo H T
2014-12-01
Specific reading disability, dyslexia, is a prevalent and heritable disorder impairing reading acquisition and characterized by a phonological deficit. However, the underlying mechanism by which impaired phonological processing leads to dyslexia or reading disabilities remains unclear. Using ERPs we studied speech sound processing in 30 dyslexic children with familial risk for dyslexia, 51 typically reading children with familial risk for dyslexia, and 58 typically reading control children. We found enhanced brain responses to shortening of phonemic length in pseudo-words (/at:a/ vs. /ata/) in the dyslexic children with familial risk as compared to the other groups. The enhanced brain responses were associated with better performance in a behavioral phonemic length discrimination task, as well as with better reading and writing accuracy. Source analyses revealed that the brain responses of the sub-group of dyslexic children with the largest responses originated from a more posterior area of the right temporal cortex compared to the responses of the other participants. This is the first electrophysiological evidence for a possible compensatory speech perception mechanism in dyslexia. The best readers within the dyslexic group have probably developed alternative strategies employing compensatory mechanisms that substitute for a possible earlier deficit in phonological processing, and might therefore perform better in phonemic length discrimination and in reading and writing accuracy tasks. However, we speculate that compensatory mechanisms for reading fluency are not as easily built, and dyslexic children remain slow readers in adult life. Copyright © 2014 Elsevier B.V. All rights reserved.
Computer-based learning of spelling skills in children with and without dyslexia.
Kast, Monika; Baschera, Gian-Marco; Gross, Markus; Jäncke, Lutz; Meyer, Martin
2011-12-01
Our spelling training software recodes words into multisensory representations comprising visual and auditory codes. These codes represent information about the letters and syllables of a word. An enhanced version, developed for this study, contains an additional phonological code and an improved word selection controller relying on a phoneme-based student model. We investigated the spelling behavior of children by means of learning curves based on log-file data from the previous and the enhanced software versions. First, we compared the learning progress of children with dyslexia working either with the previous software (n = 28) or the adapted version (n = 37). Second, we compared the spelling behavior of children with dyslexia (n = 37) and matched children without dyslexia (n = 25). To gain deeper insight into which factors are relevant for acquiring spelling skills, we analyzed the influence of cognitive abilities, such as attention functions and verbal memory skills, on learning behavior. All investigations of the learning process are based on learning curve analyses of the collected log-file data. The results showed that children with dyslexia benefited significantly from the additional phonological cue and the corresponding phoneme-based student model. Indeed, children with dyslexia improved their spelling skills to the same extent as children without dyslexia and were able to memorize phoneme-to-grapheme correspondences when given the correct support and adequate training. In addition, children with low attention functions benefited from the structured learning environment. Generally, our data showed that memory sources are supportive cognitive functions for acquiring spelling skills and for using the information cues of a multi-modal learning environment.
Developmental Trajectory of McGurk Effect Susceptibility in Children and Adults With Amblyopia.
Narinesingh, Cindy; Goltz, Herbert C; Raashid, Rana Arham; Wong, Agnes M F
2015-03-05
The McGurk effect is an audiovisual illusion that involves the concurrent presentation of a phoneme (auditory syllable) and an incongruent viseme (visual syllable). Adults with amblyopia show less susceptibility to this illusion than visually normal controls, even when viewing binocularly. The present study investigated the developmental trajectory of McGurk effect susceptibility in adults, older children (10-17 years), and younger children (4-9 years) with amblyopia. A total of 62 participants with amblyopia (22 adults, 12 older children, 28 younger children) and 66 visually normal controls (25 adults, 17 older children, 24 younger children) viewed videos that combined phonemes and visemes, and were asked to report what they heard. Videos with congruent (auditory and visual matching) and incongruent (auditory and visual not matching) stimuli were presented. Incorrect responses on incongruent trials correspond to high McGurk effect susceptibility, indicating that the viseme influenced the phoneme. Participants with amblyopia (28.0% ± 3.3%) demonstrated a less consistent McGurk effect than visually normal controls (15.2% ± 2.3%) across all age groups (P = 0.0024). Effect susceptibility increased with age (P = 0.0003) for amblyopic participants and controls. Both groups showed a similar response pattern to different speakers and syllables, but amblyopic participants invariably demonstrated a less consistent effect. Amblyopia is associated with reduced McGurk effect susceptibility in children and adults. Our findings indicate that the differences do not simply indicate delayed development in children with amblyopia; rather, they represent permanent alterations that persist into adulthood. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
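The scoring logic described above (incorrect responses on incongruent trials index susceptibility) can be sketched in a few lines. This is an illustrative reconstruction, not the study's code; the trial data and syllable labels below are hypothetical.

```python
# Illustrative sketch (not the authors' code): score McGurk susceptibility as
# the percentage of incongruent trials on which the reported syllable differs
# from the auditory token, i.e. the viseme altered what was "heard".

def mcgurk_susceptibility(trials):
    """trials: list of dicts with 'auditory', 'visual', 'response' syllables.
    Returns percent of incongruent trials with a non-auditory response."""
    incongruent = [t for t in trials if t["auditory"] != t["visual"]]
    if not incongruent:
        return 0.0
    fused = sum(1 for t in incongruent if t["response"] != t["auditory"])
    return 100.0 * fused / len(incongruent)

# Hypothetical example: classic /ba/ (audio) + /ga/ (video) often yields "da".
trials = [
    {"auditory": "ba", "visual": "ga", "response": "da"},  # fusion
    {"auditory": "ba", "visual": "ga", "response": "ba"},  # heard the audio
    {"auditory": "ba", "visual": "ba", "response": "ba"},  # congruent control
    {"auditory": "pa", "visual": "ka", "response": "ta"},  # fusion
]
print(mcgurk_susceptibility(trials))  # 3 incongruent trials, 2 fused
```

On this scheme, a lower percentage (as reported for the amblyopic group) means the visual syllable changed the percept less often.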
An Investigation of Luria's Hypothesis on Prompting in Aphasic Naming Disturbances.
ERIC Educational Resources Information Center
Li, Edith Chin; Canter, Gerald J.
1987-01-01
The study investigated A. R. Luria's hypothesis that aphasic subgroups (Broca's, conduction, Wernicke's, and anomic aphasics) would respond differentially to phonemic prompts. Results, with the exception of the anomic aphasic group, supported Luria's predictions. (Author/DB)
Seeto, Angeline; Searchfield, Grant D
2018-03-01
Advances in digital signal processing have made it possible to provide a wide-band frequency response with smooth, precise spectral shaping. Several manufacturers have introduced hearing aids claimed to provide gain for frequencies up to 10-12 kHz. However, there is currently limited evidence and very few independent studies evaluating the performance of the extended bandwidth hearing aids that have recently become available. This study investigated an extended bandwidth hearing aid using measures of speech intelligibility and sound quality to determine whether there was a significant benefit of extended bandwidth amplification over standard amplification. A repeated-measures design was used to examine the efficacy of extended bandwidth amplification compared with standard bandwidth amplification. Sixteen adult participants with mild-to-moderate sensorineural hearing loss were bilaterally fit with a pair of Widex Mind 440 behind-the-ear hearing aids programmed with a standard bandwidth fitting and an extended bandwidth fitting; the latter provided gain up to 10 kHz. For each fitting, and for an unaided condition, participants completed two speech measures of aided benefit, the Quick Speech-in-Noise test (QuickSIN™) and the Phonak Phoneme Perception Test (PPT; high-frequency perception in quiet), and a sound quality rating. There were no significant differences between the unaided and aided conditions for QuickSIN™ scores. For the PPT, detection thresholds at high frequencies (6 and 9 kHz) were statistically significantly lower (improved) with the extended bandwidth fitting. Although not statistically significant, participants were able to distinguish between 6 and 9 kHz 50% better with the extended bandwidth fitting. No significant difference was found in the ability to recognize phonemes in quiet between the unaided and aided conditions when the phonemes only contained frequency content below 6 kHz. 
However, significant benefit was found with the extended bandwidth fitting for recognition of 9-kHz phonemes. No significant difference in sound quality preference was found between the standard bandwidth and extended bandwidth fittings. This study demonstrated that a pair of currently available extended bandwidth hearing aids was technically capable of delivering high-frequency amplification that was both audible and usable for listeners with mild-to-moderate hearing loss. This amplification was of acceptable sound quality. Further research, particularly field trials, is required to ascertain the real-world benefit of high-frequency amplification. American Academy of Audiology
Colé, Pascale; Cavalli, Eddy; Duncan, Lynne G.; Theurel, Anne; Gentaz, Edouard; Sprenger-Charolles, Liliane; El-Ahmadi, Abdessadek
2018-01-01
Children from low-SES families are known to show delays in aspects of language development which underpin reading acquisition such as vocabulary and listening comprehension. Research on the development of morphological skills in this group is scarce, and no studies exist in French. The present study investigated the involvement of morphological knowledge in the very early stages of reading acquisition (decoding), before reading comprehension can be reliably assessed. We assessed listening comprehension, receptive vocabulary, phoneme awareness, morphological awareness as well as decoding, word reading and non-verbal IQ in 703 French first-graders from low-SES families after 3 months of formal schooling (November). Awareness of derivational morphology was assessed using three oral tasks: Relationship Judgment (e.g., do these words belong to the same family or not? heat-heater … ham-hammer); Lexical Sentence Completion [e.g., Someone who runs is a …? (runner)]; and Non-lexical Sentence Completion [e.g., Someone who lums is a…? (lummer)]. The tasks differ on implicit/explicit demands and also tap different kinds of morphological knowledge. The Judgment task measures the phonological and semantic properties of the morphological relationship and the Sentence Completion tasks measure knowledge of morphological production rules. Data were processed using a graphical modeling approach which offers key information about how skills known to be involved in learning to read are organized in memory. This modeling approach was therefore useful in revealing a potential network which expresses the conditional dependence structure between skills, after which recursive structural equation modeling was applied to test specific hypotheses. 
Six main conclusions can be drawn from these analyses about low SES reading acquisition: (1) listening comprehension is at the heart of the reading acquisition process; (2) word reading depends directly on phonemic awareness and indirectly on listening comprehension; (3) decoding depends on word reading; (4) morphological awareness and vocabulary have an indirect influence on word reading via both listening comprehension and phoneme awareness; (5) the components of morphological awareness assessed by our tasks have independent relationships with listening comprehension; and (6) neither phonemic nor morphological awareness influences vocabulary directly. The implications of these results with regard to early reading acquisition among low SES groups are discussed. PMID:29725313
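The "conditional dependence structure" that the graphical-modelling step estimates can be illustrated with a first-order partial correlation: two skills are conditionally dependent (an edge in the graph) only if their correlation survives controlling for a third skill. The correlations below are hypothetical, and this is my own minimal sketch, not the study's actual pipeline.

```python
import math

# Hedged illustration of conditional dependence between reading-related skills:
# a first-order partial correlation asks whether two skills (e.g. word reading
# and vocabulary) remain related once a third (e.g. listening comprehension)
# is held constant. Zero partial correlation -> no direct edge in the graph.

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation r_xy.z from pairwise correlations."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical correlations: reading-vocabulary .45, each correlating .60 with
# listening comprehension. Controlling for comprehension, the direct
# reading-vocabulary link shrinks sharply, suggesting an indirect pathway.
print(round(partial_corr(0.45, 0.60, 0.60), 3))
```

This mirrors the paper's conclusion style (e.g. vocabulary influencing word reading only indirectly), though the real analysis involved many more variables and recursive structural equation modeling.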
Covert contrast: The acquisition of Mandarin tone 2 and tone 3 in L2 production and perception
NASA Astrophysics Data System (ADS)
Mar, Li-Ya
This dissertation investigates the occurrence of an intermediate stage, termed a covert contrast, in the acquisition of Mandarin Tone 2 (T2) and Tone 3 (T3) by adult speakers of American English. A covert contrast is a statistically reliable distinction produced by language learners that is not perceived by native speakers of the target language (TL). In second language (L2) acquisition, whether a learner is judged to have acquired a TL phonemic contrast has largely depended on whether the contrast was perceived and transcribed by native speakers of the TL. However, research on categorical perception has shown that native listeners cannot perceive a distinction between two sounds that fall within the same perceptual boundaries on the continuum of the relevant acoustic cues. In other words, native speakers of the TL may fail to perceive a phonemic distinction produced by L2 learners when that distinction falls within a single phonemic category of the TL. The data for the study were gathered through two elicitations of tone production, a longitudinal analysis, and two perception tasks. There were three key findings. First, both elicitations showed that most of the L2 participants produced a covert contrast between T2 and T3 on at least one of the three acoustic measures used in the study. Second, the longitudinal analysis revealed that some L2 participants progressed from making a covert contrast to a later stage of implementing an overt one, supporting the claim that making a covert contrast is an intermediate stage in the process of acquiring an L2 phonemic contrast. Third, results of the perception tasks showed no reliable difference in identifying and discriminating Mandarin T2 and T3 between the L2 learners who produced a covert contrast and those who produced an overt contrast, indicating no reliable difference in the two groups' ability to perceive the target tones. 
In all, the occurrence of a covert contrast in the process of acquiring Mandarin T2 and T3 suggests that L2 acquisition of a tonal contrast is a gradient process, one in which an intermediate step occurs before an L2 learner reaches the final stage of implementing an overt contrast that is perceived as target-like by native speakers of the TL.
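Operationally, a covert contrast is diagnosed when two tone categories differ reliably on an acoustic measure in production while native identification stays near chance. A minimal sketch of that test, with made-up acoustic values (the dissertation's actual measures and data are not given here):

```python
from statistics import mean, variance

# Illustrative covert-contrast check (hypothetical data): a learner's T2 and T3
# productions differ reliably on an acoustic measure (here an invented F0-slope
# value) even though native listeners identify the tokens at near-chance level.

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

t2_slopes = [0.12, 0.15, 0.11, 0.14, 0.13]  # rising-ish T2 productions
t3_slopes = [0.05, 0.07, 0.04, 0.06, 0.05]  # flatter T3 productions

t = welch_t(t2_slopes, t3_slopes)
native_id_accuracy = 0.52  # natives near chance at labeling the tokens

# Large |t| plus near-chance native identification is the covert-contrast
# signature: reliable in production, invisible in perception.
print(abs(t) > 2.0 and abs(native_id_accuracy - 0.5) < 0.1)
```

A full analysis would of course use proper degrees of freedom and significance thresholds; the sketch only shows the two-sided logic of the diagnosis.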
Blanco, Cynthia P.; Bannard, Colin; Smiljanic, Rajka
2016-01-01
Early bilinguals often show as much sensitivity to L2-specific contrasts as monolingual speakers of the L2, but most work on cross-language speech perception has focused on isolated segments, and typically only on neighboring vowels or stop contrasts. In tasks that include sounds in context, listeners’ success is more variable, so segment discrimination in isolation may not adequately represent the phonetic detail in stored representations. The current study explores the relationship between language experience and sensitivity to segmental cues in context by comparing the categorization patterns of monolingual English listeners and early and late Spanish–English bilinguals. Participants categorized nonce words containing different classes of English- and Spanish-specific sounds as being more English-like or more Spanish-like; target segments included phonemic cues, cues for which there is no analogous sound in the other language, or phonetic cues, cues for which English and Spanish share the category but for which each language varies in its phonetic implementation. Listeners’ language categorization accuracy and reaction times were analyzed. Our results reveal a largely uniform categorization pattern across listener groups: Spanish cues were categorized more accurately than English cues, and phonemic cues were easier for listeners to categorize than phonetic cues. There were no differences in the sensitivity of monolinguals and early bilinguals to language-specific cues, suggesting that the early bilinguals’ exposure to Spanish did not fundamentally change their representations of English phonology. However, neither did the early bilinguals show more sensitivity than the monolinguals to Spanish sounds. The late bilinguals, however, were significantly more accurate than either of the other groups. These findings indicate that listeners with varying exposure to English and Spanish are able to use language-specific cues in a nonce-word language categorization task. 
Differences in how, and not only when, a language was acquired may influence listener sensitivity to more difficult cues, and the advantage for phonemic cues may reflect the greater salience of categories unique to each language. Implications for foreign-accent categorization and cross-language speech perception are discussed, and future directions are outlined to better understand how salience varies across language-specific phonemic and phonetic cues. PMID:27445947
Strudwick, Gillian; Hardiker, Nicholas R
2016-10-01
In the era of evidence-based healthcare, nursing is required to demonstrate that care provided by nurses is associated with optimal patient outcomes and a high degree of quality and safety. The use of standardized nursing terminologies and classification systems is one way that nursing documentation can be leveraged to generate evidence related to nursing practice. Several widely reported nursing-specific terminologies and classification systems currently exist, including the Clinical Care Classification System, International Classification for Nursing Practice(®), Nursing Intervention Classification, Nursing Outcome Classification, Omaha System, Perioperative Nursing Data Set and NANDA International. However, the influence of these systems on demonstrating the value of nursing and the profession's impact on quality, safety and patient outcomes in published research is relatively unknown. This paper seeks to understand the use of standardized nursing terminology and classification systems in published research, using the International Classification for Nursing Practice(®) as a case study. A systematic review of internationally published empirical studies on, or using, the International Classification for Nursing Practice(®) was completed using Medline and the Cumulative Index for Nursing and Allied Health Literature. Since 2006, 38 studies have been published on the International Classification for Nursing Practice(®). The main objectives of the published studies have been to validate the appropriateness of the classification system for particular care areas or populations, further develop the classification system, or utilize it to support the generation of new nursing knowledge. To date, most studies have focused on the classification system itself, and a lesser number of studies have used the system to generate information about the outcomes of nursing practice. 
Based on the published literature featuring the International Classification for Nursing Practice, standardized nursing terminology and classification systems appear to be well developed for various populations and settings and to harmonize with other health-related terminology systems. However, the use of these systems to generate new nursing knowledge, and to validate nursing practice, is still in its infancy. There is an opportunity now to utilize the well-developed systems in their current state to further what is known about nursing practice, and how best to demonstrate improvements in patient outcomes through nursing care. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Alternative Speech Communication System for Persons with Severe Speech Disorders
NASA Astrophysics Data System (ADS)
Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas
2009-12-01
Assistive speech-enabled systems are proposed to help both French- and English-speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim to improve the intelligibility of pathologic speech, making it as natural as possible and close to the speaker's original voice. The resynthesized utterances use new basic units, a new concatenation algorithm and a grafting technique to correct poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) were carried out to demonstrate the efficiency of the proposed methods. Improvements in the Perceptual Evaluation of Speech Quality (PESQ) value of 5% and of more than 20% were achieved by the speech synthesis systems dealing with SSDs and dysarthria, respectively.
Dennis L. Mengel; D. Thompson Tew [Editors]
1991-01-01
Eighteen papers representing four categories (Regional Overviews; Classification System Development; Classification System Interpretation; Mapping/GIS Applications in Classification Systems) present the state of the art in forest-land classification and evaluation in the South. In addition, nine poster papers are presented.
42 CFR 412.513 - Patient classification system.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 2 2010-10-01 2010-10-01 false Patient classification system. 412.513 Section 412... Long-Term Care Hospitals § 412.513 Patient classification system. (a) Classification methodology. CMS...-DRGs. (1) The classification of a particular discharge is based, as appropriate, on the patient's age...
42 CFR 412.513 - Patient classification system.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false Patient classification system. 412.513 Section 412... Long-Term Care Hospitals § 412.513 Patient classification system. (a) Classification methodology. CMS...-DRGs. (1) The classification of a particular discharge is based, as appropriate, on the patient's age...
Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)
NASA Astrophysics Data System (ADS)
Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto
An automatic speech-to-text transformer system, suited to unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a prior stage of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for the input sequence. Pronunciation differences among some regions of Brazil are considered, but only those that cause differences in phonological transcription, because those at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all possible written words are analyzed from an orthographic and grammatical point of view to eliminate the incorrect ones.
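The "ordered by a probabilistic criterion" step can be pictured as scoring every grapheme sequence a phoneme string could spell. The mapping table and probabilities below are toy values of my own, not the paper's actual lexicon or weights:

```python
from itertools import product

# Hypothetical sketch of a phoneme-to-grapheme ranking step: each phoneme maps
# to candidate graphemes with probabilities, candidate spellings are scored by
# the product of per-phoneme probabilities, and (in a full system) a lexicon
# and grammar pass would then discard non-words. Toy table, invented values.
P2G = {
    "k": [("c", 0.7), ("qu", 0.3)],
    "a": [("a", 1.0)],
    "z": [("s", 0.6), ("z", 0.4)],  # intervocalic "s" reads as /z/ in Portuguese
}

def candidates(phonemes):
    """All spellings for a phoneme sequence, ranked by descending probability."""
    options = [P2G[p] for p in phonemes]
    scored = []
    for combo in product(*options):
        spelling = "".join(g for g, _ in combo)
        prob = 1.0
        for _, p in combo:
            prob *= p
        scored.append((spelling, prob))
    return sorted(scored, key=lambda sp: -sp[1])

print(candidates(["k", "a", "z", "a"]))
# ranks "casa" first, ahead of "caza", "quasa", "quaza"
```

The real system additionally folds in orthographic and grammatical knowledge sources; this sketch only shows the enumeration-and-ranking skeleton.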
Berlth, Felix; Bollschweiler, Elfriede; Drebber, Uta; Hoelscher, Arnulf H; Moenig, Stefan
2014-01-01
Several pathohistological classification systems exist for the diagnosis of gastric cancer. Many studies have investigated the correlation between the pathohistological characteristics in gastric cancer and patient characteristics, disease specific criteria and overall outcome. It is still controversial as to which classification system imparts the most reliable information, and therefore, the choice of system may vary in clinical routine. In addition to the most common classification systems, such as the Laurén and the World Health Organization (WHO) classifications, other authors have tried to characterize and classify gastric cancer based on the microscopic morphology and in reference to the clinical outcome of the patients. In more than 50 years of systematic classification of the pathohistological characteristics of gastric cancer, there is no sole classification system that is consistently used worldwide in diagnostics and research. However, several national guidelines for the treatment of gastric cancer refer to the Laurén or the WHO classifications regarding therapeutic decision-making, which underlines the importance of a reliable classification system for gastric cancer. The latest results from gastric cancer studies indicate that it might be useful to integrate DNA- and RNA-based features of gastric cancer into the classification systems to establish prognostic relevance. This article reviews the diagnostic relevance and the prognostic value of different pathohistological classification systems in gastric cancer. PMID:24914328
Teaching the Alphabet to Young Children.
ERIC Educational Resources Information Center
Wasik, Barbara A.
2001-01-01
Clarifies issues surrounding teaching of the alphabet to preschoolers. Considers the meaning of "teaching" and examines links between letter knowledge, phonemic awareness, and learning to read. Presents suggestions for teaching the alphabet within developmentally appropriate practice guidelines, including beginning with the familiar, creating a…
ERIC Educational Resources Information Center
Magner, Thomas F., Ed.; Schmalstieg, William R., Ed.
The 20 papers in this collection are: "The Dative of Subordination in Baltic and Slavic"--H. Andersen; "The Vocalic Phonemes of the Old Prussian Elbing Vocabulary"--M.L. Burwell; "The Nominative Plural and Preterit Singular of the Active Participles in Baltic"--W. Cowgill; "The State of Linguistics in Soviet…
Waring, R; Knight, R
2013-01-01
Children with speech sound disorders (SSD) form a heterogeneous group who differ in terms of the severity of their condition, underlying cause, speech errors, involvement of other aspects of the linguistic system and treatment response. To date there is no universal, agreed-upon classification system. Instead, a number of theoretically differing classification systems have been proposed, based on either an aetiological (medical) approach, a descriptive-linguistic approach or a processing approach. To describe and review the supporting evidence, and to provide a critical evaluation of the current childhood SSD classification systems. Descriptions of the major specific approaches to classification are reviewed and research papers supporting the reliability and validity of the systems are evaluated. Three specific paediatric SSD classification systems are identified as potentially useful in classifying children with SSD into homogeneous subgroups: the aetiology-based Speech Disorders Classification System, the descriptive-linguistic Differential Diagnosis system, and the processing-based Psycholinguistic Framework. The Differential Diagnosis system has a growing body of empirical support from clinical population studies, cross-language error pattern studies and treatment efficacy studies. The Speech Disorders Classification System is currently a research tool with eight proposed subgroups. The Psycholinguistic Framework is a potential bridge linking cause and surface-level speech errors. There is a need for a universally agreed-upon classification system that is useful to clinicians and researchers. The resulting classification system needs to be robust, reliable and valid. A universal classification system would allow for improved tailoring of treatments to subgroups of SSD which may, in turn, lead to improved treatment efficacy. © 2012 Royal College of Speech and Language Therapists.
5 CFR 9701.231 - Conversion of positions and employees to the DHS classification system.
Code of Federal Regulations, 2011 CFR
2011-01-01
... the DHS classification system. 9701.231 Section 9701.231 Administrative Personnel DEPARTMENT OF... MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Classification Transitional Provisions § 9701.231 Conversion of positions and employees to the DHS classification system. (a) This...
Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception
Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.
2011-01-01
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
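The contrast the paper draws between continuous and categorical cue weighting can be condensed into one formula: in the normative model, each cue's weight is proportional to its inverse variance, and for categorical tasks the within-category (environmental) variance adds to each cue's sensory variance. The code below is my own notational sketch of that idea, not the authors' model implementation, and the variances are illustrative.

```python
# Minimal sketch of normative cue weighting. For continuous dimensions, cue
# weights are proportional to inverse sensory variance; for categorical tasks
# the task-relevant category's environmental variance adds to each cue's
# effective variance, pulling the weights toward equality.

def cue_weights(var_a, var_v, var_cat=0.0):
    """Relative (audio, visual) weights; var_cat is within-category variance."""
    inv_a = 1.0 / (var_a + var_cat)
    inv_v = 1.0 / (var_v + var_cat)
    total = inv_a + inv_v
    return inv_a / total, inv_v / total

print(cue_weights(1.0, 4.0))        # sensory variance only: audio dominates
print(cue_weights(1.0, 4.0, 10.0))  # large category variance: weights converge
```

This captures why, in the categorical task, optimal weights depend on more than single-cue sensory reliability: a noisy category inflates both cues' effective variances and flattens the weighting.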
Savill, Nicola; Ellis, Andrew W; Jefferies, Elizabeth
2017-04-01
Verbal short-term memory (STM) is a crucial cognitive function central to language learning, comprehension and reasoning, yet the processes that underlie this capacity are not fully understood. In particular, although STM primarily draws on a phonological code, interactions between long-term phonological and semantic representations might help to stabilise the phonological trace for words ("semantic binding hypothesis"). This idea was first proposed to explain the frequent phoneme recombination errors made by patients with semantic dementia when recalling words that are no longer fully understood. However, converging evidence in support of semantic binding is scant: it is unusual for studies of healthy participants to examine serial recall at the phoneme level and also it is difficult to separate the contribution of phonological-lexical knowledge from effects of word meaning. We used a new method to disentangle these influences in healthy individuals by training new 'words' with or without associated semantic information. We examined phonological coherence in immediate serial recall (ISR), both immediately and the day after training. Trained items were more likely to be recalled than novel nonwords, confirming the importance of phonological-lexical knowledge, and items with semantic associations were also produced more accurately than those with no meaning, at both time points. For semantically-trained items, there were fewer phoneme ordering and identity errors, and consequently more complete target items were produced in both correct and incorrect list positions. These data show that lexical-semantic knowledge improves the robustness of verbal STM at the sub-item level, even when the effect of phonological familiarity is taken into account. Copyright © 2016 Elsevier Ltd. All rights reserved.
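The distinction between phoneme ordering and identity errors in serial recall can be illustrated with a simple coding scheme. This is a deliberately simplified scheme of my own for exposition, not the study's exact error-coding procedure:

```python
from collections import Counter

# Illustrative scoring of the two phoneme-level error types: an "order" error
# is a target phoneme produced in the wrong slot, while an "identity" error
# introduces a phoneme absent from the target item altogether.

def phoneme_errors(target, response):
    """Counts of (order, identity) errors for one recalled item."""
    order = identity = 0
    inventory = Counter(target)
    for pos, ph in enumerate(response):
        if pos < len(target) and ph == target[pos]:
            continue  # correct phoneme in the correct slot
        if inventory[ph] > 0:
            order += 1      # a target phoneme that migrated to the wrong slot
        else:
            identity += 1   # an intruding phoneme not in the target at all
    return order, identity

print(phoneme_errors(list("kæt"), list("tæk")))  # (2, 0): migrations only
print(phoneme_errors(list("kæt"), list("kæg")))  # (0, 1): one intrusion
```

Under the semantic binding hypothesis, items with meaning should show fewer of both error types, which is the pattern the study reports for semantically trained items.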
Tyson-Parry, Maree M; Sailah, Jessica; Boyes, Mark E; Badcock, Nicholas A
2015-10-01
This research investigated the relationship between the attentional blink (AB) and reading in typical adults. The AB is a deficit in the processing of the second of two rapidly presented targets when it occurs in close temporal proximity to the first target. Specifically, this experiment examined whether the AB was related to both phonological and sight-word reading abilities, and whether the relationship was mediated by accuracy on a single-target rapid serial visual presentation task (single-target accuracy). Undergraduate university students completed a battery of tests measuring reading ability, non-verbal intelligence, and rapid automatised naming, in addition to rapid serial visual presentation tasks in which they were required to identify either two (AB task) or one (single-target task) targets (outlined shapes: circle, square, diamond, cross, and triangle) in a stream of random-dot distractors. The duration of the AB was related to phonological reading (n=41, β=-0.43): participants who exhibited longer ABs had poorer phonemic decoding skills. The AB was not related to sight-word reading. Single-target accuracy did not mediate the relationship between the AB and reading, but was significantly related to AB depth (non-linear fit, R(2)=.50), where depth reflects the maximal cost in T2 reporting accuracy in the AB. The differential relationship between the AB and phonological versus sight-word reading implicates common resources used for phonemic decoding and target consolidation, which may be involved in cognitive control. The relationship between single-target accuracy and the AB is discussed in terms of cognitive preparation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Segal, Osnat; Hejli-Assi, Saja; Kishon-Rabin, Liat
2016-02-01
Infant speech discrimination can follow multiple trajectories depending on the language and the specific phonemes involved. Two understudied languages in terms of the development of infants' speech discrimination are Arabic and Hebrew. The purpose of the present study was to examine the influence of listening experience with the native language on the discrimination of the voicing contrast /ba-pa/ in Arabic-learning infants, whose native language includes only the phoneme /b/, and in Hebrew-learning infants, whose native language includes both phonemes. A total of 128 Arabic-learning and Hebrew-learning infants, aged 4 to 6 months or 10 to 12 months, were tested with the Visual Habituation Procedure. The results showed that 4-to-6-month-old infants discriminated between /ba-pa/ regardless of their native language and order of presentation. However, only 10-to-12-month-old infants learning Hebrew retained this ability; 10-to-12-month-old infants learning Arabic did not discriminate the change from /ba/ to /pa/ but showed a tendency to discriminate the change from /pa/ to /ba/. This is the first study to report on the reduced discrimination of /ba-pa/ in older infants learning Arabic. Our findings are consistent with the notion that experience with the native language changes discrimination abilities and alters sensitivity to non-native contrasts, thus providing evidence for 'top-down' processing in young infants. The directional asymmetry in older infants learning Arabic can be explained by assimilation of the non-native consonant /p/ to the native Arabic category /b/, as predicted by current speech perception models. Copyright © 2015 Elsevier Inc. All rights reserved.
Lexical restructuring in the absence of literacy.
Ventura, Paulo; Kolinsky, Régine; Fernandes, Sandra; Querido, Luís; Morais, José
2007-11-01
Vocabulary growth was suggested to prompt the implementation of increasingly finer-grained lexical representations of spoken words in children (e.g., [Metsala, J. L., & Walley, A. C. (1998). Spoken vocabulary growth and the segmental restructuring of lexical representations: precursors to phonemic awareness and early reading ability. In J. L. Metsala & L. C. Ehri (Eds.), Word recognition in beginning literacy (pp. 89-120). Hillsdale, NJ: Erlbaum.]). Although literacy was not explicitly mentioned in this lexical restructuring hypothesis, the process of learning to read and spell might also have a significant impact on the specification of lexical representations (e.g., [Carroll, J. M., & Snowling, M. J. (2001). The effects of global similarity between stimuli on children's judgments of rime and alliteration. Applied Psycholinguistics, 22, 327-342.]; [Goswami, U. (2000). Phonological representations, reading development and dyslexia: Towards a cross-linguistic theoretical framework. Dyslexia, 6, 133-151.]). The present study tested this possibility. We manipulated word frequency and neighborhood density in a gating task (Experiment 1) and a word-identification-in-noise task (Experiment 2) presented to Portuguese literate and illiterate adults. Ex-illiterates were also tested in Experiment 2 in order to disentangle the effects of vocabulary size and literacy. There was an interaction between word frequency and neighborhood density, which was similar in the three groups. The groups did not differ even for the words that are supposed to undergo lexical restructuring the latest (low-frequency words from sparse neighborhoods). Thus, segmental lexical representations seem to develop independently of literacy. While segmental restructuring is not affected by literacy, it constrains the development of phoneme awareness, as shown by the fact that, in Experiment 3, neighborhood density modulated the phoneme deletion performance of both illiterates and ex-illiterates.
Semantic and Phonological Encoding Times in Adults Who Stutter: Brain Electrophysiological Evidence.
Maxfield, Nathan D
2017-10-17
Some psycholinguistic theories of stuttering propose that language production operates along a different time course in adults who stutter (AWS) versus typically fluent adults (TFA). However, behavioral evidence for such a difference has been mixed. Here, the time course of semantic and phonological encoding in picture naming was compared in AWS (n = 16) versus TFA (n = 16) by measuring 2 event-related potential (ERP) components: NoGo N200, an ERP index of response inhibition, and lateralized readiness potential, an ERP index of response preparation. Each trial required a semantic judgment about a picture in addition to a phonemic judgment about the target label of the picture. Judgments were mapped onto a dual-choice (Go-NoGo/left-right) push-button response paradigm. On each trial, ERP activity time-locked to picture onset was recorded at 32 scalp electrodes. NoGo N200 was detected earlier to semantic NoGo trials than to phonemic NoGo trials in both groups, replicating previous evidence that semantic encoding generally precedes phonological encoding in language production. Moreover, N200 onset was earlier to semantic NoGo trials in TFA than in AWS, indicating that semantic information triggering response inhibition became available earlier in TFA versus AWS. In contrast, the time course of N200 activity to phonemic NoGo trials did not differ between groups. Lateralized readiness potential activity was influenced by strategic response preparation and, thus, could not be used to index real-time semantic and phonological encoding. NoGo N200 results point to slowed semantic encoding in AWS versus TFA. Discussion considers possible factors in slowed semantic encoding in AWS and how fluency might be impacted by slowed semantic encoding.
Small wins big: analytic pinyin skills promote Chinese word reading.
Lin, Dan; McBride-Chang, Catherine; Shu, Hua; Zhang, Yuping; Li, Hong; Zhang, Juan; Aram, Dorit; Levin, Iris
2010-08-01
The present study examined invented spelling of pinyin (a phonological coding system for teaching and learning Chinese words) in relation to subsequent Chinese reading development. Among 296 Chinese kindergartners in Beijing, independent invented pinyin spelling was found to be uniquely predictive of Chinese word reading 12 months later, even when Time 1 syllable deletion, phoneme deletion, and letter knowledge, as well as the autoregressive effect of Time 1 Chinese word reading, were statistically controlled. These results underscore the importance of children's early pinyin representations for Chinese reading acquisition, both theoretically and practically. The findings further support the idea of a universal phonological principle and indicate that pinyin is potentially an ideal measure of phonological awareness in Chinese.
Waltho, Daniel; Hatchell, Alexandra; Thoma, Achilleas
2017-03-01
Gynecomastia is a common deformity of the male breast, where certain cases warrant surgical management. There are several surgical options, which vary depending on the breast characteristics. To guide surgical management, several classification systems for gynecomastia have been proposed. A systematic review was performed to (1) identify all classification systems for the surgical management of gynecomastia, and (2) determine the adequacy of these classification systems to appropriately categorize the condition for surgical decision-making. The search yielded 1012 articles, and 11 articles were included in the review. Eleven classification systems in total were ascertained, and a total of 10 unique features were identified: (1) breast size, (2) skin redundancy, (3) breast ptosis, (4) tissue predominance, (5) upper abdominal laxity, (6) breast tuberosity, (7) nipple malposition, (8) chest shape, (9) absence of sternal notch, and (10) breast skin elasticity. On average, classification systems included two or three of these features. Breast size and ptosis were the most commonly included features. Based on their review of the current classification systems, the authors believe the ideal classification system should be universal and cater to all causes of gynecomastia; be surgically useful and easy to use; and should include a comprehensive set of clinically appropriate patient-related features, such as breast size, breast ptosis, tissue predominance, and skin redundancy. None of the current classification systems appears to fulfill these criteria.
ERIC Educational Resources Information Center
Ballard, W.L.
1968-01-01
The article discusses models of synchronic and diachronic phonology and suggests changes in them. The basic generative model of phonology is outlined with the author's reinterpretations. The systematic phonemic level is questioned in terms of its unreality with respect to linguistic performance and its lack of validity with respect to historical…
ERIC Educational Resources Information Center
Burnham, Jacki; Discher, Stephanie; Ingle, Krista
This brief paper describes the Circle of Collaboration approach at one elementary school in Utah that is focusing on development of an inclusive school for all students and implementation of a program (Balanced Literacy) to enhance students' reading skills. Balanced Literacy incorporates phonemic awareness, phonics instruction, fluency, vocabulary,…
Russian Orthography and Learning to Read
ERIC Educational Resources Information Center
Kerek, Eugenia; Niemi, Pekka
2009-01-01
The unique structure of Russian orthography may influence the organization and acquisition of reading skills in Russian. The present review examines phonemic-graphemic correspondences in Russian orthography and discusses its grain-size units and possible difficulties for beginning readers and writers. Russian orthography is governed by a…
Linking Music Learning to Reading Instruction.
ERIC Educational Resources Information Center
Hansen, Dee; Bernstorf, Elaine
2002-01-01
Focuses on the connections between music reading and text reading to explore the influences of music education on children's reading skills. Addresses topics, such as phonological and phonemic awareness, sight identification, orthographic awareness, and fluency. Discusses research that addresses the influence of music on reading. (CMK)
Teaching and Testing Early Reading. Focus On
ERIC Educational Resources Information Center
Mraz, Maryann; Kissel, Brian
2007-01-01
This issue of "Focus On" provides an overview of several key early literacy components: phonemic awareness, alphabet knowledge, concepts of print, oral language development, writing, family literacy, and reading aloud. Suggestions for assessing early literacy development are provided, and examples of implementation of effective early literacy…
Spelling Strategies: Take Stock of Students' Spelling Progress.
ERIC Educational Resources Information Center
Gentry, J. Richard
1998-01-01
Presents four informal assessments that elementary teachers can use to compare students' spelling progress to typical midyear benchmarks. The assessments, which target K-6 students, emphasize alphabet recognition, phonemic awareness, attitude and consciousness about spelling, and spelling growth through writing samples. The paper includes a…
Prosodic Phonological Representations Early in Visual Word Recognition
ERIC Educational Resources Information Center
Ashby, Jane; Martin, Andrea E.
2008-01-01
Two experiments examined the nature of the phonological representations used during visual word recognition. We tested whether a minimality constraint (R. Frost, 1998) limits the complexity of early representations to a simple string of phonemes. Alternatively, readers might activate elaborated representations that include prosodic syllable…
Differences between conduction aphasia and Wernicke's aphasia.
Anzaki, F; Izumi, S
2001-07-01
Conduction aphasia and Wernicke's aphasia have been differentiated by the degree of auditory language comprehension. We quantitatively compared the speech sound errors of two conduction aphasia patients and three Wernicke's aphasia patients on various language modality tests. All of the patients were Japanese. The two conduction aphasia patients made "conduites d'approche" errors and phonological paraphasias. The patient with mild Wernicke's aphasia made various errors; in the patient with severe Wernicke's aphasia, neologism was observed. Phonological paraphasia in the two conduction aphasia patients seemed to occur while the patient searched for the target word. These patients made more errors in vowels than in consonants of target words on the naming and repetition tests, and appeared to search for the target word using the correct consonant phoneme but an incorrect vowel phoneme within the table of the Japanese syllabary. The Wernicke's aphasia patients, who had severe impairment of auditory comprehension, made more errors in consonants than in vowels of target words. In conclusion, the utterances of conduction aphasia and Wernicke's aphasia are qualitatively distinct.
Maurer, D; Hess, M; Gross, M
1996-12-01
Theoretic investigations of the "source-filter" model have indicated a pronounced acoustic interaction of glottal source and vocal tract. Empirical investigations of formant pattern variations apart from changes in vowel identity have demonstrated a direct relationship between the fundamental frequency and the patterns. As a consequence of both findings, independence of phonation and articulation may be limited in the speech process. Within the present study, possible interdependence of phonation and phoneme was investigated: vocal fold vibrations and larynx position for vocalizations of different vowels in a healthy man and woman were examined by high-speed light-intensified digital imaging. We found 1) different movements of the vocal folds for vocalizations of different vowel identities within one speaker and at similar fundamental frequency, and 2) constant larynx position within vocalization of one vowel identity, but different positions for vocalizations of different vowel identities. A possible relationship between the vocal fold vibrations and the phoneme is discussed.
The Foundations of Literacy Development in Children at Familial Risk of Dyslexia.
Hulme, Charles; Nash, Hannah M; Gooch, Debbie; Lervåg, Arne; Snowling, Margaret J
2015-12-01
The development of reading skills is underpinned by oral language abilities: Phonological skills appear to have a causal influence on the development of early word-level literacy skills, and reading-comprehension ability depends, in addition to word-level literacy skills, on broader (semantic and syntactic) language skills. Here, we report a longitudinal study of children at familial risk of dyslexia, children with preschool language difficulties, and typically developing control children. Preschool measures of oral language predicted phoneme awareness and grapheme-phoneme knowledge just before school entry, which in turn predicted word-level literacy skills shortly after school entry. Reading comprehension at 8½ years was predicted by word-level literacy skills at 5½ years and by language skills at 3½ years. These patterns of predictive relationships were similar in both typically developing children and those at risk of literacy difficulties. Our findings underline the importance of oral language skills for the development of both word-level literacy and reading comprehension. © The Author(s) 2015.
Music playschool enhances children's linguistic skills.
Linnavalli, Tanja; Putkinen, Vesa; Lipsanen, Jari; Huotilainen, Minna; Tervaniemi, Mari
2018-06-08
Several studies have suggested that intensive musical training enhances children's linguistic skills. Such training, however, is not available to all children. We studied in a community setting whether a low-cost, weekly music playschool provided to 5-6-year-old children in kindergartens could already affect their linguistic abilities. Children (N = 66) were tested four times over two school years with Phoneme processing and Vocabulary subtests, along with tests for Perceptual reasoning skills and Inhibitory control. We compared the development of music playschool children to that of peers who either attended similarly organized dance lessons or attended neither activity. Music playschool significantly improved the development of children's phoneme processing and vocabulary skills. No such improvements were obtained in children's scores for non-verbal reasoning and inhibition. Our data suggest that even playful group music activities - if attended for several years - have a positive effect on pre-schoolers' linguistic skills. Therefore we promote the concept of implementing regular music playschool lessons given by professional teachers in early childhood education.
Escalda, Júlia; Lemos, Stela Maris Aguiar; França, Cecília Cavalieri
2011-09-01
To investigate the relations between musical experience, auditory processing, and phonological awareness in groups of 5-year-old children with and without musical experience. Participants were 56 5-year-old children of both genders: 26 in the Study Group, consisting of children with musical experience, and 30 in the Control Group, consisting of children without musical experience. All participants were assessed with the Simplified Auditory Processing Assessment and the Phonological Awareness Test, and the data were statistically analyzed. There were statistically significant differences between the groups on the sequential memory test for verbal and non-verbal sounds with four stimuli and on the phonological awareness tasks of rhyme recognition, phonemic synthesis, and phonemic deletion. Multiple binary logistic regression analysis showed that, with the exception of sequential verbal memory with four syllables, the observed difference in subjects' performance was associated with their musical experience. Musical experience improves the auditory and metalinguistic abilities of 5-year-old children.
[FMRI-study of speech perception impairment in post-stroke patients with sensory aphasia].
Maĭorova, L A; Martynova, O V; Fedina, O N; Petrushevskiĭ, A G
2013-01-01
The aim of the study was to find neurophysiological correlates of the primary-stage impairment of speech perception, namely phonemic discrimination, in patients with sensory aphasia after acute ischemic stroke in the left hemisphere, using the noninvasive method of fMRI. For this purpose we registered the fMRI equivalent of mismatch negativity (MMN) in response to speech phonemes (the syllables "ba" and "pa") in an oddball paradigm in 20 healthy subjects and 23 patients with post-stroke sensory aphasia. In healthy subjects, brain areas active in the MMN contrast were observed in the superior temporal and inferior frontal gyri of the right and left hemispheres. In the group of patients, significant activation of the auditory cortex was observed in the right hemisphere only; this activation was smaller in volume and intensity than in healthy subjects and correlated with the degree of speech preservation. Thus, recording the fMRI equivalent of MMN is a sensitive method for studying speech perception impairment.
Mano, Quintino R; Williamson, Brady J; Pae, Hye K; Osmon, David C
2016-01-01
The Stroop Color-Word Test involves a dynamic interplay between reading and executive functioning that elicits intuitions of word reading automaticity. One such intuition is that strong reading skills (i.e., more automatized word reading) play a disruptive role within the test, contributing to Stroop interference. However, evidence has accumulated that challenges this intuition. The present study examined associations among Stroop interference, reading skills (i.e., isolated word identification, grapheme-to-phoneme mapping, phonemic awareness, reading fluency) measured on standardized tests, and orthographic skills measured on experimental computerized tasks. Among university students (N = 152), correlational analyses showed greater Stroop interference to be associated with (a) relatively low scores on all standardized reading tests, and (b) longer response latencies on orthographic tasks. Hierarchical regression demonstrated that reading fluency and prelexical orthographic processing predicted unique and significant variance in Stroop interference beyond baseline rapid naming. Results suggest that strong reading skills, including orthographic processing, play a supportive role in resolving Stroop interference.
Lopopolo, Alessandro; Frank, Stefan L; van den Bosch, Antal; Willems, Roel M
2017-01-01
Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words are at the basis of probabilistic measures such as surprisal and perplexity which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words and their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.
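The surprisal and perplexity measures described above follow directly from per-token conditional probabilities; a minimal sketch (the probability values are invented for illustration, not taken from any language model used in the study):

```python
import math

def surprisal(p):
    """Surprisal in bits of an event with conditional probability p."""
    return -math.log2(p)

def perplexity(probs):
    """Perplexity of a sequence: 2 raised to the mean per-token surprisal."""
    mean_surprisal = sum(surprisal(p) for p in probs) / len(probs)
    return 2 ** mean_surprisal

# Hypothetical conditional probabilities P(w_i | w_1..w_{i-1}) for a 4-token sequence
probs = [0.5, 0.25, 0.125, 0.5]
print(surprisal(0.25))   # 2.0 bits
print(perplexity(probs))  # 2 ** 1.75, about 3.36
```

The same computation applies whether the tokens are words, parts of speech, or phonemes, which is how a single measure can track the three information streams separately.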
Monson, Brian B; Lotto, Andrew J; Story, Brad H
2012-09-01
The human singing and speech spectrum includes energy above 5 kHz. To begin an in-depth exploration of this high-frequency energy (HFE), a database of anechoic high-fidelity recordings of singers and talkers was created and analyzed. Third-octave band analysis from the long-term average spectra showed that production level (soft vs normal vs loud), production mode (singing vs speech), and phoneme (for voiceless fricatives) all significantly affected HFE characteristics. Specifically, increased production level caused an increase in absolute HFE level, but a decrease in relative HFE level. Singing exhibited higher levels of HFE than speech in the soft and normal conditions, but not in the loud condition. Third-octave band levels distinguished phoneme class of voiceless fricatives. Female HFE levels were significantly greater than male levels only above 11 kHz. This information is pertinent to various areas of acoustics, including vocal tract modeling, voice synthesis, augmentative hearing technology (hearing aids and cochlear implants), and training/therapy for singing and speech.
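Third-octave band analysis of the kind applied to the long-term average spectra rests on a standard band-spacing rule: each band spans a factor of 2^(1/3) in frequency. A minimal sketch of the band edges, using the base-2 convention with a 1 kHz reference band (a common acoustics convention, assumed here rather than taken from the study):

```python
def third_octave_band(n, f_ref=1000.0):
    """Lower edge, center, and upper edge (Hz) of the nth base-2
    third-octave band, with band 0 centered at f_ref."""
    fc = f_ref * 2 ** (n / 3)
    return fc / 2 ** (1 / 6), fc, fc * 2 ** (1 / 6)

# Bands whose centers fall in the high-frequency-energy region (above ~5 kHz)
for n in range(7, 12):
    lo, fc, hi = third_octave_band(n)
    print(f"band {n}: {lo:.0f}-{hi:.0f} Hz, center {fc:.0f} Hz")
```

A band level is then the total power of the spectrum falling between the lower and upper edges, expressed in dB; comparing those levels across conditions is what distinguishes production level, mode, and phoneme class here.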
van der Heijden, Martijn; Dikkers, Frederik G; Halmos, Gyorgy B
2015-12-01
Laryngomalacia is the most common cause of dyspnea and stridor in newborn infants. Laryngomalacia is a dynamic change of the upper airway, based on abnormally pliable supraglottic structures, that causes upper airway obstruction. In the past, different classification systems have been introduced, but to date no classification system has been widely accepted and applied. Our goal is to provide a simple and complete classification system based on a systematic literature search and our own experience. Retrospective cohort study with literature review. All patients with laryngomalacia under the age of 5 at the time of diagnosis were included. Photo and video documentation was used to confirm the diagnosis and the characteristics of dynamic airway change. Outcomes were compared with available classification systems in the literature. Eighty-five patients were included. In contrast to other classification systems, only three distinct dynamic changes were identified in our series. Two existing classification systems covered 100% of our findings, but there was unnecessary overlap between different types in most of the systems. Based on our findings, we propose a new classification system for laryngomalacia based purely on dynamic airway changes. The Groningen laryngomalacia classification is a new, simplified classification system with three types, based purely on dynamic laryngeal changes and tested in a tertiary referral center: Type 1, inward collapse of the arytenoid cartilages; Type 2, medial displacement of the aryepiglottic folds; and Type 3, posterocaudal displacement of the epiglottis against the posterior pharyngeal wall. © 2015 Wiley Periodicals, Inc.
Designing and Implementation of River Classification Assistant Management System
NASA Astrophysics Data System (ADS)
Zhao, Yinjun; Jiang, Wenyuan; Yang, Rujun; Yang, Nan; Liu, Haiyan
2018-03-01
In an earlier publication, we proposed a new Decision Classifier (DCF) for Chinese river classification based on river structure. To expand, enhance, and promote the application of the DCF, we built a computer system to support river classification, named the River Classification Assistant Management System. Built on the ArcEngine and ArcServer platforms, the system implements functions such as data management, river-network extraction, river classification, and results publication, combining a Client/Server with a Browser/Server framework.
42 CFR 412.10 - Changes in the DRG classification system.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 2 2010-10-01 2010-10-01 false Changes in the DRG classification system. 412.10... § 412.10 Changes in the DRG classification system. (a) General rule. CMS issues changes in the DRG classification system in a Federal Register notice at least annually. Except as specified in paragraphs (c) and...
42 CFR 412.10 - Changes in the DRG classification system.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false Changes in the DRG classification system. 412.10... § 412.10 Changes in the DRG classification system. (a) General rule. CMS issues changes in the DRG classification system in a Federal Register notice at least annually. Except as specified in paragraphs (c) and...
ERIC Educational Resources Information Center
Hidecker, Mary Jo Cooley; Ho, Nhan Thi; Dodge, Nancy; Hurvitz, Edward A.; Slaughter, Jaime; Workinger, Marilyn Seif; Kent, Ray D.; Rosenbaum, Peter; Lenski, Madeleine; Messaros, Bridget M.; Vanderbeek, Suzette B.; Deroos, Steven; Paneth, Nigel
2012-01-01
Aim: To investigate the relationships among the Gross Motor Function Classification System (GMFCS), Manual Ability Classification System (MACS), and Communication Function Classification System (CFCS) in children with cerebral palsy (CP). Method: Using questionnaires describing each scale, mothers reported GMFCS, MACS, and CFCS levels in 222…
Speech Appliances in the Treatment of Phonological Disorders.
ERIC Educational Resources Information Center
Ruscello, Dennis M.
1995-01-01
This article addresses the rationale for and issues related to the use of speech appliances, especially a removable speech appliance that positions the tongue to produce the correct /r/ phoneme. Research results suggest that this appliance was successful with a large group of clients. (Author/DB)
Spelling Mastery. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2014
2014-01-01
"Spelling Mastery" is designed to explicitly teach spelling skills to students in grades 1 through 6. One of several Direct Instruction curricula from McGraw-Hill that precisely specify how to teach incremental content, "Spelling Mastery" includes phonemic, morphemic, and whole-word strategies. The What Works Clearinghouse…
Reading in French-Speaking Adults with Dyslexia
ERIC Educational Resources Information Center
Martin, Jennifer; Cole, Pascale; Leuwers, Christel; Casalis, Severine; Zorman, Michel; Sprenger-Charolles, Liliane
2010-01-01
This study investigated the reading and reading-related skills of 15 French-speaking adults with dyslexia, whose performance was compared with that of chronological-age controls (CA) and reading-level controls (RL). Experiment 1 assessed the efficiency of their phonological reading-related skills (phonemic awareness, phonological short-term…
Supporting the Essential Elements with CD-ROM Storybooks
ERIC Educational Resources Information Center
Pearman, Cathy J.; Lefever-Davis, Shirley
2006-01-01
CD-ROM storybooks can support the development of the five essential elements of reading instruction identified by The National Reading Panel: phonemic awareness, phonics, fluency, vocabulary, and comprehension. Specific features inherent in these texts, audio pronunciation of text, embedded vocabulary definitions and animated graphics can be used…
Kurdish Readers. Part I, Newspaper Kurdish.
ERIC Educational Resources Information Center
Abdulla, Jamal Jalal; McCarus, Ernest N.
Assuming a mastery of the contents of the "Basic Course in Kurdish" (by the same authors), this reader presents a variety of 28 articles selected from the Iraqi newspapers "Zhin" and "Khebat." Each lesson begins with a selection written in Kurdish (modified Arabic-Persian) script, followed by phonemic transcription…
Stepping Stones to Literacy. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2007
2007-01-01
Stepping Stones to Literacy (SSL) is a supplemental curriculum designed to promote listening, print conventions, phonological awareness, phonemic awareness, and serial processing/rapid naming (quickly naming familiar visual symbols and stimuli such as letters or colors). The program targets kindergarten and older preschool students considered to…
Orthographic Influence on the Phonological Development of L2 Learners of Korean
ERIC Educational Resources Information Center
Lee, Sooyeon
2012-01-01
This dissertation examines the influence of L2 orthographic representation on the phonological development of American English speakers learning Korean, addressing specifically the syllabification and resyllabification of Korean intervocalic obstruents and the intervocalic liquid phoneme. Although Korean and English both employ alphabetic writing…
Decoding Acquisition: A Study of First Grade Readers.
ERIC Educational Resources Information Center
Hollingsworth, Sandra
To determine the factors accounting for children's growth in decoding skill, a study examined school entering characteristics--age, sex, ethnicity, and developmental abilities--and school-influenced skills and characteristics--phonemic awareness, letter-name knowledge, basal text, and place in series--of approximately 100 grade one students.…
ERIC Educational Resources Information Center
O'Connor, N.; Hermelin, B.
1994-01-01
Two young autistic children exhibited normal reading comprehension but reading speeds considerably faster than controls. The effect of randomizing word order was minimal for the older of the two autistic boys. Results indicate that efficient grapheme-phoneme conversion is primarily responsible for the fast reading of the autistic children.…
NASA Astrophysics Data System (ADS)
Warren, Sean N.; Kallu, Raj R.; Barnard, Chase K.
2016-11-01
Underground gold mines in Nevada are exploiting increasingly deeper ore bodies composed of weak to very weak rock masses. The Rock Mass Rating (RMR) classification system is widely used at underground gold mines in Nevada and is applicable in fair to good-quality rock masses, but is difficult to apply and loses reliability in very weak rock mass to soil-like material. Because very weak rock masses are transition materials that border engineering rock mass and soil classification systems, soil classification may sometimes be easier and more appropriate to provide insight into material behavior and properties. The Unified Soil Classification System (USCS) is the most likely choice for the classification of very weak rock mass to soil-like material because of its accepted use in tunnel engineering projects and its ability to predict soil-like material behavior underground. A correlation between the RMR and USCS systems was developed by comparing underground geotechnical RMR mapping to laboratory testing of bulk samples from the same locations, thereby assigning a numeric RMR value to the USCS classification that can be used in spreadsheet calculations and geostatistical analyses. The geotechnical classification system presented in this paper, including a USCS-RMR correlation, RMR rating equations, and the Geo-Pick Strike Index, is collectively introduced as the Weak Rock Mass Rating System (W-RMR). It is the authors' hope that this system will aid in the classification of weak rock masses and provide more usable design tools based on the RMR system. More broadly, the RMR-USCS correlation and the W-RMR system help define the transition between engineering soil and rock mass classification systems and may provide insight for geotechnical design in very weak rock masses.
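Because the correlation assigns a numeric RMR value to each USCS class, it lends itself to simple spreadsheet- or script-style lookups of the kind the abstract describes. The sketch below shows only the shape of such a lookup; the RMR anchor values attached to each USCS class here are hypothetical placeholders, not the published W-RMR correlation.

```python
# Hypothetical USCS -> representative RMR lookup, in the spirit of the
# W-RMR correlation described above. The numeric anchors are illustrative
# placeholders, NOT the values from the paper.
USCS_TO_RMR = {
    "GW": 30, "GP": 28, "SW": 25, "SP": 23,
    "SM": 20, "SC": 18, "ML": 15, "CL": 14,
}

def rmr_for_uscs(uscs_class, default=None):
    """Return a representative RMR value for a USCS class, if known."""
    return USCS_TO_RMR.get(uscs_class.upper(), default)
```

A lookup like this is what makes the classification usable in geostatistical workflows: each mapped interval gets a single number that can be averaged, contoured, or kriged alongside conventional RMR values.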
Medehouenou, Thierry Comlan Marc; Ayotte, Pierre; St-Jean, Audray; Meziou, Salma; Roy, Cynthia; Muckle, Gina; Lucas, Michel
2015-07-01
Little is known about the suitability of three commonly used body mass index (BMI) classification systems for Indigenous children. This study aims to estimate overweight and obesity prevalence among school-aged Nunavik Inuit children according to the International Obesity Task Force (IOTF), Centers for Disease Control and Prevention (CDC), and World Health Organization (WHO) BMI classification systems, to measure agreement between those classification systems, and to investigate whether BMI status as defined by these classification systems is associated with levels of metabolic and inflammatory biomarkers. Data came from 290 school-aged children (aged 8-14 years; 50.7% girls) in the Nunavik Child Development Study, collected in 2005-2010. Anthropometric parameters were measured and blood was sampled. Participants were classified as normal weight, overweight, or obese according to the BMI classification systems. Weighted kappa (κw) statistics assessed agreement between the different BMI classification systems, and multivariate analysis of variance ascertained their relationship with metabolic and inflammatory biomarkers. The combined prevalence rate of overweight/obesity was 26.9% (with 6.6% obesity) with the IOTF, 24.1% (11.0%) with the CDC, and 40.4% (12.8%) with the WHO classification systems. Agreement was highest between the IOTF and CDC (κw = .87) classifications, and substantial for IOTF and WHO (κw = .69) and for CDC and WHO (κw = .73). Insulin and high-sensitivity C-reactive protein plasma levels were significantly higher from normal weight to obesity, regardless of classification system. Among obese subjects, a higher insulin level was observed with the IOTF. Compared with the other systems, the IOTF classification appears to be more specific in identifying overweight and obesity in Inuit children. Copyright © 2015 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
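The weighted-kappa statistic used to compare the BMI classification systems can be sketched in a few lines. The following is a minimal, self-contained implementation of linearly weighted Cohen's kappa for ordinal categories (normal weight < overweight < obese); it illustrates the statistic itself, not the study's analysis code, and assumes linear disagreement weights.

```python
def weighted_kappa(a, b, categories):
    """Linearly weighted Cohen's kappa for two raters' ordinal labels.

    `categories` must list the labels in their ordinal order, e.g.
    ["normal", "overweight", "obese"].
    """
    n = len(a)
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    # observed joint proportion matrix
    obs = [[0.0] * k for _ in range(k)]
    for x, y in zip(a, b):
        obs[idx[x]][idx[y]] += 1.0 / n
    pa = [sum(row) for row in obs]                              # rater A marginals
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]   # rater B marginals
    # linear disagreement weights: normalized distance between categories
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp
```

With the two label sequences being, say, IOTF and CDC assignments for the same children, a value near the study's κw = .87 would indicate near-perfect agreement under the usual interpretation scale.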
Stroke subtyping for genetic association studies? A comparison of the CCS and TOAST classifications.
Lanfranconi, Silvia; Markus, Hugh S
2013-12-01
A reliable and reproducible classification system of stroke subtype is essential for epidemiological and genetic studies. The Causative Classification of Stroke system is an evidence-based computerized algorithm with excellent inter-rater reliability. It has been suggested that, compared to the Trial of ORG 10172 in Acute Stroke Treatment classification, it increases the proportion of cases with a defined subtype, which may increase power in genetic association studies. We compared the Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke system classifications in a large cohort of well-phenotyped stroke patients. Six hundred ninety consecutively recruited patients with first-ever ischemic stroke were classified, using review of clinical data and original imaging, according to the Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke system classifications. There was excellent agreement between subtypes assigned by the Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke systems (kappa = 0.85). The agreement was excellent for the major individual subtypes: large artery atherosclerosis kappa = 0.888, small-artery occlusion kappa = 0.869, cardiac embolism kappa = 0.89, and undetermined category kappa = 0.884. There was only moderate agreement (kappa = 0.41) for subjects with at least two competing underlying mechanisms. Thirty-five (5.8%) patients classified as undetermined by the Trial of ORG 10172 in Acute Stroke Treatment system were assigned to a definite subtype by the Causative Classification of Stroke system. Thirty-two subjects assigned to a definite subtype by the Trial of ORG 10172 in Acute Stroke Treatment system were classified as undetermined by the Causative Classification of Stroke system.
There is excellent agreement between classification using the Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke systems, but no evidence that the Causative Classification of Stroke system reduces the proportion of patients classified to undetermined subtypes. The excellent inter-rater reproducibility and web-based semiautomated nature make the Causative Classification of Stroke system suitable for multicenter studies, but the benefit of reclassifying cases already classified using the Trial of ORG 10172 in Acute Stroke Treatment system on existing databases is likely to be small. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
Acoustic-Emergent Phonology in the Amplitude Envelope of Child-Directed Speech
Leong, Victoria; Goswami, Usha
2015-01-01
When acquiring language, young children may use acoustic spectro-temporal patterns in speech to derive phonological units in spoken language (e.g., prosodic stress patterns, syllables, phonemes). Children appear to learn acoustic-phonological mappings rapidly, without direct instruction, yet the underlying developmental mechanisms remain unclear. Across different languages, a relationship between amplitude envelope sensitivity and phonological development has been found, suggesting that children may make use of amplitude modulation (AM) patterns within the envelope to develop a phonological system. Here we present the Spectral Amplitude Modulation Phase Hierarchy (S-AMPH) model, a set of algorithms for deriving the dominant AM patterns in child-directed speech (CDS). Using Principal Components Analysis, we show that rhythmic CDS contains an AM hierarchy comprising 3 core modulation timescales. These timescales correspond to key phonological units: prosodic stress (Stress AM, ~2 Hz), syllables (Syllable AM, ~5 Hz) and onset-rime units (Phoneme AM, ~20 Hz). We argue that these AM patterns could in principle be used by naïve listeners to compute acoustic-phonological mappings without lexical knowledge. We then demonstrate that the modulation statistics within this AM hierarchy indeed parse the speech signal into a primitive hierarchically-organised phonological system comprising stress feet (proto-words), syllables and onset-rime units. We apply the S-AMPH model to two other CDS corpora, one spontaneous and one deliberately-timed. The model accurately identified 72–82% (freely-read CDS) and 90–98% (rhythmically-regular CDS) stress patterns, syllables and onset-rime units. This in-principle demonstration that primitive phonology can be extracted from speech AMs is termed Acoustic-Emergent Phonology (AEP) theory. 
AEP theory provides a set of methods for examining how early phonological development is shaped by the temporal modulation structure of speech across languages. The S-AMPH model reveals a crucial developmental role for stress feet (AMs ~2 Hz). Stress feet underpin different linguistic rhythm typologies, and speech rhythm underpins language acquisition by infants in all languages. PMID:26641472
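The S-AMPH idea of isolating modulation-rate bands within the amplitude envelope can be sketched as follows. This is a minimal illustration assuming a crude rectified envelope and FFT band masks; the band edges and the envelope extraction are placeholders chosen to bracket the ~2 Hz, ~5 Hz, and ~20 Hz timescales, not the published S-AMPH filter design (which derives its spectral and modulation bands via Principal Components Analysis).

```python
import numpy as np

def am_bands(signal, fs, bands=((0.9, 2.5), (2.5, 12.0), (12.0, 40.0))):
    """Split a signal's amplitude envelope into modulation-rate bands
    roughly matching the S-AMPH timescales: Stress AM (~2 Hz),
    Syllable AM (~5 Hz), and Phoneme/onset-rime AM (~20 Hz).

    Band edges here are illustrative, not the published filter design.
    """
    env = np.abs(signal)                       # crude rectified envelope
    spec = np.fft.rfft(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    out = {}
    for name, (lo, hi) in zip(("stress", "syllable", "phoneme"), bands):
        mask = (freqs >= lo) & (freqs < hi)    # keep only this AM band
        out[name] = np.fft.irfft(spec * mask, n=len(env))
    return out
```

Applied to a carrier whose amplitude is modulated at 2 Hz, almost all envelope energy lands in the "stress" component and almost none in the "phoneme" component, which is the kind of separation the model exploits to parse the signal into stress feet, syllables, and onset-rime units.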