Sample records for speech identification

  1. Study of environmental sound source identification based on hidden Markov model for robust speech recognition

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2003-10-01

    Humans communicate with each other through speech by focusing on the target speech among environmental sounds in real acoustic environments. We can easily identify the target sound from other environmental sounds. For hands-free speech recognition, the identification of the target speech from environmental sounds is imperative. This mechanism may also be important for a self-moving robot to sense the acoustic environments and communicate with humans. Therefore, this paper first proposes hidden Markov model (HMM)-based environmental sound source identification. Environmental sounds are modeled by three states of HMMs and evaluated using 92 kinds of environmental sounds. The identification accuracy was 95.4%. This paper also proposes a new HMM composition method that composes speech HMMs and an HMM of categorized environmental sounds for robust environmental sound-added speech recognition. As a result of the evaluation experiments, we confirmed that the proposed HMM composition outperforms the conventional HMM composition with speech HMMs and a noise (environmental sound) HMM trained using noise periods prior to the target speech in a captured signal. [Work supported by Ministry of Public Management, Home Affairs, Posts and Telecommunications of Japan.]
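
    As a rough illustration of the classifier described above, the sketch below trains one three-state HMM per environmental sound class and identifies a test recording by maximum log-likelihood. The use of the hmmlearn package and of MFCC-style frame-by-feature matrices is an assumption for illustration; the paper does not specify a toolkit or front-end.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      def train_sound_models(class_features):
          # class_features: dict mapping class name -> (n_frames, n_dims) feature matrix
          models = {}
          for name, feats in class_features.items():
              hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
              hmm.fit(feats)  # three-state HMM per environmental sound, as in the paper
              models[name] = hmm
          return models

      def identify_sound(models, feats):
          # pick the class whose HMM assigns the highest log-likelihood to the observation
          scores = {name: m.score(feats) for name, m in models.items()}
          return max(scores, key=scores.get)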

  2. A hybrid technique for speech segregation and classification using a sophisticated deep neural network

    PubMed Central

    Nawaz, Tabassam; Mehmood, Zahid; Rashid, Muhammad; Habib, Hafiz Adnan

    2018-01-01

    Recent research on speech segregation and music fingerprinting has led to improvements in speech segregation and music identification algorithms. Speech and music segregation generally involves the identification of music followed by speech segregation. However, music segregation becomes a challenging task in the presence of noise. This paper proposes a novel method of speech segregation for unlabelled stationary noisy audio signals using the deep belief network (DBN) model. The proposed method successfully segregates a music signal from noisy audio streams. A recurrent neural network (RNN)-based hidden layer segregation model is applied to remove stationary noise. Dictionary-based Fisher algorithms are employed for speech classification. The proposed method is tested on three datasets (TIMIT, MIR-1K, and MusicBrainz), and the results indicate the robustness of the proposed method for speech segregation. The qualitative and quantitative analyses carried out on the three datasets demonstrate the efficiency of the proposed method compared to state-of-the-art speech segregation and classification-based methods. PMID:29558485

  3. Advancements in robust algorithm formulation for speaker identification of whispered speech

    NASA Astrophysics Data System (ADS)

    Fan, Xing

    Whispered speech is an alternative speech production mode to neutral speech, used intentionally by talkers in natural conversational scenarios to protect privacy and to prevent certain content from being overheard or made public. Due to the profound differences between whispered and neutral speech in production mechanism, and the absence of whispered adaptation data, the performance of speaker identification systems trained with neutral speech degrades significantly. This dissertation therefore focuses on developing a robust closed-set speaker recognition system for whispered speech by using no or limited whispered adaptation data from non-target speakers. This dissertation proposes the concept of "High"/"Low" performance whispered data for the purpose of speaker identification. A variety of acoustic properties are identified that contribute to the quality of whispered data. An acoustic analysis is also conducted to compare the phoneme/speaker dependency of the differences between whispered and neutral data in the feature domain. The observations from these acoustic analyses are new in this area and serve as guidance for developing robust speaker identification systems for whispered speech. This dissertation further proposes two systems for speaker identification of whispered speech. One system focuses on front-end processing. A two-dimensional feature space is proposed to search for "Low"-performance whispered utterances, and separate feature mapping functions are applied to vowels and consonants respectively in order to retain the speaker information shared between whispered and neutral speech. The other system focuses on speech-mode-independent model training. The proposed method generates pseudo-whispered features from neutral features by using the statistical information contained in a whispered Universal Background Model (UBM) trained on additional whispered data collected from non-target speakers. Four modeling methods are proposed for estimating the transformation used to generate the pseudo-whispered features. Both systems demonstrate a significant improvement over the baseline system on the evaluation data. This dissertation has therefore contributed a scientific understanding of the differences between whispered and neutral speech as well as improved front-end processing and modeling methods for speaker identification of whispered speech. Such advancements will ultimately contribute to improving the robustness of speech processing systems.

  4. Speech Perception and Short Term Memory Deficits in Persistent Developmental Speech Disorder

    PubMed Central

    Kenney, Mary Kay; Barac-Cikoja, Dragana; Finnegan, Kimberly; Jeffries, Neal; Ludlow, Christy L.

    2008-01-01

    Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech perception and short-term memory. Nine adults with a persistent familial developmental speech disorder without language impairment were compared with 20 controls on tasks requiring the discrimination of fine acoustic cues for word identification and on measures of verbal and nonverbal short-term memory. Significant group differences were found in the slopes of the discrimination curves for first formant transitions for word identification with stop gaps of 40 and 20 ms with effect sizes of 1.60 and 1.56. Significant group differences also occurred on tests of nonverbal rhythm and tonal memory, and verbal short-term memory with effect sizes of 2.38, 1.56 and 1.73. No group differences occurred in the use of stop gap durations for word identification. Because frequency-based speech perception and short-term verbal and nonverbal memory deficits both persisted into adulthood in the speech-impaired adults, these deficits may be involved in the persistence of speech disorders without language impairment. PMID:15896836

  5. Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers.

    PubMed

    Hannah, Beverly; Wang, Yue; Jongman, Allard; Sereno, Joan A; Cao, Jiguo; Nie, Yunlong

    2017-01-01

    Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-Facial-Gestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers' performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning.

  6. Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers

    PubMed Central

    Hannah, Beverly; Wang, Yue; Jongman, Allard; Sereno, Joan A.; Cao, Jiguo; Nie, Yunlong

    2017-01-01

    Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-Facial-Gestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers’ performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning. PMID:29255435

  7. Discriminative analysis of lip motion features for speaker identification and speech-reading.

    PubMed

    Cetingül, H Ertan; Yemez, Yücel; Erzin, Engin; Tekalp, A Murat

    2006-10-01

    There have been several studies that jointly use audio, lip intensity, and lip geometry information for speaker identification and speech-reading applications. This paper proposes using explicit lip motion information, instead of or in addition to lip intensity and/or geometry information, for speaker identification and speech-reading within a unified feature selection and discrimination analysis framework, and addresses two important issues: 1) Is using explicit lip motion information useful, and 2) if so, what are the best lip motion features for these two applications? The best lip motion features for speaker identification are considered to be those that result in the highest discrimination of individual speakers in a population, whereas for speech-reading, the best features are those providing the highest phoneme/word/phrase recognition rate. Several lip motion feature candidates have been considered, including dense motion features within a bounding box about the lip, lip contour motion features, and combinations of these with lip shape features. Furthermore, a novel two-stage, spatial, and temporal discrimination analysis is introduced to select the best lip motion features for speaker identification and speech-reading applications. Experimental results using a hidden-Markov-model-based recognition system indicate that using explicit lip motion information provides additional performance gains in both applications, and that lip motion features prove more valuable for speech-reading.

  8. Robust Speaker Authentication Based on Combined Speech and Voiceprint Recognition

    NASA Astrophysics Data System (ADS)

    Malcangi, Mario

    2009-08-01

    Personal authentication is becoming increasingly important in many applications that have to protect proprietary data. Passwords and personal identification numbers (PINs) prove not to be robust enough to ensure that unauthorized people do not use them. Biometric authentication technology may offer a secure, convenient, accurate solution but sometimes fails due to its intrinsically fuzzy nature. This research aims to demonstrate that combining two basic speech processing methods, voiceprint identification and speech recognition, can provide a very high degree of robustness, especially if fuzzy decision logic is used.

  9. Speech identification in noise: Contribution of temporal, spectral, and visual speech cues.

    PubMed

    Kim, Jeesun; Davis, Chris; Groot, Christopher

    2009-12-01

    This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to have only amplitude envelope cues or both amplitude envelope and spectral cues and were presented with/without visual speech. In Experiment 1, IEEE sentences were presented in quiet and noise. For in-quiet presentation, speech identification was enhanced by the addition of both spectral and visual speech cues. Due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place information and vowel height, whereas with visual speech, they facilitated lip rounding, with little impact on the transmission of place information.

  10. Open-set speaker identification with diverse-duration speech data

    NASA Astrophysics Data System (ADS)

    Karadaghi, Rawande; Hertlein, Heinz; Ariyaeeinia, Aladdin

    2015-05-01

    The concern in this paper is an important category of applications of open-set speaker identification in criminal investigation, which involves operating with speech of short and varied duration. The study presents investigations into the adverse effects of such an operating condition on the accuracy of open-set speaker identification, based on both GMM-UBM and i-vector approaches. The experiments are conducted using a protocol developed for the identification task, based on the NIST speaker recognition evaluation corpus of 2008. In order to closely cover the real-world operating conditions in the considered application area, the study includes experiments with various combinations of training and testing data duration. The paper details the characteristics of the experimental investigations conducted and provides a thorough analysis of the results obtained.
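
    A minimal sketch of the GMM-UBM branch of such an open-set system is given below: per-speaker GMMs are scored against a universal background model, and a likelihood-ratio threshold decides whether the best-matching enrolled speaker is accepted or the trial is rejected as out-of-set. The use of scikit-learn and of maximum-likelihood speaker models (rather than MAP adaptation from the UBM) are simplifying assumptions, not details taken from the paper.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def train_ubm(background_feats, n_mix=64):
          # universal background model trained on pooled non-target speech
          ubm = GaussianMixture(n_components=n_mix, covariance_type="diag", max_iter=50)
          ubm.fit(background_feats)
          return ubm

      def enroll_speaker(speaker_feats, n_mix=64):
          # simplified enrollment: an independent GMM per speaker (MAP adaptation omitted)
          gmm = GaussianMixture(n_components=n_mix, covariance_type="diag", max_iter=50)
          gmm.fit(speaker_feats)
          return gmm

      def open_set_identify(test_feats, speaker_models, ubm, threshold=0.0):
          # average log-likelihood ratio of each enrolled model against the UBM
          llr = {spk: m.score(test_feats) - ubm.score(test_feats)
                 for spk, m in speaker_models.items()}
          best = max(llr, key=llr.get)
          return best if llr[best] > threshold else None  # None = out-of-set trial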

  11. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    ERIC Educational Resources Information Center

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  12. The perception of sentence stress in cochlear implant recipients.

    PubMed

    Meister, Hartmut; Landwehr, Markus; Pyschny, Verena; Wagner, Petra; Walger, Martin

    2011-01-01

    Sentence stress is a vital attribute of speech since it indicates the importance of specific words within an utterance. Basic acoustic correlates of stress are syllable duration, intensity, and fundamental frequency (F0). Objectives of the study were to determine cochlear implant (CI) users' perception of the acoustic correlates and to uncover which cues are used for stress identification. Several experiments addressed the discrimination of changes in syllable duration, intensity, and F0 as well as stress identification based on these cues. Moreover, the discrimination of combined cues and identification of stress in conversational speech was examined. Both natural utterances and artificial manipulations of the acoustic cues were used as stimuli. Discrimination of syllable duration did not differ significantly between CI recipients and a control group of normal-hearing listeners. In contrast, CI users performed significantly worse on tasks of discrimination and stress identification based on F0 as well as on intensity. Results from these measurements were significantly correlated with the ability to identify stress in conversational speech. Discrimination performance for covarying F0 and intensity changes was more strongly correlated to identification performance than was found for discrimination of either F0 or intensity alone. Syllable duration was not related to stress identification in natural utterances. The outcome emphasizes the importance of both F0 and intensity for CI users' identification of sentence-based stress. Both cues were used separately for stress perception, but combining the cues provided extra benefit for most of the subjects.

  13. Audiovisual training is better than auditory-only training for auditory-only speech-in-noise identification.

    PubMed

    Lidestam, Björn; Moradi, Shahram; Pettersson, Rasmus; Ricklefs, Theodor

    2014-08-01

    The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.

  14. Use of amplitude modulation cues recovered from frequency modulation for cochlear implant users when original speech cues are severely degraded.

    PubMed

    Won, Jong Ho; Shim, Hyun Joon; Lorenzi, Christian; Rubinstein, Jay T

    2014-06-01

    Won et al. (J Acoust Soc Am 132:1113-1119, 2012) reported that cochlear implant (CI) speech processors generate amplitude-modulation (AM) cues recovered from broadband speech frequency modulation (FM) and that CI users can use these cues for speech identification in quiet. The present study was designed to extend this finding for a wide range of listening conditions, where the original speech cues were severely degraded by manipulating either the acoustic signals or the speech processor. The manipulation of the acoustic signals included the presentation of background noise, simulation of reverberation, and amplitude compression. The manipulation of the speech processor included changing the input dynamic range and the number of channels. For each of these conditions, multiple levels of speech degradation were tested. Speech identification was measured for CI users and compared for stimuli having both AM and FM information (intact condition) or FM information only (FM condition). Each manipulation degraded speech identification performance for both intact and FM conditions. Performance for the intact and FM conditions became similar for stimuli having the most severe degradations. Identification performance generally overlapped for the intact and FM conditions. Moreover, identification performance for the FM condition was better than chance performance even at the maximum level of distortion. Finally, significant correlations were found between speech identification scores for the intact and FM conditions. Altogether, these results suggest that despite poor frequency selectivity, CI users can make efficient use of AM cues recovered from speech FM in difficult listening situations.
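
    The mechanism exploited here, frequency modulation being converted into amplitude modulation at the edges of an analysis band, can be sketched as follows: band-pass filter the signal and take the smoothed Hilbert envelope of the band output. The band edges, filter orders, and smoothing cutoff below are illustrative assumptions, not values used in the study.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def band_envelope(x, fs, lo, hi, env_cutoff=50.0):
          # band-pass analysis filter; FM sweeping across the band edges produces AM
          b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
          band = filtfilt(b, a, x)
          env = np.abs(hilbert(band))            # Hilbert envelope of the band signal
          b_lp, a_lp = butter(2, env_cutoff, btype="low", fs=fs)
          return filtfilt(b_lp, a_lp, env)       # smoothed AM cue for this channel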

  15. Central Presbycusis: A Review and Evaluation of the Evidence

    PubMed Central

    Humes, Larry E.; Dubno, Judy R.; Gordon-Salant, Sandra; Lister, Jennifer J.; Cacace, Anthony T.; Cruickshanks, Karen J.; Gates, George A.; Wilson, Richard H.; Wingfield, Arthur

    2018-01-01

    Background: The authors reviewed the evidence regarding the existence of age-related declines in central auditory processes and the consequences of any such declines for everyday communication.

    Purpose: This report summarizes the review process and presents its findings.

    Data Collection and Analysis: The authors reviewed 165 articles germane to central presbycusis. Of the 165 articles, 132 articles with a focus on human behavioral measures for either speech or nonspeech stimuli were selected for further analysis.

    Results: For 76 smaller-scale studies of speech understanding in older adults reviewed, the following findings emerged: (1) the three most commonly studied behavioral measures were speech in competition, temporally distorted speech, and binaural speech perception (especially dichotic listening); (2) for speech in competition and temporally degraded speech, hearing loss proved to have a significant negative effect on performance in most of the laboratory studies; (3) significant negative effects of age, unconfounded by hearing loss, were observed in most of the studies of speech in competing speech, time-compressed speech, and binaural speech perception; and (4) the influence of cognitive processing on speech understanding has been examined much less frequently, but when included, significant positive associations with speech understanding were observed. For 36 smaller-scale studies of the perception of nonspeech stimuli by older adults reviewed, the following findings emerged: (1) the three most frequently studied behavioral measures were gap detection, temporal discrimination, and temporal-order discrimination or identification; (2) hearing loss was seldom a significant factor; and (3) negative effects of age were almost always observed. For 18 studies reviewed that made use of test batteries and medium-to-large sample sizes, the following findings emerged: (1) all studies included speech-based measures of auditory processing; (2) 4 of the 18 studies included nonspeech stimuli; (3) for the speech-based measures, monaural speech in a competing-speech background, dichotic speech, and monaural time-compressed speech were investigated most frequently; (4) the most frequently used tests were the Synthetic Sentence Identification (SSI) test with Ipsilateral Competing Message (ICM), the Dichotic Sentence Identification (DSI) test, and time-compressed speech; (5) many of these studies using speech-based measures reported significant effects of age, but most of these studies were confounded by declines in hearing, cognition, or both; (6) for nonspeech auditory-processing measures, the focus was on measures of temporal processing in all four studies; (7) effects of cognition on nonspeech measures of auditory processing have been studied less frequently, with mixed results, whereas the effects of hearing loss on performance were minimal due to judicious selection of stimuli; and (8) there is a paucity of observational studies using test batteries and longitudinal designs.

    Conclusions: Based on this review of the scientific literature, there is insufficient evidence to confirm the existence of central presbycusis as an isolated entity. On the other hand, recent evidence has been accumulating in support of the existence of central presbycusis as a multifactorial condition that involves age- and/or disease-related changes in the auditory system and in the brain. Moreover, there is a clear need for additional research in this area. PMID:22967738

  16. Application of the wavelet transform for speech processing

    NASA Technical Reports Server (NTRS)

    Maes, Stephane

    1994-01-01

    Speaker identification and word spotting will shortly play a key role in space applications. An approach based on the wavelet transform is presented that, in the context of the 'modulation model,' enables extraction of speech features which are used as input for the classification process.
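
    A minimal sketch of one way to turn a wavelet decomposition into per-frame speech features is shown below; the choice of the PyWavelets package, the db4 mother wavelet, five decomposition levels, and log sub-band energies are assumptions for illustration rather than details given in the report.

      import numpy as np
      import pywt

      def wavelet_features(frame, wavelet="db4", level=5):
          # multi-level wavelet decomposition of one speech frame
          coeffs = pywt.wavedec(frame, wavelet, level=level)
          # log energy of each sub-band as a compact feature vector
          return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])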

  17. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOEpatents

    Holzrichter, J.F.; Ng, L.C.

    1998-03-17

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced speech, as well as of combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.
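
    The deconvolution step mentioned above can be illustrated with a textbook regularized spectral division, estimating a per-frame transfer function from an excitation estimate and the acoustic output; this is a generic sketch under those assumptions, not the patented procedure itself.

      import numpy as np

      def frame_transfer_function(excitation, output, eps=1e-8):
          # regularized spectral division: H(f) ~= Y(f) / X(f) for one analysis frame
          X = np.fft.rfft(excitation)
          Y = np.fft.rfft(output)
          return Y * np.conj(X) / (np.abs(X) ** 2 + eps)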

  18. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOEpatents

    Holzrichter, John F.; Ng, Lawrence C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced speech, as well as of combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.

  19. Brain Plasticity in Speech Training in Native English Speakers Learning Mandarin Tones

    NASA Astrophysics Data System (ADS)

    Heinzen, Christina Carolyn

    The current study employed behavioral and event-related potential (ERP) measures to investigate brain plasticity associated with second-language (L2) phonetic learning based on an adaptive computer training program. The program utilized the acoustic characteristics of Infant-Directed Speech (IDS) to train monolingual American English-speaking listeners to perceive Mandarin lexical tones. Behavioral identification and discrimination tasks were conducted using naturally recorded speech, carefully controlled synthetic speech, and non-speech control stimuli. The ERP experiments were conducted with selected synthetic speech stimuli in a passive listening oddball paradigm. Identical pre- and post-tests were administered to nine adult listeners, who completed two to three hours of perceptual training. The perceptual training sessions used pair-wise lexical tone identification and progressed through seven levels of difficulty for each tone pair. The levels of difficulty included progression in speaker variability from one to four speakers and progression through four levels of acoustic exaggeration of duration, pitch range, and pitch contour. Behavioral results for the natural speech stimuli revealed significant training-induced improvement in identification of Tones 1, 3, and 4. Improvements in identification of Tone 4 generalized to novel stimuli as well. Additionally, comparison between discrimination of across-category and within-category stimulus pairs taken from a synthetic continuum revealed a training-induced shift toward more native-like categorical perception of the Mandarin lexical tones. Analysis of the Mismatch Negativity (MMN) responses in the ERP data revealed increased amplitude and decreased latency for pre-attentive processing of across-category discrimination as a result of training. There were also laterality changes in the MMN responses to the non-speech control stimuli, which could reflect reallocation of brain resources in processing pitch patterns for the across-category lexical tone contrast. Overall, the results support the use of IDS characteristics in training non-native speech contrasts and provide impetus for further research.

  20. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    For a hands-free speech interface, it is very important to capture distant-talking speech with high quality. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately identified as either "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
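
    The CSP coefficients at the heart of the DOA step can be computed per microphone pair as sketched below: a whitened cross-power spectrum whose inverse transform peaks at the inter-microphone delay. The paper's coefficient addition over multiple pairs and the GMM speech/non-speech identification stage are not reproduced here, and the FFT length and microphone geometry are illustrative assumptions.

      import numpy as np

      def csp_tdoa(x1, x2, fs, n_fft=1024):
          # CSP (GCC-PHAT): whitened cross-power spectrum; its inverse FFT peaks at the delay
          X1 = np.fft.rfft(x1, n_fft)
          X2 = np.fft.rfft(x2, n_fft)
          cross = X1 * np.conj(X2)
          csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n_fft)
          csp = np.concatenate((csp[-n_fft // 2:], csp[:n_fft // 2]))  # centre zero lag
          lag = np.argmax(csp) - n_fft // 2
          return lag / fs  # time difference of arrival between the two microphones

      def doa_degrees(tdoa, mic_distance, speed_of_sound=343.0):
          # far-field approximation: angle from broadside of a two-microphone pair
          return np.degrees(np.arcsin(np.clip(speed_of_sound * tdoa / mic_distance, -1.0, 1.0)))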

  1. "Who" is saying "what"? Brain-based decoding of human voice and speech.

    PubMed

    Formisano, Elia; De Martino, Federico; Bonte, Milene; Goebel, Rainer

    2008-11-07

    Can we decipher speech content ("what" is being said) and speaker identity ("who" is saying it) from observations of brain activity of a listener? Here, we combine functional magnetic resonance imaging with a data-mining algorithm and retrieve what and whom a person is listening to from the neural fingerprints that speech and voice signals elicit in the listener's auditory cortex. These cortical fingerprints are spatially distributed and insensitive to acoustic variations of the input so as to permit the brain-based recognition of learned speech from unknown speakers and of learned voices from previously unheard utterances. Our findings unravel the detailed cortical layout and computational properties of the neural populations at the basis of human speech recognition and speaker identification.

  2. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzrichter, J.F.; Ng, L.C.

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced speech, as well as of combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.

  3. Analysis of human scream and its impact on text-independent speaker verification.

    PubMed

    Hansen, John H L; Nandwana, Mahesh Kumar; Shokouhi, Navid

    2017-04-01

    Scream is defined as sustained, high-energy vocalization that lacks phonological structure. Lack of phonological structure is how scream is distinguished from other forms of loud vocalization, such as "yell." This study investigates the acoustic aspects of screams and addresses those that are known to prevent standard speaker identification systems from recognizing the identity of screaming speakers. It is well established that speaker variability due to changes in vocal effort and the Lombard effect contributes to degraded performance in automatic speech systems (i.e., speech recognition, speaker identification, diarization, etc.). However, previous research in the general area of speaker variability has concentrated on human speech production, whereas less is known about non-speech vocalizations. The UT-NonSpeech corpus is developed here to investigate speaker verification from scream samples. This study considers a detailed analysis in terms of fundamental frequency, spectral peak shift, frame energy distribution, and spectral tilt. It is shown that traditional speaker recognition based on the Gaussian mixture model-universal background model (GMM-UBM) framework is unreliable when evaluated with screams.

  4. Effects of low harmonics on tone identification in natural and vocoded speech.

    PubMed

    Liu, Chang; Azimi, Behnam; Tahmina, Qudsia; Hu, Yi

    2012-11-01

    This study investigated the contribution of low-frequency harmonics to identifying Mandarin tones in natural and vocoded speech in quiet and noisy conditions. Results showed that low-frequency harmonics of natural speech led to highly accurate tone identification; however, for vocoded speech, low-frequency harmonics yielded lower tone identification than stimuli with full harmonics, except for tone 4. Analysis of the correlation between tone accuracy and the amplitude-F0 correlation index suggested that "more" speech contents (i.e., more harmonics) did not necessarily yield better tone recognition for vocoded speech, especially when the amplitude contour of the signals did not co-vary with the F0 contour.

  5. Effect of Dialect on Identification and Severity of Speech Impairment in Indigenous Australian Children

    ERIC Educational Resources Information Center

    Toohill, Bethany J.; Mcleod, Sharynne; Mccormack, Jane

    2012-01-01

    This study investigated the effect of dialectal difference on identification and rating of severity of speech impairment in children from Indigenous Australian backgrounds. The speech of 15 Indigenous Australian children identified by their parents/caregivers and teachers as having "difficulty talking and making speech sounds" was…

  6. Towards the identification of Idiopathic Parkinson’s Disease from the speech. New articulatory kinetic biomarkers

    PubMed Central

    Shattuck-Hufnagel, S.; Choi, J. Y.; Moro-Velázquez, L.; Gómez-García, J. A.

    2017-01-01

    Although a large number of acoustic indicators have already been proposed in the literature to evaluate the hypokinetic dysarthria of people with Parkinson’s Disease, the goal of this work is to identify and interpret new reliable and complementary articulatory biomarkers that could be applied to predict/evaluate Parkinson’s Disease from a diadochokinetic test, contributing to the possibility of a further multidimensional analysis of the speech of parkinsonian patients. The new biomarkers proposed are based on the kinetic behaviour of the envelope trace, which is directly linked to the articulatory dysfunction introduced by the disease from its early stages. The interest of these new articulatory indicators lies in their ease of identification and interpretation, and in their potential to be translated into computer-based automatic methods to screen the disease from the speech. Throughout this paper, the accuracy provided by these acoustic kinetic biomarkers is compared with the one obtained with a baseline system based on speaker identification techniques. Results show accuracies around 85% that are in line with those obtained with complex state-of-the-art speaker recognition techniques, but with an easier physical interpretation, which opens the possibility of transfer to a clinical setting. PMID:29240814

  7. Speech Perception Engages a General Timer: Evidence from a Divided Attention Word Identification Task

    ERIC Educational Resources Information Center

    Casini, Laurence; Burle, Boris; Nguyen, Noel

    2009-01-01

    Time is essential to speech. The duration of speech segments plays a critical role in the perceptual identification of these segments, and therefore in that of spoken words. Here, using a French word identification task, we show that vowels are perceived as shorter when attention is divided between two tasks, as compared to a single task control…

  8. Gender differences in identifying emotions from auditory and visual stimuli.

    PubMed

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to gain better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or by samples in the speakers' and participants' shared native language. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. Emotions were recognized better from visual stimuli than from auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  9. Speaker Recognition by Combining MFCC and Phase Information in Noisy Conditions

    NASA Astrophysics Data System (ADS)

    Wang, Longbiao; Minami, Kazue; Yamamoto, Kazumasa; Nakagawa, Seiichi

    In this paper, we investigate the effectiveness of phase information for speaker recognition in noisy conditions and combine the phase information with mel-frequency cepstral coefficients (MFCCs). To date, almost all speaker recognition methods are based on MFCCs, even in noisy conditions. MFCCs, which dominantly capture vocal tract information, use only the magnitude of the Fourier transform of time-domain speech frames, and phase information has been ignored. The phase information is expected to complement MFCCs well because it includes rich voice source information. Furthermore, some studies have reported that phase-based features are robust to noise. In our previous study, a phase information extraction method that normalizes the change variation in the phase depending on the clipping position of the input speech was proposed, and the performance of the combination of the phase information and MFCCs was remarkably better than that of MFCCs alone. In this paper, we evaluate the robustness of the proposed phase information for speaker identification in noisy conditions. Spectral subtraction, a method that skips frames with low energy/signal-to-noise (SN) ratio, and noisy speech training models are used to analyze the effect of the phase information and MFCCs in noisy conditions. The NTT database and the JNAS (Japanese Newspaper Article Sentences) database with added stationary/non-stationary noise were used to evaluate our proposed method. MFCCs outperformed the phase information for clean speech. On the other hand, the degradation of the phase information was significantly smaller than that of MFCCs for noisy speech. The individual result of the phase information was even better than that of MFCCs in many cases with clean speech training models. By deleting unreliable frames (frames having low energy/SN), the speaker identification performance was improved significantly. By integrating the phase information with MFCCs, the speaker identification error reduction rate was about 30%-60% compared with the standard MFCC-based method.
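
    A rough sketch of joint MFCC and phase feature extraction is given below. The librosa front-end, the number of retained phase bins, and the reference-bin normalization are assumptions for illustration; the paper's own normalization of the phase with respect to the frame clipping position, and its GMM-based score combination, are not reproduced here.

      import numpy as np
      import librosa

      def mfcc_and_phase(y, sr, n_fft=512, hop=160, n_mfcc=13, n_phase_bins=12, ref_bin=1):
          mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                      n_fft=n_fft, hop_length=hop)
          spec = librosa.stft(y, n_fft=n_fft, hop_length=hop)
          phase = np.angle(spec[:n_phase_bins, :])
          # crude normalization: express each bin's phase relative to a reference bin so that
          # the value no longer depends on where the analysis frame was clipped from the signal
          norm = np.angle(np.exp(1j * (phase - phase[ref_bin, :])))
          # cos/sin encoding avoids the 2*pi wrap-around of raw phase angles
          return np.vstack([mfcc, np.cos(norm), np.sin(norm)])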

  10. Musician effect on perception of spectro-temporally degraded speech, vocal emotion, and music in young adolescents.

    PubMed

    Başkent, Deniz; Fuller, Christina D; Galvin, John J; Schepel, Like; Gaudrain, Etienne; Free, Rolien H

    2018-05-01

    In adult normal-hearing musicians, perception of music, vocal emotion, and speech in noise has previously been shown to be better than in non-musicians, sometimes even with spectro-temporally degraded stimuli. In this study, melodic contour identification, vocal emotion identification, and speech understanding in noise were measured in young adolescent normal-hearing musicians and non-musicians listening to unprocessed or degraded signals. Different from adults, there was no musician effect for vocal emotion identification or speech in noise. Melodic contour identification with degraded signals was significantly better in musicians, suggesting potential benefits from music training for young cochlear-implant users, who experience similar spectro-temporal signal degradations.

  11. Characteristics of Fluency and Speech in Two Families With High Incidences of Stuttering.

    PubMed

    Stager, Sheila V; Freeman, Frances J; Braun, Allen

    2015-10-01

    This study presents data from 2 families with high incidence of stuttering, comparing methods of phenotype assignment and exploring the presence of other fluency disorders and corresponding speech characteristics. Three methods for assigning phenotype of stuttering were used: self-identification, family identification, and expert identification. Agreement on which individuals were assigned by each of these methods was studied. Multiple measures of fluency and speech production were obtained. Self-reports and descriptions of blocking rather than self-identification as a person who stutters demonstrated the best agreement with expert identification of stuttering. Family identification showed poor agreement with both expert and self-identification of stuttering. Using binary categories of fluent or stuttering, 90% of individuals in 1 family were classified by expert consensus. Only 70% of the other family could be similarly categorized. Experts required 2 other categories, cluttering and other fluency disorders, to fully characterize dysfluency within this family. These 2 families also demonstrated differences in speech production. Some families with high incidence of stuttering may also have high incidence of other fluency disorders and other speech-production difficulties. This finding may have ramifications for genetic studies, including criteria for defining phenotype and collapsing data across multiple families.

  12. Effect of Dialect on the Identification of Speech Impairment in Indigenous Children

    ERIC Educational Resources Information Center

    Laffey, Kate; Pearce, Wendy M.; Steed, William

    2014-01-01

    The influence of dialect on child speech assessment processes is important to consider in order to ensure accurate diagnosis and appropriate intervention (teaching or therapy) for bidialectal children. In Australia, there is limited research evidence documenting the influence of dialectal variations on identification of speech impairment among…

  13. Perception of Melodic Contour and Intonation in Autism Spectrum Disorder: Evidence From Mandarin Speakers.

    PubMed

    Jiang, Jun; Liu, Fang; Wan, Xuan; Jiang, Cunmei

    2015-07-01

    Tone language experience benefits pitch processing in music and speech for typically developing individuals. No known studies have examined pitch processing in individuals with autism who speak a tone language. This study investigated discrimination and identification of melodic contour and speech intonation in a group of Mandarin-speaking individuals with high-functioning autism. Individuals with autism showed superior melodic contour identification but comparable contour discrimination relative to controls. In contrast, these individuals performed worse than controls on both discrimination and identification of speech intonation. These findings provide the first evidence for differential pitch processing in music and speech in tone language speakers with autism, suggesting that tone language experience may not compensate for speech intonation perception deficits in individuals with autism.

  14. Deep bottleneck features for spoken language identification.

    PubMed

    Jiang, Bing; Song, Yan; Wei, Si; Liu, Jun-Hua; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short-duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, the specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances, respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed.
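
    The DBF idea, a DNN trained on frame-level phonetic targets whose narrow hidden layer supplies the features fed to the i-vector extractor, can be sketched as below. PyTorch, the layer sizes, the sigmoid activations, and the number of targets are illustrative assumptions rather than the configuration reported in the paper.

      import torch
      import torch.nn as nn

      class BottleneckNet(nn.Module):
          # stacked-frame acoustic input -> narrow bottleneck -> frame-level phonetic targets
          def __init__(self, in_dim=440, hidden=1024, bottleneck=40, n_targets=3000):
              super().__init__()
              self.front = nn.Sequential(
                  nn.Linear(in_dim, hidden), nn.Sigmoid(),
                  nn.Linear(hidden, hidden), nn.Sigmoid(),
                  nn.Linear(hidden, bottleneck))      # low-dimensional bottleneck layer
              self.back = nn.Sequential(
                  nn.Sigmoid(),
                  nn.Linear(bottleneck, hidden), nn.Sigmoid(),
                  nn.Linear(hidden, n_targets))       # trained with a cross-entropy loss

          def forward(self, x):
              return self.back(self.front(x))

          def extract_dbf(self, x):
              # deep bottleneck features used as acoustic input to the i-vector system
              with torch.no_grad():
                  return self.front(x)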

  15. Multilevel Analysis in Analyzing Speech Data

    ERIC Educational Resources Information Center

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  16. Characteristics of Fluency and Speech in Two Families With High Incidences of Stuttering

    PubMed Central

    Freeman, Frances J.; Braun, Allen

    2015-01-01

    Purpose: This study presents data from 2 families with high incidence of stuttering, comparing methods of phenotype assignment and exploring the presence of other fluency disorders and corresponding speech characteristics.

    Method: Three methods for assigning phenotype of stuttering were used: self-identification, family identification, and expert identification. Agreement on which individuals were assigned by each of these methods was studied. Multiple measures of fluency and speech production were obtained.

    Results: Self-reports and descriptions of blocking rather than self-identification as a person who stutters demonstrated the best agreement with expert identification of stuttering. Family identification showed poor agreement with both expert and self-identification of stuttering. Using binary categories of fluent or stuttering, 90% of individuals in 1 family were classified by expert consensus. Only 70% of the other family could be similarly categorized. Experts required 2 other categories, cluttering and other fluency disorders, to fully characterize dysfluency within this family. These 2 families also demonstrated differences in speech production.

    Conclusion: Some families with high incidence of stuttering may also have high incidence of other fluency disorders and other speech-production difficulties. This finding may have ramifications for genetic studies, including criteria for defining phenotype and collapsing data across multiple families. PMID:26126023

  17. Gated audiovisual speech identification in silence vs. noise: effects on time and accuracy

    PubMed Central

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2013-01-01

    This study investigated the degree to which audiovisual presentation (compared to auditory-only presentation) affected isolation points (IPs, the amount of time required for the correct identification of speech stimuli using a gating paradigm) in silence and noise conditions. The study expanded on the findings of Moradi et al. (under revision), using the same stimuli, but presented in an audiovisual instead of an auditory-only manner. The results showed that noise impeded the identification of consonants and words (i.e., delayed IPs and lowered accuracy), but not the identification of final words in sentences. In comparison with the previous study by Moradi et al., it can be concluded that the provision of visual cues expedited IPs and increased the accuracy of speech stimulus identification in both silence and noise. The implications of the results are discussed in terms of models for speech understanding. PMID:23801980

  18. [The Freiburg speech intelligibility test: A pillar of speech audiometry in German-speaking countries].

    PubMed

    Hoth, S

    2016-08-01

    The Freiburg speech intelligibility test according to DIN 45621 was introduced around 60 years ago. For decades, and still today, the Freiburg test has been a standard whose relevance extends far beyond pure audiometry. It is used primarily to determine the speech perception threshold (based on two-digit numbers) and the ability to discriminate speech at suprathreshold presentation levels (based on monosyllabic nouns). Moreover, it is a measure of the degree of disability, the requirement for and success of technical hearing aids (auxiliaries directives), and the compensation for disability and handicap (Königstein recommendation). In differential audiological diagnostics, the Freiburg test contributes to the distinction between low- and high-frequency hearing loss, as well as to identification of conductive, sensory, neural, and central disorders. Currently, the phonemic and perceptual balance of the monosyllabic test lists is subject to critical discussions. Obvious deficiencies exist for testing speech recognition in noise. In this respect, alternatives such as sentence or rhyme tests in closed-answer inventories are discussed.

  19. Age-group differences in speech identification despite matched audiometrically normal hearing: contributions from auditory temporal processing and cognition

    PubMed Central

    Füllgrabe, Christian; Moore, Brian C. J.; Stone, Michael A.

    2015-01-01

    Hearing loss with increasing age adversely affects the ability to understand speech, an effect that results partly from reduced audibility. The aims of this study were to establish whether aging reduces speech intelligibility for listeners with normal audiograms, and, if so, to assess the relative contributions of auditory temporal and cognitive processing. Twenty-one older normal-hearing (ONH; 60–79 years) participants with bilateral audiometric thresholds ≤ 20 dB HL at 0.125–6 kHz were matched to nine young (YNH; 18–27 years) participants in terms of mean audiograms, years of education, and performance IQ. Measures included: (1) identification of consonants in quiet and in noise that was unmodulated or modulated at 5 or 80 Hz; (2) identification of sentences in quiet and in co-located or spatially separated two-talker babble; (3) detection of modulation of the temporal envelope (TE) at frequencies 5–180 Hz; (4) monaural and binaural sensitivity to temporal fine structure (TFS); (5) various cognitive tests. Speech identification was worse for ONH than YNH participants in all types of background. This deficit was not reflected in self-ratings of hearing ability. Modulation masking release (the improvement in speech identification obtained by amplitude modulating a noise background) and spatial masking release (the benefit obtained from spatially separating masker and target speech) were not affected by age. Sensitivity to TE and TFS was lower for ONH than YNH participants, and was correlated positively with speech-in-noise (SiN) identification. Many cognitive abilities were lower for ONH than YNH participants, and generally were correlated positively with SiN identification scores. The best predictors of the intelligibility of SiN were composite measures of cognition and TFS sensitivity. These results suggest that declines in speech perception in older persons are partly caused by cognitive and perceptual changes separate from age-related changes in audiometric sensitivity. PMID:25628563

  20. Working Memory and Speech Comprehension in Older Adults With Hearing Impairment.

    PubMed

    Nagaraj, Naveen K

    2017-10-17

    This study examined the relationship between working memory (WM) and speech comprehension in older adults with hearing impairment (HI). It was hypothesized that WM would explain significant variance in speech comprehension measured in multitalker babble (MTB). Twenty-four older (59-73 years) adults with sensorineural HI participated. WM capacity (WMC) was measured using 3 complex span tasks. Speech comprehension was assessed using multiple passages, and speech identification ability was measured using recall of sentence-final words and key words. Speech measures were performed in quiet and in the presence of MTB at +5 dB signal-to-noise ratio. Results suggested that participants' speech identification was poorer in MTB, but their ability to comprehend discourse in MTB was at least as good as in quiet. WMC did not explain significant variance in speech comprehension, either before or after controlling for age and audibility. However, WMC explained significant variance in identification of key words in low-context sentences in MTB. These results suggest that WMC plays an important role in identifying low-context sentences in MTB, but not when comprehending semantically rich discourse passages. In general, the data did not support individual variability in WMC as a factor that predicts speech comprehension ability in older adults with HI.

  1. Identification of a pathway for intelligible speech in the left temporal lobe

    PubMed Central

    Scott, Sophie K.; Blank, C. Catrin; Rosen, Stuart; Wise, Richard J. S.

    2017-01-01

    Summary It has been proposed that the identification of sounds, including species-specific vocalizations, by primates depends on anterior projections from the primary auditory cortex, an auditory pathway analogous to the ventral route proposed for the visual identification of objects. We have identified a similar route in the human for understanding intelligible speech. Using PET imaging to identify separable neural subsystems within the human auditory cortex, we used a variety of speech and speech-like stimuli with equivalent acoustic complexity but varying intelligibility. We have demonstrated that the left superior temporal sulcus responds to the presence of phonetic information, but its anterior part only responds if the stimulus is also intelligible. This novel observation demonstrates a left anterior temporal pathway for speech comprehension. PMID:11099443

  2. Multistage audiovisual integration of speech: dissociating identification and detection.

    PubMed

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  3. [The application of cybernetic modeling methods for the forensic medical personality identification based on the voice and sounding speech characteristics].

    PubMed

    Kaganov, A Sh; Kir'yanov, P A

    2015-01-01

    The objective of the present publication was to discuss the possibility of applying cybernetic modeling methods to overcome the apparent discrepancy between two kinds of speech records, viz. the initial ones (e.g., those obtained in the course of special investigation activities) and the voice prints obtained from the persons subjected to criminalistic examination. The paper is based on literature sources and on the materials of original criminalistic expert examinations performed by the authors.

  4. [The contribution of different cochlear insertion region to Mandarin speech perception in users of cochlear implant].

    PubMed

    Qi, Beier; Liu, Bo; Liu, Sha; Liu, Haihong; Dong, Ruijuan; Zhang, Ning; Gong, Shusheng

    2011-05-01

    To study the effect of cochlear electrode coverage and of different insertion regions on speech recognition, especially tone perception, in cochlear implant users whose native language is Mandarin Chinese. Seven test conditions were set using the fitting software; all conditions were created by switching the respective channels on or off in order to simulate different insertion positions. Mandarin CI users then completed four speech tests: a vowel identification test, a consonant identification test, a tone identification test (male speaker), and the Mandarin HINT test (SRS) in quiet and in noise. Across the test conditions, the average vowel identification score differed significantly, ranging from 56% to 91% (rank-sum test, P < 0.05), and the average consonant identification score differed significantly, ranging from 72% to 85% (ANOVA, P < 0.05). Average tone identification scores did not differ significantly (ANOVA, P > 0.05); however, the more channels that were activated, the higher the scores, rising from 68% to 81%. This study shows that there is a correlation between insertion depth and speech recognition. Because all parts of the basilar membrane can help CI users to improve their speech recognition ability, it is important to enhance the verbal communication and social interaction abilities of CI users by increasing insertion depth and actively stimulating the apical region of the cochlea.

  5. The ability of cochlear implant users to use temporal envelope cues recovered from speech frequency modulation.

    PubMed

    Won, Jong Ho; Lorenzi, Christian; Nie, Kaibao; Li, Xing; Jameyson, Elyse M; Drennan, Ward R; Rubinstein, Jay T

    2012-08-01

    Previous studies have demonstrated that normal-hearing listeners can understand speech using the recovered "temporal envelopes," i.e., amplitude modulation (AM) cues from frequency modulation (FM). This study evaluated this mechanism in cochlear implant (CI) users for consonant identification. Stimuli containing only FM cues were created using 1, 2, 4, and 8-band FM-vocoders to determine if consonant identification performance would improve as the recovered AM cues become more available. A consistent improvement was observed as the band number decreased from 8 to 1, supporting the hypothesis that (1) the CI sound processor generates recovered AM cues from broadband FM, and (2) CI users can use the recovered AM cues to recognize speech. The correlation between the intact and the recovered AM components at the output of the sound processor was also generally higher when the band number was low, supporting the consonant identification results. Moreover, CI subjects who were better at using recovered AM cues from broadband FM cues showed better identification performance with intact (unprocessed) speech stimuli. This suggests that speech perception performance variability in CI users may be partly caused by differences in their ability to use AM cues recovered from FM speech cues.
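
    The mechanism described here, recovery of amplitude-modulation (AM) cues from frequency modulation (FM), can be illustrated with a small signal-processing sketch. The following Python snippet (my own illustration, not the study's code; all parameter values are arbitrary) passes a constant-amplitude FM tone through a narrow band-pass filter and shows that the filter output acquires an amplitude envelope, i.e. FM-to-AM conversion of the kind a single CI analysis band would produce.

    ```python
    # Sketch of FM-to-AM conversion (illustrative parameters, not the study's stimuli).
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    fs = 16000                      # sampling rate (Hz)
    t = np.arange(0, 1.0, 1 / fs)
    fc, fm, dev = 1000, 5, 400      # carrier, modulator, frequency deviation (Hz)

    # FM-only stimulus: constant amplitude, time-varying instantaneous frequency
    phase = 2 * np.pi * fc * t + (dev / fm) * np.sin(2 * np.pi * fm * t)
    fm_signal = np.cos(phase)

    # Narrow band-pass filter centred on the carrier (stand-in for one analysis band)
    sos = butter(4, [fc - 100, fc + 100], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, fm_signal)

    # Hilbert envelopes before and after filtering
    env_in = np.abs(hilbert(fm_signal))
    env_out = np.abs(hilbert(filtered))

    # The filtered signal now carries amplitude modulation at the FM rate
    print("input envelope depth :", round(float(env_in.max() - env_in.min()), 3))
    print("output envelope depth:", round(float(env_out.max() - env_out.min()), 3))
    ```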

  6. Signal Prediction With Input Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin

    1999-01-01

    A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
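
    The recursive solution of the linear predictor that the report describes can be sketched with a standard recursive least-squares (RLS) update. The code below is a minimal illustration under assumed settings (a synthetic AR(2) signal and a forgetting factor of 0.99), not the report's implementation, and it omits the excitation codebook and the variable-frame blocking.

    ```python
    # Minimal RLS linear-prediction sketch: recover AR coefficients from a signal.
    import numpy as np

    rng = np.random.default_rng(0)
    true_a = np.array([1.6, -0.8])          # AR(2) coefficients to be recovered
    n, p, lam = 2000, 2, 0.99               # samples, model order, forgetting factor

    # Synthesize an AR(2) signal driven by white-noise excitation
    x = np.zeros(n)
    for k in range(2, n):
        x[k] = true_a @ x[k - 2:k][::-1] + 0.1 * rng.standard_normal()

    # RLS recursion
    w = np.zeros(p)                         # predictor coefficients
    P = np.eye(p) * 1000.0                  # inverse correlation matrix
    for k in range(p, n):
        u = x[k - p:k][::-1]                # past samples (most recent first)
        e = x[k] - w @ u                    # prediction error (excitation estimate)
        g = P @ u / (lam + u @ P @ u)       # gain vector
        w = w + g * e                       # coefficient update
        P = (P - np.outer(g, u @ P)) / lam  # covariance update

    print("estimated predictor coefficients:", np.round(w, 3))  # ~ [1.6, -0.8]
    ```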

  7. Call recognition and individual identification of fish vocalizations based on automatic speech recognition: An example with the Lusitanian toadfish.

    PubMed

    Vieira, Manuel; Fonseca, Paulo J; Amorim, M Clara P; Teixeira, Carlos J C

    2015-12-01

    The study of acoustic communication in animals often requires not only the recognition of species specific acoustic signals but also the identification of individual subjects, all in a complex acoustic background. Moreover, when very long recordings are to be analyzed, automatic recognition and identification processes are invaluable tools to extract the relevant biological information. A pattern recognition methodology based on hidden Markov models is presented inspired by successful results obtained in the most widely known and complex acoustical communication signal: human speech. This methodology was applied here for the first time to the detection and recognition of fish acoustic signals, specifically in a stream of round-the-clock recordings of Lusitanian toadfish (Halobatrachus didactylus) in their natural estuarine habitat. The results show that this methodology is able not only to detect the mating sounds (boatwhistles) but also to identify individual male toadfish, reaching an identification rate of ca. 95%. Moreover this method also proved to be a powerful tool to assess signal durations in large data sets. However, the system failed in recognizing other sound types.
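
    The classification scheme, one hidden Markov model per sound class scored by log-likelihood, can be sketched as follows. This is a generic illustration using the third-party hmmlearn package and random stand-in features, not the authors' toolchain; the class names and frame counts are invented.

    ```python
    # Per-class Gaussian HMMs with maximum-likelihood classification (illustrative only).
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(1)

    def fake_frames(mean, n_frames=200, n_feats=12):
        """Placeholder feature matrix standing in for MFCC frames of one recording."""
        return mean + rng.standard_normal((n_frames, n_feats))

    # Train one 3-state HMM per class (e.g., per individual fish or call type)
    train = {"fish_A": fake_frames(0.0), "fish_B": fake_frames(2.0)}
    models = {}
    for label, feats in train.items():
        m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
        m.fit(feats)
        models[label] = m

    # Classify a new recording by the highest log-likelihood over the class models
    test = fake_frames(2.0, n_frames=150)
    scores = {label: m.score(test) for label, m in models.items()}
    print("predicted:", max(scores, key=scores.get))
    ```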

  8. Sound frequency affects speech emotion perception: results from congenital amusia

    PubMed Central

    Lolli, Sydney L.; Lewenstein, Ari D.; Basurto, Julian; Winnik, Sean; Loui, Psyche

    2015-01-01

    Congenital amusics, or “tone-deaf” individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech. PMID:26441718

  9. An algorithm that improves speech intelligibility in noise for normal-hearing listeners.

    PubMed

    Kim, Gibak; Lu, Yang; Hu, Yi; Loizou, Philipos C

    2009-09-01

    Traditional noise-suppression algorithms have been shown to improve speech quality, but not speech intelligibility. Motivated by prior intelligibility studies of speech synthesized using the ideal binary mask, an algorithm is proposed that decomposes the input signal into time-frequency (T-F) units and makes binary decisions, based on a Bayesian classifier, as to whether each T-F unit is dominated by the target or the masker. Speech corrupted at low signal-to-noise ratio (SNR) levels (-5 and 0 dB) using different types of maskers is synthesized by this algorithm and presented to normal-hearing listeners for identification. Results indicated substantial improvements in intelligibility (over 60% points in -5 dB babble) over that attained by human listeners with unprocessed stimuli. The findings from this study suggest that algorithms that can estimate reliably the SNR in each T-F unit can improve speech intelligibility.
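
    The time-frequency decomposition and binary masking that the algorithm builds on can be illustrated with the ideal binary mask, where target- and masker-dominated units are known rather than estimated. The sketch below is my simplification (the paper's system replaces the oracle decision with a Bayesian classifier): it keeps only T-F units whose local SNR exceeds 0 dB and re-synthesizes the result.

    ```python
    # Ideal-binary-mask sketch: oracle SNR decisions per T-F unit (illustrative signals).
    import numpy as np
    from scipy.signal import stft, istft

    fs = 16000
    rng = np.random.default_rng(2)
    t = np.arange(0, 1.0, 1 / fs)
    target = np.sin(2 * np.pi * 300 * t)            # stand-in for the speech target
    masker = rng.standard_normal(t.size)            # stand-in for the noise masker
    mixture = target + masker

    # Decompose into time-frequency (T-F) units
    _, _, T = stft(target, fs=fs, nperseg=512)
    _, _, M = stft(masker, fs=fs, nperseg=512)
    _, _, X = stft(mixture, fs=fs, nperseg=512)

    # Binary decision per T-F unit: keep units where the local SNR exceeds 0 dB,
    # i.e. units dominated by the target rather than the masker
    local_snr_db = 20 * np.log10(np.abs(T) / (np.abs(M) + 1e-12))
    mask = (local_snr_db > 0).astype(float)

    # Re-synthesize the masked mixture
    _, enhanced = istft(X * mask, fs=fs, nperseg=512)
    print("kept", int(mask.sum()), "of", mask.size, "T-F units;",
          "output RMS:", round(float(np.sqrt(np.mean(enhanced ** 2))), 3))
    ```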

  10. Speech Anxiety: The Importance of Identification in the Basic Speech Course.

    ERIC Educational Resources Information Center

    Mandeville, Mary Y.

    A study investigated speech anxiety in the basic speech course by means of pre and post essays. Subjects, 73 students in 3 classes in the basic speech course at a southwestern multiuniversity, wrote a two-page essay on their perceptions of their speech anxiety before the first speaking project. Students discussed speech anxiety in class and were…

  11. Speech recognition: Acoustic-phonetic knowledge acquisition and representation

    NASA Astrophysics Data System (ADS)

    Zue, Victor W.

    1988-09-01

    The long-term research goal is to develop and implement speaker-independent continuous speech recognition systems. It is believed that the proper utilization of speech-specific knowledge is essential for such advanced systems. This research is thus directed toward the acquisition, quantification, and representation, of acoustic-phonetic and lexical knowledge, and the application of this knowledge to speech recognition algorithms. In addition, we are exploring new speech recognition alternatives based on artificial intelligence and connectionist techniques. We developed a statistical model for predicting the acoustic realization of stop consonants in various positions in the syllable template. A unification-based grammatical formalism was developed for incorporating this model into the lexical access algorithm. We provided an information-theoretic justification for the hierarchical structure of the syllable template. We analyzed segmented duration for vowels and fricatives in continuous speech. Based on contextual information, we developed durational models for vowels and fricatives that account for over 70 percent of the variance, using data from multiple, unknown speakers. We rigorously evaluated the ability of human spectrogram readers to identify stop consonants spoken by many talkers and in a variety of phonetic contexts. Incorporating the declarative knowledge used by the readers, we developed a knowledge-based system for stop identification. We achieved comparable system performance to that to the readers.

  12. The ability of cochlear implant users to use temporal envelope cues recovered from speech frequency modulationa

    PubMed Central

    Won, Jong Ho; Lorenzi, Christian; Nie, Kaibao; Li, Xing; Jameyson, Elyse M.; Drennan, Ward R.; Rubinstein, Jay T.

    2012-01-01

    Previous studies have demonstrated that normal-hearing listeners can understand speech using the recovered “temporal envelopes,” i.e., amplitude modulation (AM) cues from frequency modulation (FM). This study evaluated this mechanism in cochlear implant (CI) users for consonant identification. Stimuli containing only FM cues were created using 1, 2, 4, and 8-band FM-vocoders to determine if consonant identification performance would improve as the recovered AM cues become more available. A consistent improvement was observed as the band number decreased from 8 to 1, supporting the hypothesis that (1) the CI sound processor generates recovered AM cues from broadband FM, and (2) CI users can use the recovered AM cues to recognize speech. The correlation between the intact and the recovered AM components at the output of the sound processor was also generally higher when the band number was low, supporting the consonant identification results. Moreover, CI subjects who were better at using recovered AM cues from broadband FM cues showed better identification performance with intact (unprocessed) speech stimuli. This suggests that speech perception performance variability in CI users may be partly caused by differences in their ability to use AM cues recovered from FM speech cues. PMID:22894230

  13. Investigating speech perception in children with dyslexia: is there evidence of a consistent deficit in individuals?

    PubMed Central

    Messaoud-Galusi, Souhila; Hazan, Valerie; Rosen, Stuart

    2012-01-01

    Purpose The claim that speech perception abilities are impaired in dyslexia was investigated in a group of 62 dyslexic children and 51 average readers matched in age. Method To test whether there was robust evidence of speech perception deficits in children with dyslexia, speech perception in noise and quiet was measured using eight different tasks involving the identification and discrimination of a complex and highly natural synthetic ‘pea’-‘bee’ contrast (copy synthesised from natural models) and the perception of naturally-produced words. Results Children with dyslexia, on average, performed more poorly than average readers in the synthetic syllables identification task in quiet and in across-category discrimination (but not when tested using an adaptive procedure). They did not differ from average readers on two tasks of word recognition in noise or identification of synthetic syllables in noise. For all tasks, a majority of individual children with dyslexia performed within norms. Finally, speech perception generally did not correlate with pseudo-word reading or phonological processing, the core skills related to dyslexia. Conclusions On the tasks and speech stimuli we used, most children with dyslexia do not appear to show a consistent deficit in speech perception. PMID:21930615

  14. Spatial release from masking based on binaural processing for up to six maskers

    PubMed Central

    Yost, William A.

    2017-01-01

    Spatial Release from Masking (SRM) was measured for identification of a female target word spoken in the presence of male masker words. Target words from a single loudspeaker located at midline were presented when two, four, or six masker words were presented either from the same source as the target or from spatially separated masker sources. All masker words were presented from loudspeakers located symmetrically around the centered target source in the front azimuth hemifield. Three masking conditions were employed: speech-in-speech masking (involving both informational and energetic masking), speech-in-noise masking (involving energetic masking), and filtered speech-in-filtered speech masking (involving informational masking). Psychophysical results were summarized as three-point psychometric functions relating proportion of correct word identification to target-to-masker ratio (in decibels) for both the co-located and spatially separated target and masker sources cases. SRM was then calculated by comparing the slopes and intercepts of these functions. SRM decreased as the number of symmetrically placed masker sources increased from two to six. This decrease was independent of the type of masking, with almost no SRM measured for six masker sources. These results suggest that when SRM is dependent primarily on binaural processing, SRM is effectively limited to fewer than six sound sources. PMID:28372135
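
    One common way to turn such psychometric functions into an SRM estimate is to fit a function to proportion correct versus target-to-masker ratio (TMR) in each condition and take the difference between the threshold TMRs. The sketch below does this with a logistic fit on invented three-point data; the study itself compared slopes and intercepts, so this is an illustrative simplification rather than its analysis code.

    ```python
    # Fit logistic psychometric functions and compute an SRM estimate (made-up data).
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(tmr, midpoint, slope):
        return 1.0 / (1.0 + np.exp(-slope * (tmr - midpoint)))

    tmr = np.array([-12.0, -6.0, 0.0])                   # three-point functions (dB)
    p_colocated = np.array([0.15, 0.45, 0.85])           # hypothetical proportions correct
    p_separated = np.array([0.55, 0.85, 0.97])

    (mid_co, _), _ = curve_fit(logistic, tmr, p_colocated, p0=[-5.0, 0.5])
    (mid_sep, _), _ = curve_fit(logistic, tmr, p_separated, p0=[-10.0, 0.5])

    srm_db = mid_co - mid_sep                            # positive value = benefit of separation
    print(f"50%-correct thresholds: co-located {mid_co:.1f} dB, separated {mid_sep:.1f} dB")
    print(f"spatial release from masking approx. {srm_db:.1f} dB")
    ```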

  15. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    PubMed

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. © The Author(s) 2016.

  16. Investigating the Role of Working Memory in Speech-in-noise Identification for Listeners with Normal Hearing.

    PubMed

    Füllgrabe, Christian; Rosen, Stuart

    2016-01-01

    With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in understanding speech in noise (SiN). The psychological construct that has received most interest is working memory (WM), representing the ability to simultaneously store and process information. Common lore and theoretical models assume that WM-based processes subtend speech processing in adverse perceptual conditions, such as those associated with hearing loss or background noise. Empirical evidence confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. To assess whether WMC also plays a role when listeners without hearing loss process speech in acoustically adverse conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification. The survey revealed little or no evidence for an association between WMC and SiN performance. We also analysed new data from 132 normal-hearing participants sampled from across the adult lifespan (18-91 years), for a relationship between Reading-Span scores and identification of matrix sentences in noise. Performance on both tasks declined with age, and correlated weakly even after controlling for the effects of age and audibility (r = 0.39, p ≤ 0.001, one-tailed). However, separate analyses for different age groups revealed that the correlation was only significant for middle-aged and older groups but not for the young (< 40 years) participants.
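
    The "after controlling for the effects of age and audibility" step is a partial correlation, which can be computed by correlating the residuals of each variable after regressing out the covariates. The snippet below illustrates that logic on simulated data; the variable names and effect sizes are invented and do not reproduce the survey's numbers.

    ```python
    # Partial correlation via regression residuals (simulated data, illustrative only).
    import numpy as np

    rng = np.random.default_rng(3)
    n = 132
    age = rng.uniform(18, 91, n)
    audibility = rng.normal(0, 1, n)
    reading_span = 50 - 0.2 * age + rng.normal(0, 5, n)
    sin_score = 70 - 0.3 * age + 0.4 * reading_span + rng.normal(0, 5, n)

    def residualize(y, covariates):
        """Residuals of y after ordinary-least-squares regression on the covariates."""
        X = np.column_stack([np.ones(len(y))] + covariates)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta

    r_ws = residualize(reading_span, [age, audibility])
    r_sin = residualize(sin_score, [age, audibility])
    partial_r = np.corrcoef(r_ws, r_sin)[0, 1]
    print(f"partial correlation (WMC, SiN | age, audibility) = {partial_r:.2f}")
    ```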

  17. Double Fourier analysis for Emotion Identification in Voiced Speech

    NASA Astrophysics Data System (ADS)

    Sierra-Sosa, D.; Bastidas, M.; Ortiz P., D.; Quintero, O. L.

    2016-04-01

    We propose a novel analysis alternative, based on two Fourier transforms, for emotion recognition from speech. Fourier analysis allows different signals to be displayed and synthesized in terms of power spectral density distributions. A spectrogram of the voice signal is obtained by performing a short-time Fourier transform with Gaussian windows; this spectrogram portrays frequency-related features, such as vocal tract resonances and quasi-periodic excitations during voiced sounds. Emotions induce such characteristics in speech, which become apparent in the spectrogram's time-frequency distribution. The time-frequency representation from the spectrogram is then treated as an image and processed through a two-dimensional Fourier transform in order to perform a spatial Fourier analysis of it. Finally, features related to emotions in voiced speech are extracted and presented.
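
    The two-stage transform can be sketched directly: a Gaussian-windowed short-time Fourier transform produces the spectrogram, which is then treated as an image and passed through a two-dimensional FFT. The code below is my reconstruction of that pipeline on a synthetic "voiced" signal, not the authors' implementation; the window length, overlap, and test signal are assumptions.

    ```python
    # Double Fourier analysis sketch: Gaussian-window STFT, then 2-D FFT of the spectrogram.
    import numpy as np
    from scipy.signal import stft
    from scipy.signal.windows import gaussian

    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    # Stand-in "voiced speech": a 150 Hz quasi-periodic excitation with slow vibrato
    signal = np.sin(2 * np.pi * 150 * t + 3 * np.sin(2 * np.pi * 5 * t))

    # Stage 1: short-time Fourier transform with a Gaussian window
    win = gaussian(512, std=64)
    _, _, Z = stft(signal, fs=fs, window=win, nperseg=512, noverlap=384)
    spectrogram = np.abs(Z)

    # Stage 2: treat the spectrogram as an image and take its 2-D Fourier transform
    spatial_spectrum = np.abs(np.fft.fftshift(np.fft.fft2(spectrogram)))
    print("spectrogram shape:", spectrogram.shape,
          "2-D spectrum shape:", spatial_spectrum.shape)
    ```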

  18. Multilingual Data Selection for Low Resource Speech Recognition

    DTIC Science & Technology

    2016-09-12

    Figure captions (only recoverable content): Figures 1 and 3 show the identification of language clusters using scores from a language identification (LID) system trained on languages from the Base and OP1 evaluation periods of the Babel program; per-frame posterior scores are averaged and combined at test time into a 10-dimensional language representation.

  19. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    PubMed Central

    Lidestam, Björn; Rönnberg, Jerker

    2016-01-01

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667

  20. A Support Vector Machine-Based Gender Identification Using Speech Signal

    NASA Astrophysics Data System (ADS)

    Lee, Kye-Hwan; Kang, Sang-Ick; Kim, Deok-Hwan; Chang, Joon-Hyuk

    We propose an effective voice-based gender identification method using a support vector machine (SVM). The SVM is a binary classification algorithm that separates two groups by finding a nonlinear decision boundary in a feature space and is known to yield high classification performance. In the present work, we compare the identification performance of the SVM with that of a Gaussian mixture model (GMM)-based method using mel-frequency cepstral coefficients (MFCCs). A novel feature-fusion scheme, based on a combination of the MFCCs and the fundamental frequency, is proposed with the aim of improving the performance of gender identification. Experimental results demonstrate that the gender identification performance using the SVM is significantly better than that of the GMM-based scheme. Moreover, the performance is substantially improved when the proposed feature-fusion technique is applied.
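
    The feature-fusion idea, appending a fundamental-frequency estimate to the MFCC vector before SVM classification, can be sketched as follows. Random numbers stand in for features that would normally come from an MFCC/F0 front end, and the class means (about 120 Hz vs. 210 Hz) are only typical values; this illustrates the scheme rather than reproducing the paper's system.

    ```python
    # MFCC + F0 feature fusion with an SVM gender classifier (placeholder features).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(4)
    n_per_class, n_mfcc = 200, 13

    def utterance_features(f0_mean):
        """Placeholder: 13 averaged MFCCs plus one F0 value per utterance."""
        mfcc = rng.standard_normal((1, n_mfcc))
        f0 = rng.normal(f0_mean, 15, (1, 1))
        return np.hstack([mfcc, f0])

    # Simulated corpora: typical male F0 around 120 Hz, female around 210 Hz
    X = np.vstack([utterance_features(120) for _ in range(n_per_class)] +
                  [utterance_features(210) for _ in range(n_per_class)])
    y = np.array([0] * n_per_class + [1] * n_per_class)   # 0 = male, 1 = female

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
    print("gender identification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    ```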

  1. A mathematical model of medial consonant identification by cochlear implant users.

    PubMed

    Svirsky, Mario A; Sagi, Elad; Meyer, Ted A; Kaiser, Adam R; Teoh, Su Wooi

    2011-04-01

    The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects' ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects' consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech.

  2. A mathematical model of medial consonant identification by cochlear implant users

    PubMed Central

    Svirsky, Mario A.; Sagi, Elad; Meyer, Ted A.; Kaiser, Adam R.; Teoh, Su Wooi

    2011-01-01

    The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects’ ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects’ consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech. PMID:21476674

  3. Effects of various electrode configurations on music perception, intonation and speaker gender identification.

    PubMed

    Landwehr, Markus; Fürstenberg, Dirk; Walger, Martin; von Wedel, Hasso; Meister, Hartmut

    2014-01-01

    Advances in speech coding strategies and electrode array designs for cochlear implants (CIs) predominantly aim at improving speech perception. Current efforts are also directed at transmitting appropriate cues of the fundamental frequency (F0) to the auditory nerve with respect to speech quality, prosody, and music perception. The aim of this study was to examine the effects of various electrode configurations and coding strategies on speech intonation identification, speaker gender identification, and music quality rating. In six MED-EL CI users electrodes were selectively deactivated in order to simulate different insertion depths and inter-electrode distances when using the high definition continuous interleaved sampling (HDCIS) and fine structure processing (FSP) speech coding strategies. Identification of intonation and speaker gender was determined and music quality rating was assessed. For intonation identification HDCIS was robust against the different electrode configurations, whereas fine structure processing showed significantly worse results when a short electrode depth was simulated. In contrast, speaker gender recognition was not affected by electrode configuration or speech coding strategy. Music quality rating was sensitive to electrode configuration. In conclusion, the three experiments revealed different outcomes, even though they all addressed the reception of F0 cues. Rapid changes in F0, as seen with intonation, were the most sensitive to electrode configurations and coding strategies. In contrast, electrode configurations and coding strategies did not show large effects when F0 information was available over a longer time period, as seen with speaker gender. Music quality relies on additional spectral cues other than F0, and was poorest when a shallow insertion was simulated.

  4. DETECTION AND IDENTIFICATION OF SPEECH SOUNDS USING CORTICAL ACTIVITY PATTERNS

    PubMed Central

    Centanni, T.M.; Sloan, A.M.; Reed, A.C.; Engineer, C.T.; Rennaker, R.; Kilgard, M.P.

    2014-01-01

    We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/sec), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech processing disorders. PMID:24286757

  5. Speech and language support: How physicians can identify and treat speech and language delays in the office setting.

    PubMed

    Moharir, Madhavi; Barnett, Noel; Taras, Jillian; Cole, Martha; Ford-Jones, E Lee; Levin, Leo

    2014-01-01

    Failure to recognize and intervene early in speech and language delays can lead to multifaceted and potentially severe consequences for early child development and later literacy skills. While routine evaluations of speech and language during well-child visits are recommended, there is no standardized (office) approach to facilitate this. Furthermore, extensive wait times for speech and language pathology consultation represent valuable lost time for the child and family. Using speech and language expertise and paediatric collaboration, key content for an office-based tool was developed to support: early and accurate identification of speech and language delays, as well as of children at risk for literacy challenges; appropriate referral to speech and language services when required; and teaching, and thus empowering, parents to create rich and responsive language environments at home. Using this tool, in combination with the Canadian Paediatric Society's Read, Speak, Sing and Grow Literacy Initiative, physicians will be better positioned to offer practical strategies to caregivers to enhance children's speech and language capabilities. The tool represents a strategy to evaluate speech and language delays. It depicts age-specific linguistic/phonetic milestones and suggests interventions. The tool also represents a practical interim measure while the family is waiting for formal speech and language therapy consultation.

  6. Visual speech alters the discrimination and identification of non-intact auditory speech in children with hearing loss.

    PubMed

    Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé

    2017-03-01

    Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes, yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets), as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to a non-intact onset/rhyme in the auditory track (/-B/aa or /-B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same (as opposed to different) responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz (as opposed to az) responses in the audiovisual than auditory mode. Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/-B/aa) and more intact onset responses for nonword repetition (Baz for /-B/az). Thus visual speech altered both discrimination and identification in the CHL, to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children's discrimination skills (i.e., d' analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets, even after variation due to the other variables was controlled. These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Visual Speech Alters the Discrimination and Identification of Non-Intact Auditory Speech in Children with Hearing Loss

    PubMed Central

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Hervé

    2017-01-01

    Objectives Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Methods Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/–B/aa or /–B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/–B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same—as opposed to different—responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /–B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz—as opposed to az— responses in the audiovisual than auditory mode. Results Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/–B/aa) and more intact onset responses for nonword repetition (Baz for/–B/az). Thus visual speech altered both discrimination and identification in the CHL—to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children’s discrimination skills (i.e., d’ analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets—even after variation due to the other variables was controlled. Conclusions These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. PMID:28167003
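
    The bias-free discrimination measure referred to above (d') separates sensitivity from response bias by converting hit and false-alarm rates to z-scores. A small worked example, with invented trial counts and a standard log-linear correction for extreme rates, is given below.

    ```python
    # d' from hit and false-alarm counts in a same-different task (invented counts).
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction."""
        n_signal = hits + misses
        n_noise = false_alarms + correct_rejections
        hit_rate = (hits + 0.5) / (n_signal + 1)        # avoids rates of exactly 0 or 1
        fa_rate = (false_alarms + 0.5) / (n_noise + 1)
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # Hypothetical child: 42 of 60 "different" trials detected, 12 of 60 false alarms
    print(f"d' = {d_prime(42, 18, 12, 48):.2f}")
    ```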

  8. Minimal Pair Distinctions and Intelligibility in Preschool Children with and without Speech Sound Disorders

    ERIC Educational Resources Information Center

    Hodge, Megan M.; Gotzke, Carrie L.

    2011-01-01

    Listeners' identification of young children's productions of minimally contrastive words and predictive relationships between accurately identified words and intelligibility scores obtained from a 100-word spontaneous speech sample were determined for 36 children with typically developing speech (TDS) and 36 children with speech sound disorders…

  9. Methods and apparatus for non-acoustic speech characterization and recognition

    DOEpatents

    Holzrichter, John F.

    1999-01-01

    By simultaneously recording EM wave reflections and acoustic speech information, the positions and velocities of the speech organs as speech is articulated can be defined for each acoustic speech unit. Well defined time frames and feature vectors describing the speech, to the degree required, can be formed. Such feature vectors can uniquely characterize the speech unit being articulated each time frame. The onset of speech, rejection of external noise, vocalized pitch periods, articulator conditions, accurate timing, the identification of the speaker, acoustic speech unit recognition, and organ mechanical parameters can be determined.

  10. Methods and apparatus for non-acoustic speech characterization and recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzrichter, J.F.

    By simultaneously recording EM wave reflections and acoustic speech information, the positions and velocities of the speech organs as speech is articulated can be defined for each acoustic speech unit. Well defined time frames and feature vectors describing the speech, to the degree required, can be formed. Such feature vectors can uniquely characterize the speech unit being articulated each time frame. The onset of speech, rejection of external noise, vocalized pitch periods, articulator conditions, accurate timing, the identification of the speaker, acoustic speech unit recognition, and organ mechanical parameters can be determined.

  11. Talker Identification across Source Mechanisms: Experiments with Laryngeal and Electrolarynx Speech

    ERIC Educational Resources Information Center

    Perrachione, Tyler K.; Stepp, Cara E.; Hillman, Robert E.; Wong, Patrick C. M.

    2014-01-01

    Purpose: The purpose of this study was to determine listeners' ability to learn talker identity from speech produced with an electrolarynx, explore source and filter differentiation in talker identification, and describe acoustic-phonetic changes associated with electrolarynx use. Method: Healthy adult control listeners learned to identify…

  12. Characteristics of Fluency and Speech in Two Families with High Incidences of Stuttering

    ERIC Educational Resources Information Center

    Stager, Sheila V.; Freeman, Frances J.; Braun, Allen

    2015-01-01

    Purpose: This study presents data from 2 families with high incidence of stuttering, comparing methods of phenotype assignment and exploring the presence of other fluency disorders and corresponding speech characteristics. Method: Three methods for assigning phenotype of stuttering were used: self-identification, family identification, and expert…

  13. The prevalence of stuttering, voice, and speech-sound disorders in primary school students in Australia.

    PubMed

    McKinnon, David H; McLeod, Sharynne; Reilly, Sheena

    2007-01-01

    The aims of this study were threefold: to report teachers' estimates of the prevalence of speech disorders (specifically, stuttering, voice, and speech-sound disorders); to consider correspondence between the prevalence of speech disorders and gender, grade level, and socioeconomic status; and to describe the level of support provided to schoolchildren with speech disorders. Students with speech disorders were identified from 10,425 students in Australia using a 4-stage process: training in the data collection process, teacher identification, confirmation by a speech-language pathologist, and consultation with district special needs advisors. The prevalence of students with speech disorders was estimated; specifically, 0.33% of students were identified as stuttering, 0.12% as having a voice disorder, and 1.06% as having a speech-sound disorder. There was a higher prevalence of speech disorders in males than in females. As grade level increased, the prevalence of speech disorders decreased. There was no significant difference in the pattern of prevalence across the three speech disorders and four socioeconomic groups; however, students who were identified with a speech disorder were more likely to be in the higher socioeconomic groups. Finally, there was a difference between the perceived and actual level of support that was provided to these students. These prevalence figures are lower than those using initial identification by speech-language pathologists and similar to those using parent report.

  14. Gated Auditory Speech Perception in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Hearing Impairment and Cognitive Capacity

    PubMed Central

    Lidestam, Björn; Hällgren, Mathias; Rönnberg, Jerker

    2014-01-01

    This study compared elderly hearing aid (EHA) users and elderly normal-hearing (ENH) individuals on identification of auditory speech stimuli (consonants, words, and final words in sentences) that differed in their linguistic properties. We measured the accuracy with which the target speech stimuli were identified, as well as the isolation points (IPs: the shortest duration, from onset, required to correctly identify the speech target). The relationships between working memory capacity, the IPs, and speech accuracy were also measured. Twenty-four EHA users (with mild to moderate hearing impairment) and 24 ENH individuals participated in the present study. Despite the use of their regular hearing aids, the EHA users had delayed IPs and were less accurate in identifying consonants and words compared with the ENH individuals. The EHA users also had delayed IPs for final word identification in sentences with lower predictability; however, no significant between-group difference in accuracy was observed. Finally, there were no significant between-group differences in terms of IPs or accuracy for final word identification in highly predictable sentences. Our results also showed that, among EHA users, greater working memory capacity was associated with earlier IPs and improved accuracy in consonant and word identification. Together, our findings demonstrate that the gated speech perception ability of EHA users was not at the level of ENH individuals, in terms of IPs and accuracy. In addition, gated speech perception was more cognitively demanding for EHA users than for ENH individuals in the absence of semantic context. PMID:25085610

  15. Sound-direction identification with bilateral cochlear implants.

    PubMed

    Neuman, Arlene C; Haravon, Anita; Sislian, Nicole; Waltzman, Susan B

    2007-02-01

    The purpose of this study was to compare the accuracy of sound-direction identification in the horizontal plane by bilateral cochlear implant users when localization was measured with pink noise and with speech stimuli. Eight adults who were bilateral users of Nucleus 24 Contour devices participated in the study. All had received implants in both ears in a single surgery. Sound-direction identification was measured in a large classroom by using a nine-loudspeaker array. Localization was tested in three listening conditions (bilateral cochlear implants, left cochlear implant, and right cochlear implant), using two different stimuli (a speech stimulus and pink noise bursts) in a repeated-measures design. Sound-direction identification accuracy was significantly better when using two implants than when using a single implant. The mean root-mean-square error was 29 degrees for the bilateral condition, 54 degrees for the left cochlear implant, and 46.5 degrees for the right cochlear implant condition. Unilateral accuracy was similar for right cochlear implant and left cochlear implant performance. Sound-direction identification performance was similar for speech and pink noise stimuli. The data obtained in this study add to the growing body of evidence that sound-direction identification with bilateral cochlear implants is better than with a single implant. The similarity in localization performance obtained with the speech and pink noise supports the use of either stimulus for measuring sound-direction identification.
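
    The root-mean-square error metric used to summarize localization accuracy is simply the square root of the mean squared difference between the presented and the chosen loudspeaker azimuths. A short worked example with hypothetical trial data follows.

    ```python
    # RMS localization error from hypothetical trial data on a nine-loudspeaker array.
    import numpy as np

    # Nine loudspeakers spanning the frontal horizontal plane (degrees azimuth)
    speakers = np.linspace(-80, 80, 9)

    # Hypothetical trials: index of the presented and of the chosen loudspeaker
    presented = np.array([0, 1, 3, 4, 5, 6, 7, 8, 4, 2])
    chosen = np.array([1, 1, 4, 4, 6, 6, 8, 7, 3, 2])

    errors_deg = speakers[chosen] - speakers[presented]
    rms_error = np.sqrt(np.mean(errors_deg ** 2))
    print(f"RMS localization error = {rms_error:.1f} degrees")
    ```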

  16. Gender Identification Using High-Frequency Speech Energy: Effects of Increasing the Low-Frequency Limit.

    PubMed

    Donai, Jeremy J; Halbritter, Rachel M

    The purpose of this study was to investigate the ability of normal-hearing listeners to use high-frequency energy for gender identification from naturally produced speech signals. Two experiments were conducted using a repeated-measures design. Experiment 1 investigated the effects of increasing the high-pass filter cutoff (i.e., increasing the low-frequency spectral limit) on gender identification from naturally produced vowel segments. Experiment 2 studied the effects of increasing the high-pass filter cutoff on gender identification from naturally produced sentences. Confidence ratings for the gender identification task were also obtained for both experiments. Listeners in experiment 1 were capable of extracting talker gender information at levels significantly above chance from vowel segments high-pass filtered up to 8.5 kHz. Listeners in experiment 2 also performed above chance on the gender identification task from sentences high-pass filtered up to 12 kHz. Cumulatively, the results of both experiments provide evidence that normal-hearing listeners can utilize information from the very high-frequency region (above 4 to 5 kHz) of the speech signal for talker gender identification. These findings are at variance with current assumptions about the perceptual information on talker gender available within this frequency region. The current results also corroborate and extend previous studies of the use of high-frequency speech energy for perceptual tasks. These findings have potential implications for the study of information contained within the high-frequency region of the speech spectrum and the role this region may play in navigating the auditory scene, particularly when the low-frequency portion of the spectrum is masked by environmental noise sources or for listeners with substantial hearing loss in the low-frequency region and better hearing sensitivity in the high-frequency region (i.e., reverse slope hearing loss).
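
    The filtering manipulation, progressively raising the low-frequency limit of the stimulus, can be sketched with a high-pass Butterworth filter applied at increasing cutoffs. The snippet below is illustrative only; the filter order, cutoffs, and broadband stand-in signal are assumptions, not the study's exact settings.

    ```python
    # Progressive high-pass filtering of a broadband stand-in signal (illustrative).
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 44100                                   # high rate so energy above 8 kHz is retained
    t = np.arange(0, 0.5, 1 / fs)
    rng = np.random.default_rng(5)
    speech_like = rng.standard_normal(t.size)    # broadband stand-in for a recorded vowel

    def high_pass(signal, cutoff_hz, fs, order=8):
        """Remove energy below cutoff_hz, leaving only the high-frequency region."""
        sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
        return sosfiltfilt(sos, signal)

    for cutoff in (4000, 8500, 12000):           # progressively raise the low-frequency limit
        filtered = high_pass(speech_like, cutoff, fs)
        print(f"cutoff {cutoff:>5} Hz -> RMS level {np.sqrt(np.mean(filtered**2)):.3f}")
    ```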

  17. Applications of Hilbert Spectral Analysis for Speech and Sound Signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.

    2003-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed, and its natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMFs). An IMF is defined as any function having the same number of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identifications of embedded structures. This method can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound-signal enhancement and filtering. Additionally, the acoustic signals from machinery are essentially the way the machines talk to us: these signals, whether transmitted through the air or as vibration on the machines, can tell us the operating conditions of the machines. Thus, we can use the acoustic signal to diagnose machine problems.
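
    The Hilbert step of the method, turning a signal (or an IMF produced by Empirical Mode Decomposition) into an instantaneous frequency as a function of time, can be sketched as follows. EMD itself is not implemented here; a linear chirp stands in for a single IMF, and the parameters are arbitrary.

    ```python
    # Instantaneous frequency from the analytic signal (chirp stands in for one IMF).
    import numpy as np
    from scipy.signal import hilbert, chirp

    fs = 8000
    t = np.arange(0, 1.0, 1 / fs)
    imf = chirp(t, f0=100, f1=400, t1=1.0, method="linear")   # sweeps 100 -> 400 Hz

    analytic = hilbert(imf)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) / (2 * np.pi) * fs              # instantaneous frequency (Hz)

    print(f"instantaneous frequency: {inst_freq[100]:.0f} Hz early in the sweep, "
          f"{inst_freq[-100]:.0f} Hz near the end")
    ```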

  18. Frequency-Limiting Effects on Speech and Environmental Sound Identification for Cochlear Implant and Normal Hearing Listeners

    PubMed Central

    Chang, Son-A; Won, Jong Ho; Kim, HyangHee; Oh, Seung-Ha; Tyler, Richard S.; Cho, Chang Hyun

    2018-01-01

    Background and Objectives It is important to understand the frequency region of cues used, and not used, by cochlear implant (CI) recipients. Speech and environmental sound recognition by individuals with CIs and with normal hearing (NH) was measured. Gradients were also computed to evaluate the pattern of change in identification performance with respect to the low-pass or high-pass filtering cutoff frequencies. Subjects and Methods Frequency-limiting effects were implemented in the acoustic waveforms by passing the signals through low-pass filters (LPFs) or high-pass filters (HPFs) with seven different cutoff frequencies. Identification of Korean vowels and consonants produced by a male and a female speaker, as well as of environmental sounds, was measured. Crossover frequencies, at which the LPF and HPF conditions yield identical identification scores, were determined for each identification test. Results CI and NH subjects showed changes in identification performance in a similar manner as a function of cutoff frequency for the LPF and HPF conditions, suggesting that the degraded spectral information in the acoustic signals may similarly constrain the identification performance for both subject groups. However, CI subjects were generally less efficient than NH subjects in using the limited spectral information for speech and environmental sound identification, owing to the inefficient coding of acoustic cues through the CI sound processors. Conclusions This finding provides vital information, for Korean, on how the frequency information received for speech and environmental sounds through a CI processor differs from that received with normal hearing. PMID:29325391
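
    The crossover frequency, the cutoff at which the low-pass and high-pass identification scores are equal, can be obtained by interpolating between the two bracketing cutoff conditions. The sketch below does this by linear interpolation on invented scores; it illustrates the computation only and does not use the study's data.

    ```python
    # Crossover frequency where low-pass and high-pass identification scores are equal.
    import numpy as np

    cutoffs_hz = np.array([250, 500, 1000, 2000, 4000, 6000, 8000])
    lp_scores = np.array([20, 35, 55, 75, 88, 93, 96])   # % correct, low-pass conditions
    hp_scores = np.array([95, 90, 80, 60, 40, 25, 15])   # % correct, high-pass conditions

    diff = lp_scores - hp_scores
    i = np.argmax(diff >= 0)                  # first cutoff where LPF catches up with HPF
    # Linear interpolation between the bracketing cutoffs for the zero crossing
    x0, x1 = cutoffs_hz[i - 1], cutoffs_hz[i]
    d0, d1 = diff[i - 1], diff[i]
    crossover = x0 + (0 - d0) * (x1 - x0) / (d1 - d0)
    print(f"crossover frequency is approximately {crossover:.0f} Hz")
    ```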

  19. Frequency-Limiting Effects on Speech and Environmental Sound Identification for Cochlear Implant and Normal Hearing Listeners.

    PubMed

    Chang, Son-A; Won, Jong Ho; Kim, HyangHee; Oh, Seung-Ha; Tyler, Richard S; Cho, Chang Hyun

    2017-12-01

    It is important to understand the frequency region of cues used, and not used, by cochlear implant (CI) recipients. Speech and environmental sound recognition by individuals with CIs and with normal hearing (NH) was measured. Gradients were also computed to evaluate the pattern of change in identification performance with respect to the low-pass or high-pass filtering cutoff frequencies. Frequency-limiting effects were implemented in the acoustic waveforms by passing the signals through low-pass filters (LPFs) or high-pass filters (HPFs) with seven different cutoff frequencies. Identification of Korean vowels and consonants produced by a male and a female speaker, and of environmental sounds, was measured. Crossover frequencies were determined for each identification test as the cutoff at which the LPF and HPF conditions yield identical identification scores. CI and NH subjects showed changes in identification performance in a similar manner as a function of cutoff frequency for the LPF and HPF conditions, suggesting that the degraded spectral information in the acoustic signals may similarly constrain identification performance for both subject groups. However, CI subjects were generally less efficient than NH subjects in using the limited spectral information for speech and environmental sound identification, due to the inefficient coding of acoustic cues through the CI sound processors. This finding provides vital information, in Korean, on how the frequency information received for speech and environmental sounds through a CI processor differs from that available in normal hearing.

  20. [Fool's gold standards in language screening. Sensitivity and specificity of the Hessian child language screening test (Kindersprachscreening, KiSS)].

    PubMed

    Neumann, K; Holler-Zittlau, I; van Minnen, S; Sick, U; Zaretsky, Y; Euler, H A

    2011-01-01

    The German Kindersprachscreening (KiSS) is a universal speech and language screening test for large-scale identification of Hessian kindergarten children requiring special educational language training or clinical speech/language therapy. To calculate the procedural screening validity, 257 children (aged 4.0 to 4.5 years) were tested using KiSS and four language tests (Reynell Developmental Language Scales III, Patholinguistische Diagnostik, PLAKSS, AWST-R). The majority or consensus judgements of three speech-language professionals, based on the language test results, served as the reference criterion. The base (fail) rates of the professionals were either self-determined or preset based on known prevalence rates. Screening validity was higher for preset than for self-determined base rates, owing to higher inter-judge agreement. The confusion matrices of the overall KiSS index classification (speech-language abnormalities with educational or clinical needs) against the fixed-base-rate expert judgement of language impairment, including fluency or voice disorders, yielded a sensitivity of 88% and a specificity of 78%; for language impairment alone, 84% and 75%, respectively. Specificity for disorders requiring clinical diagnostics in the KiSS (language impairment alone or combined with fluency/voice disorders), relative to the test-based consensus expert judgement, was about 93%. Sensitivities were unsatisfactory because the differentiation between educational and clinical needs requires improvement. Since the judgement concordance among the speech-language professionals was only moderate, the development of a comprehensive German reference test for speech and language disorders with evidence-based algorithmic decision rules, rather than subjective clinical judgement, is advocated.
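
    The sensitivity and specificity figures above come from a screening confusion matrix; as a worked illustration, the snippet below computes them from hypothetical counts chosen to reproduce the 88%/78% result (the actual KiSS cell counts are not given here).

```python
# Sensitivity and specificity from a screening confusion matrix.
# The counts are illustrative, not the KiSS data.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # proportion of impaired children flagged by the screen
    specificity = tn / (tn + fp)   # proportion of unimpaired children passed by the screen
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=44, fn=6, tn=156, fp=44)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")   # 0.88, 0.78
```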

  1. Temporal order processing of syllables in the left parietal lobe.

    PubMed

    Moser, Dana; Baker, Julie M; Sanchez, Carmen E; Rorden, Chris; Fridriksson, Julius

    2009-10-07

    Speech processing requires the temporal parsing of syllable order. Individuals suffering from posterior left hemisphere brain injury often exhibit temporal processing deficits as well as language deficits. Although the right posterior inferior parietal lobe has been implicated in temporal order judgments (TOJs) of visual information, there is limited evidence to support the role of the left inferior parietal lobe (IPL) in processing syllable order. The purpose of this study was to examine whether the left inferior parietal lobe is recruited during temporal order judgments of speech stimuli. Functional magnetic resonance imaging data were collected on 14 normal participants while they completed the following forced-choice tasks: (1) syllable order of multisyllabic pseudowords, (2) syllable identification of single syllables, and (3) gender identification of both multisyllabic and monosyllabic speech stimuli. Results revealed increased neural recruitment in the left inferior parietal lobe when participants made judgments about syllable order compared with both syllable identification and gender identification. These findings suggest that the left inferior parietal lobe plays an important role in processing syllable order and support the hypothesized role of this region as an interface between auditory speech and the articulatory code. Furthermore, a breakdown in this interface may explain some components of the speech deficits observed after posterior damage to the left hemisphere.

  2. Temporal Order Processing of Syllables in the Left Parietal Lobe

    PubMed Central

    Baker, Julie M.; Sanchez, Carmen E.; Rorden, Chris; Fridriksson, Julius

    2009-01-01

    Speech processing requires the temporal parsing of syllable order. Individuals suffering from posterior left hemisphere brain injury often exhibit temporal processing deficits as well as language deficits. Although the right posterior inferior parietal lobe has been implicated in temporal order judgments (TOJs) of visual information, there is limited evidence to support the role of the left inferior parietal lobe (IPL) in processing syllable order. The purpose of this study was to examine whether the left inferior parietal lobe is recruited during temporal order judgments of speech stimuli. Functional magnetic resonance imaging data were collected on 14 normal participants while they completed the following forced-choice tasks: (1) syllable order of multisyllabic pseudowords, (2) syllable identification of single syllables, and (3) gender identification of both multisyllabic and monosyllabic speech stimuli. Results revealed increased neural recruitment in the left inferior parietal lobe when participants made judgments about syllable order compared with both syllable identification and gender identification. These findings suggest that the left inferior parietal lobe plays an important role in processing syllable order and support the hypothesized role of this region as an interface between auditory speech and the articulatory code. Furthermore, a breakdown in this interface may explain some components of the speech deficits observed after posterior damage to the left hemisphere. PMID:19812331

  3. Noise and pitch interact during the cortical segregation of concurrent speech.

    PubMed

    Bidelman, Gavin M; Yellamsetty, Anusha

    2017-08-01

    Behavioral studies reveal listeners exploit intrinsic differences in voice fundamental frequency (F0) to segregate concurrent speech sounds-the so-called "F0-benefit." More favorable signal-to-noise ratio (SNR) in the environment, an extrinsic acoustic factor, similarly benefits the parsing of simultaneous speech. Here, we examined the neurobiological substrates of these two cues in the perceptual segregation of concurrent speech mixtures. We recorded event-related brain potentials (ERPs) while listeners performed a speeded double-vowel identification task. Listeners heard two concurrent vowels whose F0 differed by zero or four semitones presented in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Behaviorally, listeners were more accurate in correctly identifying both vowels for larger F0 separations but F0-benefit was more pronounced at more favorable SNRs (i.e., pitch × SNR interaction). Analysis of the ERPs revealed that only the P2 wave (∼200 ms) showed a similar F0 x SNR interaction as behavior and was correlated with listeners' perceptual F0-benefit. Neural classifiers applied to the ERPs further suggested that speech sounds are segregated neurally within 200 ms based on SNR whereas segregation based on pitch occurs later in time (400-700 ms). The earlier timing of extrinsic SNR compared to intrinsic F0-based segregation implies that the cortical extraction of speech from noise is more efficient than differentiating speech based on pitch cues alone, which may recruit additional cortical processes. Findings indicate that noise and pitch differences interact relatively early in cerebral cortex and that the brain arrives at the identities of concurrent speech mixtures as early as ∼200 ms. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Visual speech discrimination and identification of natural and synthetic consonant stimuli

    PubMed Central

    Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.

    2015-01-01

    From phonetic features to connected discourse, every level of psycholinguistic structure including prosody can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not to our knowledge been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d’) and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d’) increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d’ with inverted stimuli but a persistent pattern of larger d’ for far than for near stimulus pairs are interpreted as evidence that visual speech is represented by both its motion and configural attributes. The methods and results of this investigation open up avenues for understanding the neural and perceptual bases for visual and audiovisual speech perception and for development of practical applications such as visual lipreading/speechreading speech synthesis. PMID:26217249
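
    The discrimination analysis above relies on d-prime; a minimal sketch of the standard computation from hit and false-alarm rates is shown below, with made-up rates for illustration.

```python
# d' (d-prime) from hit and false-alarm rates under the usual Gaussian signal-detection model.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate, correction=1e-4):
    """Return d' = z(H) - z(FA), clipping rates away from 0 and 1."""
    h = min(max(hit_rate, correction), 1 - correction)
    f = min(max(fa_rate, correction), 1 - correction)
    return norm.ppf(h) - norm.ppf(f)

print(d_prime(0.85, 0.20))   # ~1.88: discrimination well above chance
```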

  5. Is There a Relationship between Speech Identification in Noise and Categorical Perception in Children with Dyslexia?

    ERIC Educational Resources Information Center

    Calcus, Axelle; Lorenzi, Christian; Collet, Gregory; Colin, Cécile; Kolinsky, Régine

    2016-01-01

    Purpose: Children with dyslexia have been suggested to experience deficits in both categorical perception (CP) and speech identification in noise (SIN) perception. However, results regarding both abilities are inconsistent, and the relationship between them is still unclear. Therefore, this study aimed to investigate the relationship between CP…

  6. Speech and language support: How physicians can identify and treat speech and language delays in the office setting

    PubMed Central

    Moharir, Madhavi; Barnett, Noel; Taras, Jillian; Cole, Martha; Ford-Jones, E Lee; Levin, Leo

    2014-01-01

    Failure to recognize and intervene early in speech and language delays can lead to multifaceted and potentially severe consequences for early child development and later literacy skills. While routine evaluations of speech and language during well-child visits are recommended, there is no standardized (office) approach to facilitate this. Furthermore, extensive wait times for speech and language pathology consultation represent valuable lost time for the child and family. Using speech and language expertise, and paediatric collaboration, key content for an office-based tool was developed. The tool aimed to help physicians achieve three main goals: early and accurate identification of speech and language delays as well as children at risk for literacy challenges; appropriate referral to speech and language services when required; and teaching and, thus, empowering parents to create rich and responsive language environments at home. Using this tool, in combination with the Canadian Paediatric Society’s Read, Speak, Sing and Grow Literacy Initiative, physicians will be better positioned to offer practical strategies to caregivers to enhance children’s speech and language capabilities. The tool represents a strategy to evaluate speech and language delays. It depicts age-specific linguistic/phonetic milestones and suggests interventions. The tool represents a practical interim treatment while the family is waiting for formal speech and language therapy consultation. PMID:24627648

  7. Of Mouths and Men: Non-Native Listeners' Identification and Evaluation of Varieties of English.

    ERIC Educational Resources Information Center

    Jarvella, Robert J.; Bang, Eva; Jakobsen, Arnt Lykke; Mees, Inger M.

    2001-01-01

    Advanced Danish students of English tried to identify the national origin of young men from Ireland, Scotland, England, and the United States from their speech and then rated the speech for attractiveness. Listeners rated speech produced by Englishmen as most attractive, and speech by Americans as least attractive. (Author/VWL)

  8. Automatic Method of Pause Measurement for Normal and Dysarthric Speech

    ERIC Educational Resources Information Center

    Rosen, Kristin; Murdoch, Bruce; Folker, Joanne; Vogel, Adam; Cahill, Louise; Delatycki, Martin; Corben, Louise

    2010-01-01

    This study proposes an automatic method for the detection of pauses and identification of pause types in conversational speech for the purpose of measuring the effects of Friedreich's Ataxia (FRDA) on speech. Speech samples of [approximately] 3 minutes were recorded from 13 speakers with FRDA and 18 healthy controls. Pauses were measured from the…

  9. Consonant identification in noise using Hilbert-transform temporal fine-structure speech and recovered-envelope speech for listeners with normal and impaired hearing

    PubMed Central

    Léger, Agnès C.; Reed, Charlotte M.; Desloge, Joseph G.; Swaminathan, Jayaganesh; Braida, Louis D.

    2015-01-01

    Consonant-identification ability was examined in normal-hearing (NH) and hearing-impaired (HI) listeners in the presence of steady-state and 10-Hz square-wave interrupted speech-shaped noise. The Hilbert transform was used to process speech stimuli (16 consonants in a-C-a syllables) to present envelope cues, temporal fine-structure (TFS) cues, or envelope cues recovered from TFS speech. The performance of the HI listeners was inferior to that of the NH listeners both in terms of lower levels of performance in the baseline condition and in the need for higher signal-to-noise ratio to yield a given level of performance. For NH listeners, scores were higher in interrupted noise than in steady-state noise for all speech types (indicating substantial masking release). For HI listeners, masking release was typically observed for TFS and recovered-envelope speech but not for unprocessed and envelope speech. For both groups of listeners, TFS and recovered-envelope speech yielded similar levels of performance and consonant confusion patterns. The masking release observed for TFS and recovered-envelope speech may be related to level effects associated with the manner in which the TFS processing interacts with the interrupted noise signal, rather than to the contributions of TFS cues per se. PMID:26233038
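
    A minimal sketch of the Hilbert-transform split into the envelope and temporal fine-structure (TFS) cues contrasted above; the per-band filterbank used for the actual stimuli is omitted, and the test signal is illustrative.

```python
# Hilbert decomposition of a (single-band) signal into envelope and temporal fine structure.
import numpy as np
from scipy.signal import hilbert

def envelope_and_tfs(band_signal):
    analytic = hilbert(band_signal)
    envelope = np.abs(analytic)            # slow amplitude envelope
    tfs = np.cos(np.angle(analytic))       # unit-amplitude carrier (fine structure)
    return envelope, tfs

fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
band = np.sin(2 * np.pi * 500 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
env, tfs = envelope_and_tfs(band)
```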

  10. Noise-robust speech triage.

    PubMed

    Bartos, Anthony L; Cipr, Tomas; Nelson, Douglas J; Schwarz, Petr; Banowetz, John; Jerabek, Ladislav

    2018-04-01

    A method is presented in which conventional speech algorithms are applied, with no modifications, to improve their performance in extremely noisy environments. It has been demonstrated that, for eigen-channel algorithms, pre-training multiple speaker identification (SID) models at a lattice of signal-to-noise-ratio (SNR) levels and then performing SID using the appropriate SNR dependent model was successful in mitigating noise at all SNR levels. In those tests, it was found that SID performance was optimized when the SNR of the testing and training data were close or identical. In this current effort multiple i-vector algorithms were used, greatly improving both processing throughput and equal error rate classification accuracy. Using identical approaches in the same noisy environment, performance of SID, language identification, gender identification, and diarization were significantly improved. A critical factor in this improvement is speech activity detection (SAD) that performs reliably in extremely noisy environments, where the speech itself is barely audible. To optimize SAD operation at all SNR levels, two algorithms were employed. The first maximized detection probability at low levels (-10 dB ≤ SNR < +10 dB) using just the voiced speech envelope, and the second exploited features extracted from the original speech to improve overall accuracy at higher quality levels (SNR ≥ +10 dB).
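
    The SNR-matched model selection described above can be sketched roughly as below; the SNR lattice, the energy-based SNR estimate, and the scoring interface are all placeholders rather than the authors' implementation.

```python
# Sketch of SNR-matched model selection: estimate the SNR of an input utterance and
# score it with the pre-trained model closest to that SNR.
import numpy as np

SNR_LATTICE_DB = [-10, -5, 0, 5, 10, 20]   # hypothetical training SNR levels

def estimate_snr_db(signal, noise_floor):
    """Crude energy-based SNR estimate in dB (placeholder for a real SAD-based estimate)."""
    sig_pow = np.mean(signal ** 2)
    noise_pow = np.mean(noise_floor ** 2) + 1e-12
    return 10 * np.log10(max(sig_pow - noise_pow, 1e-12) / noise_pow)

def score_with_matched_model(signal, noise_floor, models):
    """models: dict mapping training SNR (dB) -> scoring callable."""
    snr = estimate_snr_db(signal, noise_floor)
    nearest = min(SNR_LATTICE_DB, key=lambda s: abs(s - snr))
    return models[nearest](signal)
```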

  11. Tuning time-frequency methods for the detection of metered HF speech

    NASA Astrophysics Data System (ADS)

    Nelson, Douglas J.; Smith, Lawrence H.

    2002-12-01

    Speech is metered if the stresses occur at a nearly regular rate. Metered speech is common in poetry, and it can occur naturally in speech if the speaker is spelling a word or reciting words or numbers from a list. In radio communications, the CQ request, call sign, and other codes are frequently metered. In tactical communications and air traffic control, location, heading, and identification codes may be metered. Moreover, metering may be expected to survive even in HF communications, which are corrupted by noise, interference, and mistuning. For this environment, speech recognition and conventional machine-based methods are not effective. We describe time-frequency methods that have been adapted successfully to the problems of mitigating HF signal conditions and detecting metered speech. These methods are based on modeled time and frequency correlation properties of nearly harmonic functions. We derive these properties and demonstrate a performance gain over conventional correlation and spectral methods. Finally, for HF single sideband (SSB) communications, the problems of carrier mistuning, interfering signals such as manual Morse, and fast automatic gain control (AGC) must be addressed. We demonstrate simple methods that may be used to blindly mitigate mistuning and narrowband interference, and to effectively invert the fast automatic gain function.

  12. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    NASA Astrophysics Data System (ADS)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS, such as filled pauses, will require future research.
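
    A rough sketch, under stated assumptions, of likelihood-based LSS/NLSS classification with one HMM per class, in the spirit of the approach above; it uses the third-party hmmlearn package and presumes precomputed acoustic feature matrices, neither of which is claimed to match the authors' system.

```python
# One Gaussian HMM per class; a segment is labeled by whichever class model assigns
# it the higher log-likelihood. Feature matrices are (frames x coefficients) arrays.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_class_hmm(feature_segments, n_states=3):
    """Train a Gaussian HMM on a list of feature segments for one class."""
    X = np.vstack(feature_segments)
    lengths = [len(seg) for seg in feature_segments]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

def classify_segment(features, lss_model, nlss_model):
    """Label a segment by the class HMM with the higher log-likelihood."""
    return "LSS" if lss_model.score(features) > nlss_model.score(features) else "NLSS"
```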

  13. Linguistic experience and audio-visual perception of non-native fricatives.

    PubMed

    Wang, Yue; Behne, Dawn M; Jiang, Haisheng

    2008-09-01

    This study examined the effects of linguistic experience on audio-visual (AV) perception of non-native (L2) speech. Canadian English natives and Mandarin Chinese natives differing in degree of English exposure [long and short length of residence (LOR) in Canada] were presented with English fricatives of three visually distinct places of articulation: interdentals nonexistent in Mandarin and labiodentals and alveolars common in both languages. Stimuli were presented in quiet and in a cafe-noise background in four ways: audio only (A), visual only (V), congruent AV (AVc), and incongruent AV (AVi). Identification results showed that overall performance was better in the AVc than in the A or V condition and better in quiet than in cafe noise. While the Mandarin long LOR group approximated the native English patterns, the short LOR group showed poorer interdental identification, more reliance on visual information, and greater AV-fusion with the AVi materials, indicating the failure of L2 visual speech category formation with the short LOR non-natives and the positive effects of linguistic experience with the long LOR non-natives. These results point to an integrated network in AV speech processing as a function of linguistic background and provide evidence to extend auditory-based L2 speech learning theories to the visual domain.

  14. Speech Perception in Older Hearing Impaired Listeners: Benefits of Perceptual Training

    PubMed Central

    Woods, David L.; Doss, Zoe; Herron, Timothy J.; Arbogast, Tanya; Younus, Masood; Ettlinger, Marc; Yund, E. William

    2015-01-01

    Hearing aids (HAs) only partially restore the ability of older hearing impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant-identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d’ measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d’ thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, syllables containing different vowels, and syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds. We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in processing simple sentences and that greater transfer of benefit would be evident in the comprehension of more unpredictable speech material. PMID:25730330

  15. Aided and Unaided Speech Perception by Older Hearing Impaired Listeners

    PubMed Central

    Woods, David L.; Arbogast, Tanya; Doss, Zoe; Younus, Masood; Herron, Timothy J.; Yund, E. William

    2015-01-01

    The most common complaint of older hearing impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant-identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners. PMID:25730423

  16. Cortical characterization of the perception of intelligible and unintelligible speech measured via high-density electroencephalography.

    PubMed

    Utianski, Rene L; Caviness, John N; Liss, Julie M

    2015-01-01

    High-density electroencephalography was used to evaluate cortical activity during speech comprehension via a sentence verification task. Twenty-four participants assigned true or false to sentences produced at 3 noise-vocoded channel levels (1: unintelligible, 6: decipherable, 16: intelligible) during simultaneous EEG recording. Participant data were sorted into higher-performing (HP) and lower-performing (LP) groups. The identification of a late event-related potential for LP listeners in the intelligible condition, and in all listeners when challenged with a 6-channel signal, supports the notion that this induced potential may be related either to processing degraded speech or to degraded processing of intelligible speech. Different cortical locations are identified as the neural generators responsible for this activity; HP listeners engage motor aspects of their language system, utilizing an acoustic-phonetic based strategy to help resolve the sentence, while LP listeners do not. This study presents evidence for neurophysiological indices associated with more or less successful speech comprehension performance across listening conditions. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Computational validation of the motor contribution to speech perception.

    PubMed

    Badino, Leonardo; D'Ausilio, Alessandro; Fadiga, Luciano; Metta, Giorgio

    2014-07-01

    Action perception and recognition are core abilities fundamental for human social interaction. A parieto-frontal network (the mirror neuron system) matches visually presented biological motion information onto observers' motor representations. This process of matching the actions of others onto our own sensorimotor repertoire is thought to be important for action recognition, providing a non-mediated "motor perception" based on a bidirectional flow of information along the mirror parieto-frontal circuits. State-of-the-art machine learning strategies for hand action identification have shown better performances when sensorimotor data, as opposed to visual information only, are available during learning. As speech is a particular type of action (with acoustic targets), it is expected to activate a mirror neuron mechanism. Indeed, in speech perception, motor centers have been shown to be causally involved in the discrimination of speech sounds. In this paper, we review recent neurophysiological and machine learning-based studies showing (a) the specific contribution of the motor system to speech perception and (b) that automatic phone recognition is significantly improved when motor data are used during training of classifiers (as opposed to learning from purely auditory data). Copyright © 2014 Cognitive Science Society, Inc.

  18. Prediction of acoustic feature parameters using myoelectric signals.

    PubMed

    Lee, Ki-Seung

    2010-07-01

    It is well-known that a clear relationship exists between human voices and myoelectric signals (MESs) from the area of the speaker's mouth. In this study, we utilized this information to implement a speech synthesis scheme in which MES alone was used to predict the parameters characterizing the vocal-tract transfer function of specific speech signals. Several feature parameters derived from MES were investigated to find the optimal feature for maximization of the mutual information between the acoustic and the MES features. After the optimal feature was determined, an estimation rule for the acoustic parameters was proposed, based on a minimum mean square error (MMSE) criterion. In a preliminary study, 60 isolated words were used for both objective and subjective evaluations. The results showed that the average Euclidean distance between the original and predicted acoustic parameters was reduced by about 30% compared with the average Euclidean distance of the original parameters. The intelligibility of the synthesized speech signals using the predicted features was also evaluated. A word-level identification ratio of 65.5% and a syllable-level identification ratio of 73% were obtained through a listening test.
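
    The MMSE estimation rule above can be illustrated with a linear (joint-Gaussian) MMSE estimator; this simplified stand-in is an assumption, since the paper's exact estimator and features are not reproduced here.

```python
# Linear MMSE estimator of acoustic parameters from MES features:
# E[y | x] = mu_y + C_yx C_xx^{-1} (x - mu_x) under a joint-Gaussian model.
import numpy as np

def fit_linear_mmse(X, Y):
    """X: (n, dx) MES features; Y: (n, dy) acoustic parameters. Returns a predictor."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mu_x, Y - mu_y
    Cxx = Xc.T @ Xc / len(X)
    Cyx = Yc.T @ Xc / len(X)
    W = Cyx @ np.linalg.pinv(Cxx)                 # regression matrix
    return lambda x: mu_y + (x - mu_x) @ W.T      # conditional-mean prediction

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))                                 # toy MES feature vectors
Y = X[:, :4] * 2.0 + 0.1 * rng.standard_normal((200, 4))          # toy acoustic parameters
predict = fit_linear_mmse(X, Y)
print(predict(X[0]).shape)                                        # (4,)
```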

  19. Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users

    PubMed Central

    Fuller, Christina D.; Galvin, John J.; Maat, Bert; Başkent, Deniz; Free, Rolien H.

    2018-01-01

    In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals as experienced by cochlear implant (CI) users. In this study, we compared two different music training approaches in CI users and their effects on speech and music perception, as it remains unclear which approach to music training might be best. The approaches differed in terms of music exercises and social interaction. For the pitch/timbre group, melodic contour identification (MCI) training was performed using computer software. For the music therapy group, training involved face-to-face group exercises (rhythm perception, musical speech perception, music perception, singing, vocal emotion identification, and music improvisation). For the control group, training involved group nonmusic activities (e.g., writing, cooking, and woodworking). Training consisted of weekly 2-hr sessions over a 6-week period. Speech intelligibility in quiet and noise, vocal emotion identification, MCI, and quality of life (QoL) were measured before and after training. The different training approaches appeared to offer different benefits for music and speech perception. Training effects were observed within-domain (better MCI performance for the pitch/timbre group), with little cross-domain transfer of music training (emotion identification significantly improved for the music therapy group). While training had no significant effect on QoL, the music therapy group reported better perceptual skills across training sessions. These results suggest that more extensive and intensive training approaches that combine pitch training with the social aspects of music therapy may further benefit CI users. PMID:29621947

  20. Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users.

    PubMed

    Fuller, Christina D; Galvin, John J; Maat, Bert; Başkent, Deniz; Free, Rolien H

    2018-01-01

    In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals as experienced by cochlear implant (CI) users. In this study, we compared two different music training approaches in CI users and their effects on speech and music perception, as it remains unclear which approach to music training might be best. The approaches differed in terms of music exercises and social interaction. For the pitch/timbre group, melodic contour identification (MCI) training was performed using computer software. For the music therapy group, training involved face-to-face group exercises (rhythm perception, musical speech perception, music perception, singing, vocal emotion identification, and music improvisation). For the control group, training involved group nonmusic activities (e.g., writing, cooking, and woodworking). Training consisted of weekly 2-hr sessions over a 6-week period. Speech intelligibility in quiet and noise, vocal emotion identification, MCI, and quality of life (QoL) were measured before and after training. The different training approaches appeared to offer different benefits for music and speech perception. Training effects were observed within-domain (better MCI performance for the pitch/timbre group), with little cross-domain transfer of music training (emotion identification significantly improved for the music therapy group). While training had no significant effect on QoL, the music therapy group reported better perceptual skills across training sessions. These results suggest that more extensive and intensive training approaches that combine pitch training with the social aspects of music therapy may further benefit CI users.

  1. Overnight consolidation promotes generalization across talkers in the identification of nonnative speech sounds.

    PubMed

    Earle, F Sayako; Myers, Emily B

    2015-01-01

    This investigation explored the generalization of phonetic learning across talkers following training on a nonnative (Hindi dental and retroflex) contrast. Participants were trained in two groups, either in the morning or in the evening. Discrimination and identification performance was assessed in the trained talker and an untrained talker three times over 24 h following training. Results suggest that overnight consolidation promotes generalization across talkers in identification, but not necessarily discrimination, of nonnative speech sounds.

  2. Visual-auditory integration during speech imitation in autism.

    PubMed

    Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.

  3. Rasch Analysis of Word Identification and Magnitude Estimation Scaling Responses in Measuring Naive Listeners' Judgments of Speech Intelligibility of Children with Severe-to-Profound Hearing Impairments

    ERIC Educational Resources Information Center

    Beltyukova, Svetlana A.; Stone, Gregory M.; Ellis, Lee W.

    2008-01-01

    Purpose: Speech intelligibility research typically relies on traditional evidence of reliability and validity. This investigation used Rasch analysis to enhance understanding of the functioning and meaning of scores obtained with 2 commonly used procedures: word identification (WI) and magnitude estimation scaling (MES). Method: Narrative samples…

  4. Children's Identification of Consonants in a Speech-Shaped Noise or a Two-Talker Masker

    ERIC Educational Resources Information Center

    Leibold, Lori J.; Buss, Emily

    2013-01-01

    Purpose: To evaluate child-adult differences for consonant identification in a noise or a 2-talker masker. Error patterns were compared across age and masker type to test the hypothesis that errors with the noise masker reflect limitations in the peripheral encoding of speech, whereas errors with the 2-talker masker reflect target-masker…

  5. Speech and Swallowing in Parkinson’s Disease

    PubMed Central

    Tjaden, Kris

    2009-01-01

    Dysarthria and dysphagia occur frequently in Parkinson's disease (PD). Reduced speech intelligibility is a significant functional limitation of dysarthria, and in the case of PD it is likely related to articulatory and phonatory impairment. Prosodically based treatments show the most promise for addressing these deficits as well as for maximizing speech intelligibility. Communication-oriented strategies also may help to enhance mutual understanding between a speaker and listener. Dysphagia in PD can result in serious health issues, including aspiration pneumonia, malnutrition, and dehydration. Early identification of swallowing abnormalities is critical so as to minimize the impact of dysphagia on health status and quality of life. Feeding modifications, compensatory strategies, and therapeutic swallowing techniques all have a role in the management of dysphagia in PD. PMID:19946386

  6. Transfer of Auditory Perceptual Learning with Spectrally Reduced Speech to Speech and Nonspeech Tasks: Implications for Cochlear Implants

    PubMed Central

    Loebach, Jeremy L.; Pisoni, David B.; Svirsky, Mario A.

    2009-01-01

    Objective The objective of this study was to assess whether training on speech processed with an 8-channel noise vocoder to simulate the output of a cochlear implant would produce transfer of auditory perceptual learning to the recognition of non-speech environmental sounds, the identification of speaker gender, and the discrimination of talkers by voice. Design Twenty-four normal hearing subjects were trained to transcribe meaningful English sentences processed with a noise vocoder simulation of a cochlear implant. An additional twenty-four subjects served as an untrained control group and transcribed the same sentences in their unprocessed form. All subjects completed pre- and posttest sessions in which they transcribed vocoded sentences to provide an assessment of training efficacy. Transfer of perceptual learning was assessed using a series of closed-set, nonlinguistic tasks: subjects identified talker gender, discriminated the identity of pairs of talkers, and identified ecologically significant environmental sounds from a closed set of alternatives. Results Although both groups of subjects showed significant pre- to posttest improvements, subjects who transcribed vocoded sentences during training performed significantly better at posttest than subjects in the control group. Both groups performed equally well on gender identification and talker discrimination. Subjects who received explicit training on the vocoded sentences, however, performed significantly better on environmental sound identification than the untrained subjects. Moreover, across both groups, pretest speech performance, and to a higher degree posttest speech performance, were significantly correlated with environmental sound identification. For both groups, environmental sounds that were characterized as having more salient temporal information were identified more often than environmental sounds that were characterized as having more salient spectral information. Conclusions Listeners trained to identify noise-vocoded sentences showed evidence of transfer of perceptual learning to the identification of environmental sounds. In addition, the correlation between environmental sound identification and sentence transcription indicates that subjects who were better able to utilize the degraded acoustic information to identify the environmental sounds were also better able to transcribe the linguistic content of novel sentences. Both trained and untrained groups performed equally well (~75% correct) on the gender identification task, indicating that training did not have an effect on the ability to identify the gender of talkers. Although better than chance, performance on the talker discrimination task was poor overall (~55%), suggesting that either explicit training is required to reliably discriminate talkers’ voices, or that additional information (perhaps spectral in nature) not present in the vocoded speech is required to excel in such tasks. Taken together, the results suggest that while transfer of auditory perceptual learning with spectrally degraded speech does occur, explicit task-specific training may be necessary for tasks that cannot rely on temporal information alone. PMID:19773659

  7. Contralateral Bimodal Stimulation: A Way to Enhance Speech Performance in Arabic-Speaking Cochlear Implant Patients.

    PubMed

    Abdeltawwab, Mohamed M; Khater, Ahmed; El-Anwar, Mohammad W

    2016-01-01

    The combination of acoustic and electric stimulation as a way to enhance speech recognition performance in cochlear implant (CI) users has generated considerable interest in recent years. The purpose of this study was to evaluate the bimodal advantage of the FS4 speech processing strategy in combination with hearing aids (HA) as a means to improve low-frequency resolution in CI patients. Nineteen postlingual CI adults were selected to participate in this study. All patients wore implants on one side and HA on the contralateral side with residual hearing. Monosyllabic word recognition, speech in noise, and emotion and talker identification were assessed using CI with fine structure processing/FS4 and high-definition continuous interleaved sampling strategies, HA alone, and a combination of CI and HA. The bimodal stimulation showed improvement in speech performance and emotion identification for the question/statement/order tasks, which was statistically significant compared to patients with CI alone, but there were no significant statistical differences in intragender talker discrimination and emotion identification for the happy/angry/neutral tasks. The poorest performance was obtained with the HA alone, and the difference from the other modalities was statistically significant. The bimodal stimulation showed enhanced speech performance in CI patients, and it mitigates the limitations of electric or acoustic stimulation alone. © 2016 S. Karger AG, Basel.

  8. An Acoustic and Social Dialect Analysis of Perceptual Variables in Listener Identification and Rating of Negro Speakers. Final Report.

    ERIC Educational Resources Information Center

    Bryden, James D.

    The purpose of this study was to specify variables which function significantly in the racial identification and speech quality rating of Negro and white speakers by Negro and white listeners. Ninety-one adults served as subjects for the speech task; 86 of these subjects, 43 Negro and 43 white, provided the listener responses. Subjects were chosen…

  9. Approaches for Language Identification in Mismatched Environments

    DTIC Science & Technology

    2016-09-08

    different i-vector systems are considered, which differ in their feature extraction mechanism. The first, which we refer to as the standard i-vector, or...both conversational telephone speech and narrowband broadcast speech. Multiple experiments are conducted to assess the performance of the system in...bottleneck features using i-vectors. The proposed system results in a 30% improvement over the baseline result. Index Terms: language identification

  10. Differential neural contributions to native- and foreign-language talker identification

    PubMed Central

    Perrachione, Tyler K.; Pierrehumbert, Janet B.; Wong, Patrick C.M.

    2009-01-01

    Humans are remarkably adept at identifying individuals by the sound of their voice, a behavior supported by the nervous system’s ability to integrate information from voice and speech perception. Talker-identification abilities are significantly impaired when listeners are unfamiliar with the language being spoken. Recent behavioral studies describing the language-familiarity effect implicate functionally integrated neural systems for speech and voice perception, yet specific neuroscientific evidence demonstrating the basis for such integration has not yet been shown. Listeners in the present study learned to identify voices speaking a familiar (native) or unfamiliar (foreign) language. The talker-identification performance of neural circuitry in each cerebral hemisphere was assessed using dichotic listening. To determine the relative contribution of circuitry in each hemisphere to ecological (binaural) talker identification abilities, we compared the predictive capacity of dichotic performance on binaural performance across languages. We found listeners’ right-ear (left hemisphere) performance to be a better predictor of overall accuracy in their native language than a foreign one. The enhanced predictive capacity of the classically language-dominant left-hemisphere on overall talker-identification accuracy demonstrates functionally integrated neural systems for speech and voice perception during natural talker identification. PMID:19968445

  11. Speech Alarms Pilot Study

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Moses, Haifa

    2016-01-01

    Speech alarms have been used extensively in aviation and included in International Building Codes (IBC) and National Fire Protection Association's (NFPA) Life Safety Code. However, they have not been implemented on space vehicles. Previous studies conducted at NASA JSC showed that speech alarms lead to faster identification and higher accuracy. This research evaluated updated speech and tone alerts in a laboratory environment and in the Human Exploration Research Analog (HERA) in a realistic setup.

  12. Cross-Frequency Integration for Consonant and Vowel Identification in Bimodal Hearing

    ERIC Educational Resources Information Center

    Kong, Ying-Yee; Braida, Louis D.

    2011-01-01

    Purpose: Improved speech recognition in binaurally combined acoustic-electric stimulation (otherwise known as "bimodal hearing") could arise when listeners integrate speech cues from the acoustic and electric hearing. The aims of this study were (a) to identify speech cues extracted in electric hearing and residual acoustic hearing in the…

  13. Audiovisual integration in children listening to spectrally degraded speech.

    PubMed

    Maidment, David W; Kang, Hi Jee; Stewart, Hannah J; Amitay, Sygal

    2015-02-01

    The study explored whether visual information improves speech identification in typically developing children with normal hearing when the auditory signal is spectrally degraded. Children (n=69) and adults (n=15) were presented with noise-vocoded sentences from the Children's Co-ordinate Response Measure (Rosen, 2011) in auditory-only or audiovisual conditions. The number of bands was adaptively varied to modulate the degradation of the auditory signal, with the number of bands required for approximately 79% correct identification calculated as the threshold. The youngest children (4- to 5-year-olds) did not benefit from accompanying visual information, in comparison to 6- to 11-year-old children and adults. Audiovisual gain also increased with age in the child sample. The current data suggest that children younger than 6 years of age do not fully utilize visual speech cues to enhance speech perception when the auditory signal is degraded. This evidence not only has implications for understanding the development of speech perception skills in children with normal hearing but may also inform the development of new treatment and intervention strategies that aim to remediate speech perception difficulties in pediatric cochlear implant users.
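
    A minimal noise-vocoder sketch of the kind used to spectrally degrade the sentences above; the band edges, filter orders, and envelope cutoff are illustrative assumptions, not the study's parameters.

```python
# Noise vocoder: band-pass the signal, extract each band's envelope, and use it to
# modulate band-limited noise; sum the modulated bands.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, band_edges_hz, env_cutoff_hz=30.0):
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    sos_env = butter(2, env_cutoff_hz / (fs / 2.0), btype="low", output="sos")
    for lo, hi in band_edges_hz:
        sos_band = butter(4, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="band", output="sos")
        band = sosfiltfilt(sos_band, x)
        env = np.maximum(sosfiltfilt(sos_env, np.abs(hilbert(band))), 0.0)  # smoothed envelope
        carrier = sosfiltfilt(sos_band, rng.standard_normal(len(x)))        # band-limited noise
        out += env * carrier
    return out

fs = 16000
bands = [(100, 392), (392, 975), (975, 2158), (2158, 4000)]   # hypothetical 4-band split
x = np.random.randn(fs)                                       # stands in for a sentence
y = noise_vocode(x, fs, bands)
```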

  14. Masking release for words in amplitude-modulated noise as a function of modulation rate and task

    PubMed Central

    Buss, Emily; Whittle, Lisa N.; Grose, John H.; Hall, Joseph W.

    2009-01-01

    For normal-hearing listeners, masked speech recognition can improve with the introduction of masker amplitude modulation. The present experiments tested the hypothesis that this masking release is due in part to an interaction between the temporal distribution of cues necessary to perform the task and the probability of those cues temporally coinciding with masker modulation minima. Stimuli were monosyllabic words masked by speech-shaped noise, and masker modulation was introduced via multiplication with a raised sinusoid of 2.5–40 Hz. Tasks included detection, three-alternative forced-choice identification, and open-set identification. Overall, there was more masking release associated with the closed than the open-set tasks. The best rate of modulation also differed as a function of task; whereas low modulation rates were associated with best performance for the detection and three-alternative identification tasks, performance improved with modulation rate in the open-set task. This task-by-rate interaction was also observed when amplitude-modulated speech was presented in a steady masker, and for low- and high-pass filtered speech presented in modulated noise. These results were interpreted as showing that the optimal rate of amplitude modulation depends on the temporal distribution of speech cues and the information required to perform a particular task. PMID:19603883
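
    The masker-modulation manipulation above (multiplication by a raised sinusoid) can be sketched as follows; modulation depth and phase are illustrative choices.

```python
# Multiply a speech-shaped noise masker by a raised sinusoid at a chosen rate.
import numpy as np

def raised_sine_modulate(masker, fs, rate_hz):
    t = np.arange(len(masker)) / fs
    modulator = 0.5 * (1.0 + np.sin(2 * np.pi * rate_hz * t))   # raised sinusoid in [0, 1]
    return masker * modulator

fs = 16000
masker = np.random.randn(2 * fs)                              # stands in for speech-shaped noise
modulated = raised_sine_modulate(masker, fs, rate_hz=10.0)    # one rate from the 2.5-40 Hz range
```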

  15. Aided speech recognition in single-talker competition by elderly hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Coughlin, Maureen; Humes, Larry

    2004-05-01

    This study examined the speech-identification performance in one-talker interference conditions that increased in complexity while audibility was ensured over a wide bandwidth (200-4000 Hz). Factorial combinations of three independent variables were used to vary the amount of informational masking. These variables were: (1) competition playback direction (forward or reverse); (2) gender match between target and competition talkers (same or different); and (3) target talker uncertainty (one of three possible talkers from trial to trial). Four groups of listeners, two elderly hearing-impaired groups differing in age (65-74 and 75-84 years) and two young normal-hearing groups, were tested. One of the groups of young normal-hearing listeners was tested under acoustically equivalent test conditions and one was tested under perceptually equivalent test conditions. The effect of each independent variable on speech-identification performance and informational masking was generally consistent with expectations. Group differences in the observed informational masking were most pronounced for the oldest group of hearing-impaired listeners. The eight measures of speech-identification performance were found to be strongly correlated with one another, and individual differences in speech understanding performance among the elderly were found to be associated with age and level of education. [Work supported, in part, by NIA.]

  16. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    NASA Astrophysics Data System (ADS)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  17. Profiling Speech and Pausing in Amyotrophic Lateral Sclerosis (ALS) and Frontotemporal Dementia (FTD)

    PubMed Central

    Yunusova, Yana; Graham, Naida L.; Shellikeri, Sanjana; Phuong, Kent; Kulkarni, Madhura; Rochon, Elizabeth; Tang-Wai, David F.; Chow, Tiffany W.; Black, Sandra E.; Zinman, Lorne H.; Green, Jordan R.

    2016-01-01

    Objective This study examines reading aloud in patients with amyotrophic lateral sclerosis (ALS) and those with frontotemporal dementia (FTD) in order to determine whether differences in patterns of speaking and pausing exist between patients with primary motor vs. primary cognitive-linguistic deficits, and in contrast to healthy controls. Design 136 participants were included in the study: 33 controls, 85 patients with ALS, and 18 patients with either the behavioural variant of FTD (FTD-BV) or progressive nonfluent aphasia (FTD-PNFA). Participants with ALS were further divided into 4 non-overlapping subgroups—mild, respiratory, bulbar (with oral-motor deficit) and bulbar-respiratory—based on the presence and severity of motor bulbar or respiratory signs. All participants read a passage aloud. Custom-made software was used to perform speech and pause analyses, and this provided measures of speaking and articulatory rates, duration of speech, and number and duration of pauses. These measures were statistically compared in different subgroups of patients. Results The results revealed clear differences between patient groups and healthy controls on the passage reading task. A speech-based motor function measure (i.e., articulatory rate) was able to distinguish patients with bulbar ALS or FTD-PNFA from those with respiratory ALS or FTD-BV. Distinguishing the disordered groups proved challenging based on the pausing measures. Conclusions and Relevance This study demonstrated the use of speech measures in the identification of those with an oral-motor deficit, and showed the usefulness of performing a relatively simple reading test to assess speech versus pause behaviors across the ALS—FTD disease continuum. The findings also suggest that motor speech assessment should be performed as part of the diagnostic workup for patients with FTD. PMID:26789001
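
    A hedged sketch of energy-threshold pause detection of the sort used for the speech/pause profiling above; the frame length, silence threshold, and minimum pause duration are assumptions, and the study's custom software likely differs in detail.

```python
# Frame-energy pause detection: mark frames below a relative level threshold as silent
# and keep silent runs longer than a minimum pause duration.
import numpy as np

def detect_pauses(x, fs, frame_ms=20, thresh_db=-35.0, min_pause_ms=200):
    """Return a list of (onset_s, duration_s) pauses found by frame-energy thresholding."""
    frame = int(fs * frame_ms / 1000)
    n_frames = len(x) // frame
    rms = np.array([np.sqrt(np.mean(x[i * frame:(i + 1) * frame] ** 2)) + 1e-12
                    for i in range(n_frames)])
    level_db = 20 * np.log10(rms / rms.max())      # level relative to the loudest frame
    silent = level_db < thresh_db
    pauses, start = [], None
    for i, s in enumerate(silent):
        if s and start is None:
            start = i                              # pause begins
        elif not s and start is not None:
            dur_ms = (i - start) * frame_ms
            if dur_ms >= min_pause_ms:             # keep only pauses above the minimum length
                pauses.append((start * frame_ms / 1000.0, dur_ms / 1000.0))
            start = None
    if start is not None and (n_frames - start) * frame_ms >= min_pause_ms:
        pauses.append((start * frame_ms / 1000.0, (n_frames - start) * frame_ms / 1000.0))
    return pauses
```

    Articulatory rate can then be approximated as syllables per second of speech time, i.e., total passage duration minus the summed pause durations.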

  18. Speech-language therapy for adolescents with written-language difficulties: the South African context.

    PubMed

    Erasmus, D; Schutte, L; van der Merwe, M; Geertsema, S

    2013-12-01

    To investigate whether privately practising speech-language therapists in South Africa are fulfilling their role of identification, assessment and intervention for adolescents with written-language and reading difficulties. Further needs concerning training with regard to this population group were also determined. A survey study was conducted, using a self-administered questionnaire. Twenty-two currently practising speech-language therapists who are registered members of the South African Speech-Language-Hearing Association (SASLHA) participated in the study. The respondents indicated that they are aware of their role regarding adolescents with written-language difficulties. However, they feel that South African speech-language therapists are not fulfilling this role. Existing assessment tools and interventions for written-language difficulties are described as inadequate, and culturally and age inappropriate. Yet, the majority of the respondents feel that they are adequately equipped to work with adolescents with written-language difficulties, based on their own experience, self-study and secondary training. The respondents feel that training regarding effective collaboration with teachers is necessary to establish specific roles, and to promote speech-language therapy for adolescents among teachers. Further research is needed into developing appropriate assessment and intervention tools, as well as improving training at an undergraduate level.

  19. A comparative intelligibility study of single-microphone noise reduction algorithms.

    PubMed

    Hu, Yi; Loizou, Philipos C

    2007-09-01

    An evaluation of the intelligibility of speech processed by noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise including babble, car, street and train at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, sub-space, statistical model based and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms that were found in previous studies to perform best in terms of overall quality were not the same algorithms that performed best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.
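
    Spectral subtraction is one of the algorithm classes evaluated above. As a point of reference, a minimal single-channel magnitude spectral subtraction might look like the sketch below; the noise estimate from the first frames, the over-subtraction factor and the spectral floor are illustrative assumptions rather than any of the specific algorithms tested in the study.

      import numpy as np

      def spectral_subtraction(noisy, sr, frame=512, hop=256,
                               noise_frames=10, oversub=1.0, floor=0.02):
          """Minimal magnitude spectral subtraction with a spectral floor.

          The noise spectrum is estimated from the first `noise_frames`
          frames, which assumes the recording begins with noise only.
          """
          win = np.hanning(frame)
          n_frames = 1 + (len(noisy) - frame) // hop
          out = np.zeros(len(noisy))

          # average magnitude spectrum of the leading (noise-only) frames
          noise_mag = np.zeros(frame // 2 + 1)
          for i in range(noise_frames):
              seg = noisy[i*hop:i*hop+frame] * win
              noise_mag += np.abs(np.fft.rfft(seg))
          noise_mag /= noise_frames

          for i in range(n_frames):
              seg = noisy[i*hop:i*hop+frame] * win
              spec = np.fft.rfft(seg)
              mag, phase = np.abs(spec), np.angle(spec)
              clean_mag = np.maximum(mag - oversub * noise_mag, floor * mag)
              out[i*hop:i*hop+frame] += np.fft.irfft(clean_mag * np.exp(1j * phase)) * win
          return out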

  20. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    PubMed

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35% identification of /apa/ compared to ~5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.

  1. Timing in Audiovisual Speech Perception: A Mini Review and New Psychophysical Data

    PubMed Central

    Venezia, Jonathan H.; Thurman, Steven M.; Matchin, William; George, Sahara E.; Hickok, Gregory

    2015-01-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually-relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (∼35% identification of /apa/ compared to ∼5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually-relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (∼130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content. PMID:26669309

  2. Status Report on Speech Research: A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for Its Investigation, and Practical Applications, July 1 - December 31, 1977).

    ERIC Educational Resources Information Center

    Haskins Labs., New Haven, CT.

    This report is one of a regular series about the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. The 17 papers discuss the identification of sine-wave analogues of speech sounds; prosodic information for vowel identity; progressive changes in articulatory patterns in verbal…

  3. Effects of Audio-Visual Integration on the Detection of Masked Speech and Non-Speech Sounds

    ERIC Educational Resources Information Center

    Eramudugolla, Ranmalee; Henderson, Rachel; Mattingley, Jason B.

    2011-01-01

    Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that…

  4. Intelligibility as a clinical outcome measure following intervention with children with phonologically based speech-sound disorders.

    PubMed

    Lousada, M; Jesus, Luis M T; Hall, A; Joffe, V

    2014-01-01

    The effectiveness of two treatment approaches (phonological therapy and articulation therapy) for treatment of 14 children, aged 4;0-6;7 years, with phonologically based speech-sound disorder (SSD) has been previously analysed with severity outcome measures (percentage of consonants correct score, percentage occurrence of phonological processes and phonetic inventory). Considering that the ultimate goal of intervention for children with phonologically based SSD is to improve intelligibility, it is curious that intervention studies focusing on children's phonology do not routinely use intelligibility as an outcome measure. It is therefore important that the impact of interventions on speech intelligibility is explored. This paper investigates the effectiveness of the two treatment approaches (phonological therapy and articulation therapy) using intelligibility measures, both in single words and in continuous speech, as the primary outcome. Fourteen children with phonologically based SSD participated in the intervention. The children were randomly assigned to phonological therapy or articulation therapy (seven children in each group). Two assessment methods were used for measuring intelligibility: a word identification task (for single words) and a rating scale (for continuous speech). Twenty-one unfamiliar adults listened and judged the children's intelligibility. Reliability analyses showed overall high agreement between listeners across both methods. Significant improvements were noted in intelligibility in both single words (paired t(6)=4.409, p=0.005) and continuous speech (asymptotic Z=2.371, p=0.018) for the group receiving phonology therapy pre- to post-treatment, but no differences in intelligibility were found for those receiving the articulation therapy pre- to post-treatment, either for single words (paired t(6)=1.763, p=0.128) or continuous speech (asymptotic Z=1.442, p=0.149). Intelligibility measures were sensitive enough to show changes in the phonological therapy group but not in the articulation therapy group. These findings emphasize the importance of using intelligibility as an outcome measure to complement the results obtained with other severity measures when exploring the effectiveness of speech interventions. This study presents new evidence for the effectiveness of phonological therapy in improving intelligibility with children with SSD. © 2014 Royal College of Speech and Language Therapists.

  5. Transfer of auditory perceptual learning with spectrally reduced speech to speech and nonspeech tasks: implications for cochlear implants.

    PubMed

    Loebach, Jeremy L; Pisoni, David B; Svirsky, Mario A

    2009-12-01

    The objective of this study was to assess whether training on speech processed with an eight-channel noise vocoder to simulate the output of a cochlear implant would produce transfer of auditory perceptual learning to the recognition of nonspeech environmental sounds, the identification of speaker gender, and the discrimination of talkers by voice. Twenty-four normal-hearing subjects were trained to transcribe meaningful English sentences processed with a noise vocoder simulation of a cochlear implant. An additional 24 subjects served as an untrained control group and transcribed the same sentences in their unprocessed form. All subjects completed pre- and post-test sessions in which they transcribed vocoded sentences to provide an assessment of training efficacy. Transfer of perceptual learning was assessed using a series of closed set, nonlinguistic tasks: subjects identified talker gender, discriminated the identity of pairs of talkers, and identified ecologically significant environmental sounds from a closed set of alternatives. Although both groups of subjects showed significant pre- to post-test improvements, subjects who transcribed vocoded sentences during training performed significantly better at post-test than those in the control group. Both groups performed equally well on gender identification and talker discrimination. Subjects who received explicit training on the vocoded sentences, however, performed significantly better on environmental sound identification than the untrained subjects. Moreover, across both groups, pre-test speech performance and, to a higher degree, post-test speech performance, were significantly correlated with environmental sound identification. For both groups, environmental sounds that were characterized as having more salient temporal information were identified more often than environmental sounds that were characterized as having more salient spectral information. Listeners trained to identify noise-vocoded sentences showed evidence of transfer of perceptual learning to the identification of environmental sounds. In addition, the correlation between environmental sound identification and sentence transcription indicates that subjects who were better able to use the degraded acoustic information to identify the environmental sounds were also better able to transcribe the linguistic content of novel sentences. Both trained and untrained groups performed equally well (approximately 75% correct) on the gender-identification task, indicating that training did not have an effect on the ability to identify the gender of talkers. Although better than chance, performance on the talker discrimination task was poor overall (approximately 55%), suggesting that either explicit training is required to discriminate talkers' voices reliably or that additional information (perhaps spectral in nature) not present in the vocoded speech is required to excel in such tasks. Taken together, the results suggest that although transfer of auditory perceptual learning with spectrally degraded speech does occur, explicit task-specific training may be necessary for tasks that cannot rely on temporal information alone.
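
    The training stimuli in the study above were produced with an eight-channel noise vocoder. A minimal noise-vocoder sketch of the general technique is shown below; the band edges, filter orders and envelope cut-off are illustrative choices, not the parameters of the cited simulation, and a sampling rate of at least 16 kHz is assumed.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def noise_vocode(speech, sr, n_channels=8, lo=100.0, hi=7000.0, env_cut=160.0):
          """Minimal n-channel noise vocoder (cochlear-implant simulation sketch).

          Band edges are spaced logarithmically between lo and hi; each band's
          envelope (rectify + low-pass) modulates band-limited noise.
          """
          edges = np.geomspace(lo, hi, n_channels + 1)
          rng = np.random.default_rng(0)
          noise = rng.standard_normal(len(speech))
          out = np.zeros(len(speech))
          b_env, a_env = butter(2, env_cut, btype="low", fs=sr)

          for k in range(n_channels):
              b, a = butter(4, [edges[k], edges[k + 1]], btype="bandpass", fs=sr)
              band = filtfilt(b, a, speech)                 # analysis band
              envelope = filtfilt(b_env, a_env, np.abs(band))
              carrier = filtfilt(b, a, noise)               # band-limited noise
              out += envelope * carrier
          return out / n_channels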

  6. A mathematical model of vowel identification by users of cochlear implants

    PubMed Central

    Sagi, Elad; Meyer, Ted A.; Kaiser, Adam R.; Teoh, Su Wooi; Svirsky, Mario A.

    2010-01-01

    A simple mathematical model is presented that predicts vowel identification by cochlear implant users based on these listeners’ resolving power for the mean locations of first, second, and/or third formant energies along the implanted electrode array. This psychophysically based model provides hypotheses about the mechanism cochlear implant users employ to encode and process the input auditory signal to extract information relevant for identifying steady-state vowels. Using one free parameter, the model predicts most of the patterns of vowel confusions made by users of different cochlear implant devices and stimulation strategies, and who show widely different levels of speech perception (from near chance to near perfect). Furthermore, the model can predict results from the literature, such as Skinner, et al. [(1995). Ann. Otol. Rhinol. Laryngol. 104, 307–311] frequency mapping study, and the general trend in the vowel results of Zeng and Galvin’s [(1999). Ear Hear. 20, 60–74] studies of output electrical dynamic range reduction. The implementation of the model presented here is specific to vowel identification by cochlear implant users, but the framework of the model is more general. Computational models such as the one presented here can be useful for advancing knowledge about speech perception in hearing impaired populations, and for providing a guide for clinical research and clinical practice. PMID:20136228
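
    The published model predicts vowel confusions from a listener's resolving power for mean formant locations along the electrode array, with a single free parameter. The Monte-Carlo sketch below illustrates the general idea under simplifying assumptions (Gaussian location noise and nearest-template decisions); the vowel labels and electrode locations are hypothetical, and this is not the exact formulation of the published model.

      import numpy as np

      def predict_confusions(vowel_locs, sigma, n_trials=10000, rng=None):
          """Location-based vowel identification sketch.

          vowel_locs: dict mapping vowel label -> array of mean formant
                      locations (e.g., F1 and F2 places along the array).
          sigma:      single free parameter, the listener's resolving power
                      (standard deviation of location noise, in electrodes).
          Returns a row-stochastic confusion matrix (presented x responded).
          """
          rng = np.random.default_rng() if rng is None else rng
          labels = list(vowel_locs)
          templates = np.array([vowel_locs[v] for v in labels], dtype=float)
          conf = np.zeros((len(labels), len(labels)))

          for i, _ in enumerate(labels):
              noisy = templates[i] + rng.normal(0.0, sigma,
                                                size=(n_trials, templates.shape[1]))
              # respond with the nearest vowel template in location space
              d = np.linalg.norm(noisy[:, None, :] - templates[None, :, :], axis=2)
              for p in d.argmin(axis=1):
                  conf[i, p] += 1
          return conf / n_trials

      # hypothetical electrode locations (F1, F2) for three vowels
      locs = {"i": np.array([3.0, 18.0]),
              "a": np.array([8.0, 12.0]),
              "u": np.array([4.0, 9.0])}
      print(predict_confusions(locs, sigma=1.5, n_trials=2000))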

  7. Talker identification across source mechanisms: experiments with laryngeal and electrolarynx speech.

    PubMed

    Perrachione, Tyler K; Stepp, Cara E; Hillman, Robert E; Wong, Patrick C M

    2014-10-01

    The purpose of this study was to determine listeners' ability to learn talker identity from speech produced with an electrolarynx, explore source and filter differentiation in talker identification, and describe acoustic-phonetic changes associated with electrolarynx use. Healthy adult control listeners learned to identify talkers from speech recordings produced using talkers' normal laryngeal vocal source or an electrolarynx. Listeners' abilities to identify talkers from the trained vocal source (Experiment 1) and generalize this knowledge to the untrained source (Experiment 2) were assessed. Acoustic-phonetic measurements of spectral differences between source mechanisms were performed. Additional listeners attempted to match recordings from different source mechanisms to a single talker (Experiment 3). Listeners successfully learned talker identity from electrolarynx speech but less accurately than from laryngeal speech. Listeners were unable to generalize talker identity to the untrained source mechanism. Electrolarynx use resulted in vowels with higher F1 frequencies compared with laryngeal speech. Listeners matched recordings from different sources to a single talker better than chance. Electrolarynx speech, although lacking individual differences in voice quality, nevertheless conveys sufficient indexical information related to the vocal filter and articulation for listeners to identify individual talkers. Psychologically, perception of talker identity arises from a "gestalt" of the vocal source and filter.

  8. Cost of speech-language interventions for children and youth with foetal alcohol spectrum disorder in Canada.

    PubMed

    Popova, Svetlana; Lange, Shannon; Burd, Larry; Shield, Kevin; Rehm, Jürgen

    2014-12-01

    This study, which is part of a large economic project on the overall burden and cost associated with Foetal Alcohol Spectrum Disorder (FASD) in Canada, estimated the cost of 1:1 speech-language interventions among children and youth with FASD for Canada in 2011. The number of children and youth with FASD and speech-language disorder(s) (SLD), the distribution of the level of severity, and the number of hours needed to treat were estimated using data from the available literature. The cost of 1:1 speech-language interventions was computed using the average cost per hour for speech-language pathologists. It was estimated that approximately 37,928 children and youth with FASD had SLD in Canada in 2011. Using the most conservative approach, the annual cost of 1:1 speech-language interventions among children and youth with FASD is substantial, ranging from $72.5 million to $144.1 million Canadian dollars. Speech-language pathologists should be aware of the disproportionate number of children and youth with FASD who have SLD and the need for early identification to improve access to early intervention. Early identification and access to high quality services may have a role in decreasing the risk of developing secondary disabilities and in reducing the economic burden of FASD on society.

  9. Talker identification across source mechanisms: Experiments with laryngeal and electrolarynx speech

    PubMed Central

    Perrachione, Tyler K.; Stepp, Cara E.; Hillman, Robert E.; Wong, Patrick C.M.

    2015-01-01

    Purpose: To determine listeners' ability to learn talker identity from speech produced with an electrolarynx, explore source and filter differentiation in talker identification, and describe acoustic-phonetic changes associated with electrolarynx use. Method: Healthy adult control listeners learned to identify talkers from speech recordings produced using talkers' normal laryngeal vocal source or an electrolarynx. Listeners' abilities to identify talkers from the trained vocal source (Experiment 1) and generalize this knowledge to the untrained source (Experiment 2) were assessed. Acoustic-phonetic measurements of spectral differences between source mechanisms were performed. Additional listeners attempted to match recordings from different source mechanisms to a single talker (Experiment 3). Results: Listeners successfully learned talker identity from electrolarynx speech, but less accurately than from laryngeal speech. Listeners were unable to generalize talker identity to the untrained source mechanism. Electrolarynx use resulted in vowels with higher F1 frequencies compared to laryngeal speech. Listeners matched recordings from different sources to a single talker better than chance. Conclusions: Electrolarynx speech, though lacking individual differences in voice quality, nevertheless conveys sufficient indexical information related to the vocal filter and articulation for listeners to identify individual talkers. Psychologically, perception of talker identity arises from a “gestalt” of the vocal source and filter. PMID:24801962

  10. Speech watermarking: an approach for the forensic analysis of digital telephonic recordings.

    PubMed

    Faundez-Zanuy, Marcos; Lucena-Molina, Jose J; Hagmüller, Martin

    2010-07-01

    In this article, the authors discuss the problem of forensic authentication of digital audio recordings. Although forensic audio has been addressed in several articles, the existing approaches are focused on analog magnetic recordings, which are less prevalent because of the large number of digital recorders available on the market (optical, solid state, hard disks, etc.). An approach based on digital signal processing that consists of spread spectrum techniques for speech watermarking is presented. This approach has the advantage that the authentication is based on the signal itself rather than the recording format. Thus, it is valid for the usual recording devices in police-controlled telephone intercepts. In addition, our proposal allows for the introduction of relevant information such as the recording date and time and all the relevant data (this is not always possible with classical systems). Our experimental results reveal that the speech watermarking procedure does not significantly interfere with subsequent forensic speaker identification.
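
    The watermarking approach described above is based on spread-spectrum techniques. A minimal additive spread-spectrum embed/detect pair is sketched below; the embedding strength alpha, chip rate and PN seed are illustrative assumptions, and the detector shown is a simple blind correlator rather than the authors' system.

      import numpy as np

      def embed_watermark(speech, bits, chip_rate, alpha=0.005, seed=7):
          """Additive spread-spectrum watermarking sketch.

          Each bit is spread over `chip_rate` samples using a pseudo-random
          +/-1 sequence known to the detector; alpha controls inaudibility.
          Assumes len(speech) >= len(bits) * chip_rate.
          """
          rng = np.random.default_rng(seed)
          pn = rng.choice([-1.0, 1.0], size=len(bits) * chip_rate)
          symbols = np.repeat(np.where(np.asarray(bits) > 0, 1.0, -1.0), chip_rate)
          marked = speech.copy()
          marked[:len(symbols)] += alpha * pn * symbols
          return marked

      def detect_watermark(marked, n_bits, chip_rate, seed=7):
          """Recover the bits by correlating against the same PN sequence.

          Host-signal interference is ignored here; practical detectors
          whiten or filter the signal first.
          """
          rng = np.random.default_rng(seed)
          pn = rng.choice([-1.0, 1.0], size=n_bits * chip_rate)
          despread = marked[:n_bits * chip_rate] * pn
          corr = despread.reshape(n_bits, chip_rate).sum(axis=1)
          return (corr > 0).astype(int)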

  11. The Mechanism of Speech Processing in Congenital Amusia: Evidence from Mandarin Speakers

    PubMed Central

    Liu, Fang; Jiang, Cunmei; Thompson, William Forde; Xu, Yi; Yang, Yufang; Stewart, Lauren

    2012-01-01

    Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways. Further studies on a larger sample of Mandarin amusics and on amusics of other language backgrounds are needed to consolidate these results. PMID:22347374

  12. The mechanism of speech processing in congenital amusia: evidence from Mandarin speakers.

    PubMed

    Liu, Fang; Jiang, Cunmei; Thompson, William Forde; Xu, Yi; Yang, Yufang; Stewart, Lauren

    2012-01-01

    Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways. Further studies on a larger sample of Mandarin amusics and on amusics of other language backgrounds are needed to consolidate these results.

  13. Voice recognition through phonetic features with Punjabi utterances

    NASA Astrophysics Data System (ADS)

    Kaur, Jasdeep; Juglan, K. C.; Sharma, Vishal; Upadhyay, R. K.

    2017-07-01

    This paper deals with the perception and disorders of speech in the Punjabi language. Given the importance of voice identification, various parameters for speaker identification were studied. The speech material was recorded with a tape recorder in both normal and disguised modes of utterance. From the recorded material, utterances free from noise were selected for auditory and acoustic spectrographic analysis. A comparison of the normal and disguised speech of seven subjects is reported. The fundamental frequency (F0) at comparable locations, plosive duration at selected phonemes, and the amplitude ratio (A1:A2) were compared across normal and disguised speech. The formant frequencies of normal and disguised speech remained nearly identical only when compared at positions of the same vowel quality and quantity; if the vowel was more closed or more open in the disguised utterance, the formant frequency shifted relative to the normal utterance. The amplitude ratio (A1:A2) was found to be speaker dependent and remained unchanged in the disguised utterance, although this value may shift if cross-sectioning is not performed at the same location.

  14. Efficacy of Multiple-Talker Phonetic Identification Training in Postlingually Deafened Cochlear Implant Listeners.

    PubMed

    Miller, Sharon E; Zhang, Yang; Nelson, Peggy B

    2016-02-01

    This study implemented a pretest-intervention-posttest design to examine whether multiple-talker identification training enhanced phonetic perception of the /ba/-/da/ and /wa/-/ja/ contrasts in adult listeners who were deafened postlingually and have cochlear implants (CIs). Nine CI recipients completed 8 hours of identification training using a custom-designed training package. Perception of speech produced by familiar talkers (talkers used during training) and unfamiliar talkers (talkers not used during training) was measured before and after training. Five additional untrained CI recipients completed identical pre- and posttests over the same time course as the trainees to control for procedural learning effects. Perception of the speech contrasts produced by the familiar talkers significantly improved for the trained CI listeners, and effects of perceptual learning transferred to unfamiliar talkers. Such training-induced significant changes were not observed in the control group. The data provide initial evidence of the efficacy of the multiple-talker identification training paradigm for CI users who were deafened postlingually. This pattern of results is consistent with enhanced phonemic categorization of the trained speech sounds.

  15. On The (Un)importance of Working Memory in Speech-in-Noise Processing for Listeners with Normal Hearing Thresholds.

    PubMed

    Füllgrabe, Christian; Rosen, Stuart

    2016-01-01

    With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in the processing of speech in noise (SiN). The psychological construct that has received much interest in recent years is working memory. Empirical evidence indeed confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. However, some theoretical models propose that variations in WMC are an important predictor for variations in speech processing abilities in adverse perceptual conditions for all listeners, and this notion has become widely accepted within the field. To assess whether WMC also plays a role when listeners without hearing loss process speech in adverse listening conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification, using sentence material routinely used in audiological and hearing research. A meta-analysis revealed that, for young listeners with audiometrically normal hearing, individual variations in WMC are estimated to account for, on average, less than 2% of the variance in SiN identification scores. This result cautions against the (intuitively appealing) assumption that individual variations in WMC are predictive of SiN identification independently of the age and hearing status of the listener.

  16. Noise Reduction with Microphone Arrays for Speaker Identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, Z

    Reducing acoustic noise in audio recordings is an ongoing problem that plagues many applications. This noise is hard to reduce because of interfering sources and the non-stationary behavior of the overall background noise. Many single channel noise reduction algorithms exist but are limited in that the more the noise is reduced, the more the signal of interest is distorted, because the signal and noise overlap in frequency. Specifically, acoustic background noise causes problems in the area of speaker identification. Recording a speaker in the presence of acoustic noise ultimately limits the performance and confidence of speaker identification algorithms. In situations where it is impossible to control the environment where the speech sample is taken, noise reduction filtering algorithms need to be developed to clean the recorded speech of background noise. Because single channel noise reduction algorithms would distort the speech signal, the overall challenge of this project was to see if spatial information provided by microphone arrays could be exploited to aid in speaker identification. The goals are: (1) Test the feasibility of using microphone arrays to reduce background noise in speech recordings; (2) Characterize and compare different multichannel noise reduction algorithms; (3) Provide recommendations for using these multichannel algorithms; and (4) Ultimately answer the question - Can the use of microphone arrays aid in speaker identification?
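
    The record does not specify which multichannel algorithms were characterized. One standard baseline for exploiting the spatial information provided by a microphone array is delay-and-sum beamforming, sketched below under a far-field, known-direction assumption with integer-sample delays.

      import numpy as np

      def delay_and_sum(channels, sr, mic_positions, source_direction, c=343.0):
          """Delay-and-sum beamformer sketch for a small microphone array.

          channels:         (n_mics, n_samples) synchronously sampled signals.
          mic_positions:    (n_mics, 3) microphone coordinates in metres.
          source_direction: unit vector from the array toward the talker
                            (assumed known); far-field propagation assumed.
          """
          channels = np.asarray(channels, dtype=float)
          n_mics, n_samples = channels.shape
          d = np.asarray(source_direction, dtype=float)

          # plane-wave arrival times relative to the earliest microphone
          arrival = -(mic_positions @ d) / c
          arrival -= arrival.min()

          out = np.zeros(n_samples)
          for m in range(n_mics):
              shift = int(round(arrival[m] * sr))   # advance later arrivals
              out[:n_samples - shift] += channels[m, shift:]
          return out / n_mics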

  17. Melodic contour identification and sentence recognition using sung speech

    PubMed Central

    Crew, Joseph D.; Galvin, John J.; Fu, Qian-Jie

    2015-01-01

    For bimodal cochlear implant users, acoustic and electric hearing has been shown to contribute differently to speech and music perception. However, differences in test paradigms and stimuli in speech and music testing can make it difficult to assess the relative contributions of each device. To address these concerns, the Sung Speech Corpus (SSC) was created. The SSC contains 50 monosyllable words sung over an octave range and can be used to test both speech and music perception using the same stimuli. Here SSC data are presented with normal hearing listeners and any advantage of musicianship is examined. PMID:26428838

  18. Evaluation of a speaker identification system with and without fusion using three databases in the presence of noise and handset effects

    NASA Astrophysics Data System (ADS)

    Al-Kaltakchi, Musab T. S.; Woo, Wai L.; Dlay, Satnam; Chambers, Jonathon A.

    2017-12-01

    In this study, a speaker identification system is considered that consists of a feature extraction stage utilizing both power normalized cepstral coefficients (PNCCs) and Mel frequency cepstral coefficients (MFCCs). Normalization is applied by employing cepstral mean and variance normalization (CMVN) and feature warping (FW), together with acoustic modeling using a Gaussian mixture model-universal background model (GMM-UBM). The main contributions are comprehensive evaluations of the effect of both additive white Gaussian noise (AWGN) and non-stationary noise (NSN) (with and without a G.712 type handset) upon identification performance. In particular, three NSN types with varying signal to noise ratios (SNRs) were tested, corresponding to street traffic, a bus interior, and a crowded talking environment. The performance evaluation also considered the effect of late fusion techniques based on score fusion, namely, mean, maximum, and linear weighted sum fusion. The databases employed were TIMIT, SITW, and NIST 2008, with 120 speakers selected from each database to yield 3600 speech utterances. The study recommends mean fusion, which yields the best overall speaker identification accuracy (SIA) with noisy speech, whereas linear weighted sum fusion is best overall for the original database recordings.
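
    Two concrete components named in this record are cepstral mean and variance normalization (CMVN) and late score fusion (mean, maximum, and linear weighted sum). A minimal sketch of both is given below; the fusion weight w is an illustrative assumption.

      import numpy as np

      def cmvn(features):
          """Cepstral mean and variance normalization over one utterance.

          features: (n_frames, n_coeffs) MFCC or PNCC matrix.
          """
          mu = features.mean(axis=0)
          sd = features.std(axis=0) + 1e-8
          return (features - mu) / sd

      def fuse_scores(scores_a, scores_b, rule="mean", w=0.6):
          """Late (score-level) fusion of two per-speaker score vectors.

          rule: 'mean', 'max', or 'weighted' (linear weighted sum with an
          illustrative weight w). The identified speaker is the argmax.
          """
          a, b = np.asarray(scores_a), np.asarray(scores_b)
          if rule == "mean":
              fused = (a + b) / 2.0
          elif rule == "max":
              fused = np.maximum(a, b)
          else:
              fused = w * a + (1.0 - w) * b
          return fused, int(np.argmax(fused))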

  19. Acoustic and Perceptual Effects of Dysarthria in Greek with a Focus on Lexical Stress

    NASA Astrophysics Data System (ADS)

    Papakyritsis, Ioannis

    The field of motor speech disorders in Greek is substantially underresearched. Additionally, acoustic studies on lexical stress in dysarthria are generally very rare (Kim et al. 2010). This dissertation examined the acoustic and perceptual effects of Greek dysarthria focusing on lexical stress. Additional possibly deviant speech characteristics were acoustically analyzed. Data from three dysarthric participants and matched controls was analyzed using a case study design. The analysis of lexical stress was based on data drawn from a single word repetition task that included pairs of disyllabic words differentiated by stress location. This data was acoustically analyzed in terms of the use of the acoustic cues for Greek stress. The ability of the dysarthric participants to signal stress in single words was further assessed in a stress identification task carried out by 14 naive Greek listeners. Overall, the acoustic and perceptual data indicated that, although all three dysarthric speakers presented with some difficulty in the patterning of stressed and unstressed syllables, each had different underlying problems that gave rise to quite distinct patterns of deviant speech characteristics. The atypical use of lexical stress cues in Anna's data obscured the prominence relations of stressed and unstressed syllables to the extent that the position of lexical stress was usually not perceptually transparent. Chris and Maria on the other hand, did not have marked difficulties signaling lexical stress location, although listeners were not 100% successful in the stress identification task. For the most part, Chris' atypical phonation patterns and Maria's very slow rate of speech did not interfere with lexical stress signaling. The acoustic analysis of the lexical stress cues was generally in agreement with the participants' performance in the stress identification task. Interestingly, in all three dysarthric participants, but more so in Anna, targets stressed on the 1st syllable were more impervious to error judgments of lexical stress location than targets stressed on the 2nd syllable, although the acoustic metrics did not always suggest a more appropriate use of lexical stress cues in 1st syllable position. The findings contribute to our limited knowledge of the speech characteristics of dysarthria across different languages.

  20. Voice Quality Modelling for Expressive Speech Synthesis

    PubMed Central

    Socoró, Joan Claudi

    2014-01-01

    This paper presents the perceptual experiments that were carried out in order to validate the methodology of transforming expressive speech styles using voice quality (VoQ) parameters modelling, along with the well-known prosody (F0, duration, and energy), from a neutral style into a number of expressive ones. The main goal was to validate the usefulness of VoQ in the enhancement of expressive synthetic speech in terms of speech quality and style identification. A harmonic plus noise model (HNM) was used to modify VoQ and prosodic parameters that were extracted from an expressive speech corpus. Perception test results indicated the improvement of obtained expressive speech styles using VoQ modelling along with prosodic characteristics. PMID:24587738

  1. The interaction between awareness of one's own speech disorder with linguistics variables: distinctive features and severity of phonological disorder.

    PubMed

    Dias, Roberta Freitas; Melo, Roberta Michelon; Mezzomo, Carolina Lisbôa; Mota, Helena Bolli

    2013-01-01

    To analyze the possible relationship between awareness of one's own speech disorder and aspects of the phonological system, such as the number and type of altered distinctive features, as well as the interaction between disorder severity and the non-specification of distinctive features. The analyzed group comprised 23 children with a diagnosis of speech disorder, aged 5:0 to 7:7. The speech data were analyzed through Distinctive Features Analysis and classified by the Percentage of Correct Consonants. An awareness-of-one's-own-speech-disorder test was also applied. The children were separated into two groups: those with established awareness of their own speech disorder (more than 50% correct identification) and those without (less than 50% correct identification). The variables were then submitted to descriptive and inferential statistical analysis. The type of altered distinctive features did not differ between the groups, nor did the total number of altered features or the severity of the disorder. However, a correlation between disorder severity and the non-specification of distinctive features was verified, because the more severe disorders showed more changes in these linguistic variables. Awareness of one's own speech disorder does not appear to be directly influenced by the type or number of altered distinctive features, nor by speech disorder severity. Moreover, the greater the phonological disorder severity, the greater the number of altered distinctive features.

  2. Language identification from visual-only speech signals

    PubMed Central

    Ronquest, Rebecca E.; Levi, Susannah V.; Pisoni, David B.

    2010-01-01

    Our goal in the present study was to examine how observers identify English and Spanish from visual-only displays of speech. First, we replicated the recent findings of Soto-Faraco et al. (2007) with Spanish and English bilingual and monolingual observers using different languages and a different experimental paradigm (identification). We found that prior linguistic experience affected response bias but not sensitivity (Experiment 1). In two additional experiments, we investigated the visual cues that observers use to complete the language-identification task. The results of Experiment 2 indicate that some lexical information is available in the visual signal but that it is limited. Acoustic analyses confirmed that our Spanish and English stimuli differed acoustically with respect to linguistic rhythmic categories. In Experiment 3, we tested whether this rhythmic difference could be used by observers to identify the language when the visual stimuli are temporally reversed, thereby eliminating lexical information but retaining rhythmic differences. The participants performed above chance even in the backward condition, suggesting that the rhythmic differences between the two languages may aid language identification in visual-only speech signals. The results of Experiments 3A and 3B also confirm previous findings that increased stimulus length facilitates language identification. Taken together, the results of these three experiments replicate earlier findings and also show that prior linguistic experience, lexical information, rhythmic structure, and utterance length influence visual-only language identification. PMID:20675804

  3. Issues in Identification and Assessment of Children with Autism and a Proposed Resource Toolkit for Speech-Language Pathologists.

    PubMed

    Hus, Yvette

    2017-01-01

    The prevalence of autism spectrum disorder (ASD) has increased significantly in the last decade as have treatment choices. Nonetheless, the vastly diverse autism topic includes issues related to naming, description, identification, assessment, and differentiation from other neurodevelopmental conditions. ASD issues directly impact speech-language pathologists (SLPs) who often see these children as the second contact, after pediatric medical practitioners. Because of shared symptomology, differentiation among neurodevelopmental disorders is crucial as it impacts treatment, educational choices, and the performance trajectory of affected children. To highlight issues in: identification and differentiation of ASD from other communication and language challenges, the prevalence differences between ASD gender phenotypes, and the insufficient consideration of cultural factors in evaluating ASD in children. A second objective was to propose a tool to assist SLPs in the management of autism in children. A universal resource toolkit development project for SLP communities at large is proposed. The resource is comprised of research-based observation and screening tools for caregivers and educators, as well as parent questionnaires for portraying the children's function in the family, cultural community, and educational setting. © 2017 S. Karger AG, Basel.

  4. Identifying Speech Acts in E-Mails: Toward Automated Scoring of the "TOEIC"® E-Mail Task. Research Report. ETS RR-12-16

    ERIC Educational Resources Information Center

    De Felice, Rachele; Deane, Paul

    2012-01-01

    This study proposes an approach to automatically score the "TOEIC"® Writing e-mail task. We focus on one component of the scoring rubric, which notes whether the test-takers have used particular speech acts such as requests, orders, or commitments. We developed a computational model for automated speech act identification and tested it…

  5. Mother and Father Speech: Distribution of Parental Speech Features in English and Spanish. Papers and Reports on Child Language Development, No. 12.

    ERIC Educational Resources Information Center

    Blount, Ben G.; Padgug, Elise J.

    Features of parental speech to young children were studied in four English-speaking and four Spanish-speaking families. Children ranged in age from 9 to 12 months for the English speakers and from 8 to 22 months for the Spanish speakers. Examination of the utterances led to the identification of 34 prosodic, paralinguistic, and interactional…

  6. Listening with a foreign-accent: The interlanguage speech intelligibility benefit in Mandarin speakers of English

    PubMed Central

    Xie, Xin; Fowler, Carol A.

    2013-01-01

    This study examined the intelligibility of native and Mandarin-accented English speech for native English and native Mandarin listeners. In the latter group, it also examined the role of the language environment and English proficiency. Three groups of listeners were tested: native English listeners (NE), Mandarin-speaking Chinese listeners in the US (M-US) and Mandarin listeners in Beijing, China (M-BJ). As a group, M-US and M-BJ listeners were matched on English proficiency and age of acquisition. A nonword transcription task was used. Identification accuracy for word-final stops in the nonwords established two independent interlanguage intelligibility effects. An interlanguage speech intelligibility benefit for listeners (ISIB-L) was manifested by both groups of Mandarin listeners outperforming native English listeners in identification of Mandarin-accented speech. In the benefit for talkers (ISIB-T), only M-BJ listeners were more accurate at identifying Mandarin-accented speech than native English speech. Thus, both Mandarin groups demonstrated an ISIB-L while only the M-BJ group overall demonstrated an ISIB-T. The English proficiency of listeners was found to modulate the magnitude of the ISIB-T in both groups. Regression analyses also suggested that the listener groups differ in their use of acoustic information to identify voicing in stop consonants. PMID:24293741

  7. Effects of reverberation time on the cognitive load in speech communication: theoretical considerations.

    PubMed

    Kjellberg, A

    2004-01-01

    The paper presents a theoretical analysis of possible effects of reverberation time on the cognitive load in speech communication. Speech comprehension requires not only phonological processing of the spoken words; simultaneously, this information must be further processed and stored. All this processing takes place in working memory, which has a limited processing capacity. The more resources that are allocated to word identification, the fewer resources are therefore left for the further processing and storing of the information. Reverberation conditions that allow the identification of almost all words may therefore still interfere with speech comprehension and memory storing. These problems are likely to be especially serious in situations where speech has to be followed continuously for a long time. An unfavourable reverberation time (RT) could then contribute to the development of cognitive fatigue, which means that working memory resources are gradually reduced. RT may also affect cognitive load in two other ways: it may change the distracting effects of a sound, and it may affect a person's mood. Both effects could influence the cognitive load of a listener. It is argued that we need studies of RT effects in realistic long-lasting listening situations to better understand the effect of RT on speech communication. Furthermore, the effects of RT on distraction and mood need to be better understood.

  8. A proposed mechanism for rapid adaptation to spectrally distorted speech.

    PubMed

    Azadpour, Mahan; Balaban, Evan

    2015-07-01

    The mechanisms underlying perceptual adaptation to severely spectrally-distorted speech were studied by training participants to comprehend spectrally-rotated speech, which is obtained by inverting the speech spectrum. Spectral-rotation produces severe distortion confined to the spectral domain while preserving temporal trajectories. During five 1-hour training sessions, pairs of participants attempted to extract spoken messages from the spectrally-rotated speech of their training partner. Data on training-induced changes in comprehension of spectrally-rotated sentences and identification/discrimination of spectrally-rotated phonemes were used to evaluate the plausibility of three different classes of underlying perceptual mechanisms: (1) phonemic remapping (the formation of new phonemic categories that specifically incorporate spectrally-rotated acoustic information); (2) experience-dependent generation of a perceptual "inverse-transform" that compensates for spectral-rotation; and (3) changes in cue weighting (the identification of sets of acoustic cues least affected by spectral-rotation, followed by a rapid shift in perceptual emphasis to favour those cues, combined with the recruitment of the same type of "perceptual filling-in" mechanisms used to disambiguate speech-in-noise). Results exclusively support the third mechanism, which is the only one predicting that learning would specifically target temporally-dynamic cues that were transmitting phonetic information most stably in spite of spectral-distortion. No support was found for phonemic remapping or for inverse-transform generation.
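
    Spectrally-rotated speech is obtained by inverting the speech spectrum. The sketch below approximates this by reversing the bin order of each short-time spectrum; classic implementations instead rotate about a fixed frequency (e.g., around 2 kHz) after low-pass filtering, so this is a simplified stand-in rather than the stimulus-generation procedure of the cited study.

      import numpy as np

      def spectrally_rotate(signal, frame=1024, hop=256):
          """Approximate spectral rotation via STFT bin reversal.

          Each short-time spectrum is flipped so that low and high
          frequencies swap around the Nyquist midpoint, preserving
          temporal trajectories while severely distorting the spectrum.
          """
          win = np.hanning(frame)
          out = np.zeros(len(signal))
          n_frames = 1 + (len(signal) - frame) // hop
          for i in range(n_frames):
              seg = signal[i*hop:i*hop+frame] * win
              spec = np.fft.rfft(seg)
              rotated = spec[::-1]          # reverse bin order: 0 Hz <-> Nyquist
              out[i*hop:i*hop+frame] += np.fft.irfft(rotated, n=frame) * win
          return out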

  9. Intensive Foreign Language Learning Reveals Effects on Categorical Perception of Sibilant Voicing After Only 3 Weeks

    PubMed Central

    Horn, Nynne Thorup; Sørensen, Stine Derdau; McGregor, William B.; Wallentin, Mikkel

    2015-01-01

    Models of speech learning suggest that adaptations to foreign language sound categories take place within 6 to 12 months of exposure to a foreign language. Results from laboratory language training show effects of very targeted training on nonnative speech contrasts within only 1 to 4 weeks of training. Results from immersion studies are inconclusive, but some suggest continued effects on nonnative speech perception after 6 to 8 years of experience. We investigated this apparent discrepancy in the timing of adaptation to foreign speech sounds in a longitudinal study of foreign language learning. We examined two groups of Danish language officer cadets learning either Arabic (Modern Standard Arabic and Egyptian Arabic) or Dari (Afghan Farsi) through intensive multifaceted language training. We conducted two experiments (identification and discrimination) with the cadets who were tested four times: at the start (T0), after 3 weeks (T1), 6 months (T2), and 19 months (T3). We used a phonemic Arabic contrast (pharyngeal vs. glottal frication) and a phonemic Dari contrast (sibilant voicing) as stimuli. We observed an effect of learning on the Dari learners’ identification of the Dari stimuli already after 3 weeks of language training, which was sustained, but not improved, after 6 and 19 months. The changes in the Dari learners’ identification functions were positively correlated with their grades after 6 months. We observed no other learning effects at the group level. We discuss the results in the light of predictions from speech learning models. PMID:27551355

  10. Brainstem encoding of speech and musical stimuli in congenital amusia: evidence from Cantonese speakers

    PubMed Central

    Liu, Fang; Maggu, Akshay R.; Lau, Joseph C. Y.; Wong, Patrick C. M.

    2015-01-01

    Congenital amusia is a neurodevelopmental disorder of musical processing that also impacts subtle aspects of speech processing. It remains debated at what stage(s) of auditory processing deficits in amusia arise. In this study, we investigated whether amusia originates from impaired subcortical encoding of speech (in quiet and noise) and musical sounds in the brainstem. Fourteen Cantonese-speaking amusics and 14 matched controls passively listened to six Cantonese lexical tones in quiet, two Cantonese tones in noise (signal-to-noise ratios at 0 and 20 dB), and two cello tones in quiet while their frequency-following responses (FFRs) to these tones were recorded. All participants also completed a behavioral lexical tone identification task. The results indicated normal brainstem encoding of pitch in speech (in quiet and noise) and musical stimuli in amusics relative to controls, as measured by FFR pitch strength, pitch error, and stimulus-to-response correlation. There was also no group difference in neural conduction time or FFR amplitudes. Both groups demonstrated better FFRs to speech (in quiet and noise) than to musical stimuli. However, a significant group difference was observed for tone identification, with amusics showing significantly lower accuracy than controls. Analysis of the tone confusion matrices suggested that amusics were more likely than controls to confuse between tones that shared similar acoustic features. Interestingly, this deficit in lexical tone identification was not coupled with brainstem abnormality for either speech or musical stimuli. Together, our results suggest that the amusic brainstem is not functioning abnormally, although higher-order linguistic pitch processing is impaired in amusia. This finding has significant implications for theories of central auditory processing, requiring further investigations into how different stages of auditory processing interact in the human brain. PMID:25646077

  11. Brainstem encoding of speech and musical stimuli in congenital amusia: evidence from Cantonese speakers.

    PubMed

    Liu, Fang; Maggu, Akshay R; Lau, Joseph C Y; Wong, Patrick C M

    2014-01-01

    Congenital amusia is a neurodevelopmental disorder of musical processing that also impacts subtle aspects of speech processing. It remains debated at what stage(s) of auditory processing deficits in amusia arise. In this study, we investigated whether amusia originates from impaired subcortical encoding of speech (in quiet and noise) and musical sounds in the brainstem. Fourteen Cantonese-speaking amusics and 14 matched controls passively listened to six Cantonese lexical tones in quiet, two Cantonese tones in noise (signal-to-noise ratios at 0 and 20 dB), and two cello tones in quiet while their frequency-following responses (FFRs) to these tones were recorded. All participants also completed a behavioral lexical tone identification task. The results indicated normal brainstem encoding of pitch in speech (in quiet and noise) and musical stimuli in amusics relative to controls, as measured by FFR pitch strength, pitch error, and stimulus-to-response correlation. There was also no group difference in neural conduction time or FFR amplitudes. Both groups demonstrated better FFRs to speech (in quiet and noise) than to musical stimuli. However, a significant group difference was observed for tone identification, with amusics showing significantly lower accuracy than controls. Analysis of the tone confusion matrices suggested that amusics were more likely than controls to confuse between tones that shared similar acoustic features. Interestingly, this deficit in lexical tone identification was not coupled with brainstem abnormality for either speech or musical stimuli. Together, our results suggest that the amusic brainstem is not functioning abnormally, although higher-order linguistic pitch processing is impaired in amusia. This finding has significant implications for theories of central auditory processing, requiring further investigations into how different stages of auditory processing interact in the human brain.

  12. Performance of wavelet analysis and neural networks for pathological voices identification

    NASA Astrophysics Data System (ADS)

    Salhi, Lotfi; Talbi, Mourad; Abid, Sabeur; Cherif, Adnane

    2011-09-01

    Within the medical environment, diverse techniques exist to assess the state of a patient's voice. Direct inspection is inconvenient for a number of reasons, such as its high cost, the duration of the examination and, above all, the fact that it is invasive. This study focuses on a robust, rapid and accurate system for automatic identification of pathological voices. The system employs a non-invasive, inexpensive and fully automated method based on a hybrid approach: wavelet transform analysis combined with a neural network classifier. First, we present the results obtained in our previous study using classic feature parameters; these results allow visual identification of pathological voices. Second, quantified parameters derived from the wavelet analysis are proposed to characterise the speech sample. In addition, a multilayer neural network (MNN) system was developed to carry out the automatic detection of pathological voices. The method was evaluated using a voice database composed of recorded voice samples (continuous speech) from normophonic and dysphonic speakers. The dysphonic speakers were patients of the RABTA National Hospital in Tunis, Tunisia, and a university hospital in Brussels, Belgium. Experimental results indicate a success rate ranging between 75% and 98.61% for discrimination of normal and pathological voices using the proposed parameters and neural network classifier. We also compared the average classification rate obtained with the MNN, a Gaussian mixture model and support vector machines.
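
    The hybrid approach above combines wavelet analysis with a neural network classifier. A minimal sketch of that kind of pipeline is shown below using PyWavelets and scikit-learn; the wavelet family, decomposition depth and network size are illustrative assumptions, not the configuration reported in the study.

      import numpy as np
      import pywt                                   # PyWavelets
      from sklearn.neural_network import MLPClassifier

      def wavelet_features(signal, wavelet="db4", level=5):
          """Sub-band log-energies from a multilevel wavelet decomposition."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          return np.array([np.log10(np.sum(c ** 2) + 1e-12) for c in coeffs])

      def train_detector(recordings, labels):
          """Fit a small multilayer perceptron on per-recording wavelet features.

          recordings: list of 1-D arrays (voice samples);
          labels: 0 = normophonic, 1 = dysphonic.
          """
          feats = np.vstack([wavelet_features(x) for x in recordings])
          clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                              random_state=0)
          clf.fit(feats, labels)
          return clf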

  13. Voice gender identification by cochlear implant users: The role of spectral and temporal resolution

    NASA Astrophysics Data System (ADS)

    Fu, Qian-Jie; Chinchilla, Sherol; Nogaki, Geraldine; Galvin, John J.

    2005-09-01

    The present study explored the relative contributions of spectral and temporal information to voice gender identification by cochlear implant users and normal-hearing subjects. Cochlear implant listeners were tested using their everyday speech processors, while normal-hearing subjects were tested under speech processing conditions that simulated various degrees of spectral resolution, temporal resolution, and spectral mismatch. Voice gender identification was tested for two talker sets. In Talker Set 1, the mean fundamental frequency values of the male and female talkers differed by 100 Hz while in Talker Set 2, the mean values differed by 10 Hz. Cochlear implant listeners achieved higher levels of performance with Talker Set 1, while performance was significantly reduced for Talker Set 2. For normal-hearing listeners, performance was significantly affected by the spectral resolution, for both Talker Sets. With matched speech, temporal cues contributed to voice gender identification only for Talker Set 1 while spectral mismatch significantly reduced performance for both Talker Sets. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to 4-8 spectral channels. The results suggest that, because of the reduced spectral resolution, cochlear implant patients may attend strongly to periodicity cues to distinguish voice gender.
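    Simulations of limited spectral resolution for normal-hearing listeners are commonly built as channel (noise) vocoders; the sketch below shows one such vocoder in Python, with filterbank edges, envelope cutoff, channel count, and the toy input all being illustrative assumptions rather than the processing conditions used in this study.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def noise_vocode(speech, fs, n_channels=8, lo=100.0, hi=6000.0, env_cutoff=160.0):
            """Replace each band's fine structure with envelope-modulated noise."""
            edges = np.geomspace(lo, hi, n_channels + 1)      # log-spaced analysis bands
            env_sos = butter(2, env_cutoff, fs=fs, output="sos")
            out = np.zeros_like(speech)
            noise = np.random.randn(len(speech))
            for k in range(n_channels):
                band_sos = butter(4, [edges[k], edges[k + 1]], btype="bandpass",
                                  fs=fs, output="sos")
                band = sosfiltfilt(band_sos, speech)
                env = sosfiltfilt(env_sos, np.abs(band))       # low-passed rectified band = envelope
                carrier = sosfiltfilt(band_sos, noise)         # band-limited noise carrier
                out += np.maximum(env, 0) * carrier
            return out

        fs = 16000
        t = np.arange(0, 0.5, 1 / fs)
        toy_speech = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
        vocoded = noise_vocode(toy_speech, fs)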

  14. Unsupervised real-time speaker identification for daily movies

    NASA Astrophysics Data System (ADS)

    Li, Ying; Kuo, C.-C. Jay

    2002-07-01

    The problem of identifying speakers for movie content analysis is addressed in this paper. While most previous work on speaker identification was carried out in a supervised mode using pure audio data, more robust results can be obtained in real-time by integrating knowledge from multiple media sources in an unsupervised mode. In this work, both audio and visual cues are employed and subsequently combined in a probabilistic framework to identify speakers. In particular, audio information is used to identify speakers with a maximum likelihood (ML)-based approach, while visual information is used to distinguish speakers by detecting and recognizing their talking faces based on face detection/recognition and mouth tracking techniques. Moreover, to accommodate speakers' acoustic variations over time, we update their models on the fly by adapting to their newly contributed speech data. Encouraging results have been achieved through extensive experiments, which demonstrates the promise of the proposed audiovisual-based unsupervised speaker identification system.
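    For the audio side, a maximum-likelihood speaker decision of the kind described is often implemented by scoring frame features against one Gaussian mixture model per speaker; the feature dimensionality, model sizes, and synthetic data below are assumptions for illustration, not the paper's configuration.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        n_dim = 13  # e.g. MFCC-like frame features (hypothetical)

        # Toy "enrollment" data: one feature cloud per speaker
        train = {spk: rng.normal(loc=spk, scale=1.0, size=(500, n_dim)) for spk in range(3)}
        models = {spk: GaussianMixture(n_components=4, covariance_type="diag",
                                       random_state=0).fit(feats)
                  for spk, feats in train.items()}

        def identify(frames, models):
            """ML decision: pick the speaker model with the highest average log-likelihood."""
            scores = {spk: gmm.score(frames) for spk, gmm in models.items()}
            return max(scores, key=scores.get), scores

        test_frames = rng.normal(loc=1, scale=1.0, size=(200, n_dim))   # frames from speaker 1
        print(identify(test_frames, models)[0])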

  15. Divided listening in noise in a mock-up of a military command post.

    PubMed

    Abel, Sharon M; Nakashima, Ann; Smith, Ingrid

    2012-04-01

    This study investigated divided listening in noise in a mock-up of a vehicular command post. The effects on speech understanding of background noise from the vehicle, unattended speech of coworkers, and a visual cue that directed attention to the message source were examined. Sixteen normal-hearing males participated in sixteen listening conditions, defined by combinations of the absence/presence of vehicle and speech babble noises, availability of a visual cue, and number of channels (2 or 3, diotic or dichotic, and loudspeakers) over which concurrent series of call sign, color, and number phrases were presented. All wore a communications headset with integrated hearing protection. A computer keyboard was used to encode phrases beginning with an assigned call sign. Subjects achieved close to 100% correct phrase identification when phrases were presented over the headset (with or without vehicle noise) or over the loudspeakers without vehicle noise. In contrast, the percentage of correct phrase identification was significantly lower, by 30 to 35%, when phrases were presented over loudspeakers with vehicle noise. Vehicle noise combined with babble noise decreased the accuracy by an additional 12% for dichotic listening. The visual cue increased phrase identification accuracy by 7% for diotic listening. Outcomes could be explained by the at-ear energy spectra of the speech and noise.

  16. Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading

    PubMed Central

    O’Sullivan, Aisling E.; Crosse, Michael J.; Di Liberto, Giovanni M.; Lalor, Edmund C.

    2017-01-01

    Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M) and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions and that respective combinations of the individual models (EV, MV, EM and EMV) provide an improved prediction of the neural activity over their constituent models. In comparing these different combinations, we find that the model incorporating all three types of features (EMV) outperforms the individual models, as well as both the EV and MV models, while it performs similarly to the EM model. Importantly, EM does not outperform EV and MV, which, considering the higher dimensionality of the V model, suggests that more data is needed to clarify this finding. Nevertheless, the performance of EMV, and comparisons of the subject performances for the three individual models, provides further evidence to suggest that visual regions are involved in both low-level processing of stimulus dynamics and categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions. PMID:28123363
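    Forward encoding models of this type are often implemented as a linear mapping from time-lagged stimulus features to EEG estimated with ridge regression; the lag window, regularization, and synthetic data in the sketch below are illustrative assumptions, not the authors' modelling choices.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import train_test_split

        def lag_matrix(feature, max_lag):
            """Stack time-shifted copies of a 1-D stimulus feature (lags 0..max_lag samples)."""
            n = len(feature)
            X = np.zeros((n, max_lag + 1))
            for lag in range(max_lag + 1):
                X[lag:, lag] = feature[:n - lag]
            return X

        rng = np.random.default_rng(2)
        fs, n = 64, 64 * 60                         # hypothetical 64 Hz feature/EEG rate, 60 s
        envelope = np.abs(rng.standard_normal(n))   # stand-in for the acoustic envelope feature
        true_trf = np.hanning(16)                   # a made-up "neural" response kernel
        eeg = np.convolve(envelope, true_trf, mode="full")[:n] + rng.standard_normal(n)

        X = lag_matrix(envelope, max_lag=31)        # lags 0..31 samples (~0-480 ms at 64 Hz)
        X_tr, X_te, y_tr, y_te = train_test_split(X, eeg, test_size=0.25, shuffle=False)
        model = Ridge(alpha=10.0).fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print("prediction r =", np.corrcoef(pred, y_te)[0, 1])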

  17. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues, two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or could have added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Effects of noise and working memory capacity on memory processing of speech for hearing-aid users.

    PubMed

    Ng, Elaine Hoi Ning; Rudner, Mary; Lunner, Thomas; Pedersen, Michael Syskind; Rönnberg, Jerker

    2013-07-01

    It has been shown that noise reduction algorithms can reduce the negative effects of noise on memory processing in persons with normal hearing. The objective of the present study was to investigate whether a similar effect can be obtained for persons with hearing impairment and whether such an effect is dependent on individual differences in working memory capacity. A sentence-final word identification and recall (SWIR) test was conducted in two noise backgrounds with and without noise reduction as well as in quiet. Working memory capacity was measured using a reading span (RS) test. Twenty-six experienced hearing-aid users with moderate to moderately severe sensorineural hearing loss participated. Noise impaired recall performance. Competing speech disrupted memory performance more than speech-shaped noise. For late list items, the disruptive effect of the competing speech background was virtually cancelled out by noise reduction for persons with high working memory capacity. Noise reduction can reduce the adverse effect of noise on memory for speech for persons with good working memory capacity. We argue that the mechanism behind this is faster word identification, which enhances encoding into working memory.

  19. Factors that enhance English-speaking speech-language pathologists' transcription of Cantonese-speaking children's consonants.

    PubMed

    Lockart, Rebekah; McLeod, Sharynne

    2013-08-01

    This study investigated speech-language pathology students' ability to identify errors and transcribe typical and atypical speech in Cantonese, a nonnative language. Thirty-three English-speaking speech-language pathology students completed 3 tasks in an experimental within-subjects design. Task 1 (baseline) involved transcribing English words. In Task 2, students transcribed 25 words spoken by a Cantonese adult. An average of 59.1% of consonants was transcribed correctly (72.9% when Cantonese-English transfer patterns were allowed). There was higher accuracy on the shared English and Cantonese syllable-initial consonants /m,n,f,s,h,j,w,l/ and on syllable-final consonants. In Task 3, students identified consonant errors and transcribed 100 words spoken by Cantonese-speaking children under 4 additive conditions: (1) baseline, (2) +adult model, (3) +information about Cantonese phonology, and (4) all variables (2 and 3 were counterbalanced). There was a significant improvement in the students' identification and transcription scores for conditions 2, 3, and 4, with a moderate effect size. Increased skill was not based on listeners' proficiency in speaking another language, perceived transcription skill, musicality, or confidence with multilingual clients. Speech-language pathology students, with no exposure to or specific training in Cantonese, have some skills to identify errors and transcribe Cantonese. Provision of a Cantonese-adult model and information about Cantonese phonology increased students' accuracy in transcribing Cantonese speech.

  20. Data-driven analysis of functional brain interactions during free listening to music and speech.

    PubMed

    Fang, Jun; Hu, Xintao; Han, Junwei; Jiang, Xi; Zhu, Dajiang; Guo, Lei; Liu, Tianming

    2015-06-01

    Natural-stimulus functional magnetic resonance imaging (N-fMRI), such as fMRI acquired while participants watch video streams or listen to audio streams, has been increasingly used to investigate functional mechanisms of the human brain in recent years. One of the fundamental challenges in functional brain mapping based on N-fMRI is to model the brain's functional responses to continuous, naturalistic and dynamic natural stimuli. To address this challenge, in this paper we present a data-driven approach to exploring functional interactions in the human brain during free listening to music and speech streams. Specifically, we model the brain responses using N-fMRI by measuring the functional interactions on large-scale brain networks with intrinsically established structural correspondence, and perform music and speech classification tasks to guide the systematic identification of consistent and discriminative functional interactions when multiple subjects were listening to music and speech in multiple categories. The underlying premise is that the functional interactions derived from N-fMRI data of multiple subjects should exhibit both consistency and discriminability. Our experimental results show that a variety of brain systems, including attention, memory, auditory/language, emotion, and action networks, are among the most relevant brain systems involved in classic music, pop music and speech differentiation. Our study provides an alternative approach to investigating the human brain's mechanisms for comprehending complex natural music and speech.

  1. Comparison of speech perception performance between Sprint/Esprit 3G and Freedom processors in children implanted with nucleus cochlear implants.

    PubMed

    Santarelli, Rosamaria; Magnavita, Vincenzo; De Filippi, Roberta; Ventura, Laura; Genovese, Elisabetta; Arslan, Edoardo

    2009-04-01

    The aim was to compare speech perception performance in children fitted with a previous-generation Nucleus sound processor, the Sprint or Esprit 3G, and the Freedom, the most recently released system from the Cochlear Corporation, which features a larger input dynamic range. Prospective intrasubject comparative study. University Medical Center. Seventeen prelingually deafened children who had received the Nucleus 24 cochlear implant and used the Sprint or Esprit 3G sound processor. Cochlear implantation with Cochlear device. Speech perception was evaluated at baseline (Sprint, n = 11; Esprit 3G, n = 6) and after 1 month's experience with the Freedom sound processor. Identification and recognition of disyllabic words and identification of vowels were performed via recorded voice in quiet (70 dB [A]), in the presence of background noise at various levels of signal-to-noise ratio (+10, +5, 0, -5) and at a soft presentation level (60 dB [A]). Consonant identification and recognition of disyllabic words, trisyllabic words, and sentences were evaluated in live voice. Frequency discrimination was measured in a subset of subjects (n = 5) by using an adaptive, 3-interval, 3-alternative, forced-choice procedure. Identification of disyllabic words administered at a soft presentation level showed a significant increase when switching to the Freedom compared with the previously worn processor in children using the Sprint or Esprit 3G. Identification and recognition of disyllabic words in the presence of background noise as well as consonant identification and sentence recognition increased significantly for the Freedom compared with the previously worn device only in children fitted with the Sprint. Frequency discrimination was significantly better when switching to the Freedom compared with the previously worn processor. Serial comparisons revealed that speech perception performance evaluated in children aged 5 to 15 years was superior with the Freedom compared with previous generations of Nucleus sound processors. These differences are deemed to ensue from an increased input dynamic range, a feature that offers potentially enhanced phonemic discrimination.

  2. The Influence of Noise Reduction on Speech Intelligibility, Response Times to Speech, and Perceived Listening Effort in Normal-Hearing Listeners.

    PubMed

    van den Tillaart-Haverkate, Maj; de Ronde-Brons, Inge; Dreschler, Wouter A; Houben, Rolph

    2017-01-01

    Single-microphone noise reduction leads to subjective benefit, but not to objective improvements in speech intelligibility. We investigated whether response times (RTs) provide an objective measure of the benefit of noise reduction and whether the effect of noise reduction is reflected in rated listening effort. Twelve normal-hearing participants listened to digit triplets that were either unprocessed or processed with one of two noise-reduction algorithms: an ideal binary mask (IBM) and a more realistic minimum mean square error estimator (MMSE). For each of these three processing conditions, we measured (a) speech intelligibility, (b) RTs on two different tasks (identification of the last digit and arithmetic summation of the first and last digit), and (c) subjective listening effort ratings. All measurements were performed at four signal-to-noise ratios (SNRs): -5, 0, +5, and +∞ dB. Speech intelligibility was high (>97% correct) for all conditions. A significant decrease in response time, relative to the unprocessed condition, was found for both IBM and MMSE for the arithmetic but not the identification task. Listening effort ratings were significantly lower for IBM than for MMSE and unprocessed speech in noise. We conclude that RT for an arithmetic task can provide an objective measure of the benefit of noise reduction. For young normal-hearing listeners, both ideal and realistic noise reduction can reduce RTs at SNRs where speech intelligibility is close to 100%. Ideal noise reduction can also reduce perceived listening effort.
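    An ideal binary mask of the kind used here keeps time-frequency cells where the local speech-to-noise ratio exceeds a criterion and zeroes the rest; the sketch below assumes separate access to the clean speech and noise (which is what makes the mask "ideal"), with an arbitrary 0 dB local criterion and toy signals rather than the study's digit-triplet material.

        import numpy as np
        from scipy.signal import stft, istft

        def ideal_binary_mask(speech, noise, fs, lc_db=0.0, nperseg=512):
            """Keep STFT cells whose local SNR exceeds lc_db; zero the rest."""
            _, _, S = stft(speech, fs=fs, nperseg=nperseg)
            _, _, N = stft(noise, fs=fs, nperseg=nperseg)
            _, _, Y = stft(speech + noise, fs=fs, nperseg=nperseg)
            local_snr_db = 10 * np.log10((np.abs(S) ** 2 + 1e-12) / (np.abs(N) ** 2 + 1e-12))
            mask = (local_snr_db > lc_db).astype(float)
            _, enhanced = istft(Y * mask, fs=fs, nperseg=nperseg)
            return enhanced, mask

        # Hypothetical toy signals: a tone as "speech", white noise as the masker
        fs = 16000
        t = np.arange(0, 1.0, 1 / fs)
        speech = np.sin(2 * np.pi * 300 * t)
        noise = 0.5 * np.random.randn(len(t))
        enhanced, mask = ideal_binary_mask(speech, noise, fs)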

  3. Speech perception in individuals with auditory dys-synchrony.

    PubMed

    Kumar, U A; Jayaram, M

    2011-03-01

    This study aimed to evaluate the effect of lengthening the transition duration of selected speech segments upon the perception of those segments in individuals with auditory dys-synchrony. Thirty individuals with auditory dys-synchrony participated in the study, along with 30 age-matched normal hearing listeners. Eight consonant-vowel syllables were used as auditory stimuli. Two experiments were conducted. Experiment one measured the 'just noticeable difference' time: the smallest prolongation of the speech sound transition duration which was noticeable by the subject. In experiment two, speech sounds were modified by lengthening the transition duration by multiples of the just noticeable difference time, and subjects' speech identification scores for the modified speech sounds were assessed. Subjects with auditory dys-synchrony demonstrated poor processing of temporal auditory information. Lengthening of speech sound transition duration improved these subjects' perception of both the placement and voicing features of the speech syllables used. These results suggest that innovative speech processing strategies which enhance temporal cues may benefit individuals with auditory dys-synchrony.

  4. Audiovisual Integration in Children Listening to Spectrally Degraded Speech

    ERIC Educational Resources Information Center

    Maidment, David W.; Kang, Hi Jee; Stewart, Hannah J.; Amitay, Sygal

    2015-01-01

    Purpose: The study explored whether visual information improves speech identification in typically developing children with normal hearing when the auditory signal is spectrally degraded. Method: Children (n = 69) and adults (n = 15) were presented with noise-vocoded sentences from the Children's Co-ordinate Response Measure (Rosen, 2011) in…

  5. Automatic speech recognition and training for severely dysarthric users of assistive technology: the STARDUST project.

    PubMed

    Parker, Mark; Cunningham, Stuart; Enderby, Pam; Hawley, Mark; Green, Phil

    2006-01-01

    The STARDUST project developed robust computer speech recognizers for use by eight people with severe dysarthria and concomitant physical disability to access assistive technologies. Speaker-independent computer speech recognizers trained on normal speech are of limited functional use to those with severe dysarthria, owing to limited and inconsistent proximity to "normal" articulatory patterns. Severe dysarthric output may also be characterized by a small set of distinguishable phonetic tokens, making the acoustic differentiation of target words difficult. Speaker-dependent computer speech recognition using hidden Markov models was achieved by identifying robust phonetic elements within the individual speaker's output patterns. A new system of speech training using computer-generated visual and auditory feedback reduced the inconsistent production of key phonetic tokens over time.

  6. Brainstem Correlates of Speech-in-Noise Perception in Children

    PubMed Central

    Anderson, Samira; Skoe, Erika; Chandrasekaran, Bharath; Zecker, Steven; Kraus, Nina

    2010-01-01

    Children often have difficulty understanding speech in challenging listening environments. In the absence of peripheral hearing loss, these speech perception difficulties may arise from dysfunction at more central levels in the auditory system, including subcortical structures. We examined brainstem encoding of pitch in a speech syllable in 38 school-age children. In children with poor speech-in-noise perception, we find impaired encoding of the fundamental frequency and the second harmonic, two important cues for pitch perception. Pitch, an important factor in speaker identification, aids the listener in tracking a specific voice from a background of voices. These results suggest that the robustness of subcortical neural encoding of pitch features in time-varying signals is an important factor in determining success with speech perception in noise. PMID:20708671

  7. Emotional Speech Perception Unfolding in Time: The Role of the Basal Ganglia

    PubMed Central

    Paulmann, Silke; Ott, Derek V. M.; Kotz, Sonja A.

    2011-01-01

    The basal ganglia (BG) have repeatedly been linked to emotional speech processing in studies involving patients with neurodegenerative and structural changes of the BG. However, the majority of previous studies did not consider (i) that emotional speech processing entails multiple processing steps, and (ii) the possibility that the BG may engage in one rather than another of these processing steps. In the present study, we investigate three different stages of emotional speech processing (emotional salience detection, meaning-related processing, and identification) in the same patient group to verify whether lesions to the BG affect these stages in a qualitatively different manner. Specifically, we explore early implicit emotional speech processing (probe verification) in an ERP experiment followed by an explicit behavioral emotional recognition task. In both experiments, participants listened to emotional sentences expressing one of four emotions (anger, fear, disgust, happiness) or neutral sentences. In line with previous evidence, patients and healthy controls showed differentiation of emotional and neutral sentences in the P200 component (emotional salience detection) and a following negative-going brain wave (meaning-related processing). However, the behavioral recognition (identification stage) of emotional sentences was impaired in BG patients, but not in healthy controls. The current data provide further support that the BG are involved in late, explicit stages of emotional speech processing rather than in early stages. PMID:21437277

  8. Three Factors Are Critical in Order to Synthesize Intelligible Noise-Vocoded Japanese Speech

    PubMed Central

    Kishida, Takuya; Nakajima, Yoshitaka; Ueda, Kazuo; Remijn, Gerard B.

    2016-01-01

    Factor analysis (principal component analysis followed by varimax rotation) had shown that 3 common factors appear across 20 critical-band power fluctuations derived from spoken sentences of eight different languages [Ueda et al. (2010). Fechner Day 2010, Padua]. The present study investigated the contributions of such power-fluctuation factors to speech intelligibility. The method of factor analysis was modified to obtain factors suitable for resynthesizing speech sounds as 20-critical-band noise-vocoded speech. The resynthesized speech sounds were used for an intelligibility test. The modification of factor analysis ensured that the resynthesized speech sounds were not accompanied by a steady background noise caused by the data reduction procedure. Spoken sentences of British English, Japanese, and Mandarin Chinese were subjected to this modified analysis. Confirming the earlier analysis, indeed 3–4 factors were common to these languages. The number of power-fluctuation factors needed to make noise-vocoded speech intelligible was then examined. Critical-band power fluctuations of the Japanese spoken sentences were resynthesized from the obtained factors, resulting in noise-vocoded-speech stimuli, and the intelligibility of these speech stimuli was tested by 12 native Japanese speakers. Japanese mora (syllable-like phonological unit) identification performances were measured when the number of factors was 1–9. Statistically significant improvement in intelligibility was observed when the number of factors was increased stepwise up to 6. The 12 listeners identified 92.1% of the morae correctly on average in the 6-factor condition. The intelligibility improved sharply when the number of factors changed from 2 to 3. In this step, the cumulative contribution ratio of factors improved only by 10.6%, from 37.3 to 47.9%, but the average mora identification leaped from 6.9 to 69.2%. The results indicated that, if the number of factors is 3 or more, elementary linguistic information is preserved in such noise-vocoded speech. PMID:27199790
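    The core analysis, extracting a few common factors from critical-band power fluctuations, can be sketched with a varimax-rotated factor analysis; scikit-learn's FactorAnalysis (recent versions support rotation="varimax") is used below as a stand-in for the modified PCA-plus-varimax procedure described, and the band count, toy envelopes, and number of factors are assumptions for illustration only.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(3)
        n_frames, n_bands, n_factors = 2000, 20, 3

        # Toy stand-in for 20 critical-band power fluctuations driven by 3 latent factors
        latent = rng.standard_normal((n_frames, n_factors))
        loadings = rng.standard_normal((n_factors, n_bands))
        band_power = latent @ loadings + 0.3 * rng.standard_normal((n_frames, n_bands))

        fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
        factor_scores = fa.fit_transform(band_power)           # per-frame factor activations
        print("loading matrix shape:", fa.components_.shape)   # (n_factors, n_bands)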

  9. Rapid Statistical Learning Supporting Word Extraction From Continuous Speech.

    PubMed

    Batterink, Laura J

    2017-07-01

    The identification of words in continuous speech, known as speech segmentation, is a critical early step in language acquisition. This process is partially supported by statistical learning, the ability to extract patterns from the environment. Given that speech segmentation represents a potential bottleneck for language acquisition, patterns in speech may be extracted very rapidly, without extensive exposure. This hypothesis was examined by exposing participants to continuous speech streams composed of novel repeating nonsense words. Learning was measured on-line using a reaction time task. After merely one exposure to an embedded novel word, learners demonstrated significant learning effects, as revealed by faster responses to predictable than to unpredictable syllables. These results demonstrate that learners gained sensitivity to the statistical structure of unfamiliar speech on a very rapid timescale. This ability may play an essential role in early stages of language acquisition, allowing learners to rapidly identify word candidates and "break in" to an unfamiliar language.

  10. Secure access to patient's health records using SpeechXRays, a multi-channel biometrics platform for user authentication.

    PubMed

    Spanakis, Emmanouil G; Spanakis, Marios; Karantanas, Apostolos; Marias, Kostas

    2016-08-01

    The most commonly used method for user authentication in ICT services or systems is the application of identification tools such as passwords or personal identification numbers (PINs). The rapid development of ICT technology for smart devices (laptops, tablets and smartphones) has also enabled the advance of hardware components that capture several biometric traits, such as fingerprints and voice. These components aim, among other things, to overcome the weaknesses and flaws of password usage, with the goal of improved user authentication and a higher level of security, privacy and usability. In this respect, the potential application of biometrics for secure user authentication when accessing systems with sensitive data (i.e. patient data from electronic health records) shows great potential. SpeechXRays aims to provide a user recognition platform based on voice acoustics analysis and audio-visual identity verification. Among other uses, the platform aims to serve as an authentication tool for medical personnel in order to gain specific access to patients' electronic health records. In this work, a short description of the SpeechXRays implementation for eHealth is provided and analyzed. This study explores security and privacy issues, and offers a comprehensive overview of biometrics technology applications in addressing the e-Health security challenges. We present and describe the necessary requirements for an eHealth platform concerning biometric security.

  11. McGurk Effect in Gender Identification: Vision Trumps Audition in Voice Judgments.

    PubMed

    Peynircioǧlu, Zehra F; Brent, William; Tatz, Joshua R; Wyatt, Jordan

    2017-01-01

    Demonstrations of non-speech McGurk effects are rare, mostly limited to emotion identification, and sometimes not considered true analogues. We presented videos of males and females singing a single syllable on the same pitch and asked participants to indicate the true range of the voice: soprano, alto, tenor, or bass. For one group of participants, the gender shown on the video matched the gender of the voice heard, and for the other group they were mismatched. Soprano or alto responses were interpreted as "female voice" decisions and tenor or bass responses as "male voice" decisions. Identification of the voice gender was 100% correct in the preceding audio-only condition. However, whereas performance was also 100% correct in the matched video/audio condition, it was only 31% correct in the mismatched video/audio condition. Thus, the visual gender information overrode the voice gender identification, showing a robust non-speech McGurk effect.

  12. A Proposed Methodology to Classify Frontier Capital Markets

    DTIC Science & Technology

    2011-07-31

    "…out of charity, but because it is the surest route to our common good." (Inaugural speech by President Barack Obama, Jan 2009.) This project … identification, and machine learning. The algorithm consists of a unique binary classifier mechanism that combines three methods: k-Nearest Neighbors (kNN) … Support Through kNN Ensemble Classification Techniques … Capital Market Classification Based on Capital Flows and Trading Architecture …

  13. Perception of Melodic Contour and Intonation in Autism Spectrum Disorder: Evidence from Mandarin Speakers

    ERIC Educational Resources Information Center

    Jiang, Jun; Liu, Fang; Wan, Xuan; Jiang, Cunmei

    2015-01-01

    Tone language experience benefits pitch processing in music and speech for typically developing individuals. No known studies have examined pitch processing in individuals with autism who speak a tone language. This study investigated discrimination and identification of melodic contour and speech intonation in a group of Mandarin-speaking…

  14. Gradient Sensitivity to Within-Category Variation in Words and Syllables

    ERIC Educational Resources Information Center

    McMurray, Bob; Aslin, Richard N.; Tanenhaus, Michael K.; Spivey, Michael J.; Subik, Dana

    2008-01-01

    Five experiments monitored eye movements in phoneme and lexical identification tasks to examine the effect of within-category subphonetic variation on the perception of stop consonants. Experiment 1 demonstrated gradient effects along voice-onset time (VOT) continua made from natural speech, replicating results with synthetic speech (B. McMurray,…

  15. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.

    PubMed

    Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko

    2017-08-15

    During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. A concurrent visual stimulus modulated activity in bilateral MTG (speech), the lateral aspect of the right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Gender disparity in subcortical encoding of binaurally presented speech stimuli: an auditory evoked potentials study.

    PubMed

    Ahadi, Mohsen; Pourbakht, Akram; Jafari, Amir Homayoun; Shirjian, Zahra; Jafarpisheh, Amir Salar

    2014-06-01

    The aim was to investigate the influence of gender on the subcortical representation of speech acoustic parameters when speech is simultaneously presented to both ears. Two-channel speech-evoked auditory brainstem responses were obtained in 25 female and 23 male normal-hearing young adults using binaural presentation of the 40 ms synthetic consonant-vowel /da/, and the encoding of the fast and slow elements of the speech stimuli at the subcortical level was compared between the sexes in the temporal and spectral domains using independent-sample, two-tailed t-tests. Highly detectable responses were established in both groups. Analysis in the time domain revealed earlier and larger fast onset responses in females, but there was no gender-related difference in the sustained segment or the offset of the response. Interpeak intervals between frequency-following response peaks were also invariant to sex. Based on shorter onset responses in females, composite onset measures were also sex dependent. Analysis in the spectral domain showed more robust and better representation of the fundamental frequency, as well as the first formant and the high-frequency components of the first formant, in females than in males. Anatomical, biological and biochemical distinctions between females and males could alter the neural encoding of the acoustic cues of speech stimuli at the subcortical level. Females have an advantage in binaural processing of the slow and fast elements of speech. This could be physiological evidence for better identification of the speaker and the emotional tone of voice, as well as better perception of the phonetic information of speech, in women. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Measurement of trained speech patterns in stuttering: interjudge and intrajudge agreement of experts by means of modified time-interval analysis.

    PubMed

    Alpermann, Anke; Huber, Walter; Natke, Ulrich; Willmes, Klaus

    2010-09-01

    Improved fluency after stuttering therapy is usually measured by the percentage of stuttered syllables. However, outcome studies rarely evaluate the use of trained speech patterns that speakers use to manage stuttering. This study investigated whether the modified time interval analysis can distinguish between trained speech patterns, fluent speech, and stuttered speech. Seventeen German experts on stuttering judged a speech sample on two occasions. Speakers of the sample were stuttering adults, who were not undergoing therapy, as well as participants in a fluency shaping and a stuttering modification therapy. Results showed satisfactory inter-judge and intra-judge agreement above 80%. Intervals with trained speech patterns were identified as consistently as stuttered and fluent intervals. We discuss limitations of the study, as well as implications of our findings for the development of training for identification of trained speech patterns and future outcome studies. The reader will be able to (a) explain different methods to measure the use of trained speech patterns, (b) evaluate whether German experts are able to discriminate intervals with trained speech patterns reliably from fluent and stuttered intervals and (c) describe how the measurement of trained speech patterns can contribute to outcome studies.

  18. The interlanguage speech intelligibility benefit for native speakers of Mandarin: Production and perception of English word-final voicing contrasts

    PubMed Central

    Hayes-Harb, Rachel; Smith, Bruce L.; Bent, Tessa; Bradlow, Ann R.

    2009-01-01

    This study investigated the intelligibility of native and Mandarin-accented English speech for native English and native Mandarin listeners. The word-final voicing contrast was considered (as in minimal pairs such as 'cub' and 'cup') in a forced-choice word identification task. For these particular talkers and listeners, there was evidence of an interlanguage speech intelligibility benefit for listeners (i.e., native Mandarin listeners were more accurate than native English listeners at identifying Mandarin-accented English words). However, there was no evidence of an interlanguage speech intelligibility benefit for talkers (i.e., native Mandarin listeners did not find Mandarin-accented English speech more intelligible than native English speech). When listener and talker phonological proficiency (operationalized as accentedness) was taken into account, it was found that the interlanguage speech intelligibility benefit for listeners held only for the low phonological proficiency listeners and low phonological proficiency speech. The intelligibility data were also considered in relation to various temporal-acoustic properties of native English and Mandarin-accented English speech in an effort to better understand the properties of speech that may contribute to the interlanguage speech intelligibility benefit. PMID:19606271

  19. Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.

    PubMed

    Williams, Jason A

    2012-06-01

    The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.

  20. An acoustic comparison of two women's infant- and adult-directed speech

    NASA Astrophysics Data System (ADS)

    Andruski, Jean; Katz-Gershon, Shiri

    2003-04-01

    In addition to having prosodic characteristics that are attractive to infant listeners, infant-directed (ID) speech shares certain characteristics of adult-directed (AD) clear speech, such as increased acoustic distance between vowels, that might be expected to make ID speech easier for adults to perceive in noise than AD conversational speech. However, perceptual tests of two women's ID productions by Andruski and Bessega [J. Acoust. Soc. Am. 112, 2355] showed that this is not always the case. In a word identification task that compared ID speech with AD clear and conversational speech, one speaker's ID productions were less well-identified than AD clear speech, but better identified than AD conversational speech. For the second woman, ID speech was the least accurately identified of the three speech registers. For both speakers, hard words (infrequent words with many lexical neighbors) were also at an increased disadvantage relative to easy words (frequent words with few lexical neighbors) in speech registers that were less accurately perceived. This study will compare several acoustic properties of these women's productions, including pitch and formant-frequency characteristics. Results of the acoustic analyses will be examined with the original perceptual results to suggest reasons for differences in listeners' accuracy in identifying these two women's ID speech in noise.

  1. Music and Speech Perception in Children Using Sung Speech

    PubMed Central

    Nie, Yingjiu; Galvin, John J.; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet was significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners. PMID:29609496

  2. Music and Speech Perception in Children Using Sung Speech.

    PubMed

    Nie, Yingjiu; Galvin, John J; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet was significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.

  3. A study of voice production characteristics of astronaut speech during Apollo 11 for speaker modeling in space.

    PubMed

    Yu, Chengzhu; Hansen, John H L

    2017-03-01

    Human physiology has evolved to accommodate environmental conditions, including temperature, pressure, and air chemistry unique to Earth. However, the environment in space varies significantly compared to that on Earth and, therefore, variability is expected in astronauts' speech production mechanism. In this study, the variations of astronaut voice characteristics during the NASA Apollo 11 mission are analyzed. Specifically, acoustical features such as fundamental frequency and phoneme formant structure that are closely related to the speech production system are studied. For a further understanding of astronauts' vocal tract spectrum variation in space, a maximum likelihood frequency warping based analysis is proposed to detect the vocal tract spectrum displacement during space conditions. The results from fundamental frequency, formant structure, as well as vocal spectrum displacement indicate that astronauts change their speech production mechanism when in space. Moreover, the experimental results for astronaut voice identification tasks indicate that current speaker recognition solutions are highly vulnerable to astronaut voice production variations in space conditions. Future recommendations from this study suggest that successful applications of speaker recognition during extended space missions require robust speaker modeling techniques that could effectively adapt to voice production variation caused by diverse space conditions.

  4. Using Flanagan's phase vocoder to improve cochlear implant performance

    NASA Astrophysics Data System (ADS)

    Zeng, Fan-Gang

    2004-10-01

    The cochlear implant has restored partial hearing to more than 100,000 deaf people worldwide, allowing the average user to talk on the telephone in a quiet environment. However, significant difficulty still remains for speech recognition in noise, music perception, and tonal language understanding. This difficulty may be related to speech processing strategies in current cochlear implants that emphasize the extraction and encoding of the temporal envelope while ignoring the temporal fine structure in speech sounds. A novel strategy was developed based on Flanagan's phase vocoder [Flanagan and Golden, Bell Syst. Tech. J. 45, 1493-1509 (1966)], in which frequency modulation was extracted from the temporal fine structure and then added to amplitude modulation in the current cochlear implants. Acoustic simulation results showed that amplitude and frequency modulation contributed complementarily to speech perception, with amplitude modulation contributing mainly to intelligibility, whereas frequency modulation contributed to speaker identification and auditory grouping. The results also showed that the novel strategy significantly improved cochlear implant performance under realistic listening situations. Overall, the present results demonstrated that Flanagan's classic work on the phase vocoder still sheds light on current problems of both theoretical and practical importance. [Work supported by NIH.]
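    The AM/FM idea described, an amplitude envelope plus a slowly varying frequency modulation per band, can be sketched with a Hilbert-transform analysis; the band edges, FM clipping, and toy signal below are illustrative assumptions and this is not the actual implant strategy.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def am_fm_band(signal, fs, band=(300.0, 600.0), fm_limit=400.0):
            """Extract the amplitude envelope (AM) and instantaneous-frequency deviation (FM) of one band."""
            sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
            x = sosfiltfilt(sos, signal)
            analytic = hilbert(x)
            am = np.abs(analytic)                                        # amplitude modulation
            phase = np.unwrap(np.angle(analytic))
            inst_freq = np.diff(phase) * fs / (2 * np.pi)                # instantaneous frequency (Hz)
            center = np.mean(band)
            fm = np.clip(inst_freq - center, -fm_limit, fm_limit)        # slow FM around the band centre
            return am, fm

        fs = 16000
        t = np.arange(0, 0.5, 1 / fs)
        toy = np.sin(2 * np.pi * (440 + 30 * np.sin(2 * np.pi * 3 * t)) * t)  # toy tone with a varying frequency term
        am, fm = am_fm_band(toy, fs)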

  5. Individual differences in selective attention predict speech identification at a cocktail party.

    PubMed

    Oberfeld, Daniel; Klöckner-Nowotny, Felicitas

    2016-08-31

    Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a similar proportion of variance as binaural sensitivity to the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise.

  6. Exploring the roles of spectral detail and intonation contour in speech intelligibility: an FMRI study.

    PubMed

    Kyong, Jeong S; Scott, Sophie K; Rosen, Stuart; Howe, Timothy B; Agnew, Zarinah K; McGettigan, Carolyn

    2014-08-01

    The melodic contour of speech forms an important perceptual aspect of tonal and nontonal languages and an important limiting factor on the intelligibility of speech heard through a cochlear implant. Previous work exploring the neural correlates of speech comprehension identified a left-dominant pathway in the temporal lobes supporting the extraction of an intelligible linguistic message, whereas the right anterior temporal lobe showed an overall preference for signals clearly conveying dynamic pitch information [Johnsrude, I. S., Penhune, V. B., & Zatorre, R. J. Functional specificity in the right human auditory cortex for perceiving pitch direction. Brain, 123, 155-163, 2000; Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400-2406, 2000]. The current study combined modulations of overall intelligibility (through vocoding and spectral inversion) with a manipulation of pitch contour (normal vs. falling) to investigate the processing of spoken sentences in functional MRI. Our overall findings replicate and extend those of Scott et al. [Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400-2406, 2000], where greater sentence intelligibility was predominately associated with increased activity in the left STS, and the greatest response to normal sentence melody was found in right superior temporal gyrus. These data suggest a spatial distinction between brain areas associated with intelligibility and those involved in the processing of dynamic pitch information in speech. By including a set of complexity-matched unintelligible conditions created by spectral inversion, this is additionally the first study reporting a fully factorial exploration of spectrotemporal complexity and spectral inversion as they relate to the neural processing of speech intelligibility. Perhaps surprisingly, there was little evidence for an interaction between the two factors-we discuss the implications for the processing of sound and speech in the dorsolateral temporal lobes.

  7. The musician effect: does it persist under degraded pitch conditions of cochlear implant simulations?

    PubMed Central

    Fuller, Christina D.; Galvin, John J.; Maat, Bert; Free, Rolien H.; Başkent, Deniz

    2014-01-01

    Cochlear implants (CIs) are auditory prostheses that restore hearing via electrical stimulation of the auditory nerve. Compared to normal acoustic hearing, sounds transmitted through the CI are spectro-temporally degraded, causing difficulties in challenging listening tasks such as speech intelligibility in noise and perception of music. In normal hearing (NH), musicians have been shown to better perform than non-musicians in auditory processing and perception, especially for challenging listening tasks. This “musician effect” was attributed to better processing of pitch cues, as well as better overall auditory cognitive functioning in musicians. Does the musician effect persist when pitch cues are degraded, as it would be in signals transmitted through a CI? To answer this question, NH musicians and non-musicians were tested while listening to unprocessed signals or to signals processed by an acoustic CI simulation. The task increasingly depended on pitch perception: (1) speech intelligibility (words and sentences) in quiet or in noise, (2) vocal emotion identification, and (3) melodic contour identification (MCI). For speech perception, there was no musician effect with the unprocessed stimuli, and a small musician effect only for word identification in one noise condition, in the CI simulation. For emotion identification, there was a small musician effect for both. For MCI, there was a large musician effect for both. Overall, the effect was stronger as the importance of pitch in the listening task increased. This suggests that the musician effect may be more rooted in pitch perception, rather than in a global advantage in cognitive processing (in which musicians would have performed better in all tasks). The results further suggest that musical training before (and possibly after) implantation might offer some advantage in pitch processing that could partially benefit speech perception, and more strongly emotion and music perception. PMID:25071428

  8. Children with bilateral cochlear implants identify emotion in speech and music.

    PubMed

    Volkova, Anna; Trehub, Sandra E; Schellenberg, E Glenn; Papsin, Blake C; Gordon, Karen A

    2013-03-01

    This study examined the ability of prelingually deaf children with bilateral implants to identify emotion (i.e. happiness or sadness) in speech and music. Participants in Experiment 1 were 14 prelingually deaf children from 5-7 years of age who had bilateral implants and 18 normally hearing children from 4-6 years of age. They judged whether linguistically neutral utterances produced by a man and woman sounded happy or sad. Participants in Experiment 2 were 14 bilateral implant users from 4-6 years of age and the same normally hearing children as in Experiment 1. They judged whether synthesized piano excerpts sounded happy or sad. Child implant users' accuracy of identifying happiness and sadness in speech was well above chance levels but significantly below the accuracy achieved by children with normal hearing. Similarly, their accuracy of identifying happiness and sadness in music was well above chance levels but significantly below that of children with normal hearing, who performed at ceiling. For the 12 implant users who participated in both experiments, performance on the speech task correlated significantly with performance on the music task and implant experience was correlated with performance on both tasks. Child implant users' accurate identification of emotion in speech exceeded performance in previous studies, which may be attributable to fewer response alternatives and the use of child-directed speech. Moreover, child implant users' successful identification of emotion in music indicates that the relevant cues are accessible at a relatively young age.

  9. Can Explicit Training in Cued Speech Improve Phoneme Identification?

    ERIC Educational Resources Information Center

    Rees, R.; Fitzpatrick, C.; Foulkes, J.; Peterson, H.; Newton, C.

    2017-01-01

    When identifying phonemes in new spoken words, lipreading is an important source of information for many deaf people. Because many groups of phonemes are virtually indistinguishable by sight, deaf people are able to identify about 30% of phonemes when lipreading non-words. Cued speech (CS) is a system of hand shapes and hand positions used…

  10. Priming of Non-Speech Vocalizations in Male Adults: The Influence of the Speaker's Gender

    ERIC Educational Resources Information Center

    Fecteau, Shirley; Armony, Jorge L.; Joanette, Yves; Belin, Pascal

    2004-01-01

    Previous research reported a priming effect for voices. However, the type of information primed is still largely unknown. In this study, we examined the influence of speaker's gender and emotional category of the stimulus on priming of non-speech vocalizations in 10 male participants, who performed a gender identification task. We found a…

  11. The Effect of Temporal Gap Identification on Speech Perception by Users of Cochlear Implants

    ERIC Educational Resources Information Center

    Sagi, Elad; Kaiser, Adam R.; Meyer, Ted A.; Svirsky, Mario A.

    2009-01-01

    Purpose: This study examined the ability of listeners using cochlear implants (CIs) and listeners with normal hearing (NH) to identify silent gaps of different duration and the relation of this ability to speech understanding in CI users. Method: Sixteen NH adults and 11 postlingually deafened adults with CIs identified synthetic vowel-like…

  12. Adaptation to spectrally-rotated speech.

    PubMed

    Green, Tim; Rosen, Stuart; Faulkner, Andrew; Paterson, Ruth

    2013-08-01

    Much recent interest surrounds listeners' abilities to adapt to various transformations that distort speech. An extreme example is spectral rotation, in which the spectrum of low-pass filtered speech is inverted around a center frequency (2 kHz here). Spectral shape and its dynamics are completely altered, rendering speech virtually unintelligible initially. However, intonation, rhythm, and contrasts in periodicity and aperiodicity are largely unaffected. Four normal hearing adults underwent 6 h of training with spectrally-rotated speech using Continuous Discourse Tracking. They and an untrained control group completed pre- and post-training speech perception tests, for which talkers differed from the training talker. Significantly improved recognition of spectrally-rotated sentences was observed for trained, but not untrained, participants. However, there were no significant improvements in the identification of medial vowels in /bVd/ syllables or intervocalic consonants. Additional tests were performed with speech materials manipulated so as to isolate the contribution of various speech features. These showed that preserving intonational contrasts did not contribute to the comprehension of spectrally-rotated speech after training, and suggested that improvements involved adaptation to altered spectral shape and dynamics, rather than just learning to focus on speech features relatively unaffected by the transformation.
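
    As an illustration of the transformation described in this record, the sketch below applies spectral rotation to a speech waveform: the signal is low-pass filtered at twice the centre frequency (4 kHz for a 2 kHz centre), ring-modulated with a 4 kHz carrier so that each frequency f maps to 4000 - f, and low-pass filtered again to remove the upper sideband. This is a minimal sketch under assumed parameters (filter order, sampling rate well above 8 kHz), not the authors' exact processing chain.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def spectrally_rotate(x, fs, center_hz=2000.0):
            """Invert the spectrum of x around center_hz (maps f -> 2*center_hz - f)."""
            nyq = fs / 2.0
            band_hz = 2.0 * center_hz                    # 4 kHz band edge for a 2 kHz centre
            b, a = butter(6, band_hz / nyq)              # low-pass at 4 kHz (order is an assumption)
            x_lp = filtfilt(b, a, x)
            t = np.arange(len(x_lp)) / fs
            carrier = np.cos(2.0 * np.pi * band_hz * t)  # ring modulation with a 4 kHz carrier
            rotated = filtfilt(b, a, x_lp * carrier)     # keep only the inverted lower sideband
            return rotated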

  13. Ongoing slow oscillatory phase modulates speech intelligibility in cooperation with motor cortical activity.

    PubMed

    Onojima, Takayuki; Kitajo, Keiichi; Mizuhara, Hiroaki

    2017-01-01

    Neural oscillation is attracting attention as an underlying mechanism for speech recognition. Speech intelligibility is enhanced by the synchronization of speech rhythms and slow neural oscillation, which is typically observed with human scalp electroencephalography (EEG). In addition to the effect of neural oscillation, it has been proposed that speech recognition is enhanced by the identification of a speaker's motor signals, which are used for speech production. To verify the relationship between the effect of neural oscillation and motor cortical activity, we measured scalp EEG, and simultaneous EEG and functional magnetic resonance imaging (fMRI), during a speech recognition task in which participants were required to recognize spoken words embedded in noise. We proposed an index to quantitatively evaluate the EEG phase effect on behavioral performance. The results showed that the delta and theta EEG phase before speech input modulated the participants' response times in the speech recognition task. The simultaneous EEG-fMRI experiment showed that slow EEG activity was correlated with motor cortical activity. These results suggested that the effect of the slow oscillatory phase was associated with the activity of the motor cortex during speech recognition.

  14. Noise suppression methods for robust speech processing

    NASA Astrophysics Data System (ADS)

    Boll, S. F.; Ravindra, H.; Randall, G.; Armantrout, R.; Power, R.

    1980-05-01

    Robust speech processing in practical operating environments requires effective suppression of environmental and processor noise. This report describes the technical findings and accomplishments during this reporting period for the research program funded to develop real-time, compressed speech analysis-synthesis algorithms whose performance is invariant under signal contamination. Fulfillment of this requirement is necessary to ensure reliable, secure compressed speech transmission within realistic military command and control environments. Overall contributions resulting from this research program include an understanding of how environmental noise degrades narrowband coded speech, development of appropriate real-time noise suppression algorithms, and development of speech parameter identification methods that treat signal contamination as a fundamental element of the estimation process. This report describes the current research and results in the areas of dual-input adaptive noise cancellation using short-time Fourier transform algorithms, articulation rate change techniques, and an experiment demonstrating that the spectral subtraction noise suppression algorithm can improve the intelligibility of 2400 bps, LPC-10 coded helicopter speech by 10.6 points.
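
    As a concrete illustration of the spectral subtraction technique evaluated in this report, the sketch below estimates the noise magnitude spectrum from a leading speech-free segment, subtracts it frame by frame in the short-time Fourier transform domain, and resynthesizes with the noisy phase. The frame length, over-subtraction factor, and spectral floor are illustrative assumptions, not the report's settings.

        import numpy as np
        from scipy.signal import stft, istft

        def spectral_subtraction(noisy, fs, noise_seconds=0.5, alpha=1.0, floor=0.02):
            """Suppress stationary noise whose spectrum is estimated from the first noise_seconds."""
            f, t, X = stft(noisy, fs=fs, nperseg=512)            # hop defaults to nperseg // 2
            mag, phase = np.abs(X), np.angle(X)
            n_noise = max(1, int(noise_seconds * fs / 256))      # frames covering the noise-only lead-in
            noise_mag = mag[:, :n_noise].mean(axis=1, keepdims=True)
            clean_mag = np.maximum(mag - alpha * noise_mag, floor * noise_mag)
            _, clean = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
            return clean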

  15. Talking about science: An interpretation of the effects of teacher talk in a high school science classroom

    NASA Astrophysics Data System (ADS)

    Moje, Elizabeth B.

    This paper builds on research in science education, secondary education, and sociolinguistics by arguing that high school classrooms can be considered speech communities in which language may be selectively used and imposed on students as a means of fostering academic speech community identification. To demonstrate the ways in which a high school teacher's language use may encourage subject area identification, the results of an interactionist analysis of data from a 2-year ethnographic study of one high school chemistry classroom are presented. Findings indicate that this teacher's uses of language fell into three related categories. These uses of language served to foster identification with the academic speech community of science. As a result of the teacher's talk about science according to these three patterns, students developed or reinforced particular views of science. In addition, talking about science in ways that fostered identity with the discipline promoted the teacher as expert and built classroom solidarity or community. These results are discussed in light of sociolinguistic research on classroom competence and of the assertions of science educators regarding the social and ideological implications of language use in science instruction. (Received: 23 September 1993; Revised: 15 September 1994)

  16. Sound Localization and Speech Perception in Noise of Pediatric Cochlear Implant Recipients: Bimodal Fitting Versus Bilateral Cochlear Implants.

    PubMed

    Choi, Ji Eun; Moon, Il Joon; Kim, Eun Yeon; Park, Hee-Sung; Kim, Byung Kil; Chung, Won-Ho; Cho, Yang-Sun; Brown, Carolyn J; Hong, Sung Hwa

    The aim of this study was to compare binaural performance on an auditory localization task and a speech-perception-in-babble measure between children who use a cochlear implant (CI) in one ear and a hearing aid (HA) in the other (bimodal fitting) and those who use bilateral CIs. Thirteen children (mean age ± SD = 10 ± 2.9 years) with bilateral CIs and 19 children with bimodal fitting were recruited to participate. Sound localization was assessed using a 13-loudspeaker array in a quiet sound-treated booth. Speakers were placed in an arc from -90° azimuth to +90° azimuth (15° intervals) in the horizontal plane. To assess the accuracy of sound location identification, we calculated the absolute error in degrees between the target speaker and the response speaker on each trial. The mean absolute error was computed by dividing the sum of absolute errors by the total number of trials. We also calculated a hemifield identification score to reflect the accuracy of right/left discrimination. Speech-in-babble perception was also measured in the sound field using target speech presented from the front speaker. Eight-talker babble was presented in four listening conditions: from the front speaker (0°), from one of the two side speakers (+90° or -90°), or from both side speakers (±90°). The speech, spatial, and quality questionnaire was administered. When the two groups of children were directly compared, there was no significant difference in localization accuracy or hemifield identification score under the binaural condition. Performance on the speech perception test was also similar under most babble conditions. However, when the babble was from the first device side (CI side for children with bimodal stimulation or first CI side for children with bilateral CIs), speech understanding in babble by bilateral CI users was significantly better than that by bimodal listeners. Speech, spatial, and quality scores were comparable between the two groups. Overall, binaural performance was similar between children fit with two CIs (CI + CI) and those using bimodal stimulation (HA + CI) in most conditions. However, the bilateral CI group showed better speech perception than the bimodal group when babble was from the first device side (first CI side for bilateral CI users or CI side for bimodal listeners). Therefore, if bimodal performance falls significantly below the mean bilateral CI performance on speech perception in babble, these results suggest that a child should be considered for transition from bimodal stimulation to bilateral CIs.
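
    The two localization metrics described in this record can be computed directly from the target and response loudspeaker azimuths, as in the sketch below. Treating 0° targets as excluded from the hemifield (right/left) score is an illustrative assumption, not necessarily the authors' scoring rule.

        import numpy as np

        def localization_scores(target_deg, response_deg):
            """Mean absolute localization error (degrees) and hemifield identification score."""
            target = np.asarray(target_deg, dtype=float)
            response = np.asarray(response_deg, dtype=float)
            mean_abs_error = np.mean(np.abs(target - response))
            off_midline = target != 0                    # assumption: 0 degree targets excluded
            hemifield_score = np.mean(np.sign(target[off_midline]) == np.sign(response[off_midline]))
            return mean_abs_error, hemifield_score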

  17. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

    PubMed Central

    ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110

  18. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    PubMed

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  19. Investigation on the music perception skills of Italian children with cochlear implants.

    PubMed

    Scorpecci, Alessandro; Zagari, Felicia; Mari, Giorgia; Giannantonio, Sara; D'Alatri, Lucia; Di Nardo, Walter; Paludetti, Gaetano

    2012-10-01

    To compare the music perception skills of a group of Italian-speaking children with cochlear implants to those of a group of normal hearing children; to analyze possible correlations between implanted children's musical skills and their demographics, clinical characteristics, phonological perception, and speech recognition and production abilities. 18 implanted children aged 5-12 years and a reference group of 23 normal-hearing subjects with typical language development were enrolled. Both groups received a melody identification test and a song (i.e. original version) identification test. The implanted children also received a test battery aimed at assessing speech recognition, speech production and phoneme discrimination. The implanted children scored significantly worse than the normal hearing subjects in both musical tests. In the cochlear implant group, phoneme discrimination abilities were significantly correlated with both melody and song identification skills, and length of device use was significantly correlated with song identification skills. Experience with device use and phonological perception had a moderate-to-strong correlation to implanted children's music perception abilities. In the light of these findings, it is reasonable to assume that a rehabilitation program specifically aimed at improving phonological perception could help pediatric cochlear implant recipients better understand the basic elements of music; moreover, a training aimed at improving the comprehension of the spectral elements of music could enhance implanted children's phonological skills. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  20. A causal test of the motor theory of speech perception: A case of impaired speech production and spared speech perception

    PubMed Central

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E.; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z.

    2015-01-01

    In the last decade, the debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. However, the exact role of the motor system in auditory speech processing remains elusive. Here we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. The patient’s spontaneous speech was marked by frequent phonological/articulatory errors, and those errors were caused, at least in part, by motor-level impairments with speech production. We found that the patient showed a normal phonemic categorical boundary when discriminating two nonwords that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the nonword stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labeling impairment. These data suggest that the identification (i.e., labeling) of nonword speech sounds may involve the speech motor system, but that the perception of speech sounds (i.e., discrimination) does not require the motor system. This means that motor processes are not causally involved in perception of the speech signal, and suggests that the motor system may be used when other cues (e.g., meaning, context) are not available. PMID:25951749

  1. The effect of viewing speech on auditory speech processing is different in the left and right hemispheres.

    PubMed

    Davis, Chris; Kislyuk, Daniel; Kim, Jeesun; Sams, Mikko

    2008-11-25

    We used whole-head magnetoencephalography (MEG) to record changes in neuromagnetic N100m responses generated in the left and right auditory cortex as a function of the match between visual and auditory speech signals. Stimuli were auditory-only (AO) and auditory-visual (AV) presentations of /pi/, /ti/ and /vi/. Three types of intensity-matched auditory stimuli were used: intact speech (Normal), frequency band filtered speech (Band) and speech-shaped white noise (Noise). The behavioural task was to detect the /vi/ syllables which comprised 12% of stimuli. N100m responses were measured to averaged /pi/ and /ti/ stimuli. Behavioural data showed that identification of the stimuli was faster and more accurate for Normal than for Band stimuli, and for Band than for Noise stimuli. Reaction times were faster for AV than AO stimuli. MEG data showed that in the left hemisphere, N100m to both AO and AV stimuli was largest for the Normal, smaller for Band and smallest for Noise stimuli. In the right hemisphere, Normal and Band AO stimuli elicited N100m responses of quite similar amplitudes, but N100m amplitude to Noise was about half of that. There was a reduction in N100m for the AV compared to the AO conditions. The size of this reduction for each stimulus type was the same in the left hemisphere but graded in the right (being largest to the Normal, smaller to the Band and smallest to the Noise stimuli). The N100m decrease for the Normal stimuli was significantly larger in the right than in the left hemisphere. We suggest that the effect of processing visual speech seen in the right hemisphere likely reflects suppression of the auditory response based on AV cues for place of articulation.

  2. Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.

    PubMed

    Hunter, Cynthia R; Pisoni, David B

    Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences. Under mild spectral degradation (eight-channel vocoding), the effect of load was present for low-predictability sentences but not for high-predictability sentences. There were also reliable downstream effects of speech degradation and sentence predictability on recall of the preload digit sequences. Long digit sequences were more easily recalled following spoken sentences that were less spectrally degraded. When digits were reported after identification of sentence-final words, short digit sequences were recalled more accurately when the spoken sentences were predictable. Extrinsic cognitive load can impair recognition of spectrally degraded spoken words in a sentence recognition task. Cognitive load affected word identification in both high- and low-predictability sentences, suggesting that load may impact both context use and lower-level perceptual processes. Consistent with prior work, LE also had downstream effects on memory for visual digit sequences. Results support the proposal that extrinsic cognitive load and LE induced by signal degradation both draw on a central, limited pool of cognitive resources that is used to recognize spoken words in sentences under adverse listening conditions.
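
    The spectral degradation used in this study, n-channel noise vocoding, splits speech into a small number of analysis bands, extracts each band's amplitude envelope, and uses the envelope to modulate band-limited noise before summing the channels. The sketch below is a minimal implementation under assumed parameters (log-spaced band edges from 100 Hz to 7 kHz, a 50 Hz envelope cutoff, Butterworth filters), not the vocoder used by the authors.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def noise_vocode(x, fs, n_channels=8, lo=100.0, hi=7000.0):
            """Replace spectral detail with n_channels of envelope-modulated noise."""
            edges = np.geomspace(lo, hi, n_channels + 1)         # log-spaced band edges (assumption)
            env_b, env_a = butter(2, 50.0 / (fs / 2.0))          # 50 Hz envelope smoothing
            out = np.zeros(len(x), dtype=float)
            for k in range(n_channels):
                b, a = butter(3, [edges[k] / (fs / 2), edges[k + 1] / (fs / 2)], "band")
                band = filtfilt(b, a, x)
                env = np.maximum(filtfilt(env_b, env_a, np.abs(hilbert(band))), 0.0)
                noise = filtfilt(b, a, np.random.randn(len(x)))  # band-limited noise carrier
                out += env * noise
            return out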

  3. The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users.

    PubMed

    Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn

    2017-01-01

    This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants' auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group-by-session interaction was observed. Audiovisual training may be considered in aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or an interaction between group and session calls for further research.

  4. Noise reduction improves memory for target language speech in competing native but not foreign language speech.

    PubMed

    Ng, Elaine Hoi Ning; Rudner, Mary; Lunner, Thomas; Rönnberg, Jerker

    2015-01-01

    A hearing aid noise reduction (NR) algorithm reduces the adverse effect of competing speech on memory for target speech for individuals with hearing impairment with high working memory capacity. In the present study, we investigated whether the positive effect of NR could be extended to individuals with low working memory capacity, as well as how NR influences recall performance for target native speech when the masker language is non-native. A sentence-final word identification and recall (SWIR) test was administered to 26 experienced hearing aid users. In this test, target spoken native language (Swedish) sentence lists were presented in competing native (Swedish) or foreign (Cantonese) speech with or without binary masking NR algorithm. After each sentence list, free recall of sentence final words was prompted. Working memory capacity was measured using a reading span (RS) test. Recall performance was associated with RS. However, the benefit obtained from NR was not associated with RS. Recall performance was more disrupted by native than foreign speech babble and NR improved recall performance in native but not foreign competing speech. Noise reduction improved memory for speech heard in competing speech for hearing aid users. Memory for native speech was more disrupted by native babble than foreign babble, but the disruptive effect of native speech babble was reduced to that of foreign babble when there was NR.

  5. Management of developmental speech and language disorders: Part 1.

    PubMed

    O'Hare, Anne; Bremner, Lynne

    2016-03-01

    The identification of developmental problems in a child's acquisition of speech, language and/or communication is a core activity in child surveillance. These are common difficulties with up to 15% of toddlers being 'late talkers' and 7% of children entering school with persisting impairments of their language development. These delays can confer disadvantages in the long term, adversely affecting language, cognition, academic attainment, behaviour and mental health. All children presenting with significant speech and language delay should be investigated with a comprehensive hearing assessment and be considered for speech and language therapy assessment. Socioeconomic adversity correlates with delayed language development. Clinical assessment should confirm that the presentation is definitely not acquired (see part 2) and will also guide whether the difficulty is primary, in which there are often familial patterns, or secondary, from a very wide range of aetiologies. Symptoms may be salient, such as the regression of communication in <3-year-olds which 'flags up' autism spectrum disorder. Further investigation will be informed from this clinical assessment, for example, genetic investigation for sex aneuploidies in enduring primary difficulties. Management of the speech and language difficulty itself is the realm of the speech and language therapist, who has an ever-increasing evidence-based choice of interventions. This should take place within a multidisciplinary team, particularly for children with more severe conditions who may benefit from individualised parental and educational supports. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  6. Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.

    PubMed

    Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina

    2018-05-14

    The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: Together with similar findings in the auditory modality the findings imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition. Copyright © 2018. Published by Elsevier Inc.

  7. Individual differences in selective attention predict speech identification at a cocktail party

    PubMed Central

    Oberfeld, Daniel; Klöckner-Nowotny, Felicitas

    2016-01-01

    Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a similar proportion of variance as the binaural sensitivity for the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise. DOI: http://dx.doi.org/10.7554/eLife.16747.001 PMID:27580272

  8. Factors involved in the identification of stuttering severity in a foreign language.

    PubMed

    Cosyns, Marjan; Einarsdóttir, Jóhanna T; Van Borsel, John

    2015-01-01

    Speech-language pathologists are increasingly confronted with clients who speak a language different from their own mother tongue. The assessment of persons who speak a foreign language poses particular challenges. The present study investigated the possible role and interplay of factors involved in the identification of stuttering severity in a foreign language. Nineteen speech-language pathologists from five different countries (Iceland, Sweden, Norway, Finland, and Belgium) rated the stuttering severity of speech samples featuring persons who stutter speaking Icelandic, Swedish, Norwegian, or Dutch. Additionally, they were asked to score how easy they found it to rate the samples. Accuracy of rating stuttering severity in another language appeared to be determined foremost by the client's stuttering severity, while experienced ease of rating was essentially related to the closeness of the clinician's language to that of the client and the clinician's familiarity with the client's language. Stuttering measurement training programmes in different languages are needed.

  9. Training to use voice onset time as a cue to talker identification induces a left-ear/right-hemisphere processing advantage.

    PubMed

    Francis, Alexander L; Driscoll, Courtney

    2006-09-01

    We examined the effect of perceptual training on a well-established hemispheric asymmetry in speech processing. Eighteen listeners were trained to use a within-category difference in voice onset time (VOT) to cue talker identity. Successful learners (n=8) showed faster response times for stimuli presented only to the left ear than for those presented only to the right. The development of a left-ear/right-hemisphere advantage for processing a prototypically phonetic cue supports a model of speech perception in which lateralization is driven by functional demands (talker identification vs. phonetic categorization) rather than by acoustic stimulus properties alone.

  10. The Seattle Conference: The Role of the State Department of Education in Teacher Education (April 27-29, 1966).

    ERIC Educational Resources Information Center

    Edelfelt, Roy A.; Allen, Wendell C.

    This book includes the papers and speeches which were part of the Seattle Conference in which representatives of 49 states met to reevaluate beliefs and procedures regarding the future role of state departments of education particularly with reference to the identification and setting of goals. There are seven speeches and papers: "The Dynamic Duo…

  11. The impact of compression of speech signal, background noise and acoustic disturbances on the effectiveness of speaker identification

    NASA Astrophysics Data System (ADS)

    Kamiński, K.; Dobrowolski, A. P.

    2017-04-01

    The paper presents the architecture and the results of optimization of selected elements of an Automatic Speaker Recognition (ASR) system that uses Gaussian Mixture Models (GMM) in the classification process. Optimization was performed on the selection of individual features using a genetic algorithm and on the parameters of the Gaussian distributions used to describe individual voices. The developed system was tested to evaluate the impact on speaker identification effectiveness of the compression methods used in, among others, landline, mobile, and VoIP telephony. Results were also presented on the effectiveness of speaker identification at specific levels of noise in the speech signal and in the presence of other disturbances that can occur during phone calls, which made it possible to specify the range of applications of the presented ASR system.
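
    In the spirit of the GMM-based classification stage described in this record, the sketch below fits one Gaussian mixture per enrolled speaker on precomputed short-time features (e.g., MFCCs) and identifies a test utterance as the speaker whose model yields the highest average log-likelihood. The number of mixture components and the diagonal covariances are assumptions; the paper's genetic-algorithm feature selection is not reproduced here.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def train_speaker_models(features_by_speaker, n_components=16):
            """features_by_speaker: dict speaker_id -> (n_frames, n_features) array."""
            models = {}
            for speaker, feats in features_by_speaker.items():
                gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
                models[speaker] = gmm.fit(feats)
            return models

        def identify_speaker(models, test_features):
            """Return the enrolled speaker whose GMM scores the test frames highest."""
            scores = {spk: gmm.score(test_features) for spk, gmm in models.items()}
            return max(scores, key=scores.get)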

  12. Motivation and appraisal in perception of poorly specified speech.

    PubMed

    Lidestam, Björn; Beskow, Jonas

    2006-04-01

    Normal-hearing students (n = 72) performed sentence, consonant, and word identification in either A (auditory), V (visual), or AV (audiovisual) modality. The auditory signal had difficult speech-to-noise relations. Talker (human vs. synthetic), topic (no cue vs. cue-words), and emotion (no cue vs. facially displayed vs. cue-words) were varied within groups. After the first block, effects of modality, face, topic, and emotion on initial appraisal and motivation were assessed. After the entire session, effects of modality on longer-term appraisal and motivation were assessed. The results from both assessments showed that V identification was more positively appraised than A identification. Correlations were tentatively interpreted as indicating that evaluation of self-rated performance possibly depends on a subjective standard and is reflected in motivation (if below the subjective standard; AV group) or in appraisal (if above the subjective standard; A group). Suggestions for further research are presented.

  13. Velo-pharyngeal dysfunction: Evaluation and management

    PubMed Central

    Marsh, Jeffrey L.

    2009-01-01

    Separation of the nasal and oral cavities by dynamic closure of the velo-pharyngeal port is necessary for normal speech and swallowing. Velo-pharyngeal dysfunction (VPD) may either follow repair of a cleft palate or be independent of clefting. While the diagnosis of VPD is made by audiologic perceptual evaluation of speech, identification of the mechanism of the dysfunction requires instrumental visualization of the velo-pharyngeal port during specific speech tasks. Matching the specific intervention for management of VPD with the type of dysfunction, i.e. differential management for differential diagnosis, maximizes the result while minimizing the morbidity of the intervention. PMID:19884668

  14. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    PubMed

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate anatomically distinct cortical representations of modulated noise in normal-hearing and hearing-impaired listeners. This work provides the first link among hearing thresholds, the amplitude of cortical representations of modulated sounds, and the ability to understand speech in modulated background noise. In light of previous work, we propose that magnified cortical representations of modulated sounds disrupt the separation of speech from modulated background noise in auditory cortex. Copyright © 2017 Millman et al.

  15. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults

    PubMed Central

    Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We point out that while AV training could be an impediment to immediate unisensory perceptual learning in cochlear implant patients, it was also associated with higher scores during training. PMID:25206344

  16. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults.

    PubMed

    Bernstein, Lynne E; Eberhardt, Silvio P; Auer, Edward T

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We point out that while AV training could be an impediment to immediate unisensory perceptual learning in cochlear implant patients, it was also associated with higher scores during training.

  17. Multimedia material about velopharynx and primary palatoplasty for orientation of caregivers of children with cleft lip and palate.

    PubMed

    Costa, Tarcila Lima da; Souza, Olivia Mesquita Vieira de; Carneiro, Homero Aferri; Chiquito Netto, Cristianne; Pegoraro-Krook, Maria Inês; Dutka, Jeniffer de Cássia Rillo

    2016-01-01

    The objective of this study was to describe the process of elaboration and evaluation of multimedia material for caregivers about the velopharynx, speech, and primary palatoplasty in babies with cleft lip and palate. The elaboration of the material involved an interdisciplinary collaboration between the fields of Speech Language Pathology and Audiology, Dentistry, and Arts. The definition and execution of the following activities were based on the principles of art education: characterization of the audience, characterization of the content, identification and elaboration of illustrations, characterization of the educational approach, elaboration of text and narratives, definition of the audiovisual sequence, and video preparation. The material was evaluated with the participation of 41 caregivers of patients with cleft lip and palate, comparing knowledge assessed with an evaluation script applied before and after presentation of the material. An increase was observed in correct responses regarding the role of the velopharynx and the importance of primary palatoplasty for speech. The multimedia material was effective in improving caregivers' knowledge, suggesting the importance of such material during orientation.

  18. Aging Affects Identification of Vocal Emotions in Semantically Neutral Sentences

    ERIC Educational Resources Information Center

    Dupuis, Kate; Pichora-Fuller, M. Kathleen

    2015-01-01

    Purpose: The authors determined the accuracy of younger and older adults in identifying vocal emotions using the Toronto Emotional Speech Set (TESS; Dupuis & Pichora-Fuller, 2010a) and investigated the possible contributions of auditory acuity and suprathreshold processing to emotion identification accuracy. Method: In 2 experiments, younger…

  19. Differential Neural Contributions to Native- and Foreign-Language Talker Identification

    ERIC Educational Resources Information Center

    Perrachione, Tyler K.; Pierrehumbert, Janet B.; Wong, Patrick C. M.

    2009-01-01

    Humans are remarkably adept at identifying individuals by the sound of their voice, a behavior supported by the nervous system's ability to integrate information from voice and speech perception. Talker-identification abilities are significantly impaired when listeners are unfamiliar with the language being spoken. Recent behavioral studies…

  20. Speech-based Class Attendance

    NASA Astrophysics Data System (ADS)

    Faizel Amri, Umar; Nur Wahidah Nik Hashim, Nik; Hazrin Hany Mohamad Hanif, Noor

    2017-11-01

    In the department of engineering, students are required to fulfil at least 80 percent of class attendance. The conventional method requires each student to sign his/her initials on the attendance sheet. However, this method is prone to cheating, as one student can sign for a classmate who is absent. We develop our hypothesis according to a verse in the Holy Qur’an (95:4), “We have created men in the best of mould”. Based on this verse, we believe each psychological characteristic of a human being is unique and thus each person's speech characteristics should be unique. In this paper we present the development of a speech biometric-based attendance system. The system requires the user’s voice to be enrolled as trained data, which is saved in the system to register the user. Subsequent voice samples from the user serve as test data to be verified against the trained data stored in the system. The system uses PSD (Power Spectral Density) and Transition Parameter as the methods for feature extraction from the voices. Euclidean and Mahalanobis distances are used to verify the user’s voice. For this research, ten subjects (five female and five male) were chosen to test the performance of the system. The system performance in terms of recognition rate was found to be 60% correct identification of individuals.
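
    A minimal sketch of the verification idea in this record: derive a power spectral density (PSD) feature vector from an enrolment recording and from a test recording, then accept the test sample if its Euclidean distance to the enrolled template falls below a threshold. The Welch PSD parameters and the decision threshold are illustrative assumptions; the Transition Parameter feature and the Mahalanobis-distance variant described above are omitted.

        import numpy as np
        from scipy.signal import welch

        def psd_features(signal, fs, nperseg=512):
            """Normalized log power spectral density as a fixed-length voice template."""
            _, pxx = welch(signal, fs=fs, nperseg=nperseg)
            log_pxx = 10.0 * np.log10(pxx + 1e-12)
            return (log_pxx - log_pxx.mean()) / (log_pxx.std() + 1e-12)

        def verify(enrolled_signal, test_signal, fs, threshold=8.0):
            """Accept the claimed identity if the PSD templates are close enough (threshold assumed)."""
            d = np.linalg.norm(psd_features(enrolled_signal, fs) - psd_features(test_signal, fs))
            return d < threshold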

  1. Experiments in automatic word class and word sense identification for information retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauch, S.; Futrelle, R.P.

    Automatic identification of related words and automatic detection of word senses are two long-standing goals of researchers in natural language processing. Word class information and word sense identification may enhance the performance of information retrieval systems. Large online corpora and increased computational capabilities make new techniques based on corpus linguistics feasible. Corpus-based analysis is especially needed for corpora from specialized fields for which no electronic dictionaries or thesauri exist. The methods described here use a combination of mutual information and word context to establish word similarities. Then, unsupervised classification is done using clustering in the word space, identifying word classes without pretagging. We also describe an extension of the method to handle the difficult problems of disambiguation and of determining part-of-speech and semantic information for low-frequency words. The method is powerful enough to produce high-quality results on a small corpus of 200,000 words from abstracts in a field of molecular biology.
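
    The corpus-based approach summarized in this record can be sketched as follows: build word context vectors from co-occurrence counts, weight them with (positive) pointwise mutual information, and cluster the resulting word space without any pretagging. Window size, frequency cutoff, and the number of clusters below are illustrative assumptions rather than the study's parameters.

        import numpy as np
        from collections import Counter
        from sklearn.cluster import KMeans

        def word_classes(tokens, window=2, n_clusters=10, min_count=5):
            """Cluster words into classes from PMI-weighted co-occurrence context vectors."""
            vocab = [w for w, c in Counter(tokens).items() if c >= min_count]
            index = {w: i for i, w in enumerate(vocab)}
            counts = np.zeros((len(vocab), len(vocab)))
            for i, w in enumerate(tokens):
                if w not in index:
                    continue
                for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                    if j != i and tokens[j] in index:
                        counts[index[w], index[tokens[j]]] += 1
            total = counts.sum() + 1e-12
            pw = counts.sum(axis=1, keepdims=True) / total       # word marginals
            pc = counts.sum(axis=0, keepdims=True) / total       # context marginals
            pmi = np.maximum(np.log((counts / total + 1e-12) / (pw * pc + 1e-12)), 0.0)
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pmi)
            return {w: int(labels[index[w]]) for w in vocab}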

  2. Factors influencing speech perception in noise for 5-year-old children using hearing aids or cochlear implants.

    PubMed

    Ching, Teresa Yc; Zhang, Vicky W; Flynn, Christopher; Burns, Lauren; Button, Laura; Hou, Sanna; McGhie, Karen; Van Buynder, Patricia

    2017-07-07

    We investigated the factors influencing speech perception in babble for 5-year-old children with hearing loss who were using hearing aids (HAs) or cochlear implants (CIs). Speech reception thresholds (SRTs) for 50% correct identification were measured in two conditions - speech collocated with babble, and speech with spatially separated babble. The difference in SRTs between the two conditions gives a measure of binaural unmasking, commonly known as spatial release from masking (SRM). Multiple linear regression analyses were conducted to examine the influence of a range of demographic factors on outcomes. Participants were 252 children enrolled in the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. Children using HAs or CIs required a better signal-to-noise ratio to achieve the same level of performance as their normal-hearing peers but demonstrated SRM of a similar magnitude. For children using HAs, speech perception was significantly influenced by cognitive and language abilities. For children using CIs, age at CI activation and language ability were significant predictors of speech perception outcomes. Speech perception in children with hearing loss can be enhanced by improving their language abilities. Early age at cochlear implantation was also associated with better outcomes.

  3. Processing of speech and non-speech stimuli in children with specific language impairment

    NASA Astrophysics Data System (ADS)

    Basu, Madhavi L.; Surprenant, Aimee M.

    2003-10-01

    Specific Language Impairment (SLI) is a developmental language disorder in which children demonstrate varying degrees of difficulty in acquiring a spoken language. One possible underlying cause is that children with SLI have deficits in processing sounds that are of short duration or presented rapidly. Studies so far have compared their performance on speech and nonspeech sounds of unequal complexity. Hence, it is still unclear whether the deficit is specific to the perception of speech sounds or whether it affects auditory function more generally. The current study aims to answer this question by comparing the performance of children with SLI on speech and nonspeech sounds synthesized from sine-wave stimuli. The children will be tested using the classic categorical perception paradigm that includes both the identification and discrimination of stimuli along a continuum. If there is a deficit in the performance on both speech and nonspeech tasks, it will show that these children have a deficit in processing complex sounds. Poor performance on only the speech sounds will indicate that the deficit is more related to language. The findings will offer insights into the exact nature of the speech perception deficits in children with SLI. [Work supported by ASHF.]

  4. Computer-based auditory phoneme discrimination training improves speech recognition in noise in experienced adult cochlear implant listeners.

    PubMed

    Schumann, Annette; Serman, Maja; Gefeller, Olaf; Hoppe, Ulrich

    2015-03-01

    Specific computer-based auditory training may be a useful complement to the rehabilitation process for cochlear implant (CI) listeners seeking to achieve sufficient speech intelligibility. This study evaluated the effectiveness of a computerized, phoneme-discrimination training programme. The study employed a pretest-post-test design; participants were randomly assigned to the training or control group. Over a period of three weeks, the training group was instructed to train in phoneme discrimination via computer, twice a week. Sentence recognition in different noise conditions (moderate to difficult) was tested pre- and post-training, and six months after the training was completed. The control group was tested and retested within one month. Twenty-seven adult CI listeners who had been using cochlear implants for more than two years participated in the programme; 15 adults in the training group, 12 adults in the control group. Besides significant improvements for the trained phoneme-identification task, a generalized training effect was noted via significantly improved sentence recognition in moderate noise. No significant changes were noted in the difficult noise conditions. Improved performance was maintained over an extended period. Phoneme-discrimination training improves experienced CI listeners' speech perception in noise. Additional research is needed to optimize auditory training for individual benefit.

  5. Speech perception in individuals with auditory dys-synchrony: effect of lengthening of voice onset time and burst duration of speech segments.

    PubMed

    Kumar, U A; Jayaram, M

    2013-07-01

    The purpose of this study was to evaluate the effect of lengthening of voice onset time and burst duration of selected speech stimuli on perception by individuals with auditory dys-synchrony. This is the second of a series of articles reporting the effect of signal enhancing strategies on speech perception by such individuals. Two experiments were conducted: (1) assessment of the 'just-noticeable difference' for voice onset time and burst duration of speech sounds; and (2) assessment of speech identification scores when speech sounds were modified by lengthening the voice onset time and the burst duration in units of one just-noticeable difference, both in isolation and in combination with each other plus transition duration modification. Lengthening of voice onset time as well as burst duration improved perception of voicing. However, the effect of voice onset time modification was greater than that of burst duration modification. Although combined lengthening of voice onset time, burst duration and transition duration resulted in improved speech perception, the improvement was less than that due to lengthening of transition duration alone. These results suggest that innovative speech processing strategies that enhance temporal cues may benefit individuals with auditory dys-synchrony.

  6. The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users

    PubMed Central

    Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn

    2017-01-01

    This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants’ auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group-by-session interaction was observed. Conclusion: Audiovisual training may be considered in aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or an interaction between group and session calls for further research. PMID:28348542

  7. Recognition of speaker-dependent continuous speech with KEAL

    NASA Astrophysics Data System (ADS)

    Mercier, G.; Bigorgne, D.; Miclet, L.; Le Guennec, L.; Querre, M.

    1989-04-01

    A description of the speaker-dependent continuous speech recognition system KEAL is given. An unknown utterance is recognized by means of the following procedures: acoustic analysis, phonetic segmentation and identification, and word and sentence analysis. The combination of feature-based, speaker-independent coarse phonetic segmentation with speaker-dependent statistical classification techniques is one of the main design features of the acoustic-phonetic decoder. The lexical access component is essentially based on a statistical dynamic programming technique which aims at matching a phonemic lexical entry, containing various phonological forms, against a phonetic lattice. Sentence recognition is achieved by use of a context-free grammar and a parsing algorithm derived from Earley's parser. A speaker adaptation module allows some of the system parameters to be adjusted by matching known utterances with their acoustical representation. The task to be performed, described by its vocabulary and its grammar, is given as a parameter of the system. Continuously spoken sentences extracted from a 'pseudo-Logo' language are analyzed and results are presented.
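
    The lexical access step described above can be illustrated with a small dynamic-programming sketch in Python. This is not the KEAL implementation: KEAL matches phonological variants against a full phonetic lattice with statistical costs, whereas the sketch below reduces the lattice to a single best phoneme string, and the lexicon, phoneme symbols, and edit costs are illustrative placeholders.

    # Minimal sketch (not the KEAL implementation): dynamic-programming alignment of a
    # lexical phoneme sequence against a decoded phoneme sequence. The costs below are
    # illustrative placeholders, not statistically trained scores.

    def align_cost(lexical, decoded, sub_cost=1.0, ins_cost=1.0, del_cost=1.0):
        """Return the minimum edit cost of matching `lexical` to `decoded`."""
        n, m = len(lexical), len(decoded)
        # dp[i][j] = cost of aligning lexical[:i] with decoded[:j]
        dp = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * del_cost
        for j in range(1, m + 1):
            dp[0][j] = j * ins_cost
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = dp[i - 1][j - 1] + (0.0 if lexical[i - 1] == decoded[j - 1] else sub_cost)
                dp[i][j] = min(match, dp[i - 1][j] + del_cost, dp[i][j - 1] + ins_cost)
        return dp[n][m]

    # Pick the lexical entry (with its phonological variants) that matches best.
    lexicon = {"avance": [["a", "v", "an", "s"], ["a", "v", "an", "s", "@"]]}   # made-up entry
    decoded = ["a", "f", "an", "s"]                                             # made-up decoding
    best = min(
        (align_cost(variant, decoded), word)
        for word, variants in lexicon.items()
        for variant in variants
    )
    print(best)  # (1.0, 'avance') -- one substitution (v -> f)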

  8. Enhanced perception of pitch changes in speech and music in early blind adults.

    PubMed

    Arnaud, Laureline; Gracco, Vincent; Ménard, Lucie

    2018-06-12

    It is well known that congenitally blind adults have enhanced auditory processing for some tasks. For instance, they show supra-normal capacity to perceive accelerated speech. However, only a few studies have investigated basic auditory processing in this population. In this study, we investigated if pitch processing enhancement in the blind is a domain-general or domain-specific phenomenon, and if pitch processing shares the same properties as in the sighted regarding how scores from different domains are associated. Fifteen congenitally blind adults and fifteen sighted adults participated in the study. We first created a set of personalized native and non-native vowel stimuli using an identification and rating task. Then, an adaptive discrimination paradigm was used to determine the frequency difference limen for pitch direction identification of speech (native and non-native vowels) and non-speech stimuli (musical instruments and pure tones). The results show that the blind participants had better discrimination thresholds than controls for native vowels, music stimuli, and pure tones. Whereas within the blind group, the discrimination thresholds were smaller for musical stimuli than speech stimuli, replicating previous findings in sighted participants, we did not find this effect in the current control group. Further analyses indicate that older sighted participants show higher thresholds for instrument sounds compared to speech sounds. This effect of age was not found in the blind group. Moreover, the scores across domains were not associated to the same extent in the blind as they were in the sighted. In conclusion, in addition to providing further evidence of compensatory auditory mechanisms in early blind individuals, our results point to differences in how auditory processing is modulated in this population. Copyright © 2018 Elsevier Ltd. All rights reserved.
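
    The adaptive discrimination paradigm mentioned above is typically implemented as a staircase procedure. The abstract does not state the exact rule used, so the Python sketch below shows a generic 2-down/1-up staircase converging on a frequency difference limen; the simulated listener, starting step, and step factor are illustrative assumptions.

    # Minimal sketch of a 2-down/1-up adaptive staircase of the kind commonly used to
    # estimate a frequency difference limen. The simulated listener and the step sizes
    # are illustrative assumptions, not the study's procedure.
    import random

    def simulated_listener(delta_hz, true_limen=4.0):
        """Probability of a correct pitch-direction judgement rises with the frequency step."""
        p_correct = 0.5 + 0.5 * min(delta_hz / (2.0 * true_limen), 1.0)
        return random.random() < p_correct

    def run_staircase(start_delta=32.0, step_factor=2.0, n_reversals=8):
        delta, correct_in_a_row, direction = start_delta, 0, -1
        reversals = []
        while len(reversals) < n_reversals:
            if simulated_listener(delta):
                correct_in_a_row += 1
                if correct_in_a_row == 2:          # 2-down: make the task harder
                    correct_in_a_row = 0
                    if direction == +1:
                        reversals.append(delta)
                    direction = -1
                    delta /= step_factor
            else:                                  # 1-up: make the task easier
                correct_in_a_row = 0
                if direction == -1:
                    reversals.append(delta)
                direction = +1
                delta *= step_factor
        # The limen is conventionally the mean of the last few reversal points.
        return sum(reversals[-6:]) / len(reversals[-6:])

    print(f"estimated difference limen ~ {run_staircase():.1f} Hz")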

  9. New Techniques in Diagnosis and Appraisal and Implication for Therapy for All Public School Children with Communication Disorders, Proceedings of a Special Study Institute (October 4-6, 1967, Montgomery, Alabama).

    ERIC Educational Resources Information Center

    Alabama State Dept. of Education, Montgomery.

    In October 1967, a special study institute, sponsored by the Alabama State Department of Education, convened for the purposes of (1) discussing the most efficacious means of speech therapy program organization, (2) exploring new techniques of speech problem identification, prognosis determination, and therapeutic sequence development, and (3)…

  10. The effect of temporal gap identification on speech perception by users of cochlear implants

    PubMed Central

    Sagi, Elad; Kaiser, Adam R.; Meyer, Ted A.; Svirsky, Mario A.

    2008-01-01

    Purpose This study examined the ability of listeners using cochlear implants (CIs) and listeners with normal hearing (NH) to identify silent gaps of different duration, and the relation of this ability to speech understanding in CI users. Method Sixteen NH adults and eleven postlingually deafened adults with CIs identified synthetic vowel-like stimuli that were either continuous or contained an intervening silent gap ranging from 15 to 90 ms. Cumulative d’, an index of discriminability, was calculated for each participant. Consonant and CNC word identification tasks were administered to the CI group. Results Overall, the ability to identify stimuli with gaps of different duration was better for the NH group than for the CI group. Seven CI users had cumulative d’ scores that were no higher than those of any NH listener, and their CNC word scores ranged from 0 to 30%. The other four CI users had cumulative d’ scores within the range of the NH group, and their CNC word scores ranged from 46% to 68%. For the CI group, cumulative d’ scores were significantly correlated with their speech testing scores. Conclusions The ability to identify silent gap duration may help explain individual differences in speech perception by CI users. PMID:18806216
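
    As a rough illustration of the cumulative d’ index, the Python sketch below derives a d’ value between adjacent gap-duration steps from the proportion of "gap present" responses at each step and sums these values across the continuum. The exact scoring procedure used in the study may differ, and the response proportions shown are made up.

    # Minimal sketch of a cumulative d' computation along a gap-duration continuum
    # (illustrative; the paper's exact scoring procedure may differ). d' between adjacent
    # stimulus steps is derived from the proportion of "gap present" responses at each
    # step, and the cumulative index is the running sum of those per-step d' values.
    from scipy.stats import norm

    def pairwise_dprime(p_lower, p_higher, floor=0.01, ceil=0.99):
        """d' between two adjacent steps from their 'gap present' response proportions."""
        p_lower = min(max(p_lower, floor), ceil)     # avoid infinite z-scores
        p_higher = min(max(p_higher, floor), ceil)
        return norm.ppf(p_higher) - norm.ppf(p_lower)

    def cumulative_dprime(proportions):
        """Sum of adjacent-step d' values across the whole continuum."""
        return sum(pairwise_dprime(a, b) for a, b in zip(proportions, proportions[1:]))

    # Proportions of "gap present" responses at gaps of 0, 15, 30, 60, 90 ms (made-up values):
    print(cumulative_dprime([0.05, 0.20, 0.55, 0.85, 0.97]))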

  11. On the nature of the speech perception deficits in children with autism spectrum disorders.

    PubMed

    You, R S; Serniclaes, W; Rider, D; Chabane, N

    2017-02-01

    Previous studies have claimed to show deficits in the perception of speech sounds in autism spectrum disorders (ASD). The aim of the current study was to clarify the nature of such deficits. Children with ASD might exhibit only reduced precision in the perception of phoneme categories (a CPR deficit). However, these children might further present an allophonic mode of speech perception, similar to the one evidenced in dyslexia, characterised by enhanced discrimination of acoustic differences within phoneme categories. Allophonic perception usually gives rise to a categorical perception (CP) deficit, characterised by a weaker coherence between discrimination and identification of speech sounds. The perceptual performance of ASD children was compared to that of control children of the same chronological age. Identification and discrimination data were collected for continua of natural vowels, synthetic vowels, and synthetic consonants. Results confirmed that children with ASD exhibit a CPR deficit for the three stimulus continua. These children further exhibited a trend toward allophonic perception that was, however, not accompanied by the usual CP deficit. These findings confirm that the commonly found CPR deficit is also present in ASD. Whether children with ASD also present allophonic perception requires further investigation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Hearing history influences voice gender perceptual performance in cochlear implant users.

    PubMed

    Kovačić, Damir; Balaban, Evan

    2010-12-01

    The study was carried out to assess the role that five hearing history variables (chronological age, age at onset of deafness, age of first cochlear implant [CI] activation, duration of CI use, and duration of known deafness) play in the ability of CI users to identify speaker gender. Forty-one juvenile CI users participated in two voice gender identification tasks. In a fixed, single-interval task, subjects listened to a single speech item from one of 20 adult male or 20 adult female speakers and had to identify speaker gender. In an adaptive speech-based voice gender discrimination task with the fundamental frequency difference between the voices as the adaptive parameter, subjects listened to a pair of speech items presented in sequential order, one of which was always spoken by an adult female and the other by an adult male. Subjects had to identify the speech item spoken by the female voice. Correlation and regression analyses between perceptual scores in the two tasks and the hearing history variables were performed. Subjects fell into three performance groups: (1) those who could distinguish voice gender in both tasks, (2) those who could distinguish voice gender in the adaptive but not the fixed task, and (3) those who could not distinguish voice gender in either task. Gender identification performance for single voices in the fixed task was significantly and negatively related to the duration of deafness before cochlear implantation (shorter deafness yielded better performance), whereas performance in the adaptive task was weakly but significantly related to age at first activation of the CI device, with earlier activations yielding better scores. The existence of a group of subjects able to perform adaptive discrimination but unable to identify the gender of singly presented voices demonstrates the potential dissociability of the skills required for these two tasks, suggesting that duration of deafness and age of cochlear implantation could have dissociable effects on the development of different skills required by CI users to identify speaker gender.

  13. Issues in forensic voice.

    PubMed

    Hollien, Harry; Huntley Bahr, Ruth; Harnsberger, James D

    2014-03-01

    The following article provides a general review of an area that can be referred to as Forensic Voice. Its goals will be outlined and that discussion will be followed by a description of its major elements. Considered are (1) the processing and analysis of spoken utterances, (2) distorted speech, (3) enhancement of speech intelligibility (re: surveillance and other recordings), (4) transcripts, (5) authentication of recordings, (6) speaker identification, and (7) the detection of deception, intoxication, and emotions in speech. Stress in speech and the psychological stress evaluation systems (that some individuals attempt to use as lie detectors) also will be considered. Points of entry will be suggested for individuals with the kinds of backgrounds possessed by professionals already working in the voice area. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  14. A Picture-Identification Test for Hearing-Impaired Children. Final Report.

    ERIC Educational Resources Information Center

    Ross, Mark; Lerman, Jay

    The Word Intelligibility by Picture Identification Test (WIPI) was developed to measure speech discrimination ability in hearing impaired children. In the first phase of development, the word stimuli were evaluated to determine whether they were within the recognition vocabulary of 15 hearing impaired children (aged 6 to 12) and whether the…

  15. Perception of Sung Speech in Bimodal Cochlear Implant Users.

    PubMed

    Crew, Joseph D; Galvin, John J; Fu, Qian-Jie

    2016-11-11

    Combined use of a hearing aid (HA) and cochlear implant (CI) has been shown to improve CI users' speech and music performance. However, different hearing devices, test stimuli, and listening tasks may interact and obscure bimodal benefits. In this study, speech and music perception were measured in bimodal listeners for CI-only, HA-only, and CI + HA conditions, using the Sung Speech Corpus, a database of monosyllabic words produced at different fundamental frequencies. Sentence recognition was measured using sung speech in which pitch was held constant or varied across words, as well as for spoken speech. Melodic contour identification (MCI) was measured using sung speech in which the words were held constant or varied across notes. Results showed that sentence recognition was poorer with sung speech relative to spoken, with little difference between sung speech with a constant or variable pitch; mean performance was better with CI-only relative to HA-only, and best with CI + HA. MCI performance was better with constant words versus variable words; mean performance was better with HA-only than with CI-only and was best with CI + HA. Relative to CI-only, a strong bimodal benefit was observed for speech and music perception. Relative to the better ear, bimodal benefits remained strong for sentence recognition but were marginal for MCI. While variations in pitch and timbre may negatively affect CI users' speech and music perception, bimodal listening may partially compensate for these deficits. © The Author(s) 2016.

  16. Reliability Measure of a Clinical Test: Appreciation of Music in Cochlear Implantees (AMICI)

    PubMed Central

    Cheng, Min-Yu; Spitzer, Jaclyn B.; Shafiro, Valeriy; Sheft, Stanley; Mancuso, Dean

    2014-01-01

    Purpose The goals of this study were (1) to investigate the reliability of a clinical music perception test, Appreciation of Music in Cochlear Implantees (AMICI), and (2) to examine associations between the perception of music and speech. AMICI was developed as a clinical instrument for assessing music perception in persons with cochlear implants (CIs). The test consists of four subtests: (1) music versus environmental noise discrimination, (2) musical instrument identification (closed-set), (3) musical style identification (closed-set), and (4) identification of musical pieces (open-set). To be clinically useful, it is crucial for AMICI to demonstrate high test-retest reliability, so that CI users can be assessed and retested after changes in maps or programming strategies. Research Design Thirteen CI subjects were tested with AMICI at the initial visit and retested 10–14 days later. Two speech perception tests (consonant-nucleus-consonant [CNC] and Bamford-Kowal-Bench Speech-in-Noise [BKB-SIN]) were also administered. Data Analysis Test-retest reliability and equivalence of the test’s three forms were analyzed using paired t-tests and correlation coefficients, respectively. Correlation analysis was also conducted between results from the music and speech perception tests. Results Results showed no significant difference between test and retest (p > 0.05) with adequate power (0.9) as well as high correlations between the three forms (Forms A and B, r = 0.91; Forms A and C, r = 0.91; Forms B and C, r = 0.95). Correlation analysis showed high correlation between AMICI and BKB-SIN (r = −0.71), and moderate correlation between AMICI and CNC (r = 0.4). Conclusions The study showed AMICI is highly reliable for assessing musical perception in CI users. PMID:24384082

  17. Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users.

    PubMed

    Jaekel, Brittany N; Newman, Rochelle S; Goupell, Matthew J

    2017-05-24

    Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes. Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates. Twenty-three CI and 29 NH participants performed a phoneme identification task. NH participants heard the same unprocessed stimuli as the CI participants or stimuli degraded by a sine vocoder, simulating aspects of CI processing. CI participants showed larger rate normalization effects (6.6 ms) than the NH participants (3.7 ms) and had shallower (less reliable) category boundary slopes. NH participants showed similarly shallow slopes when presented acoustically degraded vocoded signals, but an equal or smaller rate effect in response to reductions in available spectral and temporal information. CI participants can rate normalize, despite their degraded speech input, and show a larger rate effect compared to NH participants. CI participants may particularly rely on rate normalization to better maintain perceptual constancy of the speech signal.

  18. Identification of speech transients using variable frame rate analysis and wavelet packets.

    PubMed

    Rasetshwane, Daniel M; Boston, J Robert; Li, Ching-Chung

    2006-01-01

    Speech transients are important cues for identifying and discriminating speech sounds. Yoo et al. and Tantibundhit et al. were successful in identifying speech transients and, by emphasizing them, improving the intelligibility of speech in noise. However, their methods are computationally intensive and unsuitable for real-time applications. This paper presents a method to identify and emphasize speech transients that combines subband decomposition by the wavelet packet transform with variable frame rate (VFR) analysis and unvoiced consonant detection. The VFR analysis is applied to each wavelet packet to define a transitivity function that describes the extent to which the wavelet coefficients of that packet are changing. Unvoiced consonant detection is used to identify unvoiced consonant intervals and the transitivity function is amplified during these intervals. The wavelet coefficients are multiplied by the transitivity function for that packet, amplifying the coefficients localized at times when they are changing and attenuating coefficients at times when they are steady. Inverse transform of the modified wavelet packet coefficients produces a signal corresponding to speech transients similar to the transients identified by Yoo et al. and Tantibundhit et al. A preliminary implementation of the algorithm runs more efficiently than these earlier methods.
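
    The core idea, weighting each wavelet packet by how quickly its coefficients are changing, can be sketched in Python with PyWavelets. This is not the authors' VFR algorithm (and it omits unvoiced consonant detection); the wavelet, decomposition depth, frame length, and gain rule are illustrative assumptions.

    # Minimal sketch of the idea (not the authors' exact VFR algorithm): decompose speech
    # with a wavelet packet transform, weight each packet's coefficients by a simple
    # "transitivity"-like measure of how fast they are changing, and reconstruct.
    import numpy as np
    import pywt

    def emphasize_transients(x, wavelet="db4", level=4, frame=32):
        wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=level)
        for node in wp.get_level(level, order="freq"):
            c = node.data
            n_frames = max(len(c) // frame, 1)
            gains = np.ones_like(c)
            prev_energy = None
            for k in range(n_frames):
                seg = c[k * frame:(k + 1) * frame]
                energy = float(np.sum(seg ** 2)) + 1e-12
                if prev_energy is not None:
                    # Large frame-to-frame energy change -> coefficients are "in transition".
                    change = abs(energy - prev_energy) / max(energy, prev_energy)
                    gains[k * frame:(k + 1) * frame] = 1.0 + change   # boost changing frames
                prev_energy = energy
            node.data = c * gains
        return wp.reconstruct(update=True)

    # Usage with a toy signal (replace with real speech samples at, e.g., 16 kHz):
    x = np.random.randn(4096).astype(float)
    y = emphasize_transients(x)
    print(len(x), len(y))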

  19. Role of working memory and lexical knowledge in perceptual restoration of interrupted speech.

    PubMed

    Nagaraj, Naveen K; Magimairaj, Beula M

    2017-12-01

    The role of working memory (WM) capacity and lexical knowledge in perceptual restoration (PR) of missing speech was investigated using the interrupted speech perception paradigm. Speech identification ability, which indexed PR, was measured using low-context sentences periodically interrupted at 1.5 Hz. PR was measured for silent gated, low-frequency speech noise filled, and low-frequency fine-structure and envelope filled interrupted conditions. WM capacity was measured using verbal and visuospatial span tasks. Lexical knowledge was assessed using both receptive vocabulary and meaning from context tests. Results showed that PR was better for speech noise filled condition than other conditions tested. Both receptive vocabulary and verbal WM capacity explained unique variance in PR for the speech noise filled condition, but were unrelated to performance in the silent gated condition. It was only receptive vocabulary that uniquely predicted PR for fine-structure and envelope filled conditions. These findings suggest that the contribution of lexical knowledge and verbal WM during PR depends crucially on the information content that replaced the silent intervals. When perceptual continuity was partially restored by filler speech noise, both lexical knowledge and verbal WM capacity facilitated PR. Importantly, for fine-structure and envelope filled interrupted conditions, lexical knowledge was crucial for PR.

  20. High-Resolution, Non-Invasive Imaging of Upper Vocal Tract Articulators Compatible with Human Brain Recordings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouchard, Kristofer E.; Conant, David F.; Anumanchipalli, Gopala K.

    A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial, especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics.

  1. Recognition of speech in noise after application of time-frequency masks: Dependence on frequency and threshold parameters

    PubMed Central

    Sinex, Donal G.

    2013-01-01

    Binary time-frequency (TF) masks can be applied to separate speech from noise. Previous studies have shown that with appropriate parameters, ideal TF masks can extract highly intelligible speech even at very low speech-to-noise ratios (SNRs). Two psychophysical experiments provided additional information about the dependence of intelligibility on the frequency resolution and threshold criteria that define the ideal TF mask. Listeners identified AzBio Sentences in noise, before and after application of TF masks. Masks generated with 8 or 16 frequency bands per octave supported nearly perfect identification. Word recognition accuracy was slightly lower and more variable with 4 bands per octave. When TF masks were generated with a local threshold criterion of 0 dB SNR, the mean speech reception threshold was −9.5 dB SNR, compared to −5.7 dB for unprocessed sentences in noise. Speech reception thresholds decreased by about 1 dB per dB of additional decrease in the local threshold criterion. Information reported here about the dependence of speech intelligibility on frequency and level parameters has relevance for the development of non-ideal TF masks for clinical applications such as speech processing for hearing aids. PMID:23556604
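
    The ideal binary mask itself is straightforward to express. The Python sketch below keeps only time-frequency cells whose local SNR exceeds a criterion (0 dB here) and applies the mask to the noisy mixture; note that it uses a uniform STFT grid rather than the octave-spaced bands evaluated in the study, and the frame size is an illustrative choice.

    # Minimal sketch of an ideal binary time-frequency mask (illustrative parameters).
    # The mask keeps only cells whose local SNR exceeds the criterion, here 0 dB.
    import numpy as np
    from scipy.signal import stft, istft

    def ideal_binary_mask(speech, noise, fs=16000, nperseg=512, criterion_db=0.0):
        _, _, S = stft(speech, fs=fs, nperseg=nperseg)            # clean speech STFT
        _, _, N = stft(noise, fs=fs, nperseg=nperseg)             # noise STFT
        local_snr_db = 20.0 * np.log10((np.abs(S) + 1e-12) / (np.abs(N) + 1e-12))
        mask = (local_snr_db > criterion_db).astype(float)
        _, _, Y = stft(speech + noise, fs=fs, nperseg=nperseg)    # noisy mixture
        _, masked = istft(Y * mask, fs=fs, nperseg=nperseg)
        return masked, mask

    # Toy usage: a tone in white noise at a low SNR (stand-in for speech in noise).
    fs = 16000
    t = np.arange(fs) / fs
    speech = 0.1 * np.sin(2 * np.pi * 440 * t)
    noise = 0.1 * np.random.randn(fs)
    enhanced, mask = ideal_binary_mask(speech, noise, fs=fs)
    print(enhanced.shape, mask.mean())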

  2. High-Resolution, Non-Invasive Imaging of Upper Vocal Tract Articulators Compatible with Human Brain Recordings

    PubMed Central

    Anumanchipalli, Gopala K.; Dichter, Benjamin; Chaisanguanthum, Kris S.; Johnson, Keith; Chang, Edward F.

    2016-01-01

    A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial—especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics. PMID:27019106

  3. High-Resolution, Non-Invasive Imaging of Upper Vocal Tract Articulators Compatible with Human Brain Recordings

    DOE PAGES

    Bouchard, Kristofer E.; Conant, David F.; Anumanchipalli, Gopala K.; ...

    2016-03-28

    A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial, especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics.
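
    The non-negative matrix factorization step reported in this abstract can be sketched in a few lines. The example below uses scikit-learn's NMF on a random nonnegative stand-in for the articulator feature matrix; the feature layout, number of components, and use of scikit-learn are assumptions for illustration, not the authors' pipeline.

    # Minimal sketch of the NMF step: factor a nonnegative matrix of vocal tract
    # measurements into a small set of basis "shapes" plus per-token activations,
    # which could then feed a vowel classifier.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    # Rows = vowel tokens, columns = nonnegative articulator features
    # (e.g., lip aperture, tongue-point heights); random stand-ins here.
    X = np.abs(rng.normal(size=(180, 40)))

    model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
    activations = model.fit_transform(X)   # (tokens x 5) weights on the basis shapes
    basis_shapes = model.components_       # (5 x features) vocal tract basis set

    print(activations.shape, basis_shapes.shape)
    # `activations` would replace raw features as the input to vowel classification.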

  4. [Test set for the evaluation of hearing and speech development after cochlear implantation in children].

    PubMed

    Lamprecht-Dinnesen, A; Sick, U; Sandrieser, P; Illg, A; Lesinski-Schiedat, A; Döring, W H; Müller-Deile, J; Kiefer, J; Matthias, K; Wüst, A; Konradi, E; Riebandt, M; Matulat, P; Von Der Haar-Heise, S; Swart, J; Elixmann, K; Neumann, K; Hildmann, A; Coninx, F; Meyer, V; Gross, M; Kruse, E; Lenarz, T

    2002-10-01

    Since autumn 1998, the multicenter interdisciplinary study group "Test Materials for CI Children" has been compiling a uniform examination tool for the evaluation of speech and hearing development after cochlear implantation in childhood. After a review of the relevant literature, suitable materials were checked for practical applicability, modified, and provided with criteria for administration and discontinuation. For data acquisition, observation forms were developed in preparation for a PC version. The evaluation set contains forms for master data with supplements relating to postoperative processes. The hearing tests check supra-threshold hearing with loudness scaling for children, speech comprehension in silence (Mainz and Göttingen Test for Speech Comprehension in Childhood) and phonemic differentiation (Oldenburg Rhyme Test for Children), the central auditory processes of detection, discrimination, identification and recognition (modification of the "Frankfurt Functional Hearing Test for Children"), and audiovisual speech perception (Open Paragraph Tracking, Kiel Speech Track Program). The materials for speech and language development cover phonetics-phonology, lexicon and semantics (LOGO Pronunciation Test), syntax and morphology (analysis of spontaneous speech), language comprehension (Reynell Scales), and communication and pragmatics (observation forms). The modified MAIS and MUSS questionnaires are integrated. The evaluation set serves quality assurance and permits factor analysis as well as checks for regularity through the multicenter comparison of long-term developmental trends after cochlear implantation.

  5. Intelligent acoustic data fusion technique for information security analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Ying; Tang, Yize; Lu, Wenda; Wang, Zhongfeng; Wang, Zepeng; Zhang, Luming

    2017-08-01

    Tone is an essential component of word formation in all tonal languages, and it plays an important role in the transmission of information in speech communication. Therefore, the study of tone characteristics can be applied to the security analysis of acoustic signals, for example by means of language identification. In speech processing, fundamental frequency (F0) is often viewed by speech synthesis researchers as representing tone. However, regular F0 values may lead to low naturalness in synthesized speech. Moreover, F0 and tone are not equivalent linguistically; F0 is just a representation of a tone. Therefore, the electroglottography (EGG) signal is collected for a deeper study of tone characteristics. In this paper, focusing on the Northern Kam language, which has nine tonal contours and five level tone types, we first collected EGG and speech signals from six male native speakers of the Northern Kam language, and then obtained the clustering distributions of the tone curves. After summarizing the main characteristics of the tones of Northern Kam, we analyzed the relationship between EGG and speech signal parameters, laying the foundation for further security analysis of acoustic signals.

  6. Speech Perception Deficits in Mandarin-Speaking School-Aged Children with Poor Reading Comprehension

    PubMed Central

    Liu, Huei-Mei; Tsao, Feng-Ming

    2017-01-01

    Previous studies have shown that children learning alphabetic writing systems who have language impairment or dyslexia exhibit speech perception deficits. However, whether such deficits exist in children learning logographic writing systems who have poor reading comprehension remains uncertain. To further explore this issue, the present study examined speech perception deficits in Mandarin-speaking children with poor reading comprehension. Two self-designed tasks, a consonant categorical perception task and a lexical tone discrimination task, were used to compare speech perception performance in children (n = 31, age range = 7;4–10;2) with poor reading comprehension and an age-matched typically developing group (n = 31, age range = 7;7–9;10). Results showed that the children with poor reading comprehension were less accurate in consonant and lexical tone discrimination tasks and perceived speech contrasts less categorically than the matched group. The correlations between speech perception skills (i.e., consonant and lexical tone discrimination sensitivities and slope of consonant identification curve) and individuals’ oral language and reading comprehension were stronger than the correlations between speech perception ability and word recognition ability. In conclusion, the results revealed that Mandarin-speaking children with poor reading comprehension exhibit less categorical speech perception, suggesting that imprecise speech perception, especially lexical tone perception, is essential to account for reading learning difficulties in Mandarin-speaking children. PMID:29312031

  7. An odor identification approach based on event-related pupil dilation and gaze focus.

    PubMed

    Aguillon-Hernandez, Nadia; Naudin, Marine; Roché, Laëtitia; Bonnet-Brilhault, Frédérique; Belzung, Catherine; Martineau, Joëlle; Atanasova, Boriana

    2015-06-01

    Olfactory disorders constitute a potential marker of many diseases and are considered valuable clues to the diagnosis and evaluation of progression for many disorders. The most commonly used test for the evaluation of impairments of olfactory identification requires the active participation of the subject, who must select the correct name of the perceived odor from a list. An alternative method is required because speech may be impaired or not yet learned in many patients. As odor identification is known to be facilitated by searching for visual clues, we aimed to develop an objective, vision-based approach for the evaluation of odor identification. We used an eye tracking method to quantify pupillary and ocular responses during the simultaneous presentation of olfactory and visual stimuli, in 39 healthy participants aged 19 to 77 years. Odor presentation triggered an increase in pupil dilation and gaze focus on the picture corresponding to the odor presented. These results suggest that odorant stimuli increase recruitment of the sympathetic system (as demonstrated by the reactivity of the pupil) and draw attention to the visual clue. These results validate the objectivity of this method. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Effect of Age on F0 Difference Limen and Concurrent Vowel Identification

    ERIC Educational Resources Information Center

    Vongpaisal, Tara; Pichora-Fuller, Margaret Kathleen

    2007-01-01

    Purpose: To investigate the effect of age on voice fundamental frequency (F0) difference limen (DL) and identification of concurrently presented vowels. Method: Fifteen younger and 15 older adults with normal audiometric thresholds in the speech range participated in 2 experiments. In Experiment 1, F0 DLs were measured for…

  9. Cost-sensitive learning for emotion robust speaker recognition.

    PubMed

    Li, Dongdong; Yang, Yingchun; Dai, Weihui

    2014-01-01

    In the field of information security, voice is one of the most important modalities in biometrics. In particular, with the development of voice communication through the Internet or telephone systems, huge voice data resources are being accessed. In speaker recognition, the voiceprint can be applied as a unique password for the user to prove his/her identity. However, speech with various emotions can cause an unacceptably high error rate and degrade the performance of a speaker recognition system. This paper deals with this problem by introducing a cost-sensitive learning technology to reweight the probability of test affective utterances at the pitch envelope level, which can effectively enhance the robustness of emotion-dependent speaker recognition. Based on this technology, a new recognition system architecture, together with its components, is proposed in this paper. The experiment conducted on the Mandarin Affective Speech Corpus shows that an 8% improvement in identification rate over traditional speaker recognition is achieved.

  10. Speech-generating devices: effectiveness of interface design-a comparative study of autism spectrum disorders.

    PubMed

    Chen, Chien-Hsu; Wang, Chuan-Po; Lee, I-Jui; Su, Chris Chun-Chin

    2016-01-01

    We analyzed the efficacy of the interface design of speech generating devices on three non-verbal adolescents with autism spectrum disorder (ASD), in hopes of improving their on-campus communication and cognitive disability. The intervention program was created based on their social and communication needs in school. Two operating interfaces were designed and compared: the Hierarchical Relating Menu and the Pie Abbreviation-Expansion Menu. The experiment used the ABCACB multiple-treatment reversal design. The test items included: (1) accuracy of operating identification; (2) interface operation in response to questions; (3) degree of independent completion. Each of these three items improved with both intervention interfaces. The children were able to operate the interfaces skillfully and respond to questions accurately, which evidenced the effectiveness of the interfaces. We conclude that both interfaces are efficacious enough to help nonverbal children with ASD at different levels.

  11. Cost-Sensitive Learning for Emotion Robust Speaker Recognition

    PubMed Central

    Li, Dongdong; Yang, Yingchun

    2014-01-01

    In the field of information security, voice is one of the most important modalities in biometrics. In particular, with the development of voice communication through the Internet or telephone systems, huge voice data resources are being accessed. In speaker recognition, the voiceprint can be applied as a unique password for the user to prove his/her identity. However, speech with various emotions can cause an unacceptably high error rate and degrade the performance of a speaker recognition system. This paper deals with this problem by introducing a cost-sensitive learning technology to reweight the probability of test affective utterances at the pitch envelope level, which can effectively enhance the robustness of emotion-dependent speaker recognition. Based on this technology, a new recognition system architecture, together with its components, is proposed in this paper. The experiment conducted on the Mandarin Affective Speech Corpus shows that an 8% improvement in identification rate over traditional speaker recognition is achieved. PMID:24999492

  12. [A summer afternoon in Grinzing. Thomas Mann visits Sigmund Freud].

    PubMed

    Hummel, Gerhard

    2006-01-01

    Focussing on June 14th, 1936 when Mann visited Freud to read him the speech he had delivered in Vienna in celebration of Freud's 80th birthday, the paper investigates the "less than simple" relation between the two men. It shows how they gradually approached each other and then in 1929 entered into direct contact after Mann had publicly underlined Freud's relevance for his project "psychology and myth". Some traces of personal ambivalence contained in the 1936 lecture are highlighted. The author discusses the potential significance for both men of Freud's response to Mann's speech where he interpreted aspects of Napoleon's life as based on his identification with the biblical Joseph in order to surpass his elder brother. Finally it is considered whether Mann's contact with Freud may have helped him to cope with the trauma of the early loss of his father.

  13. Measuring Speech Comprehensibility in Students with Down Syndrome

    PubMed Central

    Woynaroski, Tiffany; Camarata, Stephen

    2016-01-01

    Purpose There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based measure of the comprehensibility of conversational speech in students with Down syndrome. Method Participants were 10 elementary school students with Down syndrome and 4 unfamiliar adult raters. Averaged across-observer Likert ratings of speech comprehensibility were called a ratings-based measure of speech comprehensibility. The proportion of utterance attempts fully glossed constituted an orthography-based measure of speech comprehensibility. Results Averaging across 4 raters on four 5-min segments produced a reliable (G = .83) ratings-based measure of speech comprehensibility. The ratings-based measure was strongly (r > .80) correlated with the orthography-based measure for both the same and different conversational samples. Conclusion Reliable and valid measures of speech comprehensibility are achievable with the resources available to many researchers and some clinicians. PMID:27299989

  14. High stimulus variability in nonnative speech learning supports formation of abstract categories: evidence from Japanese geminates.

    PubMed

    Sadakata, Makiko; McQueen, James M

    2013-08-01

    This study reports effects of a high-variability training procedure on nonnative learning of a Japanese geminate-singleton fricative contrast. Thirty native speakers of Dutch took part in a 5-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. Participants were trained with either many repetitions of a limited set of words recorded by a single speaker (low-variability training) or with fewer repetitions of a more variable set of words recorded by multiple speakers (high-variability training). Both types of training enhanced identification of speech but not of nonspeech materials, indicating that learning was domain specific. High-variability training led to superior performance in identification but not in discrimination tests, and supported better generalization of learning as shown by transfer from the trained fricatives to the identification of untrained stops and affricates. Variability thus helps nonnative listeners to form abstract categories rather than to enhance early acoustic analysis.

  15. Perception of musical and lexical tones by Taiwanese-speaking musicians.

    PubMed

    Lee, Chao-Yang; Lee, Yuh-Fang; Shr, Chia-Lin

    2011-07-01

    This study explored the relationship between music and speech by examining absolute pitch and lexical tone perception. Taiwanese-speaking musicians were asked to identify musical tones without a reference pitch and multispeaker Taiwanese level tones without acoustic cues typically present for speaker normalization. The results showed that a high percentage of the participants (65% with an exact match required and 81% with one-semitone errors allowed) possessed absolute pitch, as measured by the musical tone identification task. A negative correlation was found between occurrence of absolute pitch and age of onset of musical training, suggesting that the acquisition of absolute pitch resembles the acquisition of speech. The participants were able to identify multispeaker Taiwanese level tones with above-chance accuracy, even though the acoustic cues typically present for speaker normalization were not available in the stimuli. No correlations were found between the performance in musical tone identification and the performance in Taiwanese tone identification. Potential reasons for the lack of association between the two tasks are discussed. © 2011 Acoustical Society of America

  16. Training in Temporal Information Processing Ameliorates Phonetic Identification.

    PubMed

    Szymaszek, Aneta; Dacewicz, Anna; Urban, Paulina; Szelag, Elzbieta

    2018-01-01

    Many studies have revealed a link between temporal information processing (TIP) in the millisecond range and speech perception. Previous studies indicated a dysfunction in TIP accompanied by deficient phonemic hearing in children with specific language impairment (SLI). In this study we concentrate, in SLI, on phonetic identification, using the voice-onset-time (VOT) phenomenon, in which TIP is built in. VOT is crucial for speech perception, as stop consonants (like /t/ vs. /d/) may be distinguished by an acoustic difference in time between the onsets of the consonant (stop release burst) and the following vibration of the vocal folds (voicing). In healthy subjects, two categories (voiced and unvoiced) are determined using the VOT task. The present study aimed to verify whether children with SLI show a similar pattern of phonetic identification to their healthy peers and whether intervention based on TIP results in improved performance on the VOT task. Children aged 5 to 8 years (n = 47) were assigned to two groups: normal children without any language disability (NC, n = 20), and children with SLI (n = 27). In the latter group, participants were randomly assigned to two treatment subgroups, i.e., experimental temporal training (EG, n = 14) and control non-temporal training (CG, n = 13). The analyzed indicators of phonetic identification were: (1) the boundary location (α) determined as the VOT value corresponding to 50% voicing/unvoicing distinctions; (2) ranges of voiced/unvoiced categories; (3) the slope of the identification curve (β) reflecting identification correctness; (4) percent of voiced distinctions within the applied VOT spectrum. The results indicated similar α values and similar ranges of voiced/unvoiced categories between SLI and NC. However, β in SLI was significantly higher than that in NC. After the intervention, a significant improvement of β was observed only in EG. This group achieved a level of performance comparable to that observed in NC. The training-related improvement in CG was non-significant. Furthermore, only in EG did the post-test β values correlate with measures of TIP as well as with the phonemic hearing measures obtained in our previous studies. These findings provide further evidence that TIP is omnipresent in language communication and is reflected not only in phonemic hearing but also in phonetic identification.
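
    The two identification-curve indicators, the boundary α and the slope β, are commonly obtained by fitting a logistic function to the proportion of voiced responses across the VOT continuum. The Python sketch below shows one such fit; the logistic parameterization and the response proportions are illustrative, not the study's exact procedure.

    # Minimal sketch of estimating the identification-curve indicators: fit a logistic
    # function to the proportion of "voiced" responses across the VOT continuum, then
    # read off the boundary (alpha, the 50% point) and the slope (beta).
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(vot, alpha, beta):
        """Proportion of 'voiced' responses as a function of VOT (ms)."""
        return 1.0 / (1.0 + np.exp(beta * (vot - alpha)))

    # VOT steps (ms) and hypothetical proportions of "voiced" responses.
    vot_ms = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
    p_voiced = np.array([0.98, 0.95, 0.85, 0.55, 0.20, 0.06, 0.02])

    (alpha, beta), _ = curve_fit(logistic, vot_ms, p_voiced, p0=[30.0, 0.2])
    print(f"boundary alpha ~ {alpha:.1f} ms, slope beta ~ {beta:.2f}")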

  17. Improving Understanding of Emotional Speech Acoustic Content

    NASA Astrophysics Data System (ADS)

    Tinnemore, Anna

    Children with cochlear implants show deficits in identifying the emotional intent of utterances without facial or body language cues. A known limitation of cochlear implants is their inability to accurately portray the fundamental frequency contour of speech, which carries the majority of the information needed to identify emotional intent. Without reliable access to the fundamental frequency, other methods of identifying vocal emotion, if they can be identified, could be used to guide therapies for training children with cochlear implants to better identify vocal emotion. The current study analyzed recordings of adults speaking neutral sentences with a fixed set of emotions in a child-directed and an adult-directed manner. The goal was to identify acoustic cues that contribute to emotion identification that may be enhanced in child-directed speech but are also present in adult-directed speech. Results of this study showed that there were significant differences in the variation of the fundamental frequency, the variation of intensity, and the rate of speech among emotions and between intended audiences.
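
    The three cue families reported above (F0 variation, intensity variation, and speech rate) can be extracted from a recording with standard tools. The Python sketch below is not the study's analysis pipeline: librosa, the pYIN pitch tracker, and the onsets-per-second proxy for speaking rate are illustrative assumptions, and the file path is hypothetical.

    # Minimal sketch of extracting F0 variation, intensity variation, and a rough
    # speaking-rate proxy from one recording (illustrative, not the study's pipeline).
    import numpy as np
    import librosa

    def emotion_cues(path):
        y, sr = librosa.load(path, sr=None)
        # F0 contour via pYIN; keep only voiced frames before measuring variation.
        f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                     fmax=librosa.note_to_hz("C6"), sr=sr)
        f0_sd = float(np.nanstd(f0[voiced])) if np.any(voiced) else 0.0
        # Intensity variation from the frame-wise RMS envelope, in dB.
        rms_db = librosa.amplitude_to_db(librosa.feature.rms(y=y)[0], ref=np.max)
        intensity_sd = float(np.std(rms_db))
        # Very rough rate proxy: detected onsets per second (stands in for syllable rate).
        onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
        rate = len(onsets) / (len(y) / sr)
        return {"f0_sd_hz": f0_sd, "intensity_sd_db": intensity_sd, "onsets_per_s": rate}

    # print(emotion_cues("sad_sentence.wav"))  # hypothetical file path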

  18. The minor third communicates sadness in speech, mirroring its use in music.

    PubMed

    Curtis, Meagan E; Bharucha, Jamshed J

    2010-06-01

    There is a long history of attempts to explain why music is perceived as expressing emotion. The relationship between pitches serves as an important cue for conveying emotion in music. The musical interval referred to as the minor third is generally thought to convey sadness. We reveal that the minor third also occurs in the pitch contour of speech conveying sadness. Bisyllabic speech samples conveying four emotions were recorded by 9 actresses. Acoustic analyses revealed that the relationship between the 2 salient pitches of the sad speech samples tended to approximate a minor third. Participants rated the speech samples for perceived emotion, and the use of numerous acoustic parameters as cues for emotional identification was modeled using regression analysis. The minor third was the most reliable cue for identifying sadness. Additional participants rated musical intervals for emotion, and their ratings verified the historical association between the musical minor third and sadness. These findings support the theory that human vocal expressions and music share an acoustic code for communicating sadness.
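
    The interval measurement implied above reduces to a simple ratio computation: the distance between two pitches in semitones is 12 * log2(f_high / f_low), and a minor third is roughly 3 semitones. The Python sketch below uses made-up pitch values.

    # Minimal sketch of the interval measurement (hypothetical pitch values).
    import math

    def interval_semitones(f_low_hz, f_high_hz):
        return 12.0 * math.log2(f_high_hz / f_low_hz)

    # e.g., two salient pitches measured in a sad utterance (made-up values):
    iv = interval_semitones(196.0, 233.0)
    print(f"{iv:.2f} semitones; minor third? {abs(iv - 3.0) < 0.5}")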

  19. Selling health data: de-identification, privacy, and speech.

    PubMed

    Kaplan, Bonnie

    2015-07-01

    Two court cases that involve selling prescription data for pharmaceutical marketing affect biomedical informatics, patient and clinician privacy, and regulation. Sorrell v. IMS Health Inc. et al. in the United States and R v. Department of Health, Ex Parte Source Informatics Ltd. in the United Kingdom concern privacy and health data protection, data de-identification and reidentification, drug detailing (marketing), commercial benefit from the required disclosure of personal information, clinician privacy and the duty of confidentiality, beneficial and unsavory uses of health data, regulating health technologies, and considering data as speech. Individuals should, at the very least, be aware of how data about them are collected and used. Taking account of how those data are used is needed so societal norms and law evolve ethically as new technologies affect health data privacy and protection.

  20. Melodic Contour Identification and Music Perception by Cochlear Implant Users

    PubMed Central

    Galvin, John J.; Fu, Qian-Jie; Shannon, Robert V.

    2013-01-01

    Research and outcomes with cochlear implants (CIs) have revealed a dichotomy in the cues necessary for speech and music recognition. CI devices typically transmit 16–22 spectral channels, each modulated slowly in time. This coarse representation provides enough information to support speech understanding in quiet and rhythmic perception in music, but not enough to support speech understanding in noise or melody recognition. Melody recognition requires some capacity for complex pitch perception, which in turn depends strongly on access to spectral fine structure cues. Thus, temporal envelope cues are adequate for speech perception under optimal listening conditions, while spectral fine structure cues are needed for music perception. In this paper, we present recent experiments that directly measure CI users’ melodic pitch perception using a melodic contour identification (MCI) task. While normal-hearing (NH) listeners’ performance was consistently high across experiments, MCI performance was highly variable across CI users. CI users’ MCI performance was significantly affected by instrument timbre, as well as by the presence of a competing instrument. In general, CI users had great difficulty extracting melodic pitch from complex stimuli. However, musically experienced CI users often performed as well as NH listeners, and MCI training in less experienced subjects greatly improved performance. With fixed constraints on spectral resolution, such as those imposed by hearing loss or an auditory prosthesis, training and experience can provide considerable improvements in music perception and appreciation. PMID:19673835

  1. Factors influencing speech and language outcomes of children with early identified severe/profound hearing loss: Clinician-identified facilitators and barriers.

    PubMed

    Fulcher, Anne Nivelles; Purcell, Alison; Baker, Elise; Munro, Natalie

    2015-06-01

    Early identification of severe/profound childhood hearing loss (HL) gives these children access to hearing devices and early intervention to facilitate improved speech and language outcomes. Predicting which infants will go on to achieve such outcomes remains difficult. This study describes clinician-identified malleable and non-malleable factors that may influence speech and language outcomes for children with severe/profound HL. Semi-structured interviews were conducted with six experienced auditory verbal clinicians. A collective case study design was implemented. The interviews were transcribed and coded into themes using constant comparative analysis. Clinicians identified that, for children with severe/profound HL, early identification, early amplification and commencing auditory-verbal intervention under 6 months of age may facilitate child progress. Possible barriers were living in rural/remote areas, the clinicians' lack of experience and confidence in providing intervention for infants under 6 months of age, and belonging to a family with a culturally and linguistically diverse (CALD) background. The results indicate that multiple factors need to be considered by clinicians working with children with HL and their families to determine how each child functions within their own environment and personal contexts, consistent with the International Classification of Functioning, Disability and Health (ICF) framework. Such an approach is likely to empower clinicians to carefully balance potential barriers to, and facilitators of, optimal speech and language outcomes for all children with HL.

  2. Automatic speech recognition (ASR) based approach for speech therapy of aphasic patients: A review

    NASA Astrophysics Data System (ADS)

    Jamal, Norezmi; Shanta, Shahnoor; Mahmud, Farhanahani; Sha'abani, MNAH

    2017-09-01

    This paper reviews state-of-the-art automatic speech recognition (ASR)-based approaches for speech therapy of aphasic patients. Aphasia is a condition in which the affected person suffers from a speech and language disorder resulting from a stroke or brain injury. Since there is a growing body of evidence indicating the possibility of improving the symptoms at an early stage, ASR-based solutions are increasingly being researched for speech and language therapy. ASR is a technology that converts human speech into text by matching the input against the system's library. This is particularly useful in speech rehabilitation therapies, as it provides accurate, real-time evaluation of speech input from an individual with a speech disorder. ASR-based approaches for speech therapy recognize the speech input from the aphasic patient and provide real-time feedback on their mistakes. However, the accuracy of ASR is dependent on many factors, such as phoneme recognition, speech continuity, speaker and environmental differences, as well as the depth of our knowledge of human language understanding. Hence, the review examines recent developments in ASR technologies and their performance for individuals with speech and language disorders.

  3. Characterising receptive language processing in schizophrenia using word and sentence tasks.

    PubMed

    Tan, Eric J; Yelland, Gregory W; Rossell, Susan L

    2016-01-01

    Language dysfunction is proposed to relate to the speech disturbances in schizophrenia, which are more commonly referred to as formal thought disorder (FTD). Presently, language production deficits in schizophrenia are better characterised than language comprehension difficulties. This study thus aimed to examine three aspects of language comprehension in schizophrenia: (1) the role of lexical processing, (2) meaning attribution for words and sentences, and (3) the relationship between comprehension and production. Fifty-seven schizophrenia/schizoaffective disorder patients and 48 healthy controls completed a clinical assessment and three language tasks assessing word recognition, synonym identification, and sentence comprehension. Poorer patient performance was expected on the latter two tasks. Recognition of word form was not impaired in schizophrenia, indicating intact lexical processing. Whereas single-word synonym identification was not significantly impaired, there was a tendency to attribute word meanings based on phonological similarity with increasing FTD severity. Importantly, there was a significant sentence comprehension deficit for processing deep structure, which correlated with FTD severity. These findings established a receptive language deficit in schizophrenia at the syntactic level. There was also evidence for a relationship between some aspects of language comprehension and speech production/FTD. Apart from indicating language as another mechanism in FTD aetiology, the data also suggest that remediating language comprehension problems may be an avenue to pursue in alleviating FTD symptomatology.

  4. The effect of intensive speech rate and intonation therapy on intelligibility in Parkinson's disease.

    PubMed

    Martens, Heidi; Van Nuffelen, Gwen; Dekens, Tomas; Hernández-Díaz Huici, Maria; Kairuz Hernández-Díaz, Hector Arturo; De Letter, Miet; De Bodt, Marc

    2015-01-01

    Most studies on treatment of prosody in individuals with dysarthria due to Parkinson's disease are based on intensive treatment of loudness. The present study investigates the effect of intensive treatment of speech rate and intonation on the intelligibility of individuals with dysarthria due to Parkinson's disease. A one-group pretest-posttest design was used to compare intelligibility, speech rate, and intonation before and after treatment. Participants included eleven Dutch-speaking individuals with predominantly moderate dysarthria due to Parkinson's disease, who received five one-hour treatment sessions per week for three weeks. Treatment focused on lowering speech rate and magnifying the phrase-final intonation contrast between statements and questions. Intelligibility was perceptually assessed using a standardized sentence intelligibility test. Speech rate was automatically assessed during the sentence intelligibility test as well as during a passage reading task and a storytelling task. Intonation was perceptually assessed using a sentence reading task and a sentence repetition task, and also acoustically analyzed in terms of maximum fundamental frequency. After treatment, there was a significant improvement of sentence intelligibility (effect size .83), a significant increase of pause frequency during the passage reading task, a significant improvement of correct listener identification of statements and questions, and a significant increase of the maximum fundamental frequency in the final syllable of questions during both intonation tasks. The findings suggest that participants were more intelligible and more able to manipulate pause frequency and statement-question intonation after treatment. However, the relationship between the change in intelligibility on the one hand and the changes in speech rate and intonation on the other hand is not yet fully understood. Results are nuanced in light of the research design employed. The reader will be able to: (1) describe the effect of intensive speech rate and intonation treatment on intelligibility of speakers with dysarthria due to PD, (2) describe the effect of intensive speech rate treatment on rate manipulation by speakers with dysarthria due to PD, and (3) describe the effect of intensive intonation treatment on manipulation of the phrase-final intonation contrast between statements and questions by speakers with dysarthria due to PD. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Whole-exome sequencing supports genetic heterogeneity in childhood apraxia of speech.

    PubMed

    Worthey, Elizabeth A; Raca, Gordana; Laffin, Jennifer J; Wilk, Brandon M; Harris, Jeremy M; Jakielski, Kathy J; Dimmock, David P; Strand, Edythe A; Shriberg, Lawrence D

    2013-10-02

    Childhood apraxia of speech (CAS) is a rare, severe, persistent pediatric motor speech disorder with associated deficits in sensorimotor, cognitive, language, learning and affective processes. Among other neurogenetic origins, CAS is the disorder segregating with a mutation in FOXP2 in a widely studied, multigenerational London family. We report the first whole-exome sequencing (WES) findings from a cohort of 10 unrelated participants, ages 3 to 19 years, with well-characterized CAS. As part of a larger study of children and youth with motor speech sound disorders, 32 participants were classified as positive for CAS on the basis of a behavioral classification marker using auditory-perceptual and acoustic methods that quantify the competence, precision and stability of a speaker's speech, prosody and voice. WES of 10 randomly selected participants was completed using the Illumina Genome Analyzer IIx Sequencing System. Image analysis, base calling, demultiplexing, read mapping, and variant calling were performed using Illumina software. Software developed in-house was used for variant annotation, prioritization and interpretation to identify those variants likely to be deleterious to neurodevelopmental substrates of speech-language development. Among potentially deleterious variants, clinically reportable findings of interest occurred on a total of five chromosomes (Chr3, Chr6, Chr7, Chr9 and Chr17), which included six genes either strongly associated with CAS (FOXP1 and CNTNAP2) or associated with disorders with phenotypes overlapping CAS (ATP13A4, CNTNAP1, KIAA0319 and SETX). A total of 8 (80%) of the 10 participants had clinically reportable variants in one or two of the six genes, with variants in ATP13A4, KIAA0319 and CNTNAP2 being the most prevalent. Similar to the results reported in emerging WES studies of other complex neurodevelopmental disorders, our findings from this first WES study of CAS are interpreted as support for heterogeneous genetic origins of this pediatric motor speech disorder with multiple genes, pathways and complex interactions. We also submit that our findings illustrate the potential use of WES for both gene identification and case-by-case clinical diagnostics in pediatric motor speech disorders.

  6. Auditory and cognitive factors underlying individual differences in aided speech-understanding among older adults

    PubMed Central

    Humes, Larry E.; Kidd, Gary R.; Lentz, Jennifer J.

    2013-01-01

    This study was designed to address individual differences in aided speech understanding among a relatively large group of older adults. The group of older adults consisted of 98 adults (50 female and 48 male) ranging in age from 60 to 86 (mean = 69.2). Hearing loss was typical for this age group and about 90% had not worn hearing aids. All subjects completed a battery of tests, including cognitive (6 measures), psychophysical (17 measures), and speech-understanding (9 measures), as well as the Speech, Spatial, and Qualities of Hearing (SSQ) self-report scale. Most of the speech-understanding measures made use of competing speech and the non-speech psychophysical measures were designed to tap phenomena thought to be relevant for the perception of speech in competing speech (e.g., stream segregation, modulation-detection interference). All measures of speech understanding were administered with spectral shaping applied to the speech stimuli to fully restore audibility through at least 4000 Hz. The measures used were demonstrated to be reliable in older adults and, when compared to a reference group of 28 young normal-hearing adults, age-group differences were observed on many of the measures. Principal-components factor analysis was applied successfully to reduce the number of independent and dependent (speech understanding) measures for a multiple-regression analysis. Doing so yielded one global cognitive-processing factor and five non-speech psychoacoustic factors (hearing loss, dichotic signal detection, multi-burst masking, stream segregation, and modulation detection) as potential predictors. To this set of six potential predictor variables were added subject age, Environmental Sound Identification (ESI), and performance on the text-recognition-threshold (TRT) task (a visual analog of interrupted speech recognition). These variables were used to successfully predict one global aided speech-understanding factor, accounting for about 60% of the variance. PMID:24098273
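
    A hedged sketch of the analysis pipeline described above: many correlated predictor measures are reduced with a principal-components step, and the resulting factor scores are entered into a multiple regression on a speech-understanding score. The data, dimensions, and number of retained components below are illustrative placeholders, not the study's measures.

```python
# Illustrative sketch: reduce many correlated predictor measures with PCA,
# then use the component scores in a multiple regression on a speech score.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_subjects, n_measures = 98, 23                         # placeholder sizes
X = rng.normal(size=(n_subjects, n_measures))           # stand-in cognitive/psychoacoustic measures
y = X[:, :3].mean(axis=1) + 0.5 * rng.normal(size=n_subjects)  # synthetic speech score

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=6)                               # e.g., six retained factors
factors = pca.fit_transform(X_std)

model = LinearRegression().fit(factors, y)
print("variance explained by regression (R^2):", round(model.score(factors, y), 2))
```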

  7. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus)

    PubMed Central

    Flaherty, Mary; Dent, Micheal L.; Sawusch, James R.

    2017-01-01

    The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with “d” or “t” and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal. PMID:28562597

  8. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus).

    PubMed

    Flaherty, Mary; Dent, Micheal L; Sawusch, James R

    2017-01-01

    The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with "d" or "t" and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal.

  9. Speaker gender identification based on majority vote classifiers

    NASA Astrophysics Data System (ADS)

    Mezghani, Eya; Charfeddine, Maha; Nicolas, Henri; Ben Amar, Chokri

    2017-03-01

    Speaker gender identification is considered among the most important tools in several multimedia applications, namely automatic speech recognition, interactive voice response systems, and audio browsing systems. The performance of gender identification systems is closely linked to the selected feature set and the employed classification model. Typical techniques are based on selecting the best-performing classification method or searching for the optimum tuning of one classifier's parameters through experimentation. In this paper, we consider a relevant and rich set of features involving pitch and MFCCs as well as other temporal and frequency-domain descriptors. Five classification models, including decision tree, discriminant analysis, naïve Bayes, support vector machine, and k-nearest neighbor, were evaluated. The three best-performing classifiers among the five then contribute through majority voting over their scores. Experiments were performed on three datasets spoken in three languages, English, German, and Arabic, in order to validate the language independence of the proposed scheme. Results confirm that the presented system reaches a satisfying accuracy rate and promising classification performance thanks to the discriminating abilities and diversity of the used features combined with mid-level statistics.
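
    A minimal sketch of the selection-plus-voting scheme described above, using scikit-learn and synthetic placeholder features (the actual system uses pitch, MFCC, and other descriptors): the five candidate classifiers are scored by cross-validation, and the three best are combined by hard majority voting.

```python
# Illustrative sketch of the majority-vote scheme: score five classifiers by
# cross-validation, then combine the three best with hard (majority) voting.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier

# Placeholder features standing in for pitch, MFCC, and other descriptors.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

candidates = {
    "tree": DecisionTreeClassifier(random_state=0),
    "lda": LinearDiscriminantAnalysis(),
    "nb": GaussianNB(),
    "svm": SVC(),
    "knn": KNeighborsClassifier(),
}
scores = {name: cross_val_score(clf, X, y, cv=5).mean() for name, clf in candidates.items()}
top3 = sorted(scores, key=scores.get, reverse=True)[:3]

vote = VotingClassifier(estimators=[(n, candidates[n]) for n in top3], voting="hard")
print("selected:", top3,
      "voting accuracy:", cross_val_score(vote, X, y, cv=5).mean().round(3))
```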

  10. Lexical Effects on Children's Speech Processing: Individual Differences Reflected in the Autism-Spectrum Quotient (AQ)

    ERIC Educational Resources Information Center

    Ota, Mitsuhiko; Stewart, Mary E.; Petrou, Alexandra M.; Dickie, Catherine

    2015-01-01

    Purpose: This study was undertaken to examine whether children exhibit the same relationship that adults show between lexical influence on phoneme identification and individual variation on the Autism-Spectrum Quotient (AQ). Method: Data from 62 4- to 7-year-olds with no diagnosis of autism were analyzed. The main task involved identification of…

  11. Contributions of Fundamental Frequency, Formant Spacing, and Glottal Waveform to Talker Identification. Research on Speech Perception. Technical Report No. 5.

    ERIC Educational Resources Information Center

    Carrell, Thomas D.

    This study investigated the contributions of fundamental frequency, formant spacing, and glottal waveform to talker identification. The first two experiments focused on the effect of glottal waveform in the perception of talker identity. Subjects in the first experiment, 30 undergraduate students enrolled in an introductory psychology course,…

  12. Bilingualism affects audiovisual phoneme identification

    PubMed Central

    Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia

    2014-01-01

    We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience—i.e., the exposure to a double phonological code during childhood—affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically “deaf” and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation instead, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded quicker than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech. PMID:25374551

  13. Determining the energetic and informational components of speech-on-speech masking

    PubMed Central

    Kidd, Gerald; Mason, Christine R.; Swaminathan, Jayaganesh; Roverud, Elin; Clayton, Kameron K.; Best, Virginia

    2016-01-01

    Identification of target speech was studied under masked conditions consisting of two or four independent speech maskers. In the reference conditions, the maskers were colocated with the target, the masker talkers were the same sex as the target, and the masker speech was intelligible. The comparison conditions, intended to provide release from masking, included different-sex target and masker talkers, time-reversal of the masker speech, and spatial separation of the maskers from the target. Significant release from masking was found for all comparison conditions. To determine whether these reductions in masking could be attributed to differences in energetic masking, ideal time-frequency segregation (ITFS) processing was applied so that the time-frequency units where the masker energy dominated the target energy were removed. The remaining target-dominated “glimpses” were reassembled as the stimulus. Speech reception thresholds measured using these resynthesized ITFS-processed stimuli were the same for the reference and comparison conditions supporting the conclusion that the amount of energetic masking across conditions was the same. These results indicated that the large release from masking found under all comparison conditions was due primarily to a reduction in informational masking. Furthermore, the large individual differences observed generally were correlated across the three masking release conditions. PMID:27475139
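
    The following is a simplified sketch of the ideal time-frequency segregation idea described above: given separate access to the target and masker waveforms, keep only the STFT units where the target dominates (the "glimpses") and resynthesize. The 0 dB local criterion, window length, and the sinusoid/noise stand-ins are illustrative assumptions, not the study's parameters.

```python
# Simplified sketch of ideal time-frequency segregation (ITFS): keep only the
# STFT units where the target dominates the masker, then resynthesize.
# Assumes separate access to the clean target and masker waveforms.
import numpy as np
from scipy.signal import stft, istft

def itfs_glimpses(target, masker, fs, lc_db=0.0, nperseg=512):
    _, _, T = stft(target, fs=fs, nperseg=nperseg)
    _, _, M = stft(masker, fs=fs, nperseg=nperseg)
    local_snr_db = 20 * np.log10((np.abs(T) + 1e-12) / (np.abs(M) + 1e-12))
    mask = (local_snr_db > lc_db).astype(float)          # binary "glimpse" mask
    mixture_stft = T + M                                  # STFT is linear
    _, glimpsed = istft(mixture_stft * mask, fs=fs, nperseg=nperseg)
    return glimpsed

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    target = np.sin(2 * np.pi * 440 * t)                          # stand-in for target speech
    masker = 0.5 * np.random.default_rng(0).normal(size=fs)       # stand-in for masker speech
    out = itfs_glimpses(target, masker, fs)
    print(out.shape)
```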

  14. Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users

    PubMed Central

    Newman, Rochelle S.; Goupell, Matthew J.

    2017-01-01

    Purpose Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes. Method Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates. Twenty-three CI and 29 NH participants performed a phoneme identification task. NH participants heard the same unprocessed stimuli as the CI participants or stimuli degraded by a sine vocoder, simulating aspects of CI processing. Results CI participants showed larger rate normalization effects (6.6 ms) than the NH participants (3.7 ms) and had shallower (less reliable) category boundary slopes. NH participants showed similarly shallow slopes when presented acoustically degraded vocoded signals, but an equal or smaller rate effect in response to reductions in available spectral and temporal information. Conclusion CI participants can rate normalize, despite their degraded speech input, and show a larger rate effect compared to NH participants. CI participants may particularly rely on rate normalization to better maintain perceptual constancy of the speech signal. PMID:28395319

  15. Early electrophysiological markers of atypical language processing in prematurely born infants.

    PubMed

    Paquette, Natacha; Vannasing, Phetsamone; Tremblay, Julie; Lefebvre, Francine; Roy, Marie-Sylvie; McKerral, Michelle; Lepore, Franco; Lassonde, Maryse; Gallagher, Anne

    2015-12-01

    Because nervous system development may be affected by prematurity, many prematurely born children present language or cognitive disorders at school age. The goal of this study is to investigate whether these impairments can be identified early in life using electrophysiological auditory event-related potentials (AERPs) and mismatch negativity (MMN). Brain responses to speech and non-speech stimuli were assessed in prematurely born children to identify early electrophysiological markers of language and cognitive impairments. Participants were 74 children (41 full-term, 33 preterm) aged 3, 12, and 36 months. Pre-attentional auditory responses (MMN and AERPs) were assessed using an oddball paradigm, with speech and non-speech stimuli presented in counterbalanced order between participants. Language and cognitive development were assessed using the Bayley Scale of Infant Development, Third Edition (BSID-III). Results show that preterms as young as 3 months old had delayed MMN response to speech stimuli compared to full-terms. A significant negative correlation was also found between MMN latency to speech sounds and the BSID-III expressive language subscale. However, no significant differences between full-terms and preterms were found for the MMN to non-speech stimuli, suggesting preserved pre-attentional auditory discrimination abilities in these children. Identification of early electrophysiological markers for delayed language development could facilitate timely interventions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Common cues to emotion in the dynamic facial expressions of speech and song.

    PubMed

    Livingstone, Steven R; Thompson, William F; Wanderley, Marcelo M; Palmer, Caroline

    2015-01-01

    Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech-song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech-song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly, yet were identified accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, whereas the two were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet reveal differences in perception and acoustic-motor production.

  17. New Perspectives on Assessing Amplification Effects

    PubMed Central

    Souza, Pamela E.; Tremblay, Kelly L.

    2006-01-01

    Clinicians have long been aware of the range of performance variability with hearing aids. Despite improvements in technology, there remain many instances of well-selected and appropriately fitted hearing aids whereby the user reports minimal improvement in speech understanding. This review presents a multistage framework for understanding how a hearing aid affects performance. Six stages are considered: (1) acoustic content of the signal, (2) modification of the signal by the hearing aid, (3) interaction between sound at the output of the hearing aid and the listener's ear, (4) integrity of the auditory system, (5) coding of available acoustic cues by the listener's auditory system, and (6) correct identification of the speech sound. Within this framework, this review describes methodology and research on 2 new assessment techniques: acoustic analysis of speech measured at the output of the hearing aid and auditory evoked potentials recorded while the listener wears hearing aids. Acoustic analysis topics include the relationship between conventional probe microphone tests and probe microphone measurements using speech, appropriate procedures for such tests, and assessment of signal-processing effects on speech acoustics and recognition. Auditory evoked potential topics include an overview of physiologic measures of speech processing and the effect of hearing loss and hearing aids on cortical auditory evoked potential measurements in response to speech. Finally, the clinical utility of these procedures is discussed. PMID:16959734

  18. The Bilingual Language Interaction Network for Comprehension of Speech*

    PubMed Central

    Marian, Viorica

    2013-01-01

    During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension. PMID:24363602
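
    Since BLINCS builds its processing levels from self-organizing maps, a minimal NumPy sketch of SOM training is given below for orientation. It is not the authors' implementation: the grid size, learning-rate and neighbourhood schedules, and the stand-in feature vectors are arbitrary illustrative choices.

```python
# Minimal self-organizing map (SOM) sketch, illustrating the kind of map used
# at each processing level (this is not the BLINCS implementation).
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    weights = rng.normal(size=(n_units, data.shape[1]))
    # Grid coordinates of each unit, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], dtype=float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - step / n_steps)                      # decaying learning rate
            sigma = sigma0 * (1 - step / n_steps) + 1e-3         # shrinking neighbourhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))    # best-matching unit
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist2 / (2 * sigma ** 2))                # neighbourhood kernel
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

if __name__ == "__main__":
    features = np.random.default_rng(1).normal(size=(200, 12))   # stand-in phonetic features
    print(train_som(features).shape)
```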

  19. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    PubMed

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable with respect to the noise-constrained estimate. Because the noise-constrained estimate performs robustly against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm produces good performance with fast computation and noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
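
    The following is a minimal sketch of the Kalman-filtering step for a speech signal modeled as an autoregressive (AR) process, in plain NumPy. It is illustrative only: the AR coefficients here are fit by ordinary least squares on a clean stand-in signal, whereas the paper estimates them with the proposed recurrent network under a noise-constrained criterion, and the noise variances q and r are placeholder values.

```python
# Sketch of Kalman-filter speech enhancement with an AR signal model.
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) coefficients a such that x[t] ~ sum_k a[k] * x[t-k]."""
    rows = np.array([x[t - p:t][::-1] for t in range(p, len(x))])
    targets = x[p:]
    a, *_ = np.linalg.lstsq(rows, targets, rcond=None)
    return a

def kalman_ar_enhance(y, a, q, r):
    """Filter noisy observations y with state x_t = [s_t, ..., s_{t-p+1}]."""
    p = len(a)
    F = np.zeros((p, p)); F[0, :] = a; F[1:, :-1] = np.eye(p - 1)   # companion form
    H = np.zeros((1, p)); H[0, 0] = 1.0                             # observe current sample
    Q = np.zeros((p, p)); Q[0, 0] = q                               # process noise
    x = np.zeros(p); P = np.eye(p)
    out = np.empty(len(y))
    for t, obs in enumerate(y):
        x = F @ x; P = F @ P @ F.T + Q                              # predict
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                                           # Kalman gain
        x = x + (K * (obs - H @ x)).ravel()                         # update
        P = (np.eye(p) - K @ H) @ P
        out[t] = x[0]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 800))          # stand-in for speech
    noisy = clean + 0.3 * rng.normal(size=clean.size)
    a = fit_ar(clean, p=4)                                          # AR model of the signal
    enhanced = kalman_ar_enhance(noisy, a, q=0.01, r=0.09)
    print("noisy MSE:", round(float(np.mean((noisy - clean) ** 2)), 4),
          "enhanced MSE:", round(float(np.mean((enhanced - clean) ** 2)), 4))
```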

  20. Tracing the Trajectory of Sensory Plasticity across Different Stages of Speech Learning in Adulthood.

    PubMed

    Reetzke, Rachel; Xie, Zilong; Llanos, Fernando; Chandrasekaran, Bharath

    2018-05-07

    Although challenging, adults can learn non-native phonetic contrasts with extensive training [1, 2], indicative of perceptual learning beyond an early sensitivity period [3, 4]. Training can alter low-level sensory encoding of newly acquired speech sound patterns [5]; however, the time-course, behavioral relevance, and long-term retention of such sensory plasticity are unclear. Some theories argue that sensory plasticity underlying signal enhancement is immediate and critical to perceptual learning [6, 7]. Others, like the reverse hierarchy theory (RHT), posit a slower time-course for sensory plasticity [8]. RHT proposes that higher-level categorical representations guide immediate, novice learning, while lower-level sensory changes do not emerge until expert stages of learning [9]. We trained 20 English-speaking adults to categorize a non-native phonetic contrast (Mandarin lexical tones) using a criterion-dependent sound-to-category training paradigm. Sensory and perceptual indices were assayed across operationally defined learning phases (novice, experienced, over-trained, and 8-week retention) by measuring the frequency-following response, a neurophonic potential that reflects fidelity of sensory encoding, and the perceptual identification of a tone continuum. Our results demonstrate that while robust changes in sensory encoding and perceptual identification of Mandarin tones emerged with training and were retained, such changes followed different timescales. Sensory changes were evidenced and related to behavioral performance only when participants were over-trained. In contrast, changes in perceptual identification reflecting improvement in categorical percept emerged relatively earlier. Individual differences in perceptual identification, and not sensory encoding, related to faster learning. Our findings support the RHT: sensory plasticity accompanies, rather than drives, expert levels of non-native speech learning. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Identification of Pure-Tone Audiologic Thresholds for Pediatric Cochlear Implant Candidacy: A Systematic Review.

    PubMed

    de Kleijn, Jasper L; van Kalmthout, Ludwike W M; van der Vossen, Martijn J B; Vonck, Bernard M D; Topsakal, Vedat; Bruijnzeel, Hanneke

    2018-05-24

    Although current guidelines recommend cochlear implantation only for children with profound hearing impairment (HI) (>90 decibel [dB] hearing level [HL]), studies show that children with severe hearing impairment (>70-90 dB HL) could also benefit from cochlear implantation. To perform a systematic review to identify audiologic thresholds (in dB HL) that could serve as an audiologic candidacy criterion for pediatric cochlear implantation using 4 domains of speech and language development as independent outcome measures (speech production, speech perception, receptive language, and auditory performance). PubMed and Embase databases were searched up to June 28, 2017, to identify studies comparing speech and language development between children who were profoundly deaf using cochlear implants and children with severe hearing loss using hearing aids, because no studies are available directly comparing children with severe HI in both groups. If cochlear implant users with profound HI score better on speech and language tests than those with severe HI who use hearing aids, this outcome could support adjusting cochlear implantation candidacy criteria to lower audiologic thresholds. Literature search, screening, and article selection were performed using a predefined strategy. Article screening was executed independently by 4 authors in 2 pairs; consensus on article inclusion was reached by discussion between these 4 authors. This study is reported according to the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) statement. Title and abstract screening of 2822 articles resulted in selection of 130 articles for full-text review. Twenty-one studies were selected for critical appraisal, resulting in selection of 10 articles for data extraction. Two studies formulated audiologic thresholds (in dB HLs) at which children could qualify for cochlear implantation: (1) at 4-frequency pure-tone average (PTA) thresholds of 80 dB HL or greater based on speech perception and auditory performance subtests and (2) at PTA thresholds of 88 and 96 dB HL based on a speech perception subtest. In 8 of the 18 outcome measures, children with profound HI using cochlear implants performed similarly to children with severe HI using hearing aids. Better performance of cochlear implant users was shown with a picture-naming test and a speech perception in noise test. Owing to large heterogeneity in study population and selected tests, it was not possible to conduct a meta-analysis. Studies indicate that lower audiologic thresholds (≥80 dB HL) than are advised in current national and manufacturer guidelines would be appropriate as audiologic candidacy criteria for pediatric cochlear implantation.

  2. Robust Recognition of Loud and Lombard speech in the Fighter Cockpit Environment

    DTIC Science & Technology

    1988-08-01

    ...the latter as inter-speaker variability. According to Zue [Z85], inter-speaker variabilities can be attributed to sociolinguistic background, dialect... Cited references include: Journal of the Acoustical Society of America, Vol. 50, 1971; [At74] B. S. Atal, "Linear prediction for speaker identification," Journal of the Acoustical Society of America, Vol. 55, 1974; and [B77] B. Beek, E. P. Neuberg, and D. C. Hodge, "An Assessment of the Technology of Automatic Speech Recognition for..."

  3. Evaluation of Brain Activity Related to Speech and Handwriting Using NIRS

    NASA Astrophysics Data System (ADS)

    Asano, Hirotoshi; Suzuki, Hiroaki; Ide, Hideto

    This study examines, using near-infrared spectroscopy (NIRS), the difference in brain activity between conveying information by speech and by writing it down. If the intended means of information transmission can be evaluated and detected from brain activity, this could assist communication between nurses and people requiring long-term care. The results suggest that it is possible to distinguish "Speech" from "Writing" on the basis of the measured brain activity, and this possibility is reported here.

  4. Voice input/output capabilities at Perception Technology Corporation

    NASA Technical Reports Server (NTRS)

    Ferber, Leon A.

    1977-01-01

    Condensed resumes of key company personnel at the Perception Technology Corporation are presented. The staff possesses expertise in speech recognition, speech synthesis, speaker authentication, and language identification. Hardware and software engineers' capabilities are included.

  5. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
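
    The paper's contribution is a dynamic-programming search for event locations; the generic optimization idea can be sketched as below, where a feature sequence is split into K contiguous segments minimizing within-segment squared deviation. This shows only the DP skeleton under a simple variance cost, not the SBEL/TD spectral cost or the MELP integration described in the abstract.

```python
# Illustrative dynamic-programming event localization: split a feature sequence
# into k contiguous segments so that total within-segment squared deviation is minimal.
import numpy as np

def segment_cost(cumsum, cumsq, i, j):
    """Sum of squared deviations from the mean for frames i..j-1 (via prefix sums)."""
    n = j - i
    s = cumsum[j] - cumsum[i]
    sq = cumsq[j] - cumsq[i]
    return float((sq - s ** 2 / n).sum())

def dp_segment(features, k):
    """Return optimal segment start indices for k segments."""
    n = len(features)
    cumsum = np.vstack([np.zeros(features.shape[1]), np.cumsum(features, axis=0)])
    cumsq = np.vstack([np.zeros(features.shape[1]), np.cumsum(features ** 2, axis=0)])
    best = np.full((k + 1, n + 1), np.inf)
    back = np.zeros((k + 1, n + 1), dtype=int)
    best[0, 0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                cost = best[seg - 1, i] + segment_cost(cumsum, cumsq, i, j)
                if cost < best[seg, j]:
                    best[seg, j], back[seg, j] = cost, i
    # Trace back the segment boundaries.
    bounds, j = [], n
    for seg in range(k, 0, -1):
        j = back[seg, j]
        bounds.append(j)
    return sorted(bounds)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "spectral parameter" track with three stationary events.
    feats = np.concatenate([rng.normal(m, 0.1, size=(30, 4)) for m in (0.0, 1.0, 2.0)])
    print("estimated event starts:", dp_segment(feats, 3))
```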

  6. Processing melodic contour and speech intonation in congenital amusics with Mandarin Chinese.

    PubMed

    Jiang, Cunmei; Hamm, Jeff P; Lim, Vanessa K; Kirk, Ian J; Yang, Yufang

    2010-07-01

    Congenital amusia is a disorder in the perception and production of musical pitch. It has been suggested that early exposure to a tonal language may compensate for the pitch disorder (Peretz, 2008). If so, it is reasonable to expect that there would be different characterizations of pitch perception in music and speech in congenital amusics who speak a tonal language, such as Mandarin. In this study, a group of 11 adults with amusia whose first language was Mandarin were tested with melodic contour and speech intonation discrimination and identification tasks. The participants with amusia were impaired in discriminating and identifying melodic contour. These abnormalities were also detected in identifying both speech and non-linguistic analogue derived patterns for the Mandarin intonation tasks. In addition, there was an overall trend for the participants with amusia to show deficits with respect to controls in the intonation discrimination tasks for both speech and non-linguistic analogues. These findings suggest that the amusics' melodic pitch deficits may extend to the perception of speech, and could potentially result in some language deficits in those who speak a tonal language. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  7. Cingulo-opercular activity affects incidental memory encoding for speech in noise.

    PubMed

    Vaden, Kenneth I; Teubner-Rhodes, Susan; Ahlstrom, Jayne B; Dubno, Judy R; Eckert, Mark A

    2017-08-15

    Correctly understood speech in difficult listening conditions is often difficult to remember. A long-standing hypothesis for this observation is that the engagement of cognitive resources to aid speech understanding can limit resources available for memory encoding. This hypothesis is consistent with evidence that speech presented in difficult conditions typically elicits greater activity throughout cingulo-opercular regions of frontal cortex that are proposed to optimize task performance through adaptive control of behavior and tonic attention. However, successful memory encoding of items for delayed recognition memory tasks is consistently associated with increased cingulo-opercular activity when perceptual difficulty is minimized. The current study used a delayed recognition memory task to test competing predictions that memory encoding for words is enhanced or limited by the engagement of cingulo-opercular activity during challenging listening conditions. An fMRI experiment was conducted with twenty healthy adult participants who performed a word identification in noise task that was immediately followed by a delayed recognition memory task. Consistent with previous findings, word identification trials in the poorer signal-to-noise ratio condition were associated with increased cingulo-opercular activity and poorer recognition memory scores on average. However, cingulo-opercular activity decreased for correctly identified words in noise that were not recognized in the delayed memory test. These results suggest that memory encoding in difficult listening conditions is poorer when elevated cingulo-opercular activity is not sustained. Although increased attention to speech when presented in difficult conditions may detract from more active forms of memory maintenance (e.g., sub-vocal rehearsal), we conclude that task performance monitoring and/or elevated tonic attention supports incidental memory encoding in challenging listening conditions. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.

    PubMed

    Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  9. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training

    PubMed Central

    Bernstein, Lynne E.; Auer, Edward T.; Eberhardt, Silvio P.; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called “reverse hierarchy theory” of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning. PMID:23515520

  10. Temporal event structure and timing in schizophrenia: preserved binding in a longer "now".

    PubMed

    Martin, Brice; Giersch, Anne; Huron, Caroline; van Wassenhove, Virginie

    2013-01-01

    Patients with schizophrenia experience a loss of temporal continuity or subjective fragmentation along the temporal dimension. Here, we develop the hypothesis that impaired temporal awareness results from a perturbed structuring of events in time, i.e., in canonical neural dynamics. To address this, 26 patients and their matched controls took part in two psychophysical studies using desynchronized audiovisual speech. Two tasks were used and compared: first, an identification task testing for multisensory binding impairments in which participants reported what they heard while looking at a speaker's face; in a second task, we tested the perceived simultaneity of the same audiovisual speech stimuli. In both tasks, we used McGurk fusion and combination, which are classic, ecologically valid multisensory illusions. First, and contrary to previous reports, our results show that patients do not significantly differ from controls in their rate of illusory reports. Second, the illusory reports of patients in the identification task were more sensitive to audiovisual speech desynchronies than those of controls. Third, and surprisingly, patients considered audiovisual speech to be synchronized for longer delays than controls. As such, the temporal tolerance profile observed in a temporal judgement task was less of a predictor for sensory binding in schizophrenia than for that obtained in controls. We interpret our results as an impairment of temporal event structuring in schizophrenia which does not specifically affect sensory binding operations but rather the explicit access to timing information associated here with audiovisual speech processing. Our findings are discussed in the context of current neurophysiological frameworks for the binding and the structuring of sensory events in time. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Speech Clarity Index (Ψ): A Distance-Based Speech Quality Indicator and Recognition Rate Prediction for Dysarthric Speakers with Cerebral Palsy

    NASA Astrophysics Data System (ADS)

    Kayasith, Prakasith; Theeramunkong, Thanaruk

    Measuring the severity of dysarthria by manually evaluating a speaker's speech with available standard assessment methods based on human perception is a tedious and subjective task. This paper presents an automated approach to assessing the speech quality of a dysarthric speaker with cerebral palsy. Considering two complementary factors, speech consistency and speech distinction, a speech quality indicator called the speech clarity index (Ψ) is proposed as a measure of the speaker's ability to produce a consistent speech signal for a given word and distinguishable speech signals for different words. As an application, it can be used to assess speech quality and forecast the speech recognition rate for an individual dysarthric speaker before the exhaustive implementation of an automatic speech recognition system for that speaker. The effectiveness of Ψ as a speech recognition rate predictor is evaluated by rank-order inconsistency, correlation coefficient, and root-mean-square difference. The evaluations were done by comparing its predicted recognition rates with those predicted by the standard methods, the articulatory and intelligibility tests, based on two recognition systems (HMM and ANN). The results show that Ψ is a promising indicator for predicting the recognition rate of dysarthric speech. All experiments were done on a speech corpus composed of speech data from eight normal speakers and eight dysarthric speakers.
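
    The abstract does not give the exact formula for Ψ, so the sketch below is only an illustrative consistency/distinction index in the same spirit: repeated productions of the same word should cluster tightly (consistency) while different words should lie far apart (distinction). The feature vectors, words, and ratio form are placeholder assumptions.

```python
# Illustrative consistency/distinction index in the spirit of the speech
# clarity index (a simple stand-in over per-utterance feature vectors).
import numpy as np

def clarity_index(features_by_word):
    """features_by_word: dict word -> array of shape (n_repetitions, n_features)."""
    within = []                      # spread of repeated productions (inconsistency)
    centroids = {}
    for word, feats in features_by_word.items():
        c = feats.mean(axis=0)
        centroids[word] = c
        within.append(np.linalg.norm(feats - c, axis=1).mean())
    words = list(centroids)
    between = [np.linalg.norm(centroids[a] - centroids[b])     # word separability
               for i, a in enumerate(words) for b in words[i + 1:]]
    # Larger distinction and smaller inconsistency give a higher index.
    return np.mean(between) / (np.mean(within) + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = {w: rng.normal(loc=k, scale=0.5, size=(10, 13))     # fake MFCC-like vectors
            for k, w in enumerate(["ba", "da", "ga"])}
    print("clarity-like index:", round(float(clarity_index(data)), 2))
```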

  12. The Voice of Emotion: Acoustic Properties of Six Emotional Expressions.

    NASA Astrophysics Data System (ADS)

    Baldwin, Carol May

    Studies in the perceptual identification of emotional states suggested that listeners seemed to depend on a limited set of vocal cues to distinguish among emotions. Linguistics and speech science literatures have indicated that this small set of cues included intensity, fundamental frequency, and temporal properties such as speech rate and duration. Little research has been done, however, to validate these cues in the production of emotional speech, or to determine if specific dimensions of each cue are associated with the production of a particular emotion for a variety of speakers. This study addressed deficiencies in understanding of the acoustical properties of duration and intensity as components of emotional speech by means of speech science instrumentation. Acoustic data were conveyed in a brief sentence spoken by twelve English speaking adult male and female subjects, half with dramatic training, and half without such training. Simulated expressions included: happiness, surprise, sadness, fear, anger, and disgust. The study demonstrated that the acoustic property of mean intensity served as an important cue for a vocal taxonomy. Overall duration was rejected as an element for a general taxonomy due to interactions involving gender and role. Findings suggested a gender-related taxonomy, however, based on differences in the ways in which men and women use the duration cue in their emotional expressions. Results also indicated that speaker training may influence greater use of the duration cue in expressions of emotion, particularly for male actors. Discussion of these results provided linkages to (1) practical management of emotional interactions in clinical and interpersonal environments, (2) implications for differences in the ways in which males and females may be socialized to express emotions, and (3) guidelines for future perceptual studies of emotional sensitivity.

  13. Intonation contrast in Cantonese speakers with hypokinetic dysarthria associated with Parkinson's disease.

    PubMed

    Ma, Joan K-Y; Whitehill, Tara L; So, Susanne Y-S

    2010-08-01

    Speech produced by individuals with hypokinetic dysarthria associated with Parkinson's disease (PD) is characterized by a number of features including impaired speech prosody. The purpose of this study was to investigate intonation contrasts produced by this group of speakers. Speech materials with a question-statement contrast were collected from 14 Cantonese speakers with PD. Twenty listeners then classified the productions as either questions or statements. Acoustic analyses of F0, duration, and intensity were conducted to determine which acoustic cues distinguished the production of questions from statements, and which cues appeared to be exploited by listeners in identifying intonational contrasts. The results show that listeners identified statements with a high degree of accuracy, but the accuracy of question identification ranged from 0.56% to 96% across the 14 speakers. The speakers with PD used similar acoustic cues as nondysarthric Cantonese speakers to mark the question-statement contrast, although the contrasts were not observed in all speakers. Listeners mainly used F0 cues at the final syllable for intonation identification. These data contribute to the researchers' understanding of intonation marking in speakers with PD, with specific application to the production and perception of intonation in a lexical tone language.

  14. Identification and discrimination of bilingual talkers across languages

    PubMed Central

    Winters, Stephen J.; Levi, Susannah V.; Pisoni, David B.

    2008-01-01

    This study investigated the extent to which language familiarity affects the perception of the indexical properties of speech by testing listeners’ identification and discrimination of bilingual talkers across two different languages. In one experiment, listeners were trained to identify bilingual talkers speaking in only one language and were then tested on their ability to identify the same talkers speaking in another language. In the second experiment, listeners discriminated between bilingual talkers across languages in an AX discrimination paradigm. The results of these experiments indicate that there is sufficient language-independent indexical information in speech for listeners to generalize knowledge of talkers’ voices across languages and to successfully discriminate between bilingual talkers regardless of the language they are speaking. However, the results of these studies also revealed that listeners do not solely rely on language-independent information when performing these tasks. Listeners use language-dependent indexical cues to identify talkers who are speaking a familiar language. Moreover, the tendency to perceive two talkers as the “same” or “different” depends on whether the talkers are speaking in the same language. The combined results of these experiments thus suggest that indexical processing relies on both language-dependent and language-independent information in the speech signal. PMID:18537401

  15. Temporal and spatio-temporal vibrotactile displays for voice fundamental frequency: an initial evaluation of a new vibrotactile speech perception aid with normal-hearing and hearing-impaired individuals.

    PubMed

    Auer, E T; Bernstein, L E; Coulter, D C

    1998-10-01

    Four experiments were performed to evaluate a new wearable vibrotactile speech perception aid that extracts fundamental frequency (F0) and displays the extracted F0 as a single-channel temporal or an eight-channel spatio-temporal stimulus. Specifically, we investigated the perception of intonation (i.e., question versus statement) and emphatic stress (i.e., stress on the first, second, or third word) under Visual-Alone (VA), Visual-Tactile (VT), and Tactile-Alone (TA) conditions and compared performance using the temporal and spatio-temporal vibrotactile display. Subjects were adults with normal hearing in experiments I-III and adults with severe to profound hearing impairments in experiment IV. Both versions of the vibrotactile speech perception aid successfully conveyed intonation. Vibrotactile stress information was successfully conveyed, but vibrotactile stress information did not enhance performance in VT conditions beyond performance in VA conditions. In experiment III, which involved only intonation identification, a reliable advantage for the spatio-temporal display was obtained. Differences between subject groups were obtained for intonation identification, with more accurate VT performance by those with normal hearing. Possible effects of long-term hearing status are discussed.

  16. On the recognition of emotional vocal expressions: motivations for a holistic approach.

    PubMed

    Esposito, Anna; Esposito, Antonietta M

    2012-10-01

    Human beings seem to be able to recognize emotions from speech very well and information communication technology aims to implement machines and agents that can do the same. However, to be able to automatically recognize affective states from speech signals, it is necessary to solve two main technological problems. The former concerns the identification of effective and efficient processing algorithms capable of capturing emotional acoustic features from speech sentences. The latter focuses on finding computational models able to classify, with an approximation as good as human listeners, a given set of emotional states. This paper will survey these topics and provide some insights for a holistic approach to the automatic analysis, recognition and synthesis of affective states.
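
    As a toy illustration of the two problems named above (capturing acoustic features and classifying them), the sketch below computes a few simple frame-level descriptors from a waveform and feeds them to a standard classifier. The descriptors, the synthetic "emotions", and the model choice are placeholders, not a validated emotional-speech feature set.

```python
# Illustrative two-step pipeline: extract simple acoustic descriptors, then classify.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def utterance_features(x, frame=400, hop=200):
    frames = [x[i:i + frame] for i in range(0, len(x) - frame, hop)]
    energy = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])               # loudness proxy
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames])  # noisiness proxy
    return np.array([energy.mean(), energy.std(), zcr.mean(), zcr.std()])

rng = np.random.default_rng(0)
X, y = [], []
for label in range(3):                                   # three placeholder "emotions"
    for _ in range(30):
        tone = np.sin(2 * np.pi * (100 + 60 * label) * np.arange(8000) / 8000)
        x = (0.5 + 0.2 * label) * tone + 0.1 * rng.normal(size=8000)
        X.append(utterance_features(x)); y.append(label)

print("cv accuracy:", cross_val_score(SVC(), np.array(X), np.array(y), cv=5).mean().round(2))
```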

  17. A measure for assessing the effects of audiovisual speech integration.

    PubMed

    Altieri, Nicholas; Townsend, James T; Wenger, Michael J

    2014-06-01

    We propose a measure of audiovisual speech integration that takes into account accuracy and response times. This measure should prove beneficial for researchers investigating multisensory speech recognition, since it relates to normal-hearing and aging populations. As an example, age-related sensory decline influences both the rate at which one processes information and the ability to utilize cues from different sensory modalities. Our function assesses integration when both auditory and visual information are available, by comparing performance on these audiovisual trials with theoretical predictions for performance under the assumptions of parallel, independent self-terminating processing of single-modality inputs. We provide example data from an audiovisual identification experiment and discuss applications for measuring audiovisual integration skills across the life span.
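
    One common way to formalize the comparison with "parallel, independent self-terminating processing" of single-modality inputs is to compare the audiovisual response-time survivor function with the product of the two unimodal survivor functions, as in race-model and workload-capacity analyses. The sketch below illustrates that generic logic on simulated correct-response RTs; it is not necessarily the authors' exact assessment function.

```python
# Generic sketch: compare audiovisual RTs with the prediction of parallel,
# independent self-terminating (race-like) processing of the unimodal inputs.
import numpy as np

def survivor(rts, grid):
    """Empirical survivor function S(t) = P(RT > t) evaluated on a time grid."""
    rts = np.sort(np.asarray(rts))
    return 1.0 - np.searchsorted(rts, grid, side="right") / len(rts)

rng = np.random.default_rng(0)
rt_a = rng.gamma(shape=8, scale=60, size=300)     # simulated auditory-only correct RTs (ms)
rt_v = rng.gamma(shape=9, scale=60, size=300)     # simulated visual-only correct RTs (ms)
rt_av = rng.gamma(shape=7, scale=55, size=300)    # simulated audiovisual correct RTs (ms)

grid = np.linspace(200, 1200, 50)
s_av = survivor(rt_av, grid)
s_pred = survivor(rt_a, grid) * survivor(rt_v, grid)   # independent-channels prediction

# Where the observed AV survivor lies below the prediction, responses are
# faster than independent single-modality processing alone would allow.
gain = s_pred - s_av
print("mean survivor-function gain over the grid:", round(float(gain.mean()), 3))
```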

  18. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    PubMed

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and adds to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound disorders. Non-speech oral motor exercise use was most frequently reported in the treatment of dysarthria. Non-speech oral motor exercise use when targeting speech sound disorders is not widely endorsed in the literature.

  19. [Non-speech oral motor treatment efficacy for children with developmental speech sound disorders].

    PubMed

    Ygual-Fernandez, A; Cervera-Merida, J F

    2016-01-01

    In the treatment of speech disorders by means of speech therapy, two antagonistic methodological approaches are applied: non-verbal ones, based on oral motor exercises (OME), and verbal ones, which are based on speech processing tasks with syllables, phonemes and words. In Spain, OME programmes are called 'programas de praxias', and are widely used and valued by speech therapists. To review the studies conducted on the effectiveness of OME-based treatments applied to children with speech disorders and the theoretical arguments that might, or might not, justify their usefulness. Over the last few decades, evidence has been gathered about the lack of efficacy of this approach for treating developmental speech disorders and pronunciation problems in populations without any neurological alteration of motor functioning. The American Speech-Language-Hearing Association has advised against its use, taking into account the principles of evidence-based practice. The knowledge gathered to date on motor control shows that the pattern of mobility and its corresponding organisation in the brain are different in speech and in other non-verbal functions linked to nutrition and breathing. Neither the studies on their effectiveness nor the arguments based on motor control studies recommend the use of OME-based programmes for the treatment of pronunciation problems in children with developmental language disorders.

  20. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior

    PubMed Central

    2018-01-01

    Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners’ abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication. PMID:28938250

  1. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior.

    PubMed

    Peelle, Jonathan E

    Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.

  2. The relationship between the Nasality Severity Index 2.0 and perceptual judgments of hypernasality.

    PubMed

    Bettens, Kim; De Bodt, Marc; Maryn, Youri; Luyten, Anke; Wuyts, Floris L; Van Lierde, Kristiane M

    2016-01-01

    The Nasality Severity Index 2.0 (NSI 2.0) forms a new, multiparametric approach in the identification of hypernasality. The present study aimed to investigate the correlation between the NSI 2.0 scores and the perceptual assessment of hypernasality. Speech samples of 35 patients, representing a range of nasality from normal to severely hypernasal, were rated by four expert speech-language pathologists using visual analogue scaling (VAS) judging the degree of hypernasality, audible nasal airflow (ANA) and speech intelligibility. Inter- and intra-listener reliability was verified using intraclass correlation coefficients. Correlations between NSI 2.0 scores and its parameters (i.e. nasalance score of an oral text and vowel /u/, voice low tone to high tone ratio of the vowel /i/) and the degree of hypernasality were determined using Pearson correlation coefficients. Multiple linear regression analysis was used to investigate the possible influence of ANA and speech intelligibility on the NSI 2.0 scores. Overall good to excellent inter- and intra-listener reliability was found for the perceptual ratings. A moderate, but significant negative correlation between NSI 2.0 scores and perceived hypernasality (r=-0.64) was found, in which a more negative NSI 2.0 score indicates the presence of more severe hypernasality. No significant influence of ANA or intelligibility on the NSI 2.0 was observed based on the regression analysis. Because the NSI 2.0 correlates significantly with perceived hypernasality, it provides an easy-to-interpret severity score of hypernasality which will facilitate the evaluation of therapy outcomes, communication to the patient and other clinicians, and decisions for treatment planning, based on a multiparametric approach. However, research is still necessary to further explore the instrumental correlates of perceived hypernasality. The reader will be able to (1) describe and discuss current issues and influencing variables regarding perceptual ratings of hypernasality; (2) describe and discuss the relationship between the Nasality Severity Index 2.0, a new multiparametric approach to hypernasality, and perceptual judgments of hypernasality based on visual analogue scale ratings; (3) compare these results with the correlations based on a single parameter approach and (4) describe and discuss the possible influence of audible nasal airflow and speech intelligibility on the NSI 2.0 scores. Copyright © 2016 Elsevier Inc. All rights reserved.
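
    As a rough illustration of the statistics reported above (a Pearson correlation between NSI 2.0 scores and perceived hypernasality, plus a multiple linear regression checking the influence of ANA and intelligibility), the following Python sketch uses simulated placeholder ratings; the variable names and values are assumptions for illustration, not the study's data.

    ```python
    import numpy as np
    from scipy import stats
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 35
    hypernasality_vas = rng.uniform(0, 100, n)               # perceptual VAS ratings
    nsi_scores = -0.05 * hypernasality_vas + rng.normal(0, 1.5, n)
    ana_vas = rng.uniform(0, 100, n)                         # audible nasal airflow
    intelligibility_vas = rng.uniform(0, 100, n)

    # Pearson correlation between NSI 2.0 scores and perceived hypernasality.
    r, p = stats.pearsonr(nsi_scores, hypernasality_vas)
    print(f"r = {r:.2f}, p = {p:.4f}")

    # Multiple linear regression: do ANA or intelligibility predict NSI 2.0
    # scores over and above perceived hypernasality?
    X = sm.add_constant(
        np.column_stack([hypernasality_vas, ana_vas, intelligibility_vas]))
    model = sm.OLS(nsi_scores, X).fit()
    print(model.summary())
    ```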

  3. Visually-guided attention enhances target identification in a complex auditory scene.

    PubMed

    Best, Virginia; Ozmeral, Erol J; Shinn-Cunningham, Barbara G

    2007-06-01

    In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.

  4. Visually-guided Attention Enhances Target Identification in a Complex Auditory Scene

    PubMed Central

    Ozmeral, Erol J.; Shinn-Cunningham, Barbara G.

    2007-01-01

    In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors. PMID:17453308

  5. Evaluation of a Wind Noise Attenuation Algorithm on Subjective Annoyance and Speech-in-Wind Performance.

    PubMed

    Korhonen, Petri; Kuk, Francis; Seper, Eric; Mørkebjerg, Martin; Roikjer, Majken

    2017-01-01

    Wind noise is a common problem reported by hearing aid wearers. The MarkeTrak VIII reported that 42% of hearing aid wearers are not satisfied with the performance of their hearing aids in situations where wind is present. The current study investigated the effect of a new wind noise attenuation (WNA) algorithm on subjective annoyance and speech recognition in the presence of wind. A single-blinded, repeated measures design was used. Fifteen experienced hearing aid wearers with bilaterally symmetrical (≤10 dB) mild-to-moderate sensorineural hearing loss participated in the study. Subjective rating for wind noise annoyance was measured for wind presented alone from 0° and 290° at wind speeds of 4, 5, 6, 7, and 10 m/sec. Phoneme identification performance was measured using Widex Office of Clinical Amplification Nonsense Syllable Test presented at 60, 65, 70, and 75 dB SPL from 270° in the presence of wind originating from 0° at a speed of 5 m/sec. The subjective annoyance from wind noise was reduced for wind originating from 0° at wind speeds from 4 to 7 m/sec. The largest improvement in phoneme identification with the WNA algorithm was 48.2% when speech was presented from 270° at 65 dB SPL and the wind originated from 0° azimuth at 5 m/sec. The WNA algorithm used in this study reduced subjective annoyance for wind speeds ranging from 4 to 7 m/sec. The algorithm was effective in improving speech identification in the presence of wind originating from 0° at 5 m/sec. These results suggest that the WNA algorithm used in the current study could expand the range of real-life situations where a hearing-impaired person can use the hearing aid optimally. American Academy of Audiology

  6. Perception of environmental sounds by experienced cochlear implant patients.

    PubMed

    Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan

    2011-01-01

    Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries, or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well-being. Perception of environmental sounds as acoustically and semantically complex stimuli may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds, and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Seventeen experienced postlingually deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern, and temporal order for tones tests), and a backward digit recall test. The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants, and r = 0.48 for vowels. HINT and CNC scores in quiet moderately correlated with the temporal order for tones. However, the correlation between speech and environmental sounds changed little after partialling out the variance due to other variables. Present findings indicate that environmental sound identification is difficult for CI patients. They further suggest that speech and environmental sounds may overlap considerably in their perceptual processing. Certain spectrotemporal processing abilities are separately associated with speech and environmental sound performance. However, they do not appear to mediate the relationship between speech and environmental sounds in CI patients. Environmental sound rehabilitation may be beneficial to some patients. Environmental sound testing may have potential diagnostic applications, especially with difficult-to-test populations and might also be predictive of speech performance for prelingually deafened patients with cochlear implants.
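
    The "partialling out" step mentioned above corresponds to a partial correlation, which can be computed by correlating the residuals of each variable after regressing out the control variables. The numpy sketch below is a generic illustration using simulated placeholder values, not the study's data.

    ```python
    import numpy as np

    def partial_corr(x, y, controls):
        """Correlation between x and y after regressing out the control variables."""
        Z = np.column_stack([np.ones(len(x))] + list(controls))
        # Residualize x and y on the controls via least squares.
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        return np.corrcoef(rx, ry)[0, 1]

    rng = np.random.default_rng(2)
    n = 17  # the study tested 17 CI patients; these values are simulated
    env_sound = rng.normal(45, 10, n)                 # environmental sound score (%)
    hint_quiet = 0.7 * env_sound + rng.normal(0, 5, n)
    temporal_order = rng.normal(50, 15, n)
    digit_span = rng.normal(5, 1, n)

    r_simple = np.corrcoef(env_sound, hint_quiet)[0, 1]
    r_partial = partial_corr(env_sound, hint_quiet, [temporal_order, digit_span])
    print(f"simple r = {r_simple:.2f}, partial r = {r_partial:.2f}")
    ```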

  7. Phenotype of FOXP2 haploinsufficiency in a mother and son.

    PubMed

    Rice, Gregory M; Raca, Gordana; Jakielski, Kathy J; Laffin, Jennifer J; Iyama-Kurtycz, Christina M; Hartley, Sigan L; Sprague, Rae E; Heintzelman, Anne T; Shriberg, Lawrence D

    2012-01-01

    Disruptions in FOXP2, a transcription factor, are the only known monogenic cause of speech and language impairment. We report on clinical findings for two new individuals with a submicroscopic deletion of FOXP2: a boy with severe apraxia of speech and his currently moderately affected mother. A 1.57 Mb deletion on chromosome 7q31 was detected by array comparative genomic hybridization (aCGH). In addition to FOXP2, the patients' deletion involves two other genes, MDFIC and PPP1R3A, neither of which has been associated with speech or language disorders. Thus, findings for these two family members provide informative phenotypic information on FOXP2 haploinsufficiency. Evaluation by a clinical geneticist indicated no major congenital anomalies or dysmorphic features. Evaluations by a clinical psychologist and occupational therapist indicated cognitive-linguistic processing and sensorimotor control deficits, but did not support a diagnosis of autism spectrum disorder. Evaluation by clinical and research speech pathologists confirmed that both patients' speech deficits met contemporary criteria for apraxia of speech. Notably, the patients were not able to laugh, cough, or sneeze spontaneously, replicating findings reported for two other FOXP2 cases and a potential diagnostic sign of nonsyndromic apraxia of speech. Speech severity findings for the boy were not consistent with the hypothesis that loss of maternal FOXP2 should be relatively benign. Better understanding of the behavioral phenotype of FOXP2 disruptions will aid identification of patients, toward an eventual understanding of the pathophysiology of syndromic and nonsyndromic apraxia of speech. Copyright © 2011 Wiley Periodicals, Inc.

  8. Speech-based interaction with in-vehicle computers: the effect of speech-based e-mail on drivers' attention to the roadway.

    PubMed

    Lee, J D; Caven, B; Haake, S; Brown, T L

    2001-01-01

    As computer applications for cars emerge, a speech-based interface offers an appealing alternative to the visually demanding direct manipulation interface. However, speech-based systems may pose cognitive demands that could undermine driving safety. This study used a car-following task to evaluate how a speech-based e-mail system affects drivers' response to the periodic braking of a lead vehicle. The study included 24 drivers between the ages of 18 and 24 years. A baseline condition with no e-mail system was compared with a simple and a complex e-mail system in both simple and complex driving environments. The results show a 30% (310 ms) increase in reaction time when the speech-based system is used. Subjective workload ratings and probe questions also indicate that speech-based interaction introduces a significant cognitive load, which was highest for the complex e-mail system. These data show that a speech-based interface is not a panacea that eliminates the potential distraction of in-vehicle computers. Actual or potential applications of this research include design of in-vehicle information systems and evaluation of their contributions to driver distraction.

  9. The Pathways for Intelligible Speech: Multivariate and Univariate Perspectives

    PubMed Central

    Evans, S.; Kyong, J.S.; Rosen, S.; Golestani, N.; Warren, J.E.; McGettigan, C.; Mourão-Miranda, J.; Wise, R.J.S.; Scott, S.K.

    2014-01-01

    An anterior pathway, concerned with extracting meaning from sound, has been identified in nonhuman primates. An analogous pathway has been suggested in humans, but controversy exists concerning the degree of lateralization and the precise location where responses to intelligible speech emerge. We have demonstrated that the left anterior superior temporal sulcus (STS) responds preferentially to intelligible speech (Scott SK, Blank CC, Rosen S, Wise RJS. 2000. Identification of a pathway for intelligible speech in the left temporal lobe. Brain. 123:2400–2406.). A functional magnetic resonance imaging study in Cerebral Cortex used equivalent stimuli and univariate and multivariate analyses to argue for the greater importance of bilateral posterior when compared with the left anterior STS in responding to intelligible speech (Okada K, Rong F, Venezia J, Matchin W, Hsieh IH, Saberi K, Serences JT, Hickok G. 2010. Hierarchical organization of human auditory cortex: evidence from acoustic invariance in the response to intelligible speech. Cereb Cortex. 20:2486–2495.). Here, we also replicate our original study, demonstrating that the left anterior STS exhibits the strongest univariate response and, in decoding using the bilateral temporal cortex, contains the most informative voxels showing an increased response to intelligible speech. In contrast, in classifications using local “searchlights” and a whole brain analysis, we find greater classification accuracy in posterior rather than anterior temporal regions. Thus, we show that the precise nature of the multivariate analysis used will emphasize different response profiles associated with complex sound to speech processing. PMID:23585519

  10. Evidence-Based Systematic Review: Effects of Nonspeech Oral Motor Exercises on Speech

    ERIC Educational Resources Information Center

    McCauley, Rebecca J.; Strand, Edythe; Lof, Gregory L.; Schooling, Tracy; Frymark, Tobi

    2009-01-01

    Purpose: The purpose of this systematic review was to examine the current evidence for the use of oral motor exercises (OMEs) on speech (i.e., speech physiology, speech production, and functional speech outcomes) as a means of supporting further research and clinicians' use of evidence-based practice. Method: The peer-reviewed literature from 1960…

  11. A Networking of Community-Based Speech Therapy: Borabue District, Maha Sarakham.

    PubMed

    Pumnum, Tawitree; Kum-ud, Weawta; Prathanee, Benjamas

    2015-08-01

    Most children with cleft lip and palate (CLP) have articulation problems because of compensatory articulation disorders arising from velopharyngeal insufficiency. Theoretically, children should receive speech therapy from a speech and language pathologist (SLP) 1-2 sessions per week. In developing countries, particularly Thailand, most children cannot access standard speech services because of the limited availability of services and SLPs; networking of a Community-Based Speech Model might be an appropriate way to solve this problem. To study the effectiveness of a networking of the Khon Kaen University (KKU) Community-Based Speech Model with Non Thong Tambon Health Promotion Hospital, Borabue, Maha Sarakham, in decreasing the number of articulation errors for children with CLP. Six children with CLP who lived in Borabue and the surrounding district, Maha Sarakham, and had medical records at Srinagarind Hospital were assessed for pre- and post-treatment articulation errors and were provided speech therapy by an SLP through on-service teaching of speech assistants (SA). The children with CLP then received speech correction (SC) from the SA according to the assigned programme, and caregivers practised a home programme for a year. The networking with Non Thong Tambon Health Promotion Hospital, Borabue, Maha Sarakham significantly reduced the number of post-treatment articulation errors for three children with CLP. Factors affecting the treatment results for the other children included delayed speech and language development, hypernasality, and the consistency of SC at the local hospital and at home. A networking of the KKU Community-Based Speech Model with Non Thong Tambon Health Promotion Hospital, Borabue, Maha Sarakham was a good way to enhance speech therapy in Thailand and other developing countries where speech services or professionals are limited.

  12. Trimodal speech perception: how residual acoustic hearing supplements cochlear-implant consonant recognition in the presence of visual cues.

    PubMed

    Sheffield, Benjamin M; Schuchman, Gerald; Bernstein, Joshua G W

    2015-01-01

    As cochlear implant (CI) acceptance increases and candidacy criteria are expanded, these devices are increasingly recommended for individuals with less than profound hearing loss. As a result, many individuals who receive a CI also retain acoustic hearing, often in the low frequencies, in the nonimplanted ear (i.e., bimodal hearing) and in some cases in the implanted ear (i.e., hybrid hearing) which can enhance the performance achieved by the CI alone. However, guidelines for clinical decisions pertaining to cochlear implantation are largely based on expectations for postsurgical speech-reception performance with the CI alone in auditory-only conditions. A more comprehensive prediction of postimplant performance would include the expected effects of residual acoustic hearing and visual cues on speech understanding. An evaluation of auditory-visual performance might be particularly important because of the complementary interaction between the speech information relayed by visual cues and that contained in the low-frequency auditory signal. The goal of this study was to characterize the benefit provided by residual acoustic hearing to consonant identification under auditory-alone and auditory-visual conditions for CI users. Additional information regarding the expected role of residual hearing in overall communication performance by a CI listener could potentially lead to more informed decisions regarding cochlear implantation, particularly with respect to recommendations for or against bilateral implantation for an individual who is functioning bimodally. Eleven adults 23 to 75 years old with a unilateral CI and air-conduction thresholds in the nonimplanted ear equal to or better than 80 dB HL for at least one octave frequency between 250 and 1000 Hz participated in this study. Consonant identification was measured for conditions involving combinations of electric hearing (via the CI), acoustic hearing (via the nonimplanted ear), and speechreading (visual cues). The results suggest that the benefit to CI consonant-identification performance provided by the residual acoustic hearing is even greater when visual cues are also present. An analysis of consonant confusions suggests that this is because the voicing cues provided by the residual acoustic hearing are highly complementary with the mainly place-of-articulation cues provided by the visual stimulus. These findings highlight the need for a comprehensive prediction of trimodal (acoustic, electric, and visual) postimplant speech-reception performance to inform implantation decisions. The increased influence of residual acoustic hearing under auditory-visual conditions should be taken into account when considering surgical procedures or devices that are intended to preserve acoustic hearing in the implanted ear. This is particularly relevant when evaluating the candidacy of a current bimodal CI user for a second CI (i.e., bilateral implantation). Although recent developments in CI technology and surgical techniques have increased the likelihood of preserving residual acoustic hearing, preservation cannot be guaranteed in each individual case. Therefore, the potential gain to be derived from bilateral implantation needs to be weighed against the possible loss of the benefit provided by residual acoustic hearing.

  13. Talker variability in audio-visual speech perception

    PubMed Central

    Heald, Shannon L. M.; Nusbaum, Howard C.

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred. PMID:25076919

  14. Talker variability in audio-visual speech perception.

    PubMed

    Heald, Shannon L M; Nusbaum, Howard C

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.

  15. Unequal effects of speech and nonspeech contexts on the perceptual normalization of Cantonese level tones.

    PubMed

    Zhang, Caicai; Peng, Gang; Wang, William S-Y

    2012-08-01

    Context is important for recovering language information from talker-induced variability in acoustic signals. In tone perception, previous studies reported similar effects of speech and nonspeech contexts in Mandarin, supporting a general perceptual mechanism underlying tone normalization. However, no supportive evidence was obtained in Cantonese, also a tone language. Moreover, no study has compared speech and nonspeech contexts in the multi-talker condition, which is essential for exploring the normalization mechanism of inter-talker variability in speaking F0. The other question is whether a talker's full F0 range and mean F0 equally facilitate normalization. To answer these questions, this study examines the effects of four context conditions (speech/nonspeech × F0 contour/mean F0) in the multi-talker condition in Cantonese. Results show that raising and lowering the F0 of speech contexts change the perception of identical stimuli from mid level tone to low and high level tone, whereas nonspeech contexts only mildly increase the identification preference. It supports the speech-specific mechanism of tone normalization. Moreover, speech context with flattened F0 trajectory, which neutralizes cues of a talker's full F0 range, fails to facilitate normalization in some conditions, implying that a talker's mean F0 is less efficient for minimizing talker-induced lexical ambiguity in tone perception.

  16. Individual differences in speech-in-noise perception parallel neural speech processing and attention in preschoolers.

    PubMed

    Thompson, Elaine C; Woodruff Carr, Kali; White-Schwoch, Travis; Otto-Meyer, Sebastian; Kraus, Nina

    2017-02-01

    From bustling classrooms to unruly lunchrooms, school settings are noisy. To learn effectively in the unwelcome company of numerous distractions, children must clearly perceive speech in noise. In older children and adults, speech-in-noise perception is supported by sensory and cognitive processes, but the correlates underlying this critical listening skill in young children (3-5 year olds) remain undetermined. Employing a longitudinal design (two evaluations separated by ∼12 months), we followed a cohort of 59 preschoolers, ages 3.0-4.9, assessing word-in-noise perception, cognitive abilities (intelligence, short-term memory, attention), and neural responses to speech. Results reveal changes in word-in-noise perception parallel changes in processing of the fundamental frequency (F0), an acoustic cue known for playing a role central to speaker identification and auditory scene analysis. Four unique developmental trajectories (speech-in-noise perception groups) confirm this relationship, in that improvements and declines in word-in-noise perception couple with enhancements and diminishments of F0 encoding, respectively. Improvements in word-in-noise perception also pair with gains in attention. Word-in-noise perception does not relate to strength of neural harmonic representation or short-term memory. These findings reinforce previously-reported roles of F0 and attention in hearing speech in noise in older children and adults, and extend this relationship to preschool children. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Individual differences in speech-in-noise perception parallel neural speech processing and attention in preschoolers

    PubMed Central

    Thompson, Elaine C.; Carr, Kali Woodruff; White-Schwoch, Travis; Otto-Meyer, Sebastian; Kraus, Nina

    2016-01-01

    From bustling classrooms to unruly lunchrooms, school settings are noisy. To learn effectively in the unwelcome company of numerous distractions, children must clearly perceive speech in noise. In older children and adults, speech-in-noise perception is supported by sensory and cognitive processes, but the correlates underlying this critical listening skill in young children (3–5 year olds) remain undetermined. Employing a longitudinal design (two evaluations separated by ~12 months), we followed a cohort of 59 preschoolers, ages 3.0–4.9, assessing word-in-noise perception, cognitive abilities (intelligence, short-term memory, attention), and neural responses to speech. Results reveal changes in word-in-noise perception parallel changes in processing of the fundamental frequency (F0), an acoustic cue known for playing a role central to speaker identification and auditory scene analysis. Four unique developmental trajectories (speech-in-noise perception groups) confirm this relationship, in that improvements and declines in word-in-noise perception couple with enhancements and diminishments of F0 encoding, respectively. Improvements in word-in-noise perception also pair with gains in attention. Word-in-noise perception does not relate to strength of neural harmonic representation or short-term memory. These findings reinforce previously-reported roles of F0 and attention in hearing speech in noise in older children and adults, and extend this relationship to preschool children. PMID:27864051

  18. Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners with Hearing Impairment Using Hearing Aids

    ERIC Educational Resources Information Center

    Moradi, Shahram; Lidestam, Bjorn; Danielsson, Henrik; Ng, Elaine Hoi Ning; Ronnberg, Jerker

    2017-01-01

    Purpose: We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels--in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands--in listeners with hearing impairment using hearing aids. Method: The study comprised 199…

  19. The Clinical Practice of Speech and Language Therapists with Children with Phonologically Based Speech Sound Disorders

    ERIC Educational Resources Information Center

    Oliveira, Carla; Lousada, Marisa; Jesus, Luis M. T.

    2015-01-01

    Children with speech sound disorders (SSD) represent a large number of speech and language therapists' caseloads. The intervention with children who have SSD can involve different therapy approaches, and these may be articulatory or phonologically based. Some international studies reveal a widespread application of articulatory based approaches in…

  20. Automated recognition of helium speech. Phase I: Investigation of microprocessor based analysis/synthesis system

    NASA Astrophysics Data System (ADS)

    Jelinek, H. J.

    1986-01-01

    This is the Final Report of Electronic Design Associates on its Phase I SBIR project. The purpose of this project is to develop a method for correcting helium speech, as experienced in diver-surface communication. The goal of the Phase I study was to design, prototype, and evaluate a real-time helium speech corrector system based upon digital signal processing techniques. The general approach was to develop hardware (an IBM PC board) to digitize helium speech and software (a LAMBDA computer based simulation) to translate the speech. As planned in the study proposal, this initial prototype may now be used to assess the expected performance of a self-contained real-time system that uses an identical algorithm. The Final Report details the work carried out to produce the prototype system. Four major project tasks were completed: (1) a signal processing scheme for converting helium speech to normal-sounding speech was developed; (2) the scheme was simulated on a general-purpose (LAMBDA) computer, with actual helium speech supplied to the simulation and the converted speech generated; (3) an IBM PC-based 14-bit data input/output board was designed and built; and (4) a bibliography of references on speech processing was compiled.

  1. Pitch perception and production in congenital amusia: Evidence from Cantonese speakers.

    PubMed

    Liu, Fang; Chan, Alice H D; Ciocca, Valter; Roquet, Catherine; Peretz, Isabelle; Wong, Patrick C M

    2016-07-01

    This study investigated pitch perception and production in speech and music in individuals with congenital amusia (a disorder of musical pitch processing) who are native speakers of Cantonese, a tone language with a highly complex tonal system. Sixteen Cantonese-speaking congenital amusics and 16 controls performed a set of lexical tone perception, production, singing, and psychophysical pitch threshold tasks. Their tone production accuracy and singing proficiency were subsequently judged by independent listeners, and subjected to acoustic analyses. Relative to controls, amusics showed impaired discrimination of lexical tones in both speech and non-speech conditions. They also received lower ratings for singing proficiency, producing larger pitch interval deviations and making more pitch interval errors compared to controls. Demonstrating higher pitch direction identification thresholds than controls for both speech syllables and piano tones, amusics nevertheless produced native lexical tones with pitch trajectories and intelligibility comparable to those of controls. Significant correlations were found between pitch threshold and lexical tone perception, music perception and production, but not between lexical tone perception and production for amusics. These findings provide further evidence that congenital amusia is a domain-general language-independent pitch-processing deficit that is associated with severely impaired music perception and production, mildly impaired speech perception, and largely intact speech production.

  2. Risk and protective factors associated with speech and language impairment in a nationally representative sample of 4- to 5-year-old children.

    PubMed

    Harrison, Linda J; McLeod, Sharynne

    2010-04-01

    To determine risk and protective factors for speech and language impairment in early childhood. Data are presented for a nationally representative sample of 4,983 children participating in the Longitudinal Study of Australian Children (described in McLeod & Harrison, 2009). Thirty-one child, parent, family, and community factors previously reported as being predictors of speech and language impairment were tested as predictors of (a) parent-rated expressive speech/language concern and (b) receptive language concern, (c) use of speech-language pathology services, and (d) low receptive vocabulary. Bivariate logistic regression analyses confirmed 29 of the identified factors. However, when tested concurrently with other predictors in multivariate analyses, only 19 remained significant: 9 for 2-4 outcomes and 10 for 1 outcome. Consistent risk factors were being male, having ongoing hearing problems, and having a more reactive temperament. Protective factors were having a more persistent and sociable temperament and higher levels of maternal well-being. Results differed by outcome for having an older sibling, parents speaking a language other than English, and parental support for children's learning at home. Identification of children requiring speech and language assessment requires consideration of the context of family life as well as biological and psychosocial factors intrinsic to the child.
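
    The analysis strategy described above (screening candidate predictors one at a time with bivariate logistic regression, then testing the survivors concurrently in a multivariate model) can be sketched as follows in Python; the outcome, predictor names, and simulated values are illustrative placeholders, not the study's variables or data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 4983
    df = pd.DataFrame({
        "male": rng.integers(0, 2, n),
        "hearing_problems": rng.integers(0, 2, n),
        "reactive_temperament": rng.normal(0, 1, n),
        "maternal_wellbeing": rng.normal(0, 1, n),
    })
    # Simulated binary outcome: parent-rated speech/language concern.
    logit = (0.5 * df.male + 0.8 * df.hearing_problems
             + 0.4 * df.reactive_temperament - 0.3 * df.maternal_wellbeing - 2.0)
    df["speech_concern"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    predictors = ["male", "hearing_problems", "reactive_temperament", "maternal_wellbeing"]

    # Step 1: bivariate screening, one predictor at a time.
    for col in predictors:
        X = sm.add_constant(df[[col]])
        res = sm.Logit(df["speech_concern"], X).fit(disp=0)
        print(f"{col}: OR = {np.exp(res.params[col]):.2f}, p = {res.pvalues[col]:.3f}")

    # Step 2: concurrent (multivariate) model with all screened predictors.
    X_all = sm.add_constant(df[predictors])
    print(sm.Logit(df["speech_concern"], X_all).fit(disp=0).summary())
    ```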

  3. Pitch perception and production in congenital amusia: Evidence from Cantonese speakers

    PubMed Central

    Liu, Fang; Chan, Alice H. D.; Ciocca, Valter; Roquet, Catherine; Peretz, Isabelle; Wong, Patrick C. M.

    2016-01-01

    This study investigated pitch perception and production in speech and music in individuals with congenital amusia (a disorder of musical pitch processing) who are native speakers of Cantonese, a tone language with a highly complex tonal system. Sixteen Cantonese-speaking congenital amusics and 16 controls performed a set of lexical tone perception, production, singing, and psychophysical pitch threshold tasks. Their tone production accuracy and singing proficiency were subsequently judged by independent listeners, and subjected to acoustic analyses. Relative to controls, amusics showed impaired discrimination of lexical tones in both speech and non-speech conditions. They also received lower ratings for singing proficiency, producing larger pitch interval deviations and making more pitch interval errors compared to controls. Demonstrating higher pitch direction identification thresholds than controls for both speech syllables and piano tones, amusics nevertheless produced native lexical tones with pitch trajectories and intelligibility comparable to those of controls. Significant correlations were found between pitch threshold and lexical tone perception, music perception and production, but not between lexical tone perception and production for amusics. These findings provide further evidence that congenital amusia is a domain-general language-independent pitch-processing deficit that is associated with severely impaired music perception and production, mildly impaired speech perception, and largely intact speech production. PMID:27475178

  4. Semantic and phonetic enhancements for speech-in-noise recognition by native and non-native listeners.

    PubMed

    Bradlow, Ann R; Alexander, Jennifer A

    2007-04-01

    Previous research has shown that speech recognition differences between native and proficient non-native listeners emerge under suboptimal conditions. Current evidence has suggested that the key deficit that underlies this disproportionate effect of unfavorable listening conditions for non-native listeners is their less effective use of compensatory information at higher levels of processing to recover from information loss at the phoneme identification level. The present study investigated whether this non-native disadvantage could be overcome if enhancements at various levels of processing were presented in combination. Native and non-native listeners were presented with English sentences in which the final word varied in predictability and which were produced in either plain or clear speech. Results showed that, relative to the low-predictability-plain-speech baseline condition, non-native listener final word recognition improved only when both semantic and acoustic enhancements were available (high-predictability-clear-speech). In contrast, the native listeners benefited from each source of enhancement separately and in combination. These results suggest that native and non-native listeners apply similar strategies for speech-in-noise perception: The crucial difference is in the signal clarity required for contextual information to be effective, rather than in an inability of non-native listeners to take advantage of this contextual information per se.

  5. Conflict monitoring in speech processing: An fMRI study of error detection in speech production and perception.

    PubMed

    Gauvin, Hanna S; De Baene, Wouter; Brass, Marcel; Hartsuiker, Robert J

    2016-02-01

    To minimize the number of errors in speech, and thereby facilitate communication, speech is monitored before articulation. It is, however, unclear at which level during speech production monitoring takes place, and what mechanisms are used to detect and correct errors. The present study investigated whether internal verbal monitoring takes place through the speech perception system, as proposed by perception-based theories of speech monitoring, or whether mechanisms independent of perception are applied, as proposed by production-based theories of speech monitoring. With the use of fMRI during a tongue twister task we observed that error detection in internal speech during noise-masked overt speech production and error detection in speech perception both recruit the same neural network, which includes pre-supplementary motor area (pre-SMA), dorsal anterior cingulate cortex (dACC), anterior insula (AI), and inferior frontal gyrus (IFG). Although production and perception recruit similar areas, as proposed by perception-based accounts, we did not find activation in superior temporal areas (which are typically associated with speech perception) during internal speech monitoring in speech production as hypothesized by these accounts. On the contrary, results are highly compatible with a domain general approach to speech monitoring, by which internal speech monitoring takes place through detection of conflict between response options, which is subsequently resolved by a domain general executive center (e.g., the ACC). Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Identification and tracking of particular speaker in noisy environment

    NASA Astrophysics Data System (ADS)

    Sawada, Hideyuki; Ohkado, Minoru

    2004-10-01

    Humans exchange information smoothly by voice in a variety of situations, such as in noisy environments, in a crowd, or in the presence of multiple speakers. We are able to detect the position of a sound source in 3D space, extract a particular sound from a mixture of sounds, and recognize who is talking. Realizing this mechanism with a computer would enable new applications, such as recording sound with high quality by reducing noise, presenting a clarified sound, and achieving microphone-free speech recognition by extracting a particular sound. This paper introduces the real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the location of the speaker and the individual characteristics of the voice. The study will be applied to develop an adaptive auditory system for a mobile robot that collaborates with factory workers.
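
    Localizing a speaker with a microphone array typically rests on time-difference-of-arrival estimates between microphone pairs, for which the generalized cross-correlation with phase transform (GCC-PHAT) is a standard tool. The sketch below is a generic illustration of that technique under assumed signals, not the specific system described in this record.

    ```python
    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None):
        """Estimate the delay (in seconds) of `sig` relative to `ref` using GCC-PHAT."""
        n = sig.size + ref.size
        SIG = np.fft.rfft(sig, n=n)
        REF = np.fft.rfft(ref, n=n)
        R = SIG * np.conj(REF)
        # Phase transform: keep only phase information to sharpen the peak.
        cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)
        max_shift = n // 2
        if max_tau is not None:
            max_shift = min(int(fs * max_tau), max_shift)
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        shift = np.argmax(np.abs(cc)) - max_shift
        return shift / float(fs)

    # Hypothetical two-microphone example: the same signal delayed by 12 samples.
    fs = 16000
    rng = np.random.default_rng(4)
    speech = rng.normal(size=fs)               # stand-in for one second of speech
    mic1 = speech
    mic2 = np.concatenate((np.zeros(12), speech[:-12])) + 0.05 * rng.normal(size=fs)

    tau = gcc_phat(mic2, mic1, fs, max_tau=0.001)
    print(f"estimated delay: {tau * 1e3:.3f} ms ({tau * fs:.0f} samples)")
    ```

    With delays from several microphone pairs and the known array geometry, the speaker's direction can then be triangulated; speaker identity would be handled separately, for example by scoring the extracted signal against per-speaker voice models.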

  7. Tracking development from early speech-language acquisition to reading skills at age 13.

    PubMed

    Bartl-Pokorny, Katrin D; Marschik, Peter B; Sachse, Steffi; Green, Vanessa A; Zhang, Dajie; Van Der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2013-06-01

    Previous studies have indicated a link between speech-language and literacy development. To add to this body of knowledge, we investigated whether lexical and grammatical skills from toddler to early school age are related to reading competence in adolescence. Twenty-three typically developing children were followed from age 1;6 to 13;6 (years;months). Parental checklists and standardized tests were used to assess the development of mental lexicon, grammatical and reading capacities of the children. Direct assessment of early speech-language functions positively correlated with later reading competence, whereas lexical skills reported by parents were not associated with this capacity. At (pre-) school age, larger vocabulary and better grammatical abilities predicted advanced reading abilities in adolescence. Our study contributes to the understanding of typical speech-language development and its relation to later reading outcome, extending the body of knowledge on these developmental domains for future early identification of children at risk for reading difficulties.

  8. Second Language Ability and Emotional Prosody Perception

    PubMed Central

    Bhatara, Anjali; Laukka, Petri; Boll-Avetisyan, Natalie; Granjon, Lionel; Anger Elfenbein, Hillary; Bänziger, Tanja

    2016-01-01

    The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions. PMID:27253326

  9. Acoustics based assessment of respiratory diseases using GMM classification.

    PubMed

    Mayorga, P; Druzgalski, C; Morelos, R L; Gonzalez, O H; Vidales, J

    2010-01-01

    The focus of this paper is to present a method utilizing lung sounds for a quantitative assessment of patient health as it relates to respiratory disorders. In order to accomplish this, applicable traditional techniques within the speech processing domain were utilized to evaluate lung sounds obtained with a digital stethoscope. Traditional methods utilized in the evaluation of asthma involve auscultation and spirometry, but utilization of more sensitive electronic stethoscopes, which are currently available, and application of quantitative signal analysis methods offer opportunities for improved diagnosis. In particular, we propose an acoustic evaluation methodology based on Gaussian Mixture Models (GMM), which should assist in broader analysis, identification, and diagnosis of asthma based on frequency domain analysis of wheezing and crackles.
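
    A minimal sketch of this kind of GMM-based classification in Python, using librosa for spectral features and scikit-learn for the mixture models: one GMM is fitted per class and a recording is assigned to the class with the highest average log-likelihood. The file names, class labels, and the choice of MFCC features are assumptions for illustration; the paper's exact feature set and model configuration are not reproduced here.

    ```python
    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def mfcc_features(path, sr=8000, n_mfcc=13):
        """Frame-level MFCCs from a lung-sound recording (frames as rows)."""
        y, sr = librosa.load(path, sr=sr)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

    def train_class_gmm(paths, n_components=8):
        """Fit one GMM on pooled frames from all recordings of a class."""
        frames = np.vstack([mfcc_features(p) for p in paths])
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
        return gmm.fit(frames)

    def classify(path, gmms):
        """Pick the class whose GMM gives the highest average log-likelihood."""
        frames = mfcc_features(path)
        scores = {label: gmm.score(frames) for label, gmm in gmms.items()}
        return max(scores, key=scores.get), scores

    # Hypothetical file lists for two classes of auscultation recordings.
    gmms = {
        "normal": train_class_gmm(["normal_01.wav", "normal_02.wav"]),
        "wheeze": train_class_gmm(["wheeze_01.wav", "wheeze_02.wav"]),
    }
    label, scores = classify("patient_recording.wav", gmms)
    print(label, scores)
    ```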

  10. Degradation of labial information modifies audiovisual speech perception in cochlear-implanted children.

    PubMed

    Huyse, Aurélie; Berthommier, Frédéric; Leybaert, Jacqueline

    2013-01-01

    The aim of the present study was to examine audiovisual speech integration in cochlear-implanted children and in normally hearing children exposed to degraded auditory stimuli. Previous studies have shown that speech perception in cochlear-implanted users is biased toward the visual modality when audition and vision provide conflicting information. Our main question was whether an experimentally designed degradation of the visual speech cue would increase the importance of audition in the response pattern. The impact of auditory proficiency was also investigated. A group of 31 children with cochlear implants and a group of 31 normally hearing children matched for chronological age were recruited. All children with cochlear implants had profound congenital deafness and had used their implants for at least 2 years. Participants had to perform an /aCa/ consonant-identification task in which stimuli were presented randomly in three conditions: auditory only, visual only, and audiovisual (congruent and incongruent McGurk stimuli). In half of the experiment, the visual speech cue was normal; in the other half (visual reduction) a degraded visual signal was presented, aimed at preventing lipreading of good quality. The normally hearing children received a spectrally reduced speech signal (simulating the input delivered by the cochlear implant). First, performance in the visual-only and congruent audiovisual modalities was decreased, showing that the visual reduction technique used here was efficient at degrading lipreading. Second, in the incongruent audiovisual trials, visual reduction led to a major increase in the number of auditory-based responses in both groups. Differences between proficient and nonproficient children were found in both groups, with nonproficient children's responses being more visual and less auditory than those of proficient children. Further analysis revealed that differences between visually clear and visually reduced conditions and between groups were not only because of differences in unisensory perception but also because of differences in the process of audiovisual integration per se. Visual reduction led to an increase in the weight of audition, even in cochlear-implanted children, whose perception is generally dominated by vision. This result suggests that the natural bias in favor of vision is not immutable. Audiovisual speech integration partly depends on the experimental situation, which modulates the informational content of the sensory channels and the weight that is awarded to each of them. Consequently, participants, whether deaf with cochlear implants or having normal hearing, not only base their perception on the most reliable modality but also award it an additional weight.

  11. A Deep Denoising Autoencoder Approach to Improving the Intelligibility of Vocoded Speech in Cochlear Implant Simulation.

    PubMed

    Lai, Ying-Hui; Chen, Fei; Wang, Syu-Siang; Lu, Xugang; Tsao, Yu; Lee, Chin-Hui

    2017-07-01

    In a cochlear implant (CI) speech processor, noise reduction (NR) is a critical component for enabling CI users to attain improved speech perception under noisy conditions. Identifying an effective NR approach has long been a key topic in CI research. Recently, a deep denoising autoencoder (DDAE) based NR approach was proposed and shown to be effective in restoring clean speech from noisy observations. It was also shown that DDAE could provide better performance than several existing NR methods in standardized objective evaluations. Following this success with normal speech, this paper further investigated the performance of DDAE-based NR to improve the intelligibility of envelope-based vocoded speech, which simulates speech signal processing in existing CI devices. We compared the performance of speech intelligibility between DDAE-based NR and conventional single-microphone NR approaches using the noise vocoder simulation. The results of both objective evaluations and listening test showed that, under the conditions of nonstationary noise distortion, DDAE-based NR yielded higher intelligibility scores than conventional NR approaches. This study confirmed that DDAE-based NR could potentially be integrated into a CI processor to provide more benefits to CI users under noisy conditions.
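
    As a rough illustration of the DDAE concept (not the architecture or training setup used in the study), the sketch below maps noisy log-magnitude spectral frames to clean ones with a small fully connected network trained on mean-squared error; all layer sizes and data are placeholders.

```python
# Minimal sketch of a deep denoising autoencoder (DDAE) for spectral noise
# reduction, trained to map noisy log-magnitude frames to clean ones.
# Layer sizes, data, and training details are illustrative assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn

class DDAE(nn.Module):
    def __init__(self, dim=257, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),          # reconstruct clean log-magnitude
        )

    def forward(self, noisy_frames):
        return self.net(noisy_frames)

model = DDAE()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical training pair: (noisy, clean) log-magnitude spectra
noisy = torch.randn(64, 257)
clean = torch.randn(64, 257)
for _ in range(10):                          # a few illustrative steps
    optim.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optim.step()
```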

  12. The Neural Bases of Difficult Speech Comprehension and Speech Production: Two Activation Likelihood Estimation (ALE) Meta-Analyses

    ERIC Educational Resources Information Center

    Adank, Patti

    2012-01-01

    The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…

  13. Therapy Services.

    ERIC Educational Resources Information Center

    Austin Independent School District, TX.

    Reviewed are the goals and activities of the therapy services in the Austin Early Childhood Special Education Program. Specific sections detail activities for speech therapy (such as diagnostic assessment, habilitation, consultation, and reporting procedures), occupational therapy (including identification and assessment, and services to children,…

  14. The Speech multi features fusion perceptual hash algorithm based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.

    2018-03-01

    With constant progress in modern speech communication technologies, speech data are prone to noise corruption or malicious tampering. To achieve strong robustness and high efficiency, this paper proposes a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm applies wavelet packet decomposition to obtain the speech components and extracts the LPCC, LSP and ISP features of each component to construct a speech feature tensor. Speech authentication is performed by generating hash values through median-based quantification of the feature matrix. Experimental results show that, compared with similar algorithms, the proposed algorithm is robust to content-preserving operations and is able to resist common background noise. The algorithm is also computationally efficient, meeting the real-time requirements of speech communication and completing speech authentication quickly.
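
    A minimal sketch of the median-based quantification step mentioned above is given below; the feature matrix is random here, whereas in the algorithm it would contain the LPCC, LSP and ISP features of the wavelet-packet components, and the use of bit error rate for matching is an assumption.

```python
# Minimal sketch of hash generation by median-based quantification of a
# feature matrix, as described at a high level in the abstract. The feature
# matrix here is random; in the algorithm it would hold LPCC/LSP/ISP
# features of the wavelet-packet components.
import numpy as np

def perceptual_hash(feature_matrix):
    """Binarize each feature against its column median to form a hash sequence."""
    medians = np.median(feature_matrix, axis=0)
    bits = (feature_matrix > medians).astype(np.uint8)
    return bits.flatten()

def bit_error_rate(hash_a, hash_b):
    """Authentication is typically a threshold on the bit error rate (assumption)."""
    return np.mean(hash_a != hash_b)

h1 = perceptual_hash(np.random.default_rng(1).normal(size=(32, 12)))
h2 = perceptual_hash(np.random.default_rng(2).normal(size=(32, 12)))
print(bit_error_rate(h1, h2))
```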

  15. The effects of indexical and phonetic variation on vowel perception in typically developing 9- to 12-year-old children

    PubMed Central

    Jacewicz, Ewa; Fox, Robert Allen

    2015-01-01

    Purpose: To investigate how linguistic knowledge interacts with indexical knowledge in older children's perception under demanding listening conditions created by extensive talker variability. Method: Twenty-five 9- to 12-year-old children, 12 from North Carolina (NC) and 13 from Wisconsin (WI), identified 12 vowels in isolated hVd-words produced by 120 talkers representing the two dialects (NC and WI), both genders and three age groups (generations) of residents from the same geographic locations as the listeners. Results: Identification rates were higher for responses to talkers from the same dialect as the listeners and for female speech. Listeners were sensitive to systematic positional variations in vowels and their dynamic structure (formant movement) associated with generational differences in vowel pronunciation resulting from sound change in a speech community. The overall identification rate was 71.7%, which is 8.5% lower than for the adults responding to the same stimuli in Jacewicz and Fox (2012). Conclusions: Typically developing older children are successful in dealing with both phonetic and indexical variation related to talker dialect, gender and generation. They are less consistent than the adults, most likely due to their less efficient encoding of acoustic-phonetic information in the speech of multiple talkers and relative inexperience with indexical variation. PMID:24686520

  16. The effect of talker and intonation variability on speech perception in noise in children with dyslexia

    PubMed Central

    Hazan, Valerie; Messaoud-Galusi, Souhila; Rosen, Stuart

    2013-01-01

    Purpose: To determine whether children with dyslexia (DYS) are more affected than age-matched average readers (AR) by talker and intonation variability when perceiving speech in noise. Method: Thirty-four DYS and 25 AR children were tested on their perception of consonants in naturally-produced consonant-vowel (CV) tokens in multi-talker babble. Twelve CVs were presented for identification in four conditions varying in the degree of talker and intonation variability. Consonant place (/bi/-/di/) and voicing (/bi/-/pi/) discrimination was investigated with the same conditions. Results: DYS children made slightly more identification errors than AR children but only for conditions with variable intonation. Errors were more frequent for a subset of consonants, generally weakly-encoded for AR children, for tokens with intonation patterns (steady and rise-fall) that occur infrequently in connected discourse. In discrimination tasks, which have a greater memory and cognitive load, DYS children scored lower than AR children across all conditions. Conclusions: Unusual intonation patterns had a disproportionate (but small) effect on consonant intelligibility in noise for DYS children but adding talker variability did not. DYS children do not appear to have a general problem in perceiving speech in degraded conditions, which makes it unlikely that they lack robust phonological representations. PMID:22761322

  17. The effect of talker and intonation variability on speech perception in noise in children with dyslexia.

    PubMed

    Hazan, Valerie; Messaoud-Galusi, Souhila; Rosen, Stuart

    2013-02-01

    In this study, the authors aimed to determine whether children with dyslexia (hereafter referred to as "DYS children") are more affected than children with average reading ability (hereafter referred to as "AR children") by talker and intonation variability when perceiving speech in noise. Thirty-four DYS and 25 AR children were tested on their perception of consonants in naturally produced CV tokens in multitalker babble. Twelve CVs were presented for identification in four conditions varying in the degree of talker and intonation variability. Consonant place (/bi/-/di/) and voicing (/bi/-/pi/) discrimination were investigated with the same conditions. DYS children made slightly more identification errors than AR children but only for conditions with variable intonation. Errors were more frequent for a subset of consonants, generally weakly encoded for AR children, for tokens with intonation patterns (steady and rise-fall) that occur infrequently in connected discourse. In discrimination tasks, which have a greater memory and cognitive load, DYS children scored lower than AR children across all conditions. Unusual intonation patterns had a disproportionate (but small) effect on consonant intelligibility in noise for DYS children, but adding talker variability did not. DYS children do not appear to have a general problem in perceiving speech in degraded conditions, which makes it unlikely that they lack robust phonological representations.

  18. Source Camera Identification and Blind Tamper Detections for Images

    DTIC Science & Technology

    2007-04-24

    measures and image quality measures in the camera identification problem were studied in conjunction with a KNN classifier to identify the feature sets...shots varying from nature scenes to close-ups of people. We experimented with the KNN classifier (K=5) as well as an SVM algorithm...on Acoustics, Speech and Signal Processing (ICASSP), France, May 2006, vol. 5, pp. 401-404. [9] H. Farid and S. Lyu, "Higher-order wavelet statistics

  19. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis

    PubMed Central

    Altieri, Nicholas; Wenger, Michael J.

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of −12 dB, and S/N ratio of −18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity. PMID:24058358

  20. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.

    PubMed

    Altieri, Nicholas; Wenger, Michael J

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.
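
    The capacity measure referenced in the two records above is, in the standard Townsend and Nozawa (1995) formulation for redundant-target designs, C(t) = H_AV(t) / [H_A(t) + H_V(t)], where H(t) = -log S(t) is the integrated hazard of the response-time distribution. The sketch below estimates it from empirical survivor functions; the RT samples are placeholders, and the studies' exact estimation procedure may differ.

```python
# Minimal sketch of the (OR) capacity coefficient of Townsend and Nozawa
# (1995): C(t) = H_AV(t) / (H_A(t) + H_V(t)), where H(t) = -log S(t) is the
# integrated hazard estimated from the empirical survivor function of the
# response-time distribution. The RT samples below are placeholders.
import numpy as np

def integrated_hazard(rts, t):
    """H(t) = -log S(t), with S(t) the empirical survivor function."""
    survivor = np.mean(np.asarray(rts) > t)
    return -np.log(max(survivor, 1e-12))      # guard against log(0)

def capacity(rt_av, rt_a, rt_v, t):
    denom = integrated_hazard(rt_a, t) + integrated_hazard(rt_v, t)
    return integrated_hazard(rt_av, t) / max(denom, 1e-12)

rng = np.random.default_rng(0)
rt_a  = rng.normal(650, 80, 200)              # auditory-only RTs (ms), placeholder
rt_v  = rng.normal(700, 90, 200)              # visual-only RTs (ms), placeholder
rt_av = rng.normal(600, 70, 200)              # audiovisual RTs (ms), placeholder
print(capacity(rt_av, rt_a, rt_v, t=620))     # > 1 suggests efficient integration
```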

  1. The effect of compression and attention allocation on speech intelligibility

    NASA Astrophysics Data System (ADS)

    Choi, Sangsook; Carrell, Thomas

    2003-10-01

    Research investigating the effects of amplitude compression on speech intelligibility for individuals with sensorineural hearing loss has demonstrated contradictory results [Souza and Turner (1999)]. Because percent-correct measures may not be the best indicator of compression effectiveness, a speech intelligibility and motor coordination task was developed to provide data that may more thoroughly explain the perception of compressed speech signals. In the present study, a pursuit rotor task [Dlhopolsky (2000)] was employed along with a word identification task to measure the amount of attention required to perceive compressed and non-compressed words in noise. Monosyllabic words were mixed with speech-shaped noise at a fixed signal-to-noise ratio and compressed using a wide dynamic range compression scheme. Participants with normal hearing identified each word with or without a simultaneous pursuit-rotor task. Also, participants completed the pursuit-rotor task without simultaneous word presentation. It was expected that performance on the additional motor task would reflect the effect of compression better than simple word-accuracy measures. Results were complex. For example, in some conditions an irrelevant task actually improved performance on a simultaneous listening task. This suggests there might be an optimal level of attention required for recognition of monosyllabic words.
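
    The stimulus construction step described above (mixing monosyllabic words with speech-shaped noise at a fixed signal-to-noise ratio) can be sketched as follows; the signals and sampling rate are placeholders, and the compression stage is omitted.

```python
# Minimal sketch of mixing a word recording with speech-shaped noise at a
# fixed signal-to-noise ratio. The signals here are placeholders, not the
# study's stimuli, and the compression stage is not shown.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so that 10*log10(P_speech / P_noise) equals snr_db."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise[: len(speech)]

rng = np.random.default_rng(0)
word = rng.normal(size=16000)                 # 1 s of placeholder "speech"
noise = rng.normal(size=16000)                # placeholder speech-shaped noise
mixture = mix_at_snr(word, noise, snr_db=0)
```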

  2. Texting while driving: is speech-based text entry less risky than handheld text entry?

    PubMed

    He, J; Chaparro, A; Nguyen, B; Burge, R J; Crandall, J; Chaparro, B; Ni, R; Cao, S

    2014-11-01

    Research indicates that using a cell phone to talk or text while maneuvering a vehicle impairs driving performance. However, few published studies directly compare the distracting effects of texting using a hands-free (i.e., speech-based interface) versus handheld cell phone, which is an important issue for legislation, automotive interface design and driving safety training. This study compared the effect of speech-based versus handheld text entries on simulated driving performance by asking participants to perform a car following task while controlling the duration of a secondary text-entry task. Results showed that both speech-based and handheld text entries impaired driving performance relative to the drive-only condition by causing more variation in speed and lane position. Handheld text entry also increased the brake response time and increased variation in headway distance. Text entry using a speech-based cell phone was less detrimental to driving performance than handheld text entry. Nevertheless, the speech-based text entry task still significantly impaired driving compared to the drive-only condition. These results suggest that speech-based text entry disrupts driving, but reduces the level of performance interference compared to text entry with a handheld device. In addition, the difference in the distraction effect caused by speech-based and handheld text entry is not simply due to the difference in task duration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Perceptual context effects of speech and nonspeech sounds: the role of auditory categories.

    PubMed

    Aravamudhan, Radhika; Lotto, Andrew J; Hawks, John W

    2008-09-01

    Williams [(1986). "Role of dynamic information in the perception of coarticulated vowels," Ph.D. thesis, University of Connecticut, Standford, CT] demonstrated that nonspeech contexts had no influence on pitch judgments of nonspeech targets, whereas context effects were obtained when instructed to perceive the sounds as speech. On the other hand, Holt et al. [(2000). "Neighboring spectral content influences vowel identification," J. Acoust. Soc. Am. 108, 710-722] showed that nonspeech contexts were sufficient to elicit context effects in speech targets. The current study was to test a hypothesis that could explain the varying effectiveness of nonspeech contexts: Context effects are obtained only when there are well-established perceptual categories for the target stimuli. Experiment 1 examined context effects in speech and nonspeech signals using four series of stimuli: steady-state vowels that perceptually spanned from /inverted ohm/-/I/ in isolation and in the context of /w/ (with no steady-state portion) and two nonspeech sine-wave series that mimicked the acoustics of the speech series. In agreement with previous work context effects were obtained for speech contexts and targets but not for nonspeech analogs. Experiment 2 tested predictions of the hypothesis by testing for nonspeech context effects after the listeners had been trained to categorize the sounds. Following training, context-dependent categorization was obtained for nonspeech stimuli in the training group. These results are presented within a general perceptual-cognitive framework for speech perception research.

  4. Perceptual context effects of speech and nonspeech sounds: The role of auditory categories

    PubMed Central

    Aravamudhan, Radhika; Lotto, Andrew J.; Hawks, John W.

    2008-01-01

    Williams [(1986). “Role of dynamic information in the perception of coarticulated vowels,” Ph.D. thesis, University of Connecticut, Standford, CT] demonstrated that nonspeech contexts had no influence on pitch judgments of nonspeech targets, whereas context effects were obtained when instructed to perceive the sounds as speech. On the other hand, Holt et al. [(2000). “Neighboring spectral content influences vowel identification,” J. Acoust. Soc. Am. 108, 710–722] showed that nonspeech contexts were sufficient to elicit context effects in speech targets. The current study was to test a hypothesis that could explain the varying effectiveness of nonspeech contexts: Context effects are obtained only when there are well-established perceptual categories for the target stimuli. Experiment 1 examined context effects in speech and nonspeech signals using four series of stimuli: steady-state vowels that perceptually spanned from ∕ʊ∕-∕ɪ∕ in isolation and in the context of ∕w∕ (with no steady-state portion) and two nonspeech sine-wave series that mimicked the acoustics of the speech series. In agreement with previous work context effects were obtained for speech contexts and targets but not for nonspeech analogs. Experiment 2 tested predictions of the hypothesis by testing for nonspeech context effects after the listeners had been trained to categorize the sounds. Following training, context-dependent categorization was obtained for nonspeech stimuli in the training group. These results are presented within a general perceptual-cognitive framework for speech perception research. PMID:19045660

  5. Ageing without hearing loss or cognitive impairment causes a decrease in speech intelligibility only in informational maskers.

    PubMed

    Rajan, R; Cainer, K E

    2008-06-23

    In most everyday settings, speech is heard in the presence of competing sounds and understanding speech requires skills in auditory streaming and segregation, followed by identification and recognition, of the attended signals. Ageing leads to difficulties in understanding speech in noisy backgrounds. In addition to age-related changes in hearing-related factors, cognitive factors also play a role but it is unclear to what extent these are generalized or modality-specific cognitive factors. We examined how ageing in normal-hearing decade age cohorts from 20 to 69 years affected discrimination of open-set speech in background noise. We used two types of sentences of similar structural and linguistic characteristics but different masking levels (i.e. differences in signal-to-noise ratios required for detection of sentences in a standard masker) so as to vary sentence demand, and two background maskers (one causing purely energetic masking effects and the other causing energetic and informational masking) to vary load conditions. There was a decline in performance (measured as speech reception thresholds for perception of sentences in noise) in the oldest cohort for both types of sentences, but only in the presence of the more demanding informational masker. We interpret these results to indicate a modality-specific decline in cognitive processing, likely a decrease in the ability to use acoustic and phonetic cues efficiently to segregate speech from background noise, in subjects aged >60.

  6. Determining the effect of laser acupuncture in treating stutterers in comparison with speech therapy.

    PubMed

    Shafiei, Bijan; Heshmatipour, Mojtaba; Tavakol, Samira; Saghaei, Mahmud; Ghayumi, Zahra

    2015-01-01

    Investigating therapy strategies and methods for stutterers, with regard to their characteristics, length of treatment, and relapse of stuttering, is very important. Acupuncture has been introduced as a therapeutic method for the treatment of stuttering. The aim of the present research was to identify the results of laser acupuncture intervention in comparison with speech therapy in stutterers. This clinical-trial and case-control study was conducted on 20 stutterers and 20 non-stutterers. In the present study, speech therapy and laser acupuncture were used on 10 persons who had had developmental stuttering since childhood. The results were compared with the data of speech therapy and placebo laser in 10 control subjects. All of the subjects were followed up for 12 weeks after the intervention. The data showed that combining speech therapy with laser acupuncture increased the maintenance of the therapeutic effects of stuttering treatment and decreased relapse. The speech rate and the percentage of stuttered words decreased after the intervention in both groups (A and B). Follow-up after 12 weeks showed that the results were more stable in the laser group than in the other group, and there was a significant difference between the two groups. The results of the present study show that laser acupuncture therapy combined with speech therapy benefits the treatment of stuttering and helps prevent relapse, which is otherwise very common.

  7. Automated Discovery of Speech Act Categories in Educational Games

    ERIC Educational Resources Information Center

    Rus, Vasile; Moldovan, Cristian; Niraula, Nobal; Graesser, Arthur C.

    2012-01-01

    In this paper we address the important task of automated discovery of speech act categories in dialogue-based, multi-party educational games. Speech acts are important in dialogue-based educational systems because they help infer the student speaker's intentions (the task of speech act classification) which in turn is crucial to providing adequate…

  8. The Suitability of Cloud-Based Speech Recognition Engines for Language Learning

    ERIC Educational Resources Information Center

    Daniels, Paul; Iwago, Koji

    2017-01-01

    As online automatic speech recognition (ASR) engines become more accurate and more widely implemented with call software, it becomes important to evaluate the effectiveness and the accuracy of these recognition engines using authentic speech samples. This study investigates two of the most prominent cloud-based speech recognition engines--Apple's…

  9. Speech Correction for Children with Cleft Lip and Palate by Networking of Community-Based Care.

    PubMed

    Hanchanlert, Yotsak; Pramakhatay, Worawat; Pradubwong, Suteera; Prathanee, Benjamas

    2015-08-01

    Prevalence of cleft lip and palate (CLP) is high in Northeast Thailand. Most children with CLP face many problems beyond surgery, particularly compensatory articulation disorders (CAD), while speech services and the number of speech-language pathologists (SLPs) are limited. The aim was to determine the effectiveness of networking of the Khon Kaen University (KKU) Community-Based Speech Therapy Model (Kosumphisai Hospital, Kosumphisai District, and Maha Sarakham Hospital, Mueang District, Maha Sarakham Province) in reducing the number of articulation errors for children with CLP. Eleven children with CLP were recruited into three 1-year projects of the KKU Community-Based Speech Therapy Model. Articulation tests were formally assessed by qualified SLPs for baseline and post-treatment outcomes. Training on services for speech assistants (SAs) was conducted by SLPs. Assigned speech correction (SC) was performed by SAs at home and at local hospitals. Caregivers also gave SC at home 3-4 days a week. Networking of the Community-Based Speech Therapy Model significantly reduced the number of articulation errors for children with CLP at both word and sentence levels (mean difference = 6.91, 95% confidence interval = 4.15-9.67; mean difference = 5.36, 95% confidence interval = 2.99-7.73, respectively). Networking by Kosumphisai and Maha Sarakham within the KKU Community-Based Speech Therapy Model was a valid and efficient method for providing speech services for children with cleft palate and could be extended to any area in Thailand and to other developing countries that have similar contexts.

  10. Multichannel Speech Enhancement Based on Generalized Gamma Prior Distribution with Its Online Adaptive Estimation

    NASA Astrophysics Data System (ADS)

    Dat, Tran Huy; Takeda, Kazuya; Itakura, Fumitada

    We present a multichannel speech enhancement method based on MAP speech spectral magnitude estimation using a generalized gamma model of the speech prior distribution, where the model parameters are adapted from the actual noisy speech in a frame-by-frame manner. The use of a more general prior distribution with online adaptive estimation is shown to be effective for speech spectral estimation in noisy environments. Furthermore, multichannel information in the form of cross-channel statistics is shown to be useful for better adapting the prior distribution parameters to the actual observation, resulting in better performance of the speech enhancement algorithm. We tested the proposed algorithm on an in-car speech database and obtained significant improvements in speech recognition performance, particularly under non-stationary noise conditions such as music, air conditioning, and open windows.
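
    For reference, one common parameterization of a generalized gamma prior for the speech spectral magnitude a >= 0, together with the MAP criterion it enters, is sketched below; the exact functional form, parameterization, and frame-by-frame adaptation scheme used in the paper may differ.

```latex
% One common generalized gamma prior for the spectral magnitude a >= 0
% (shape parameters \gamma, \nu; rate parameter \beta); the paper's exact
% parameterization and its online adaptation may differ.
p(a) \;=\; \frac{\gamma\,\beta^{\nu}}{\Gamma(\nu)}\; a^{\gamma\nu - 1}
\exp\!\bigl(-\beta\, a^{\gamma}\bigr), \qquad a \ge 0,
\qquad
\hat{a}_{\mathrm{MAP}} \;=\; \arg\max_{a}\; p(y \mid a)\, p(a).
```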

  11. The perception of syllable affiliation of singleton stops in repetitive speech.

    PubMed

    de Jong, Kenneth J; Lim, Byung-Jin; Nagao, Kyoko

    2004-01-01

    Stetson (1951) noted that repeating singleton coda consonants at fast speech rates causes them to be perceived as onset consonants affiliated with a following vowel. The current study documents the perception of rate-induced resyllabification, as well as what temporal properties give rise to the perception of syllable affiliation. Stimuli were extracted from a previous study of repeated stop + vowel and vowel + stop syllables (de Jong, 2001a, 2001b). Forced-choice identification tasks show that slow repetitions are clearly distinguished. As speakers increase rate, they reach a point after which listeners disagree as to the affiliation of the stop. This pattern is found for voiced and voiceless consonants using different stimulus extraction techniques. Acoustic models of the identifications indicate that the sudden shift in syllabification occurs with the loss of an acoustic hiatus between successive syllables. Acoustic models of the fast-rate identifications indicate that various other qualities, such as consonant voicing, affect the probability that the consonants will be perceived as onsets. These results indicate a model of syllabic affiliation in which specific juncture-marking aspects of the signal dominate parsing, and in their absence other differences provide additional, weaker cues to syllabic affiliation.

  12. Is there still a TRACE of trace?

    NASA Astrophysics Data System (ADS)

    McClelland, James; Mirman, Daniel; Holt, Lori

    2003-04-01

    According to the TRACE model [McClelland and Elman, Cogn. Psychol. 18, 1-86 (1986)], speech recognition is an interactive activation process involving the integrated use of top-down (lexical) and bottom-up (acoustic) information. Although it is widely accepted that there are lexical influences on speech perception, there has been a disagreement over their exact nature. Two contested predictions of TRACE are that (a) lexical influences should delay or inhibit recognition of phonemes not consistent with lexical information and (b) a lexical influence on the identification of one phoneme can trigger compensation for co-articulation, affecting the identification of other phonemes. Others [Norris, McQueen, and Cutler, BBS 23, 299-370 (2000)] have argued that the predicted effects do not occur, taking this to support an alternative to the TRACE model in which lexical influences do not affect perception, but only a post-perceptual identification process. We re-examine the evidence on these points along with the recent finding that lexical information may lead to a lasting adjustment of category boundaries [McQueen, Norris, and Cutler, Psychonomics Abstract 255 (2001)]. Our analysis indicates that the existing evidence is completely consistent with TRACE, and we suggest additional research that will be necessary to resolve unanswered questions.

  13. Personal Verification/Identification via Analysis of the Peripheral ECG Leads: Influence of the Personal Health Status on the Accuracy

    PubMed Central

    Bortolan, Giovanni

    2015-01-01

    Traditional means for identity validation (PIN codes, passwords), and physiological and behavioral biometric characteristics (fingerprint, iris, and speech) are susceptible to hacker attacks and/or falsification. This paper presents a method for person verification/identification based on correlation of present-to-previous limb ECG leads: I (r I), II (r II), the first principal ECG component calculated from them (r PCA), and linear and nonlinear combinations between r I, r II, and r PCA. For the verification task, the one-to-one scenario is applied and threshold values for r I, r II, and r PCA and their combinations are derived. The identification task supposes a one-to-many scenario, and the tested subject is identified according to the maximal correlation with a previously recorded ECG in a database. The population-based ECG-ILSA database of 540 patients (147 healthy subjects, 175 patients with cardiac diseases, and 218 with hypertension) has been considered. In addition, a common reference PTB dataset (14 healthy individuals) with a short time interval between the two acquisitions has been taken into account. The results on the ECG-ILSA database were satisfactory with healthy people, and there was not a significant decrease for nonhealthy patients, demonstrating the robustness of the proposed method. With the PTB database, the method provides an identification accuracy of 92.9% and a verification sensitivity and specificity of 100% and 89.9%. PMID:26568954

  14. Personal Verification/Identification via Analysis of the Peripheral ECG Leads: Influence of the Personal Health Status on the Accuracy.

    PubMed

    Jekova, Irena; Bortolan, Giovanni

    2015-01-01

    Traditional means for identity validation (PIN codes, passwords), and physiological and behavioral biometric characteristics (fingerprint, iris, and speech) are susceptible to hacker attacks and/or falsification. This paper presents a method for person verification/identification based on correlation of present-to-previous limb ECG leads: I (r I), II (r II), the first principal ECG component calculated from them (r PCA), and linear and nonlinear combinations between r I, r II, and r PCA. For the verification task, the one-to-one scenario is applied and threshold values for r I, r II, and r PCA and their combinations are derived. The identification task supposes a one-to-many scenario, and the tested subject is identified according to the maximal correlation with a previously recorded ECG in a database. The population-based ECG-ILSA database of 540 patients (147 healthy subjects, 175 patients with cardiac diseases, and 218 with hypertension) has been considered. In addition, a common reference PTB dataset (14 healthy individuals) with a short time interval between the two acquisitions has been taken into account. The results on the ECG-ILSA database were satisfactory with healthy people, and there was not a significant decrease for nonhealthy patients, demonstrating the robustness of the proposed method. With the PTB database, the method provides an identification accuracy of 92.9% and a verification sensitivity and specificity of 100% and 89.9%.
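
    The one-to-many identification step described in these two records (selecting the enrolled subject with the maximal correlation to the present recording) can be sketched as follows; the beat templates, lead choice, and dimensions are placeholder assumptions rather than the paper's data.

```python
# Minimal sketch of one-to-many identification by maximal correlation between
# a present ECG lead segment and previously enrolled templates, in the spirit
# of the correlation features (r_I, r_II, r_PCA) described above. The
# enrollment data below are random placeholders.
import numpy as np

def pearson_r(x, y):
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

def identify(probe_beat, enrolled):
    """Return the enrolled subject whose template correlates most with the probe."""
    scores = {sid: pearson_r(probe_beat, tmpl) for sid, tmpl in enrolled.items()}
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(0)
templates = {f"subj{i}": rng.normal(size=300) for i in range(5)}  # lead-I beats
probe = templates["subj3"] + 0.1 * rng.normal(size=300)
best, _ = identify(probe, templates)
print(best)
```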

  15. [In Process Citation

    PubMed

    Ackermann; Mathiak

    1999-11-01

    Pure word deafness (auditory verbal agnosia) is characterized by an impairment of auditory comprehension, repetition of verbal material and writing to dictation, whereas spontaneous speech production and reading remain largely unaffected. Sometimes this syndrome is preceded by complete deafness (cortical deafness) of varying duration. Perception of vowels and suprasegmental features of verbal utterances (e.g., intonation contours) seems to be less disrupted than the processing of consonants and, therefore, might mediate residual auditory functions. Often, lip reading and/or a slowed speaking rate allow, within limits, compensation for speech comprehension deficits. Apart from a few exceptions, the available reports of pure word deafness documented a bilateral temporal lesion. In these instances, as a rule, identification of nonverbal (environmental) sounds, perception of music, temporal resolution of sequential auditory cues and/or spatial localization of acoustic events were compromised as well. The observed variable constellation of auditory signs and symptoms in central hearing disorders following bilateral temporal damage most probably reflects the multitude of functional maps at the level of the auditory cortices, each subserving, as documented in a variety of non-human species, the encoding of specific stimulus parameters. Thus, verbal/nonverbal auditory agnosia may be considered a paradigm of distorted "auditory scene analysis" (Bregman 1990) affecting both primitive and schema-based perceptual processes. It cannot be excluded, however, that disconnection of the Wernicke area from auditory input (Geschwind 1965) and/or an impairment of the suggested "phonetic module" (Liberman 1996) contribute to the observed deficits as well. Conceivably, these latter mechanisms underlie the rare cases of pure word deafness following a lesion restricted to the dominant hemisphere. Only a few instances of a rather isolated disruption of the discrimination/identification of nonverbal sound sources, in the presence of uncompromised speech comprehension, have been reported so far (nonverbal auditory agnosia). As a rule, unilateral right-sided damage has been found to be the relevant lesion.

  16. An automatic speech recognition system with speaker-independent identification support

    NASA Astrophysics Data System (ADS)

    Caranica, Alexandru; Burileanu, Corneliu

    2015-02-01

    The novelty of this work relies on the application of an open-source research software toolkit (CMU Sphinx) to train, build and evaluate a speech recognition system, with speaker-independent support, for voice-controlled hardware applications. Moreover, we propose to use the trained acoustic model to successfully decode offline voice commands on embedded hardware, such as a low-cost ARMv6 SoC, the Raspberry Pi. This type of single-board computer, mainly used for educational and research activities, can serve as a proof-of-concept software and hardware stack for low-cost voice automation systems.
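
    As an illustration of offline decoding with CMU Sphinx on such hardware, the sketch below uses the classic pocketsphinx Python bindings; the model paths are placeholders, this is not the authors' configuration, and the exact API differs between pocketsphinx versions.

```python
# Illustrative sketch of offline command decoding with the classic
# pocketsphinx Python bindings. Model and file paths are placeholders;
# the API shown here varies between pocketsphinx versions and is not the
# setup described in the paper.
from pocketsphinx import Decoder

config = Decoder.default_config()
config.set_string('-hmm', '/path/to/acoustic-model')      # placeholder path
config.set_string('-lm', '/path/to/language-model.lm')    # placeholder path
config.set_string('-dict', '/path/to/dictionary.dict')    # placeholder path
decoder = Decoder(config)

# Decode a raw 16 kHz, 16-bit mono recording of a voice command
with open('command.raw', 'rb') as f:                       # placeholder file
    decoder.start_utt()
    while True:
        buf = f.read(1024)
        if not buf:
            break
        decoder.process_raw(buf, False, False)
    decoder.end_utt()

hyp = decoder.hyp()
if hyp is not None:
    print(hyp.hypstr)                                      # recognized command
```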

  17. Evaluation of the importance of time-frequency contributions to speech intelligibility in noise

    PubMed Central

    Yu, Chengzhu; Wójcicki, Kamil K.; Loizou, Philipos C.; Hansen, John H. L.; Johnson, Michael T.

    2014-01-01

    Recent studies on binary masking techniques make the assumption that each time-frequency (T-F) unit contributes an equal amount to the overall intelligibility of speech. The present study demonstrated that the importance of each T-F unit to speech intelligibility varies in accordance with speech content. Specifically, T-F units are categorized into two classes, speech-present T-F units and speech-absent T-F units. Results indicate that the importance of each speech-present T-F unit to speech intelligibility is highly related to the loudness of its target component, while the importance of each speech-absent T-F unit varies according to the loudness of its masker component. Two types of mask errors are also considered, which include miss and false alarm errors. Consistent with previous work, false alarm errors are shown to be more harmful to speech intelligibility than miss errors when the mixture signal-to-noise ratio (SNR) is below 0 dB. However, the relative importance between the two types of error is conditioned on the SNR level of the input speech signal. Based on these observations, a mask-based objective measure, the loudness weighted hit-false, is proposed for predicting speech intelligibility. The proposed objective measure shows significantly higher correlation with intelligibility compared to two existing mask-based objective measures. PMID:24815280
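
    A loudness-weighted hit-minus-false-alarm score of the general kind proposed above can be sketched as follows; the specific weighting and loudness model of the paper's "loudness weighted hit-false" measure may differ, and all masks and loudness values here are synthetic placeholders.

```python
# Illustrative sketch of a loudness-weighted hit-minus-false-alarm score for
# comparing an estimated binary mask against an ideal binary mask. The exact
# weighting used in the paper's measure may differ; the loudness weights and
# masks below are placeholders.
import numpy as np

def weighted_hit_false(ideal_mask, est_mask, target_loudness, masker_loudness):
    ideal = ideal_mask.astype(bool)
    est = est_mask.astype(bool)
    hits = ideal & est                          # speech-present units kept
    false_alarms = (~ideal) & est               # speech-absent units let through
    hit_rate = (target_loudness[hits].sum() /
                max(target_loudness[ideal].sum(), 1e-12))
    fa_rate = (masker_loudness[false_alarms].sum() /
               max(masker_loudness[~ideal].sum(), 1e-12))
    return hit_rate - fa_rate

rng = np.random.default_rng(0)
shape = (64, 100)                               # (frequency, time) T-F units
ideal = rng.random(shape) > 0.5
est = ideal ^ (rng.random(shape) > 0.9)         # estimated mask with some errors
print(weighted_hit_false(ideal, est, rng.random(shape), rng.random(shape)))
```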

  18. Supporting culturally and linguistically diverse children with speech, language and communication needs: Overarching principles, individual approaches.

    PubMed

    Verdon, Sarah; McLeod, Sharynne; Wong, Sandie

    2015-01-01

    Speech-language pathologists (SLPs) are working with an increasing number of families from culturally and linguistically diverse backgrounds as the world's population continues to become more internationally mobile. The heterogeneity of these diverse populations makes it impossible to identify and document a one size fits all strategy for working with culturally and linguistically diverse families. This paper explores approaches to practice by SLPs identified as specialising in multilingual and multicultural practice in culturally and linguistically diverse contexts from around the world. Data were obtained from ethnographic observation of 14 sites in 5 countries on 4 continents. The sites included hospital settings, university clinics, school-based settings, private practices and Indigenous community-based services. There were 652 individual artefacts collected from the sites which included interview transcripts, photographs, videos, narrative reflections, informal and formal field notes. The data were analysed using Cultural-Historical Activity Theory (Engeström, 1987). From the analysis six overarching Principles of Culturally Competent Practice (PCCP) were identified. These were: (1) identification of culturally appropriate and mutually motivating therapy goals, (2) knowledge of languages and culture, (3) use of culturally appropriate resources, (4) consideration of the cultural, social and political context, (5) consultation with families and communities, and (6) collaboration between professionals. These overarching principles align with the six position statements developed by the International Expert Panel on Multilingual Children's Speech (2012) which aim to enhance the cultural competence of speech pathologists and their practice. The international examples provided in the current study demonstrate the individualised ways that these overarching principles are enacted in a range of different organisational, social, cultural and political contexts. Tensions experienced in enacting the principles are also discussed. This paper emphasises the potential for individual SLPs to enhance their practice by adopting these overarching principles to support the individual children and families in diverse contexts around the world. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. The Speech, Spatial and Qualities of Hearing Scale (SSQ)

    PubMed Central

    Gatehouse, Stuart; Noble, William

    2017-01-01

    The Speech, Spatial and Qualities of Hearing Scale (SSQ) is designed to measure a range of hearing disabilities across several domains. Particular attention is given to hearing speech in a variety of competing contexts, and to the directional, distance and movement components of spatial hearing. In addition, the abilities both to segregate sounds and to attend to simultaneous speech streams are assessed, reflecting the reality of hearing in the everyday world. Qualities of hearing experience include ease of listening, and the naturalness, clarity and identifiability of different speakers, different musical pieces and instruments, and different everyday sounds. Application of the SSQ to 153 new clinic clients prior to hearing aid fitting showed that the greatest difficulty was experienced with simultaneous speech streams, ease of listening, listening in groups and in noise, and judging distance and movement. SSQ ratings were compared with an independent measure of handicap. After differences in hearing level were controlled for, it was found that identification, attention and effort problems, as well as spatial hearing problems, feature prominently in the disability–handicap relationship, along with certain features of speech hearing. The results implicate aspects of temporal and spatial dynamics of hearing disability in the experience of handicap. The SSQ shows promise as an instrument for evaluating interventions of various kinds, particularly (but not exclusively) those that implicate binaural function. PMID:15035561

  20. Women's Speech/Men's Speech: Does Forensic Training Make a Difference?

    ERIC Educational Resources Information Center

    Larson, Suzanne; Vreeland, Amy L.

    A study of cross examination speeches of males and females was conducted to determine gender differences in intercollegiate debate. The theory base for gender differences in speech is closely tied to the analysis of dyadic conversation. It is based on the belief that women are less forceful and dominant in cross examination, and will exhibit…

  1. Perceptual Learning of Time-Compressed Speech: More than Rapid Adaptation

    PubMed Central

    Banai, Karen; Lavner, Yizhar

    2012-01-01

    Background: Time-compressed speech, a form of rapidly presented speech, is harder to comprehend than natural speech, especially for non-native speakers. Although it is possible to adapt to time-compressed speech after a brief exposure, it is not known whether additional perceptual learning occurs with further practice. Here, we ask whether multiday training on time-compressed speech yields more learning than that observed during the initial adaptation phase and whether the pattern of generalization following successful learning is different from that observed with initial adaptation only. Methodology/Principal Findings: Two groups of non-native Hebrew speakers were tested on five different conditions of time-compressed speech identification in two assessments conducted 10–14 days apart. Between those assessments, one group of listeners received five practice sessions on one of the time-compressed conditions. Between the two assessments, trained listeners improved significantly more than untrained listeners on the trained condition. Furthermore, the trained group generalized its learning to two untrained conditions in which different talkers presented the trained speech materials. In addition, when the performance of the non-native speakers was compared to that of a group of naïve native Hebrew speakers, performance of the trained group was equivalent to that of the native speakers on all conditions on which learning occurred, whereas performance of the untrained non-native listeners was substantially poorer. Conclusions/Significance: Multiday training on time-compressed speech results in significantly more perceptual learning than brief adaptation. Compared to previous studies of adaptation, the training-induced learning is more stimulus specific. Taken together, the perceptual learning of time-compressed speech appears to progress from an initial, rapid adaptation phase to a subsequent prolonged and more stimulus-specific phase. These findings are consistent with the predictions of the Reverse Hierarchy Theory of perceptual learning and suggest constraints on the use of perceptual-learning regimens during second language acquisition. PMID:23056592

  2. Temporal processing of speech in a time-feature space

    NASA Astrophysics Data System (ADS)

    Avendano, Carlos

    1997-09-01

    The performance of speech communication systems often degrades under realistic environmental conditions. Adverse environmental factors include additive noise sources, room reverberation, and transmission channel distortions. This work studies the processing of speech in the temporal-feature or modulation spectrum domain, aiming for alleviation of the effects of such disturbances. Speech reflects the geometry of the vocal organs, and the linguistically dominant component is in the shape of the vocal tract. At any given point in time, the shape of the vocal tract is reflected in the short-time spectral envelope of the speech signal. The rate of change of the vocal tract shape appears to be important for the identification of linguistic components. This rate of change, or the rate of change of the short-time spectral envelope can be described by the modulation spectrum, i.e. the spectrum of the time trajectories described by the short-time spectral envelope. For a wide range of frequency bands, the modulation spectrum of speech exhibits a maximum at about 4 Hz, the average syllabic rate. Disturbances often have modulation frequency components outside the speech range, and could in principle be attenuated without significantly affecting the range with relevant linguistic information. Early efforts for exploiting the modulation spectrum domain (temporal processing), such as the dynamic cepstrum or the RASTA processing, used ad hoc designed processing and appear to be suboptimal. As a major contribution, in this dissertation we aim for a systematic data-driven design of temporal processing. First we analytically derive and discuss some properties and merits of temporal processing for speech signals. We attempt to formalize the concept and provide a theoretical background which has been lacking in the field. In the experimental part we apply temporal processing to a number of problems including adaptive noise reduction in cellular telephone environments, reduction of reverberation for speech enhancement, and improvements on automatic recognition of speech degraded by linear distortions and reverberation.
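
    The core idea of temporal (modulation-domain) processing, band-pass filtering the time trajectory of a spectral feature so that modulation frequencies outside the speech-dominant range are attenuated, can be sketched as follows; the filter order and band edges are illustrative choices, not the data-driven designs developed in the dissertation.

```python
# Minimal sketch of temporal (modulation-domain) processing: band-pass
# filtering the time trajectory of a log-energy feature in one frequency
# band, so that modulation components outside the speech-dominant range
# (roughly 1-16 Hz, peaking near 4 Hz) are attenuated. Filter values are
# illustrative, not the data-driven filters derived in the dissertation.
import numpy as np
from scipy.signal import butter, filtfilt

frame_rate = 100.0                      # feature frames per second (assumption)
b, a = butter(2, [1.0, 16.0], btype="bandpass", fs=frame_rate)

def filter_trajectory(log_energy_track):
    """Apply the modulation-frequency band-pass filter to one band's trajectory."""
    return filtfilt(b, a, log_energy_track)

rng = np.random.default_rng(0)
trajectory = rng.normal(size=500)       # placeholder log-energy trajectory (5 s)
smoothed = filter_trajectory(trajectory)
```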

  3. Application of artificial intelligence principles to the analysis of "crazy" speech.

    PubMed

    Garfield, D A; Rapp, C

    1994-04-01

    Artificial intelligence computer simulation methods can be used to investigate psychotic or "crazy" speech. Here, symbolic reasoning algorithms establish semantic networks that schematize speech. These semantic networks consist of two main structures: case frames and object taxonomies. Node-based reasoning rules apply to object taxonomies and pathway-based reasoning rules apply to case frames. Normal listeners may recognize speech as "crazy talk" based on violations of node- and pathway-based reasoning rules. In this article, three separate segments of schizophrenic speech illustrate violations of these rules. This artificial intelligence approach is compared and contrasted with other neurolinguistic approaches and is discussed as a conceptual link between neurobiological and psychodynamic understandings of psychopathology.
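
    A toy illustration of the symbolic structures described above (an object taxonomy, a case frame, and a node-based reasoning rule) is given below; the categories and the rule are invented for illustration and are not taken from the paper.

```python
# Toy sketch of an object taxonomy plus a case frame, with a node-based
# rule that checks whether the AGENT filler belongs to an allowed category.
# The categories and the rule are illustrative assumptions.
TAXONOMY = {
    "doctor": "person",
    "person": "animate",
    "stone": "inanimate",
}

def is_a(node, category):
    """Follow taxonomy links upward to test category membership."""
    while node is not None:
        if node == category:
            return True
        node = TAXONOMY.get(node)
    return False

def check_case_frame(frame):
    """Node-based rule: the AGENT of 'speak' must be animate."""
    if frame["verb"] == "speak" and not is_a(frame["agent"], "animate"):
        return "violation: inanimate agent of 'speak'"
    return "ok"

print(check_case_frame({"verb": "speak", "agent": "doctor"}))   # ok
print(check_case_frame({"verb": "speak", "agent": "stone"}))    # violation
```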

  4. The impact of workplace factors on evidence-based speech-language pathology practice for children with autism spectrum disorders.

    PubMed

    Cheung, Gladys; Trembath, David; Arciuli, Joanne; Togher, Leanne

    2013-08-01

    Although researchers have examined barriers to implementing evidence-based practice (EBP) at the level of the individual, little is known about the effects workplaces have on speech-language pathologists' implementation of EBP. The aim of this study was to examine the impact of workplace factors on the use of EBP amongst speech-language pathologists who work with children with Autism Spectrum Disorder (ASD). This study sought to (a) explore views about EBP amongst speech-language pathologists who work with children with ASD, (b) identify workplace factors which, in the participants' opinions, acted as barriers or enablers to their provision of evidence-based speech-language pathology services, and (c) examine whether or not speech-language pathologists' responses to workplace factors differed based on the type of workplace or their years of experience. A total of 105 speech-language pathologists from across Australia completed an anonymous online questionnaire. The results indicate that, although the majority of speech-language pathologists agreed that EBP is necessary, they experienced barriers to their implementation of EBP including workplace culture and support, lack of time, cost of EBP, and the availability and accessibility of EBP resources. The barriers reported by speech-language pathologists were similar, regardless of their workplace (private practice vs organization) and years of experience.

  5. Long short-term memory for speaker generalization in supervised speech separation

    PubMed Central

    Chen, Jitong; Wang, DeLiang

    2017-01-01

    Speech separation can be formulated as learning to estimate a time-frequency mask from acoustic features extracted from noisy speech. For supervised speech separation, generalization to unseen noises and unseen speakers is a critical issue. Although deep neural networks (DNNs) have been successful in noise-independent speech separation, DNNs are limited in modeling a large number of speakers. To improve speaker generalization, a separation model based on long short-term memory (LSTM) is proposed, which naturally accounts for temporal dynamics of speech. Systematic evaluation shows that the proposed model substantially outperforms a DNN-based model on unseen speakers and unseen noises in terms of objective speech intelligibility. Analyzing LSTM internal representations reveals that LSTM captures long-term speech contexts. It is also found that the LSTM model is more advantageous for low-latency speech separation and it, without future frames, performs better than the DNN model with future frames. The proposed model represents an effective approach for speaker- and noise-independent speech separation. PMID:28679261
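
    A minimal sketch of an LSTM-based mask estimator of the kind described above is shown below; the feature and mask dimensions, network size, and training target are illustrative assumptions rather than the configuration evaluated in the study.

```python
# Minimal sketch of an LSTM that maps a sequence of acoustic feature frames
# to a time-frequency mask, in the spirit of the separation model described
# above. Dimensions and training details are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMMaskEstimator(nn.Module):
    def __init__(self, feat_dim=64, hidden=256, n_freq=161):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_freq)

    def forward(self, features):                 # (batch, time, feat_dim)
        h, _ = self.lstm(features)
        return torch.sigmoid(self.out(h))        # mask values in [0, 1]

model = LSTMMaskEstimator()
features = torch.randn(8, 200, 64)               # placeholder noisy features
mask = model(features)                            # (8, 200, 161)
# The estimated mask would be applied to the noisy spectrogram and the
# network trained with, e.g., an MSE loss against an ideal ratio mask.
```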

  6. Relationship between perceptual learning in speech and statistical learning in younger and older adults

    PubMed Central

    Neger, Thordis M.; Rietveld, Toni; Janse, Esther

    2014-01-01

    Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly. PMID:25225475

  7. Relationship between perceptual learning in speech and statistical learning in younger and older adults.

    PubMed

    Neger, Thordis M; Rietveld, Toni; Janse, Esther

    2014-01-01

    Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.

  8. Speech enhancement based on modified phase-opponency detectors

    NASA Astrophysics Data System (ADS)

    Deshmukh, Om D.; Espy-Wilson, Carol Y.

    2005-09-01

    A speech enhancement algorithm based on a neural model was presented by Deshmukh et al. [149th meeting of the Acoustical Society of America, 2005]. The algorithm consists of a bank of Modified Phase Opponency (MPO) filter pairs tuned to different center frequencies. This algorithm is able to enhance salient spectral features in speech signals even at low signal-to-noise ratios. However, the algorithm introduces musical noise and sometimes misses a spectral peak that is close in frequency to a stronger spectral peak. A refinement in the design of the MPO filters was recently made that takes advantage of the falling spectrum of the speech signal in sonorant regions. The modified set of filters leads to better separation of the noise and speech signals, and more accurate enhancement of spectral peaks. The improvements also lead to a significant reduction in musical noise. Continuity algorithms based on the properties of speech signals are used to further reduce the musical-noise effect. The efficiency of the proposed method in enhancing the speech signal when the level of the background noise is fluctuating will be demonstrated. The performance of the improved speech enhancement method will be compared with various spectral subtraction-based methods. [Work supported by NSF BCS0236707.]

  9. Development of a speech autocuer

    NASA Astrophysics Data System (ADS)

    Bedles, R. L.; Kizakvich, P. N.; Lawson, D. T.; McCartney, M. L.

    1980-12-01

    A wearable, visually based prosthesis for the deaf based upon the proven method for removing lipreading ambiguity known as cued speech was fabricated and tested. Both software and hardware developments are described, including a microcomputer, display, and speech preprocessor.

  10. Development of a speech autocuer

    NASA Technical Reports Server (NTRS)

    Bedles, R. L.; Kizakvich, P. N.; Lawson, D. T.; Mccartney, M. L.

    1980-01-01

    A wearable, visually based prosthesis for the deaf based upon the proven method for removing lipreading ambiguity known as cued speech was fabricated and tested. Both software and hardware developments are described, including a microcomputer, display, and speech preprocessor.

  11. Job Stress of School-Based Speech-Language Pathologists

    ERIC Educational Resources Information Center

    Harris, Stephanie Ferney; Prater, Mary Anne; Dyches, Tina Taylor; Heath, Melissa Allen

    2009-01-01

    Stress and burnout contribute significantly to the shortages of school-based speech-language pathologists (SLPs). At the request of the Utah State Office of Education, the researchers measured the stress levels of 97 school-based SLPs using the "Speech-Language Pathologist Stress Inventory." Results indicated that participants' emotional-fatigue…

  12. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex

    PubMed Central

    Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are composed of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which, in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198
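
    As an illustration of the stimulus construction described above, the sketch below builds two harmonic-complex vowels whose harmonics are integer multiples of different F0s and whose spectral envelopes carry rough /a/ and /i/ formant peaks, then mixes them into a double vowel. The F0 values, formant frequencies, and bandwidths are illustrative placeholders, not the stimulus parameters used in the study.

```python
import numpy as np

def vowel(f0, formants, dur=0.5, fs=16000):
    """Crude harmonic-complex vowel: harmonics of f0 weighted by a
    resonance-shaped spectral envelope (formant center/bandwidth pairs in Hz)."""
    t = np.arange(int(dur * fs)) / fs
    y = np.zeros_like(t)
    for k in range(1, int(0.45 * fs / f0)):
        f = k * f0
        gain = sum(1.0 / (1.0 + ((f - fc) / bw) ** 2) for fc, bw in formants)
        y += gain * np.sin(2 * np.pi * f * t)
    return y / np.max(np.abs(y))

# Two spectrally overlapping vowels with different F0s, presented concurrently.
a = vowel(100, [(730, 90), (1090, 110), (2440, 170)])   # rough /a/ formants
i = vowel(125, [(270, 60), (2290, 110), (3010, 170)])   # rough /i/ formants
double_vowel = (a + i) / 2.0
```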

  13. Towards Contactless Silent Speech Recognition Based on Detection of Active and Visible Articulators Using IR-UWB Radar

    PubMed Central

    Shin, Young Hoon; Seo, Jiwon

    2016-01-01

    People with hearing or speaking disabilities are deprived of the benefits of conventional speech recognition technology because it is based on acoustic signals. Recent research has focused on silent speech recognition systems that are based on the motions of a speaker’s vocal tract and articulators. Because most silent speech recognition systems use contact sensors that are very inconvenient to users or optical systems that are susceptible to environmental interference, a contactless and robust solution is hence required. Toward this objective, this paper presents a series of signal processing algorithms for a contactless silent speech recognition system using an impulse radio ultra-wide band (IR-UWB) radar. The IR-UWB radar is used to remotely and wirelessly detect motions of the lips and jaw. In order to extract the necessary features of lip and jaw motions from the received radar signals, we propose a feature extraction algorithm. The proposed algorithm noticeably improved speech recognition performance compared to the existing algorithm during our word recognition test with five speakers. We also propose a speech activity detection algorithm to automatically select speech segments from continuous input signals. Thus, speech recognition processing is performed only when speech segments are detected. Our testbed consists of commercial off-the-shelf radar products, and the proposed algorithms are readily applicable without designing specialized radar hardware for silent speech processing. PMID:27801867

  14. Towards Contactless Silent Speech Recognition Based on Detection of Active and Visible Articulators Using IR-UWB Radar.

    PubMed

    Shin, Young Hoon; Seo, Jiwon

    2016-10-29

    People with hearing or speaking disabilities are deprived of the benefits of conventional speech recognition technology because it is based on acoustic signals. Recent research has focused on silent speech recognition systems that are based on the motions of a speaker's vocal tract and articulators. Because most silent speech recognition systems use contact sensors that are very inconvenient to users or optical systems that are susceptible to environmental interference, a contactless and robust solution is hence required. Toward this objective, this paper presents a series of signal processing algorithms for a contactless silent speech recognition system using an impulse radio ultra-wide band (IR-UWB) radar. The IR-UWB radar is used to remotely and wirelessly detect motions of the lips and jaw. In order to extract the necessary features of lip and jaw motions from the received radar signals, we propose a feature extraction algorithm. The proposed algorithm noticeably improved speech recognition performance compared to the existing algorithm during our word recognition test with five speakers. We also propose a speech activity detection algorithm to automatically select speech segments from continuous input signals. Thus, speech recognition processing is performed only when speech segments are detected. Our testbed consists of commercial off-the-shelf radar products, and the proposed algorithms are readily applicable without designing specialized radar hardware for silent speech processing.

  15. Reading in Subjects with an Oral Cleft: Speech, Hearing and Neuropsychological Skills

    PubMed Central

    Conrad, Amy L.; McCoy, Thomasin E.; DeVolder, Ian; Richman, Lynn C.; Nopoulos, Peg

    2014-01-01

    Objective: Evaluate speech, hearing, and neuropsychological correlates to reading among children, adolescents and young adults with non-syndromic cleft of the lip and/or palate (NSCL/P). Method: All testing was completed in one visit at a Midwestern university hospital. Subjects in both the NSCL/P (n = 80) and control group (n = 62) ranged in age from 7 to 26 years (average age = 17.60 and 17.66, respectively). Subjects completed a battery of standardized tests evaluating intelligence, neuropsychological skills, and word reading. Subjects with NSCL/P also underwent speech assessment and past audiology records were evaluated. Results: After controlling for age and SES, subjects with cleft performed significantly worse on a test of word reading. For subjects with cleft, word reading deficits were not associated with measures of speech or hearing, but were correlated with impairments in auditory memory. Conclusions: These findings show poorer reading among subjects with NSCL/P compared to those without. Further work needs to focus on correlates of reading among subjects with cleft to allow early identification and appropriate intervention/accommodation for those at risk. PMID:24188114

  16. Atypical preference for infant-directed speech as an early marker of autism spectrum disorders? A literature review and directions for further research.

    PubMed

    Filipe, Marisa G; Watson, Linda; Vicente, Selene G; Frota, Sónia

    2018-01-01

    Autism spectrum disorders (ASD) refer to a complex group of neurodevelopmental disorders causing difficulties with communication and interpersonal relationships, as well as restricted and repetitive behaviours and interests. As early identification, diagnosis, and intervention provide better long-term outcomes, early markers of ASD have gained increased research attention. This review examines evidence that auditory processing enhanced by social interest, in particular auditory preference of speech directed towards infants and young children (i.e. infant-directed speech - IDS), may be an early marker of risk for ASD. Although this review provides evidence for IDS preference as, indeed, a potential early marker of ASD, the explanation for differences in IDS processing among children with ASD versus other children remains unclear, as are the implications of these impairments for later social-communicative development. Therefore, it is crucial to explore atypicalities in IDS processing early in development and to understand whether preferential listening to specific types of speech sounds in the first years of life may help to predict impairments in social and language development.

  17. Identification of the Transcriptional Targets of FOXP2, a Gene Linked to Speech and Language, in Developing Human Brain

    PubMed Central

    Spiteri, Elizabeth; Konopka, Genevieve; Coppola, Giovanni; Bomar, Jamee; Oldham, Michael; Ou, Jing; Vernes, Sonja C.; Fisher, Simon E.; Ren, Bing; Geschwind, Daniel H.

    2007-01-01

    Mutations in FOXP2, a member of the forkhead family of transcription factor genes, are the only known cause of developmental speech and language disorders in humans. To date, there are no known targets of human FOXP2 in the nervous system. The identification of FOXP2 targets in the developing human brain, therefore, provides a unique tool with which to explore the development of human language and speech. Here, we define FOXP2 targets in human basal ganglia (BG) and inferior frontal cortex (IFC) by use of chromatin immunoprecipitation followed by microarray analysis (ChIP-chip) and validate the functional regulation of targets in vitro. ChIP-chip identified 285 FOXP2 targets in fetal human brain; statistically significant overlap of targets in BG and IFC indicates a core set of 34 transcriptional targets of FOXP2. We identified targets specific to IFC or BG that were not observed in lung, suggesting important regional and tissue differences in FOXP2 activity. Many target genes are known to play critical roles in specific aspects of central nervous system patterning or development, such as neurite outgrowth, as well as plasticity. Subsets of the FOXP2 transcriptional targets are either under positive selection in humans or differentially expressed between human and chimpanzee brain. This is the first ChIP-chip study to use human brain tissue, making the FOXP2-target genes identified in these studies important to understanding the pathways regulating speech and language in the developing human brain. These data provide the first insight into the functional network of genes directly regulated by FOXP2 in human brain and by evolutionary comparisons, highlighting genes likely to be involved in the development of human higher-order cognitive processes. PMID:17999357

  18. Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces

    PubMed Central

    Bocquelet, Florent; Hueber, Thomas; Girin, Laurent; Savariaux, Christophe; Yvert, Blaise

    2016-01-01

    Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer producing intelligible speech in real-time with a reasonable number of control parameters. We present here an articulatory-based speech synthesizer that can be controlled in real-time for future BCI applications. This synthesizer converts movements of the main speech articulators (tongue, jaw, velum, and lips) into intelligible speech. The articulatory-to-acoustic mapping is performed using a deep neural network (DNN) trained on electromagnetic articulography (EMA) data recorded on a reference speaker synchronously with the produced speech signal. This DNN is then used in both offline and online modes to map the position of sensors glued on different speech articulators into acoustic parameters that are further converted into an audio signal using a vocoder. In offline mode, highly intelligible speech could be obtained as assessed by perceptual evaluation performed by 12 listeners. Then, to anticipate future BCI applications, we further assessed the real-time control of the synthesizer by both the reference speaker and new speakers, in a closed-loop paradigm using EMA data recorded in real time. A short calibration period was used to compensate for differences in sensor positions and articulatory differences between new speakers and the reference speaker. We found that real-time synthesis of vowels and consonants was possible with good intelligibility. In conclusion, these results open to future speech BCI applications using such articulatory-based speech synthesizer. PMID:27880768
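
    The core articulatory-to-acoustic mapping described above can be sketched as a frame-wise regression problem. The sketch below uses scikit-learn's MLPRegressor with random placeholder data standing in for EMA features and vocoder parameters; the feature dimensions, network size, and any context windowing are assumptions, not the architecture reported in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical shapes: EMA gives x/y coordinates for a handful of sensors
# (tongue, jaw, velum, lips) per frame; the acoustic side is a vector of
# vocoder parameters per frame.
n_frames, n_ema, n_acoustic = 20000, 12, 25
ema = np.random.randn(n_frames, n_ema)            # stand-in for recorded EMA features
acoustic = np.random.randn(n_frames, n_acoustic)  # stand-in for vocoder parameters

x_scaler, y_scaler = StandardScaler().fit(ema), StandardScaler().fit(acoustic)
dnn = MLPRegressor(hidden_layer_sizes=(256, 256, 256), max_iter=50)  # small for the sketch
dnn.fit(x_scaler.transform(ema), y_scaler.transform(acoustic))

def map_frame(ema_frame):
    """Online use: one EMA frame in, one frame of vocoder parameters out."""
    z = x_scaler.transform(ema_frame.reshape(1, -1))
    return y_scaler.inverse_transform(dnn.predict(z))[0]
```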

  19. Identification of Teaching Behaviors Which Predict Success for Mainstreamed Students.

    ERIC Educational Resources Information Center

    Larrivee, Barbara; Algina, James

    The final phase of a study investigating effective teaching behaviors for mainstreamed students involved 118 elementary teachers. Teachers provided information on mainstreamed students and a sample of students was randomly selected to represent classification categories (learning disabilities, behavior disorders, speech impairments, and hearing…

  20. Voice Disorders in School Children: Clinical Management.

    ERIC Educational Resources Information Center

    Garbee, Frederick E., Ed.

    Five papers presented at two inservice institutes for school speech and language pathologists delineated identification, remediation, and management of voice disorders in school children. Keynote remarks emphasized the intimate relationship between children's voices and their affective behavior and psychological needs, and thus, the importance of…

  1. The Effects of the Literal Meaning of Emotional Phrases on the Identification of Vocal Emotions.

    PubMed

    Shigeno, Sumi

    2018-02-01

    This study investigates the discrepancy between the literal emotional content of speech and emotional tone in the identification of speakers' vocal emotions in both the listeners' native language (Japanese), and in an unfamiliar language (random-spliced Japanese). Both experiments involve a "congruent condition," in which the emotion contained in the literal meaning of speech (words and phrases) was compatible with vocal emotion, and an "incongruent condition," in which these forms of emotional information were discordant. Results for Japanese indicated that performance in identifying emotions did not differ significantly between the congruent and incongruent conditions. However, the results for random-spliced Japanese indicated that vocal emotion was correctly identified more often in the congruent than in the incongruent condition. The different results for Japanese and random-spliced Japanese suggested that the literal meaning of emotional phrases influences the listener's perception of the speaker's emotion, and that Japanese participants could infer speakers' intended emotions in the incongruent condition.

  2. Identification of impaired hearing in early childhood.

    PubMed Central

    Wong, D; Shah, C P

    1979-01-01

    Although the incidence of congenital deafness is high, routine neonatal screening for this problem is not practised, and early identification of congenital or early acquired deafness is relatively rare. Delaying therapy until a child is 3 or more years old severely limits speech development, language acquisition and learning. The commonest causes of delay in diagnosis are the refusal of physicians to listen to the parents' observations, their failure to screen children for hearing and speech problems, and their reluctance to arrange prompt referral for audiologic assessment. Diagnostic delay occurs even though half the children who have impaired hearing are known to be at increased risk. A plea is made for the setting up of a register of infants known to be at risk for impaired hearing. First-contact physicians should be alert to the possibility of hearing problems, particularly in children at high risk. Screening methods for use by nonspecialist practitioners are outlined. PMID:387194

  3. Temporal dynamics and the identification of musical key.

    PubMed

    Farbood, Morwaread Mary; Marcus, Gary; Poeppel, David

    2013-08-01

    A central process in music cognition involves the identification of key; however, little is known about how listeners accomplish this task in real time. This study derives from work that suggests overlap between the neural and cognitive resources underlying the analyses of both music and speech and is the first, to our knowledge, to explore the timescales at which the brain infers musical key. We investigated the temporal psychophysics of key-finding over a wide range of tempi using melodic sequences with strong structural cues, where statistical information about overall key profile was ambiguous. Listeners were able to provide robust judgments within specific limits, at rates as high as 400 beats per minute (bpm; ∼7 Hz) and as low as 30 bpm (0.5 Hz), but not outside those bounds. These boundaries on reliable performance show that the process of key-finding is restricted to timescales that are closely aligned with beat induction and speech processing.

  4. Significance of parametric spectral ratio methods in detection and recognition of whispered speech

    NASA Astrophysics Data System (ADS)

    Mathur, Arpit; Reddy, Shankar M.; Hegde, Rajesh M.

    2012-12-01

    In this article the significance of a new parametric spectral ratio method that can be used to detect whispered speech segments within normally phonated speech is described. Adaptation methods based on the maximum likelihood linear regression (MLLR) are then used to realize a mismatched train-test style speech recognition system. This proposed parametric spectral ratio method computes a ratio spectrum of the linear prediction (LP) and the minimum variance distortionless response (MVDR) methods. The smoothed ratio spectrum is then used to detect whispered segments of speech within neutral speech segments effectively. The proposed LP-MVDR ratio method exhibits robustness at different SNRs as indicated by the whisper diarization experiments conducted on the CHAINS and the cell phone whispered speech corpora. The proposed method also performs reasonably better than the conventional methods for whisper detection. In order to integrate the proposed whisper detection method into a conventional speech recognition engine with minimal changes, adaptation methods based on the MLLR are used herein. The hidden Markov models corresponding to neutral mode speech are adapted to the whispered mode speech data in the whispered regions as detected by the proposed ratio method. The performance of this method is first evaluated on whispered speech data from the CHAINS corpus. The second set of experiments is conducted on the cell phone corpus of whispered speech. This corpus is collected using a setup that is used commercially for handling public transactions. The proposed whisper speech recognition system exhibits reasonably better performance when compared to several conventional methods. The results shown indicate the possibility of a whispered speech recognition system for cell phone based transactions.
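
    The ratio spectrum at the heart of the method contrasts the peak-emphasising LP envelope with the smoother MVDR (Capon) envelope. A numpy sketch of both spectra and their smoothed ratio follows; the prediction order, FFT resolution, and smoothing length are placeholder values, and the frame-level decision logic (thresholding the smoothed ratio to label whispered segments) is not reproduced here.

```python
import numpy as np
from scipy.linalg import toeplitz, solve
from scipy.signal import freqz

def autocorr(x, order):
    """Biased autocorrelation estimates r[0..order]."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    return r / len(x)

def lp_spectrum(x, order=12, n_fft=512):
    """All-pole (LP) power spectrum from the normal equations."""
    r = autocorr(x, order)
    a = solve(toeplitz(r[:-1]), -r[1:])        # LP coefficients a_1..a_p
    w, h = freqz(1.0, np.concatenate(([1.0], a)), worN=n_fft)
    return np.abs(h) ** 2

def mvdr_spectrum(x, order=12, n_fft=512):
    """MVDR (Capon) power spectrum: 1 / (e^H R^-1 e)."""
    R = toeplitz(autocorr(x, order))           # (order+1) x (order+1) matrix
    R_inv = np.linalg.inv(R + 1e-8 * np.eye(order + 1))
    k = np.arange(order + 1)
    spec = np.empty(n_fft)
    for i, w in enumerate(np.linspace(0, np.pi, n_fft)):
        e = np.exp(1j * w * k)
        spec[i] = 1.0 / np.real(e.conj() @ R_inv @ e)
    return spec

def lp_mvdr_ratio(x, order=12, n_fft=512, smooth=11):
    """Smoothed LP/MVDR ratio spectrum used as a per-frame whisper cue."""
    ratio = lp_spectrum(x, order, n_fft) / (mvdr_spectrum(x, order, n_fft) + 1e-12)
    return np.convolve(ratio, np.ones(smooth) / smooth, mode="same")
```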

  5. Reliability of Interaural Time Difference-Based Localization Training in Elderly Individuals with Speech-in-Noise Perception Disorder.

    PubMed

    Delphi, Maryam; Lotfi, M-Yones; Moossavi, Abdollah; Bakhshi, Enayatollah; Banimostafa, Maryam

    2017-09-01

    Previous studies have shown that interaural-time-difference (ITD) training can improve localization ability. Surprisingly little is, however, known about localization training vis-à-vis speech perception in noise based on interaural time difference in the envelope (ITD ENV). We sought to investigate the reliability of an ITD ENV-based training program in speech-in-noise perception among elderly individuals with normal hearing and speech-in-noise disorder. The present interventional study was performed during 2016. Sixteen elderly men between 55 and 65 years of age with the clinical diagnosis of normal hearing up to 2000 Hz and speech-in-noise perception disorder participated in this study. The training localization program was based on changes in ITD ENV. In order to evaluate the reliability of the training program, we performed speech-in-noise tests before the training program, immediately afterward, and then at 2 months' follow-up. The reliability of the training program was analyzed using the Friedman test and the SPSS software. Significant statistical differences were shown in the mean scores of speech-in-noise perception between the 3 time points (P=0.001). The results also indicated no difference in the mean scores of speech-in-noise perception between the 2 time points of immediately after the training program and 2 months' follow-up (P=0.212). The present study showed the reliability of an ITD ENV-based localization training in elderly individuals with speech-in-noise perception disorder.

  6. Referred speech-language and hearing complaints in the western region of São Paulo, Brazil

    PubMed Central

    Samelli, Alessandra Giannella; Rondon, Silmara; Oliver, Fátima Correa; Junqueira, Simone Rennó; Molini-Avejonas, Daniela Regina

    2014-01-01

    OBJECTIVE: The aim of this study was to characterize the epidemiological profile of the population attending primary health care units in the western region of the city of São Paulo, Brazil, highlighting referred speech-language and hearing complaints. METHOD: This investigation was a cross-sectional observational study conducted in primary health care units. Household surveys were conducted and information was obtained from approximately 2602 individuals, including (but not limited to) data related to education, family income, health issues, access to public services and access to health services. The speech-language and hearing complaints were identified from specific questions. RESULTS: Our results revealed that the populations participating in the survey were heterogeneous in terms of their demographic and economic characteristics. The prevalence of referred speech-language and hearing complaints in this population was 10%, and only half the users of the public health system in the studied region who had complaints were monitored or received specific treatment. CONCLUSIONS: The results demonstrate the importance of using population surveys to identify speech-language and hearing complaints at the level of primary health care. Moreover, these findings highlight the need to reorganize the speech-language pathology and audiology service in the western region of São Paulo, as well as the need to improve the Family Health Strategy in areas that do not have a complete coverage, in order to expand and improve the territorial diagnostics and the speech-language pathology and audiology actions related to the prevention, identification, and rehabilitation of human communication disorders. PMID:24964306

  7. [Peripheral nervous system and speech disorders].

    PubMed

    Ferri, Lluís

    2014-02-24

    Disorders affecting the lower motor neurons in childhood, with a congenital or acquired aetiology, give rise to difficulties in neuromotor response and, therefore, motor disorders affecting speech in a period that is especially critical for the development of language. The low incidence of this pathology, its comorbidity with other brain conditions and its uncertain prognosis make it a particularly interesting area of study. The purpose of this work is to review the motor disorders affecting speech in flaccid dysarthria, together with its functional evaluation and speech therapy interventions. The study aims to carry out the clinical characterisation of the disorders affecting verbal production of peripheral origin, and more specifically flaccid dysarthria and its respiratory, phonatory, resonance, articulatory and prosodic manifestations. The analysis then outlines the functional evaluation, and lines of intervention for its treatment are proposed. The clinical manifestations of flaccid dysarthria are very heterogeneous and range from very slight difficulties in articulation to severe disorders that seriously limit the capacity for verbal expression. In most cases, a functional examination yields valuable findings for its identification and classification, for determining the need for complementary evaluations and for establishing the most suitable programme of speech therapy. The guided participation of the family and the interdisciplinary approach are factors that play a decisive role in improving these processes.

  8. Status and progress of studies on the nature of speech, instrumentation for its investigation and practical applications

    NASA Astrophysics Data System (ADS)

    Liberman, A. M.

    1983-09-01

    This report is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. Manuscripts cover the following topics: The association between comprehension of spoken sentences and early reading ability: The role of phonetic representation; Phonetic coding and order memory in relation to reading proficiency: A comparison of short-term memory for temporal and spatial order information; Exploring the oral and written language errors made by language disabled children; Perceiving phonetic events; Converging evidence in support of common dynamical principles for speech and movement coordination; Phase transitions and critical behavior in human bimanual coordination; Timing and coarticulation for alveolo-palatals and sequences of alveolar +J in Catalan; V-to-C coarticulation in Catalan VCV sequences: An articulatory and acoustical study; Prosody and the /S/-/c/ distinction; Intersections of tone and intonation in Thai; Simultaneous measurements of vowels produced by a hearing-impaired speaker; Extending formant transitions may not improve aphasics' perception of stop consonant place of articulation; Against a role of chirp identification in duplex perception; Further evidence for the role of relative timing in speech: A reply to Barry; Review (Phonological intervention: Concepts and procedures); and Review (Temporal variables in speech).

  9. Cochlear Implantation in Siblings With Refsum's Disease.

    PubMed

    Stähr, Kerstin; Kuechler, Alma; Gencik, Martin; Arnolds, Judith; Dendy, Meaghan; Lang, Stephan; Arweiler-Harbeck, Diana

    2017-08-01

    Whether severe hearing loss in Refsum's syndrome is caused by cochlear impairment or retrocochlear degeneration remains unclear. This case report aims to investigate hearing performance before and after cochlear implantation to shed light on this question. Also, identification of new mutations causing Refsum's syndrome would be helpful in generating additional means of diagnosis. A family of 4 individuals was subjected to genetic testing. Two siblings (56 and 61 years old) suffered from severe hearing and vision loss and received bilateral cochlear implants. Genetic analysis, audiological outcome, and clinical examinations were performed. One new mutation in the PHYH gene (c.768del63bp) causing Refsum's disease was found. Preoperative distortion product otoacoustic emissions (DPOAEs) were absent. Postoperative speech perception in the Freiburger speech test was 100% for bisyllabic words and 85% (patient No. 1) and 65% (patient No. 2), respectively, for monosyllabic words. Five years after implantation, speech perception remained stable for bisyllabic words but showed decreasing capabilities for monosyllabic words. A new mutation causing Refsum's disease is presented. Cochlear implantation in case of severe hearing loss leads to an improvement in speech perception and should be recommended for patients with Refsum's disease, especially when the hearing loss is combined with a severe loss of vision. A decrease in speech perception at long-term follow-up could indicate additional retrocochlear degeneration.

  10. Speech Processing to Improve the Perception of Speech in Background Noise for Children With Auditory Processing Disorder and Typically Developing Peers.

    PubMed

    Flanagan, Sheila; Zorilă, Tudor-Cătălin; Stylianou, Yannis; Moore, Brian C J

    2018-01-01

    Auditory processing disorder (APD) may be diagnosed when a child has listening difficulties but has normal audiometric thresholds. For adults with normal hearing and with mild-to-moderate hearing impairment, an algorithm called spectral shaping with dynamic range compression (SSDRC) has been shown to increase the intelligibility of speech when background noise is added after the processing. Here, we assessed the effect of such processing using 8 children with APD and 10 age-matched control children. The loudness of the processed and unprocessed sentences was matched using a loudness model. The task was to repeat back sentences produced by a female speaker when presented with either speech-shaped noise (SSN) or a male competing speaker (CS) at two signal-to-background ratios (SBRs). Speech identification was significantly better with SSDRC processing than without, for both groups. The benefit of SSDRC processing was greater for the SSN than for the CS background. For the SSN, scores were similar for the two groups at both SBRs. For the CS, the APD group performed significantly more poorly than the control group. The overall improvement produced by SSDRC processing could be useful for enhancing communication in a classroom where the teacher's voice is broadcast using a wireless system.
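
    The abstract does not detail the SSDRC algorithm itself, but the dynamic range compression half of the idea can be illustrated generically: boost low-energy stretches of the signal relative to high-energy ones before noise is added, while keeping the overall level fixed. The sketch below is only that generic illustration (with RMS equalisation standing in for the loudness-model matching used in the study); it is not the SSDRC implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def envelope_compress(x, fs, env_cut=20.0, exponent=0.5):
    """Illustrative dynamic range compression: divide the signal by its slow
    amplitude envelope raised to a power < 1, boosting low-energy regions."""
    sos = butter(2, env_cut, btype="lowpass", fs=fs, output="sos")
    env = sosfiltfilt(sos, np.abs(x))
    env = np.maximum(env, 0.05 * env.max())          # floor limits gain in near-silence
    y = x * env ** (exponent - 1.0)                  # i.e. x / env**(1 - exponent)
    return y * (np.sqrt(np.mean(x**2)) / np.sqrt(np.mean(y**2)))  # equal-RMS output
```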

  11. Inter-speaker speech variability assessment using statistical deformable models from 3.0 tesla magnetic resonance images.

    PubMed

    Vasconcelos, Maria J M; Ventura, Sandra M R; Freitas, Diamantino R S; Tavares, João Manuel R S

    2012-03-01

    The morphological and dynamic characterisation of the vocal tract during speech production has been gaining attention, motivated by the latest improvements in magnetic resonance (MR) imaging, namely the use of higher magnetic fields such as 3.0 Tesla. In this work, the automatic study of the vocal tract from 3.0 Tesla MR images was assessed through the application of statistical deformable models. The primary goal was the analysis of the shape of the vocal tract during the articulation of European Portuguese sounds, followed by the evaluation of the automatic segmentation results, i.e. identification of the vocal tract in new MR images. For speech production, this is the first attempt to automatically characterise and reconstruct the vocal tract shape from 3.0 Tesla MR images by using deformable models, in particular active shape and appearance models. The results clearly demonstrate the adequacy and advantage of automatically analysing 3.0 Tesla MR images with these deformable models to extract the vocal tract shape and assess the involved articulatory movements. Such capabilities are required, for example, for a better understanding of speech production, particularly in patients suffering from articulatory disorders, and to build enhanced speech synthesizer models.

  12. Electrophysiological evidence of functional integration between the language and motor systems in the brain: a study of the speech Bereitschaftspotential.

    PubMed

    McArdle, J J; Mari, Z; Pursley, R H; Schulz, G M; Braun, A R

    2009-02-01

    We investigated whether the Bereitschaftspotential (BP), an event related potential believed to reflect motor planning, would be modulated by language-related parameters prior to speech. We anticipated that articulatory complexity would produce effects on the BP distribution similar to those demonstrated for complex limb movements. We also hypothesized that lexical semantic operations would independently impact the BP. Eighteen participants performed 3 speech tasks designed to differentiate lexical semantic and articulatory contributions to the BP. EEG epochs were time-locked to the earliest source of speech movement per trial. Lip movements were assessed using EMG recordings. Doppler imaging was used to determine the onset of tongue movement during speech, providing a means of identification and elimination of potential artifact. Compared to simple repetition, complex articulations produced an anterior shift in the maximum midline BP. Tasks requiring lexical search and selection augmented these effects and independently elicited a left lateralized asymmetry in the frontal distribution. The findings indicate that the BP is significantly modulated by linguistic processing, suggesting that the premotor system might play a role in lexical access. These novel findings support the notion that the motor systems may play a significant role in the formulation of language.
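
    Time-locking EEG epochs to the per-trial onset of speech movement, as described above, amounts to slicing the continuous recording around each onset sample and averaging. A hedged numpy sketch follows; the epoch window, baseline interval, and channel layout are placeholders, and artifact rejection (e.g., of tongue-movement contamination identified via Doppler imaging) is omitted.

```python
import numpy as np

def movement_locked_average(eeg, onsets, fs, pre_s=2.0, post_s=0.5):
    """Average EEG epochs time-locked to per-trial movement onsets (in samples),
    baseline-corrected to the first 200 ms of each epoch."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for onset in onsets:
        if onset - pre < 0 or onset + post > eeg.shape[-1]:
            continue                                  # skip epochs that run off the record
        ep = eeg[..., onset - pre : onset + post].astype(float)
        ep = ep - ep[..., : int(0.2 * fs)].mean(axis=-1, keepdims=True)
        epochs.append(ep)
    return np.mean(epochs, axis=0)
```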

  13. Common cues to emotion in the dynamic facial expressions of speech and song

    PubMed Central

    Livingstone, Steven R.; Thompson, William F.; Wanderley, Marcelo M.; Palmer, Caroline

    2015-01-01

    Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were poorly identified, yet were accurately identified in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, yet were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet also reveal differences in perception and acoustic-motor production. PMID:25424388

  14. Exploring the role of hand gestures in learning novel phoneme contrasts and vocabulary in a second language

    PubMed Central

    Kelly, Spencer D.; Hirata, Yukari; Manansala, Michael; Huang, Jessica

    2014-01-01

    Co-speech hand gestures are a type of multimodal input that has received relatively little attention in the context of second language learning. The present study explored the role that observing and producing different types of gestures plays in learning novel speech sounds and word meanings in an L2. Naïve English-speakers were taught two components of Japanese—novel phonemic vowel length contrasts and vocabulary items comprised of those contrasts—in one of four different gesture conditions: Syllable Observe, Syllable Produce, Mora Observe, and Mora Produce. Half of the gestures conveyed intuitive information about syllable structure, and the other half, unintuitive information about Japanese mora structure. Within each Syllable and Mora condition, half of the participants only observed the gestures that accompanied speech during training, and the other half also produced the gestures that they observed along with the speech. The main finding was that participants across all four conditions had similar outcomes in two different types of auditory identification tasks and a vocabulary test. The results suggest that hand gestures may not be well suited for learning novel phonetic distinctions at the syllable level within a word, and thus, gesture-speech integration may break down at the lowest levels of language processing and learning. PMID:25071646

  15. High-Throughput Analysis of Promoter Occupancy Reveals Direct Neural Targets of FOXP2, a Gene Mutated in Speech and Language Disorders

    PubMed Central

    Vernes, Sonja C.; Spiteri, Elizabeth; Nicod, Jérôme; Groszer, Matthias; Taylor, Jennifer M.; Davies, Kay E.; Geschwind, Daniel H.; Fisher, Simon E.

    2007-01-01

    We previously discovered that mutations of the human FOXP2 gene cause a monogenic communication disorder, primarily characterized by difficulties in learning to make coordinated sequences of articulatory gestures that underlie speech. Affected people have deficits in expressive and receptive linguistic processing and display structural and/or functional abnormalities in cortical and subcortical brain regions. FOXP2 provides a unique window into neural processes involved in speech and language. In particular, its role as a transcription factor gene offers powerful functional genomic routes for dissecting critical neurogenetic mechanisms. Here, we employ chromatin immunoprecipitation coupled with promoter microarrays (ChIP-chip) to successfully identify genomic sites that are directly bound by FOXP2 protein in native chromatin of human neuron-like cells. We focus on a subset of downstream targets identified by this approach, showing that altered FOXP2 levels yield significant changes in expression in our cell-based models and that FOXP2 binds in a specific manner to consensus sites within the relevant promoters. Moreover, we demonstrate significant quantitative differences in target expression in embryonic brains of mutant mice, mediated by specific in vivo Foxp2-chromatin interactions. This work represents the first identification and in vivo verification of neural targets regulated by FOXP2. Our data indicate that FOXP2 has dual functionality, acting to either repress or activate gene expression at occupied promoters. The identified targets suggest roles in modulating synaptic plasticity, neurodevelopment, neurotransmission, and axon guidance and represent novel entry points into in vivo pathways that may be disturbed in speech and language disorders. PMID:17999362

  16. Speech recognition systems on the Cell Broadband Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Jones, H; Vaidya, S

    In this paper we describe our design, implementation, and first results of a prototype connected-phoneme-based speech recognition system on the Cell Broadband Engine™ (Cell/B.E.). Automatic speech recognition decodes speech samples into plain text (other representations are possible) and must process samples at real-time rates. Fortunately, the computational tasks involved in this pipeline are highly data-parallel and can receive significant hardware acceleration from vector-streaming architectures such as the Cell/B.E. Identifying and exploiting these parallelism opportunities is challenging, but also critical to improving system performance. We observed, from our initial performance timings, that a single Cell/B.E. processor can recognize speech from thousands of simultaneous voice channels in real time--a channel density that is orders-of-magnitude greater than the capacity of existing software speech recognizers based on CPUs (central processing units). This result emphasizes the potential for Cell/B.E.-based speech recognition and will likely lead to the future development of production speech systems using Cell/B.E. clusters.

  17. Construct-related validity of the TOCS measures: comparison of intelligibility and speaking rate scores in children with and without speech disorders.

    PubMed

    Hodge, Megan M; Gotzke, Carrie L

    2014-01-01

    This study evaluated construct-related validity of the Test of Children's Speech (TOCS). Intelligibility scores obtained using open-set word identification tasks (orthographic transcription) for the TOCS word and sentence tests and rate scores for the TOCS sentence test (words per minute or WPM and intelligible words per minute or IWPM) were compared for a group of 15 adults (18-30 years of age) with normal speech production and three groups of children: 48 3-6 year-olds with typical speech development and neurological histories (TDS), 48 3-6 year-olds with a speech sound disorder of unknown origin and no identified neurological impairment (SSD-UNK), and 22 3-10 year-olds with dysarthria and cerebral palsy (DYS). As expected, mean intelligibility scores and rates increased with age in the TDS group. However, word test intelligibility, WPM and IWPM scores for the 6 year-olds in the TDS group were significantly lower than those for the adults. The DYS group had significantly lower word and sentence test intelligibility and WPM and IWPM scores than the TDS and SSD-UNK groups. Compared to the TDS group, the SSD-UNK group also had significantly lower intelligibility scores for the word and sentence tests, and significantly lower IWPM, but not WPM scores on the sentence test. The results support the construct-related validity of TOCS as a tool for obtaining intelligibility and rate scores that are sensitive to group differences in 3-6 year-old children, with and without speech sound disorders, and to 3+ year-old children with speech disorders, with and without dysarthria. Readers will describe the word and sentence intelligibility and speaking rate performance of children with typically developing speech at age levels of 3, 4, 5 and 6 years, as measured by the Test of Children's Speech, and how these compare with adult speakers and two groups of children with speech disorders. They will also recognize what measures on this test differentiate children with speech sound disorders of unknown origin from children with cerebral palsy and dysarthria.
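
    The intelligibility and rate measures reported here reduce to simple arithmetic on transcription counts and durations. The sketch below shows one plausible formulation; how TOCS handles repetitions or partial words is not specified in the abstract, so this is only an assumed scoring.

```python
def intelligibility_and_rate(n_words_intended, n_words_identified, duration_s):
    """Open-set scoring: percent words correctly transcribed, plus
    words per minute (WPM) and intelligible words per minute (IWPM)."""
    intelligibility = 100.0 * n_words_identified / n_words_intended
    minutes = duration_s / 60.0
    wpm = n_words_intended / minutes
    iwpm = n_words_identified / minutes
    return intelligibility, wpm, iwpm

# e.g. 40 of 50 target words identified in a 30-second sentence set (made-up numbers)
print(intelligibility_and_rate(50, 40, 30.0))  # -> (80.0, 100.0, 80.0)
```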

  18. Building an Evidence Base for Speech-Language Services in the Schools: Challenges and Recommendations

    ERIC Educational Resources Information Center

    Whitmire, Kathleen A.; Rivers, Kenyatta O.; Mele-McCarthy, Joan A.; Staskowski, Maureen

    2014-01-01

    Speech-language pathologists are faced with demands for evidence to support practice. Federal legislation requires high-quality evidence for decisions regarding school-based services as part of evidence-based practice. The purpose of this article is to discuss the limited scientific evidence for making appropriate decisions about speech-language…

  19. Public and private language ideologies as reflected in language attitudes on the Island of Korcula.

    PubMed

    Sujoldzić, Anita; Simicić, Lucija

    2013-06-01

    Since languages are such powerful means of group identification, they may be considered as constitutive of communities. Attitudes expressed toward certain linguistic varieties may thus be perceived as attitudes held toward respective community-members. However, as attitudes are not always easily accessible, and are rarely one-dimensional but rather multi-layered, an insight into overt (publicly proclaimed) and covert (privately held) ideologies can enhance understanding of language attitudes and their meaning. This paper presents an analysis of these two types of attitudes held by adolescents in the three most populated places on the island of Korcula, Croatia. The analysis is based on the results obtained by means of a questionnaire eliciting, among other things, overt attitudes toward six local, regional and supra-regional varieties, and covert attitudes toward judges' local speech and the Standard variety of Croatian. Although the results confirm some expected tendencies in the evaluation of different varieties, a subsequently conducted analysis of speech recognition rates offers some valuable insights and interesting implications for further interpretation of the results.

  20. Comparison of bimodal and bilateral cochlear implant users on speech recognition with competing talker, music perception, affective prosody discrimination, and talker identification.

    PubMed

    Cullington, Helen E; Zeng, Fan-Gang

    2011-02-01

    Despite excellent performance in speech recognition in quiet, most cochlear implant users have great difficulty with speech recognition in noise, music perception, identifying tone of voice, and discriminating different talkers. This may be partly due to the pitch coding in cochlear implant speech processing. Most current speech processing strategies use only the envelope information; the temporal fine structure is discarded. One way to improve electric pitch perception is to use residual acoustic hearing via a hearing aid on the nonimplanted ear (bimodal hearing). This study aimed to test the hypothesis that bimodal users would perform better than bilateral cochlear implant users on tasks requiring good pitch perception. Four pitch-related tasks were used. 1. Hearing in Noise Test (HINT) sentences spoken by a male talker with a competing female, male, or child talker. 2. Montreal Battery of Evaluation of Amusia. This is a music test with six subtests examining pitch, rhythm and timing perception, and musical memory. 3. Aprosodia Battery. This has five subtests evaluating aspects of affective prosody and recognition of sarcasm. 4. Talker identification using vowels spoken by 10 different talkers (three men, three women, two boys, and two girls). Bilateral cochlear implant users were chosen as the comparison group. Thirteen bimodal and 13 bilateral adult cochlear implant users were recruited; all had good speech perception in quiet. There were no significant differences between the mean scores of the bimodal and bilateral groups on any of the tests, although the bimodal group did perform better than the bilateral group on almost all tests. Performance on the different pitch-related tasks was not correlated, meaning that if a subject performed one task well they would not necessarily perform well on another. The correlation between the bimodal users' hearing threshold levels in the aided ear and their performance on these tasks was weak. Although the bimodal cochlear implant group performed better than the bilateral group on most parts of the four pitch-related tests, the differences were not statistically significant. The lack of correlation between test results shows that the tasks used are not simply providing a measure of pitch ability. Even if the bimodal users have better pitch perception, the real-world tasks used are reflecting more diverse skills than pitch. This research adds to the existing speech perception, language, and localization studies that show no significant difference between bimodal and bilateral cochlear implant users.

  1. 78 FR 63152 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-23

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... for telecommunications relay services (TRS) by eliminating standards for Internet-based relay services... comments, identified by CG Docket No. 03-123, by any of the following methods: Electronic Filers: Comments...

  2. Contemporary Reflections on Speech-Based Language Learning

    ERIC Educational Resources Information Center

    Gustafson, Marianne

    2009-01-01

    In "The Relation of Language to Mental Development and of Speech to Language Teaching," S.G. Davidson displayed several timeless insights into the role of speech in developing language and reasons for using speech as the basis for instruction for children who are deaf and hard of hearing. His understanding that speech includes more than merely…

  3. La Traduccion de la Nueva Novela Latinoamericana al Ingles (English Translation of the New Latin American Novel)

    ERIC Educational Resources Information Center

    Vazquez-Ayona, Gerardo

    1978-01-01

    While distinguishing between literary criticism and the scientific study of translation, Latin American translations are analyzed according to the identification of "speech facts," levels of stylistic performance, translating from scratch, and the stylistic features of Rabassa. (NCR)

  4. Intonation Processing in Congenital Amusia: Discrimination, Identification and Imitation

    ERIC Educational Resources Information Center

    Liu, Fang; Patel, Aniruddh D.; Fourcin, Adrian; Stewart, Lauren

    2010-01-01

    This study investigated whether congenital amusia, a neuro-developmental disorder of musical perception, also has implications for speech intonation processing. In total, 16 British amusics and 16 matched controls completed five intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance…

  5. Developing a Weighted Measure of Speech Sound Accuracy

    PubMed Central

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2010-01-01

    Purpose: The purpose is to develop a system for numerically quantifying a speaker’s phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method: Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results: Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners’ judgments of severity of a child’s speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddlers’ speech over time. Conclusion: Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children’s speech. PMID:20699344
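
    The central idea of the WSSA is that errors are weighted by how far they fall from the target rather than being scored right or wrong. A toy Python sketch follows; the weight values and error categories below are invented placeholders, since the actual weighting scheme is defined in the article rather than in the abstract.

```python
# Hypothetical error weights -- not the published WSSA weights.
WEIGHTS = {"correct": 1.0, "distortion": 0.75, "substitution": 0.5,
           "omission": 0.0, "addition": -0.25}

def weighted_accuracy(coded_segments):
    """Average weight over all attempted sound targets, as a percentage,
    so graded errors count more than all-or-nothing scoring."""
    scores = [WEIGHTS[c] for c in coded_segments]
    return 100.0 * sum(scores) / len(scores)

print(weighted_accuracy(["correct", "correct", "distortion", "substitution", "omission"]))
# -> 65.0
```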

  6. Deep neural network and noise classification-based speech enhancement

    NASA Astrophysics Data System (ADS)

    Shi, Wenhua; Zhang, Xiongwei; Zou, Xia; Han, Wei

    2017-07-01

    In this paper, a speech enhancement method using noise classification and Deep Neural Network (DNN) was proposed. Gaussian mixture model (GMM) was employed to determine the noise type in speech-absent frames. DNN was used to model the relationship between noisy observation and clean speech. Once the noise type was determined, the corresponding DNN model was applied to enhance the noisy speech. GMM was trained with mel-frequency cepstrum coefficients (MFCC) and the parameters were estimated with an iterative expectation-maximization (EM) algorithm. Noise type was updated by spectrum entropy-based voice activity detection (VAD). Experimental results demonstrate that the proposed method could achieve better objective speech quality and smaller distortion under stationary and non-stationary conditions.
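
    The noise-classification front end can be sketched with off-the-shelf tools: fit one MFCC-trained GMM per noise type, then, for the speech-absent frames of an incoming signal, pick the noise type with the highest likelihood and dispatch to the matching per-noise DNN enhancer. The sketch below assumes librosa and scikit-learn; the noise names, mixture counts, and feature settings are placeholders, not the configuration used in the paper.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def train_noise_models(noise_examples, n_components=8, sr=16000):
    """One GMM per noise type, trained on MFCCs of noise-only clips,
    e.g. noise_examples = {"babble": [y1, y2, ...], "factory": [...]}."""
    models = {}
    for name, clips in noise_examples.items():
        feats = np.vstack([librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T for y in clips])
        models[name] = GaussianMixture(n_components=n_components).fit(feats)
    return models

def classify_noise(models, noise_frames, sr=16000):
    """Pick the noise type whose GMM gives the highest average log-likelihood;
    the chosen label would then select the corresponding DNN enhancer."""
    feats = librosa.feature.mfcc(y=noise_frames, sr=sr, n_mfcc=13).T
    return max(models, key=lambda name: models[name].score(feats))
```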

  7. The shadow of a doubt? Evidence for perceptuo-motor linkage during auditory and audiovisual close-shadowing

    PubMed Central

    Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc

    2014-01-01

    One classical argument in favor of a functional role of the motor system in speech perception comes from the close-shadowing task in which a subject has to identify and to repeat as quickly as possible an auditory speech stimulus. The fact that close-shadowing can occur very rapidly and much faster than manual identification of the speech target is taken to suggest that perceptually induced speech representations are already shaped in a motor-compatible format. Another argument is provided by audiovisual interactions often interpreted as referring to a multisensory-motor framework. In this study, we attempted to combine these two paradigms by testing whether the visual modality could speed the motor response in a close-shadowing task. To this aim, both oral and manual responses were evaluated during the perception of auditory and audiovisual speech stimuli, clear or embedded in white noise. Overall, oral responses were faster than manual ones, but it also appeared that they were less accurate in noise, which suggests that motor representations evoked by the speech input could be rough at a first processing stage. In the presence of acoustic noise, the audiovisual modality led to both faster and more accurate responses than the auditory modality. No interaction was, however, observed between modality and response. Altogether, these results are interpreted within a two-stage sensory-motor framework, in which the auditory and visual streams are integrated together and with internally generated motor representations before a final decision may be available. PMID:25009512

  8. Sensitivity and specificity of the Percentage of Consonants Correct-Revised in the identification of speech sound disorder.

    PubMed

    Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Pinheiro da Silva, Joyce; Wertzner, Haydée Fiszbein

    2017-05-22

    The purpose of the study was to determine the sensitivity and specificity of, and to establish cutoff points for, the severity index Percentage of Consonants Correct - Revised (PCC-R) in Brazilian Portuguese-speaking children with and without speech sound disorders. Participants were 72 children between 5:00 and 7:11 years old: 36 without speech and language complaints and 36 with speech sound disorders. The PCC-R was applied to the figure naming and word imitation tasks that are part of the ABFW Child Language Test. Results were statistically analyzed: the ROC curve was computed, and sensitivity and specificity values of the index were determined. The group of children without speech sound disorders presented greater PCC-R values in both tasks, regardless of the gender of the participants. The cutoff value observed for the picture naming task was 93.4%, with a sensitivity of 0.89 and specificity of 0.94 (age independent). For the word imitation task, results were age-dependent: for the age group ≤6:5 years old, the cutoff value was 91.0% (sensitivity of 0.77 and specificity of 0.94), and for the age group >6:5 years old, the cutoff value was 93.9% (sensitivity of 0.93 and specificity of 0.94). Given the high sensitivity and specificity of PCC-R, we conclude that the index was effective in discriminating and identifying children with and without speech sound disorders.
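
    PCC-R itself is a simple proportion, and a cutoff can be read off a ROC curve once scores and diagnostic labels are available. The sketch below uses scikit-learn and selects the cutoff by Youden's J statistic, which is an assumption; the abstract reports the resulting cutoffs and sensitivity/specificity but not the criterion used to choose them.

```python
import numpy as np
from sklearn.metrics import roc_curve

def pcc_r(n_correct_consonants, n_consonants):
    """PCC-R: percentage of consonants produced correctly (in the revised
    index, distortions are scored as correct)."""
    return 100.0 * n_correct_consonants / n_consonants

def best_cutoff(pcc_scores, has_disorder):
    """Illustrative cutoff selection via Youden's J: scores at or below the
    returned cutoff would flag possible speech sound disorder."""
    # roc_curve expects higher scores for the positive class, so negate PCC-R
    fpr, tpr, thresholds = roc_curve(has_disorder, -np.asarray(pcc_scores))
    j = tpr - fpr
    i = int(np.argmax(j))
    return -thresholds[i], tpr[i], 1.0 - fpr[i]   # cutoff, sensitivity, specificity
```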

  9. The Influence of Environmental Sound Training on the Perception of Spectrally Degraded Speech and Environmental Sounds

    PubMed Central

    Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N.

    2012-01-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients. PMID:22891070

  10. The benefit of combining a deep neural network architecture with ideal ratio mask estimation in computational speech segregation to improve speech intelligibility.

    PubMed

    Bentsen, Thomas; May, Tobias; Kressner, Abigail A; Dau, Torsten

    2018-01-01

    Computational speech segregation attempts to automatically separate speech from noise. This is challenging in conditions with interfering talkers and low signal-to-noise ratios. Recent approaches have adopted deep neural networks and successfully demonstrated speech intelligibility improvements. A selection of components may be responsible for the success with these state-of-the-art approaches: the system architecture, a time frame concatenation technique and the learning objective. The aim of this study was to explore the roles and the relative contributions of these components by measuring speech intelligibility in normal-hearing listeners. A substantial improvement of 25.4 percentage points in speech intelligibility scores was found going from a subband-based architecture, in which a Gaussian Mixture Model-based classifier predicts the distributions of speech and noise for each frequency channel, to a state-of-the-art deep neural network-based architecture. Another improvement of 13.9 percentage points was obtained by changing the learning objective from the ideal binary mask, in which individual time-frequency units are labeled as either speech- or noise-dominated, to the ideal ratio mask, where the units are assigned a continuous value between zero and one. Therefore, both components play significant roles and by combining them, speech intelligibility improvements were obtained in a six-talker condition at a low signal-to-noise ratio.
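
    The two learning objectives contrasted above can be written directly from the oracle speech and noise magnitude spectrograms. The sketch below is a minimal illustration under that assumption; the local criterion and compression exponent are common defaults, not necessarily those used in the study.

```python
import numpy as np

def ideal_binary_mask(S, N, lc_db=0.0):
    """1 where the local SNR of a time-frequency unit exceeds the criterion, else 0."""
    local_snr_db = 20.0 * np.log10((S + 1e-12) / (N + 1e-12))
    return (local_snr_db > lc_db).astype(float)

def ideal_ratio_mask(S, N, beta=0.5):
    """Continuous value in [0, 1] per time-frequency unit."""
    return (S**2 / (S**2 + N**2 + 1e-12)) ** beta

# Toy 2x3 magnitude spectrograms standing in for speech (S) and noise (N).
S = np.array([[1.0, 0.2, 0.8], [0.1, 0.9, 0.3]])
N = np.array([[0.3, 0.5, 0.1], [0.4, 0.2, 0.6]])
print(ideal_binary_mask(S, N))
print(ideal_ratio_mask(S, N))
```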

  11. Are individuals with Parkinson's disease capable of speech-motor learning? - A preliminary evaluation.

    PubMed

    Kaipa, Ramesh; Jones, Richard D; Robb, Michael P

    2016-07-01

    The benefits of different practice conditions in limb-based rehabilitation of motor disorders are well documented. Conversely, the role of practice structure in the treatment of motor-based speech disorders has only been minimally investigated. Considering this limitation, the current study aimed to investigate the effectiveness of selected practice conditions in spatial and temporal learning of novel speech utterances in individuals with Parkinson's disease (PD). Participants included 16 individuals with PD who were randomly and equally assigned to constant, variable, random, and blocked practice conditions. Participants in all four groups practiced a speech phrase for two consecutive days, and reproduced the speech phrase on the third day without further practice or feedback. There were no significant differences (p > 0.05) between participants across the four practice conditions with respect to either spatial or temporal learning of the speech phrase. Overall, PD participants demonstrated diminished spatial and temporal learning in comparison to healthy controls. Tests of strength of association between participants' demographic/clinical characteristics and speech-motor learning outcomes did not reveal any significant correlations. The findings from the current study suggest that repeated practice facilitates speech-motor learning in individuals with PD irrespective of the type of practice. Clinicians need to be cautious in applying practice conditions to treat speech deficits associated with PD based on the findings of non-speech-motor learning tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. School-Based Speech-Language Pathologists' Use of iPads

    ERIC Educational Resources Information Center

    Romane, Garvin Philippe

    2017-01-01

    This study explored school-based speech-language pathologists' (SLPs') use of iPads and apps for speech and language instruction, specifically for articulation, language, and vocabulary goals. A mostly quantitative survey was administered to approximately 2,800 SLPs in K-12 settings; the final sample consisted of 189 licensed SLPs. Overall,…

  13. Severity-Based Adaptation with Limited Data for ASR to Aid Dysarthric Speakers

    PubMed Central

    Mustafa, Mumtaz Begum; Salim, Siti Salwah; Mohamed, Noraini; Al-Qatab, Bassam; Siong, Chng Eng

    2014-01-01

    Automatic speech recognition (ASR) is currently used in many assistive technologies, such as helping individuals with speech impairment in their communication ability. One challenge in ASR for speech-impaired individuals is the difficulty in obtaining a good speech database of impaired speakers for building an effective speech acoustic model. Because there are very few existing databases of impaired speech, which are also limited in size, the obvious solution for building a speech acoustic model of impaired speech is to employ adaptation techniques. However, issues that have not been addressed in existing studies in the area of adaptation for speech impairment are as follows: (1) identifying the most effective adaptation technique for impaired speech; and (2) the use of suitable source models to build an effective impaired-speech acoustic model. This research investigates these two issues for dysarthria, a type of speech impairment affecting millions of people. We applied both unimpaired and impaired speech as the source model with well-known adaptation techniques such as maximum likelihood linear regression (MLLR) and constrained MLLR (C-MLLR). The recognition accuracy of each impaired-speech acoustic model is measured in terms of word error rate (WER), with further assessments including phoneme insertion, substitution and deletion rates. Unimpaired speech, when combined with limited high-quality impaired-speech data, improves the performance of ASR systems in recognising severely impaired dysarthric speech. The C-MLLR adaptation technique was also found to be better than MLLR in recognising mildly and moderately impaired speech, based on statistical analysis of the WER. Phoneme substitution was found to be the biggest contributor to the WER in dysarthric speech at all levels of severity. The results show that speech acoustic models derived from suitable adaptation techniques improve the performance of ASR systems in recognising impaired speech with limited adaptation data. PMID:24466004
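
    As a reference for the assessment described above, the sketch below computes the word error rate together with substitution, deletion, and insertion counts from a minimum-edit-distance alignment; it is a generic textbook implementation, not the evaluation code used in the study.

```python
def wer_counts(ref, hyp):
    """Word error rate plus substitution/deletion/insertion counts from a
    minimum-edit-distance (Levenshtein) alignment of word sequences."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = (edits, substitutions, deletions, insertions) for r[:i] vs h[:j]
    d = [[(0, 0, 0, 0) for _ in range(len(h) + 1)] for _ in range(len(r) + 1)]
    for i in range(1, len(r) + 1):
        d[i][0] = (i, 0, i, 0)                      # deletions only
    for j in range(1, len(h) + 1):
        d[0][j] = (j, 0, 0, j)                      # insertions only
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            if r[i - 1] == h[j - 1]:
                d[i][j] = d[i - 1][j - 1]
            else:
                e, s, dl, ins = d[i - 1][j - 1]
                sub = (e + 1, s + 1, dl, ins)
                e, s, dl, ins = d[i - 1][j]
                dele = (e + 1, s, dl + 1, ins)
                e, s, dl, ins = d[i][j - 1]
                inse = (e + 1, s, dl, ins + 1)
                d[i][j] = min(sub, dele, inse)
    edits, s, dl, ins = d[len(r)][len(h)]
    return {"WER": edits / max(len(r), 1), "substitutions": s,
            "deletions": dl, "insertions": ins}

print(wer_counts("put the book on the table", "put that book on table"))
```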

  14. Fifty years of progress in speech and speaker recognition

    NASA Astrophysics Data System (ADS)

    Furui, Sadaoki

    2004-10-01

    Speech and speaker recognition technology has made very significant progress in the past 50 years. The progress can be summarized by the following changes: (1) from template matching to corpus-based statistical modeling, e.g., HMMs and n-grams, (2) from filter bank/spectral resonance to cepstral features (cepstrum + Δcepstrum + ΔΔcepstrum), (3) from heuristic time-normalization to DTW/DP matching, (4) from "distance"-based to likelihood-based methods, (5) from maximum likelihood to discriminative approaches, e.g., MCE/GPD and MMI, (6) from isolated word to continuous speech recognition, (7) from small vocabulary to large vocabulary recognition, (8) from context-independent units to context-dependent units for recognition, (9) from clean speech to noisy/telephone speech recognition, (10) from single speaker to speaker-independent/adaptive recognition, (11) from monologue to dialogue/conversation recognition, (12) from read speech to spontaneous speech recognition, (13) from recognition to understanding, (14) from single-modality (audio signal only) to multimodal (audio/visual) speech recognition, (15) from hardware recognizer to software recognizer, and (16) from no commercial application to many practical commercial applications. Most of these advances have taken place in both the fields of speech recognition and speaker recognition. The majority of technological changes have been directed toward the purpose of increasing robustness of recognition, including many other additional important techniques not noted above.

  15. Difficulties in Automatic Speech Recognition of Dysarthric Speakers and Implications for Speech-Based Applications Used by the Elderly: A Literature Review

    ERIC Educational Resources Information Center

    Young, Victoria; Mihailidis, Alex

    2010-01-01

    Despite their growing presence in home computer applications and various telephony services, commercial automatic speech recognition technologies are still not easily employed by everyone; especially individuals with speech disorders. In addition, relatively little research has been conducted on automatic speech recognition performance with older…

  16. Vocoders and Speech Perception: Uses of Computer-Based Speech Analysis-Synthesis in Stimulus Generation.

    ERIC Educational Resources Information Center

    Tierney, Joseph; Mack, Molly

    1987-01-01

    Stimuli used in research on the perception of the speech signal have often been obtained from simple filtering and distortion of the speech waveform, sometimes accompanied by noise. However, for more complex stimulus generation, the parameters of speech can be manipulated, after analysis and before synthesis, using various types of algorithms to…

  17. Relative importance of temporal information in various frequency regions for consonant identification in quiet and in noise

    NASA Astrophysics Data System (ADS)

    Apoux, Frédéric; Bacon, Sid P.

    2004-09-01

    The relative importance of temporal information in broad spectral regions for consonant identification was assessed in normal-hearing listeners. For the purpose of forcing listeners to use primarily temporal-envelope cues, speech sounds were spectrally degraded using four-noise-band vocoder processing. Frequency-weighting functions were determined using two methods. The first method consisted of measuring the intelligibility of speech with a hole in the spectrum either in quiet or in noise. The second method consisted of correlating performance with the randomly and independently varied signal-to-noise ratio within each band. Results demonstrated that all bands contributed equally to consonant identification when presented in quiet. In noise, however, both methods indicated that listeners consistently placed relatively more weight upon the highest frequency band. It is proposed that the explanation for the difference in results between quiet and noise relates to the shape of the modulation spectra in adjacent frequency bands. Overall, the results suggest that normal-hearing listeners use a common listening strategy in a given condition. However, this strategy may be influenced by the competing sounds, and thus may vary according to the context. Some implications of the results for cochlear implantees and hearing-impaired listeners are discussed.

  18. The perception of lexical tone contrasts in Cantonese children with and without specific language impairment (SLI).

    PubMed

    Wong, Anita M-Y; Ciocca, Valter; Yung, Sun

    2009-12-01

    This study examined the perception of fundamental frequency (f0) patterns by Cantonese children with and without specific language impairment (SLI). Participants were 14 five-year-old children with SLI, and 14 age-matched (AM) and 13 four-year-old vocabulary-matched (VM) controls. The children identified a word from familiar word pairs that illustrated the 8 minimally contrastive pairs of the 6 lexical tones. They discriminated the f0 patterns within contrastive tonal pairs in speech and nonspeech stimuli. In tone identification, the SLI group performed worse than the AM group but not the VM group. In tone discrimination, the SLI group did worse than the AM group on 2 contrasts and showed a nonsignificant trend of poorer performance on all contrasts combined. The VM group generally did worse than the AM group. There were no group differences in discrimination performance between speech and nonspeech stimuli. No correlation was found between identification and discrimination performance. Only the normal controls showed a moderate correlation between vocabulary scores and performance in the 2 perception tasks. The SLI group's poor tone identification cannot be accounted for by vocabulary knowledge alone. The group's tone discrimination performance suggests that some children with SLI have a deficit in f0 processing.

  19. An acoustic feature-based similarity scoring system for speech rehabilitation assistance.

    PubMed

    Syauqy, Dahnial; Wu, Chao-Min; Setyawati, Onny

    2016-08-01

    The purpose of this study is to develop a tool to assist speech therapy and rehabilitation, focused on automatic scoring based on the comparison of the patient's speech with normal speech on several aspects, including pitch, vowels, voiced-unvoiced segments, strident fricatives and sound intensity. The pitch estimation employed a cepstrum-based algorithm for its robustness; the vowel classification used a multilayer perceptron (MLP) to classify vowels from pitch and formants; and the strident fricative detection was based on the major spectral peak intensity, its location, and the presence of pitch in the segment. In order to evaluate the performance of the system, this study analyzed eight patients' speech recordings (four males, four females; aged 4-58 years), which had been recorded in a previous study in cooperation with Taipei Veterans General Hospital and Taoyuan General Hospital. The experimental results for the pitch algorithm showed that the cepstrum method had a gross pitch error of 5.3% over a total of 2086 frames. For the vowel classification algorithm, the MLP provided 93% accuracy for men, 87% for women and 84% for children. Overall, 156 of the tool's grading results (81%) were consistent with the 192 audio and visual observations made by four experienced respondents. Implications for rehabilitation: Difficulties in communication may limit the ability of a person to transfer and exchange information. The fact that speech is one of the primary means of communication has encouraged the need for speech diagnosis and rehabilitation. Advances in computer-assisted speech therapy (CAST) improve the quality and time efficiency of the diagnosis and treatment of speech disorders. The present study attempted to develop a tool to assist speech therapy and rehabilitation that provides a simple interface, allowing the assessment to be performed even by patients themselves without particular knowledge of speech processing, while also providing deeper analysis of the speech that can be useful to the speech therapist.
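
    As an illustration of the cepstrum-based pitch estimation mentioned above, the following sketch picks the dominant quefrency peak within a plausible pitch range for a single analysis frame. The frame length, window, search range, and test signal are assumptions, not the parameters of the authors' tool.

```python
import numpy as np

def cepstral_pitch(frame, fs, f_lo=60.0, f_hi=400.0):
    """Estimate F0 from the peak of the real cepstrum within [f_lo, f_hi]."""
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    cepstrum = np.fft.irfft(log_mag)
    q_lo, q_hi = int(fs / f_hi), int(fs / f_lo)     # quefrency search range (samples)
    peak = q_lo + np.argmax(cepstrum[q_lo:q_hi])
    return fs / peak                                # pitch estimate in Hz

# 40 ms frame of a synthetic impulse train with an 8 ms period (125 Hz).
fs = 16000
frame = np.zeros(640)
frame[::128] = 1.0
print(f"{cepstral_pitch(frame, fs):.1f} Hz")
```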

  20. Affective Prosody Labeling in Youths with Bipolar Disorder or Severe Mood Dysregulation

    ERIC Educational Resources Information Center

    Deveney, Christen M.; Brotman, Melissa A.; Decker, Ann Marie; Pine, Daniel S.; Leibenluft, Ellen

    2012-01-01

    Background: Accurate identification of nonverbal emotional cues is essential to successful social interactions, yet most research is limited to emotional face expression labeling. Little research focuses on the processing of emotional prosody, or tone of verbal speech, in clinical populations. Methods: Using the Diagnostic Analysis of Nonverbal…

  1. Speech-Language Pathologists: Vital Listening and Spoken Language Professionals

    ERIC Educational Resources Information Center

    Houston, K. Todd; Perigoe, Christina B.

    2010-01-01

    Determining the most effective methods and techniques to facilitate the spoken language development of individuals with hearing loss has been a focus of practitioners for centuries. Due to modern advances in hearing technology, earlier identification of hearing loss, and immediate enrollment in early intervention, children with hearing loss are…

  2. Myths and Political Rhetoric: Jimmy Carter Accepts the Nomination.

    ERIC Educational Resources Information Center

    Corso, Dianne M.

    Like other political speakers who have drawn on the personification, identification, and dramatic encounter images of mythology to pressure and persuade audiences, Jimmy Carter evoked the myths of the hero, the American Dream, and the ideal political process in his presidential nomination acceptance speech. By stressing his unknown status, his…

  3. Automatic Speech Recognition from Neural Signals: A Focused Review.

    PubMed

    Herff, Christian; Schultz, Tanja

    2016-01-01

    Speech interfaces have become widely accepted and are nowadays integrated in various real-life applications and devices. They have become a part of our daily life. However, speech interfaces presume the ability to produce intelligible speech, which might not be possible due to loud environments, concerns about disturbing bystanders, or an inability to produce speech (i.e., patients suffering from locked-in syndrome). For these reasons it would be highly desirable not to speak but to simply envision oneself saying words or sentences. Interfaces based on imagined speech would enable fast and natural communication without the need for audible speech and would give a voice to otherwise mute people. This focused review analyzes the potential of different brain imaging techniques to recognize speech from neural signals by applying Automatic Speech Recognition technology. We argue that modalities based on metabolic processes, such as functional Near Infrared Spectroscopy and functional Magnetic Resonance Imaging, are less suited for Automatic Speech Recognition from neural signals due to their low temporal resolution, but are very useful for the investigation of the underlying neural mechanisms involved in speech processes. In contrast, electrophysiologic activity is fast enough to capture speech processes and is therefore better suited for ASR. Our experimental results indicate the potential of these signals for speech recognition from neural data, with a focus on invasively measured brain activity (electrocorticography). As a first example of Automatic Speech Recognition techniques applied to neural signals, we discuss the Brain-to-text system.

  4. Address entry while driving: speech recognition versus a touch-screen keyboard.

    PubMed

    Tsimhoni, Omer; Smith, Daniel; Green, Paul

    2004-01-01

    A driving simulator experiment was conducted to determine the effects of entering addresses into a navigation system during driving. Participants drove on roads of varying visual demand while entering addresses. Three address entry methods were explored: word-based speech recognition, character-based speech recognition, and typing on a touch-screen keyboard. For each method, vehicle control and task measures, glance timing, and subjective ratings were examined. During driving, word-based speech recognition yielded the shortest total task time (15.3 s), followed by character-based speech recognition (41.0 s) and touch-screen keyboard (86.0 s). The standard deviation of lateral position when performing keyboard entry (0.21 m) was 60% higher than that for all other address entry methods (0.13 m). Degradation of vehicle control associated with address entry using a touch screen suggests that the use of speech recognition is favorable. Speech recognition systems with visual feedback, however, even with excellent accuracy, are not without performance consequences. Applications of this research include the design of in-vehicle navigation systems as well as other systems requiring significant driver input, such as E-mail, the Internet, and text messaging.

  5. Monkey vocal tracts are speech-ready.

    PubMed

    Fitch, W Tecumseh; de Boer, Bart; Mathur, Neil; Ghazanfar, Asif A

    2016-12-01

    For four decades, the inability of nonhuman primates to produce human speech sounds has been claimed to stem from limitations in their vocal tract anatomy, a conclusion based on plaster casts made from the vocal tract of a monkey cadaver. We used x-ray videos to quantify vocal tract dynamics in living macaques during vocalization, facial displays, and feeding. We demonstrate that the macaque vocal tract could easily produce an adequate range of speech sounds to support spoken language, showing that previous techniques based on postmortem samples drastically underestimated primate vocal capabilities. Our findings imply that the evolution of human speech capabilities required neural changes rather than modifications of vocal anatomy. Macaques have a speech-ready vocal tract but lack a speech-ready brain to control it.

  6. Detecting Abnormal Word Utterances in Children With Autism Spectrum Disorders: Machine-Learning-Based Voice Analysis Versus Speech Therapists.

    PubMed

    Nakai, Yasushi; Takiguchi, Tetsuya; Matsui, Gakuyo; Yamaoka, Noriko; Takada, Satoshi

    2017-10-01

    Abnormal prosody is often evident in the voice intonations of individuals with autism spectrum disorders. We compared a machine-learning-based voice analysis with human hearing judgments made by 10 speech therapists for classifying children with autism spectrum disorders (n = 30) and typical development (n = 51). Using stimuli limited to single-word utterances, machine-learning-based voice analysis was superior to speech therapist judgments. There was a significantly higher true-positive than false-negative rate for machine-learning-based voice analysis but not for speech therapists. Results are discussed in terms of some artificiality of clinician judgments based on single-word utterances, and the objectivity machine-learning-based voice analysis adds to judging abnormal prosody.

  7. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    PubMed

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  8. Effects of cooperating and conflicting cues on speech intonation recognition by cochlear implant users and normal hearing listeners.

    PubMed

    Peng, Shu-Chen; Lu, Nelson; Chatterjee, Monita

    2009-01-01

    Cochlear implant (CI) recipients have only limited access to fundamental frequency (F0) information, and thus exhibit deficits in speech intonation recognition. For speech intonation, F0 serves as the primary cue, and other potential acoustic cues (e.g. intensity properties) may also contribute. This study examined the effects of cooperating or conflicting acoustic cues on speech intonation recognition by adult CI and normal hearing (NH) listeners with full-spectrum and spectrally degraded speech stimuli. Identification of speech intonation that signifies question and statement contrasts was measured in 13 CI recipients and 4 NH listeners, using resynthesized bi-syllabic words, where F0 and intensity properties were systematically manipulated. The stimulus set was comprised of tokens whose acoustic cues (i.e. F0 contour and intensity patterns) were either cooperating or conflicting. Subjects identified if each stimulus is a 'statement' or a 'question' in a single-interval, 2-alternative forced-choice (2AFC) paradigm. Logistic models were fitted to the data, and estimated coefficients were compared under cooperating and conflicting conditions, between the subject groups (CI vs. NH), and under full-spectrum and spectrally degraded conditions for NH listeners. The results indicated that CI listeners' intonation recognition was enhanced by cooperating F0 contour and intensity cues, but was adversely affected by these cues being conflicting. On the other hand, with full-spectrum stimuli, NH listeners' intonation recognition was not affected by cues being cooperating or conflicting. The effects of cues being cooperating or conflicting were comparable between the CI group and NH listeners with spectrally degraded stimuli. These findings suggest the importance of taking multiple acoustic sources for speech recognition into consideration in aural rehabilitation for CI recipients. Copyright (C) 2009 S. Karger AG, Basel.

  9. Effects of cooperating and conflicting cues on speech intonation recognition by cochlear implant users and normal hearing listeners

    PubMed Central

    Peng, Shu-Chen; Lu, Nelson; Chatterjee, Monita

    2009-01-01

    Cochlear implant (CI) recipients have only limited access to fundamental frequency (F0) information, and thus exhibit deficits in speech intonation recognition. For speech intonation, F0 serves as the primary cue, and other potential acoustic cues (e.g., intensity properties) may also contribute. This study examined the effects of acoustic cues being cooperating or conflicting on speech intonation recognition by adult cochlear implant (CI), and normal-hearing (NH) listeners with full-spectrum and spectrally degraded speech stimuli. Identification of speech intonation that signifies question and statement contrasts was measured in 13 CI recipients and 4 NH listeners, using resynthesized bi-syllabic words, where F0 and intensity properties were systematically manipulated. The stimulus set was comprised of tokens whose acoustic cues, i.e., F0 contour and intensity patterns, were either cooperating or conflicting. Subjects identified if each stimulus is a “statement” or a “question” in a single-interval, two-alternative forced-choice (2AFC) paradigm. Logistic models were fitted to the data, and estimated coefficients were compared under cooperating and conflicting conditions, between the subject groups (CI vs. NH), and under full-spectrum and spectrally degraded conditions for NH listeners. The results indicated that CI listeners’ intonation recognition was enhanced by F0 contour and intensity cues being cooperating, but was adversely affected by these cues being conflicting. On the other hand, with full-spectrum stimuli, NH listeners’ intonation recognition was not affected by cues being cooperating or conflicting. The effects of cues being cooperating or conflicting were comparable between the CI group and NH listeners with spectrally-degraded stimuli. These findings suggest the importance of taking multiple acoustic sources for speech recognition into consideration in aural rehabilitation for CI recipients. PMID:19372651

  10. Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition

    PubMed Central

    Rigoulot, Simon; Wassiliwizky, Eugen; Pell, Marc D.

    2013-01-01

    Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and Kotz, 2011). To investigate whether vocal emotion recognition is largely dictated by the amount of time listeners are exposed to speech or the position of critical emotional cues in the utterance, 40 English participants judged the meaning of emotionally-inflected pseudo-utterances presented in a gating paradigm, where utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated syllable-by-syllable from the offset rather than the onset of the stimulus). Accuracy for detecting six target emotions in each gate condition and the mean identification point for each emotion in milliseconds were analyzed and compared to results from Pell and Kotz (2011). We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, and new evidence that utterance-final syllables tended to facilitate listeners' accuracy in many conditions when compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral from speech cues was not influenced by how utterances were gated, although happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally-expressed emotions within the 400–1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech. PMID:23805115

  11. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training

    PubMed Central

    Eberhardt, Silvio P.; Auer Jr., Edward T.; Bernstein, Lynne E.

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee’s lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566

  12. Spectral identification of sperm whales from Littoral Acoustic Demonstration Center passive acoustic recordings

    NASA Astrophysics Data System (ADS)

    Sidorovskaia, Natalia A.; Richard, Blake; Ioup, George E.; Ioup, Juliette W.

    2005-09-01

    The Littoral Acoustic Demonstration Center (LADC) made a series of passive broadband acoustic recordings in the Gulf of Mexico and Ligurian Sea to study noise and marine mammal phonations. The collected data contain a large number of various types of sperm whale phonations, such as isolated clicks and communication codas. It was previously reported that the spectrograms of the extracted clicks and codas contain well-defined null patterns that appear to be unique to individuals. The null pattern is formed by the individual features of an animal's sound production organs. These observations motivated the present studies of adapting human speech identification techniques to deep-diving marine mammal phonations. A trained three-state hidden Markov model (HMM) was used with the phonation spectra of sperm whales. The HMM algorithm gave 75% accuracy in identifying individuals when initially tested on the acoustic data set correlated with visual observations of sperm whales. A comparison of the identification accuracy based on null-pattern similarity analysis and the HMM algorithm is presented. The results can establish the foundation for developing an acoustic identification database for sperm whales and possibly other deep-diving marine mammals that would be difficult to observe visually. [Research supported by ONR.]
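
    In the same spirit, the sketch below trains one three-state Gaussian HMM per individual on spectral feature frames and identifies a new phonation by the highest-scoring model. It assumes the hmmlearn package and uses random stand-in features; it is not the LADC processing chain.

```python
import numpy as np
from hmmlearn import hmm

def train_individual_model(feature_sequences, n_states=3, seed=0):
    """feature_sequences: list of (n_frames, n_features) arrays for one animal."""
    X = np.vstack(feature_sequences)
    lengths = [len(s) for s in feature_sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=seed)
    model.fit(X, lengths)
    return model

def identify(models, features):
    """Return the label whose model gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(features))

# Usage sketch with random stand-in spectral features (n_frames x n_features).
rng = np.random.default_rng(0)
models = {"whale_A": train_individual_model([rng.normal(0, 1, (100, 12))]),
          "whale_B": train_individual_model([rng.normal(1, 1, (100, 12))])}
print(identify(models, rng.normal(1, 1, (50, 12))))
```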

  13. A Case Study of a Collaborative Speech-Language Pathologist

    ERIC Educational Resources Information Center

    Ritzman, Mitzi J.; Sanger, Dixie; Coufal, Kathy L.

    2006-01-01

    This study explored how a school-based speech-language pathologist implemented a classroom-based service delivery model that focused on collaborative practices in classroom settings. The study used ethnographic observations and interviews with 1 speech-language pathologist to provide insights into how she implemented collaborative consultation and…

  14. Choosing and Using Text-to-Speech Software

    ERIC Educational Resources Information Center

    Peters, Tom; Bell, Lori

    2007-01-01

    This article describes a computer-based technology for generating speech called text-to-speech (TTS). This software is ready for widespread use by libraries, other organizations, and individual users. It offers the affordable ability to turn just about any electronic text that is not image-based into an artificially spoken communication. The…

  15. An Internet-Based Telerehabilitation System for the Assessment of Motor Speech Disorders: A Pilot Study

    ERIC Educational Resources Information Center

    Hill, Anne J.; Theodoros, Deborah G.; Russell, Trevor G.; Cahill, Louise M.; Ward, Elizabeth C.; Clark, Kathy M.

    2006-01-01

    Purpose: This pilot study explored the feasibility and effectiveness of an Internet-based telerehabilitation application for the assessment of motor speech disorders in adults with acquired neurological impairment. Method: Using a counterbalanced, repeated measures research design, 2 speech-language pathologists assessed 19 speakers with…

  16. Elements of a Plan-Based Theory of Speech Acts. Technical Report No. 141.

    ERIC Educational Resources Information Center

    Cohen, Philip R.; Perrault, C. Raymond

    This report proposes that people often plan their speech acts to affect their listeners' beliefs, goals, and emotional states and that such language use can be modeled by viewing speech acts as operators in a planning system, allowing both physical and speech acts to be integrated into plans. Methodological issues of how speech acts should be…

  17. Speech rhythm analysis with decomposition of the amplitude envelope: characterizing rhythmic patterns within and across languages.

    PubMed

    Tilsen, Sam; Arvaniti, Amalia

    2013-07-01

    This study presents a method for analyzing speech rhythm using empirical mode decomposition of the speech amplitude envelope, which allows for extraction and quantification of syllabic- and supra-syllabic time-scale components of the envelope. The method of empirical mode decomposition of a vocalic energy amplitude envelope is illustrated in detail, and several types of rhythm metrics derived from this method are presented. Spontaneous speech extracted from the Buckeye Corpus is used to assess the effect of utterance length on metrics, and it is shown how metrics representing variability in the supra-syllabic time-scale components of the envelope can be used to identify stretches of speech with targeted rhythmic characteristics. Furthermore, the envelope-based metrics are used to characterize cross-linguistic differences in speech rhythm in the UC San Diego Speech Lab corpus of English, German, Greek, Italian, Korean, and Spanish speech elicited in read sentences, read passages, and spontaneous speech. The envelope-based metrics exhibit significant effects of language and elicitation method that argue for a nuanced view of cross-linguistic rhythm patterns.
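
    A rough sketch of this kind of pipeline is given below: a vocalic-band amplitude envelope is extracted and then decomposed into intrinsic mode functions, from which variability metrics can be computed. It assumes the PyEMD package (published on PyPI as EMD-signal); the band edges, cutoffs, and example metric are illustrative choices, not the authors' exact implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from PyEMD import EMD

def envelope_imfs(x, fs, lo=400.0, hi=4000.0, env_cutoff=10.0):
    """Vocalic-band energy envelope followed by empirical mode decomposition,
    which separates syllabic- and supra-syllabic-scale components into IMFs."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    band = filtfilt(b, a, x)
    b_e, a_e = butter(2, env_cutoff / (fs / 2), btype="low")
    env = filtfilt(b_e, a_e, np.abs(hilbert(band)))
    return EMD().emd(env)          # array of intrinsic mode functions

def imf_variability(imf):
    """One candidate rhythm metric: coefficient of variation of an IMF's power."""
    power = imf ** 2
    return float(np.std(power) / (np.mean(power) + 1e-12))
```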

  18. [Restoration of speech function in oncological patients with maxillary defects].

    PubMed

    Matiakin, E G; Chuchkov, V M; Akhundov, A A; Azizian, R I; Romanov, I S; Chuchkov, M V; Agapov, V V

    2009-01-01

    Speech quality was evaluated in 188 patients with acquired maxillary defects. Prosthetic treatment of 29 patients was preceded by pharmacopsychotherapy. Sixty-three patients had lessons with a logopedist, and 66 practiced self-tuition based on a specially developed test. Thirty patients were examined for quality of speech without preliminary preparation. Speech quality was assessed by auditory and spectral analysis. The main forms of impaired speech quality in the patients with maxillary defects were marked rhinophonia and impaired articulation. The proposed analytical tests were based on a combination of "difficult" vowels and consonants. The use of a removable prosthesis with an obturator failed to correct the affected speech function but created prerequisites for the formation of a correct speech stereotype. The results of the study suggest a relationship between the quality of speech in subjects with maxillary defects and their intellectual faculties as well as their desire to overcome this drawback. The proposed tests are designed to activate the neuromuscular apparatus responsible for the generation of speech. Lessons with a speech therapist give a powerful emotional incentive to the patients and promote their efforts toward restoration of speaking ability. Pharmacopsychotherapy and self-control are other efficacious tools for the improvement of speech quality in patients with maxillary defects.

  19. Childhood apraxia of speech: A survey of praxis and typical speech characteristics.

    PubMed

    Malmenholt, Ann; Lohmander, Anette; McAllister, Anita

    2017-07-01

    The purpose of this study was to investigate current knowledge of the diagnosis of childhood apraxia of speech (CAS) in Sweden and to compare speech characteristics and symptoms with earlier survey findings from mainly English-speaking populations. In a web-based questionnaire, 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics for CAS. They graded their own assessment skills and estimated the clinical occurrence. The seven top speech characteristics reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits, described as a lack of automatization of speech movements, were perceived by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for estimated clinical occurrence was 5%. The number of suspected cases of CAS in the clinical caseload was approximately one new patient per year per SLP. The results support and add to findings from studies of CAS in English-speaking children, with similar speech characteristics regarded as typical. Possibly, these findings could contribute to a cross-linguistic consensus on CAS characteristics.

  20. Predicting speech intelligibility based on the signal-to-noise envelope power ratio after modulation-frequency selective processing.

    PubMed

    Jørgensen, Søren; Dau, Torsten

    2011-09-01

    A model for predicting the intelligibility of processed noisy speech is proposed. The speech-based envelope power spectrum model has a similar structure as the model of Ewert and Dau [(2000). J. Acoust. Soc. Am. 108, 1181-1196], developed to account for modulation detection and masking data. The model estimates the speech-to-noise envelope power ratio, SNR(env), at the output of a modulation filterbank and relates this metric to speech intelligibility using the concept of an ideal observer. Predictions were compared to data on the intelligibility of speech presented in stationary speech-shaped noise. The model was further tested in conditions with noisy speech subjected to reverberation and spectral subtraction. Good agreement between predictions and data was found in all cases. For spectral subtraction, an analysis of the model's internal representation of the stimuli revealed that the predicted decrease of intelligibility was caused by the estimated noise envelope power exceeding that of the speech. The classical concept of the speech transmission index fails in this condition. The results strongly suggest that the signal-to-noise ratio at the output of a modulation frequency selective process provides a key measure of speech intelligibility. © 2011 Acoustical Society of America
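
    A highly simplified sketch of the SNR(env) idea is shown below: envelopes of the noisy speech and of the noise alone are passed through an octave-spaced modulation filterbank, and the estimated speech-to-noise envelope power ratio is combined across bands. The filter choices and combination rule are assumptions and do not reproduce the published model.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope(x, fs, cutoff=150.0):
    """Low-pass filtered Hilbert envelope."""
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(hilbert(x)))

def snr_env_db(noisy, noise, fs, env_fs=1000, mod_cf=(1, 2, 4, 8, 16, 32, 64)):
    """Envelope SNR combined over octave-spaced modulation bands (generic sketch)."""
    step = max(int(fs // env_fs), 1)
    env_sn = envelope(noisy, fs)[::step]      # envelope of speech + noise
    env_n = envelope(noise, fs)[::step]       # envelope of the noise alone
    ratios = []
    for cf in mod_cf:
        band = [cf / np.sqrt(2) / (env_fs / 2), cf * np.sqrt(2) / (env_fs / 2)]
        b, a = butter(2, band, btype="band")
        p_sn = np.mean(filtfilt(b, a, env_sn) ** 2)   # envelope power, speech + noise
        p_n = np.mean(filtfilt(b, a, env_n) ** 2)     # envelope power, noise alone
        p_s = max(p_sn - p_n, 1e-12)                  # estimated speech envelope power
        ratios.append(p_s / max(p_n, 1e-12))
    return 10.0 * np.log10(sum(ratios))
```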

  1. Speech-Language Dissociations, Distractibility, and Childhood Stuttering

    PubMed Central

    Conture, Edward G.; Walden, Tedra A.; Lambert, Warren E.

    2015-01-01

    Purpose This study investigated the relation among speech-language dissociations, attentional distractibility, and childhood stuttering. Method Participants were 82 preschool-age children who stutter (CWS) and 120 who do not stutter (CWNS). Correlation-based statistics (Bates, Appelbaum, Salcedo, Saygin, & Pizzamiglio, 2003) identified dissociations across 5 norm-based speech-language subtests. The Behavioral Style Questionnaire Distractibility subscale measured attentional distractibility. Analyses addressed (a) between-groups differences in the number of children exhibiting speech-language dissociations; (b) between-groups distractibility differences; (c) the relation between distractibility and speech-language dissociations; and (d) whether interactions between distractibility and dissociations predicted the frequency of total, stuttered, and nonstuttered disfluencies. Results More preschool-age CWS exhibited speech-language dissociations compared with CWNS, and more boys exhibited dissociations compared with girls. In addition, male CWS were less distractible than female CWS and female CWNS. For CWS, but not CWNS, less distractibility (i.e., greater attention) was associated with more speech-language dissociations. Last, interactions between distractibility and dissociations did not predict speech disfluencies in CWS or CWNS. Conclusions The present findings suggest that for preschool-age CWS, attentional processes are associated with speech-language dissociations. Future investigations are warranted to better understand the directionality of effect of this association (e.g., inefficient attentional processes → speech-language dissociations vs. inefficient attentional processes ← speech-language dissociations). PMID:26126203

  2. A Speech Recognition-based Solution for the Automatic Detection of Mild Cognitive Impairment from Spontaneous Speech

    PubMed Central

    Tóth, László; Hoffmann, Ildikó; Gosztolya, Gábor; Vincze, Veronika; Szatlóczki, Gréta; Bánréti, Zoltán; Pákáski, Magdolna; Kálmán, János

    2018-01-01

    Background: Even today the reliable diagnosis of the prodromal stages of Alzheimer’s disease (AD) remains a great challenge. Our research focuses on the earliest detectable indicators of cognitive decline in mild cognitive impairment (MCI). Since the presence of language impairment has been reported even in the mild stage of AD, the aim of this study is to develop a sensitive neuropsychological screening method based on the analysis of spontaneous speech production during a memory task. In the future, this can form the basis of an Internet-based interactive screening software for the recognition of MCI. Methods: Participants were 38 healthy controls and 48 clinically diagnosed MCI patients. Spontaneous speech was elicited by asking the patients to recall the content of 2 short black-and-white films (one directly, one after a delay), and to answer one question. Acoustic parameters (hesitation ratio, speech tempo, length and number of silent and filled pauses, length of utterance) were extracted from the recorded speech signals, first manually (using the Praat software), and then automatically, with an automatic speech recognition (ASR) based tool. First, the extracted parameters were statistically analyzed. Then we applied machine learning algorithms to see whether the MCI and the control group can be discriminated automatically based on the acoustic features. Results: The statistical analysis showed significant differences for most of the acoustic parameters (speech tempo, articulation rate, silent pause, hesitation ratio, length of utterance, pause-per-utterance ratio). The most significant differences between the two groups were found in the speech tempo in the delayed recall task, and in the number of pauses for the question-answering task. The fully automated version of the analysis process – that is, using the ASR-based features in combination with machine learning – was able to separate the two classes with an F1-score of 78.8%. Conclusion: The temporal analysis of spontaneous speech can be exploited in implementing a new, automatic detection-based tool for screening for MCI in the community. PMID:29165085

  3. A Speech Recognition-based Solution for the Automatic Detection of Mild Cognitive Impairment from Spontaneous Speech.

    PubMed

    Toth, Laszlo; Hoffmann, Ildiko; Gosztolya, Gabor; Vincze, Veronika; Szatloczki, Greta; Banreti, Zoltan; Pakaski, Magdolna; Kalman, Janos

    2018-01-01

    Even today the reliable diagnosis of the prodromal stages of Alzheimer's disease (AD) remains a great challenge. Our research focuses on the earliest detectable indicators of cognitive decline in mild cognitive impairment (MCI). Since the presence of language impairment has been reported even in the mild stage of AD, the aim of this study is to develop a sensitive neuropsychological screening method based on the analysis of spontaneous speech production during a memory task. In the future, this can form the basis of an Internet-based interactive screening software for the recognition of MCI. Participants were 38 healthy controls and 48 clinically diagnosed MCI patients. Spontaneous speech was elicited by asking the patients to recall the content of 2 short black-and-white films (one directly, one after a delay), and to answer one question. Acoustic parameters (hesitation ratio, speech tempo, length and number of silent and filled pauses, length of utterance) were extracted from the recorded speech signals, first manually (using the Praat software), and then automatically, with an automatic speech recognition (ASR) based tool. First, the extracted parameters were statistically analyzed. Then we applied machine learning algorithms to see whether the MCI and the control group can be discriminated automatically based on the acoustic features. The statistical analysis showed significant differences for most of the acoustic parameters (speech tempo, articulation rate, silent pause, hesitation ratio, length of utterance, pause-per-utterance ratio). The most significant differences between the two groups were found in the speech tempo in the delayed recall task, and in the number of pauses for the question-answering task. The fully automated version of the analysis process - that is, using the ASR-based features in combination with machine learning - was able to separate the two classes with an F1-score of 78.8%. The temporal analysis of spontaneous speech can be exploited in implementing a new, automatic detection-based tool for screening for MCI in the community. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
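
    To make the listed parameters concrete, the sketch below derives pause- and tempo-related measures from a simple energy-based speech/silence segmentation. The thresholds, frame sizes, and minimum pause duration are assumptions; the study itself used manual Praat analysis and an ASR-based tool.

```python
import numpy as np

def temporal_features(x, fs, frame_ms=25, hop_ms=10, silence_db=-35.0,
                      min_pause_ms=250, n_syllables=None):
    """Pause- and tempo-related measures from an energy-based speech/silence split."""
    frame, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    rms = np.array([np.sqrt(np.mean(x[i:i + frame] ** 2))
                    for i in range(0, len(x) - frame, hop)]) + 1e-12
    level_db = 20.0 * np.log10(rms / rms.max())
    silent = level_db < silence_db
    dur_s = len(x) / fs
    # Count silent runs of at least min_pause_ms as pauses.
    min_frames, pauses, run = int(min_pause_ms / hop_ms), 0, 0
    for s in silent:
        run = run + 1 if s else 0
        if run == min_frames:
            pauses += 1
    feats = {"utterance_length_s": dur_s,
             "n_silent_pauses": pauses,
             "hesitation_ratio": float(np.mean(silent))}     # pause time / total time
    if n_syllables is not None:                              # syllable count from a transcript
        speech_s = float(np.sum(~silent)) * hop / fs
        feats["speech_tempo_syll_per_s"] = n_syllables / dur_s
        feats["articulation_rate_syll_per_s"] = n_syllables / max(speech_s, 1e-6)
    return feats
```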

  4. An exploratory study on the driving method of speech synthesis based on the human eye reading imaging data

    NASA Astrophysics Data System (ADS)

    Gao, Pei-pei; Liu, Feng

    2016-10-01

    With the development of information technology and artificial intelligence, speech synthesis plays a significant role in the field of human-computer interaction. However, the main problem of current speech synthesis techniques is a lack of naturalness and expressiveness, so that the output does not yet reach the standard of natural language. Another problem is that human-computer interaction based on speech synthesis is too monotonous to realize a mechanism of subjective user-driven control. This paper introduces the historical development of speech synthesis and summarizes the general process of the technique, pointing out that prosody generation is an important module in the process of speech synthesis. On this basis, the use of eye-movement behaviour during reading to control and drive prosody generation is introduced as a new human-computer interaction method to enrich the synthesized output. The present state of speech synthesis technology is reviewed in detail. Building on eye-gaze data extraction, a speech synthesis method driven in real time by the eye-movement signal is proposed that can express the speaker's real speech rhythm: while the reader silently reads the corpus, reading information such as the gaze duration per prosodic unit is captured, and a hierarchical prosodic duration model is established to determine the duration parameters of the synthesized speech. Finally, the feasibility of the proposed method is verified through analysis.

  5. Pulse Vector-Excitation Speech Encoder

    NASA Technical Reports Server (NTRS)

    Davidson, Grant; Gersho, Allen

    1989-01-01

    Proposed pulse vector-excitation speech encoder (PVXC) encodes analog speech signals into digital representation for transmission or storage at rates below 5 kilobits per second. Produces high quality of reconstructed speech, but with less computation than required by comparable speech-encoding systems. Has some characteristics of multipulse linear predictive coding (MPLPC) and of code-excited linear prediction (CELP). System uses mathematical model of vocal tract in conjunction with set of excitation vectors and perceptually-based error criterion to synthesize natural-sounding speech.
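
    The "mathematical model of the vocal tract" in coders of this family is typically an all-pole linear-prediction filter. The sketch below shows LPC analysis by the autocorrelation (Levinson-Durbin) method and synthesis by filtering a pulse-train excitation through the resulting filter; the order, frame, and excitation are illustrative stand-ins, not the PVXC design.

```python
import numpy as np
from scipy.signal import lfilter

def lpc(frame, order=10):
    """All-pole (vocal-tract) coefficients via the autocorrelation method and
    the Levinson-Durbin recursion; a[0] is 1."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0] + 1e-12
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err   # reflection coefficient
        a_prev = a.copy()
        a[1:i] = a_prev[1:i] + k * a_prev[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a

# Usage sketch: fit a model to an analysis frame, then excite the all-pole
# filter with a 100 Hz pulse train (stand-ins for a real excitation codebook).
fs = 8000
rng = np.random.default_rng(0)
a = lpc(rng.standard_normal(240) * np.hamming(240))
excitation = np.zeros(fs // 10)
excitation[::80] = 1.0
synthetic = lfilter([1.0], a, excitation)
```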

  6. The a priori SDR Estimation Techniques with Reduced Speech Distortion for Acoustic Echo and Noise Suppression

    NASA Astrophysics Data System (ADS)

    Thoonsaengngam, Rattapol; Tangsangiumvisai, Nisachon

    This paper proposes an enhanced method for estimating the a priori Signal-to-Disturbance Ratio (SDR) to be employed in the Acoustic Echo and Noise Suppression (AENS) system for full-duplex hands-free communications. The proposed a priori SDR estimation technique is a modification of the Two-Step Noise Reduction (TSNR) algorithm, designed to suppress the background noise while preserving speech spectral components. In addition, a practical approach to determining the Echo Spectrum Variance (ESV) accurately is presented, based on the assumption of a linear relationship between the power spectra of the far-end speech and the acoustic echo signal. The ESV estimation technique is then employed to alleviate the acoustic echo problem. The performance of the AENS system that employs these two proposed estimation techniques is evaluated through the Echo Attenuation (EA), Noise Attenuation (NA), and two speech distortion measures. Simulation results based on real speech signals confirm that the improved AENS system is able to mitigate the problem of acoustic echo and background noise efficiently, while preserving speech quality and speech intelligibility.
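
    For orientation, the sketch below shows a generic decision-directed estimate of the a priori SNR per frequency bin, the kind of quantity that a two-step approach of this sort refines; it is a textbook form with an assumed noise power spectrum, not the authors' SDR estimator.

```python
import numpy as np

def decision_directed_snr(noisy_mag, noise_psd, alpha=0.98):
    """noisy_mag: (n_frames, n_bins) magnitude spectrogram of the noisy signal.
    noise_psd: (n_bins,) estimate of the disturbance power spectrum."""
    n_frames, n_bins = noisy_mag.shape
    snr_prio = np.zeros((n_frames, n_bins))
    prev_clean_power = np.zeros(n_bins)
    for t in range(n_frames):
        snr_post = noisy_mag[t] ** 2 / (noise_psd + 1e-12)      # a posteriori SNR
        snr_prio[t] = alpha * prev_clean_power / (noise_psd + 1e-12) \
                      + (1 - alpha) * np.maximum(snr_post - 1.0, 0.0)
        gain = snr_prio[t] / (1.0 + snr_prio[t])                 # Wiener gain
        prev_clean_power = (gain * noisy_mag[t]) ** 2            # enhanced-speech power
    return snr_prio
```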

  7. Assessment of Spectral and Temporal Resolution in Cochlear Implant Users Using Psychoacoustic Discrimination and Speech Cue Categorization.

    PubMed

    Winn, Matthew B; Won, Jong Ho; Moon, Il Joon

    This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). The authors hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. The authors further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Nineteen cochlear implant listeners and 10 listeners with normal hearing participated in a suite of tasks that included spectral ripple discrimination, temporal modulation detection, and syllable categorization, which was split into a spectral cue-based task (targeting the /ba/-/da/ contrast) and a timing cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for cochlear implant listeners. Cochlear implant users were generally less successful at utilizing both spectral and temporal cues for categorization compared with listeners with normal hearing. For the cochlear implant listener group, spectral ripple discrimination was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. Temporal modulation detection using 100- and 10-Hz-modulated noise was not correlated either with the cochlear implant subjects' categorization of voice onset time or with word recognition. Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart nonlinguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (voice onset time) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language.
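
    The logistic-regression quantification described above can be illustrated with a short sketch: the fitted slope indexes sensitivity to the manipulated cue and the 50% point gives the category boundary. The voice-onset-time continuum and responses below are invented for illustration, not data from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Voice onset time (ms) of a /b/-/p/ continuum and one listener's "p" responses.
vot_ms = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40, 45]).reshape(-1, 1)
resp_p = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(vot_ms, resp_p)
slope = model.coef_[0][0]                  # cue sensitivity (steeper = sharper category)
boundary = -model.intercept_[0] / slope    # 50% crossover point on the continuum
print(f"slope = {slope:.2f} per ms, category boundary near {boundary:.1f} ms")
```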

  8. Assessment of spectral and temporal resolution in cochlear implant users using psychoacoustic discrimination and speech cue categorization

    PubMed Central

    Winn, Matthew B.; Won, Jong Ho; Moon, Il Joon

    2016-01-01

    Objectives This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). We hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. We further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Design Nineteen CI listeners and 10 listeners with normal hearing (NH) participated in a suite of tasks that included spectral ripple discrimination (SRD), temporal modulation detection (TMD), and syllable categorization, which was split into a spectral-cue-based task (targeting the /ba/-/da/ contrast) and a timing-cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated in order to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression in order to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for CI listeners. Results CI users were generally less successful at utilizing both spectral and temporal cues for categorization compared to listeners with normal hearing. For the CI listener group, SRD was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. TMD using 100 Hz and 10 Hz modulated noise was not correlated with the CI subjects’ categorization of VOT, nor with word recognition. Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. Conclusions When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart non-linguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (VOT) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language. PMID:27438871

  9. Expertise with artificial non-speech sounds recruits speech-sensitive cortical regions

    PubMed Central

    Leech, Robert; Holt, Lori L.; Devlin, Joseph T.; Dick, Frederic

    2009-01-01

    Regions of the human temporal lobe show greater activation for speech than for other sounds. These differences may reflect intrinsically specialized domain-specific adaptations for processing speech, or they may be driven by the significant expertise we have in listening to the speech signal. To test the expertise hypothesis, we used a video-game-based paradigm that tacitly trained listeners to categorize acoustically complex, artificial non-linguistic sounds. Before and after training, we used functional MRI to measure how expertise with these sounds modulated temporal lobe activation. Participants’ ability to explicitly categorize the non-speech sounds predicted the change in pre- to post-training activation in speech-sensitive regions of the left posterior superior temporal sulcus, suggesting that emergent auditory expertise may help drive this functional regionalization. Thus, seemingly domain-specific patterns of neural activation in higher cortical regions may be driven in part by experience-based restructuring of high-dimensional perceptual space. PMID:19386919

  10. Effects of Speaking Rate on Word Recognition in Parkinson’s Disease and Normal Aging

    PubMed Central

    Forrest, Karen; Nygaard, Lynne; Pisoni, David B.; Siemers, Eric

    2011-01-01

Current theories of basal ganglia function emphasize their role in the integration of sensory information into motor activities, particularly in the control of movement timing. People with basal ganglia disorders such as Parkinson’s disease exhibit poor temporal control of movements, in general and articulation in particular, as demonstrated by irregular speaking rate, reduced stress contrasts, and reduced movement durations and velocities. Previous research has implicated sensory deficits as contributory factors in limb movement control in patients with Parkinson’s disease; however, the relation between sensory deficits and speech-movement abnormalities has not been documented. In the present study, the existence of perceptual processing difficulties of speaking rate was investigated in subjects with Parkinsonian dysarthria (PD). Comparisons in perception were made between subjects with PD, neurologically normal geriatrics (GN) and neurologically normal young adults (YN) for accuracy in identification of words presented at different speaking rates. We hypothesized that word-identification scores would be lower for PD and GN subjects compared to the YN subjects, an effect that was supported by the data. We also expected that there would be differences between the GN and PD subjects in their accuracy of word identification at a faster speaking rate, an hypothesis that was not supported by the data. Rather, GN and PD subjects differed in identification scores for words spoken at a slow rate. PD subjects who had faster habitual speaking rates (HSR) had significantly lower word-identification scores in the slow compared to conversational rate conditions, a relation that was significant (r = +0.64). These data suggest the need to consider perceptual deficits as an additional factor that contributes to rate variations in PD speech. PMID:21637728

  11. Speaker identification for the improvement of the security communication between law enforcement units

    NASA Astrophysics Data System (ADS)

    Tovarek, Jaromir; Partila, Pavol

    2017-05-01

This article discusses speaker identification for improving the security of communication between law enforcement units. The main task of this research was to develop a text-independent speaker identification system that can be used for real-time recognition. The system is designed for identification in an open set, meaning that the unknown speaker can be anyone. The communication itself is secured, but the authorization of the communicating parties must still be checked; we have to decide whether the unknown speaker is authorized for the given action. Calls are recorded by an IP telephony server and the recordings are then evaluated by a classifier. If the system determines that the speaker is not authorized, it sends a warning message to the administrator. Such a message can indicate, for example, a stolen phone or another unusual situation, and the administrator then takes the appropriate actions. The proposed system uses a multilayer neural network for classification, consisting of three layers (input, hidden, and output). The number of neurons in the input layer corresponds to the length of the speech feature vector, and the output layer represents the classified speakers. The network classifies the speech signal frame by frame, but the final decision is made over the complete recording; this rule substantially increases classification accuracy. The input data for the neural network are thirteen Mel-frequency cepstral coefficients, which describe the behavior of the vocal tract and are the features most commonly used for speaker recognition. Parameters for training, testing, and validation were extracted from recordings of authorized users, and the recording conditions for the training data (sampling frequency, bit rate) correspond to the real traffic of the system. The main benefit of the research is the developed text-independent speaker identification system, which is applied to secure communication between law enforcement units.
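
    A minimal sketch of the processing chain described above (13 MFCCs per frame, frame-by-frame classification with a multilayer network, and a decision pooled over the whole recording), assuming librosa and scikit-learn as stand-ins; the file names, layer size, and sampling rate are illustrative assumptions rather than the system's actual configuration:

      # Sketch: text-independent speaker identification from 13 MFCCs per frame,
      # classified frame by frame with a small multilayer network; the final
      # decision is a majority vote over all frames of the recording.
      import numpy as np
      import librosa
      from sklearn.neural_network import MLPClassifier

      def mfcc_frames(path, sr=8000):
          y, _ = librosa.load(path, sr=sr)
          return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # shape (frames, 13)

      # Hypothetical enrollment material: {speaker label: list of call recordings}.
      train = {0: ["speaker0_call1.wav"], 1: ["speaker1_call1.wav"]}
      feats, labels = [], []
      for spk, paths in train.items():
          for p in paths:
              f = mfcc_frames(p)
              feats.append(f)
              labels.append(np.full(len(f), spk))
      X, y = np.vstack(feats), np.concatenate(labels)

      clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)

      def identify(path):
          frame_votes = clf.predict(mfcc_frames(path))   # per-frame speaker decisions
          return np.bincount(frame_votes).argmax()       # decision over the whole record

      print(identify("unknown_call.wav"))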

  12. Towards Artificial Speech Therapy: A Neural System for Impaired Speech Segmentation.

    PubMed

    Iliya, Sunday; Neri, Ferrante

    2016-09-01

This paper presents a neural system-based technique for segmenting short impaired speech utterances into silent, unvoiced, and voiced sections. Moreover, the proposed technique identifies those points of the (voiced) speech where the spectrum becomes steady. The resulting technique thus aims at detecting that limited section of the speech which contains the information about the potential impairment of the speech. This section is of interest to the speech therapist as it corresponds to the possibly incorrect movements of speech organs (lower lip and tongue with respect to the vocal tract). Two segmentation models to detect and identify the various sections of the disordered (impaired) speech signals have been developed and compared. The first makes use of a combination of four artificial neural networks. The second is based on a support vector machine (SVM). The SVM has been trained by means of an ad hoc nested algorithm whose outer layer is a metaheuristic while the inner layer is a convex optimization algorithm. Several metaheuristics have been tested and compared, leading to the conclusion that some variants of the compact differential evolution (CDE) algorithm appear to be well suited to address this problem. Numerical results show that the SVM model with a radial basis function is capable of effective detection of the portion of speech that is of interest to a therapist. The best performance has been achieved when the system is trained by the nested algorithm whose outer layer is hybrid-population-based/CDE. A population-based approach displays the best performance for the isolation of silence/noise sections, and the detection of unvoiced sections. On the other hand, a compact approach appears to be clearly well-suited to detect the beginning of the steady state of the voiced signal. Both of the proposed segmentation models outperformed two modern segmentation techniques based on Gaussian mixture models and deep learning.
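
    A minimal sketch of the frame-level segmentation step, assuming an RBF-kernel SVM trained with an ordinary grid search in place of the paper's nested metaheuristic/convex training algorithm; the two frame features and the placeholder labels are illustrative assumptions:

      # Sketch: classify fixed-length frames of an utterance as silence (0),
      # unvoiced (1) or voiced (2) with an RBF-kernel SVM. A plain grid search
      # stands in for the paper's nested metaheuristic training; the frame
      # features (log energy, zero-crossing rate) and labels are illustrative.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import GridSearchCV

      def frame_features(signal, frame_len=256):
          frames = signal[: len(signal) // frame_len * frame_len].reshape(-1, frame_len)
          log_energy = np.log(np.sum(frames ** 2, axis=1) + 1e-12)
          zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
          return np.column_stack([log_energy, zcr])

      # Placeholder training material; real use would supply labelled utterances.
      rng = np.random.default_rng(1)
      train_signal = rng.standard_normal(256 * 300)
      train_labels = rng.integers(0, 3, 300)

      svm = GridSearchCV(SVC(kernel="rbf"), {"C": [1, 10], "gamma": ["scale", 0.1]})
      svm.fit(frame_features(train_signal), train_labels)
      segments = svm.predict(frame_features(train_signal))   # per-frame section labels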

  13. Noise reduction algorithm with the soft thresholding based on the Shannon entropy and bone-conduction speech cross- correlation bands.

    PubMed

    Na, Sung Dae; Wei, Qun; Seong, Ki Woong; Cho, Jin Ho; Kim, Myoung Nam

    2018-01-01

The conventional methods of speech enhancement, noise reduction, and voice activity detection are based on the suppression of noise or non-speech components of the target air-conduction signals. However, air-conducted speech is hard to differentiate from babble or white noise signals. To overcome this problem, the proposed algorithm uses the bone-conduction speech signals and soft thresholding based on the Shannon entropy principle and cross-correlation of air- and bone-conduction signals. A new algorithm for speech detection and noise reduction is proposed, which makes use of the Shannon entropy principle and cross-correlation with the bone-conduction speech signals to threshold the wavelet packet coefficients of the noisy speech. The proposed method achieves efficient results according to objective quality measures, namely PESQ, RMSE, correlation, and SNR. Each threshold is generated by the entropy and cross-correlation approaches in the decomposed bands using the wavelet packet decomposition. In MATLAB simulations, the proposed method reduces the noise. To verify the feasibility of the method, we compared the air- and bone-conduction speech signals and their spectra after processing. As a result, the high performance of the proposed method is confirmed, which makes it quite instrumental for future applications in communication devices, noisy environments, construction, and military operations.
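
    A minimal sketch of per-band soft thresholding guided by the bone-conduction signal, assuming PyWavelets; the threshold rule below (a universal threshold scaled down in bands that correlate strongly with the bone-conduction speech) is a simplified stand-in for the paper's entropy- and cross-correlation-based thresholds:

      # Sketch: wavelet-packet soft thresholding of a noisy air-conduction signal,
      # with each band's threshold relaxed when that band correlates strongly with
      # the time-aligned bone-conduction speech (a simplified stand-in for the
      # entropy/cross-correlation rule of the paper).
      import numpy as np
      import pywt

      def denoise(air, bone, wavelet="db4", level=4):
          wp_air = pywt.WaveletPacket(air, wavelet, maxlevel=level)
          wp_bone = pywt.WaveletPacket(bone, wavelet, maxlevel=level)
          for node in wp_air.get_level(level, order="natural"):
              a, b = node.data, wp_bone[node.path].data
              corr = abs(np.nan_to_num(np.corrcoef(a, b)[0, 1]))
              sigma = np.median(np.abs(a)) / 0.6745                 # robust noise estimate
              thr = sigma * np.sqrt(2 * np.log(len(a))) * (1 - corr)
              wp_air[node.path] = pywt.threshold(a, thr, mode="soft")
          return wp_air.reconstruct(update=False)

      # Placeholder signals standing in for simultaneous air/bone recordings.
      t = np.arange(8000) / 8000
      bone = np.sin(2 * np.pi * 200 * t)
      air = bone + 0.3 * np.random.default_rng(0).standard_normal(len(t))
      enhanced = denoise(air, bone)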

  14. The Role of Categorical Speech Perception and Phonological Processing in Familial Risk Children With and Without Dyslexia.

    PubMed

    Hakvoort, Britt; de Bree, Elise; van der Leij, Aryan; Maassen, Ben; van Setten, Ellie; Maurits, Natasha; van Zuijen, Titia L

    2016-12-01

    This study assessed whether a categorical speech perception (CP) deficit is associated with dyslexia or familial risk for dyslexia, by exploring a possible cascading relation from speech perception to phonology to reading and by identifying whether speech perception distinguishes familial risk (FR) children with dyslexia (FRD) from those without dyslexia (FRND). Data were collected from 9-year-old FRD (n = 37) and FRND (n = 41) children and age-matched controls (n = 49) on CP identification and discrimination and on the phonological processing measures rapid automatized naming, phoneme awareness, and nonword repetition. The FRD group performed more poorly on CP than the FRND and control groups. Findings on phonological processing align with the literature in that (a) phonological processing related to reading and (b) the FRD group showed the lowest phonological processing outcomes. Furthermore, CP correlated weakly with reading, but this relationship was fully mediated by rapid automatized naming. Although CP phonological skills are related to dyslexia, there was no strong evidence for a cascade from CP to phonology to reading. Deficits in CP at the behavioral level are not directly associated with dyslexia.

  15. Voice-onset time and buzz-onset time identification: A ROC analysis

    NASA Astrophysics Data System (ADS)

    Lopez-Bascuas, Luis E.; Rosner, Burton S.; Garcia-Albea, Jose E.

    2004-05-01

Previous studies have employed signal detection theory to analyze data from speech and nonspeech experiments. Typically, signal distributions were assumed to be Gaussian. Schouten and van Hessen [J. Acoust. Soc. Am. 104, 2980-2990 (1998)] explicitly tested this assumption for an intensity continuum and a speech continuum. They measured response distributions directly and, assuming an interval scale, concluded that the Gaussian assumption held for both continua. However, Pastore and Macmillan [J. Acoust. Soc. Am. 111, 2432 (2002)] applied ROC analysis to Schouten and van Hessen's data, assuming only an ordinal scale. Their ROC curves supported the Gaussian assumption for the nonspeech signals only. Previously, Lopez-Bascuas [Proc. Audit. Bas. Speech Percept., 158-161 (1997)] found evidence with a rating scale procedure that the Gaussian model was inadequate for a voice-onset time continuum but not for a noise-buzz continuum. Both continua contained ten stimuli with asynchronies ranging from -35 ms to +55 ms. ROC curves (double-probability plots) are now reported for each pair of adjacent stimuli on the two continua. Both speech and nonspeech ROCs often appeared nonlinear, indicating non-Gaussian signal distributions under the usual zero-variance assumption for response criteria.
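
    A minimal sketch of a rating-scale ROC in double-probability (z) coordinates for one pair of adjacent stimuli; the rating counts are invented, and a roughly linear zROC would be consistent with Gaussian signal distributions:

      # Sketch: rating-scale ROC for one pair of adjacent continuum stimuli,
      # replotted in z-coordinates (a double-probability plot); a straight line
      # with unit slope is consistent with equal-variance Gaussian distributions.
      import numpy as np
      from scipy.stats import norm

      # Invented rating counts (1 = "surely stimulus A" ... 6 = "surely stimulus B").
      ratings_A = np.array([30, 25, 20, 12, 8, 5])
      ratings_B = np.array([5, 8, 14, 20, 25, 28])

      # Cumulate from the most B-like criterion downward to obtain the ROC points.
      fa = np.cumsum(ratings_A[::-1])[:-1] / ratings_A.sum()    # false-alarm rates
      hit = np.cumsum(ratings_B[::-1])[:-1] / ratings_B.sum()   # hit rates
      z_fa, z_hit = norm.ppf(fa), norm.ppf(hit)

      slope, intercept = np.polyfit(z_fa, z_hit, 1)             # zROC line
      print(f"zROC slope: {slope:.2f}, intercept: {intercept:.2f}")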

  16. Production and perception of whispered vowels

    NASA Astrophysics Data System (ADS)

    Kiefte, Michael

    2005-09-01

Information normally associated with pitch, such as intonation, can still be conveyed in whispered speech despite the absence of voicing. For example, it is possible to whisper the question "You are going today?" without any syntactic information to distinguish this sentence from a simple declarative. It has been shown that pitch change in whispered speech is correlated with the simultaneous raising or lowering of several formants [e.g., M. Kiefte, J. Acoust. Soc. Am. 116, 2546 (2004)]. However, spectral peak frequencies associated with formants have been identified as important correlates to vowel identity. Spectral peak frequencies may serve two roles in the perception of whispered speech: to indicate both vowel identity and intended pitch. Data will be presented to examine the relative importance of several acoustic properties including spectral peak frequencies and spectral shape parameters in both the production and perception of whispered vowels. Speakers were asked to phonate and whisper vowels at three different pitches across a range of roughly a musical fifth. It will be shown that relative spectral change is preserved within vowels across intended pitches in whispered speech. In addition, several models of vowel identification by listeners will be presented. [Work supported by SSHRC.]

  17. Status report on speech research. A report on the status and progress of studies of the nature of speech, instrumentation for its investigation and practical applications

    NASA Astrophysics Data System (ADS)

    Studdert-Kennedy, M.; Obrien, N.

    1983-05-01

This report is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. Manuscripts cover the following topics: The influence of subcategorical mismatches on lexical access; The Serbo-Croatian orthography constrains the reader to a phonologically analytic strategy; Grammatical priming effects between pronouns and inflected verb forms; Misreadings by beginning readers of Serbo-Croatian; Bi-alphabetism and word recognition; Orthographic and phonemic coding for word identification: Evidence for Hebrew; Stress and vowel duration effects on syllable recognition; Phonetic and auditory trading relations between acoustic cues in speech perception: Further results; Linguistic coding by deaf children in relation to beginning reading success; Determinants of spelling ability in deaf and hearing adults: Access to linguistic structures; A dynamical basis for action systems; On the space-time structure of human interlimb coordination; Some acoustic and physiological observations on diphthongs; Relationship between pitch control and vowel articulation; Laryngeal vibrations: A comparison between high-speed filming and glottographic techniques; Compensatory articulation in hearing impaired speakers: A cinefluorographic study; and Review (Pierre Delattre: Studies in comparative phonetics.)

  18. Speaker recognition with temporal cues in acoustic and electric hearing

    NASA Astrophysics Data System (ADS)

    Vongphoe, Michael; Zeng, Fan-Gang

    2005-08-01

    Natural spoken language processing includes not only speech recognition but also identification of the speaker's gender, age, emotional, and social status. Our purpose in this study is to evaluate whether temporal cues are sufficient to support both speech and speaker recognition. Ten cochlear-implant and six normal-hearing subjects were presented with vowel tokens spoken by three men, three women, two boys, and two girls. In one condition, the subject was asked to recognize the vowel. In the other condition, the subject was asked to identify the speaker. Extensive training was provided for the speaker recognition task. Normal-hearing subjects achieved nearly perfect performance in both tasks. Cochlear-implant subjects achieved good performance in vowel recognition but poor performance in speaker recognition. The level of the cochlear implant performance was functionally equivalent to normal performance with eight spectral bands for vowel recognition but only to one band for speaker recognition. These results show a disassociation between speech and speaker recognition with primarily temporal cues, highlighting the limitation of current speech processing strategies in cochlear implants. Several methods, including explicit encoding of fundamental frequency and frequency modulation, are proposed to improve speaker recognition for current cochlear implant users.

  19. Transcranial magnetic stimulation: language function.

    PubMed

    Epstein, C M

    1998-07-01

    Studies of language using transcranial magnetic stimulation (TMS) have focused both on identification of language areas and on elucidation of function. TMS may result in either inhibition or facilitation of language processes and may operate directly at a presumptive site of language cortex or indirectly through intracortical networks. TMS has been used to create reversible "temporary lesions," similar to those produced by Wada tests and direct cortical electrical stimulation, in cerebral cortical areas subserving language function. Rapid-rate TMS over the left inferior frontal region blocks speech output in most subjects. However, the results are not those predicted from classic models of language organization. Speech arrest is obtained most easily over facial motor cortex, and true aphasia is rare, whereas right hemisphere or bilateral lateralization is unexpectedly prominent. A clinical role for these techniques is not yet fully established. Interfering with language comprehension and verbal memory is currently more difficult than blocking speech output, but numerous TMS studies have demonstrated facilitation of language-related tasks, including oral word association, story recall, digit span, and picture naming. Conversely, speech output also facilitates motor responses to TMS in the dominant hemisphere. Such new and often-unexpected findings may provide important insights into the organization of language.

  20. Rural and remote speech-language pathology service inequities: An Australian human rights dilemma.

    PubMed

    Jones, Debra M; McAllister, Lindy; Lyle, David M

    2018-02-01

    Access to healthcare is a fundamental human right for all Australians. Article 19 of the Universal Declaration of Human Rights acknowledges the right to freedom of opinion and to seek, receive and impart information and ideas. Capacities for self-expression and effective communication underpin the realisation of these fundamental human rights. For rural and remote Australian children this realisation is compromised by complex disadvantages and inequities that contribute to communication delays, inequity of access to essential speech-language pathology services and poorer later life outcomes. Localised solutions to the provision of civically engaged, accessible, acceptable and sustainable speech-language pathology services within rural and remote Australian contexts are required if we are to make substantive human rights gains. However, civically engaged and sustained healthcare can significantly challenge traditional professionalised perspectives on how best to design and implement speech-language pathology services that seek to address rural and remote communication needs and access inequities. A failure to engage these communities in the identification of childhood communication delays and solutions to address these delays, ultimately denies children, families and communities of their human rights for healthcare access, self-expression, self-dignity and meaningful inclusion within Australian society.

  1. Combined electric and acoustic hearing performance with Zebra® speech processor: speech reception, place, and temporal coding evaluation.

    PubMed

    Vaerenberg, Bart; Péan, Vincent; Lesbros, Guillaume; De Ceulaer, Geert; Schauwers, Karen; Daemers, Kristin; Gnansia, Dan; Govaerts, Paul J

    2013-06-01

To assess the auditory performance of Digisonic® cochlear implant users with electric stimulation (ES) and electro-acoustic stimulation (EAS) with special attention to the processing of low-frequency temporal fine structure. Six patients implanted with a Digisonic® SP implant and showing low-frequency residual hearing were fitted with the Zebra® speech processor providing both electric and acoustic stimulation. Assessment consisted of monosyllabic speech identification tests in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonating complex sounds (Vaerenberg et al., 2011). These tests investigate place and time coding through pitch discrimination. All tasks were performed with ES only and with EAS. Speech results in noise showed significant improvement with EAS when compared to ES. Whereas EAS did not yield better results in the harmonic intonation test, the improvements in the disharmonic intonation test were remarkable, suggesting better coding of pitch cues requiring phase locking. These results suggest that patients with residual hearing in the low-frequency range still have good phase-locking capacities, allowing them to process fine temporal information. ES relies mainly on place coding but provides poor low-frequency temporal coding, whereas EAS also provides temporal coding in the low-frequency range. Patients with residual phase-locking capacities can make use of these cues.

  2. Neurophysiological evidence for the interplay of speech segmentation and word-referent mapping during novel word learning.

    PubMed

    François, Clément; Cunillera, Toni; Garcia, Enara; Laine, Matti; Rodriguez-Fornells, Antoni

    2017-04-01

    Learning a new language requires the identification of word units from continuous speech (the speech segmentation problem) and mapping them onto conceptual representation (the word to world mapping problem). Recent behavioral studies have revealed that the statistical properties found within and across modalities can serve as cues for both processes. However, segmentation and mapping have been largely studied separately, and thus it remains unclear whether both processes can be accomplished at the same time and if they share common neurophysiological features. To address this question, we recorded EEG of 20 adult participants during both an audio alone speech segmentation task and an audiovisual word-to-picture association task. The participants were tested for both the implicit detection of online mismatches (structural auditory and visual semantic violations) as well as for the explicit recognition of words and word-to-picture associations. The ERP results from the learning phase revealed a delayed learning-related fronto-central negativity (FN400) in the audiovisual condition compared to the audio alone condition. Interestingly, while online structural auditory violations elicited clear MMN/N200 components in the audio alone condition, visual-semantic violations induced meaning-related N400 modulations in the audiovisual condition. The present results support the idea that speech segmentation and meaning mapping can take place in parallel and act in synergy to enhance novel word learning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Music Training Can Improve Music and Speech Perception in Pediatric Mandarin-Speaking Cochlear Implant Users.

    PubMed

    Cheng, Xiaoting; Liu, Yangwenyi; Shu, Yilai; Tao, Duo-Duo; Wang, Bing; Yuan, Yasheng; Galvin, John J; Fu, Qian-Jie; Chen, Bing

    2018-01-01

    Due to limited spectral resolution, cochlear implants (CIs) do not convey pitch information very well. Pitch cues are important for perception of music and tonal language; it is possible that music training may improve performance in both listening tasks. In this study, we investigated music training outcomes in terms of perception of music, lexical tones, and sentences in 22 young (4.8 to 9.3 years old), prelingually deaf Mandarin-speaking CI users. Music perception was measured using a melodic contour identification (MCI) task. Speech perception was measured for lexical tones and sentences presented in quiet. Subjects received 8 weeks of MCI training using pitch ranges not used for testing. Music and speech perception were measured at 2, 4, and 8 weeks after training was begun; follow-up measures were made 4 weeks after training was stopped. Mean baseline performance was 33.2%, 76.9%, and 45.8% correct for MCI, lexical tone recognition, and sentence recognition, respectively. After 8 weeks of MCI training, mean performance significantly improved by 22.9, 14.4, and 14.5 percentage points for MCI, lexical tone recognition, and sentence recognition, respectively ( p < .05 in all cases). Four weeks after training was stopped, there was no significant change in posttraining music and speech performance. The results suggest that music training can significantly improve pediatric Mandarin-speaking CI users' music and speech perception.

  4. Value of pre-operative caloric test in predicting speech perception after cochlear implantation in adults with post-lingual hearing loss.

    PubMed

    Yang, Chan Joo; Lee, Jee Yeon; Ahn, Joong Ho; Lee, Kwang-Sun

    2016-09-01

    This study shows that, in cochlear implantation (CI) surgery, pre-operative caloric test results are not correlated with post-operative outcomes of dizziness or speech perception. To determine the role of pre-operative caloric tests in CI. The records of 95 patients who underwent unilateral CI were reviewed retrospectively. Patients were divided into four groups according to caloric response. Forty-six patients with normal caloric responses were classified as Group A, 19 patients who underwent CI in the ear with worse caloric responses as Group B, 18 patients with bilateral loss of caloric responses as Group C, and 12 patients who underwent CI in the ear with better caloric responses as Group D. Speech performance and post-operative dizziness were compared between the four groups. Speech perception was determined by evaluating consonant-vowel phoneme detection, closed-set word and open-set mono-syllabic and bi-syllabic word identification, and sentence comprehension test scores. The speech perception and aided pure-tone average (PTA) test results at 3 and 6 months and at 1, 2, and 3 years after implantation were not significantly different between Groups A, B, C, and D (p > 0.05). Eight patients (8.4%) reported post-operative dizziness, but there was no significant difference between the four groups (p = 0.627).

  5. Commercial speech in crisis: Crisis Pregnancy Center regulations and definitions of commercial speech.

    PubMed

    Gilbert, Kathryn E

    2013-02-01

    Recent attempts to regulate Crisis Pregnancy Centers, pseudoclinics that surreptitiously aim to dissuade pregnant women from choosing abortion, have confronted the thorny problem of how to define commercial speech. The Supreme Court has offered three potential answers to this definitional quandary. This Note uses the Crisis Pregnancy Center cases to demonstrate that courts should use one of these solutions, the factor-based approach of Bolger v. Youngs Drugs Products Corp., to define commercial speech in the Crisis Pregnancy Center cases and elsewhere. In principle and in application, the Bolger factor-based approach succeeds in structuring commercial speech analysis at the margins of the doctrine.

  6. Speech graphs provide a quantitative measure of thought disorder in psychosis.

    PubMed

    Mota, Natalia B; Vasconcelos, Nivaldo A P; Lemos, Nathalia; Pieretti, Ana C; Kinouchi, Osame; Cecchi, Guillermo A; Copelli, Mauro; Ribeiro, Sidarta

    2012-01-01

    Psychosis has various causes, including mania and schizophrenia. Since the differential diagnosis of psychosis is exclusively based on subjective assessments of oral interviews with patients, an objective quantification of the speech disturbances that characterize mania and schizophrenia is in order. In principle, such quantification could be achieved by the analysis of speech graphs. A graph represents a network with nodes connected by edges; in speech graphs, nodes correspond to words and edges correspond to semantic and grammatical relationships. To quantify speech differences related to psychosis, interviews with schizophrenics, manics and normal subjects were recorded and represented as graphs. Manics scored significantly higher than schizophrenics in ten graph measures. Psychopathological symptoms such as logorrhea, poor speech, and flight of thoughts were grasped by the analysis even when verbosity differences were discounted. Binary classifiers based on speech graph measures sorted schizophrenics from manics with up to 93.8% of sensitivity and 93.7% of specificity. In contrast, sorting based on the scores of two standard psychiatric scales (BPRS and PANSS) reached only 62.5% of sensitivity and specificity. The results demonstrate that alterations of the thought process manifested in the speech of psychotic patients can be objectively measured using graph-theoretical tools, developed to capture specific features of the normal and dysfunctional flow of thought, such as divergence and recurrence. The quantitative analysis of speech graphs is not redundant with standard psychometric scales but rather complementary, as it yields a very accurate sorting of schizophrenics and manics. Overall, the results point to automated psychiatric diagnosis based not on what is said, but on how it is said.
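
    A minimal sketch of the graph representation, assuming networkx: nodes are words, edges link consecutive words, and a few global measures of the kind used in the study are computed; the transcript is invented:

      # Sketch: represent a transcribed speech excerpt as a directed word graph
      # (nodes = words, edges = links between consecutive words) and compute a few
      # global measures of the kind used to quantify thought disorder.
      import networkx as nx

      transcript = "i went to the market and then i went home and then i slept"
      words = transcript.split()

      g = nx.DiGraph()
      g.add_edges_from(zip(words, words[1:]))

      measures = {
          "nodes": g.number_of_nodes(),
          "edges": g.number_of_edges(),
          "density": nx.density(g),
          "largest_strongly_connected": len(max(nx.strongly_connected_components(g), key=len)),
      }
      print(measures)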

  7. Micro-Based Speech Recognition: Instructional Innovation for Handicapped Learners.

    ERIC Educational Resources Information Center

    Horn, Carin E.; Scott, Brian L.

    A new voice based learning system (VBLS), which allows the handicapped user to interact with a microcomputer by voice commands, is described. Speech or voice recognition is the computerized process of identifying a spoken word or phrase, including those resulting from speech impediments. This new technology is helpful to the severely physically…

  8. Is Comprehension Necessary for Error Detection? A Conflict-Based Account of Monitoring in Speech Production

    ERIC Educational Resources Information Center

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…

  9. Using Web Speech Technology with Language Learning Applications

    ERIC Educational Resources Information Center

    Daniels, Paul

    2015-01-01

    In this article, the author presents the history of human-to-computer interaction based upon the design of sophisticated computerized speech recognition algorithms. Advancements such as the arrival of cloud-based computing and software like Google's Web Speech API allows anyone with an Internet connection and Chrome browser to take advantage of…

  10. Evidence-Based Practice for Children with Speech Sound Disorders: Part 1 Narrative Review

    ERIC Educational Resources Information Center

    Baker, Elise; McLeod, Sharynne

    2011-01-01

    Purpose: This article provides a comprehensive narrative review of intervention studies for children with speech sound disorders (SSD). Its companion paper (Baker & McLeod, 2011) provides a tutorial and clinical example of how speech-language pathologists (SLPs) can engage in evidence-based practice (EBP) for this clinical population. Method:…

  11. Visemic Processing in Audiovisual Discrimination of Natural Speech: A Simultaneous fMRI-EEG Study

    ERIC Educational Resources Information Center

    Dubois, Cyril; Otzenberger, Helene; Gounot, Daniel; Sock, Rudolph; Metz-Lutz, Marie-Noelle

    2012-01-01

    In a noisy environment, visual perception of articulatory movements improves natural speech intelligibility. Parallel to phonemic processing based on auditory signal, visemic processing constitutes a counterpart based on "visemes", the distinctive visual units of speech. Aiming at investigating the neural substrates of visemic processing in a…

  12. Voxel-based morphometry of auditory and speech-related cortex in stutterers.

    PubMed

    Beal, Deryk S; Gracco, Vincent L; Lafaille, Sophie J; De Nil, Luc F

    2007-08-06

    Stutterers demonstrate unique functional neural activation patterns during speech production, including reduced auditory activation, relative to nonstutterers. The extent to which these functional differences are accompanied by abnormal morphology of the brain in stutterers is unclear. This study examined the neuroanatomical differences in speech-related cortex between stutterers and nonstutterers using voxel-based morphometry. Results revealed significant differences in localized grey matter and white matter densities of left and right hemisphere regions involved in auditory processing and speech production.

  13. Particle Swarm Optimization Based Feature Enhancement and Feature Selection for Improved Emotion Recognition in Speech and Glottal Signals

    PubMed Central

    Muthusamy, Hariharan; Polat, Kemal; Yaacob, Sazali

    2015-01-01

In recent years, many research works have been published using speech-related features for speech emotion recognition; however, recent studies show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features and relative wavelet packet energy and entropy features were extracted from the emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper based particle swarm optimization (WPSO) were proposed to enhance the discerning ability of the features and to select the discriminating features respectively. Three different emotional speech databases were utilized to gauge the proposed method. Extreme learning machine (ELM) was employed to classify the different types of emotions. Different experiments were conducted and the results show that the proposed method significantly improves the speech emotion recognition performance compared to previous works published in the literature. PMID:25799141
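
    A minimal sketch of wrapper-style feature selection with a binary particle swarm, using synthetic data and an SVM as a stand-in for the extreme learning machine; the swarm size, inertia, and acceleration constants are illustrative assumptions rather than the paper's settings:

      # Sketch: wrapper feature selection with a binary particle swarm. A particle
      # is a 0/1 mask over features and its fitness is the cross-validated accuracy
      # of a classifier trained on the selected features (an SVM on synthetic data
      # stands in for the extreme learning machine on emotional speech features).
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)

      def fitness(mask):
          if mask.sum() == 0:
              return 0.0
          return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

      n_particles, n_iter, n_feat = 10, 15, X.shape[1]
      pos = (rng.random((n_particles, n_feat)) > 0.5).astype(float)
      vel = rng.normal(0, 1, (n_particles, n_feat))
      pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_fit.argmax()].copy()

      for _ in range(n_iter):
          vel = (0.7 * vel
                 + 1.5 * rng.random((n_particles, n_feat)) * (pbest - pos)
                 + 1.5 * rng.random((n_particles, n_feat)) * (gbest - pos))
          pos = (rng.random((n_particles, n_feat)) < 1 / (1 + np.exp(-vel))).astype(float)
          fit = np.array([fitness(p) for p in pos])
          better = fit > pbest_fit
          pbest[better], pbest_fit[better] = pos[better], fit[better]
          gbest = pbest[pbest_fit.argmax()].copy()

      print("selected features:", np.flatnonzero(gbest), "cv accuracy:", round(float(fitness(gbest)), 3))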

  14. Effects of Additional Low-Pass-Filtered Speech on Listening Effort for Noise-Band-Vocoded Speech in Quiet and in Noise.

    PubMed

    Pals, Carina; Sarampalis, Anastasios; van Dijk, Mart; Başkent, Deniz

    2018-05-11

Residual acoustic hearing in electric-acoustic stimulation (EAS) can benefit cochlear implant (CI) users in increased sound quality, speech intelligibility, and improved tolerance to noise. The goal of this study was to investigate whether the low-pass-filtered acoustic speech in simulated EAS can provide the additional benefit of reducing listening effort for the spectrotemporally degraded signal of noise-band-vocoded speech. Listening effort was investigated using a dual-task paradigm as a behavioral measure, and the NASA Task Load Index as a subjective self-report measure. The primary task of the dual-task paradigm was identification of sentences presented in three experiments at three fixed intelligibility levels: at near-ceiling, 50%, and 79% intelligibility, achieved by manipulating the presence and level of speech-shaped noise in the background. Listening effort for the primary intelligibility task was reflected in the performance on the secondary, visual response time task. Experimental speech processing conditions included monaural or binaural vocoder, with added low-pass-filtered speech (to simulate EAS) or without (to simulate CI). In Experiment 1, in quiet with intelligibility near-ceiling, additional low-pass-filtered speech reduced listening effort compared with binaural vocoder, in line with our expectations, although not compared with monaural vocoder. In Experiments 2 and 3, for speech in noise, added low-pass-filtered speech allowed the desired intelligibility levels to be reached at less favorable speech-to-noise ratios, as expected. It is interesting that this came without the cost of increased listening effort usually associated with poor speech-to-noise ratios; at 50% intelligibility, even a reduction in listening effort on top of the increased tolerance to noise was observed. The NASA Task Load Index did not capture these differences. The dual-task results provide partial evidence for a potential decrease in listening effort as a result of adding low-frequency acoustic speech to noise-band-vocoded speech. Whether these findings translate to CI users with residual acoustic hearing will need to be addressed in future research because the quality and frequency range of low-frequency acoustic sound available to listeners with hearing loss may differ from our idealized simulations, and additional factors, such as advanced age and varying etiology, may also play a role.
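
    A minimal sketch of how a simulated EAS stimulus can be assembled, assuming SciPy: low-pass-filtered speech is added to a separately vocoded version of the same sentence; the cutoff frequency and filter order are illustrative assumptions, and the vocoded signal here is only a placeholder:

      # Sketch: build a simulated EAS stimulus by adding low-pass-filtered acoustic
      # speech to a noise-band-vocoded rendering of the same sentence. The vocoded
      # signal is only a placeholder here; in practice it would come from a channel
      # vocoder applied to the same recording.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      def lowpass(speech, sr, cutoff_hz=300, order=4):
          sos = butter(order, cutoff_hz, btype="low", fs=sr, output="sos")
          return sosfiltfilt(sos, speech)

      def simulate_eas(speech, vocoded, sr):
          return lowpass(speech, sr) + vocoded     # residual acoustic part + CI simulation

      sr = 16000
      t = np.arange(sr) / sr
      speech = np.sin(2 * np.pi * 150 * t)                               # placeholder sentence
      vocoded = 0.05 * np.random.default_rng(0).standard_normal(len(t))  # placeholder vocoder output
      eas_stimulus = simulate_eas(speech, vocoded, sr)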

  15. Speech and Hearing Science, Anatomy and Physiology.

    ERIC Educational Resources Information Center

    Zemlin, Willard R.

    Written for those interested in speech pathology and audiology, the text presents the anatomical, physiological, and neurological bases for speech and hearing. Anatomical nomenclature used in the speech and hearing sciences is introduced and the breathing mechanism is defined and discussed in terms of the respiratory passage, the framework and…

  16. Interventions for Speech Sound Disorders in Children

    ERIC Educational Resources Information Center

    Williams, A. Lynn, Ed.; McLeod, Sharynne, Ed.; McCauley, Rebecca J., Ed.

    2010-01-01

    With detailed discussion and invaluable video footage of 23 treatment interventions for speech sound disorders (SSDs) in children, this textbook and DVD set should be part of every speech-language pathologist's professional preparation. Focusing on children with functional or motor-based speech disorders from early childhood through the early…

  17. Developing a Weighted Measure of Speech Sound Accuracy

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2011-01-01

    Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…

  18. Automated Assessment of Speech Fluency for L2 English Learners

    ERIC Educational Resources Information Center

    Yoon, Su-Youn

    2009-01-01

    This dissertation provides an automated scoring method of speech fluency for second language learners of English (L2 learners) based that uses speech recognition technology. Non-standard pronunciation, frequent disfluencies, faulty grammar, and inappropriate lexical choices are crucial characteristics of L2 learners' speech. Due to the ease of…

  19. Speech Synthesis Applied to Language Teaching.

    ERIC Educational Resources Information Center

    Sherwood, Bruce

    1981-01-01

    The experimental addition of speech output to computer-based Esperanto lessons using speech synthesized from text is described. Because of Esperanto's phonetic spelling and simple rhythm, it is particularly easy to describe the mechanisms of Esperanto synthesis. Attention is directed to how the text-to-speech conversion is performed and the ways…

  20. A novel probabilistic framework for event-based speech recognition

    NASA Astrophysics Data System (ADS)

    Juneja, Amit; Espy-Wilson, Carol

    2003-10-01

One of the reasons for unsatisfactory performance of the state-of-the-art automatic speech recognition (ASR) systems is the inferior acoustic modeling of low-level acoustic-phonetic information in the speech signal. An acoustic-phonetic approach to ASR, on the other hand, explicitly targets linguistic information in the speech signal, but such a system for continuous speech recognition (CSR) is not known to exist. A probabilistic and statistical framework for CSR based on the idea of the representation of speech sounds by bundles of binary valued articulatory phonetic features is proposed. Multiple probabilistic sequences of linguistically motivated landmarks are obtained using binary classifiers of manner phonetic features (syllabic, sonorant, and continuant) and the knowledge-based acoustic parameters (APs) that are acoustic correlates of those features. The landmarks are then used for the extraction of knowledge-based APs for source and place phonetic features and their binary classification. Probabilistic landmark sequences are constrained using manner class language models for isolated or connected word recognition. The proposed method could overcome the disadvantages encountered by the early acoustic-phonetic knowledge-based systems that led the ASR community to switch to systems highly dependent on statistical pattern analysis methods and probabilistic language or grammar models.

  1. Speech sound disorders in a community study of preschool children.

    PubMed

    McLeod, Sharynne; Harrison, Linda J; McAllister, Lindy; McCormack, Jane

    2013-08-01

    To undertake a community (nonclinical) study to describe the speech of preschool children who had been identified by parents/teachers as having difficulties "talking and making speech sounds" and compare the speech characteristics of those who had and had not accessed the services of a speech-language pathologist (SLP). Stage 1: Parent/teacher concern regarding the speech skills of 1,097 4- to 5-year-old children attending early childhood centers was documented. Stage 2a: One hundred forty-three children who had been identified with concerns were assessed. Stage 2b: Parents returned questionnaires about service access for 109 children. The majority of the 143 children (86.7%) achieved a standard score below the normal range for the percentage of consonants correct (PCC) on the Diagnostic Evaluation of Articulation and Phonology (Dodd, Hua, Crosbie, Holm, & Ozanne, 2002). Consonants produced incorrectly were consistent with the late-8 phonemes ( Shriberg, 1993). Common phonological patterns were fricative simplification (82.5%), cluster simplification (49.0%)/reduction (19.6%), gliding (41.3%), and palatal fronting (15.4%). Interdental lisps on /s/ and /z/ were produced by 39.9% of the children, dentalization of other sibilants by 17.5%, and lateral lisps by 13.3%. Despite parent/teacher concern, only 41/109 children had contact with an SLP. These children were more likely to be unintelligible to strangers, to express distress about their speech, and to have a lower PCC and a smaller consonant inventory compared to the children who had no contact with an SLP. A significant number of preschool-age children with speech sound disorders (SSD) have not had contact with an SLP. These children have mild-severe SSD and would benefit from SLP intervention. Integrated SLP services within early childhood communities would enable earlier identification of SSD and access to intervention to reduce potential educational and social impacts affiliated with SSD.
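
    A minimal sketch of the percentage of consonants correct (PCC) metric, assuming the target and produced transcriptions have already been aligned; the toy transcriptions and the simplified vowel set are invented for illustration:

      # Sketch: percentage of consonants correct (PCC) from aligned target/produced
      # transcriptions; real use would work on phoneme symbols after alignment.
      VOWELS = set("aeiou")   # simplified vowel inventory, for illustration only

      def pcc(target, produced):
          """target, produced: aligned sequences of phone symbols of equal length."""
          consonant_slots = [(t, p) for t, p in zip(target, produced) if t not in VOWELS]
          correct = sum(t == p for t, p in consonant_slots)
          return 100.0 * correct / len(consonant_slots)

      target = list("spun")      # intended word, letters standing in for phonemes
      produced = list("tpun")    # fricative simplification: /s/ realized as /t/
      print(f"PCC = {pcc(target, produced):.1f}%")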

  2. Multisensory and modality specific processing of visual speech in different regions of the premotor cortex

    PubMed Central

    Callan, Daniel E.; Jones, Jeffery A.; Callan, Akiko

    2014-01-01

    Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action (“Mirror System” properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual only (only saw the speaker's articulating face), and audio only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios in order to determine the regions of the PMC involved with multisensory and modality specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli to control for task difficulty and differences in intelligibility. The results of the functional magnetic resonance imaging (fMRI) analysis for visual only and audio-visual conditions showed overlapping activity in inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas, more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal with articulatory speech gestures. PMID:24860526

  3. Meeting the needs of children and young people with speech, language and communication difficulties.

    PubMed

    Lindsay, Geoff; Dockrell, Julie; Desforges, Martin; Law, James; Peacey, Nick

    2010-01-01

    The UK government set up a review of provision for children and young people with the full range of speech, language and communication needs led by a Member of Parliament, John Bercow. A research study was commissioned to provide empirical evidence to inform the Bercow Review. To examine the efficiency and effectiveness of different arrangements for organizing and providing services for children and young people with needs associated with primary speech, language and communication difficulties. Six Local Authorities in England and associated Primary Care Trusts were selected to represent a range of locations reflecting geographic spread, urban/rural and prevalence of children with speech, language and communication difficulties. In each case study, interviews were held with the senior Local Authority manager for special educational needs and a Primary Care Trust senior manager for speech and language therapy. A further 23 head teachers or heads of specialist provision for speech, language and communication difficulties were also interviewed and policy documents were examined. A thematic analysis of the interviews produced four main themes: identification of children and young people with speech, language and communication difficulties; meeting their needs; monitoring and evaluation; and research and evaluation. There were important differences between Local Authorities and Primary Care Trusts in the collection, analysis and use of data, in particular. There were also differences between Local Authority/Primary Care Trust pairs, especially in the degree to which they collaborated in developing policy and implementing practice. This study has demonstrated a lack of consistency across Local Authorities and Primary Care Trusts. Optimizing provision to meet the needs of children and young people with speech, language and communication difficulties will require concerted action, with leadership from central government. The study was used by the Bercow Review whose recommendations have been addressed by central government and a funded action plan has been implemented as a result.

  4. Identification of Changes along a Continuum of Speech Intonation is Impaired in Congenital Amusia.

    PubMed

    Hutchins, Sean; Gosselin, Nathalie; Peretz, Isabelle

    2010-01-01

    A small number of individuals have severe musical problems that have neuro-genetic underpinnings. This musical disorder is termed "congenital amusia," an umbrella term for lifelong musical disabilities that cannot be attributed to deafness, lack of exposure, or brain damage after birth. Amusics seem to lack the ability to detect fine pitch differences in tone sequences. However, differences between statements and questions, which vary in final pitch, are well perceived by most congenital amusic individuals. We hypothesized that the origin of this apparent domain-specificity of the disorder lies in the range of pitch variations, which are very coarse in speech as compared to music. Here, we tested this hypothesis by using a continuum of gradually increasing final pitch in both speech and tone sequences. To this aim, nine amusic cases and nine matched controls were presented with statements and questions that varied on a pitch continuum from falling to rising in 11 steps. The sentences were either naturally spoken or were tone sequence versions of these. The task was to categorize the sentences as statements or questions and the tone sequences as falling or rising. In each case, the observation of an S-shaped identification function indicates that amusics can accurately identify unambiguous examples of statements and questions but have problems with fine variations between these endpoints. Thus, the results indicate that a deficient pitch perception might compromise music, not because it is specialized for that domain but because music requirements are more fine-grained.

  5. Gender and speaker identification as a function of the number of channels in spectrally reduced speech

    NASA Astrophysics Data System (ADS)

    Gonzalez, Julio; Oliver, Juan C.

    2005-07-01

    Considerable research on speech intelligibility for cochlear-implant users has been conducted using acoustic simulations with normal-hearing subjects. However, some relevant topics about perception through cochlear implants remain scantly explored. The present study examined the perception by normal-hearing subjects of gender and identity of a talker as a function of the number of channels in spectrally reduced speech. Two simulation strategies were compared. They were implemented by two different processors that presented signals as either the sum of sine waves at the center of the channels or as the sum of noise bands. In Experiment 1, 15 subjects determined the gender of 40 talkers (20 males + 20 females) from a natural utterance processed through 3, 4, 5, 6, 8, 10, 12, and 16 channels with both processors. In Experiment 2, 56 subjects matched a natural sentence uttered by 10 talkers with the corresponding simulation replicas processed through 3, 4, 8, and 16 channels for each processor. In Experiment 3, 72 subjects performed the same task but different sentences were used for natural and processed stimuli. A control Experiment 4 was conducted to equate the processing steps between the two simulation strategies. Results showed that gender and talker identification was better for the sine-wave processor, and that performance through the noise-band processor was more sensitive to the number of channels. Implications and possible explanations for the superiority of sine-wave simulations are discussed.
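
    A minimal sketch of the two simulation strategies, assuming SciPy: speech is analyzed into log-spaced channels, each channel envelope is extracted and re-imposed on either a sine carrier at the channel centre frequency or a band-limited noise carrier; band edges, filter orders, and envelope cutoff are illustrative assumptions:

      # Sketch: an N-channel acoustic simulation of cochlear-implant processing. The
      # speech is split into log-spaced analysis bands, each band's envelope is
      # extracted and re-imposed on either a sine carrier at the channel centre
      # frequency or a band-limited noise carrier.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      def vocode(speech, sr, n_channels=4, carrier="sine", f_lo=200, f_hi=7000):
          edges = np.geomspace(f_lo, f_hi, n_channels + 1)
          rng = np.random.default_rng(0)
          t = np.arange(len(speech)) / sr
          out = np.zeros(len(speech))
          for lo, hi in zip(edges[:-1], edges[1:]):
              sos = butter(4, [lo, hi], btype="band", fs=sr, output="sos")
              env = np.abs(hilbert(sosfiltfilt(sos, speech)))                     # channel envelope
              env = sosfiltfilt(butter(2, 50, btype="low", fs=sr, output="sos"), env)
              if carrier == "sine":
                  c = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)                    # tone at centre frequency
              else:
                  c = sosfiltfilt(sos, rng.standard_normal(len(speech)))          # noise band
              out += env * c
          return out

      sr = 16000
      speech = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)    # placeholder for a recorded utterance
      sine_sim = vocode(speech, sr, n_channels=8, carrier="sine")
      noise_sim = vocode(speech, sr, n_channels=8, carrier="noise")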

  6. A Role for the Right Superior Temporal Sulcus in Categorical Perception of Musical Chords

    ERIC Educational Resources Information Center

    Klein, Mike E.; Zatorre, Robert J.

    2011-01-01

    Categorical perception (CP) is a mechanism whereby non-identical stimuli that have the same underlying meaning become invariantly represented in the brain. Through behavioral identification and discrimination tasks, CP has been demonstrated to occur broadly across the auditory modality, including in perception of speech (e.g. phonemes) and music…

  7. Retaining the Learning Disabled Adult. A Handbook for Adult Basic Education Instructors.

    ERIC Educational Resources Information Center

    Hawkins, Diane, Comp.

    This handbook is designed to address teachers' most common concerns about learning disabilities (LDs) and teaching adults with LDs. Section 1 defines LDs by describing traits that adults with LDs share. Section 2 focuses on identification of students with LDs. LD characteristics are divided into eight categories: academics, speech/language, motor…

  8. A Mediating Role of the Premotor Cortex in Phoneme Segmentation

    ERIC Educational Resources Information Center

    Sato, Marc; Tremblay, Pascale; Gracco, Vincent L.

    2009-01-01

    Consistent with a functional role of the motor system in speech perception, disturbing the activity of the left ventral premotor cortex by means of repetitive transcranial magnetic stimulation (rTMS) has been shown to impair auditory identification of syllables that were masked with white noise. However, whether this region is crucial for speech…

  9. Language as Sensuous Action: Sir Richard Paget, Kenneth Burke, and Gesture-Speech Theory

    ERIC Educational Resources Information Center

    Hawhee, Debra

    2006-01-01

    This somatic genealogy of Dramatism's core terms--symbolic action, attitude, identification--argues for the importance of keeping rhetoric, rhetorical theory, and rhetorical pedagogy more closely tied to bodies that generate, induce, and respond to rhetoric. It does so by examining Burke's use of Sir Richard Paget's theory that spoken language…

  10. Identification and Management of Dysphagia in the Public Schools: Epilogue.

    ERIC Educational Resources Information Center

    Logemann, Jeri A.; O'Toole, Thomas J.

    2000-01-01

    This epilogue to a series of articles on the management of dysphagia in students and the role of speech-pathologists emphasizes the need for a team approach to the management of children with swallowing disorders and the importance of interaction between clinicians in the school setting and clinicians in medical environments. (CR)

  11. Program Development: Identification and Formulation of Desirable Educational Goals.

    ERIC Educational Resources Information Center

    Goodlad, John I.

    In this speech, the author suggests that the success of public schools depends heavily on commitment to and large-scale agreement on educational goals. He examines the difficulty in creating rational programs to carry out specific behavioral goals and the more remote ends usually stated for educational systems. The author then discusses the…

  12. Shibboleth: An Automated Foreign Accent Identification Program

    ERIC Educational Resources Information Center

    Frost, Wende

    2013-01-01

    The speech of non-native (L2) speakers of a language contains phonological rules that differentiate them from native speakers. These phonological rules characterize or distinguish accents in an L2. The Shibboleth program creates combinatorial rule-sets to describe the phonological pattern of these accents and classifies L2 speakers into their…

  13. Early Intervention Practices for Children with Hearing Loss: Impact of Professional Development

    ERIC Educational Resources Information Center

    Martin-Prudent, Angi; Lartz, Maribeth; Borders, Christina; Meehan, Tracy

    2016-01-01

    Early identification and appropriate intervention services for children who are deaf or hard of hearing significantly increase the likelihood of better language, speech, and social-emotional development. However, current research suggests that there is a critical shortage of professionals trained to provide early intervention services to deaf and…

  14. "Feminism Lite?" Feminist Identification, Speaker Appearance, and Perceptions of Feminist and Antifeminist Messengers

    ERIC Educational Resources Information Center

    Bullock, Heather E.; Fernald, Julian L.

    2003-01-01

    Drawing on a communications model of persuasion (Hovland, Janis, & Kelley, 1953), this study examined the effect of target appearance on feminists' and nonfeminists' perceptions of a speaker delivering a feminist or an antifeminist message. One hundred three college women watched one of four videotaped speeches that varied by content (profeminist…

  15. Characteristics of Young Children Close to the Onset of Stuttering

    ERIC Educational Resources Information Center

    Cavenagh, Penny; Costelloe, Sarah; Davis, Steve; Howell, Peter

    2015-01-01

    Five factors (gender, handedness, some aspects of general health, family history, and intelligence) that could assist speech-language pathologists in the identification of children who stutter (CWS) were investigated in 42 CWS and 34 fluent control children (children who do not stutter [CWNS]). The data reported here were obtained from assessments…
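
    For readers who want to see how such factors might be combined into a screening model, the sketch below fits a logistic regression over the five reported predictors. The data are synthetic and the model is purely illustrative; it is not the analysis or the result reported in the study.

```python
# Illustrative only: combining the five reported factors in a simple
# logistic-regression screen. The data below are synthetic placeholders,
# not the study's measurements or findings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 76                                    # 42 CWS + 34 CWNS, as in the abstract
y = np.array([1] * 42 + [0] * 34)         # 1 = child who stutters

# Columns: gender (0/1), handedness (0/1), health score, family history (0/1), IQ.
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.normal(0.0, 1.0, n),
    rng.integers(0, 2, n),
    rng.normal(100.0, 15.0, n),
])

model = LogisticRegression(max_iter=1000).fit(X, y)
print("per-factor coefficients:", model.coef_.round(2))
```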

  16. 'That doesn't translate': the role of evidence-based practice in disempowering speech pathologists in acute aphasia management.

    PubMed

    Foster, Abby; Worrall, Linda; Rose, Miranda; O'Halloran, Robyn

    2015-07-01

    An evidence-practice gap has been identified in current acute aphasia management practice, with the provision of services to people with aphasia in the acute hospital widely considered in the literature to be inconsistent with best-practice recommendations. The reasons for this evidence-practice gap are unclear; however, speech pathologists practising in this setting have articulated a sense of dissonance regarding their limited service provision to this population. A clearer understanding of why this evidence-practice gap exists is essential in order to support and promote evidence-based approaches to the care of people with aphasia in acute care settings. The aim was to provide an understanding of speech pathologists' conceptualization of evidence-based practice for acute post-stroke aphasia, and of its implementation. This study adopted a phenomenological approach, underpinned by a social constructivist paradigm. In-depth interviews were conducted with 14 Australian speech pathologists, recruited using a purposive sampling technique. An inductive thematic analysis of the data was undertaken. A single, overarching theme emerged from the data. Speech pathologists demonstrated a sense of disempowerment as a result of their relationship with evidence-based practice for acute aphasia management. Three subthemes contributed to this theme. The first described a restricted conceptualization of evidence-based practice. The second revealed speech pathologists' strained relationships with the research literature. The third elucidated a sense of professional unease over their perceived inability to enact evidence-based clinical recommendations, despite their desire to do so. Speech pathologists identified a current knowledge-practice gap in their management of aphasia in acute hospital settings. Speech pathologists place significant emphasis on the research evidence; however, their engagement with the research is limited, in part because it is perceived to lack clinical utility. A sense of professional dissonance arises from the conflict between a desire to provide best practice and the perceived barriers to implementing evidence-based recommendations clinically, resulting in evidence-based practice becoming a disempowering concept for some. © 2015 Royal College of Speech and Language Therapists.

  17. The Influence of High-Frequency Envelope Information on Low-Frequency Vowel Identification in Noise

    PubMed Central

    2016-01-01

    Vowel identification in noise using consonant-vowel-consonant (CVC) logatomes was used to investigate a possible interplay of speech information from different frequency regions. It was hypothesized that the periodicity conveyed by the temporal envelope of a high-frequency stimulus can enhance the use of the information carried by auditory channels in the low-frequency region that share the same periodicity. It was further hypothesized that this acts as a strobe-like mechanism and would increase the signal-to-noise ratio for the voiced parts of the CVCs. In a first experiment, different high-frequency cues were provided to test this hypothesis, whereas a second experiment examined more closely the role of amplitude modulations and intact phase information within the high-frequency region (4–8 kHz). CVCs were either natural or vocoded speech (both limited to a low-pass cutoff frequency of 2.5 kHz) and were presented in stationary 3-kHz low-pass filtered masking noise. The experimental results did not support the hypothesized use of periodicity information for aiding low-frequency perception. PMID:26730702
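
    The stimulus construction described here (speech low-pass filtered at 2.5 kHz, presented in stationary 3-kHz low-pass filtered masking noise) can be sketched in a few lines. The sampling rate, filter order, signal-to-noise ratio, and placeholder signals below are assumptions rather than the study's exact parameters.

```python
# Minimal sketch of low-pass filtered speech mixed with a band-limited
# stationary masker at a chosen SNR (parameters are assumptions).
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(x, cutoff_hz, fs, order=6):
    """Zero-phase Butterworth low-pass filter."""
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, x)

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

fs = 16000                                       # sampling rate (assumed)
speech = np.random.randn(fs) * np.hanning(fs)    # placeholder for a CVC token
noise = np.random.randn(fs)                      # placeholder stationary noise

speech_lp = lowpass(speech, 2500, fs)            # 2.5-kHz low-pass speech
masker = lowpass(noise, 3000, fs)                # 3-kHz low-pass stationary masker
stimulus = mix_at_snr(speech_lp, masker, snr_db=-5)
```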

  18. Assessment and management of the communication difficulties of children with cerebral palsy: a UK survey of SLT practice.

    PubMed

    Watson, Rose Mary; Pennington, Lindsay

    2015-01-01

    Communication difficulties are common in cerebral palsy (CP) and are frequently associated with motor, intellectual and sensory impairments. Speech and language therapy research comprises single-case experimental design and small group studies, limiting evidence-based intervention and possibly exacerbating variation in practice. The aim was to describe the assessment and intervention practices of speech and language therapists (SLTs) in the UK in their management of communication difficulties associated with CP in childhood. An online survey of the assessments and interventions employed by UK SLTs working with children and young people with CP was conducted. The survey was publicized via NHS trusts, the Royal College of Speech and Language Therapists (RCSLT) and private practice associations using a variety of social media. The survey was open from 5 December 2011 to 30 January 2012. Two hundred and sixty-five UK SLTs who worked with children and young people with CP in England (n = 199), Wales (n = 13), Scotland (n = 36) and Northern Ireland (n = 17) completed the survey. SLTs reported using a wide variety of published, standardized tests, but most commonly reported assessing oromotor function, speech, receptive and expressive language, and communication skills by observation or using assessment schedules they had developed themselves. The most highly prioritized areas for intervention were: dysphagia, augmentative and alternative communication (AAC)/interaction and receptive language. SLTs reported using a wide variety of techniques to address difficulties in speech, language and communication. Some interventions used have no supporting evidence. Many SLTs felt unable to estimate the hours of therapy per year that children and young people with CP and communication disorders received from their service. The assessment and management of communication difficulties associated with CP in childhood varies widely in the UK. Lack of standard assessment practices prevents comparisons across time or services. The adoption of a standard set of agreed clinical measures would enable benchmarking of service provision, permit the development of large-scale research studies using routine clinical data and facilitate the identification of potential participants for research studies in the UK. Some interventions provided lack evidence. Recent systematic reviews could guide intervention, but robust evidence is needed in most areas addressed in clinical practice. © 2015 The Authors International Journal of Language & Communication Disorders published by John Wiley & Sons Ltd on behalf of Royal College of Speech and Language Therapists.

  19. Speech recognition in one- and two-talker maskers in school-age children and adults: Development of perceptual masking and glimpsing

    PubMed Central

    Buss, Emily; Leibold, Lori J.; Porter, Heather L.; Grose, John H.

    2017-01-01

    Children perform more poorly than adults on a wide range of masked speech perception paradigms, but this effect is particularly pronounced when the masker itself is also composed of speech. The present study evaluated two factors that might contribute to this effect: the ability to perceptually isolate the target from masker speech, and the ability to recognize target speech based on sparse cues (glimpsing). Speech reception thresholds (SRTs) were estimated for closed-set, disyllabic word recognition in children (5–16 years) and adults in a one- or two-talker masker. Speech maskers were presented at 60 dB sound pressure level (SPL), either alone or in combination with a 50-dB-SPL speech-shaped noise masker. There was an age effect overall, but performance was adult-like at a younger age for the one-talker than the two-talker masker. Noise tended to elevate SRTs, particularly for older children and adults, and when summed with the one-talker masker. Removing time-frequency epochs associated with a poor target-to-masker ratio markedly improved SRTs, with larger effects for younger listeners; the age effect was not eliminated, however. Results were interpreted as indicating that development of speech-in-speech recognition is likely impacted by development of both perceptual masking and the ability to recognize speech based on sparse cues. PMID:28464682
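
    The glimpsing manipulation mentioned here, removing time-frequency epochs with a poor target-to-masker ratio, amounts to applying a binary mask in the short-time Fourier domain. The sketch below shows one plausible implementation; the STFT settings and the -5 dB criterion are assumptions, not the study's values.

```python
# Minimal sketch of "glimpsing": zero out mixture time-frequency bins whose
# target-to-masker ratio (TMR) falls below a criterion, then resynthesize.
# STFT parameters and the criterion are assumptions, not the study's values.
import numpy as np
from scipy.signal import stft, istft

def glimpsed_mixture(target, masker, fs, criterion_db=-5.0):
    """Keep only favourable time-frequency glimpses of the target+masker mixture."""
    f, t, T = stft(target, fs, nperseg=512)
    _, _, M = stft(masker, fs, nperseg=512)
    _, _, X = stft(target + masker, fs, nperseg=512)

    tmr_db = 20 * np.log10(np.abs(T) / (np.abs(M) + 1e-12) + 1e-12)
    mask = tmr_db >= criterion_db           # binary mask over favourable bins
    _, y = istft(X * mask, fs, nperseg=512)
    return y
```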

  20. Articulatory speech synthesis and speech production modelling

    NASA Astrophysics Data System (ADS)

    Huang, Jun

    This dissertation addresses the problem of speech synthesis and speech production modelling based on the fundamental principles of human speech production. Unlike the conventional source-filter model, which assumes the independence of the excitation and the acoustic filter, we treat the entire vocal apparatus as one system consisting of a fluid dynamic aspect and a mechanical part. We model the vocal tract by a three-dimensional moving geometry. We also model the sound propagation inside the vocal apparatus as a three-dimensional nonplane-wave propagation inside a viscous fluid described by the Navier-Stokes equations. In our work, we first propose a combined minimum energy and minimum jerk criterion to estimate the dynamic vocal tract movements during speech production. Both theoretical error bound analysis and experimental results show that this method can achieve a very close match at the target points while avoiding abrupt changes in the articulatory trajectory. Second, a mechanical vocal fold model is used to compute the excitation signal of the vocal tract. The advantage of this model is that it is closely coupled with the vocal tract system based on fundamental aerodynamics. As a result, we can obtain an excitation signal with much more detail than the conventional parametric vocal fold excitation model. Furthermore, strong evidence of source-tract interaction is observed. Finally, we propose a computational model of the fricative and stop types of sounds based on the physical principles of speech production. The advantage of this model is that it uses an exogenous process to model the additional nonsteady and nonlinear effects due to the flow mode, which are ignored by the conventional source-filter speech production model. A recursive algorithm is used to estimate the model parameters. Experimental results show that this model is able to synthesize good-quality fricative and stop types of sounds. Based on this dissertation work, we carefully argue that the articulatory speech production model has the potential to flexibly synthesize natural-quality speech sounds and to provide a compact computational model for speech production that can be beneficial to a wide range of areas in speech signal processing.
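
    The combined minimum-energy and minimum-jerk criterion for estimating vocal tract trajectories can be written, in a generic weighted form, as the cost functional below. The abstract does not give the exact functional, so the weights and the interpolation-constraint form are assumptions.

```latex
% Generic weighted minimum-energy / minimum-jerk cost over an articulatory
% trajectory q(t); alpha, beta, and the target-point constraints are assumed.
J[q] \;=\; \int_{0}^{T} \Big( \alpha \,\lVert \dot{q}(t) \rVert^{2}
       \;+\; \beta \,\lVert \dddot{q}(t) \rVert^{2} \Big)\, dt,
\qquad \text{subject to } q(t_k) = q_k \ \text{at the target points } t_k .
```

    Minimizing a functional of this form between successive target configurations yields smooth trajectories that pass close to the targets without abrupt changes, which is consistent with the behaviour the abstract reports.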
