Advancements in robust algorithm formulation for speaker identification of whispered speech
NASA Astrophysics Data System (ADS)
Fan, Xing
Whispered speech is an alternative speech production mode to neutral speech, used intentionally by talkers in natural conversational scenarios to protect privacy and to keep certain content from being overheard or made public. Because of the profound differences between whispered and neutral speech in their production mechanisms, and the absence of whispered adaptation data, the performance of speaker identification systems trained on neutral speech degrades significantly. This dissertation therefore focuses on developing a robust closed-set speaker recognition system for whispered speech using no, or only limited, whispered adaptation data from non-target speakers. The dissertation proposes the concept of "high"- and "low"-performance whispered data for the purpose of speaker identification, and identifies a variety of acoustic properties that contribute to the quality of whispered data. An acoustic analysis is also conducted to compare the phoneme and speaker dependency of the differences between whispered and neutral data in the feature domain. The observations from these acoustic analyses are new in this area and serve as guidance for developing robust speaker identification systems for whispered speech. The dissertation further proposes two systems for speaker identification of whispered speech. The first focuses on front-end processing: a two-dimensional feature space is proposed to detect "low"-performance whispered utterances, and separate feature mapping functions are applied to vowels and consonants in order to retain the speaker information shared between whispered and neutral speech. The second focuses on speech-mode-independent model training: pseudo-whispered features are generated from neutral features using the statistical information contained in a whispered universal background model (UBM) trained on additional whispered data collected from non-target speakers, and four modeling methods are proposed for estimating the transformation that generates these pseudo-whispered features. Both systems demonstrate a significant improvement over the baseline system on the evaluation data. The dissertation thus contributes a scientific understanding of the differences between whispered and neutral speech as well as improved front-end processing and modeling methods for speaker identification of whispered speech. Such advancements will ultimately help improve the robustness of speech processing systems.
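For readers who want a concrete picture of the closed-set setting this dissertation builds on, the sketch below shows a minimal MFCC/GMM baseline in Python (one Gaussian mixture per enrolled speaker, maximum-likelihood decision). The feature choice, mixture size, and helper names are illustrative assumptions, not the dissertation's actual configuration.

```python
# Minimal closed-set speaker identification baseline (illustrative only):
# one GMM per enrolled speaker, decision by maximum average log-likelihood.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path, sr=16000, n_mfcc=13):
    """Return an (n_frames, n_mfcc) MFCC matrix for one utterance."""
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_speaker_models(enrollment, n_components=16):
    """enrollment: dict speaker_id -> list of wav paths (neutral speech)."""
    models = {}
    for spk, paths in enrollment.items():
        feats = np.vstack([mfcc_features(p) for p in paths])
        models[spk] = GaussianMixture(n_components, covariance_type='diag').fit(feats)
    return models

def identify(models, wav_path):
    """Closed-set decision: the speaker whose model gives the highest score."""
    feats = mfcc_features(wav_path)
    scores = {spk: gmm.score(feats) for spk, gmm in models.items()}
    return max(scores, key=scores.get)
```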
Unsupervised real-time speaker identification for daily movies
NASA Astrophysics Data System (ADS)
Li, Ying; Kuo, C.-C. Jay
2002-07-01
The problem of identifying speakers for movie content analysis is addressed in this paper. While most previous work on speaker identification was carried out in a supervised mode using pure audio data, more robust results can be obtained in real time by integrating knowledge from multiple media sources in an unsupervised mode. In this work, both audio and visual cues are employed and subsequently combined in a probabilistic framework to identify speakers. Specifically, audio information is used to identify speakers with a maximum likelihood (ML)-based approach, while visual information is used to distinguish speakers by detecting and recognizing their talking faces, based on face detection/recognition and mouth tracking techniques. Moreover, to accommodate speakers' acoustic variations over time, their models are updated on the fly by adapting to newly contributed speech data. Encouraging results have been achieved through extensive experiments, which show a promising future for the proposed audiovisual unsupervised speaker identification system.
Performance enhancement for audio-visual speaker identification using dynamic facial muscle model.
Asadpour, Vahid; Towhidkhah, Farzad; Homayounpour, Mohammad Mehdi
2006-10-01
The science of human identification using physiological characteristics, or biometrics, has been of great concern in security systems. However, robust multimodal identification systems based on audio-visual information have not yet been thoroughly investigated. The aim of this work is therefore to propose a model-based feature extraction method that employs the physiological characteristics of the facial muscles producing lip movements. This approach adopts intrinsic muscle properties such as viscosity, elasticity, and mass, which are extracted from a dynamic lip model. These parameters depend exclusively on the neuro-muscular properties of the speaker; consequently, imitation of valid speakers could be reduced to a large extent. The parameters are applied to a hidden Markov model (HMM) audio-visual identification system. In this work, a combination of audio and video features is employed through a multistream pseudo-synchronized HMM training method. Noise-robust audio features such as Mel-frequency cepstral coefficients (MFCC), spectral subtraction (SS), and relative spectra perceptual linear prediction (J-RASTA-PLP) were used to evaluate the performance of the multimodal system once efficient audio feature extraction methods had been utilized. The superior performance of the proposed system is demonstrated on a large multispeaker database of continuously spoken digits, along with a phonetically rich sentence. To evaluate the robustness of the algorithms, experiments were also performed on genetically identical twins, and changes in speaker voice were simulated with drug inhalation tests. At a 3 dB signal-to-noise ratio (SNR), the dynamic muscle model improved the identification rate of the audio-visual system from 91% to 98%. Results on identical twins revealed an apparent improvement in performance for the dynamic muscle model-based system, whose audio-visual identification rate was enhanced from 87% to 96%.
Cost-sensitive learning for emotion robust speaker recognition.
Li, Dongdong; Yang, Yingchun; Dai, Weihui
2014-01-01
In the field of information security, voice is one of the most important biometric modalities. In particular, with the growth of voice communication over the Internet and telephone systems, huge volumes of voice data are being accessed. In speaker recognition, the voiceprint can serve as a unique password with which a user proves his or her identity. However, speech carrying various emotions can cause an unacceptably high error rate and degrade the performance of a speaker recognition system. This paper addresses the problem by introducing a cost-sensitive learning technique that reweights the probability of test affective utterances at the pitch envelope level, which effectively enhances robustness in emotion-dependent speaker recognition. Based on this technique, a new recognition system architecture and its components are proposed. Experiments conducted on the Mandarin Affective Speech Corpus show an 8% improvement in identification rate over traditional speaker recognition. PMID:24999492
Robust Recognition of Loud and Lombard speech in the Fighter Cockpit Environment
1988-08-01
[Report excerpt; OCR fragments] ...the latter as inter-speaker variability. According to Zue [Z85], inter-speaker variabilities can be attributed to sociolinguistic background, dialect...
Robust Speaker Authentication Based on Combined Speech and Voiceprint Recognition
NASA Astrophysics Data System (ADS)
Malcangi, Mario
2009-08-01
Personal authentication is becoming increasingly important in many applications that have to protect proprietary data. Passwords and personal identification numbers (PINs) prove not to be robust enough to ensure that unauthorized people do not use them. Biometric authentication technology may offer a secure, convenient, accurate solution but sometimes fails due to its intrinsically fuzzy nature. This research aims to demonstrate that combining two basic speech processing methods, voiceprint identification and speech recognition, can provide a very high degree of robustness, especially if fuzzy decision logic is used.
Speaker Recognition by Combining MFCC and Phase Information in Noisy Conditions
NASA Astrophysics Data System (ADS)
Wang, Longbiao; Minami, Kazue; Yamamoto, Kazumasa; Nakagawa, Seiichi
In this paper, we investigate the effectiveness of phase information for speaker recognition in noisy conditions and combine it with mel-frequency cepstral coefficients (MFCCs). To date, most speaker recognition methods have been based on MFCCs, even in noisy conditions. MFCCs, which dominantly capture vocal tract information, use only the magnitude of the Fourier transform of time-domain speech frames, and phase information has been ignored. Because the phase includes rich voice source information, it is expected to complement MFCCs well; furthermore, some studies have reported that phase-based features are robust to noise. In our previous study, we proposed a phase information extraction method that normalizes the variation in phase caused by the clipping position of the input speech, and the combination of this phase information with MFCCs performed remarkably better than MFCCs alone. In this paper, we evaluate the robustness of the proposed phase information for speaker identification in noisy conditions. Spectral subtraction, a method that skips frames with low energy or low signal-to-noise (SN) ratio, and noisy-speech training models are all used to analyze the effect of the phase information and MFCCs in noisy conditions. The NTT database and the JNAS (Japanese Newspaper Article Sentences) database, with stationary and non-stationary noise added, were used to evaluate the proposed method. MFCCs outperformed the phase information for clean speech; on the other hand, the degradation of the phase information under noise was significantly smaller than that of MFCCs, and with clean-speech training models the individual results of the phase information were even better than those of MFCCs in many cases. Deleting unreliable frames (frames with low energy or SN ratio) improved speaker identification performance significantly. By integrating the phase information with MFCCs, the speaker identification error reduction rate was about 30%-60% compared with the standard MFCC-based method.
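As a rough illustration of this kind of two-stream front end, the sketch below extracts MFCCs and frame-wise STFT phase from the same utterance and combines per-speaker GMM scores with a fixed weight. The bin selection, the weight, and the omission of the clipping-position normalization are assumptions made for brevity; this is not the authors' exact implementation.

```python
# Illustrative extraction of MFCC and phase streams from one utterance, with a
# simple weighted log-likelihood combination of two per-speaker GMMs.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_and_phase(y, sr, n_fft=512, hop=160, n_mfcc=13, n_phase=12):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, n_fft=n_fft, hop_length=hop).T
    spec = librosa.stft(y, n_fft=n_fft, hop_length=hop)
    # Keep the phase of the first few frequency bins; real systems normalize the
    # phase against the frame clipping position, which is omitted here.
    phase = np.angle(spec[1:n_phase + 1, :]).T
    return mfcc, phase

def combined_score(gmm_mfcc, gmm_phase, mfcc, phase, w=0.7):
    # Weighted sum of the average frame log-likelihoods from the two streams.
    return w * gmm_mfcc.score(mfcc) + (1.0 - w) * gmm_phase.score(phase)
```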
Landwehr, Markus; Fürstenberg, Dirk; Walger, Martin; von Wedel, Hasso; Meister, Hartmut
2014-01-01
Advances in speech coding strategies and electrode array designs for cochlear implants (CIs) predominantly aim at improving speech perception. Current efforts are also directed at transmitting appropriate cues of the fundamental frequency (F0) to the auditory nerve with respect to speech quality, prosody, and music perception. The aim of this study was to examine the effects of various electrode configurations and coding strategies on speech intonation identification, speaker gender identification, and music quality rating. In six MED-EL CI users, electrodes were selectively deactivated in order to simulate different insertion depths and inter-electrode distances when using the high definition continuous interleaved sampling (HDCIS) and fine structure processing (FSP) speech coding strategies. Identification of intonation and speaker gender was determined, and music quality rating was assessed. For intonation identification, HDCIS was robust against the different electrode configurations, whereas fine structure processing showed significantly worse results when a short electrode insertion depth was simulated. In contrast, speaker gender recognition was not affected by electrode configuration or speech coding strategy. Music quality rating was sensitive to electrode configuration. In conclusion, the three experiments revealed different outcomes, even though they all addressed the reception of F0 cues. Rapid changes in F0, as seen with intonation, were the most sensitive to electrode configurations and coding strategies. In contrast, electrode configurations and coding strategies did not show large effects when F0 information was available over a longer time period, as seen with speaker gender. Music quality relies on additional spectral cues other than F0, and was poorest when a shallow insertion was simulated.
Parker, Mark; Cunningham, Stuart; Enderby, Pam; Hawley, Mark; Green, Phil
2006-01-01
The STARDUST project developed robust computer speech recognizers for use by eight people with severe dysarthria and concomitant physical disability to access assistive technologies. Speaker-independent computer speech recognizers trained on normal speech are of limited functional use to those with severe dysarthria, owing to limited and inconsistent proximity to "normal" articulatory patterns. Severe dysarthric output may also be characterized by a small set of distinguishable phonetic tokens, making the acoustic differentiation of target words difficult. Speaker-dependent computer speech recognition using hidden Markov models was achieved by identifying robust phonetic elements within the individual speaker's output patterns. A new system of speech training using computer-generated visual and auditory feedback reduced the inconsistent production of key phonetic tokens over time.
Robust speaker's location detection in a vehicle environment using GMM models.
Hu, Jwu-Sheng; Cheng, Chieh-Cheng; Liu, Wei-Han
2006-04-01
Human-computer interaction (HCI) using speech communication is becoming increasingly important, especially in driving, where safety is the primary concern. Knowing the speaker's location (i.e., speaker localization) not only improves the enhancement of a corrupted signal but also assists speaker identification. Since conventional speech localization algorithms suffer from the uncertainties of environmental complexity and noise, as well as from microphone mismatch, they are frequently not robust in practice; without high reliability, speech-based HCI will not gain acceptance. This work presents a novel speaker-location detection method and demonstrates high accuracy within a vehicle cabin using a single linear microphone array. The proposed approach utilizes Gaussian mixture models (GMMs) to model the distributions of the phase differences among the microphones caused by the complex characteristics of room acoustics and microphone mismatch. The model can be applied in both near-field and far-field situations in a noisy environment. Each Gaussian component of a GMM represents a location-dependent but content- and speaker-independent phase difference distribution. Moreover, the scheme performs well not only in non-line-of-sight cases but also when speakers are aligned toward the microphone array at different distances from it. This strong performance is achieved by exploiting the fact that the phase difference distributions at different locations are distinguishable in a car environment. The experimental results also show that the proposed method outperforms the conventional multiple signal classification (MUSIC) technique at various SNRs.
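A minimal sketch of the idea, under the assumption of one GMM per candidate seat position trained on inter-microphone phase-difference features, might look as follows (array geometry, STFT settings, and mixture size are illustrative, not the paper's values):

```python
# Sketch of location detection from inter-microphone phase differences:
# one GMM per candidate in-cabin location, decision by maximum likelihood.
import numpy as np
from scipy.signal import stft
from sklearn.mixture import GaussianMixture

def phase_difference_features(mics, fs=16000, nperseg=512, bins=slice(2, 60)):
    """mics: (n_mics, n_samples); returns (n_frames, n_pairs*n_bins) phase differences."""
    specs = [stft(m, fs=fs, nperseg=nperseg)[2] for m in mics]   # each (n_bins, n_frames)
    ref = specs[0]
    feats = []
    for s in specs[1:]:
        diff = np.angle(s[bins, :]) - np.angle(ref[bins, :])
        feats.append(np.mod(diff + np.pi, 2 * np.pi) - np.pi)    # wrap to (-pi, pi]
    return np.hstack([f.T for f in feats])

def train_location_models(recordings_by_location, n_components=8):
    """recordings_by_location: dict location -> list of (n_mics, n_samples) arrays."""
    models = {}
    for loc, recs in recordings_by_location.items():
        feats = np.vstack([phase_difference_features(r) for r in recs])
        models[loc] = GaussianMixture(n_components, covariance_type='diag').fit(feats)
    return models

def detect_location(models, mics):
    feats = phase_difference_features(mics)
    return max(models, key=lambda loc: models[loc].score(feats))
```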
Integrated Robust Open-Set Speaker Identification System (IROSIS)
2012-05-01
[Report excerpt; front matter and notation fragments] The evaluation scenarios are referred to as VB-YB, VL-YL, VB-YL, and VL-YB, respectively. M is the UBM supervector, and the difference between L(m) and Q(M, m) is the Kullback-Leibler divergence between the "alignment" of the ...
Yu, Chengzhu; Hansen, John H L
2017-03-01
Human physiology has evolved to accommodate environmental conditions, including temperature, pressure, and air chemistry unique to Earth. However, the environment in space varies significantly compared to that on Earth and, therefore, variability is expected in astronauts' speech production mechanism. In this study, the variations of astronaut voice characteristics during the NASA Apollo 11 mission are analyzed. Specifically, acoustical features such as fundamental frequency and phoneme formant structure that are closely related to the speech production system are studied. For a further understanding of astronauts' vocal tract spectrum variation in space, a maximum likelihood frequency warping based analysis is proposed to detect the vocal tract spectrum displacement during space conditions. The results from fundamental frequency, formant structure, as well as vocal spectrum displacement indicate that astronauts change their speech production mechanism when in space. Moreover, the experimental results for astronaut voice identification tasks indicate that current speaker recognition solutions are highly vulnerable to astronaut voice production variations in space conditions. Future recommendations from this study suggest that successful applications of speaker recognition during extended space missions require robust speaker modeling techniques that could effectively adapt to voice production variation caused by diverse space conditions.
Open-set speaker identification with diverse-duration speech data
NASA Astrophysics Data System (ADS)
Karadaghi, Rawande; Hertlein, Heinz; Ariyaeeinia, Aladdin
2015-05-01
The concern of this paper is an important category of applications of open-set speaker identification in criminal investigation, which involves operating with speech of short and varied duration. The study presents investigations into the adverse effects of such operating conditions on the accuracy of open-set speaker identification, based on both GMM-UBM and i-vector approaches. The experiments are conducted using a protocol developed for the identification task, based on the NIST 2008 speaker recognition evaluation corpus. In order to closely cover the real-world operating conditions in the considered application area, the study includes experiments with various combinations of training and testing data duration. The paper details the characteristics of the experimental investigations conducted and provides a thorough analysis of the results obtained.
NASA Astrophysics Data System (ADS)
Tovarek, Jaromir; Partila, Pavol
2017-05-01
This article discusses speaker identification for improving the security of communication between law enforcement units. The main task of this research was to develop a text-independent speaker identification system that can be used for real-time recognition. The system is designed for identification in the open set, meaning the unknown speaker can be anyone. The communication itself is secured, but the authorization of the communicating parties must still be checked: we have to decide whether the unknown speaker is authorized for the given action. Calls are recorded by an IP telephony server and the recordings are then evaluated by a classifier. If the system determines that the speaker is not authorized, it sends a warning message to the administrator; such a message can reveal, for example, a stolen phone or another unusual situation, and the administrator then performs the appropriate actions. Our proposed system uses a multilayer neural network for classification, consisting of three layers (input, hidden, and output). The number of neurons in the input layer corresponds to the length of the speech feature vector, and the output layer represents the classified speakers. The artificial neural network classifies the speech signal frame by frame, but the final decision is made over the complete recording; this rule substantially increases classification accuracy. The input data for the neural network are thirteen Mel-frequency cepstral coefficients, which describe the behavior of the vocal tract and are the features most commonly used for speaker recognition. Parameters for training, testing, and validation were extracted from recordings of authorized users, and the recording conditions for the training data correspond to the real traffic of the system (sampling frequency, bit rate). The main benefit of the research is a system for text-independent speaker identification applied to secure communication between law enforcement units.
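A hedged sketch of such a frame-level classifier with an utterance-level decision is given below; the hidden-layer size and the log-probability summation rule are assumptions standing in for the paper's exact network and decision rule.

```python
# Illustrative frame-level MLP speaker classifier with an utterance-level decision:
# the network scores each 13-MFCC frame, and the recording is assigned to the
# speaker with the highest summed log-probability over all frames.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def frames(wav_path, sr=8000, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train(recordings):            # recordings: dict speaker_id -> list of wav paths
    X, y = [], []
    for spk, paths in recordings.items():
        for p in paths:
            f = frames(p)
            X.append(f)
            y.extend([spk] * len(f))
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    return clf.fit(np.vstack(X), y)

def identify(clf, wav_path):
    log_prob = clf.predict_log_proba(frames(wav_path)).sum(axis=0)
    return clf.classes_[int(np.argmax(log_prob))]
```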
Optimization of multilayer neural network parameters for speaker recognition
NASA Astrophysics Data System (ADS)
Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka
2016-05-01
This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person within a known set of speakers, i.e., to determine whether the voice of an unknown (wanted) speaker belongs to a group of reference speakers in the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network was used for speaker identification in this research. An artificial neural network (ANN) requires setting parameters such as the neuron activation function, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the numbers of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by these parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings; the goal was to find the parameters giving the highest precision and the shortest validation time. The input data for the neural networks are Mel-frequency cepstral coefficients (MFCC), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment, and the data were split into training, testing, and validation sets of 70%, 15%, and 15%. The result of the research described in this article is a distinct parameter setting of the multilayer neural network for four speakers.
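The kind of parameter sweep described above could be sketched as follows; the candidate values, the 70/15/15 split, and the timing of the validation pass are illustrative assumptions rather than the paper's actual grid.

```python
# Sketch of the accuracy/validation-time trade-off study: a small grid over MLP
# settings with a 70/15/15 train/test/validation split (values are illustrative).
import time
import itertools
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def sweep(X, y):
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, stratify=y)
    X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, test_size=0.50, stratify=y_rest)
    results = []
    for act, lr, hidden in itertools.product(['relu', 'tanh'], [1e-3, 1e-2], [(32,), (64,)]):
        clf = MLPClassifier(hidden_layer_sizes=hidden, activation=act,
                            learning_rate_init=lr, max_iter=500).fit(X_train, y_train)
        t0 = time.perf_counter()
        acc = clf.score(X_val, y_val)          # validation accuracy and its timing
        results.append((act, lr, hidden, acc, time.perf_counter() - t0))
    # The held-out test split (X_test, y_test) would score the chosen setting once.
    return sorted(results, key=lambda r: -r[3])  # best validation accuracy first
```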
Evaluation of speaker de-identification based on voice gender and age conversion
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Matoušek, Jindřich
2018-03-01
Two basic tasks are covered in this paper. The first consists of the design and practical testing of a new method for voice de-identification that changes the apparent age and/or gender of a speaker by multi-segmental frequency scale transformation combined with prosody modification. The second is aimed at verifying the applicability of a classifier based on Gaussian mixture models (GMM) to detect the original Czech and Slovak speakers after voice de-identification has been applied. The experiments performed confirm the functionality of the developed gender and age conversion for all selected types of de-identification, which can be objectively evaluated by the GMM-based open-set classifier. The original-speaker detection accuracy was also compared for sentences uttered by German and English speakers, showing the language independence of the proposed method.
INTERPOL survey of the use of speaker identification by law enforcement agencies.
Morrison, Geoffrey Stewart; Sahito, Farhan Hyder; Jardine, Gaëlle; Djokic, Djordje; Clavet, Sophie; Berghs, Sabine; Goemans Dorny, Caroline
2016-06-01
A survey was conducted of the use of speaker identification by law enforcement agencies around the world. A questionnaire was circulated to law enforcement agencies in the 190 member countries of INTERPOL. 91 responses were received from 69 countries. 44 respondents reported that they had speaker identification capabilities in house or via external laboratories. Half of these came from Europe. 28 respondents reported that they had databases of audio recordings of speakers. The clearest pattern in the responses was that of diversity. A variety of different approaches to speaker identification were used: The human-supervised-automatic approach was the most popular in North America, the auditory-acoustic-phonetic approach was the most popular in Europe, and the spectrographic/auditory-spectrographic approach was the most popular in Africa, Asia, the Middle East, and South and Central America. Globally, and in Europe, the most popular framework for reporting conclusions was identification/exclusion/inconclusive. In Europe, the second most popular framework was the use of verbal likelihood ratio scales. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard
2006-05-01
A GMM-based audio-visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording, and a Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing are performed on DCT-based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM-based classifier. Fusion of the audio and video modalities for audio-visual speaker verification is compared with the face verification and speaker verification systems alone. To improve the robustness of the multimodal biometric identity verification system, an audio-visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM-based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with the prospect of experimenting on the newly developed PDAtabase created within the scope of the SecurePhone project.
Performance of wavelet analysis and neural networks for pathological voices identification
NASA Astrophysics Data System (ADS)
Salhi, Lotfi; Talbi, Mourad; Abid, Sabeur; Cherif, Adnane
2011-09-01
Within the medical environment, diverse techniques exist to assess the state of a patient's voice. The inspection technique is inconvenient for a number of reasons, such as its high cost, the duration of the inspection, and, above all, the fact that it is an invasive technique. This study focuses on a robust, rapid, and accurate system for automatic identification of pathological voices. The system employs a non-invasive, inexpensive, and fully automated method based on a hybrid approach: wavelet transform analysis and a neural network classifier. First, we present the results obtained in our previous study using classic feature parameters, which allow visual identification of pathological voices. Second, quantified parameters derived from the wavelet analysis are proposed to characterize the speech sample. In addition, a system of multilayer neural networks (MNNs) has been developed to carry out automatic detection of pathological voices. The method was evaluated using a voice database composed of recorded voice samples (continuous speech) from normophonic and dysphonic speakers; the dysphonic speakers were patients of the RABTA National Hospital in Tunis, Tunisia, and a university hospital in Brussels, Belgium. Experimental results indicate a success rate ranging between 75% and 98.61% for discrimination of normal and pathological voices using the proposed parameters and the neural network classifier. We also compared the average classification rates obtained with the MNN, a Gaussian mixture model, and support vector machines.
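One simple way to realize wavelet-derived parameters feeding a multilayer network is sketched below; the wavelet family, decomposition level, and sub-band log-energy features are assumptions, not the authors' exact parameter set.

```python
# Illustrative wavelet-based features for normal/pathological voice classification:
# log energies of the wavelet sub-bands feed a small multilayer neural network.
import numpy as np
import pywt
import librosa
from sklearn.neural_network import MLPClassifier

def wavelet_energies(wav_path, sr=16000, wavelet='db4', level=5):
    y, sr = librosa.load(wav_path, sr=sr)
    coeffs = pywt.wavedec(y, wavelet, level=level)           # [cA_level, cD_level, ..., cD_1]
    return np.log([np.sum(c ** 2) + 1e-12 for c in coeffs])  # one log-energy per sub-band

def train_detector(normal_paths, pathological_paths):
    X = [wavelet_energies(p) for p in normal_paths + pathological_paths]
    y = [0] * len(normal_paths) + [1] * len(pathological_paths)
    return MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(np.array(X), y)
```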
Discriminative analysis of lip motion features for speaker identification and speech-reading.
Cetingül, H Ertan; Yemez, Yücel; Erzin, Engin; Tekalp, A Murat
2006-10-01
There have been several studies that jointly use audio, lip intensity, and lip geometry information for speaker identification and speech-reading applications. This paper proposes using explicit lip motion information, instead of or in addition to lip intensity and/or geometry information, for speaker identification and speech-reading within a unified feature selection and discrimination analysis framework, and addresses two important issues: 1) is using explicit lip motion information useful, and 2) if so, what are the best lip motion features for these two applications? The best lip motion features for speaker identification are considered to be those that result in the highest discrimination of individual speakers in a population, whereas for speech-reading, the best features are those providing the highest phoneme/word/phrase recognition rate. Several lip motion feature candidates have been considered, including dense motion features within a bounding box about the lip, lip contour motion features, and combinations of these with lip shape features. Furthermore, a novel two-stage spatial and temporal discrimination analysis is introduced to select the best lip motion features for speaker identification and speech-reading applications. Experimental results using a hidden-Markov-model-based recognition system indicate that using explicit lip motion information provides additional performance gains in both applications, and that lip motion features prove more valuable in the speech-reading application.
Analysis of human scream and its impact on text-independent speaker verification.
Hansen, John H L; Nandwana, Mahesh Kumar; Shokouhi, Navid
2017-04-01
Screams are defined as sustained, high-energy vocalizations that lack phonological structure; this lack of phonological structure is what distinguishes a scream from other forms of loud vocalization, such as a yell. This study investigates the acoustic aspects of screams and addresses those that are known to prevent standard speaker identification systems from recognizing the identity of screaming speakers. It is well established that speaker variability due to changes in vocal effort and the Lombard effect contributes to degraded performance in automatic speech systems (i.e., speech recognition, speaker identification, diarization, etc.). However, previous research in the general area of speaker variability has concentrated on human speech production, whereas less is known about non-speech vocalizations. The UT-NonSpeech corpus is developed here to investigate speaker verification from scream samples. The study presents a detailed analysis in terms of fundamental frequency, spectral peak shift, frame energy distribution, and spectral tilt. It is shown that traditional speaker recognition based on the Gaussian mixture model-universal background model (GMM-UBM) framework is unreliable when evaluated with screams.
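Two of the measures mentioned above can be sketched in a few lines; the autocorrelation F0 estimator and the linear-fit spectral tilt below are generic textbook versions, not the study's exact analysis pipeline.

```python
# Generic sketches of frame-level fundamental frequency (autocorrelation) and
# spectral tilt (slope of a line fit to the log-magnitude spectrum).
import numpy as np

def f0_autocorr(frame, sr, fmin=80.0, fmax=1000.0):
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag                                   # Hz

def spectral_tilt(frame, sr):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    keep = freqs > 0
    slope, _ = np.polyfit(freqs[keep], 20 * np.log10(spec[keep] + 1e-12), 1)
    return slope   # dB per Hz; higher vocal effort generally flattens the tilt
```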
Segmentation of the Speaker's Face Region with Audiovisual Correlation
NASA Astrophysics Data System (ADS)
Liu, Yuyu; Sato, Yoichi
The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows that is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing the quadratic mutual information between our audiovisual features; the computation is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph-cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly across different views, scales, and backgrounds.
Training Japanese listeners to identify English /r/ and /l/: A first report
Logan, John S.; Lively, Scott E.; Pisoni, David B.
2012-01-01
Native speakers of Japanese learning English generally have difficulty differentiating the phonemes /r/ and /l/, even after years of experience with English. Previous research that attempted to train Japanese listeners to distinguish this contrast using synthetic stimuli reported little success, especially when transfer to natural tokens containing /r/ and /l/ was tested. In the present study, a different training procedure that emphasized variability among stimulus tokens was used. Japanese subjects were trained in a minimal pair identification paradigm using multiple natural exemplars contrasting /r/ and /l/ from a variety of phonetic environments as stimuli. A pretest–posttest design containing natural tokens was used to assess the effects of training. Results from six subjects showed that the new procedure was more robust than earlier training techniques. Small but reliable differences in performance were obtained between pretest and posttest scores. The results demonstrate the importance of stimulus variability and task-related factors in training nonnative speakers to perceive novel phonetic contrasts that are not distinctive in their native language. PMID:2016438
Noise Reduction with Microphone Arrays for Speaker Identification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, Z
Reducing acoustic noise in audio recordings is an ongoing problem that plagues many applications. This noise is hard to reduce because of interfering sources and the non-stationary behavior of the overall background noise. Many single-channel noise reduction algorithms exist but are limited in that the more the noise is reduced, the more the signal of interest is distorted, because the signal and noise overlap in frequency. Acoustic background noise is a particular problem for speaker identification: recording a speaker in the presence of acoustic noise ultimately limits the performance and confidence of speaker identification algorithms. In situations where it is impossible to control the environment in which the speech sample is taken, noise reduction filtering algorithms are needed to clean the recorded speech of background noise. Because single-channel noise reduction algorithms would distort the speech signal, the overall challenge of this project was to determine whether the spatial information provided by microphone arrays could be exploited to aid speaker identification. The goals were: (1) test the feasibility of using microphone arrays to reduce background noise in speech recordings; (2) characterize and compare different multichannel noise reduction algorithms; (3) provide recommendations for using these multichannel algorithms; and (4) ultimately answer the question: can the use of microphone arrays aid speaker identification?
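As a point of reference for goal (2), the simplest multichannel approach is a delay-and-sum beamformer; the sketch below (far-field steering, integer-sample delays) is a generic illustration, not one of the project's evaluated algorithms.

```python
# Minimal delay-and-sum beamformer: time-align each microphone toward the assumed
# speaker direction and average, which attenuates off-axis noise.
import numpy as np

def steering_delays(mic_positions, source_dir, fs, c=343.0):
    """Far-field steering delays (in samples) relative to the earliest-arriving mic.
    mic_positions: (n_mics, 3) in meters; source_dir: unit vector toward the speaker."""
    tau = mic_positions @ np.asarray(source_dir) / c          # seconds
    return np.round((tau - tau.min()) * fs).astype(int)

def delay_and_sum(mics, delays_samples):
    """mics: (n_mics, n_samples); delays_samples: integer steering delay per mic."""
    n_mics, n = mics.shape
    out = np.zeros(n)
    for ch, d in zip(mics, delays_samples):
        out += np.roll(ch, int(d))     # delay each channel; wrap-around ignored here
    return out / n_mics
```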
Using Avatars for Improving Speaker Identification in Captioning
NASA Astrophysics Data System (ADS)
Vy, Quoc V.; Fels, Deborah I.
Captioning is the main method for accessing television and film content by people who are deaf or hard-of-hearing. One major difficulty consistently identified by the community is that of knowing who is speaking, particularly for an off-screen narrator. A captioning system was created using a participatory design method to improve speaker identification. The final prototype contained avatars and a coloured border for identifying specific speakers. Evaluation results were very positive; however, participants also wanted to customize various components such as caption and avatar location.
NASA Astrophysics Data System (ADS)
Kamiński, K.; Dobrowolski, A. P.
2017-04-01
The paper presents the architecture and the results of optimizing selected elements of an Automatic Speaker Recognition (ASR) system that uses Gaussian Mixture Models (GMM) in the classification process. Optimization was performed on the selection of individual features, using a genetic algorithm, and on the parameters of the Gaussian distributions used to describe individual voices. The developed system was tested to evaluate the impact on speaker identification effectiveness of different compression methods used, among others, in landline, mobile, and VoIP telephony. Results are also presented for the effectiveness of speaker identification at specific levels of noise in the speech signal and in the presence of other disturbances that could appear during phone calls, which made it possible to specify the range of applications of the presented ASR system.
Hybrid Speaker Recognition Using Universal Acoustic Model
NASA Astrophysics Data System (ADS)
Nishimura, Jun; Kuroda, Tadahiro
We propose a novel speaker recognition approach using a speaker-independent universal acoustic model (UAM) for sensornet applications. In sensornet applications such as "Business Microscope", interactions among knowledge workers in an organization can be visualized by sensing face-to-face communication with wearable sensor nodes. In conventional studies, speakers are detected by comparing the energy of the input speech signals among the nodes; however, synchronization errors among the nodes often degrade speaker recognition performance. By focusing on properties of the speaker's acoustic channel, the UAM provides robustness against such synchronization errors, and overall speaker recognition accuracy is improved by combining the UAM with the energy-based approach. For 0.1-s speech inputs and 4 subjects, a speaker recognition accuracy of 94% is achieved for synchronization errors of less than 100 ms.
Ma, Joan K-Y; Whitehill, Tara L; So, Susanne Y-S
2010-08-01
Speech produced by individuals with hypokinetic dysarthria associated with Parkinson's disease (PD) is characterized by a number of features including impaired speech prosody. The purpose of this study was to investigate intonation contrasts produced by this group of speakers. Speech materials with a question-statement contrast were collected from 14 Cantonese speakers with PD. Twenty listeners then classified the productions as either questions or statements. Acoustic analyses of F0, duration, and intensity were conducted to determine which acoustic cues distinguished the production of questions from statements, and which cues appeared to be exploited by listeners in identifying intonational contrasts. The results show that listeners identified statements with a high degree of accuracy, but the accuracy of question identification ranged from 0.56% to 96% across the 14 speakers. The speakers with PD used similar acoustic cues as nondysarthric Cantonese speakers to mark the question-statement contrast, although the contrasts were not observed in all speakers. Listeners mainly used F0 cues at the final syllable for intonation identification. These data contribute to the researchers' understanding of intonation marking in speakers with PD, with specific application to the production and perception of intonation in a lexical tone language.
ERIC Educational Resources Information Center
Yook, Cheongmin; Lindemann, Stephanie
2013-01-01
This study investigates how the attitudes of 60 Korean university students towards five varieties of English are affected by the identification of the speaker's nationality and ethnicity. The study employed both a verbal guise technique and questions eliciting overt beliefs and preferences related to learning English. While the majority of the…
Neurophysiological and Behavioral Responses of Mandarin Lexical Tone Processing
Yu, Yan H.; Shafer, Valerie L.; Sussman, Elyse S.
2017-01-01
Language experience enhances discrimination of speech contrasts at a behavioral- perceptual level, as well as at a pre-attentive level, as indexed by event-related potential (ERP) mismatch negativity (MMN) responses. The enhanced sensitivity could be the result of changes in acoustic resolution and/or long-term memory representations of the relevant information in the auditory cortex. To examine these possibilities, we used a short (ca. 600 ms) vs. long (ca. 2,600 ms) interstimulus interval (ISI) in a passive, oddball discrimination task while obtaining ERPs. These ISI differences were used to test whether cross-linguistic differences in processing Mandarin lexical tone are a function of differences in acoustic resolution and/or differences in long-term memory representations. Bisyllabic nonword tokens that differed in lexical tone categories were presented using a passive listening multiple oddball paradigm. Behavioral discrimination and identification data were also collected. The ERP results revealed robust MMNs to both easy and difficult lexical tone differences for both groups at short ISIs. At long ISIs, there was either no change or an enhanced MMN amplitude for the Mandarin group, but reduced MMN amplitude for the English group. In addition, the Mandarin listeners showed a larger late negativity (LN) discriminative response than the English listeners for lexical tone contrasts in the long ISI condition. Mandarin speakers outperformed English speakers in the behavioral tasks, especially under the long ISI conditions with the more similar lexical tone pair. These results suggest that the acoustic correlates of lexical tone are fairly robust and easily discriminated at short ISIs, when the auditory sensory memory trace is strong. At longer ISIs beyond 2.5 s language-specific experience is necessary for robust discrimination. PMID:28321179
Shibboleth: An Automated Foreign Accent Identification Program
ERIC Educational Resources Information Center
Frost, Wende
2013-01-01
The speech of non-native (L2) speakers of a language contains phonological rules that differentiate them from native speakers. These phonological rules characterize or distinguish accents in an L2. The Shibboleth program creates combinatorial rule-sets to describe the phonological pattern of these accents and classifies L2 speakers into their…
Single-Word Intelligibility in Speakers with Repaired Cleft Palate
ERIC Educational Resources Information Center
Whitehill, Tara; Chau, Cynthia
2004-01-01
Many speakers with repaired cleft palate have reduced intelligibility, but there are limitations with current procedures for assessing intelligibility. The aim of this study was to construct a single-word intelligibility test for speakers with cleft palate. The test used a multiple-choice identification format, and was based on phonetic contrasts…
ERIC Educational Resources Information Center
Bryden, James D.
The purpose of this study was to specify variables which function significantly in the racial identification and speech quality rating of Negro and white speakers by Negro and white listeners. Ninety-one adults served as subjects for the speech task; 86 of these subjects, 43 Negro and 43 white, provided the listener responses. Subjects were chosen…
Identification and tracking of particular speaker in noisy environment
NASA Astrophysics Data System (ADS)
Sawada, Hideyuki; Ohkado, Minoru
2004-10-01
Humans are able to exchange information smoothly by voice in a variety of situations, such as noisy environments, in a crowd, or with several speakers present. We can detect the position of a sound source in 3D space, extract a particular sound from a mixture, and recognize who is talking. Realizing this mechanism with a computer would enable new applications such as high-quality recording with noise reduction, presentation of a clarified sound, and microphone-free speech recognition by extraction of a particular sound. This paper introduces real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the location of the speaker and the individual voice characteristics. The study will be applied to develop an adaptive auditory system for a mobile robot that collaborates with a factory worker.
NASA Astrophysics Data System (ADS)
Al-Kaltakchi, Musab T. S.; Woo, Wai L.; Dlay, Satnam; Chambers, Jonathon A.
2017-12-01
In this study, a speaker identification system is considered consisting of a feature extraction stage that utilizes both power normalized cepstral coefficients (PNCCs) and Mel frequency cepstral coefficients (MFCCs). Normalization is applied by employing cepstral mean and variance normalization (CMVN) and feature warping (FW), together with acoustic modeling using a Gaussian mixture model-universal background model (GMM-UBM). The main contributions are comprehensive evaluations of the effect of both additive white Gaussian noise (AWGN) and non-stationary noise (NSN) (with and without a G.712-type handset) upon identification performance. In particular, three NSN types with varying signal-to-noise ratios (SNRs) were tested, corresponding to street traffic, a bus interior, and a crowded talking environment. The performance evaluation also considered the effect of late fusion techniques based on score fusion, namely mean, maximum, and linear weighted sum fusion. The databases employed were TIMIT, SITW, and NIST 2008, and 120 speakers were selected from each database to yield 3600 speech utterances. As recommendations from the study, mean fusion is found to yield the best overall performance in terms of speaker identification accuracy (SIA) with noisy speech, whereas linear weighted sum fusion is best overall for the original database recordings.
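The three late-fusion rules compared in the study are simple to state; the sketch below applies them to per-speaker score vectors from the two subsystems, together with a generic CMVN helper. The score scaling and the weight value are assumptions.

```python
# Sketch of the three score-level fusion rules, applied to per-speaker scores from
# the MFCC-based and PNCC-based subsystems, plus a generic CMVN helper.
import numpy as np

def cmvn(feats):
    """Cepstral mean and variance normalization over one utterance (frames x dims)."""
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-12)

def fuse(scores_mfcc, scores_pncc, rule='mean', w=0.6):
    """scores_*: per-speaker log-likelihood scores in the same speaker order."""
    a, b = np.asarray(scores_mfcc), np.asarray(scores_pncc)
    if rule == 'mean':
        fused = (a + b) / 2.0
    elif rule == 'max':
        fused = np.maximum(a, b)
    elif rule == 'weighted':                  # linear weighted sum
        fused = w * a + (1.0 - w) * b
    else:
        raise ValueError(rule)
    return int(np.argmax(fused))              # index of the identified speaker
```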
2015-10-01
[Report excerpt; fragments] ... scoring, Gaussian backend, etc.) as shown in Fig. 39. The methods in this domain also emphasized the ability to perform data purification for both ... An investigation using the same infrastructure was undertaken to explore Lombard-effect "flavor" detection for improved speaker ID. ... dimension selection, compared to a common N-gram frequency-based selection. 2.1.2: Exploration of an NN/DBN backend: since deep neural networks (DNN) have ...
Sadakata, Makiko; McQueen, James M
2013-08-01
This study reports effects of a high-variability training procedure on nonnative learning of a Japanese geminate-singleton fricative contrast. Thirty native speakers of Dutch took part in a 5-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. Participants were trained with either many repetitions of a limited set of words recorded by a single speaker (low-variability training) or with fewer repetitions of a more variable set of words recorded by multiple speakers (high-variability training). Both types of training enhanced identification of speech but not of nonspeech materials, indicating that learning was domain specific. High-variability training led to superior performance in identification but not in discrimination tests, and supported better generalization of learning as shown by transfer from the trained fricatives to the identification of untrained stops and affricates. Variability thus helps nonnative listeners to form abstract categories rather than to enhance early acoustic analysis.
Brainstem Correlates of Speech-in-Noise Perception in Children
Anderson, Samira; Skoe, Erika; Chandrasekaran, Bharath; Zecker, Steven; Kraus, Nina
2010-01-01
Children often have difficulty understanding speech in challenging listening environments. In the absence of peripheral hearing loss, these speech perception difficulties may arise from dysfunction at more central levels in the auditory system, including subcortical structures. We examined brainstem encoding of pitch in a speech syllable in 38 school-age children. In children with poor speech-in-noise perception, we find impaired encoding of the fundamental frequency and the second harmonic, two important cues for pitch perception. Pitch, an important factor in speaker identification, aids the listener in tracking a specific voice from a background of voices. These results suggest that the robustness of subcortical neural encoding of pitch features in time-varying signals is an important factor in determining success with speech perception in noise. PMID:20708671
Chung, Wei-Lun; Bidelman, Gavin M
2016-01-01
We examined cross-language differences in neural encoding and tracking of intensity and pitch cues signaling English stress patterns. Auditory mismatch negativities (MMNs) were recorded in English and Mandarin listeners in response to contrastive English pseudowords whose primary stress occurred either on the first or second syllable (i.e., "nocTICity" vs. "NOCticity"). The contrastive syllable stress elicited two consecutive MMNs in both language groups, but English speakers demonstrated larger responses to stress patterns than Mandarin speakers. Correlations between the amplitude of ERPs and continuous changes in the running intensity and pitch of speech assessed how well each language group's brain activity tracked these salient acoustic features of lexical stress. We found that English speakers' neural responses tracked intensity changes in speech more closely than Mandarin speakers (higher brain-acoustic correlation). Findings demonstrate more robust and precise processing of English stress (intensity) patterns in early auditory cortical responses of native relative to nonnative speakers. Copyright © 2016 Elsevier Inc. All rights reserved.
What a speaker's choice of frame reveals: reference points, frame selection, and framing effects.
McKenzie, Craig R M; Nelson, Jonathan D
2003-09-01
Framing effects are well established: Listeners' preferences depend on how outcomes are described to them, or framed. Less well understood is what determines how speakers choose frames. Two experiments revealed that reference points systematically influenced speakers' choices between logically equivalent frames. For example, speakers tended to describe a 4-ounce cup filled to the 2-ounce line as half full if it was previously empty but described it as half empty if it was previously full. Similar results were found when speakers could describe the outcome of a medical treatment in terms of either mortality or survival (e.g., 25% die vs. 75% survive). Two additional experiments showed that listeners made accurate inferences about speakers' reference points on the basis of the selected frame (e.g., if a speaker described a cup as half empty, listeners inferred that the cup used to be full). Taken together, the data suggest that frames reliably convey implicit information in addition to their explicit content, which helps explain why framing effects are so robust.
Akomolafe, Soji
2013-01-01
Of some of the major types of discrimination, the one that gets the least attention is national origin discrimination and in particular, accent discrimination, especially when it comes to upward mobility in the workplace. Yet, unlike other forms of discrimination, accent discrimination is rarely a subject of any robust public debate. This paper is a modest attempt to help establish a framework for understanding the relative neglect to which the discourse on accent discrimination has been subjected vis-a-vis the overall national debate on diversity. Hopefully, in the process, it will stimulate a more robust conversation on the plight of foreign-accented speakers.
Gender identification from high-pass filtered vowel segments: the use of high-frequency energy.
Donai, Jeremy J; Lass, Norman J
2015-10-01
The purpose of this study was to examine the use of high-frequency information for making gender identity judgments from high-pass filtered vowel segments produced by adult speakers. Specifically, the effect of removing lower-frequency spectral detail (i.e., F3 and below) from vowel segments via high-pass filtering was evaluated. Thirty listeners (ages 18-35) with normal hearing participated in the experiment. A within-subjects design was used to measure gender identification for six 250-ms vowel segments (/æ/, /ɪ /, /ɝ/, /ʌ/, /ɔ/, and /u/), produced by ten male and ten female speakers. The results of this experiment demonstrated that despite the removal of low-frequency spectral detail, the listeners were accurate in identifying speaker gender from the vowel segments, and did so with performance significantly above chance. The removal of low-frequency spectral detail reduced gender identification by approximately 16 % relative to unfiltered vowel segments. Classification results using linear discriminant function analyses followed the perceptual data, using spectral and temporal representations derived from the high-pass filtered segments. Cumulatively, these findings indicate that normal-hearing listeners are able to make accurate perceptual judgments regarding speaker gender from vowel segments with low-frequency spectral detail removed via high-pass filtering. Therefore, it is reasonable to suggest the presence of perceptual cues related to gender identity in the high-frequency region of naturally produced vowel signals. Implications of these findings and possible mechanisms for performing the gender identification task from high-pass filtered stimuli are discussed.
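A rough computational analogue of the classification part of this study is sketched below: high-pass filter each vowel segment and classify gender with a linear discriminant on band log-energies. The cutoff frequency, filter order, and feature definition are assumptions, not the study's exact analysis.

```python
# Illustrative high-pass filtering of a vowel segment (removing low-frequency detail)
# followed by linear discriminant classification of speaker gender.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def highpass(y, sr, cutoff_hz=3500.0, order=8):
    sos = butter(order, cutoff_hz, btype='highpass', fs=sr, output='sos')
    return sosfiltfilt(sos, y)

def band_energy_features(y, sr, n_bands=16):
    """Log energies in equal-width frequency bands of the filtered segment."""
    spec = np.abs(np.fft.rfft(y)) ** 2
    freqs = np.fft.rfftfreq(len(y), 1.0 / sr)
    edges = np.linspace(0, sr / 2, n_bands + 1)
    return np.log([spec[(freqs >= lo) & (freqs < hi)].sum() + 1e-12
                   for lo, hi in zip(edges[:-1], edges[1:])])

# Usage sketch:
#   X = [band_energy_features(highpass(y, sr), sr) for y, sr in segments]
#   clf = LinearDiscriminantAnalysis().fit(X, genders)
```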
ERIC Educational Resources Information Center
Bullock, Heather E.; Fernald, Julian L.
2003-01-01
Drawing on a communications model of persuasion (Hovland, Janis, & Kelley, 1953), this study examined the effect of target appearance on feminists' and nonfeminists' perceptions of a speaker delivering a feminist or an antifeminist message. One hundred three college women watched one of four videotaped speeches that varied by content (profeminist…
Accent Identification by Adults with Aphasia
ERIC Educational Resources Information Center
Newton, Caroline; Burns, Rebecca; Bruce, Carolyn
2013-01-01
The UK is a diverse society where individuals regularly interact with speakers with different accents. Whilst there is a growing body of research on the impact of speaker accent on comprehension in people with aphasia, there is none which explores their ability to identify accents. This study investigated the ability of this group to identify the…
NASA Astrophysics Data System (ADS)
Zilletti, Michele; Marker, Arthur; Elliott, Stephen John; Holland, Keith
2017-05-01
In this study, model identification of the nonlinear dynamics of a micro-speaker is carried out using purely electrical measurements, avoiding any explicit vibration measurement. It is shown that a dynamic model of the micro-speaker, which takes into account the nonlinear damping characteristic of the device, can be identified by measuring the response between the voltage input and the current flowing through the coil. An analytical formulation of the quasi-linear model of the micro-speaker is first derived, and an optimisation method is then used to identify a polynomial function which describes the mechanical damping behaviour of the micro-speaker. The analytical results of the quasi-linear model are compared with numerical results. This study potentially opens up the possibility of efficiently implementing nonlinear echo cancellers.
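A hedged sketch of the identification step, using a simplified lumped-parameter moving-coil model and a polynomial damping law fitted by least squares to a measured voltage-to-current response, is given below. The model structure and all parameter values (Re, Le, Bl, m, k) are illustrative stand-ins, not the paper's quasi-linear formulation.

```python
# Illustrative identification of a polynomial damping curve c(v) = c0 + c1|v| + c2*v^2
# by least squares against a measured voltage-to-current response of a micro-speaker.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def simulate_current(params, t, voltage, Re=7.0, Le=1e-4, Bl=1.0, m=1e-4, k=5e3):
    """Simplified moving-coil model with electrical (i) and mechanical (x, v) states."""
    c0, c1, c2 = params
    v_of_t = lambda tt: np.interp(tt, t, voltage)
    def rhs(state, tt):
        i, x, v = state
        damping = c0 + c1 * abs(v) + c2 * v ** 2
        di = (v_of_t(tt) - Re * i - Bl * v) / Le     # coil circuit with back-EMF
        dv = (Bl * i - k * x - damping * v) / m      # diaphragm motion
        return [di, v, dv]
    return odeint(rhs, [0.0, 0.0, 0.0], t)[:, 0]     # predicted coil current

def identify_damping(t, voltage, measured_current, x0=(0.1, 0.0, 0.0)):
    residual = lambda p: simulate_current(p, t, voltage) - measured_current
    return least_squares(residual, x0).x             # fitted damping coefficients
```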
Jiang, Jun; Liu, Fang; Wan, Xuan; Jiang, Cunmei
2015-07-01
Tone language experience benefits pitch processing in music and speech for typically developing individuals. No known studies have examined pitch processing in individuals with autism who speak a tone language. This study investigated discrimination and identification of melodic contour and speech intonation in a group of Mandarin-speaking individuals with high-functioning autism. Individuals with autism showed superior melodic contour identification but comparable contour discrimination relative to controls. In contrast, these individuals performed worse than controls on both discrimination and identification of speech intonation. These findings provide the first evidence for differential pitch processing in music and speech in tone language speakers with autism, suggesting that tone language experience may not compensate for speech intonation perception deficits in individuals with autism.
Speaker recognition with temporal cues in acoustic and electric hearing
NASA Astrophysics Data System (ADS)
Vongphoe, Michael; Zeng, Fan-Gang
2005-08-01
Natural spoken language processing includes not only speech recognition but also identification of the speaker's gender, age, and emotional and social status. Our purpose in this study is to evaluate whether temporal cues are sufficient to support both speech and speaker recognition. Ten cochlear-implant and six normal-hearing subjects were presented with vowel tokens spoken by three men, three women, two boys, and two girls. In one condition, the subject was asked to recognize the vowel. In the other condition, the subject was asked to identify the speaker. Extensive training was provided for the speaker recognition task. Normal-hearing subjects achieved nearly perfect performance in both tasks. Cochlear-implant subjects achieved good performance in vowel recognition but poor performance in speaker recognition. The level of the cochlear implant performance was functionally equivalent to normal performance with eight spectral bands for vowel recognition but only to one band for speaker recognition. These results show a dissociation between speech and speaker recognition with primarily temporal cues, highlighting the limitation of current speech processing strategies in cochlear implants. Several methods, including explicit encoding of fundamental frequency and frequency modulation, are proposed to improve speaker recognition for current cochlear implant users.
English Language Schooling, Linguistic Realities, and the Native Speaker of English in Hong Kong
ERIC Educational Resources Information Center
Hansen Edwards, Jette G.
2018-01-01
The study employs a case study approach to examine the impact of educational backgrounds on nine Hong Kong tertiary students' English and Cantonese language practices and identifications as native speakers of English and Cantonese. The study employed both survey and interview data to probe the participants' English and Cantonese language use at…
Priming of Non-Speech Vocalizations in Male Adults: The Influence of the Speaker's Gender
ERIC Educational Resources Information Center
Fecteau, Shirley; Armony, Jorge L.; Joanette, Yves; Belin, Pascal
2004-01-01
Previous research reported a priming effect for voices. However, the type of information primed is still largely unknown. In this study, we examined the influence of speaker's gender and emotional category of the stimulus on priming of non-speech vocalizations in 10 male participants, who performed a gender identification task. We found a…
Effects of Phonetic Similarity in the Identification of Mandarin Tones
ERIC Educational Resources Information Center
Li, Bin; Shao, Jing; Bao, Mingzhen
2017-01-01
Tonal languages differ in how they use phonetic correlates, e.g. average pitch height and pitch direction, for tonal contrasts. Thus, native speakers of a tonal language may need to adjust their attention to familiar or unfamiliar phonetic cues when perceiving non-native tones. On the other hand, speakers of a non-tonal language may need to…
Envelope responses in single-trial EEG indicate attended speaker in a 'cocktail party'.
Horton, Cort; Srinivasan, Ramesh; D'Zmura, Michael
2014-08-01
Recent studies have shown that auditory cortex better encodes the envelope of attended speech than that of unattended speech during multi-speaker ('cocktail party') situations. We investigated whether these differences were sufficiently robust within single-trial electroencephalographic (EEG) data to accurately determine where subjects attended. Additionally, we compared this measure to other established EEG markers of attention. High-resolution EEG was recorded while subjects engaged in a two-speaker 'cocktail party' task. Cortical responses to speech envelopes were extracted by cross-correlating the envelopes with each EEG channel. We also measured steady-state responses (elicited via high-frequency amplitude modulation of the speech) and alpha-band power, both of which have been shown to be sensitive to attention in previous studies. Using linear classifiers, we then examined how well each of these features could be used to predict the subjects' side of attention at various epoch lengths. We found that the attended speaker could be determined reliably from the envelope responses calculated from short periods of EEG, with accuracy improving as a function of sample length. Furthermore, envelope responses were far better indicators of attention than changes in either alpha power or steady-state responses. These results suggest that envelope-related signals recorded in EEG data can be used to form robust auditory BCIs that do not require artificial manipulation (e.g., amplitude modulation) of stimuli to function.
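As a rough illustration of the envelope-tracking idea, the sketch below cross-correlates each candidate speech envelope with each EEG channel and feeds the peak correlations to a linear classifier that predicts the attended side. All data are synthetic, and the logistic-regression classifier is an assumption; this is not the authors' pipeline.

```python
# Hypothetical sketch: predict the attended side from envelope-EEG correlations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 40, 8, 1000
eeg = rng.standard_normal((n_trials, n_channels, n_samples))   # synthetic EEG epochs
env_left = rng.standard_normal((n_trials, n_samples))          # left-speaker envelopes
env_right = rng.standard_normal((n_trials, n_samples))         # right-speaker envelopes
attended = rng.integers(0, 2, n_trials)                        # 0 = left, 1 = right

def xcorr_peak(x, y):
    """Peak of the normalized cross-correlation between two 1-D signals."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.max(np.correlate(x, y, mode="full")) / len(x)

# One feature per (channel, candidate envelope) pair for every trial.
features = np.array([
    [xcorr_peak(eeg[t, ch], env) for ch in range(n_channels)
     for env in (env_left[t], env_right[t])]
    for t in range(n_trials)
])
clf = LogisticRegression(max_iter=1000).fit(features, attended)
print("training accuracy:", clf.score(features, attended))
```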
Bartos, Anthony L; Cipr, Tomas; Nelson, Douglas J; Schwarz, Petr; Banowetz, John; Jerabek, Ladislav
2018-04-01
A method is presented in which conventional speech algorithms are applied, with no modifications, to improve their performance in extremely noisy environments. It has been demonstrated that, for eigen-channel algorithms, pre-training multiple speaker identification (SID) models at a lattice of signal-to-noise-ratio (SNR) levels and then performing SID using the appropriate SNR-dependent model was successful in mitigating noise at all SNR levels. In those tests, it was found that SID performance was optimized when the SNR of the testing and training data were close or identical. In the current effort, multiple i-vector algorithms were used, greatly improving both processing throughput and equal error rate classification accuracy. Using identical approaches in the same noisy environment, the performance of SID, language identification, gender identification, and diarization was significantly improved. A critical factor in this improvement is speech activity detection (SAD) that performs reliably in extremely noisy environments, where the speech itself is barely audible. To optimize SAD operation at all SNR levels, two algorithms were employed. The first maximized detection probability at low levels (-10 dB ≤ SNR < +10 dB) using just the voiced speech envelope, and the second exploited features extracted from the original speech to improve overall accuracy at higher quality levels (SNR ≥ +10 dB).
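The sketch below shows a toy energy-based speech activity detector on synthetic data. It is only an illustrative stand-in for the two-stage SAD described above; the frame sizes, threshold, and signal levels are assumed values.

```python
# Hypothetical sketch: frame-level energy thresholding as a simple SAD.
import numpy as np

def frame_energy_sad(signal, frame_len=400, hop=160, threshold_db=-20.0):
    """Flag frames whose energy is within threshold_db of the loudest frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    energy_db = np.array([10 * np.log10(np.mean(f**2) + 1e-12) for f in frames])
    return energy_db > (energy_db.max() + threshold_db)

rng = np.random.default_rng(4)
noisy = np.concatenate([0.01 * rng.standard_normal(8000),   # noise-only segment
                        0.5 * rng.standard_normal(8000)])   # louder "speech" segment
print("fraction of frames flagged as speech:", frame_energy_sad(noisy).mean())
```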
Multimodal Speaker Diarization.
Noulas, A; Englebienne, G; Krose, B J A
2012-01-01
We present a novel probabilistic framework that fuses information coming from the audio and video modalities to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data, as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all of which come from publicly available data sets. The speaker diarization results favor the proposed multimodal framework, which outperforms single-modality analysis and improves over state-of-the-art audio-based speaker diarization.
ERIC Educational Resources Information Center
Hayes-Harb, Rachel
2006-01-01
English as a second language (ESL) teachers have long noted that native speakers of Arabic exhibit exceptional difficulty with English reading comprehension (e.g., Thompson-Panos & Thomas-Ruzic, 1983). Most existing work in this area has looked to higher level aspects of reading such as familiarity with discourse structure and cultural knowledge…
The Effect of Scene Variation on the Redundant Use of Color in Definite Reference
ERIC Educational Resources Information Center
Koolen, Ruud; Goudbeek, Martijn; Krahmer, Emiel
2013-01-01
This study investigates to what extent the amount of variation in a visual scene causes speakers to mention the attribute color in their definite target descriptions, focusing on scenes in which this attribute is not needed for identification of the target. The results of our three experiments show that speakers are more likely to redundantly…
San Segundo, Eugenia; Tsanas, Athanasios; Gómez-Vilda, Pedro
2017-01-01
There is a growing consensus that hybrid approaches are necessary for successful speaker characterization in Forensic Speaker Comparison (FSC); hence this study explores the forensic potential of voice features combining source and filter characteristics. The former relate to the action of the vocal folds while the latter reflect the geometry of the speaker's vocal tract. This set of features has been extracted from pause fillers, which are long enough for robust feature estimation while spontaneous enough to be extracted from voice samples in real forensic casework. Speaker similarity was measured using standardized Euclidean Distances (ED) between pairs of speakers: 54 different-speaker (DS) comparisons, 54 same-speaker (SS) comparisons and 12 comparisons between monozygotic twins (MZ). Results revealed that the differences between DS and SS comparisons were significant in both high-quality and telephone-filtered recordings, with no false rejections and limited false acceptances; this finding suggests that this set of voice features is highly speaker-dependent and therefore forensically useful. Mean ED for MZ pairs lies between the average ED for SS comparisons and DS comparisons, as expected according to the literature on twin voices. Specific cases of MZ speakers with very high ED (i.e. strong dissimilarity) are discussed in the context of sociophonetic and twin studies. A preliminary simplification of the Vocal Profile Analysis (VPA) Scheme is proposed, which enables the quantification of voice quality features in the perceptual assessment of speaker similarity, and allows for the calculation of perceptual-acoustic correlations. The adequacy of z-score normalization for this study is also discussed, as well as the relevance of heat maps for detecting the so-called phantoms in recent approaches to the biometric menagerie. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
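A minimal sketch of the standardized Euclidean distance used for pairwise speaker comparisons is given below; the feature matrix is a synthetic stand-in for the source-filter features extracted from pause fillers, and the pairings shown are hypothetical.

```python
# Hypothetical sketch: standardized Euclidean distance between speaker feature vectors.
import numpy as np
from scipy.spatial.distance import seuclidean

rng = np.random.default_rng(1)
features = rng.standard_normal((20, 6))     # 20 recordings x 6 voice features (synthetic)
variances = features.var(axis=0, ddof=1)    # per-feature variances used for standardization

d_same = seuclidean(features[0], features[1], variances)    # e.g. a same-speaker pair
d_diff = seuclidean(features[0], features[10], variances)   # e.g. a different-speaker pair
print("same-speaker distance:", d_same, " different-speaker distance:", d_diff)
```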
The perception of FM sweeps by Chinese and English listeners.
Luo, Huan; Boemio, Anthony; Gordon, Michael; Poeppel, David
2007-02-01
Frequency-modulated (FM) signals are an integral acoustic component of ecologically natural sounds and are analyzed effectively in the auditory systems of humans and animals. Linearly frequency-modulated tone sweeps were used here to evaluate two questions. First, how rapid a sweep can listeners accurately perceive? Second, is there an effect of native language insofar as the language (phonology) is differentially associated with processing of FM signals? Speakers of English and Mandarin Chinese were tested to evaluate whether being a speaker of a tone language altered the perceptual identification of non-speech tone sweeps. In two psychophysical studies, we demonstrate that Chinese subjects perform better than English subjects in FM direction identification, but not in an FM discrimination task, in which English and Chinese speakers show similar detection thresholds of approximately 20 ms duration. We suggest that the better FM direction identification in Chinese subjects is related to their experience with FM direction analysis in the tone-language environment, even though supra-segmental tonal variation occurs over a longer time scale. Furthermore, the observed common discrimination temporal threshold across two language groups supports the conjecture that processing auditory signals at durations of approximately 20 ms constitutes a fundamental auditory perceptual threshold.
Johansson, Kerstin; Strömbergsson, Sofia; Robieux, Camille; McAllister, Anita
2017-01-01
Reduced respiratory function following lower cervical spinal cord injuries (CSCIs) may indirectly result in vocal dysfunction. Although self-reports indicate voice change and limitations following CSCI, earlier efforts using global perceptual ratings to distinguish speakers with CSCI from noninjured speakers have not been very successful. We investigate the use of an audience response system-based approach to distinguish speakers with CSCI from noninjured speakers, and explore whether specific vocal traits can be identified as characteristic of speakers with CSCI. Fourteen speech-language pathologists participated in a web-based perceptual task, where their overt reactions to vocal dysfunction were registered during the continuous playback of recordings of 36 speakers (18 with CSCI, and 18 matched controls). Dysphonic events were identified through manual perceptual analysis, to allow the exploration of connections between dysphonic events and listener reactions. More dysphonic events, and more listener reactions, were registered for speakers with CSCI than for noninjured speakers. Strain (particularly in phrase-final position) and creak (particularly in nonphrase-final position) distinguish speakers with CSCI from noninjured speakers. For the identification of intermittent and subtle signs of vocal dysfunction, an approach where the temporal distribution of symptoms is registered offers a viable means to distinguish speakers affected by voice dysfunction from non-affected speakers. In speakers with CSCI, clinicians should listen for the presence of final strain and nonfinal creak, and pay attention to self-reported voice function and voice problems, to identify individuals in need of clinical assessment and intervention. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Bent, Tessa; Holt, Rachael Frush
2018-02-01
Children's ability to understand speakers with a wide range of dialects and accents is essential for efficient language development and communication in a global society. Here, the impact of regional dialect and foreign-accent variability on children's speech understanding was evaluated in both quiet and noisy conditions. Five- to seven-year-old children (n = 90) and adults (n = 96) repeated sentences produced by three speakers with different accents (American English, British English, and Japanese-accented English) in quiet or noisy conditions. Adults had no difficulty understanding any speaker in quiet conditions. Their performance declined for the nonnative speaker with a moderate amount of noise; their performance only substantially declined for the British English speaker (i.e., below 93% correct) when their understanding of the American English speaker was also impeded. In contrast, although children showed accurate word recognition for the American and British English speakers in quiet conditions, they had difficulty understanding the nonnative speaker even under ideal listening conditions. With a moderate amount of noise, their perception of British English speech declined substantially and their ability to understand the nonnative speaker was particularly poor. These results suggest that although school-aged children can understand unfamiliar native dialects under ideal listening conditions, their ability to recognize words in these dialects may be highly susceptible to the influence of environmental degradation. Fully adult-like word identification for speakers with unfamiliar accents and dialects may exhibit a protracted developmental trajectory.
How Captain Amerika uses neural networks to fight crime
NASA Technical Reports Server (NTRS)
Rogers, Steven K.; Kabrisky, Matthew; Ruck, Dennis W.; Oxley, Mark E.
1994-01-01
Artificial neural network models can make amazing computations. These models are explained along with their application in problems associated with fighting crime. Specific problems addressed are identification of people using face recognition, speaker identification, and fingerprint and handwriting analysis (biometric authentication).
The Effects of the Literal Meaning of Emotional Phrases on the Identification of Vocal Emotions.
Shigeno, Sumi
2018-02-01
This study investigates the discrepancy between the literal emotional content of speech and emotional tone in the identification of speakers' vocal emotions in both the listeners' native language (Japanese), and in an unfamiliar language (random-spliced Japanese). Both experiments involve a "congruent condition," in which the emotion contained in the literal meaning of speech (words and phrases) was compatible with vocal emotion, and an "incongruent condition," in which these forms of emotional information were discordant. Results for Japanese indicated that performance in identifying emotions did not differ significantly between the congruent and incongruent conditions. However, the results for random-spliced Japanese indicated that vocal emotion was correctly identified more often in the congruent than in the incongruent condition. The different results for Japanese and random-spliced Japanese suggested that the literal meaning of emotional phrases influences the listener's perception of the speaker's emotion, and that Japanese participants could infer speakers' intended emotions in the incongruent condition.
A language-familiarity effect for speaker discrimination without comprehension.
Fleming, David; Giordano, Bruno L; Caldara, Roberto; Belin, Pascal
2014-09-23
The influence of language familiarity upon speaker identification is well established, to such an extent that it has been argued that "Human voice recognition depends on language ability" [Perrachione TK, Del Tufo SN, Gabrieli JDE (2011) Science 333(6042):595]. However, 7-mo-old infants discriminate speakers of their mother tongue better than they do foreign speakers [Johnson EK, Westrek E, Nazzi T, Cutler A (2011) Dev Sci 14(5):1002-1011] despite their limited speech comprehension abilities, suggesting that speaker discrimination may rely on familiarity with the sound structure of one's native language rather than the ability to comprehend speech. To test this hypothesis, we asked Chinese and English adult participants to rate speaker dissimilarity in pairs of sentences in English or Mandarin that were first time-reversed to render them unintelligible. Even in these conditions a language-familiarity effect was observed: Both Chinese and English listeners rated pairs of native-language speakers as more dissimilar than foreign-language speakers, despite their inability to understand the material. Our data indicate that the language familiarity effect is not based on comprehension but rather on familiarity with the phonology of one's native language. This effect may stem from a mechanism analogous to the "other-race" effect in face recognition.
ERIC Educational Resources Information Center
Blount, Ben G.; Padgug, Elise J.
Features of parental speech to young children were studied in four English-speaking and four Spanish-speaking families. Children ranged in age from 9 to 12 months for the English speakers and from 8 to 22 months for the Spanish speakers. Examination of the utterances led to the identification of 34 prosodic, paralinguistic, and interactional…
Integrating hidden Markov model and PRAAT: a toolbox for robust automatic speech transcription
NASA Astrophysics Data System (ADS)
Kabir, A.; Barker, J.; Giurgiu, M.
2010-09-01
An automatic time-aligned phone transcription toolbox for English speech corpora has been developed. The toolbox is particularly useful for generating robust automatic transcriptions and can produce phone-level transcriptions using either speaker-independent or speaker-dependent models, without manual intervention. The system is based on the standard hidden Markov model (HMM) approach and was successfully tested on a large audiovisual speech corpus, the GRID corpus. One of the most powerful features of the toolbox is its flexibility: the speech community can import automatic transcriptions generated by the HMM Toolkit (HTK) into the popular transcription software PRAAT, and vice versa. The toolbox has been evaluated through statistical analysis on GRID data, which shows that the automatic transcription deviates from manual transcription by an average of 20 ms.
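As an illustration of moving between HTK-style output and PRAAT, the sketch below converts a list of phone labels with HTK start/end times (100 ns units) into Praat's short TextGrid text format. The label list is hypothetical and this is not the toolbox's own code; a real conversion would also need to handle multiple tiers and file I/O.

```python
# Hypothetical sketch: HTK-style phone labels -> Praat short TextGrid text.
labels = [(0, 1_500_000, "sil"), (1_500_000, 2_300_000, "b"), (2_300_000, 4_000_000, "ih")]

def htk_labels_to_textgrid(labels):
    to_sec = lambda t: t / 1e7          # HTK time unit is 100 ns
    xmax = to_sec(labels[-1][1])
    lines = ['File type = "ooTextFile"', 'Object class = "TextGrid"', "",
             "0", str(xmax), "<exists>", "1", '"IntervalTier"', '"phones"',
             "0", str(xmax), str(len(labels))]
    for start, end, phone in labels:    # one xmin/xmax/text triple per interval
        lines += [str(to_sec(start)), str(to_sec(end)), f'"{phone}"']
    return "\n".join(lines)

print(htk_labels_to_textgrid(labels))
```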
More than Use it or Lose it: The Number of Speakers Effect on Heritage Language Proficiency
Gollan, Tamar H.; Starr, Jennie; Ferreira, Victor S.
2014-01-01
Acquiring a Heritage Language (HL), a minority language spoken primarily at home, is often a major step toward achieving bilingualism. Two studies examined factors that promote HL proficiency. Chinese-English and Spanish-English undergraduates and Hebrew-English children named pictures in both their languages, and they or their parents completed language history questionnaires. HL picture-naming ability correlated positively with the number of different HL speakers participants spoke to as children, independent of each language’s frequency of use, and without negatively affecting English picture naming ability. HL performance increased also when primary caregivers had lower English proficiency, with later English age-of-acquisition, and (in children) with increased age. These results suggest a prescription for increasing bilingual proficiency is regular interaction with multiple HL speakers. Responsible cognitive mechanisms could include greater variety of words used by different speakers, representational robustness from exposure to variations in form, or multiple retrieval cues, perhaps analogous to contextual diversity effects. PMID:24942146
Recognition of speaker-dependent continuous speech with KEAL
NASA Astrophysics Data System (ADS)
Mercier, G.; Bigorgne, D.; Miclet, L.; Le Guennec, L.; Querre, M.
1989-04-01
A description of the speaker-dependent continuous speech recognition system KEAL is given. An unknown utterance is recognized by means of the following procedures: acoustic analysis, phonetic segmentation and identification, and word and sentence analysis. The combination of feature-based, speaker-independent coarse phonetic segmentation with speaker-dependent statistical classification techniques is one of the main design features of the acoustic-phonetic decoder. The lexical access component is essentially based on a statistical dynamic programming technique which aims at matching a phonemic lexical entry containing various phonological forms against a phonetic lattice. Sentence recognition is achieved by use of a context-free grammar and a parsing algorithm derived from Earley's parser. A speaker adaptation module allows some of the system parameters to be adjusted by matching known utterances with their acoustical representation. The task to be performed, described by its vocabulary and its grammar, is given as a parameter of the system. Continuously spoken sentences extracted from a 'pseudo-Logo' language are analyzed and results are presented.
Age as a Factor in Ethnic Accent Identification in Singapore
ERIC Educational Resources Information Center
Tan, Ying Ying
2012-01-01
This study seeks to answer two research questions. First, can listeners distinguish the ethnicity of the speakers on the basis of voice quality alone? Second, do demographic differences among the listeners affect discriminability? A simple but carefully designed and controlled ethnic identification test was carried out on 325 Singaporean…
Bone, Daniel; Li, Ming; Black, Matthew P.; Narayanan, Shrikanth S.
2013-01-01
Segmental and suprasegmental speech signal modulations offer information about paralinguistic content such as affect, age and gender, pathology, and speaker state. Speaker state encompasses medium-term, temporary physiological phenomena influenced by internal or external biochemical actions (e.g., sleepiness, alcohol intoxication). Perceptual and computational research indicates that detecting speaker state from speech is a challenging task. In this paper, we present a system constructed with multiple representations of prosodic and spectral features that provided the best result at the Intoxication Subchallenge of Interspeech 2011 on the Alcohol Language Corpus. We discuss the details of each classifier and show that fusion improves performance. We additionally address the question of how best to construct a speaker state detection system in terms of robust and practical marginalization of associated variability such as through modeling speakers, utterance type, gender, and utterance length. As is the case in human perception, speaker normalization provides significant improvements to our system. We show that a held-out set of baseline (sober) data can be used to achieve comparable gains to other speaker normalization techniques. Our fused frame-level statistic-functional systems, fused GMM systems, and final combined system achieve unweighted average recalls (UARs) of 69.7%, 65.1%, and 68.8%, respectively, on the test set. More consistent numbers compared to development set results occur with matched-prompt training, where the UARs are 70.4%, 66.2%, and 71.4%, respectively. The combined system improves over the Challenge baseline by 5.5% absolute (8.4% relative), also improving upon our previously best result. PMID:24376305
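The sketch below illustrates one simple form of speaker normalization consistent with the idea of using held-out baseline (sober) data: per-speaker z-scoring of features against each speaker's baseline statistics. The arrays are synthetic placeholders, and the exact normalization used in the paper may differ.

```python
# Hypothetical sketch: per-speaker z-normalization against held-out baseline data.
import numpy as np

rng = np.random.default_rng(2)
baseline = {"spk1": rng.standard_normal((30, 4)) + 1.0,    # sober recordings, speaker 1
            "spk2": rng.standard_normal((30, 4)) - 0.5}    # sober recordings, speaker 2
test = {"spk1": rng.standard_normal((10, 4)) + 1.2,        # test utterances per speaker
        "spk2": rng.standard_normal((10, 4)) - 0.3}

def speaker_znorm(test_feats, baseline_feats):
    """Center and scale test features by the speaker's baseline statistics."""
    mu = baseline_feats.mean(axis=0)
    sigma = baseline_feats.std(axis=0) + 1e-8
    return (test_feats - mu) / sigma

normalized = {spk: speaker_znorm(test[spk], baseline[spk]) for spk in test}
print("normalized means, speaker 1:", normalized["spk1"].mean(axis=0))
```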
Congenital amusia in speakers of a tone language: association with lexical tone agnosia.
Nan, Yun; Sun, Yanan; Peretz, Isabelle
2010-09-01
Congenital amusia is a neurogenetic disorder that affects the processing of musical pitch in speakers of non-tonal languages like English and French. We assessed whether this musical disorder exists among speakers of Mandarin Chinese who use pitch to alter the meaning of words. Using the Montreal Battery of Evaluation of Amusia, we tested 117 healthy young Mandarin speakers with no self-declared musical problems and 22 individuals who reported musical difficulties and scored two standard deviations below the mean obtained by the Mandarin speakers without amusia. These 22 amusic individuals showed a similar pattern of musical impairment as did amusic speakers of non-tonal languages, by exhibiting a more pronounced deficit in melody than in rhythm processing. Furthermore, nearly half the tested amusics had impairments in the discrimination and identification of Mandarin lexical tones. Six showed marked impairments, displaying what could be called lexical tone agnosia, but had normal tone production. Our results show that speakers of tone languages such as Mandarin may experience musical pitch disorder despite early exposure to speech-relevant pitch contrasts. The observed association between the musical disorder and lexical tone difficulty indicates that the pitch disorder as defining congenital amusia is not specific to music or culture but is rather general in nature.
Can non-interactive language input benefit young second-language learners?
Au, Terry Kit-Fong; Chan, Winnie Wailan; Cheng, Liao; Siegel, Linda S; Tso, Ricky Van Yip
2015-03-01
To fully acquire a language, especially its phonology, children need linguistic input from native speakers early on. When interaction with native speakers is not always possible - e.g. for children learning a second language that is not the societal language - audio recordings are commonly used as an affordable substitute. But does such non-interactive input work? Two experiments evaluated the usefulness of audio storybooks in acquiring a more native-like second-language accent. Young children, first- and second-graders in Hong Kong whose native language was Cantonese Chinese, were given take-home listening assignments in a second language, either English or Putonghua Chinese. Accent ratings of the children's story reading revealed measurable benefits of non-interactive input from native speakers. The benefits were far more robust for Putonghua than English. Implications for second-language accent acquisition are discussed.
Sensing of Particular Speakers for the Construction of Voice Interface Utilized in Noisy Environment
NASA Astrophysics Data System (ADS)
Sawada, Hideyuki; Ohkado, Minoru
Humans are able to exchange information smoothly by voice in a variety of situations, such as noisy environments in a crowd or in the presence of multiple speakers. We can detect the position of a sound source in 3D space, extract a particular sound from a mixture, and recognize who is talking. Realizing this mechanism with a computer would enable new applications such as high-quality recording with noise reduction, presentation of a clarified sound, and microphone-free speech recognition based on the extraction of a particular sound. This paper introduces real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the location of the speaker and individual voice characteristics. The study will be applied to the development of an adaptive auditory system for a mobile robot that collaborates with factory workers.
Understanding of emotions and false beliefs among hearing children versus deaf children.
Ziv, Margalit; Most, Tova; Cohen, Shirit
2013-04-01
Emotion understanding and theory of mind (ToM) are two major aspects of social cognition in which deaf children demonstrate developmental delays. The current study investigated these social cognition aspects in two subgroups of deaf children-those with cochlear implants who communicate orally (speakers) and those who communicate primarily using sign language (signers)-in comparison to hearing children. Participants were 53 Israeli kindergartners-20 speakers, 10 signers, and 23 hearing children. Tests included four emotion identification and understanding tasks and one false belief task (ToM). Results revealed similarities among all children's emotion labeling and affective perspective taking abilities, similarities between speakers and hearing children in false beliefs and in understanding emotions in typical contexts, and lower performance of signers on the latter three tasks. Adapting educational experiences to the unique characteristics and needs of speakers and signers is recommended.
Jung, Jeeyoun; Park, Bongki; Lee, Ju Ah; You, Sooseong; Alraek, Terje; Bian, Zhao-Xiang; Birch, Stephen; Kim, Tae-Hun; Xu, Hao; Zaslawski, Chris; Kang, Byoung-Kab; Lee, Myeong Soo
2016-09-01
An international brainstorming session on standardizing pattern identification (PI) was held at the Korea Institute of Oriental Medicine on October 1, 2013 in Daejeon, South Korea. This brainstorming session was convened to gather insights from international traditional East Asian medicine specialists regarding PI standardization. With eight presentations and discussion sessions, the meeting allowed participants to discuss research methods and diagnostic systems used in traditional medicine for PI. One speaker presented a talk titled "The diagnostic criteria for blood stasis syndrome: implications for standardization of PI". Four speakers presented on future strategies and objective measurement tools that could be used in PI research. Later, participants shared information and methodology for accurate diagnosis and PI. They also discussed the necessity for standardizing PI and methods for international collaborations in pattern research.
Perception of musical and lexical tones by Taiwanese-speaking musicians.
Lee, Chao-Yang; Lee, Yuh-Fang; Shr, Chia-Lin
2011-07-01
This study explored the relationship between music and speech by examining absolute pitch and lexical tone perception. Taiwanese-speaking musicians were asked to identify musical tones without a reference pitch and multispeaker Taiwanese level tones without acoustic cues typically present for speaker normalization. The results showed that a high percentage of the participants (65% with an exact match required and 81% with one-semitone errors allowed) possessed absolute pitch, as measured by the musical tone identification task. A negative correlation was found between occurrence of absolute pitch and age of onset of musical training, suggesting that the acquisition of absolute pitch resembles the acquisition of speech. The participants were able to identify multispeaker Taiwanese level tones with above-chance accuracy, even though the acoustic cues typically present for speaker normalization were not available in the stimuli. No correlations were found between the performance in musical tone identification and the performance in Taiwanese tone identification. Potential reasons for the lack of association between the two tasks are discussed. © 2011 Acoustical Society of America
Current trends in small vocabulary speech recognition for equipment control
NASA Astrophysics Data System (ADS)
Doukas, Nikolaos; Bardis, Nikolaos G.
2017-09-01
Speech recognition systems allow human-machine communication to acquire an intuitive nature that approaches the simplicity of inter-human communication. Small-vocabulary speech recognition is a subset of the overall speech recognition problem in which only a small number of words need to be recognized. Speaker-independent small-vocabulary recognition can find significant applications in field equipment used by military personnel. Such equipment may typically be controlled by a small number of commands that need to be given quickly and accurately, under conditions where delicate manual operations are difficult to achieve. This type of application could hence benefit significantly from robust voice-operated control components, as they would facilitate interaction with users and render it much more reliable in times of crisis. This paper presents current challenges involved in attaining efficient and robust small-vocabulary speech recognition. These challenges concern feature selection, classification techniques, speaker diversity, and noise effects. A state machine approach is presented that facilitates the voice guidance of different equipment in a variety of situations.
Fifty years of progress in speech and speaker recognition
NASA Astrophysics Data System (ADS)
Furui, Sadaoki
2004-10-01
Speech and speaker recognition technology has made very significant progress in the past 50 years. The progress can be summarized by the following changes: (1) from template matching to corpus-based statistical modeling, e.g., HMMs and n-grams, (2) from filter bank/spectral resonance to cepstral features (cepstrum + Δcepstrum + ΔΔcepstrum), (3) from heuristic time-normalization to DTW/DP matching, (4) from "distance"-based to likelihood-based methods, (5) from maximum likelihood to discriminative approaches, e.g., MCE/GPD and MMI, (6) from isolated-word to continuous speech recognition, (7) from small-vocabulary to large-vocabulary recognition, (8) from context-independent units to context-dependent units for recognition, (9) from clean speech to noisy/telephone speech recognition, (10) from single-speaker to speaker-independent/adaptive recognition, (11) from monologue to dialogue/conversation recognition, (12) from read speech to spontaneous speech recognition, (13) from recognition to understanding, (14) from single-modality (audio signal only) to multimodal (audio/visual) speech recognition, (15) from hardware recognizers to software recognizers, and (16) from no commercial applications to many practical commercial applications. Most of these advances have taken place in both speech recognition and speaker recognition. The majority of the technological changes have been directed toward increasing the robustness of recognition, drawing on many additional important techniques not noted above.
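As an example of the cepstral feature stack mentioned in item (2), the sketch below computes MFCCs with first- and second-order deltas using librosa on a synthetic signal; the 13-coefficient configuration and the use of librosa are assumptions for illustration, not prescriptions from the survey.

```python
# Hypothetical sketch: cepstrum + delta + delta-delta feature stack with librosa.
import numpy as np
import librosa

sr = 16000
y = 0.1 * np.random.randn(sr * 2).astype(np.float32)   # 2 s of noise as a stand-in signal

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # static cepstral features
delta = librosa.feature.delta(mfcc)                    # first-order dynamics
delta2 = librosa.feature.delta(mfcc, order=2)          # second-order dynamics
features = np.vstack([mfcc, delta, delta2])            # 39-dimensional feature vectors
print("feature matrix shape:", features.shape)
```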
"Who" is saying "what"? Brain-based decoding of human voice and speech.
Formisano, Elia; De Martino, Federico; Bonte, Milene; Goebel, Rainer
2008-11-07
Can we decipher speech content ("what" is being said) and speaker identity ("who" is saying it) from observations of brain activity of a listener? Here, we combine functional magnetic resonance imaging with a data-mining algorithm and retrieve what and whom a person is listening to from the neural fingerprints that speech and voice signals elicit in the listener's auditory cortex. These cortical fingerprints are spatially distributed and insensitive to acoustic variations of the input so as to permit the brain-based recognition of learned speech from unknown speakers and of learned voices from previously unheard utterances. Our findings unravel the detailed cortical layout and computational properties of the neural populations at the basis of human speech recognition and speaker identification.
Nouns slow down speech across structurally and culturally diverse languages
Danielsen, Swintha; Hartmann, Iren; Pakendorf, Brigitte; Witzlack-Makarevich, Alena; de Jong, Nivja H.
2018-01-01
By force of nature, every bit of spoken language is produced at a particular speed. However, this speed is not constant—speakers regularly speed up and slow down. Variation in speech rate is influenced by a complex combination of factors, including the frequency and predictability of words, their information status, and their position within an utterance. Here, we use speech rate as an index of word-planning effort and focus on the time window during which speakers prepare the production of words from the two major lexical classes, nouns and verbs. We show that, when naturalistic speech is sampled from languages all over the world, there is a robust cross-linguistic tendency for slower speech before nouns compared with verbs, both in terms of slower articulation and more pauses. We attribute this slowdown effect to the increased amount of planning that nouns require compared with verbs. Unlike verbs, nouns can typically only be used when they represent new or unexpected information; otherwise, they have to be replaced by pronouns or be omitted. These conditions on noun use appear to outweigh potential advantages stemming from differences in internal complexity between nouns and verbs. Our findings suggest that, beneath the staggering diversity of grammatical structures and cultural settings, there are robust universals of language processing that are intimately tied to how speakers manage referential information when they communicate with one another. PMID:29760059
Choi, Ji Eun; Moon, Il Joon; Kim, Eun Yeon; Park, Hee-Sung; Kim, Byung Kil; Chung, Won-Ho; Cho, Yang-Sun; Brown, Carolyn J; Hong, Sung Hwa
The aim of this study was to compare binaural performance on an auditory localization task and a speech-perception-in-babble measure between children who use a cochlear implant (CI) in one ear and a hearing aid (HA) in the other (bimodal fitting) and those who use bilateral CIs. Thirteen children (mean age ± SD = 10 ± 2.9 years) with bilateral CIs and 19 children with bimodal fitting were recruited to participate. Sound localization was assessed using a 13-loudspeaker array in a quiet sound-treated booth. Speakers were placed in an arc from -90° azimuth to +90° azimuth (15° interval) in the horizontal plane. To assess the accuracy of sound location identification, we calculated the absolute error in degrees between the target speaker and the response speaker during each trial. The mean absolute error was computed by dividing the sum of absolute errors by the total number of trials. We also calculated the hemifield identification score to reflect the accuracy of right/left discrimination. Speech-in-babble perception was also measured in the sound field using target speech presented from the front speaker. Eight-talker babble was presented in the following four different listening conditions: from the front speaker (0°), from one of the two side speakers (+90° or -90°), from both side speakers (±90°). A speech, spatial, and quality questionnaire was administered. When the two groups of children were directly compared with each other, there was no significant difference in localization accuracy or hemifield identification score under the binaural condition. Performance on the speech perception test was also similar under most babble conditions. However, when the babble was from the first device side (CI side for children with bimodal stimulation or first CI side for children with bilateral CIs), speech understanding in babble by bilateral CI users was significantly better than that by bimodal listeners. Speech, spatial, and quality scores were comparable between the two groups. Overall, binaural performance was similar between children fit with two CIs (CI + CI) and those who use bimodal stimulation (HA + CI) in most conditions. However, the bilateral CI group showed better speech perception than the bimodal CI group when babble was from the first device side (first CI side for bilateral CI users or CI side for bimodal listeners). Therefore, if bimodal performance is significantly below the mean bilateral CI performance on speech perception in babble, these results suggest that the child should be considered for a transition from bimodal stimulation to bilateral CIs.
Interface of Linguistic and Visual Information During Audience Design.
Fukumura, Kumiko
2015-08-01
Evidence suggests that speakers can take account of the addressee's needs when referring. However, what representations drive the speaker's audience design has been less clear. This study aims to go beyond previous studies by investigating the interplay between the visual and linguistic context during audience design. Speakers repeated subordinate descriptions (e.g., firefighter) given in the prior linguistic context less and used basic-level descriptions (e.g., man) more when the addressee did not hear the linguistic context than when s/he did. But crucially, this effect happened only when the referent lacked the visual attributes associated with the expressions (e.g., the referent was in plain clothes rather than in a firefighter uniform), so there was no other contextual cue available for the identification of the referent. This suggests that speakers flexibly use different contextual cues to help their addressee map the referring expression onto the intended referent. In addition, speakers used fewer pronouns when the addressee did not hear the linguistic antecedent than when s/he did. This suggests that although speakers may be egocentric during anaphoric reference (Fukumura & Van Gompel, 2012), they can cooperatively avoid pronouns when the linguistic antecedents were not shared with their addressee during initial reference. © 2014 Cognitive Science Society, Inc.
Voice input/output capabilities at Perception Technology Corporation
NASA Technical Reports Server (NTRS)
Ferber, Leon A.
1977-01-01
Condensed resumes of key company personnel at the Perception Technology Corporation are presented. The staff possesses expertise in speech recognition, speech synthesis, speaker authentication, and language identification. The capabilities of hardware and software engineers are included.
Noh, Heil; Lee, Dong-Hee
2012-01-01
The aim was to identify quantitative differences between Korean and English in the long-term average speech spectrum (LTASS). Twenty Korean speakers, who lived in the capital of Korea and spoke standard Korean as their first language, were compared with 20 native English speakers. For the Korean speakers, a passage from a novel and a passage from a leading newspaper article were chosen. For the English speakers, the Rainbow Passage was used. The speech was digitally recorded using a GenRad 1982 Precision Sound Level Meter and GoldWave® software and analyzed in MATLAB. There was no significant difference in the LTASS between the Korean subjects reading a news article or a novel. For male subjects, the LTASS of Korean speakers was significantly lower than that of English speakers above 1.6 kHz, except at 4 kHz, and the difference was more than 5 dB, especially at higher frequencies. For women, the LTASS of Korean speakers showed significantly lower levels at 0.2, 0.5, 1, 1.25, 2, 2.5, 6.3, 8, and 10 kHz, but the differences were less than 5 dB. Compared with English speakers, the LTASS of Korean speakers showed significantly lower levels at frequencies above 2 kHz, except at 4 kHz. The difference was less than 5 dB between 2 and 5 kHz but more than 5 dB above 6 kHz. To adjust the formula for fitting hearing aids for Koreans, our results based on the LTASS analysis suggest that one needs to raise the gain in high-frequency regions.
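For readers unfamiliar with the measure, the sketch below computes a long-term average spectrum with Welch's method; the synthetic signal and analysis parameters are placeholders rather than the study's recording and analysis settings.

```python
# Hypothetical sketch: long-term average speech spectrum (LTASS) via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 16000
speech = 0.05 * np.random.randn(fs * 10)            # 10 s stand-in for a read passage

freqs, psd = welch(speech, fs=fs, nperseg=1024)     # average spectrum over the passage
ltass_db = 10 * np.log10(psd + 1e-12)               # express levels in dB
for f, level in zip(freqs[::64], ltass_db[::64]):   # coarse printout across frequency
    print(f"{f:7.1f} Hz  {level:6.1f} dB")
```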
Perception of English palatal codas by Korean speakers of English
NASA Astrophysics Data System (ADS)
Yeon, Sang-Hee
2003-04-01
This study examined the perception of English palatal codas by Korean speakers of English to determine whether perception problems are the source of production problems. In particular, the study first looked at the possible first-language effect on the perception of English palatal codas. Second, a possible perceptual source of vowel epenthesis after English palatal codas was investigated. In addition, individual factors, such as length of residence, TOEFL score, gender, and academic status, were compared to determine whether they affected the degree of perceptual accuracy. Eleven adult Korean speakers of English as well as three native speakers of English participated in the study. Three sets of perception tests, including identification of minimally different English pseudo-words and real words, were carried out. The results showed that, first, the Korean speakers perceived the English codas significantly worse than the Americans did. Second, the study supported the idea that Koreans perceived an extra /i/ after the final affricates due to final release. Finally, none of the individual factors explained the varying degree of perceptual accuracy. In particular, TOEFL scores and perception test scores did not show any statistically significant association.
Pruitt, John S; Jenkins, James J; Strange, Winifred
2006-03-01
Perception of second language speech sounds is influenced by one's first language. For example, speakers of American English have difficulty perceiving dental versus retroflex stop consonants in Hindi, although English has both dental and retroflex allophones of alveolar stops. Japanese, unlike English, has a contrast similar to Hindi, specifically the Japanese /d/ versus the flapped /r/, which is sometimes produced as a retroflex. This study compared American and Japanese speakers' identification of the Hindi contrast in CV syllable contexts where C varied in voicing and aspiration. The study then evaluated the participants' improvement in identifying the distinction after training with a computer-interactive program. Training sessions progressively increased in difficulty by decreasing the extent of vowel truncation in stimuli and by adding new speakers. Although all participants improved significantly, Japanese participants were more accurate than Americans in distinguishing the contrast on pretest, during training, and on posttest. Transfer was observed to three new consonantal contexts, a new vowel context, and a new speaker's productions. Some abstract aspect of the contrast was apparently learned during training. It is suggested that allophonic experience with dental and retroflex stops may be detrimental to perception of the new contrast.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., or transcriptions of electronic recordings including the identification of speakers, shall to the... cost of transcription. (c) The agency shall maintain a complete verbatim copy of the transcript, a...
Code of Federal Regulations, 2010 CFR
2010-07-01
..., or transcriptions of electronic recordings including the identification of speakers, shall to the... cost of transcription. (c) The agency shall maintain a complete verbatim copy of the transcript, a...
Kharlamov, Viktor; Campbell, Kenneth; Kazanina, Nina
2011-11-01
Speech sounds are not always perceived in accordance with their acoustic-phonetic content. For example, an early and automatic process of perceptual repair, which ensures conformity of speech inputs to the listener's native language phonology, applies to individual input segments that do not exist in the native inventory or to sound sequences that are illicit according to the native phonotactic restrictions on sound co-occurrences. The present study with Russian and Canadian English speakers shows that listeners may perceive phonetically distinct and licit sound sequences as equivalent when the native language system provides robust evidence for mapping multiple phonetic forms onto a single phonological representation. In Russian, due to an optional but productive t-deletion process that affects /stn/ clusters, the surface forms [sn] and [stn] may be phonologically equivalent and map to a single phonological form /stn/. In contrast, [sn] and [stn] clusters are usually phonologically distinct in (Canadian) English. Behavioral data from identification and discrimination tasks indicated that [sn] and [stn] clusters were more confusable for Russian than for English speakers. The EEG experiment employed an oddball paradigm with nonwords [asna] and [astna] used as the standard and deviant stimuli. A reliable mismatch negativity response was elicited approximately 100 msec postchange in the English group but not in the Russian group. These findings point to a perceptual repair mechanism that is engaged automatically at a prelexical level to ensure immediate encoding of speech inputs in phonological terms, which in turn enables efficient access to the meaning of a spoken utterance.
NASA Astrophysics Data System (ADS)
Wegner, K.; Herrin, S.; Schmidt, C.
2015-12-01
Scientists play an integral role in the development of climate literacy skills - for both teachers and students alike. By partnering with local scientists, teachers can gain valuable insights into the science practices highlighted by the Next Generation Science Standards (NGSS), as well as a deeper understanding of cutting-edge scientific discoveries and local impacts of climate change. For students, connecting to local scientists can provide a relevant connection to climate science and STEM skills. Over the past two years, the Climate Voices Science Speakers Network (climatevoices.org) has grown to a robust network of nearly 400 climate science speakers across the United States. Formal and informal educators, K-12 students, and community groups connect with our speakers through our interactive map-based website and invite them to meet through face-to-face and virtual presentations, such as webinars and podcasts. But creating a common language between scientists and educators requires coaching on both sides. In this presentation, we will present the "nitty-gritty" of setting up scientist-educator collaborations, as well as the challenges and opportunities that arise from these partnerships. We will share the impact of these collaborations through case studies, including anecdotal feedback and metrics.
Speaker gender identification based on majority vote classifiers
NASA Astrophysics Data System (ADS)
Mezghani, Eya; Charfeddine, Maha; Nicolas, Henri; Ben Amar, Chokri
2017-03-01
Speaker gender identification is considered one of the most important tools in several multimedia applications, namely automatic speech recognition, interactive voice response systems, and audio browsing systems. The performance of gender identification systems is closely linked to the selected feature set and the employed classification model. Typical techniques are based on selecting the best-performing classification method or searching for the optimal tuning of a single classifier's parameters through experimentation. In this paper, we consider a relevant and rich set of features involving pitch and MFCCs as well as other temporal and frequency-domain descriptors. Five classification models were evaluated: decision tree, discriminant analysis, naive Bayes, support vector machine, and k-nearest neighbor. The three best-performing classifiers among the five contribute through majority voting over their scores. Experiments were performed on three datasets spoken in three languages (English, German, and Arabic) in order to validate the language independence of the proposed scheme. Results confirm that the presented system reaches a satisfactory accuracy rate and promising classification performance, thanks to the discriminating ability and diversity of the selected features combined with mid-level statistics.
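A minimal sketch of the majority-voting idea is shown below, using three of the classifier families named in the abstract; the synthetic features and labels, and the scikit-learn implementation, are assumptions for illustration only.

```python
# Hypothetical sketch: hard majority voting over three gender classifiers.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 20))          # e.g. pitch + MFCC statistics per utterance
y = rng.integers(0, 2, 200)                 # 0 = female, 1 = male (synthetic labels)

vote = VotingClassifier(
    estimators=[("tree", DecisionTreeClassifier()),
                ("svm", SVC()),
                ("knn", KNeighborsClassifier())],
    voting="hard",                          # each classifier casts one vote per utterance
)
vote.fit(X, y)
print("training accuracy:", vote.score(X, y))
```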
Report of an international symposium on drugs and driving
DOT National Transportation Integrated Search
1975-06-30
This report presents the proceedings of a Symposium on Drugs (other than alcohol) and Driving. Speaker's papers and work session summaries are included. Major topics include: Overview of Problem, Risk Identification, Drug Measurement in Biological Ma...
Individual differences in selective attention predict speech identification at a cocktail party.
Oberfeld, Daniel; Klöckner-Nowotny, Felicitas
2016-08-31
Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, the performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a similar proportion of variance as the binaural sensitivity for the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise.
Embedding speech into virtual realities
NASA Technical Reports Server (NTRS)
Bohn, Christian-Arved; Krueger, Wolfgang
1993-01-01
In this work a speaker-independent speech recognition system is presented, which is suitable for implementation in Virtual Reality applications. The use of an artificial neural network in connection with a special compression of the acoustic input leads to a system, which is robust, fast, easy to use and needs no additional hardware, beside a common VR-equipment.
Greek perception and production of an English vowel contrast: A preliminary study
NASA Astrophysics Data System (ADS)
Podlipský, Václav J.
2005-04-01
This study focused on language-independent principles functioning in acquisition of second language (L2) contrasts. Specifically, it tested Bohn's Desensitization Hypothesis [in Speech perception and linguistic experience: Issues in Cross Language Research, edited by W. Strange (York Press, Baltimore, 1995)] which predicted that Greek speakers of English as an L2 would base their perceptual identification of English /i/ and /I/ on durational differences. Synthetic vowels differing orthogonally in duration and spectrum between the /i/ and /I/ endpoints served as stimuli for a forced-choice identification test. To assess L2 proficiency and to evaluate the possibility of cross-language category assimilation, productions of English /i/, /I/, and /ɛ/ and of Greek /i/ and /e/ were elicited and analyzed acoustically. The L2 utterances were also rated for the degree of foreign accent. Two native speakers of Modern Greek with low and 2 with intermediate experience in English participated. Six native English (NE) listeners and 6 NE speakers tested in an earlier study constituted the control groups. Heterogeneous perceptual behavior was observed for the L2 subjects. It is concluded that until acquisition in completely naturalistic settings is tested, possible interference of formally induced meta-linguistic differentiation between a ``short'' and a ``long'' vowel cannot be eliminated.
Multimodal fusion of polynomial classifiers for automatic person recognition
NASA Astrophysics Data System (ADS)
Broun, Charles C.; Zhang, Xiaozheng
2001-03-01
With the prevalence of the information age, privacy and personalization are forefront in today's society. As such, biometrics are viewed as essential components of current evolving technological systems. Consumers demand unobtrusive and non-invasive approaches. In our previous work, we have demonstrated a speaker verification system that meets these criteria. However, there are additional constraints for fielded systems. The required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions. There are two significant problem areas in current generation speaker verification systems. The first is the difficulty in acquiring clean audio signals in all environments without encumbering the user with a head-mounted close-talking microphone. Second, unimodal biometric systems do not work with a significant percentage of the population. To combat these issues, multimodal techniques are being investigated to improve system robustness to environmental conditions, as well as improve overall accuracy across the population. We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. In order to maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion. This is chosen, rather than static face or iris recognition, because it provides dynamic information about the individual. In addition, the lip dynamics can aid speech recognition to provide liveness testing. The visual processing method makes use of both color and edge information, combined within a Markov random field (MRF) framework, to localize the lips. Geometric features are extracted and input to a polynomial classifier for the person recognition process. A late integration approach, based on a probabilistic model, is employed to combine the two modalities. The system is tested on the XM2VTS database combined with AWGN in the audio domain over a range of signal-to-noise ratios.
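The late-integration step can be illustrated with a small score-fusion sketch; the fixed fusion weight and the per-person scores below are illustrative assumptions, not values from the paper.

```python
# Illustrative late integration: per-person scores from an audio model and a
# lip-motion (visual) model are combined with a fixed weight before deciding.
# The weight and the example scores are placeholders.
import numpy as np

def fuse_scores(audio_scores, visual_scores, audio_weight=0.7):
    """Weighted sum of audio and visual scores for each enrolled person."""
    a = np.asarray(audio_scores, dtype=float)
    v = np.asarray(visual_scores, dtype=float)
    return audio_weight * a + (1.0 - audio_weight) * v

audio = np.array([-42.1, -39.8, -45.3])    # e.g. log-likelihoods per person
visual = np.array([-10.2, -12.7, -9.9])
print("accepted person index:", int(np.argmax(fuse_scores(audio, visual))))
```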
An automatic speech recognition system with speaker-independent identification support
NASA Astrophysics Data System (ADS)
Caranica, Alexandru; Burileanu, Corneliu
2015-02-01
The novelty of this work lies in the application of an open-source research software toolkit (CMU Sphinx) to train, build and evaluate a speech recognition system, with speaker-independent support, for voice-controlled hardware applications. Moreover, we propose to use the trained acoustic model to successfully decode offline voice commands on embedded hardware, such as a low-cost ARMv6 SoC, the Raspberry Pi. This type of single-board computer, mainly used for educational and research activities, can serve as a proof-of-concept software and hardware stack for low-cost voice automation systems.
Multilevel Analysis in Analyzing Speech Data
ERIC Educational Resources Information Center
Guddattu, Vasudeva; Krishna, Y.
2011-01-01
The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…
A dynamic multi-channel speech enhancement system for distributed microphones in a car environment
NASA Astrophysics Data System (ADS)
Matheja, Timo; Buck, Markus; Fingscheidt, Tim
2013-12-01
Supporting multiple active speakers in automotive hands-free or speech dialog applications is an interesting issue not least due to comfort reasons. Therefore, a multi-channel system for enhancement of speech signals captured by distributed distant microphones in a car environment is presented. Each of the potential speakers in the car has a dedicated directional microphone close to his position that captures the corresponding speech signal. The aim of the resulting overall system is twofold: On the one hand, a combination of an arbitrary pre-defined subset of speakers' signals can be performed, e.g., to create an output signal in a hands-free telephone conference call for a far-end communication partner. On the other hand, annoying cross-talk components from interfering sound sources occurring in multiple different mixed output signals are to be eliminated, motivated by the possibility of other hands-free applications being active in parallel. The system includes several signal processing stages. A dedicated signal processing block for interfering speaker cancellation attenuates the cross-talk components of undesired speech. Further signal enhancement comprises the reduction of residual cross-talk and background noise. Subsequently, a dynamic signal combination stage merges the processed single-microphone signals to obtain appropriate mixed signals at the system output that may be passed to applications such as telephony or a speech dialog system. Based on signal power ratios between the particular microphone signals, an appropriate speaker activity detection and therewith a robust control mechanism of the whole system is presented. The proposed system may be dynamically configured and has been evaluated for a car setup with four speakers sitting in the car cabin disturbed in various noise conditions.
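The power-ratio control idea mentioned above can be sketched roughly as follows; the frame length, the ratio threshold, and the microphone layout are illustrative assumptions rather than the system's actual parameters.

```python
# Rough sketch of speaker-activity detection from power ratios between the
# seat microphones: the seat whose microphone clearly dominates a frame is
# marked active. Frame length and threshold are illustrative assumptions.
import numpy as np

def frame_powers(signals, frame_len=512):
    """signals: (n_mics, n_samples) -> (n_mics, n_frames) mean power per frame."""
    n_mics, n_samples = signals.shape
    n_frames = n_samples // frame_len
    frames = signals[:, : n_frames * frame_len].reshape(n_mics, n_frames, frame_len)
    return np.mean(frames ** 2, axis=2)

def active_speaker_per_frame(signals, ratio_threshold=2.0):
    p = frame_powers(signals)
    dominant = np.argmax(p, axis=0).astype(int)       # loudest mic per frame
    second = np.sort(p, axis=0)[-2]                    # runner-up power per frame
    ratio = p.max(axis=0) / (second + 1e-12)
    dominant[ratio < ratio_threshold] = -1             # -1: no clear single speaker
    return dominant
```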
Voice Register in Mon: Acoustics and Electroglottography
Abramson, Arthur S.; Tiede, Mark K.; Luangthongkum, Theraphan
2016-01-01
Mon is spoken in villages in Thailand and Myanmar. The dialect of Ban Nakhonchum, Thailand has two voice registers, modal and breathy; these phonation types, along with other phonetic properties, distinguish minimal pairs. Four native speakers of this dialect recorded repetitions of 14 randomized words (seven minimal pairs) for acoustic analysis. We used a subset of these pairs in a listening test to verify the perceptual robustness of the register distinction. Acoustic analysis found significant differences in noise component, spectral slope, and fundamental frequency. In a subsequent session four speakers were also recorded using electroglottography (EGG), which showed systematic differences in the contact quotient (CQ). The salience of these properties in maintaining the register distinction is discussed in the context of possible tonogenesis for this language. PMID:26636544
Seeing a singer helps comprehension of the song's lyrics.
Jesse, Alexandra; Massaro, Dominic W
2010-06-01
When listening to speech, we often benefit when also seeing the speaker's face. If this advantage is not domain specific for speech, the recognition of sung lyrics should also benefit from seeing the singer's face. By independently varying the sight and sound of the lyrics, we found a substantial comprehension benefit of seeing a singer. This benefit was robust across participants, lyrics, and repetition of the test materials. This benefit was much larger than the benefit for sung lyrics obtained in previous research, which had not provided the visual information normally present in singing. Given that the comprehension of sung lyrics benefits from seeing the singer, just like speech comprehension benefits from seeing the speaker, both speech and music perception appear to be multisensory processes.
A cross-language study of perception of lexical stress in English.
Yu, Vickie Y; Andruski, Jean E
2010-08-01
This study investigates the question of whether language background affects the perception of lexical stress in English. Thirty native English speakers and 30 native Chinese learners of English participated in a stressed-syllable identification task and a discrimination task involving three types of stimuli (real words/pseudowords/hums). The results show that both language groups were able to identify and discriminate stress patterns. Lexical and segmental information affected the English and Chinese speakers in varying degrees. English and Chinese speakers showed different response patterns to trochaic vs. iambic stress across the three types of stimuli. An acoustic analysis revealed that two language groups used different acoustic cues to process lexical stress. The findings suggest that the different degrees of lexical and segmental effects can be explained by language background, which in turn supports the hypothesis that language background affects the perception of lexical stress in English.
Brain Plasticity in Speech Training in Native English Speakers Learning Mandarin Tones
NASA Astrophysics Data System (ADS)
Heinzen, Christina Carolyn
The current study employed behavioral and event-related potential (ERP) measures to investigate brain plasticity associated with second-language (L2) phonetic learning based on an adaptive computer training program. The program utilized the acoustic characteristics of Infant-Directed Speech (IDS) to train monolingual American English-speaking listeners to perceive Mandarin lexical tones. Behavioral identification and discrimination tasks were conducted using naturally recorded speech, carefully controlled synthetic speech, and non-speech control stimuli. The ERP experiments were conducted with selected synthetic speech stimuli in a passive listening oddball paradigm. Identical pre- and post- tests were administered on nine adult listeners, who completed two-to-three hours of perceptual training. The perceptual training sessions used pair-wise lexical tone identification, and progressed through seven levels of difficulty for each tone pair. The levels of difficulty included progression in speaker variability from one to four speakers and progression through four levels of acoustic exaggeration of duration, pitch range, and pitch contour. Behavioral results for the natural speech stimuli revealed significant training-induced improvement in identification of Tones 1, 3, and 4. Improvements in identification of Tone 4 generalized to novel stimuli as well. Additionally, comparison between discrimination of across-category and within-category stimulus pairs taken from a synthetic continuum revealed a training-induced shift toward more native-like categorical perception of the Mandarin lexical tones. Analysis of the Mismatch Negativity (MMN) responses in the ERP data revealed increased amplitude and decreased latency for pre-attentive processing of across-category discrimination as a result of training. There were also laterality changes in the MMN responses to the non-speech control stimuli, which could reflect reallocation of brain resources in processing pitch patterns for the across-category lexical tone contrast. Overall, the results support the use of IDS characteristics in training non-native speech contrasts and provide impetus for further research.
Shattuck-Hufnagel, S.; Choi, J. Y.; Moro-Velázquez, L.; Gómez-García, J. A.
2017-01-01
Although a large number of acoustic indicators have already been proposed in the literature to evaluate the hypokinetic dysarthria of people with Parkinson's Disease, the goal of this work is to identify and interpret new reliable and complementary articulatory biomarkers that can be applied to predict/evaluate Parkinson's Disease from a diadochokinetic test, contributing to the possibility of a further multidimensional analysis of the speech of parkinsonian patients. The proposed biomarkers are based on the kinetic behaviour of the envelope trace, which is directly linked to the articulatory dysfunctions introduced by the disease from its early stages. The appeal of these new articulatory indicators lies in their ease of identification and interpretation, and in their potential to be translated into computer-based automatic methods to screen for the disease from speech. Throughout this paper, the accuracy provided by these acoustic kinetic biomarkers is compared with that obtained with a baseline system based on speaker identification techniques. Results show accuracies around 85% that are in line with those obtained with complex state-of-the-art speaker recognition techniques, but with an easier physical interpretation, which opens up the possibility of transfer to a clinical setting. PMID:29240814
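As a minimal illustration of the envelope trace these biomarkers build on, the sketch below extracts a smoothed amplitude envelope from a diadochokinetic recording via the Hilbert transform; the smoothing cutoff is an assumption, and the paper's actual kinetic biomarkers are not reproduced here.

```python
# Minimal amplitude-envelope extraction for a diadochokinetic recording:
# Hilbert-transform magnitude followed by a low-pass filter that keeps only
# slow articulatory modulations. The 10 Hz cutoff is an illustrative choice.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def amplitude_envelope(x, fs, cutoff_hz=10.0):
    env = np.abs(hilbert(x))                              # instantaneous amplitude
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, env)                            # smoothed envelope trace
```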
The non-trusty clown attack on model-based speaker recognition systems
NASA Astrophysics Data System (ADS)
Farrokh Baroughi, Alireza; Craver, Scott
2015-03-01
Biometric detectors for speaker identification commonly employ a statistical model for a subject's voice, such as a Gaussian Mixture Model, that combines multiple means to improve detector performance. This allows a malicious insider to amend or append a component of a subject's statistical model so that a detector behaves normally except under a carefully engineered circumstance. This allows an attacker to force a misclassification of his or her voice only when desired, by smuggling data into a database far in advance of an attack. Note that the attack is possible if the attacker has access to the database, even for a limited time, to modify the victim's model. We exhibit such an attack on a speaker identification system, in which an attacker can force a misclassification by speaking in an unusual voice, replacing the least weighted component of the victim's model with the most heavily weighted component of the attacker's unusual-voice model. The attacker makes his or her voice unusual during the attack because his or her normal voice model may already be in the database; by attacking with an unusual voice, the attacker retains the option of being recognized as himself or herself when talking normally, or as the victim when talking in the unusual manner. By attaching an appropriately weighted vector to a victim's model, we can impersonate all users in our simulations, while avoiding unwanted false rejections.
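A toy sketch of the model-tampering step described above, written against scikit-learn's GaussianMixture attributes; it is purely illustrative, assumes both mixtures were fitted with full covariances, and ignores details (such as recomputing precision matrices) that a deployed system would need.

```python
# Illustrative tampering step: overwrite the victim GMM's least-weighted
# component with the attacker's most-weighted ("unusual voice") component,
# then renormalize the weights. Assumes fitted sklearn GaussianMixture models
# with full covariances; precisions would also need recomputation in practice.
import numpy as np

def implant_component(victim_gmm, attacker_gmm):
    i = int(np.argmin(victim_gmm.weights_))     # weakest victim component
    j = int(np.argmax(attacker_gmm.weights_))   # strongest attacker component
    victim_gmm.means_[i] = attacker_gmm.means_[j]
    victim_gmm.covariances_[i] = attacker_gmm.covariances_[j]
    victim_gmm.weights_[i] = attacker_gmm.weights_[j]
    victim_gmm.weights_ /= victim_gmm.weights_.sum()
    return victim_gmm
```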
Application of the wavelet transform for speech processing
NASA Technical Reports Server (NTRS)
Maes, Stephane
1994-01-01
Speaker identification and word spotting will shortly play a key role in space applications. An approach based on the wavelet transform is presented that, in the context of the 'modulation model,' enables extraction of speech features which are used as input for the classification process.
Factors Affecting Grammatical and Lexical Complexity of Long-Term L2 Speakers' Oral Proficiency
ERIC Educational Resources Information Center
Lahmann, Cornelia; Steinkrauss, Rasmus; Schmid, Monika S.
2016-01-01
There remains considerable disagreement about which factors drive second language (L2) ultimate attainment. Age of onset (AO) appears to be a robust factor, lending support to theories of maturational constraints on L2 acquisition. The present study is an investigation of factors that influence grammatical and lexical complexity at the stage of L2…
Flege, J E; Hillenbrand, J
1986-02-01
This study examined the effect of linguistic experience on perception of the English /s/-/z/ contrast in word-final position. The durations of the periodic ("vowel") and aperiodic ("fricative") portions of stimuli, ranging from peas to peace, were varied in a 5 X 5 factorial design. Forced-choice identification judgments were elicited from two groups of native speakers of American English differing in dialect, and from two groups each of native speakers of French, Swedish, and Finnish differing in English-language experience. The results suggested that the non-native subjects used cues established for the perception of phonetic contrasts in their native language to identify fricatives as /s/ or /z/. Lengthening vowel duration increased /z/ judgments in all eight subject groups, although the effect was smaller for native speakers of French than for native speakers of the other languages. Shortening fricative duration, on the other hand, significantly decreased /z/ judgments only by the English and French subjects. It did not influence voicing judgments by the Swedish and Finnish subjects, even those who had lived for a year or more in an English-speaking environment. These findings raise the question of whether adults who learn a foreign language can acquire the ability to integrate multiple acoustic cues to a phonetic contrast which does not exist in their native language.
Identification and robust control of an experimental servo motor.
Adam, E J; Guestrin, E D
2002-04-01
In this work, the design of a robust controller for an experimental laboratory-scale position control system based on a dc motor drive as well as the corresponding identification and robust stability analysis are presented. In order to carry out the robust design procedure, first, a classic closed-loop identification technique is applied and then, the parametrization by internal model control is used. The model uncertainty is evaluated under both parametric and global representation. For the latter case, an interesting discussion about the conservativeness of this description is presented by means of a comparison between the uncertainty disk and the critical perturbation radius approaches. Finally, conclusions about the performance of the experimental system with the robust controller are discussed using comparative graphics of the controlled variable and the Nyquist stability margin as a robustness measurement.
The emotional impact of being myself: Emotions and foreign-language processing.
Ivaz, Lela; Costa, Albert; Duñabeitia, Jon Andoni
2016-03-01
Native languages are acquired in emotionally rich contexts, whereas foreign languages are typically acquired in emotionally neutral academic environments. As a consequence of this difference, it has been suggested that bilinguals' emotional reactivity in foreign-language contexts is reduced as compared with native language contexts. In the current study, we investigated whether this emotional distance associated with foreign languages could modulate automatic responses to self-related linguistic stimuli. Self-related stimuli enhance performance by boosting memory, speed, and accuracy as compared with stimuli unrelated to the self (the so-called self-bias effect). We explored whether this effect depends on the language context by comparing self-biases in a native and a foreign language. Two experiments were conducted with native Spanish speakers with a high level of English proficiency in which they were asked to complete a perceptual matching task during which they associated simple geometric shapes (circles, squares, and triangles) with the labels "you," "friend," and "other" either in their native or foreign language. Results showed a robust asymmetry in the self-bias in the native- and foreign-language contexts: A larger self-bias was found in the native than in the foreign language. An additional control experiment demonstrated that the same materials administered to a group of native English speakers yielded robust self-bias effects that were comparable in magnitude to the ones obtained with the Spanish speakers when tested in their native language (but not in their foreign language). We suggest that the emotional distance evoked by the foreign-language contexts caused these differential effects across language contexts. These results demonstrate that the foreign-language effects are pervasive enough to affect automatic stages of emotional processing.
Korean Word Frequency and Commonality Study for Augmentative and Alternative Communication
ERIC Educational Resources Information Center
Shin, Sangeun; Hill, Katya
2016-01-01
Background: Vocabulary frequency results have been reported to design and support augmentative and alternative communication (AAC) interventions. A few studies exist for adult speakers and for other natural languages. With the increasing demand on AAC treatment for Korean adults, identification of high-frequency or core vocabulary (CV) becomes…
A Report by the Air Pollution Committee
ERIC Educational Resources Information Center
Kirkpatrick, Lane
1972-01-01
Description of a symposium on "Air Resource Protection and the Environment," held at the 1972 Environmental Health Conference and Exposition. Reports included a mathematical model for predicting future levels of air pollution, evaluation and identification of transportation controls, and a panel discussion of points raised by the speakers. (LK)
Myths and Political Rhetoric: Jimmy Carter Accepts the Nomination.
ERIC Educational Resources Information Center
Corso, Dianne M.
Like other political speakers who have drawn on the personification, identification, and dramatic encounter images of mythology to pressure and persuade audiences, Jimmy Carter evoked the myths of the hero, the American Dream, and the ideal political process in his presidential nomination acceptance speech. By stressing his unknown status, his…
Gender Differences in the Recognition of Vocal Emotions
Lausen, Adi; Schacht, Annekathrin
2018-01-01
The conflicting findings from the few studies conducted with regard to gender differences in the recognition of vocal expressions of emotion have left the exact nature of these differences unclear. Several investigators have argued that a comprehensive understanding of gender differences in vocal emotion recognition can only be achieved by replicating these studies while accounting for influential factors such as stimulus type, gender-balanced samples, number of encoders, decoders, and emotional categories. This study aimed to account for these factors by investigating whether emotion recognition from vocal expressions differs as a function of both listeners' and speakers' gender. A total of N = 290 participants were randomly and equally allocated to two groups. One group listened to words and pseudo-words, while the other group listened to sentences and affect bursts. Participants were asked to categorize the stimuli with respect to the expressed emotions in a fixed-choice response format. Overall, females were more accurate than males when decoding vocal emotions; however, when testing for specific emotions, these differences were small in magnitude. Speakers' gender had a significant impact on how listeners judged emotions from the voice. The group listening to words and pseudo-words had higher identification rates for emotions spoken by male than by female actors, whereas in the group listening to sentences and affect bursts the identification rates were higher when emotions were uttered by female than male actors. The mixed pattern for emotion-specific effects, however, indicates that, in the vocal channel, the reliability of emotion judgments is not systematically influenced by speakers' gender and the related stereotypes of emotional expressivity. Together, these results extend previous findings by showing effects of listeners' and speakers' gender on the recognition of vocal emotions. They stress the importance of distinguishing these factors to explain recognition ability in the processing of emotional prosody. PMID:29922202
Hearing history influences voice gender perceptual performance in cochlear implant users.
Kovačić, Damir; Balaban, Evan
2010-12-01
The study was carried out to assess the role that five hearing history variables (chronological age, age at onset of deafness, age of first cochlear implant [CI] activation, duration of CI use, and duration of known deafness) play in the ability of CI users to identify speaker gender. Forty-one juvenile CI users participated in two voice gender identification tasks. In a fixed, single-interval task, subjects listened to a single speech item from one of 20 adult male or 20 adult female speakers and had to identify speaker gender. In an adaptive speech-based voice gender discrimination task with the fundamental frequency difference between the voices as the adaptive parameter, subjects listened to a pair of speech items presented in sequential order, one of which was always spoken by an adult female and the other by an adult male. Subjects had to identify the speech item spoken by the female voice. Correlation and regression analyses between perceptual scores in the two tasks and the hearing history variables were performed. Subjects fell into three performance groups: (1) those who could distinguish voice gender in both tasks, (2) those who could distinguish voice gender in the adaptive but not the fixed task, and (3) those who could not distinguish voice gender in either task. Gender identification performance for single voices in the fixed task was significantly and negatively related to the duration of deafness before cochlear implantation (shorter deafness yielded better performance), whereas performance in the adaptive task was weakly but significantly related to age at first activation of the CI device, with earlier activations yielding better scores. The existence of a group of subjects able to perform adaptive discrimination but unable to identify the gender of singly presented voices demonstrates the potential dissociability of the skills required for these two tasks, suggesting that duration of deafness and age of cochlear implantation could have dissociable effects on the development of different skills required by CI users to identify speaker gender.
Acoustic and perceptual effects of overall F0 range in a lexical pitch accent distinction
NASA Astrophysics Data System (ADS)
Wade, Travis
2002-05-01
A speaker's overall fundamental frequency range is generally considered a variable, nonlinguistic element of intonation. This study examined the precision with which overall F0 is predictable based on previous intonational context and the extent to which it may be perceptually significant. Speakers of Tokyo Japanese produced pairs of sentences differing lexically only in the presence or absence of a single pitch accent as responses to visual and prerecorded speech cues presented in an interactive manner. F0 placement of high tones (previously observed to be relatively variable in pitch contours) was found to be consistent across speakers and uniformly dependent on the intonation of the different sentences used as cues. In a subsequent perception experiment, continuous manipulations of these same sentences between typical accented and typical non-accent-containing versions were presented to Japanese listeners for lexical identification. Results showed that listeners' perception was not significantly altered in compensation for artificial manipulation of preceding intonation. Implications are discussed within an autosegmental analysis of tone. The current results are consistent with the notion that pitch range (i.e., specific vertical locations of tonal peaks) does not simply vary gradiently across speakers and situations but constitutes a predictable part of the phonetic specification of tones.
Future Tense and Economic Decisions: Controlling for Cultural Evolution
Roberts, Seán G.; Winters, James; Chen, Keith
2015-01-01
A previous study by Chen demonstrates a correlation between languages that grammatically mark future events and their speakers' propensity to save, even after controlling for numerous economic and demographic factors. The implication is that languages which grammatically distinguish the present and the future may bias their speakers to distinguish them psychologically, leading to less future-oriented decision making. However, Chen's original analysis assumed languages are independent. This neglects the fact that languages are related, causing correlations to appear stronger than is warranted (Galton's problem). In this paper, we test the robustness of Chen's correlations to corrections for the geographic and historical relatedness of languages. While the question seems simple, the answer is complex. In general, the statistical correlation between the two variables is weaker when controlling for relatedness. When applying the strictest tests for relatedness, and when data is not aggregated across individuals, the correlation is not significant. However, the correlation did remain reasonably robust under a number of tests. We argue that any claims of synchronic patterns between cultural variables should be tested for spurious correlations, with the kinds of approaches used in this paper. However, experiments or case-studies would be more fruitful avenues for future research on this specific topic, rather than further large-scale cross-cultural correlational studies. PMID:26186527
The Language of Persuasion, English, Vocabulary: 5114.68.
ERIC Educational Resources Information Center
Groff, Irvin
Developed for a high school quinmester unit on the language of persuasion, this guide provides the teacher with teaching strategies for a study of the speaker or writer as a persuader, the identification of the logical and psychological tools of persuasion, an examination of the levels of abstraction, the techniques of propaganda, and the…
ERIC Educational Resources Information Center
Jiang, Jun; Liu, Fang; Wan, Xuan; Jiang, Cunmei
2015-01-01
Tone language experience benefits pitch processing in music and speech for typically developing individuals. No known studies have examined pitch processing in individuals with autism who speak a tone language. This study investigated discrimination and identification of melodic contour and speech intonation in a group of Mandarin-speaking…
A Cross-Language Study of Perception of Lexical Stress in English
ERIC Educational Resources Information Center
Yu, Vickie Y.; Andruski, Jean E.
2010-01-01
This study investigates the question of whether language background affects the perception of lexical stress in English. Thirty native English speakers and 30 native Chinese learners of English participated in a stressed-syllable identification task and a discrimination task involving three types of stimuli (real words/pseudowords/hums). The…
Processing of Acoustic Cues in Lexical-Tone Identification by Pediatric Cochlear-Implant Recipients
ERIC Educational Resources Information Center
Peng, Shu-Chen; Lu, Hui-Ping; Lu, Nelson; Lin, Yung-Song; Deroche, Mickael L. D.; Chatterjee, Monita
2017-01-01
Purpose: The objective was to investigate acoustic cue processing in lexical-tone recognition by pediatric cochlear-implant (CI) recipients who are native Mandarin speakers. Method: Lexical-tone recognition was assessed in pediatric CI recipients and listeners with normal hearing (NH) in 2 tasks. In Task 1, participants identified naturally…
Acquisition of L2 Vowel Duration in Japanese by Native English Speakers
ERIC Educational Resources Information Center
Okuno, Tomoko
2013-01-01
Research has demonstrated that focused perceptual training facilitates L2 learners' segmental perception and spoken word identification. Hardison (2003) and Motohashi-Saigo and Hardison (2009) found benefits of visual cues in the training for acquisition of L2 contrasts. The present study examined factors affecting perception and production…
Improving Speaker Recognition by Biometric Voice Deconstruction
Mazaira-Fernandez, Luis Miguel; Álvarez-Marquina, Agustín; Gómez-Vilda, Pedro
2015-01-01
Person identification, especially in critical environments, has always been a subject of great interest. However, it has gained a new dimension in a world threatened by a new kind of terrorism that uses social networks (e.g., YouTube) to broadcast its message. In this new scenario, classical identification methods (such as fingerprints or face recognition) have been forcibly replaced by alternative biometric characteristics such as voice, as sometimes this is the only feature available. The present study benefits from the advances achieved in recent years in understanding and modeling voice production. The paper hypothesizes that a gender-dependent characterization of speakers, combined with the use of a set of features derived from the components resulting from the deconstruction of the voice into its glottal source and vocal tract estimates, will enhance recognition rates when compared to classical approaches. A general description of the main hypothesis and the methodology followed to extract the gender-dependent extended biometric parameters is given. Experimental validation is carried out both on a highly controlled acoustic condition database, and on a mobile phone network database recorded under non-controlled acoustic conditions. PMID:26442245
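A very rough sketch of the voice deconstruction idea: estimate a vocal-tract filter by linear prediction and treat the inverse-filtered residual as a crude glottal-source estimate. The LPC order is an assumption, and this is a generic textbook approach, not the authors' gender-dependent parameterization.

```python
# Crude LPC-based deconstruction of a voiced frame into vocal-tract and
# glottal-source estimates: fit an all-pole model, then inverse-filter.
# Generic textbook approach; not the paper's algorithm.
import numpy as np
import librosa
from scipy.signal import lfilter

def glottal_residual(y, order=16):
    a = librosa.lpc(y, order=order)        # all-pole vocal-tract coefficients
    residual = lfilter(a, [1.0], y)        # inverse filtering -> source estimate
    return a, residual
```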
Catalan speakers' perception of word stress in unaccented contexts.
Ortega-Llebaria, Marta; del Mar Vanrell, Maria; Prieto, Pilar
2010-01-01
In unaccented contexts, formant frequency differences related to vowel reduction constitute a consistent cue to word stress in English, whereas in languages such as Spanish that have no systematic vowel reduction, stress perception is based on duration and intensity cues. This article examines the perception of word stress by speakers of Central Catalan, in which, due to its vowel reduction patterns, words either alternate stressed open vowels with unstressed mid-central vowels as in English or contain no vowel quality cues to stress, as in Spanish. Results show that Catalan listeners perceive stress based mainly on duration cues in both word types. Other cues pattern together with duration to make stress perception more robust. However, no single cue is absolutely necessary and trading effects compensate for a lack of differentiation in one dimension by changes in another dimension. In particular, speakers identify longer mid-central vowels as more stressed than shorter open vowels. These results and those obtained in other stress-accent languages provide cumulative evidence that word stress is perceived independently of pitch accents by relying on a set of cues with trading effects so that no single cue, including formant frequency differences related to vowel reduction, is absolutely necessary for stress perception.
Robust matching for voice recognition
NASA Astrophysics Data System (ADS)
Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.
1994-10-01
This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.
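Frame pruning and channel equalization are standard front-end steps; the sketch below shows one common realization (energy-based frame pruning followed by cepstral mean subtraction). The feature choice and the pruning fraction are assumptions, not necessarily the algorithm used in the paper.

```python
# One common realization of the two front-end ideas named in the abstract:
# prune low-energy frames, then subtract the cepstral mean to equalize
# channel effects. Feature type and pruning fraction are illustrative.
import numpy as np
import librosa

def pruned_normalized_features(y, sr, n_mfcc=20, keep_fraction=0.7):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)       # (n_mfcc, n_frames)
    energy = mfcc[0]                                             # c0 tracks log energy
    keep = energy >= np.quantile(energy, 1.0 - keep_fraction)    # frame pruning
    kept = mfcc[:, keep]
    return kept - kept.mean(axis=1, keepdims=True)               # cepstral mean subtraction
```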
A constrained robust least squares approach for contaminant release history identification
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.
2006-04-01
Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in the previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solvers. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows for additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, it is found that CRLS gave much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
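As a minimal illustration of imposing nonnegativity when solving the (possibly ill-conditioned) linear system for a release history, the sketch below uses bounded least squares with a small Tikhonov term standing in for robustness; it is not the CRLS estimator developed in the paper.

```python
# Minimal sketch: recover a nonnegative release history x from a response
# matrix A and observed concentrations b. A small ridge term guards against
# ill-conditioning; this stands in for, but is not, the paper's CRLS estimator.
import numpy as np
from scipy.optimize import lsq_linear

def recover_release_history(A, b, reg=1e-2):
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(reg) * np.eye(n)])   # append ridge rows
    b_aug = np.concatenate([b, np.zeros(n)])
    return lsq_linear(A_aug, b_aug, bounds=(0.0, np.inf)).x
```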
Event identification by acoustic signature recognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dress, W.B.; Kercel, S.W.
1995-07-01
Many events of interest to the security community produce acoustic emissions that are, in principle, identifiable as to cause. Some obvious examples are gunshots, breaking glass, takeoffs and landings of small aircraft, vehicular engine noises, footsteps (high frequencies when on gravel, very low frequencies when on soil), and voices (whispers to shouts). We are investigating wavelet-based methods to extract unique features of such events for classification and identification. We also discuss methods of classification and pattern recognition specifically tailored for acoustic signatures obtained by wavelet analysis. The paper is divided into three parts: completed work, work in progress, and future applications. The completed phase has led to the successful recognition of aircraft types on landing and takeoff. Both small aircraft (twin-engine turboprop) and large (commercial airliners) were included in the study. The project considered the design of a small, field-deployable, inexpensive device. The techniques developed during the aircraft identification phase were then adapted to a multispectral electromagnetic interference monitoring device now deployed in a nuclear power plant. This is a general-purpose wavelet analysis engine, spanning 14 octaves, and can be adapted for other specific tasks. Work in progress is focused on applying the methods previously developed to speaker identification. Some of the problems to be overcome include recognition of sounds as voice patterns and as distinct from possible background noises (e.g., music), as well as identification of the speaker from a short-duration voice sample. A generalization of the completed work and the work in progress is a device capable of classifying any number of acoustic events, particularly quasi-stationary events such as engine noises and voices and singular events such as gunshots and breaking glass. We will show examples of both kinds of events and discuss their recognition likelihood.
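A compact illustration of wavelet-based feature extraction for an acoustic event: the normalized energy in each level of a multilevel discrete wavelet decomposition can serve as a signature vector for a classifier. The wavelet, the number of levels, and the feature definition are assumptions; the paper's 14-octave analysis engine is more elaborate.

```python
# Toy wavelet signature for an acoustic event: normalized energy per level of
# a multilevel DWT (PyWavelets). Wavelet choice and depth are illustrative;
# the signal must be long enough to support the requested number of levels.
import numpy as np
import pywt

def wavelet_energy_signature(x, wavelet="db4", levels=8):
    coeffs = pywt.wavedec(np.asarray(x, dtype=float), wavelet, level=levels)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / (energies.sum() + 1e-12)    # one energy fraction per band
```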
Scalable Learning for Geostatistics and Speaker Recognition
2011-01-01
of prior knowledge of the model or due to improved robustness requirements). Both these methods have their own advantages and disadvantages. The use...application. If the data is well-correlated and low-dimensional, any prior knowledge available on the data can be used to build a parametric model. In the...absence of prior knowledge, non-parametric methods can be used. If the data is high-dimensional, PCA-based dimensionality reduction is often the first
Perception of Non-Native Consonant Length Contrast: The Role of Attention in Phonetic Processing
ERIC Educational Resources Information Center
Porretta, Vincent J.; Tucker, Benjamin V.
2015-01-01
The present investigation examines English speakers' ability to identify and discriminate non-native consonant length contrast. Three groups (L1 English No-Instruction, L1 English Instruction, and L1 Finnish control) performed a speeded forced-choice identification task and a speeded AX discrimination task on Finnish non-words (e.g.…
The Effect of Pitch Peak Alignment on Sentence Type Identification in Russian
ERIC Educational Resources Information Center
Makarova, Veronika
2007-01-01
This paper reports the results of an experimental phonetic study examining pitch peak alignment in production and perception of three-syllable one-word sentences with phonetic rising-falling pitch movement by speakers of Russian. The first part of the study (Experiment 1) utilizes 22 one-word three-syllable utterances read by five female speakers…
ERIC Educational Resources Information Center
Liu, Fang; Xu, Yi; Patel, Aniruddh D.; Francart, Tom; Jiang, Cunmei
2012-01-01
This study examined whether "melodic contour deafness" (insensitivity to the direction of pitch movement) in congenital amusia is associated with specific types of pitch patterns (discrete versus gliding pitches) or stimulus types (speech syllables versus complex tones). Thresholds for identification of pitch direction were obtained using discrete…
Automatic Method of Pause Measurement for Normal and Dysarthric Speech
ERIC Educational Resources Information Center
Rosen, Kristin; Murdoch, Bruce; Folker, Joanne; Vogel, Adam; Cahill, Louise; Delatycki, Martin; Corben, Louise
2010-01-01
This study proposes an automatic method for the detection of pauses and identification of pause types in conversational speech for the purpose of measuring the effects of Friedreich's Ataxia (FRDA) on speech. Speech samples of [approximately] 3 minutes were recorded from 13 speakers with FRDA and 18 healthy controls. Pauses were measured from the…
Effects of Audio-Visual Integration on the Detection of Masked Speech and Non-Speech Sounds
ERIC Educational Resources Information Center
Eramudugolla, Ranmalee; Henderson, Rachel; Mattingley, Jason B.
2011-01-01
Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that…
Response Identification in the Extremely Low Frequency Region of an Electret Condenser Microphone
Jeng, Yih-Nen; Yang, Tzung-Ming; Lee, Shang-Yin
2011-01-01
This study shows that a small electret condenser microphone connected to a notebook or a personal computer (PC) has a prominent response in the extremely low frequency region in a specific environment. It confines most acoustic waves within a tiny air cell as follows. The air cell is constructed by drilling a small hole in a digital versatile disk (DVD) plate. A small speaker and an electret condenser microphone are attached to the two sides of the hole. Thus, the acoustic energy emitted by the speaker and reaching the microphone is strong enough to actuate the diaphragm of the latter. The experiments showed that, once small air leakages are allowed on the margin of the speaker, the microphone captured the signal in the range of 0.5 to 20 Hz. Moreover, by removing the plastic cover of the microphone and attaching the microphone head to the vibration surface, the low frequency signal can be effectively captured too. Two examples are included to show the convenience of applying the microphone to pick up the low frequency vibration information of practical systems. PMID:22346594
Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.
Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D
2016-01-01
The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance: the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
The Mechanism of Speech Processing in Congenital Amusia: Evidence from Mandarin Speakers
Liu, Fang; Jiang, Cunmei; Thompson, William Forde; Xu, Yi; Yang, Yufang; Stewart, Lauren
2012-01-01
Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways. Further studies on a larger sample of Mandarin amusics and on amusics of other language backgrounds are needed to consolidate these results. PMID:22347374
Voice recognition through phonetic features with Punjabi utterances
NASA Astrophysics Data System (ADS)
Kaur, Jasdeep; Juglan, K. C.; Sharma, Vishal; Upadhyay, R. K.
2017-07-01
This paper deals with the perception and disorders of speech in the Punjabi language. In view of the importance of voice identification, various parameters of speaker identification have been studied. The speech material was recorded with a tape recorder in the speakers' normal and disguised modes of utterance. From the recorded material, utterances free from noise were selected for auditory and acoustic spectrographic analysis. The comparison of normal and disguised speech of seven subjects is reported. The fundamental frequency (F0) at comparable locations, plosive duration at selected phonemes, and the amplitude ratio (A1:A2) were compared between normal and disguised speech. It was found that the formant frequencies of normal and disguised speech remain almost identical only if they are compared at positions of the same vowel quality and quantity. If the vowel is more closed or more open in the disguised utterance, the formant frequency will change relative to the normal utterance. The amplitude ratio (A1:A2) is found to be speaker-dependent and generally remains unchanged in the disguised utterance, although this value may shift if cross-sectioning is not done at the same location.
Linear array of photodiodes to track a human speaker for video recording
NASA Astrophysics Data System (ADS)
DeTone, D.; Neal, H.; Lougheed, R.
2012-12-01
Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow for viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit to using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
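As an illustration of the kind of lock-in-style filtering that a 70 Hz LED modulation permits, the sketch below (Python) demodulates a block of photodiode samples at the flash frequency and maps the strongest pixel to a pan command. The function names, array layout, field of view, and deadband are illustrative assumptions, not taken from the paper.

    import numpy as np

    def locate_led(frames, fs=4000.0, mod_hz=70.0):
        """Estimate which photodiode pixel sees the 70 Hz-modulated LED.

        frames : (n_samples, n_pixels) array of photodiode readings sampled at fs.
        Returns the pixel index with the strongest 70 Hz component, which steady
        sunlight and indoor lighting largely lack.
        """
        n = frames.shape[0]
        t = np.arange(n) / fs
        ref_i = np.cos(2 * np.pi * mod_hz * t)          # lock-in reference (in phase)
        ref_q = np.sin(2 * np.pi * mod_hz * t)          # lock-in reference (quadrature)
        frames = frames - frames.mean(axis=0)           # drop the DC (ambient) level
        amp = np.hypot(ref_i @ frames, ref_q @ frames)  # 70 Hz amplitude per pixel
        return int(np.argmax(amp))

    def pan_command(pixel, n_pixels, fov_deg=60.0, deadband_deg=2.0):
        """Map the detected pixel to a signed pan angle, with a small deadband
        so the camera does not chatter around the centre."""
        angle = (pixel / (n_pixels - 1) - 0.5) * fov_deg
        return 0.0 if abs(angle) < deadband_deg else angle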
Degraded Vowel Acoustics and the Perceptual Consequences in Dysarthria
NASA Astrophysics Data System (ADS)
Lansford, Kaitlin L.
Distorted vowel production is a hallmark characteristic of dysarthric speech, irrespective of the underlying neurological condition or dysarthria diagnosis. A variety of acoustic metrics have been used to study the nature of vowel production deficits in dysarthria; however, not all demonstrate sensitivity to the exhibited deficits. Less attention has been paid to quantifying the vowel production deficits associated with the specific dysarthrias. Attempts to characterize the relationship between naturally degraded vowel production in dysarthria and overall intelligibility have met with mixed results, leading some to question the nature of this relationship. It has been suggested that aberrant vowel acoustics may be an index of overall severity of the impairment and not an "integral component" of the intelligibility deficit. A limitation of previous work detailing perceptual consequences of disordered vowel acoustics is that overall intelligibility, not vowel identification accuracy, has been the perceptual measure of interest. A series of three experiments was conducted to address the problems outlined herein. The goals of the first experiment were to identify subsets of vowel metrics that reliably distinguish speakers with dysarthria from non-disordered speakers and differentiate the dysarthria subtypes. Vowel metrics that capture vowel centralization and reduced spectral distinctiveness among vowels differentiated dysarthric from non-disordered speakers. Vowel metrics generally failed to differentiate speakers according to their dysarthria diagnosis. The second and third experiments were conducted to evaluate the relationship between degraded vowel acoustics and the resulting percept. In the second experiment, correlation and regression analyses revealed that vowel metrics capturing vowel centralization, distinctiveness, and movement of the second formant frequency were most predictive of vowel identification accuracy and overall intelligibility. The third experiment was conducted to evaluate the extent to which the nature of the acoustic degradation predicts the resulting percept. Results suggest distinctive vowel tokens are better identified and, likewise, better-identified tokens are more distinctive. Further, an above-chance level of agreement between the nature of vowel misclassification and misidentification errors was demonstrated for all vowels, suggesting degraded vowel acoustics are not merely an index of severity in dysarthria, but rather are an integral component of the resultant intelligibility disorder.
2016-01-01
Purpose: The purpose of this research forum article is to provide an overview of a collection of invited articles on the topic “specific language impairment (SLI) in children with concomitant health conditions or nonmainstream language backgrounds.” Topics include SLI, attention-deficit/hyperactivity disorder, autism spectrum disorder, cochlear implants, bilingualism, and dialectal language learning contexts. Method: The topic is timely due to current debates about the diagnosis of SLI. An overarching comparative conceptual framework is provided for comparisons of SLI with other clinical conditions. Comparisons of SLI in children with low-normal or normal nonverbal IQ illustrate the unexpected outcomes of 2 × 2 comparison designs. Results: Comparative studies reveal unexpected relationships among speech, language, cognitive, and social dimensions of children's development as well as precise ways to identify children with SLI who are bilingual or dialect speakers. Conclusions: The diagnosis of SLI is essential for elucidating possible causal pathways of language impairments, risks for language impairments, assessments for identification of language impairments, linguistic dimensions of language impairments, and long-term outcomes. Although children's language acquisition is robust under high levels of risk, unexplained individual variations in language acquisition lead to persistent language impairments. PMID:26502218
A robust firearm identification algorithm of forensic ballistics specimens
NASA Astrophysics Data System (ADS)
Chuan, Z. L.; Jemain, A. A.; Liong, C.-Y.; Ghani, N. A. M.; Tan, L. K.
2017-09-01
The existing firearm identification algorithms suffer from several inherent difficulties, including the need for physical interpretation and their time-consuming nature. The aim of this study is therefore to propose a robust firearm identification algorithm based on extracting a set of informative features from a segmented region of interest (ROI) in simulated noisy center-firing pin impression images. The proposed algorithm comprises a Laplacian sharpening filter, clustering-based threshold selection, an unweighted least-squares estimator, and segmentation of a square ROI from the noisy images. A total of 250 simulated noisy images collected from five different pistols of the same make, model and caliber are used to evaluate the robustness of the proposed algorithm. This study found that the proposed algorithm is able to perform the identification task on noisy images with noise levels as high as 70%, while maintaining a firearm identification accuracy rate of over 90%.
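The abstract names a Laplacian sharpening filter, clustering-based thresholding, and square-ROI segmentation; the following Python sketch shows one plausible arrangement of those first steps. Otsu's method stands in for the clustering-based threshold, the least-squares estimation stage is omitted, and the names and sizes are assumptions rather than the authors' implementation.

    import numpy as np
    from scipy import ndimage

    def otsu_threshold(img, bins=256):
        """Clustering-based threshold (Otsu): maximise the between-class variance."""
        hist, edges = np.histogram(img.ravel(), bins=bins)
        p = hist / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)
        w1 = 1.0 - w0
        cum_mean = np.cumsum(p * centers)
        mu0 = cum_mean / np.maximum(w0, 1e-12)
        mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
        return centers[np.argmax(w0 * w1 * (mu0 - mu1) ** 2)]

    def extract_roi(image, roi_size=128):
        """Sharpen, threshold, and crop a square ROI around the firing-pin impression."""
        sharpened = image - ndimage.laplace(image.astype(float))  # Laplacian sharpening
        mask = sharpened > otsu_threshold(sharpened)
        cy, cx = ndimage.center_of_mass(mask)                     # rough impression centre
        half = roi_size // 2
        y0 = int(np.clip(cy - half, 0, image.shape[0] - roi_size))
        x0 = int(np.clip(cx - half, 0, image.shape[1] - roi_size))
        return image[y0:y0 + roi_size, x0:x0 + roi_size]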
Parallel Processing of Large Scale Microphone Arrays for Sound Capture
NASA Astrophysics Data System (ADS)
Jan, Ea-Ee.
1995-01-01
Performance of microphone sound pick up is degraded by deleterious properties of the acoustic environment, such as multipath distortion (reverberation) and ambient noise. The degradation becomes more prominent in a teleconferencing environment in which the microphone is positioned far away from the speaker. Moreover, the ideal teleconference should feel as easy and natural as face-to-face communication with another person. This suggests hands-free sound capture with no tether or encumbrance by hand-held or body-worn sound equipment. Microphone arrays for this application represent an appropriate approach. This research develops new microphone array and signal processing techniques for high quality hands-free sound capture in noisy, reverberant enclosures. The new techniques combine matched-filtering of individual sensors and parallel processing to provide acute spatial volume selectivity which is capable of mitigating the deleterious effects of noise interference and multipath distortion. The new method outperforms traditional delay-and-sum beamformers which provide only directional spatial selectivity. The research additionally explores truncated matched-filtering and random distribution of transducers to reduce complexity and improve sound capture quality. All designs are first established by computer simulation of array performance in reverberant enclosures. The simulation is achieved by a room model which can efficiently calculate the acoustic multipath in a rectangular enclosure up to a prescribed order of images. It also calculates the incident angle of the arriving signal. Experimental arrays were constructed and their performance was measured in real rooms. Real room data were collected in a hard-walled laboratory and a controllable variable acoustics enclosure of similar size, approximately 6 x 6 x 3 m. An extensive speech database was also collected in these two enclosures for future research on microphone arrays. The simulation results are shown to be consistent with the real room data. Localization of sound sources has been explored using cross-power spectrum time delay estimation and has been evaluated using real room data under slightly, moderately and highly reverberant conditions. To improve the accuracy and reliability of the source localization, an outlier detector that removes incorrect time delay estimates has been developed. To provide speaker selectivity for microphone array systems, a hands-free speaker identification system has been studied. A recently invented feature using selected spectrum information outperforms traditional recognition methods. Measured results demonstrate the capabilities of speaker selectivity from a matched-filtered array. In addition, simulation utilities, including matched-filtering processing of the array and hands-free speaker identification, have been implemented on the massively-parallel nCube supercomputer. This parallel computation highlights the requirements for real-time processing of array signals.
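For reference, the delay-and-sum beamformer that the dissertation uses as a baseline can be sketched as a frequency-domain time alignment (Python). The geometry, sampling rate, and speed of sound are illustrative assumptions, and the matched-filter array itself is not reproduced here.

    import numpy as np

    def delay_and_sum(signals, mic_xyz, source_xyz, fs, c=343.0):
        """Steer a microphone array toward source_xyz by time-aligning and averaging.

        signals : (n_mics, n_samples) array, one row per microphone.
        mic_xyz : (n_mics, 3) microphone coordinates in metres.
        """
        dist = np.linalg.norm(mic_xyz - source_xyz, axis=1)
        delays = (dist - dist.min()) / c                 # relative propagation delays
        n = signals.shape[1]
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        out = np.zeros(n)
        for sig, tau in zip(signals, delays):
            # advance each channel by its relative delay so all channels align
            spec = np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau)
            out += np.fft.irfft(spec, n)
        return out / len(signals)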
You had me at "Hello": Rapid extraction of dialect information from spoken words.
Scharinger, Mathias; Monahan, Philip J; Idsardi, William J
2011-06-15
Research on the neuronal underpinnings of speaker identity recognition has identified voice-selective areas in the human brain with evolutionary homologues in non-human primates, which have comparable areas for processing species-specific calls. Most studies have focused on estimating the extent and location of these areas. In contrast, relatively few experiments have investigated the time-course of speaker identity, and in particular, dialect processing and identification by electro- or neuromagnetic means. We show here that dialect extraction occurs speaker-independently, pre-attentively and categorically. We used Standard American English and African-American English exemplars of 'Hello' in a magnetoencephalographic (MEG) Mismatch Negativity (MMN) experiment. The MMN as an automatic change detection response of the brain reflected dialect differences that were not entirely reducible to acoustic differences between the pronunciations of 'Hello'. Source analyses of the M100, an auditory evoked response to the vowels, suggested additional processing in voice-selective areas whenever a dialect change was detected. These findings are not only relevant for the cognitive neuroscience of language, but also for the social sciences concerned with dialect and race perception. Copyright © 2011 Elsevier Inc. All rights reserved.
Development of equally intelligible Telugu sentence-lists to test speech recognition in noise.
Tanniru, Kishore; Narne, Vijaya Kumar; Jain, Chandni; Konadath, Sreeraj; Singh, Niraj Kumar; Sreenivas, K J Ramadevi; K, Anusha
2017-09-01
To develop sentence lists in the Telugu language for the assessment of speech recognition threshold (SRT) in the presence of background noise through identification of the mean signal-to-noise ratio required to attain a 50% sentence recognition score (SRTn). This study was conducted in three phases. The first phase involved the selection and recording of Telugu sentences. In the second phase, 20 lists, each consisting of 10 sentences with equal intelligibility, were formulated using a numerical optimisation procedure. In the third phase, the SRTn of the developed lists was estimated using adaptive procedures on individuals with normal hearing. A total of 68 native Telugu speakers with normal hearing participated in the study. Of these, 18 (including the speakers) performed on various subjective measures in the first phase, 20 performed on sentence/word recognition in noise in the second phase, and 30 participated in the list equivalency procedures in the third phase. In all, 15 lists of comparable difficulty were formulated as test material. The mean SRTn across these lists corresponded to -2.74 dB (SD = 0.21). The developed sentence lists provided a valid and reliable tool to measure SRTn in native Telugu speakers.
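The SRTn described here is the SNR at which 50% of sentences are recognized; a simple one-down/one-up staircase, which converges on the 50% point, gives the flavor of such an adaptive procedure. This is a minimal sketch; the callback name, step size, and reversal-averaging rule are assumptions, not the published protocol.

    def adaptive_srt(present_sentence, start_snr=4.0, step_db=2.0, n_trials=20):
        """One-down/one-up staircase that converges on the 50% point (SRTn).

        present_sentence(snr_db) -> True if the listener repeats the sentence
        correctly at that SNR; this callback is supplied by the test software.
        """
        snr = start_snr
        reversals, last_correct, track = [], None, []
        for _ in range(n_trials):
            correct = present_sentence(snr)
            track.append(snr)
            if last_correct is not None and correct != last_correct:
                reversals.append(snr)                  # direction change
            last_correct = correct
            snr += -step_db if correct else step_db    # harder after a hit, easier after a miss
        # Average the SNR at the later reversals as the SRTn estimate.
        tail = reversals[-6:] if len(reversals) >= 6 else reversals or track
        return sum(tail) / len(tail)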
Impact of Cyrillic on Native English Speakers' Phono-lexical Acquisition of Russian.
Showalter, Catherine E
2018-03-01
We investigated the influence of grapheme familiarity and native language grapheme-phoneme correspondences during second language lexical learning. Native English speakers learned Russian-like words via auditory presentations containing only familiar first language phones, pictured meanings, and exposure to either Cyrillic orthographic forms (Orthography condition) or the sequence
Development of emergent processing loops as a system of systems concept
NASA Astrophysics Data System (ADS)
Gainey, James C., Jr.; Blasch, Erik P.
1999-03-01
This paper describes an engineering approach toward implementing the current neuroscientific understanding of how the primate brain fuses, or integrates, 'information' in the decision-making process. We describe a System of Systems (SoS) design for improving the overall performance, capabilities, operational robustness, and user confidence in Identification (ID) systems and show how it could be applied to biometrics security. We use the Physio-associative temporal sensor integration algorithm (PATSIA), which is motivated by observed functions and interactions of the thalamus, hippocampus, and cortical structures in the brain. PATSIA utilizes signal theory mathematics to model how the human efficiently perceives and uses information from the environment. The hybrid architecture implements a possible SoS-level description of the US Joint Directors of Laboratories (JDL) Fusion Working Group's functional description involving 5 levels of fusion and their associated definitions. This SoS architecture proposes dynamic sensor and knowledge-source integration by implementing multiple Emergent Processing Loops for prediction, feature extraction, matching, and searching of both static and dynamic databases, similar to MSTAR's PEMS loops. Biologically, this effort demonstrates these objectives by modeling similar processes from the eyes, ears, and somatosensory channels, through the thalamus, and to the cortices as appropriate while using the hippocampus for short-term memory search and storage as necessary. The particular approach demonstrated incorporates commercially available speaker verification and face recognition software and hardware to collect data and extract features for the PATSIA. The PATSIA maximizes the confidence levels for target identification or verification in dynamic situations using a belief filter. The proof of concept described here is easily adaptable and scalable to other military and nonmilitary sensor fusion applications.
Computer-Mediated Assessment of Intelligibility in Aphasia and Apraxia of Speech
Haley, Katarina L.; Roth, Heidi; Grindstaff, Enetta; Jacks, Adam
2011-01-01
Background: Previous work indicates that single word intelligibility tests developed for dysarthria are sensitive to segmental production errors in aphasic individuals with and without apraxia of speech. However, potential listener learning effects and difficulties adapting elicitation procedures to coexisting language impairments limit their applicability to left hemisphere stroke survivors. Aims: The main purpose of this study was to examine basic psychometric properties for a new monosyllabic intelligibility test developed for individuals with aphasia and/or AOS. A related purpose was to examine clinical feasibility and potential to standardize a computer-mediated administration approach. Methods & Procedures: A 600-item monosyllabic single word intelligibility test was constructed by assembling sets of phonetically similar words. Custom software was used to select 50 target words from this test in a pseudo-random fashion and to elicit and record production of these words by 23 speakers with aphasia and 20 neurologically healthy participants. To evaluate test-retest reliability, two identical sets of 50-word lists were elicited by requesting repetition after a live speaker model. To examine the effect of a different word set and auditory model, an additional set of 50 different words was elicited with a pre-recorded model. The recorded words were presented to normal-hearing listeners for identification via orthographic and multiple-choice response formats. To examine construct validity, production accuracy for each speaker was estimated via phonetic transcription and rating of overall articulation. Outcomes & Results: Recording and listening tasks were completed in less than six minutes for all speakers and listeners. Aphasic speakers were significantly less intelligible than neurologically healthy speakers and displayed a wide range of intelligibility scores. Test-retest and inter-listener reliability estimates were strong. No significant difference was found in scores based on recordings from a live model versus a pre-recorded model, but some individual speakers favored the live model. Intelligibility test scores correlated highly with segmental accuracy derived from broad phonetic transcription of the same speech sample and a motor speech evaluation. Scores correlated moderately with rated articulation difficulty. Conclusions: We describe a computerized, single-word intelligibility test that yields clinically feasible, reliable, and valid measures of segmental speech production in adults with aphasia. This tool can be used in clinical research to facilitate appropriate participant selection and to establish matching across comparison groups. For a majority of speakers, elicitation procedures can be standardized by using a pre-recorded auditory model for repetition. This assessment tool has potential utility for both clinical assessment and outcomes research. PMID:22215933
Real-Time Tracking of Selective Auditory Attention From M/EEG: A Bayesian Filtering Approach
Miran, Sina; Akram, Sahar; Sheikhattar, Alireza; Simon, Jonathan Z.; Zhang, Tao; Babadi, Behtash
2018-01-01
Humans are able to identify and track a target speaker amid a cacophony of acoustic interference, an ability which is often referred to as the cocktail party phenomenon. Results from several decades of studying this phenomenon have culminated in recent years in various promising attempts to decode the attentional state of a listener in a competing-speaker environment from non-invasive neuroimaging recordings such as magnetoencephalography (MEG) and electroencephalography (EEG). To this end, most existing approaches compute correlation-based measures by either regressing the features of each speech stream to the M/EEG channels (the decoding approach) or vice versa (the encoding approach). To produce robust results, these procedures require multiple trials for training purposes. Also, their decoding accuracy drops significantly when operating at high temporal resolutions. Thus, they are not well-suited for emerging real-time applications such as smart hearing aid devices or brain-computer interface systems, where training data might be limited and high temporal resolutions are desired. In this paper, we close this gap by developing an algorithmic pipeline for real-time decoding of the attentional state. Our proposed framework consists of three main modules: (1) Real-time and robust estimation of encoding or decoding coefficients, achieved by sparse adaptive filtering, (2) Extracting reliable markers of the attentional state, and thereby generalizing the widely-used correlation-based measures thereof, and (3) Devising a near real-time state-space estimator that translates the noisy and variable attention markers to robust and statistically interpretable estimates of the attentional state with minimal delay. Our proposed algorithms integrate various techniques including forgetting factor-based adaptive filtering, ℓ1-regularization, forward-backward splitting algorithms, fixed-lag smoothing, and Expectation Maximization. We validate the performance of our proposed framework using comprehensive simulations as well as application to experimentally acquired M/EEG data. Our results reveal that the proposed real-time algorithms perform nearly as accurately as the existing state-of-the-art offline techniques, while providing a significant degree of adaptivity, statistical robustness, and computational savings. PMID:29765298
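A minimal sketch of the forgetting-factor, correlation-based attention markers described in modules (1)-(2) is shown below (Python). The state-space smoothing of module (3) is omitted, and the variable names and forgetting factor are illustrative assumptions rather than the authors' code.

    import numpy as np

    def attention_markers(decoded, env_a, env_b, lam=0.98):
        """Exponentially-weighted (forgetting-factor) correlations between a decoded
        stimulus estimate and the two competing speakers' envelopes.

        decoded : 1-D array, EEG/MEG projected through a pre-trained decoder.
        env_a, env_b : 1-D speech envelopes of speakers A and B.
        Returns an (n_samples, 2) array; the larger column at each sample is the
        instantaneous attended-speaker guess (before state-space smoothing).
        """
        eps = 1e-12
        mx = ma = mb = 0.0
        vxx = vaa = vbb = eps
        vxa = vxb = 0.0
        out = np.zeros((len(decoded), 2))
        for t, (x, a, b) in enumerate(zip(decoded, env_a, env_b)):
            mx = lam * mx + (1 - lam) * x               # running means
            ma = lam * ma + (1 - lam) * a
            mb = lam * mb + (1 - lam) * b
            vxx = lam * vxx + (1 - lam) * (x - mx) ** 2  # running (co)variances
            vaa = lam * vaa + (1 - lam) * (a - ma) ** 2
            vbb = lam * vbb + (1 - lam) * (b - mb) ** 2
            vxa = lam * vxa + (1 - lam) * (x - mx) * (a - ma)
            vxb = lam * vxb + (1 - lam) * (x - mx) * (b - mb)
            out[t, 0] = vxa / np.sqrt(vxx * vaa)
            out[t, 1] = vxb / np.sqrt(vxx * vbb)
        return out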
The impact of musical training and tone language experience on talker identification
Xie, Xin; Myers, Emily
2015-01-01
Listeners can use pitch changes in speech to identify talkers. Individuals exhibit large variability in sensitivity to pitch and in accuracy perceiving talker identity. In particular, people who have musical training or long-term tone language use are found to have enhanced pitch perception. In the present study, the influence of pitch experience on talker identification was investigated as listeners identified talkers in native language as well as non-native languages. Experiment 1 was designed to explore the influence of pitch experience on talker identification in two groups of individuals with potential advantages for pitch processing: musicians and tone language speakers. Experiment 2 further investigated individual differences in pitch processing and the contribution to talker identification by testing a mediation model. Cumulatively, the results suggested that (a) musical training confers an advantage for talker identification, supporting a shared resources hypothesis regarding music and language and (b) linguistic use of lexical tones also increases accuracy in hearing talker identity. Importantly, these two types of hearing experience enhance talker identification by sharpening pitch perception skills in a domain-general manner. PMID:25618071
Lee, Kichol; Casali, John G
2016-01-01
To investigate the effect of controlled low-speed wind-noise on the auditory situation awareness performance afforded by military hearing protection/enhancement devices (HPED) and tactical communication and protective systems (TCAPS). Recognition/identification and pass-through communications tasks were separately conducted under three wind conditions (0, 5, and 10 mph). Subjects wore two in-ear-type TCAPS, one earmuff-type TCAPS, a Combat Arms Earplug in its 'open' or pass-through setting, and an EB-15LE electronic earplug. Devices with electronic gain systems were tested under two gain settings: 'unity' and 'max'. Testing without any device (open ear) was conducted as a control. Ten subjects were recruited from the student population at Virginia Tech. Audiometric requirements were 25 dBHL or better at 500, 1000, 2000, 4000, and 8000 Hz in both ears. Performance on the interaction of communication task-by-device was significantly different only in 0 mph wind speed. The between-device performance differences varied with azimuthal speaker locations. It is evident from this study that stable (non-gusting) wind speeds up to 10 mph did not significantly degrade recognition/identification task performance and pass-through communication performance of the group of HPEDs and TCAPS tested. However, the various devices performed differently as the test sound signal speaker location was varied and it appears that physical as well as electronic features may have contributed to this directional result.
Vanrell, Maria del Mar; Mascaró, Ignasi; Torres-Tamarit, Francesc; Prieto, Pilar
2013-06-01
Recent studies in the field of intonational phonology have shown that information-seeking questions can be distinguished from confirmation-seeking questions by prosodic means in a variety of languages (Armstrong, 2010, for Puerto Rican Spanish; Grice & Savino, 1997, for Bari Italian; Kügler, 2003, for Leipzig German; Mata & Santos, 2010, for European Portuguese; Vanrell, Mascaró, Prieto, & Torres-Tamarit, 2010, for Catalan). However, all these studies have relied on production experiments and little is known about the perceptual relevance of these intonational cues. This paper explores whether Majorcan Catalan listeners distinguish information- and confirmation-seeking questions by means of two distinct nuclear falling pitch accents. Three behavioral tasks were conducted with 20 Majorcan Catalan subjects, namely a semantic congruity test, a rating test, and a classical categorical perception identification/discrimination test. The results show that a difference in pitch scaling on the leading H tone of the H+L* nuclear pitch accent is the main cue used by Majorcan Catalan listeners to distinguish confirmation questions from information-seeking questions. Thus, while a ¡H+L* pitch accent signals an information-seeking question (i.e., the speaker has no expectation about the nature of the answer), the H+L* pitch accent indicates that the speaker is asking about mutually shared information. We argue that these results have implications in representing the distinctions of tonal height in Catalan. The results also support the claim that phonological contrasts in intonation, together with other linguistic strategies, can signal the speakers' beliefs about the certainty of the proposition expressed.
An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors
Luo, Liyan; Xu, Luping; Zhang, Hua
2015-01-01
In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233
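One way to appreciate how a rotation-invariant one-dimensional pattern turns star identification into a vector comparison is the toy construction below: a histogram of angular distances within the field of view, matched against a catalog by vote counting. The binning, matching rule, and names are assumptions and differ from the paper's exact one_DVP construction.

    import numpy as np

    def vector_pattern(center, neighbors, fov_deg=12.0, n_bins=64):
        """Rotation-invariant 1-D pattern for one star: a binary histogram of its
        angular distances to the other stars in the field of view.

        center : (3,) unit direction vector of the star being identified.
        neighbors : (n, 3) unit direction vectors of the other observed stars.
        """
        cosang = np.clip(neighbors @ center, -1.0, 1.0)
        ang = np.degrees(np.arccos(cosang))
        hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, fov_deg))
        return (hist > 0).astype(np.uint8)

    def identify(observed_pattern, catalog_patterns):
        """Return the catalog index whose pattern shares the most occupied bins."""
        scores = catalog_patterns.astype(int) @ observed_pattern   # vote per catalog star
        return int(np.argmax(scores))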
Perceptual Learning of Time-Compressed Speech: More than Rapid Adaptation
Banai, Karen; Lavner, Yizhar
2012-01-01
Background: Time-compressed speech, a form of rapidly presented speech, is harder to comprehend than natural speech, especially for non-native speakers. Although it is possible to adapt to time-compressed speech after a brief exposure, it is not known whether additional perceptual learning occurs with further practice. Here, we ask whether multiday training on time-compressed speech yields more learning than that observed during the initial adaptation phase and whether the pattern of generalization following successful learning is different than that observed with initial adaptation only. Methodology/Principal Findings: Two groups of non-native Hebrew speakers were tested on five different conditions of time-compressed speech identification in two assessments conducted 10–14 days apart. Between those assessments, one group of listeners received five practice sessions on one of the time-compressed conditions. Between the two assessments, trained listeners improved significantly more than untrained listeners on the trained condition. Furthermore, the trained group generalized its learning to two untrained conditions in which different talkers presented the trained speech materials. In addition, when the performance of the non-native speakers was compared to that of a group of naïve native Hebrew speakers, performance of the trained group was equivalent to that of the native speakers on all conditions on which learning occurred, whereas performance of the untrained non-native listeners was substantially poorer. Conclusions/Significance: Multiday training on time-compressed speech results in significantly more perceptual learning than brief adaptation. Compared to previous studies of adaptation, the training induced learning is more stimulus specific. Taken together, the perceptual learning of time-compressed speech appears to progress from an initial, rapid adaptation phase to a subsequent prolonged and more stimulus specific phase. These findings are consistent with the predictions of the Reverse Hierarchy Theory of perceptual learning and suggest constraints on the use of perceptual-learning regimens during second language acquisition. PMID:23056592
Analysis and Classification of Voice Pathologies Using Glottal Signal Parameters.
Forero M, Leonardo A; Kohler, Manoela; Vellasco, Marley M B R; Cataldo, Edson
2016-09-01
The classification of voice diseases has many applications in health care, in disease treatment, and in the design of new medical equipment for helping doctors diagnose pathologies related to the voice. This work uses the parameters of the glottal signal to help identify two types of voice disorders related to pathologies of the vocal folds: nodule and unilateral paralysis. The parameters of the glottal signal are obtained through a known inverse filtering method, and they are used as inputs to an Artificial Neural Network, a Support Vector Machine, and a Hidden Markov Model to classify the voice signals into three different groups and to compare the results: speakers with nodule in the vocal folds; speakers with unilateral paralysis of the vocal folds; and speakers with normal voices, that is, without nodule or unilateral paralysis present in the vocal folds. The database is composed of 248 voice recordings (signals of vowel production) containing samples corresponding to the three groups mentioned. In this study, a larger database was used for classification than in similar studies, and the classification rate, reaching 97.2%, is superior to those previously reported. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
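A minimal sketch of the Support-Vector-Machine branch of such a classifier is shown below (scikit-learn). The specific glottal parameters, kernel, and hyperparameters are illustrative assumptions, not the study's configuration.

    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def evaluate_svm(X, y):
        """Cross-validated SVM classification of glottal-parameter vectors.

        X : (n_recordings, n_features) glottal parameters obtained by inverse
            filtering (e.g. open quotient, closing quotient; names illustrative).
        y : labels, e.g. 0 = normal, 1 = nodule, 2 = unilateral paralysis.
        """
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        scores = cross_val_score(clf, X, y, cv=5)
        return scores.mean(), scores.std()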
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumption regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
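The correlation step described here, ranking candidate nonlinear terms by how strongly they correlate with the MME model-error estimate, can be sketched as follows (Python). The function and term names are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def rank_candidate_terms(x_hat, d_hat, candidates):
        """Correlate the MME model-error estimate with candidate nonlinear terms.

        x_hat : (n_samples, n_states) optimal state estimates.
        d_hat : (n_samples,) optimal model-error estimate for one state equation.
        candidates : dict mapping a term name to a function of the state history,
                     e.g. {"x1**3": lambda x: x[:, 0] ** 3,
                           "x1*x2": lambda x: x[:, 0] * x[:, 1]}.
        Returns the candidate terms sorted by |correlation| with the model error;
        the top-ranked terms suggest the form of the missing nonlinearity.
        """
        scores = {}
        for name, f in candidates.items():
            g = f(x_hat)
            scores[name] = np.corrcoef(g, d_hat)[0, 1]
        return sorted(scores.items(), key=lambda kv: -abs(kv[1]))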
Native language shapes automatic neural processing of speech.
Intartaglia, Bastien; White-Schwoch, Travis; Meunier, Christine; Roman, Stéphane; Kraus, Nina; Schön, Daniele
2016-08-01
The development of the phoneme inventory is driven by the acoustic-phonetic properties of one's native language. Neural representation of speech is known to be shaped by language experience, as indexed by cortical responses, and recent studies suggest that subcortical processing also exhibits this attunement to native language. However, most work to date has focused on the differences between tonal and non-tonal languages that use pitch variations to convey phonemic categories. The aim of this cross-language study is to determine whether subcortical encoding of speech sounds is sensitive to language experience by comparing native speakers of two non-tonal languages (French and English). We hypothesized that neural representations would be more robust and fine-grained for speech sounds that belong to the native phonemic inventory of the listener, and especially for the dimensions that are phonetically relevant to the listener such as high frequency components. We recorded neural responses of American English and French native speakers, listening to natural syllables of both languages. Results showed that, independently of the stimulus, American participants exhibited greater neural representation of the fundamental frequency compared to French participants, consistent with the importance of the fundamental frequency to convey stress patterns in English. Furthermore, participants showed more robust encoding and more precise spectral representations of the first formant when listening to the syllable of their native language as compared to non-native language. These results align with the hypothesis that language experience shapes sensory processing of speech and that this plasticity occurs as a function of what is meaningful to a listener. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lotto, A J; Kluender, K R
1998-05-01
When members of a series of synthesized stop consonants varying acoustically in F3 characteristics and varying perceptually from /da/ to /ga/ are preceded by /al/, subjects report hearing more /ga/ syllables relative to when each member is preceded by /ar/ (Mann, 1980). It has been suggested that this result demonstrates the existence of a mechanism that compensates for coarticulation via tacit knowledge of articulatory dynamics and constraints, or through perceptual recovery of vocal-tract dynamics. The present study was designed to assess the degree to which these perceptual effects are specific to qualities of human articulatory sources. In three experiments, series of consonant-vowel (CV) stimuli varying in F3-onset frequency (/da/-/ga/) were preceded by speech versions or nonspeech analogues of /al/ and /ar/. The effect of liquid identity on stop consonant labeling remained when the preceding VC was produced by a female speaker and the CV syllable was modeled after a male speaker's productions. Labeling boundaries also shifted when the CV was preceded by a sine wave glide modeled after F3 characteristics of /al/ and /ar/. Identifications shifted even when the preceding sine wave was of constant frequency equal to the offset frequency of F3 from a natural production. These results suggest an explanation in terms of general auditory processes as opposed to recovery of or knowledge of specific articulatory dynamics.
NASA Astrophysics Data System (ADS)
Liu, Xiyao; Lou, Jieting; Wang, Yifan; Du, Jingyu; Zou, Beiji; Chen, Yan
2018-03-01
Authentication and copyright identification are two critical security issues for medical images. Although zero-watermarking schemes can provide durable, reliable and distortion-free protection for medical images, the existing zero-watermarking schemes for medical images still face two problems. On one hand, they rarely considered the distinguishability for medical images, which is critical because different medical images are sometimes similar to each other. On the other hand, their robustness against geometric attacks, such as cropping, rotation and flipping, is insufficient. In this study, a novel discriminative and robust zero-watermarking scheme (DRZW) is proposed to address these two problems. In DRZW, content-based features of medical images are first extracted based on the completed local binary pattern (CLBP) operator to ensure the distinguishability and robustness, especially against geometric attacks. Then, master shares and ownership shares are generated from the content-based features and watermark according to (2,2) visual cryptography. Finally, the ownership shares are stored for authentication and copyright identification. For queried medical images, their content-based features are extracted and master shares are generated. Their watermarks for authentication and copyright identification are recovered by stacking the generated master shares and stored ownership shares. 200 different medical images of 5 types are collected as the testing data and our experimental results demonstrate that DRZW ensures both the accuracy and reliability of authentication and copyright identification. When fixing the false positive rate to 1.00%, the average value of false negative rates by using DRZW is only 1.75% under 20 common attacks with different parameters.
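A much-simplified, XOR-based stand-in for the share construction described above is sketched below (Python). The block-mean "feature" replaces the CLBP operator and plain XOR shares replace full (2,2) visual cryptography, so this illustrates the zero-watermarking idea rather than DRZW itself; all names and sizes are assumptions.

    import numpy as np

    def binary_feature(image, size=32):
        """Toy stand-in for a content-based feature: block means thresholded by the
        global mean give a size x size binary signature of the image."""
        h, w = image.shape
        bh, bw = h // size, w // size
        blocks = image[:bh * size, :bw * size].reshape(size, bh, size, bw).mean(axis=(1, 3))
        return (blocks > blocks.mean()).astype(np.uint8)

    def register(image, watermark_bits):
        """Ownership share = feature XOR watermark (stored); nothing is embedded in
        the image itself, so the image stays distortion-free.
        watermark_bits : (size, size) array of 0/1 values."""
        return binary_feature(image) ^ watermark_bits

    def verify(query_image, ownership_share):
        """Recover the watermark by 'stacking' (XOR) the query's master share with
        the stored ownership share."""
        return binary_feature(query_image) ^ ownership_share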
A voting-based star identification algorithm utilizing local and global distribution
NASA Astrophysics Data System (ADS)
Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua
2018-03-01
A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global distribution and local distribution of sensor stars are fully utilized, and a stratified voting scheme is adopted to obtain the candidates for sensor stars. Database optimization is employed to reduce the memory requirement and improve the robustness of the proposed algorithm. The simulation shows that the proposed algorithm achieves a 99.81% identification rate with positional noise of 2-pixel standard deviation and magnitude noise of 0.322 Mv. Compared with two similar algorithms, the proposed algorithm is more robust to noise, and its average identification time and required memory are lower. Furthermore, the real sky test shows that the proposed algorithm performs well on real star images.
NASA Astrophysics Data System (ADS)
Peng, Bo; Zheng, Sifa; Liao, Xiangning; Lian, Xiaomin
2018-03-01
In order to achieve sound field reproduction in a wide frequency band, multiple-type speakers are used. The reproduction accuracy is not only affected by the signals sent to the speakers, but also depends on the position and the number of each type of speaker. The method of optimizing a mixed speaker array is investigated in this paper. A virtual-speaker weighting method is proposed to optimize both the position and the number of each type of speaker. In this method, a virtual-speaker model is proposed to quantify the increment of controllability of the speaker array when the speaker number increases. While optimizing a mixed speaker array, the gain of the virtual-speaker transfer function is used to determine the priority orders of the candidate speaker positions, which optimizes the position of each type of speaker. Then the relative gain of the virtual-speaker transfer function is used to determine whether the speakers are redundant, which optimizes the number of each type of speaker. Finally the virtual-speaker weighting method is verified by reproduction experiments of the interior sound field in a passenger car. The results validate that the optimum mixed speaker array can be obtained using the proposed method.
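One plausible reading of such a placement procedure is a greedy loop that adds the candidate speaker giving the largest gain in array controllability and stops when the relative gain of adding another speaker becomes negligible. The sketch below uses the smallest singular value of the transfer matrix as that gain measure, which is an assumption standing in for the paper's virtual-speaker transfer-function gain; the names and tolerance are likewise illustrative.

    import numpy as np

    def greedy_speaker_selection(H_candidates, max_speakers, rel_gain_tol=0.05):
        """Greedy placement over candidate loudspeaker positions.

        H_candidates : (n_mics, n_candidates) complex transfer functions from each
                       candidate position to the control microphones at one frequency.
        """
        chosen = []
        current = np.zeros((H_candidates.shape[0], 0), dtype=complex)
        score = 0.0
        while len(chosen) < max_speakers:
            best, best_score = None, score
            for j in range(H_candidates.shape[1]):
                if j in chosen:
                    continue
                trial = np.hstack([current, H_candidates[:, [j]]])
                s_min = np.linalg.svd(trial, compute_uv=False)[-1]  # controllability proxy
                if s_min > best_score:
                    best, best_score = j, s_min
            # stop when no candidate helps, or the relative gain is negligible
            if best is None or (score > 0 and (best_score - score) / score < rel_gain_tol):
                break
            chosen.append(best)
            current = np.hstack([current, H_candidates[:, [best]]])
            score = best_score
        return chosen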
How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?
Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.
2010-01-01
We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774
Futrell, Richard; Hickey, Tina; Lee, Aldrin; Lim, Eunice; Luchkina, Elena; Gibson, Edward
2015-03-01
In communicating events by gesture, participants create codes that recapitulate the patterns of word order in the world's vocal languages (Gibson et al., 2013; Goldin-Meadow, So, Ozyurek, & Mylander, 2008; Hall, Mayberry, & Ferreria, 2013; Hall, Ferreira, & Mayberry, 2014; Langus & Nespor, 2010; and others). Participants most often convey simple transitive events using gestures in the order Subject-Object-Verb (SOV), the most common word order in human languages. When there is a possibility of confusion between subject and object, participants use the order Subject-Verb-Object (SVO). This overall pattern has been explained by positing an underlying cognitive preference for subject-initial, verb-final orders, with the verb-medial order SVO order emerging to facilitate robust communication in a noisy channel (Gibson et al., 2013). However, whether the subject-initial and verb-final biases are innate or the result of languages that the participants already know has been unclear, because participants in previous studies all spoke either SVO or SOV languages, which could induce a subject-initial, verb-late bias. Furthermore, the exact manner in which known languages influence gestural orders has been unclear. In this paper we demonstrate that there is a subject-initial and verb-final gesturing bias cross-linguistically by comparing gestures of speakers of SVO languages English and Russian to those of speakers of VSO languages Irish and Tagalog. The findings show that subject-initial and verb-final order emerges even in speakers of verb-initial languages, and that interference from these languages takes the form of occasionally gesturing in VSO order, without an additional bias toward other orders. The results provides further support for the idea that improvised gesture is a window into the pressures shaping language formation, independently of the languages that participants already know. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wong, Nicole W.
The palatal lateral is a rare sound in the world's languages; a review of the literature reveals just 23 languages that currently possess the palatal lateral. Similarly, only 15 (or 3.33%) of the languages in the UCLA Phonological Segment Inventory Database (UPSID) (Maddieson and Precoda, 1991) can claim to currently possess the palatal lateral. While UPSID reports that an additional five languages (Basque, Guarani, Iate, Spanish, Turkish) possess the palatal lateral, these languages have either lost the palatal lateral or were included erroneously. Understanding the production and perception of rare speech sounds is important for understanding the distribution of speech sounds cross-linguistically, especially with regards to the establishment of a single phonetic alphabet (i.e. the International Phonetic Alphabet (IPA)) that can be used to describe and transcribe the languages of the world (Ladefoged and Everett, 1996). An investigation of rare speech sounds can also reveal important findings regarding the physical limitations of the vocal tract and human auditory system. Given that the palatal lateral is a rare speech sound, a complete description of the articulation, acoustics, and perception of this sound does not currently exist. Accounts of the palatal lateral vary with regards to terminology; the palatal lateral has also been referred to as a so-called "phonemically" palatalized lateral (Zilyns'kyj, 1979), a laminal post-alveolar lateral (Ladefoged and Maddieson, 1996), and an alveolopalatal lateral (Recasens, 2013). Furthermore, current literature also does not distinguish between the palatal lateral and a palatalized lateral. The lack of agreement in literature regarding terminology can present problems when attempting to assess whether a palatal lateral in one language is similar to a palatal lateral in another language. This dissertation provides a comprehensive description of the palatal lateral, as a means of initiating cross-linguistic comparisons of the palatal lateral as well as understanding the difference between a palatal and palatalized lateral. A two-part study of the articulation and acoustics of the palatal lateral in Brazilian Portuguese (BP) was undertaken in this dissertation. Articulatory data was collected using electromagnetic articulography (EMA) from 10 female native speakers of BP from São Paulo state in Brazil, which permitted the simultaneous collection of acoustic information. Study 1 investigated the articulation of the palatal lateral through a battery of measures and compared the palatal lateral against the palatalized lateral approximant, alveolar lateral approximant, palatal approximant, palatal nasal, palatalized nasal, and alveolar nasal. Study 2 analyzed the acoustics of the palatal lateral in comparison to the palatalized lateral approximant, alveolar lateral approximant, and palatal approximant. A third study was included in the appendix. This study incorporates a phone identification task to understand the role of acoustic saliency in the rareness of the palatal lateral, i.e. compared to other palatal sounds, is the palatal lateral more likely to be misidentified and if so, as which sounds? This task also investigates whether there is a perceived difference between the palatal and palatalized lateral that may not be captured by Study 1 and 2, in addition to whether native speakers of BP are better at distinguishing the two sounds than non-native speakers (here, native speakers of American English).
The palatal lateral was compared to the palatalized lateral, palatal approximant, alveolar lateral approximant, palatal nasal, palatalized nasal, alveolar nasal, voiced alveolar stop, and voiced palatalized alveolar stop. Twenty-five (11 male, 14 female) native speakers of BP and 20 (11 male, 9 female) native speakers of American English with no extensive exposure to BP participated in this study. Results from Study 1 show that the palatal lateral is articulated laminally with a high front tongue body and concave anterior tongue shape that gradually becomes straighter as the phone progresses. Acoustic results in Study 2 indicate a median F1, F2, and F3 of 367 Hz, 1954 Hz, and 3035 Hz respectively for female speakers of BP. Statistical analysis reveals little or no evidence of significant difference between the palatal lateral and palatalized lateral with regards to the shape of the tongue body, duration of the phone, or formant frequencies. The perception study included in the appendix finds that while both native and non-native speakers of BP distinguish between the palatal lateral and palatalized lateral at chance level, native speakers of BP perform better than the non-native speakers at correctly identifying the palatal and palatalized nasal. This study also finds that of all the sounds included in this task, the palatal and palatalized lateral are the most likely to be misidentified as the palatal approximant for both participant groups, with the addition of -3 dB of speech-shaped noise greatly increasing the rate of confusion. However, the palatalized lateral is inaccurately identified as a palatal approximant at a confusion rate nearly double or more that of the palatal lateral. This dissertation reveals that the palatal and palatalized lateral are essentially the same sound in BP. Furthermore, there is no evidence that indicates that the palatal or palatalized lateral are composed of two separate phones, i.e. an alveolar lateral approximant followed by a palatal approximant. Findings from the perception study support the proposal that yeísmo (i.e. the merger of the palatal lateral in favor of the palatal approximant (Colantoni, 2001; Hualde et al., 2005)) occurs because lateral sounds are less robust against added noise than nasal sounds. I argue here that this contributes directly to the rareness of the palatal lateral.
Speech coding, reconstruction and recognition using acoustics and electromagnetic waves
Holzrichter, J.F.; Ng, L.C.
1998-03-17
The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.
Speech coding, reconstruction and recognition using acoustics and electromagnetic waves
Holzrichter, John F.; Ng, Lawrence C.
1998-01-01
The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.
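The deconvolution step described above admits a simple frequency-domain reading: given an excitation estimate and the recorded acoustic output for one frame, the vocal-tract transfer function is the regularized spectral ratio of output to excitation. The sketch below illustrates that reading only; it is not the patented method, and all names are illustrative.

```python
# Minimal sketch: per-frame transfer-function estimate via regularized
# frequency-domain deconvolution of an excitation estimate x from the
# acoustic output y. Names and parameters are illustrative assumptions.
import numpy as np

def frame_transfer_function(x_frame, y_frame, eps=1e-8):
    """Estimate the vocal-tract transfer function for one time frame."""
    X = np.fft.rfft(x_frame * np.hanning(len(x_frame)))
    Y = np.fft.rfft(y_frame * np.hanning(len(y_frame)))
    # Regularized spectral division avoids blow-ups where |X| is tiny.
    return (Y * np.conj(X)) / (np.abs(X) ** 2 + eps)

def frame_feature_vector(x_frame, y_frame, n_bins=64):
    """Feature vector for the frame: log-magnitude of H at fixed bins."""
    H = frame_transfer_function(x_frame, y_frame)
    return np.log(np.abs(H[:n_bins]) + 1e-12)
```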
Aeroservoelastic Uncertainty Model Identification from Flight Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
2001-01-01
Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. To date, aeroservoelastic data analysis has given little attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge to this problem is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to obtain input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.
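As a rough illustration of the spectral step underlying the transfer function estimates mentioned above, the following sketch forms an input/output frequency-response estimate from multisine excitation data and attaches a crude coherence-based uncertainty weight. It is a generic spectral-analysis stand-in, not the paper's wavelet-based or minimax procedure; function and variable names are assumptions.

```python
# Minimal sketch: empirical frequency-response estimate with a crude
# coherence-based relative-error weight (low coherence -> large uncertainty).
import numpy as np
from scipy import signal

def frf_with_uncertainty(u, y, fs, nperseg=1024):
    """u: excitation input, y: measured response, fs: sample rate in Hz."""
    f, Puu = signal.welch(u, fs=fs, nperseg=nperseg)
    _, Puy = signal.csd(u, y, fs=fs, nperseg=nperseg)
    _, coh = signal.coherence(u, y, fs=fs, nperseg=nperseg)
    H = Puy / Puu                                   # H1 transfer-function estimate
    rel_err = np.sqrt(np.clip(1.0 - coh, 0.0, 1.0) / np.maximum(coh, 1e-6))
    return f, H, rel_err
```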
An experimental study of nonlinear dynamic system identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1990-01-01
A technique for robust identification of nonlinear dynamic systems is developed and illustrated using both simulations and analog experiments. The technique is based on the Minimum Model Error optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature of the current work is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions about the nonlinearities. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
Wang, Lei
2013-04-01
Understanding the transport mechanism of 1,3-propanediol (1,3-PD) is of critical importance for further research on gene regulation. Due to the lack of intracellular information, and on the basis of an enzyme-catalytic system, using biological robustness as the performance index, we present a system identification model to infer the most plausible transport mechanism of 1,3-PD, in which the performance index consists of the relative error of the extracellular substance concentrations and the biological robustness of the intracellular substance concentrations. We do not use a Boolean framework but prefer a model description based on ordinary differential equations. Among other advantages, this also facilitates the robustness analysis, which is the main goal of this paper. An algorithm is constructed to seek the solution of the identification model. Numerical results show that the most plausible transport route is active transport coupled with passive diffusion.
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2017-03-01
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the vector of parameters of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% in the CSTR process using about 400 data points. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
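For readers unfamiliar with the building block behind ERLS, a minimal sketch of ordinary recursive least squares with a forgetting factor follows; the paper's extended and robust variants add the Wiener-model regressor construction and robustness modifications on top of this. The class and parameter values are illustrative.

```python
# Minimal sketch: recursive least squares (RLS) with a forgetting factor.
import numpy as np

class RLS:
    def __init__(self, n_params, lam=0.99, delta=100.0):
        self.theta = np.zeros(n_params)      # parameter estimate
        self.P = delta * np.eye(n_params)    # inverse correlation matrix
        self.lam = lam                        # forgetting factor

    def update(self, phi, y):
        """phi: regressor vector, y: scalar observation; returns a priori error."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)    # gain vector
        err = y - phi @ self.theta            # a priori prediction error
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return err
```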
Speaker normalization for chinese vowel recognition in cochlear implants.
Luo, Xin; Fu, Qian-Jie
2005-07-01
Because of the limited spectro-temporal resolution associated with cochlear implants, implant patients often have greater difficulty with multitalker speech recognition. The present study investigated whether multitalker speech recognition can be improved by applying speaker normalization techniques to cochlear implant speech processing. Multitalker Chinese vowel recognition was tested with normal-hearing Chinese-speaking subjects listening to a 4-channel cochlear implant simulation, with and without speaker normalization. For each subject, speaker normalization was referenced to the speaker that produced the best recognition performance under conditions without speaker normalization. To match the remaining speakers to this "optimal" output pattern, the overall frequency range of the analysis filter bank was adjusted for each speaker according to the ratio of the mean third formant frequency values between the specific speaker and the reference speaker. Results showed that speaker normalization provided a small but significant improvement in subjects' overall recognition performance. After speaker normalization, subjects' patterns of recognition performance across speakers changed, demonstrating the potential for speaker-dependent effects with the proposed normalization technique.
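The normalization rule described above can be written in a few lines: scale the analysis filter-bank corner frequencies for each speaker by the ratio of mean third-formant (F3) values relative to the reference speaker. The sketch below is one plausible reading of that rule, with illustrative corner frequencies and F3 values rather than the study's.

```python
# Minimal sketch: warp filter-bank corner frequencies by the F3 ratio
# between a reference speaker and the current speaker. All values illustrative.
import numpy as np

def normalized_band_edges(base_edges_hz, speaker_mean_f3, reference_mean_f3):
    """Scale the filter-bank corner frequencies by the F3 ratio."""
    ratio = reference_mean_f3 / speaker_mean_f3
    return np.asarray(base_edges_hz) * ratio

# Example: a 4-channel bank originally spanning 200-7000 Hz.
edges = np.array([200.0, 700.0, 1500.0, 3200.0, 7000.0])
print(normalized_band_edges(edges, speaker_mean_f3=2900.0, reference_mean_f3=2700.0))
```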
Oral-diadochokinesis rates across languages: English and Hebrew norms.
Icht, Michal; Ben-David, Boaz M
2014-01-01
Oro-facial and speech motor control disorders represent a variety of speech and language pathologies. Early identification of such problems is important and carries clinical implications. A common and simple tool for gauging the presence and severity of speech motor control impairments is oral-diadochokinesis (oral-DDK). Surprisingly, norms for adult performance are missing from the literature. The goals of this study were: (1) to establish a norm for oral-DDK rate for (young to middle-aged) adult English speakers, by collecting data from the literature (five studies, N=141); (2) to investigate the possible effect of language (and culture) on oral-DDK performance, by analyzing studies conducted in other languages (five studies, N=140), alongside the English norm; and (3) to find a new norm for adult Hebrew speakers, by testing 115 speakers. We first offer an English norm with a mean of 6.2 syllables/s (SD = 0.8), and a lower boundary of 5.4 syllables/s that can be used to indicate possible abnormality. Next, we found significant differences between four tested languages (English, Portuguese, Farsi and Greek) in oral-DDK rates. Results suggest the need to set language- and culture-sensitive norms for the application of the oral-DDK task world-wide. Finally, we found the oral-DDK performance for adult Hebrew speakers to be 6.4 syllables/s (SD = 0.8), not significantly different from the English norm. This implies possible phonological similarities between English and Hebrew. We further note that no gender effects were found in our study. We recommend using oral-DDK as an important tool in the speech language pathologist's arsenal. Yet, application of this task should be done carefully, comparing individual performance to a set norm within the specific language. Readers will be able to: (1) identify the Speech-Language Pathologist assessment process using the oral-DDK task, by comparing an individual performance to the present English norm, (2) describe the impact of language on oral-DDK performance, and (3) accurately assess Hebrew-speaking patients using this tool. Copyright © 2014 Elsevier Inc. All rights reserved.
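A small worked example of applying the reported English norm is given below: an oral-DDK rate is computed as syllables per second and flagged when it falls below the reported lower boundary of 5.4 syllables/s (which coincides with the mean minus one standard deviation). The trial values are illustrative.

```python
# Minimal sketch: compute an oral-DDK rate and compare it to the English norm.
def ddk_rate(n_syllables, elapsed_seconds):
    return n_syllables / elapsed_seconds

ENGLISH_MEAN = 6.2        # reported mean, syllables/s
ENGLISH_SD = 0.8          # reported SD
LOWER_BOUNDARY = 5.4      # reported lower boundary (= mean - 1 SD)

rate = ddk_rate(n_syllables=30, elapsed_seconds=5.6)   # illustrative trial
print(f"rate = {rate:.2f} syll/s, below lower boundary: {rate < LOWER_BOUNDARY}")
```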
NASA Astrophysics Data System (ADS)
Chen, Jin; Cheng, Wen; Lopresti, Daniel
2011-01-01
Since real data is time-consuming and expensive to collect and label, researchers have proposed approaches using synthetic variations for the tasks of signature verification, speaker authentication, handwriting recognition, keyword spotting, etc. However, the limitation of real data is particularly critical in the field of writer identification because, in forensics, adversaries cannot be expected to provide sufficient data to train a classifier. Therefore, it is unrealistic to always assume sufficient real data to train classifiers extensively for writer identification. In addition, this field differs from many others in that we strive to preserve as much inter-writer variation as possible, but model-perturbed handwriting might break such discriminability among writers. Building on work described in another paper where human subjects were involved in calibrating realistic-looking transformations, we then measured the effects of incorporating perturbed handwriting into the training dataset. Experimental results justified our hypothesis that with limited real data, model-perturbed handwriting improved the performance of writer identification. In particular, if only one single sample for each writer was available, incorporating perturbed data achieved a 36x performance gain.
Data Driven Model Development for the Supersonic Semispan Transport (S(sup 4)T)
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2011-01-01
We investigate two common approaches to model development for robust control synthesis in the aerospace community; namely, reduced order aeroservoelastic modelling based on structural finite-element and computational fluid dynamics based aerodynamic models and a data-driven system identification procedure. It is shown via analysis of experimental SuperSonic SemiSpan Transport (S4T) wind-tunnel data that, using a system identification approach, it is possible to estimate a model at a fixed Mach number which is parsimonious and robust across varying dynamic pressures.
Speech coding, reconstruction and recognition using acoustics and electromagnetic waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzrichter, J.F.; Ng, L.C.
The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.
Blue-green color categorization in Mandarin-English speakers.
Wuerger, Sophie; Xiao, Kaida; Mylonas, Dimitris; Huang, Qingmei; Karatzas, Dimosthenis; Hird, Emily; Paramei, Galina
2012-02-01
Observers are faster to detect a target among a set of distracters if the targets and distracters come from different color categories. This cross-boundary advantage seems to be limited to the right visual field, which is consistent with the dominance of the left hemisphere for language processing [Gilbert et al., Proc. Natl. Acad. Sci. USA 103, 489 (2006)]. Here we study whether a similar visual field advantage is found in the color identification task in speakers of Mandarin, a language that uses a logographic system. Forty late Mandarin-English bilinguals performed a blue-green color categorization task, in a blocked design, in their first language (L1: Mandarin) or second language (L2: English). Eleven color singletons ranging from blue to green were presented for 160 ms, randomly in the left visual field (LVF) or right visual field (RVF). Color boundary and reaction times (RTs) at the color boundary were estimated in L1 and L2, for both visual fields. We found that the color boundary did not differ between the languages; RTs at the color boundary, however, were on average more than 100 ms shorter in the English compared to the Mandarin sessions, but only when the stimuli were presented in the RVF. The finding may be explained by the script nature of the two languages: Mandarin logographic characters are analyzed visuospatially in the right hemisphere, which conceivably facilitates identification of color presented to the LVF. © 2012 Optical Society of America
Speech endpoint detection with non-language speech sounds for generic speech processing applications
NASA Astrophysics Data System (ADS)
McClain, Matthew; Romanowski, Brian
2009-05-01
Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS such as filled pauses will require future research.
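As a simplified stand-in for the classifier described above, the sketch below scores audio frames as LSS or NLSS with one Gaussian mixture model per class over precomputed acoustic features; the paper itself uses an HMM and purpose-built features, so this is only a frame-independent approximation with assumed function names.

```python
# Minimal sketch: frame-level LSS/NLSS classification with per-class GMMs.
# Feature extraction is assumed to exist and produce (n_frames, n_dims) arrays.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_class_models(lss_feats, nlss_feats, n_components=8):
    lss_gmm = GaussianMixture(n_components, covariance_type="diag").fit(lss_feats)
    nlss_gmm = GaussianMixture(n_components, covariance_type="diag").fit(nlss_feats)
    return lss_gmm, nlss_gmm

def classify_frames(feats, lss_gmm, nlss_gmm):
    # Per-frame log-likelihood ratio; positive -> LSS, negative -> NLSS.
    llr = lss_gmm.score_samples(feats) - nlss_gmm.score_samples(feats)
    return llr > 0.0
```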
Qi, Beier; Liu, Bo; Liu, Sha; Liu, Haihong; Dong, Ruijuan; Zhang, Ning; Gong, Shusheng
2011-05-01
To study the effect of cochlear electrode coverage and different insertion regions on speech recognition, especially tone perception, of cochlear implant users whose native language is Mandarin Chinese. Seven test conditions were set with the fitting software. All conditions were created by switching on/off the respective channels in order to simulate different insertion positions. Mandarin CI users then received four speech tests: a Vowel Identification test, a Consonant Identification test, a Tone Identification test (male speaker), and the Mandarin HINT test (SRS) in quiet and noise. Across all test conditions, the average score for vowel identification differed significantly, from 56% to 91% (rank sum test, P < 0.05). The average score for consonant identification differed significantly, from 72% to 85% (ANOVA, P < 0.05). The average score for tone identification was not significantly different (ANOVA, P > 0.05); however, the more channels were activated, the higher the scores obtained, from 68% to 81%. This study shows that there is a correlation between insertion depth and speech recognition. Because all parts of the basilar membrane can help CI users to improve their speech recognition ability, it is very important to enhance the verbal communication ability and social interaction ability of CI users by increasing insertion depth and actively stimulating the apical region of the cochlea.
Multi-damage identification based on joint approximate diagonalisation and robust distance measure
NASA Astrophysics Data System (ADS)
Cao, S.; Ouyang, H.
2017-05-01
Mode shapes or operational deflection shapes are highly sensitive to damage and can be used for multi-damage identification. Nevertheless, one drawback of this kind of method is that the extracted spatial shape features tend to be compromised by noise, which degrades their damage identification accuracy, especially for incipient damage. To overcome this, joint approximate diagonalisation (JAD), also known as simultaneous diagonalisation, is investigated to estimate mode shapes (MSs) statistically. The major advantage of the JAD method is that it efficiently provides the common eigenstructure of a set of power spectral density matrices. In this paper, a new criterion in terms of the coefficient of variation (CV) is utilised to numerically demonstrate the better noise robustness and accuracy of the JAD method over the traditional frequency domain decomposition (FDD) method. Another original contribution is that a new robust damage index (DI) is proposed, which is composed of local MS distortions of several modes weighted by their associated vibration participation factors. The advantage of doing this is to include fair contributions from the changes of all modes concerned. Moreover, the proposed DI provides a measure of damage-induced changes in 'modal vibration energy' in terms of the selected mode shapes. Finally, an experimental study is presented to verify the efficiency and noise robustness of the JAD method and the proposed DI. The results show that the proposed DI is effective and robust under random vibration situations, which indicates that it has the potential to be applied to practical engineering structures with ambient excitations.
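The damage-index construction described above can be sketched as a weighted combination of local mode-shape distortions, with weights given by the vibration participation factors. The snippet below shows that structure only; the paper's exact distortion measure and weighting may differ, and all array names are illustrative.

```python
# Minimal sketch: weighted damage index from local mode-shape distortions.
import numpy as np

def damage_index(ms_healthy, ms_current, participation):
    """ms_*: (n_modes, n_locations) arrays; participation: (n_modes,) weights."""
    w = np.asarray(participation, dtype=float)
    w = w / w.sum()                                    # normalize the weights
    # Unit-normalize each mode shape, then take per-location absolute distortion.
    h = ms_healthy / np.linalg.norm(ms_healthy, axis=1, keepdims=True)
    c = ms_current / np.linalg.norm(ms_current, axis=1, keepdims=True)
    distortion = np.abs(c - h)                         # (n_modes, n_locations)
    return w @ distortion                              # weighted DI per location
```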
Acoustic Sources of Accent in Second Language Japanese Speech.
Idemaru, Kaori; Wei, Peipei; Gubbins, Lucy
2018-05-01
This study reports an exploratory analysis of the acoustic characteristics of second language (L2) speech which give rise to the perception of a foreign accent. Japanese speech samples were collected from American English and Mandarin Chinese speakers (n = 16 in each group) studying Japanese. The L2 participants and native speakers (n = 10) provided speech samples modeled on six short sentences. Segmental (vowels and stops) and prosodic features (rhythm, tone, and fluency) were examined. Native Japanese listeners (n = 10) rated the samples with regard to degrees of foreign accent. The analyses predicting accent ratings based on the acoustic measurements indicated that one of the prosodic features in particular, tone (defined as high and low patterns of pitch accent and intonation in this study), plays an important role in robustly predicting accent rating in L2 Japanese across the two first language (L1) backgrounds. These results were consistent with the prediction based on phonological and phonetic comparisons between Japanese and English, as well as Japanese and Mandarin Chinese. The results also revealed L1-specific predictors of perceived accent in Japanese. The findings of this study contribute to the growing literature that examines sources of perceived foreign accent.
Design of a digital voice data compression technique for orbiter voice channels
NASA Technical Reports Server (NTRS)
1975-01-01
Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps. Good voice quality, speaker recognition, and robustness in the presence of error bursts were considered. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or minimally degraded at .001 and .01 Viterbi decoder bit error rates. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in space shuttle orbiters.
De-identification of health records using Anonym: effectiveness and robustness across datasets.
Zuccon, Guido; Kotzur, Daniel; Nguyen, Anthony; Bergheim, Anton
2014-07-01
Evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random fields classifiers informed by linguistic and lexical features, as well as features extracted by pattern matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary usage of clinical data. De-identification tools that adapt to different sources of clinical data are attractive as they would require minimal intervention to guarantee high effectiveness. The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets used here vary in type of health records, source of data, and their quality, with one of the datasets containing optical character recognition errors. Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training. Findings show that Anonym compares to the best approach from the 2006 i2b2 shared task. It is easy to retrain Anonym with new datasets; if retrained, the system is robust to variations of training size, data type and quality in presence of sufficient training data. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
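The headline figures above are standard span-level precision and recall; a minimal sketch of that arithmetic, with span matching simplified to exact matching for illustration, follows.

```python
# Minimal sketch: precision and recall of detected personal health identifier
# (PHI) spans against a gold standard, using exact-match sets for simplicity.
def precision_recall(predicted_spans, gold_spans):
    predicted, gold = set(predicted_spans), set(gold_spans)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall
```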
Speaking fundamental frequency and vowel formant frequencies: effects on perception of gender.
Gelfer, Marylou Pausewang; Bennett, Quinn E
2013-09-01
The purpose of the present study was to investigate the contribution of vowel formant frequencies to gender identification in connected speech, the distinctiveness of vowel formants in males versus females, and how ambiguous speaking fundamental frequencies (SFFs) and vowel formants might affect perception of gender. Multivalent experimental design. Speaker subjects (eight tall males, eight short females, and seven males and seven females of "middle" height) were recorded saying two carrier phrases to elicit the vowels /i/ and /α/ and a sentence. The gender/height groups were selected to (presumably) maximize formant differences between some groups (tall vs short) and minimize differences between others (middle height). Each subject's samples were digitally altered to distinct SFFs (116, 145, 155, 165, and 207 Hz) to represent SFFs typical of average males, average females, and in an ambiguous range. Listeners judged the gender of each randomized altered speech sample. Results indicated that female speakers were perceived as female even with an SFF in the typical male range. For male speakers, gender perception was less accurate at SFFs of 165 Hz and higher. Although the ranges of vowel formants had considerable overlap between genders, significant differences in the formant frequencies of males and females were seen. Vowel formants appeared to be important to perception of gender, especially for SFFs in the range of 145-165 Hz; however, formants may be a more salient cue in connected speech when compared with isolated vowels or syllables. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Effect of gender on communication of health information to older adults.
Dearborn, Jennifer L; Panzer, Victoria P; Burleson, Joseph A; Hornung, Frederick E; Waite, Harrison; Into, Frances H
2006-04-01
To examine the effect of gender on three key elements of communication with elderly individuals: effectiveness of the communication, perceived relevance to the individual, and effect of gender-stereotyped content. Survey. University of Connecticut Health Center. Thirty-three subjects (17 female), aged 69 to 91 (mean ± standard deviation 82 ± 5.4). Older adults listened to 16 brief narratives randomized in order and by the sex of the speaker (Narrator Voice). Effectiveness was measured according to ability to identify key features (Risks), and subjects were asked to rate the relevance (Plausibility). Number of Risks detected and determinations of plausibility were analyzed according to Subject Gender and Narrator Voice. Narratives were written for either sex or included male or female bias (Neutral or Stereotyped). Female subjects identified a significantly higher number of Risks across all narratives (P=.01). Subjects perceived a significantly higher number of Risks with a female Narrator Voice (P=.03). A significant Voice-by-Stereotype interaction was present for female-stereotyped narratives (P=.009). In narratives rated as Plausible, subjects detected more Risks (P=.02). Subject Gender influenced communication effectiveness. A female speaker resulted in identification of more Risks for subjects of both sexes, particularly for Stereotyped narratives. There was no significant effect of matching Subject Gender and Narrator Voice. This study suggests that the sex of the speaker influences the effectiveness of communication with older adults. These findings should motivate future research into the means by which medical providers can improve communication with their patients.
Robust finger vein ROI localization based on flexible segmentation.
Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun
2013-10-24
Finger veins have been proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of a finger vein identification system. To address this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.
Robust Finger Vein ROI Localization Based on Flexible Segmentation
Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun
2013-01-01
Finger veins have been proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of a finger vein identification system. To address this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system. PMID:24284769
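A rough sketch of the three named steps (segmentation, orientation correction, ROI detection) is given below using OpenCV; the thresholding, rotation handling, and crop size are illustrative simplifications, not the paper's flexible segmentation algorithm.

```python
# Minimal sketch: segment the finger, correct its orientation, crop an ROI.
# Thresholding and the fixed crop window are illustrative assumptions.
import cv2
import numpy as np

def localize_roi(gray):
    # 1) Segmentation: separate the finger region from the background (Otsu).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    # 2) Orientation correction: rotate so the finger's long axis is horizontal.
    (cx, cy), (w, h), angle = cv2.minAreaRect(pts)
    if w < h:
        angle += 90.0
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    rotated = cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))
    # 3) ROI detection: crop a fixed-size window around the finger centre.
    x0, y0 = int(cx) - 120, int(cy) - 40
    return rotated[max(y0, 0):y0 + 80, max(x0, 0):x0 + 240]
```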
Lee, Soomin; Katsuura, Tetsuo; Shimomura, Yoshihiro
2011-01-01
In recent years, a new type of speaker called the parametric speaker has been used to generate highly directional sound, and these speakers are now commercially available. In our previous study, we verified that the burden of the parametric speaker on endocrine function was lower than that of a general speaker. However, nothing has yet been demonstrated about the effects of distances shorter than 2.6 m between parametric speakers and the human body. Therefore, we investigated the effect of distance on endocrinological function and subjective evaluation. Nine male subjects participated in this study. They completed three consecutive sessions: a 20-min quiet period as a baseline, a 30-min mental task period with general speakers or parametric speakers, and a 20-min recovery period. We measured salivary cortisol and chromogranin A (CgA) concentrations. Furthermore, subjects took the Kwansei-gakuin Sleepiness Scale (KSS) test before and after the task and also a sound quality evaluation test after it. Four experiments, crossing a speaker condition (general speaker vs parametric speaker) with a distance condition (0.3 m vs 1.0 m), were conducted at the same time of day on separate days. We used a three-way repeated measures ANOVA (speaker factor × distance factor × time factor) to examine the effects of the parametric speaker. We found that the endocrinological measures were not significantly different between the speaker conditions or the distance conditions. The results also showed that the physiological burden increased over time independent of the speaker condition and distance condition.
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
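The frequency-domain estimator described above amounts to a weighted least-squares fit of a parametric model to frequency-response estimates. The sketch below shows that idea for a first-order model under assumed names; it is a generic formulation, not the thesis's algorithm.

```python
# Minimal sketch: weighted least-squares fit of a first-order transfer function
# H(jw) = b0 / (jw + a0) to measured frequency-response data.
import numpy as np
from scipy.optimize import least_squares

def fit_first_order(freqs_hz, H_meas, weights):
    """freqs_hz: frequencies; H_meas: complex FRF samples; weights: per-point weights."""
    w = 2 * np.pi * np.asarray(freqs_hz)

    def residuals(p):
        b0, a0 = p
        H_model = b0 / (1j * w + a0)
        r = (H_model - np.asarray(H_meas)) * np.sqrt(np.asarray(weights))
        return np.concatenate([r.real, r.imag])   # stack into a real residual vector

    sol = least_squares(residuals, x0=[1.0, 1.0])
    return sol.x                                    # estimated (b0, a0)
```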
The Communication of Public Speaking Anxiety: Perceptions of Asian and American Speakers.
ERIC Educational Resources Information Center
Martini, Marianne; And Others
1992-01-01
Finds that U.S. audiences perceive Asian speakers to have more speech anxiety than U.S. speakers, even though Asian speakers do not self-report higher anxiety levels. Confirms that speech state anxiety is not communicated effectively between speakers and audiences for Asian or U.S. speakers. (SR)
An Investigation of Syntactic Priming among German Speakers at Varying Proficiency Levels
ERIC Educational Resources Information Center
Ruf, Helena T.
2011-01-01
This dissertation investigates syntactic priming in second language (L2) development among three speaker populations: (1) less proficient L2 speakers; (2) advanced L2 speakers; and (3) L1 speakers. Using confederate scripting, this study examines how German speakers choose certain word orders in locative constructions (e.g., "Auf dem Tisch…
Modeling Speaker Proficiency, Comprehensibility, and Perceived Competence in a Language Use Domain
ERIC Educational Resources Information Center
Schmidgall, Jonathan Edgar
2013-01-01
Research suggests that listener perceptions of a speaker's oral language use, or a speaker's "comprehensibility," may be influenced by a variety of speaker-, listener-, and context-related factors. Primary speaker factors include aspects of the speaker's proficiency in the target language such as pronunciation and…
Methods and apparatus for non-acoustic speech characterization and recognition
Holzrichter, John F.
1999-01-01
By simultaneously recording EM wave reflections and acoustic speech information, the positions and velocities of the speech organs as speech is articulated can be defined for each acoustic speech unit. Well defined time frames and feature vectors describing the speech, to the degree required, can be formed. Such feature vectors can uniquely characterize the speech unit being articulated each time frame. The onset of speech, rejection of external noise, vocalized pitch periods, articulator conditions, accurate timing, the identification of the speaker, acoustic speech unit recognition, and organ mechanical parameters can be determined.
Methods and apparatus for non-acoustic speech characterization and recognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzrichter, J.F.
By simultaneously recording EM wave reflections and acoustic speech information, the positions and velocities of the speech organs as speech is articulated can be defined for each acoustic speech unit. Well defined time frames and feature vectors describing the speech, to the degree required, can be formed. Such feature vectors can uniquely characterize the speech unit being articulated each time frame. The onset of speech, rejection of external noise, vocalized pitch periods, articulator conditions, accurate timing, the identification of the speaker, acoustic speech unit recognition, and organ mechanical parameters can be determined.
Bergstra, Myrthe; DE Mulder, Hannah N M; Coopmans, Peter
2018-04-06
This study investigated how speaker certainty (a rational cue) and speaker benevolence (an emotional cue) influence children's willingness to learn words in a selective learning paradigm. In two experiments four- to six-year-olds learnt novel labels from two speakers and, after a week, their memory for these labels was reassessed. Results demonstrated that children retained the label-object pairings for at least a week. Furthermore, children preferred to learn from certain over uncertain speakers, but they had no significant preference for nice over nasty speakers. When the cues were combined, children followed certain speakers, even if they were nasty. However, children did prefer to learn from nice and certain speakers over nasty and certain speakers. These results suggest that rational cues regarding a speaker's linguistic competence trump emotional cues regarding a speaker's affective status in word learning. However, emotional cues were found to have a subtle influence on this process.
Chun, Audrey; Reinhardt, Joann P; Ramirez, Mildred; Ellis, Julie M; Silver, Stephanie; Burack, Orah; Eimicke, Joseph P; Cimarolli, Verena; Teresi, Jeanne A
2017-12-01
To examine agreement between Minimum Data Set clinician ratings and researcher assessments of depression among ethnically diverse nursing home residents using the 9-item Patient Health Questionnaire. Although depression is common among nursing homes residents, its recognition remains a challenge. Observational baseline data from a longitudinal intervention study. Sample of 155 residents from 12 long-term care units in one US facility; 50 were interviewed in Spanish. Convergence between clinician and researcher ratings was examined for (i) self-report capacity, (ii) suicidal ideation, (iii) at least moderate depression, (iv) Patient Health Questionnaire severity scores. Experiences by clinical raters using the depression assessment were analysed. The intraclass correlation coefficient was used to examine concordance and Cohen's kappa to examine agreement between clinicians and researchers. Moderate agreement (κ = 0.52) was observed in determination of capacity and poor to fair agreement in reporting suicidal ideation (κ = 0.10-0.37) across time intervals. Poor agreement was observed in classification of at least moderate depression (κ = -0.02 to 0.24), lower than the maximum kappa obtainable (0.58-0.85). Eight assessors indicated problems assessing Spanish-speaking residents. Among Spanish speakers, researchers identified 16% with Patient Health Questionnaire scores of 10 or greater, and 14% with thoughts of self-harm whilst clinicians identified 6% and 0%, respectively. This study advances the field of depression recognition in long-term care by identification of possible challenges in assessing Spanish speakers. Use of the Patient Health Questionnaire requires further investigation, particularly among non-English speakers. Depression screening for ethnically diverse nursing home residents is required, as underreporting of depression and suicidal ideation among Spanish speakers may result in lack of depression recognition and referral for evaluation and treatment. Training in depression recognition is imperative to improve the recognition, evaluation and treatment of depression in older people living in nursing homes. © 2017 John Wiley & Sons Ltd.
Gayle, Alberto Alexander; Shimaoka, Motomu
2017-01-01
The predominance of English in scientific research has created hurdles for "non-native speakers" of English. Here we present a novel application of native language identification (NLI) for the assessment of medical-scientific writing. For this purpose, we created a novel classification system whereby scoring would be based solely on text features found to be distinctive among native English speakers (NS) within a given context. We dubbed this the "Genuine Index" (GI). This methodology was validated using a small set of journals in the field of pediatric oncology. Our dataset consisted of 5,907 abstracts, representing work from 77 countries. A support vector machine (SVM) was used to generate our model and for scoring. Accuracy, precision, and recall of the classification model were 93.3%, 93.7%, and 99.4%, respectively. Class specific F-scores were 96.5% for NS and 39.8% for our benchmark class, Japan. Overall kappa was calculated to be 37.2%. We found significant differences between countries with respect to the GI score. Significant correlation was found between GI scores and two validated objective measures of writing proficiency and readability. Two sets of key terms and phrases differentiating NS and non-native writing were identified. Our GI model was able to detect, with a high degree of reliability, subtle differences between the terms and phrasing used by native and non-native speakers in peer reviewed journals, in the field of pediatric oncology. In addition, L1 language transfer was found to be very likely to survive revision, especially in non-Western countries such as Japan. These findings show that even when the language used is technically correct, there may still be some phrasing or usage that impact quality.
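A minimal sketch of the kind of text classifier a native-language-identification score such as the GI could be built on is shown below: n-gram features feeding a linear support vector machine, with the signed decision margin serving as a GI-style score. This is not the authors' feature set or model configuration; data loading and labels are assumptions.

```python
# Minimal sketch: word n-gram TF-IDF features + linear SVM for
# native-speaker vs non-native-speaker abstract classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def build_nli_classifier():
    return make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word uni/bigrams
        LinearSVC(C=1.0),
    )

# Assumed usage (training texts and labels supplied elsewhere):
# clf = build_nli_classifier()
# clf.fit(train_abstracts, train_labels)               # labels: "NS" vs "JP", etc.
# scores = clf.decision_function(test_abstracts)       # signed margin ~ GI-style score
```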
Speaker's voice as a memory cue.
Campeanu, Sandra; Craik, Fergus I M; Alain, Claude
2015-02-01
Speaker's voice occupies a central role as the cornerstone of auditory social interaction. Here, we review the evidence suggesting that speaker's voice constitutes an integral context cue in auditory memory. Investigation into the nature of voice representation as a memory cue is essential to understanding auditory memory and the neural correlates which underlie it. Evidence from behavioral and electrophysiological studies suggest that while specific voice reinstatement (i.e., same speaker) often appears to facilitate word memory even without attention to voice at study, the presence of a partial benefit of similar voices between study and test is less clear. In terms of explicit memory experiments utilizing unfamiliar voices, encoding methods appear to play a pivotal role. Voice congruency effects have been found when voice is specifically attended at study (i.e., when relatively shallow, perceptual encoding takes place). These behavioral findings coincide with neural indices of memory performance such as the parietal old/new recollection effect and the late right frontal effect. The former distinguishes between correctly identified old words and correctly identified new words, and reflects voice congruency only when voice is attended at study. Characterization of the latter likely depends upon voice memory, rather than word memory. There is also evidence to suggest that voice effects can be found in implicit memory paradigms. However, the presence of voice effects appears to depend greatly on the task employed. Using a word identification task, perceptual similarity between study and test conditions is, like for explicit memory tests, crucial. In addition, the type of noise employed appears to have a differential effect. While voice effects have been observed when white noise is used at both study and test, using multi-talker babble does not confer the same results. In terms of neuroimaging research modulations, characterization of an implicit memory effect reflective of voice congruency is currently lacking. Copyright © 2014 Elsevier B.V. All rights reserved.
Callan, Daniel E.; Jones, Jeffery A.; Callan, Akiko
2014-01-01
Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action (“Mirror System” properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual only (only saw the speaker's articulating face), and audio only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios in order to determine the regions of the PMC involved with multisensory and modality specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli to control for task difficulty and differences in intelligibility. The results of the functional magnetic resonance imaging (fMRI) analysis for visual only and audio-visual conditions showed overlapping activity in inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas, more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal with articulatory speech gestures. PMID:24860526
Improvements of ModalMax High-Fidelity Piezoelectric Audio Device
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.
2005-01-01
ModalMax audio speakers have been enhanced by innovative means of tailoring the vibration response of thin piezoelectric plates to produce a high-fidelity audio response. The ModalMax audio speakers are 1 mm in thickness. The device completely supplants the need to have a separate driver and speaker cone. ModalMax speakers can serve the same applications as cone speakers, but unlike cone speakers, ModalMax speakers can function in harsh environments such as high humidity or extreme wetness. New design features allow the speakers to be completely submersed in salt water, making them well suited for maritime applications. The sound produced from the ModalMax audio speakers has sound spatial resolution that is readily discernible for headset users.
Data Driven Model Development for the SuperSonic SemiSpan Transport (S(sup 4)T)
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2011-01-01
In this report, we will investigate two common approaches to model development for robust control synthesis in the aerospace community; namely, reduced order aeroservoelastic modelling based on structural finite-element and computational fluid dynamics based aerodynamic models, and a data-driven system identification procedure. It is shown via analysis of experimental SuperSonic SemiSpan Transport (S4T) wind-tunnel data that by using a system identification approach it is possible to estimate a model at a fixed Mach, which is parsimonious and robust across varying dynamic pressures.
Chang, Son-A; Won, Jong Ho; Kim, HyangHee; Oh, Seung-Ha; Tyler, Richard S.; Cho, Chang Hyun
2018-01-01
Background and Objectives It is important to understand the frequency region of cues used, and not used, by cochlear implant (CI) recipients. Speech and environmental sound recognition by individuals with CI and normal hearing (NH) was measured. Gradients were also computed to evaluate the pattern of change in identification performance with respect to the low-pass filtering or high-pass filtering cutoff frequencies. Subjects and Methods Frequency-limiting effects were implemented in the acoustic waveforms by passing the signals through low-pass filters (LPFs) or high-pass filters (HPFs) with seven different cutoff frequencies. Identification of Korean vowels and consonants produced by a male and a female speaker, and of environmental sounds, was measured. Crossover frequencies were determined for each identification test as the point where the LPF and HPF conditions show identical identification scores. Results CI and NH subjects showed changes in identification performance in a similar manner as a function of cutoff frequency for the LPF and HPF conditions, suggesting that the degraded spectral information in the acoustic signals may similarly constrain the identification performance for both subject groups. However, CI subjects were generally less efficient than NH subjects in using the limited spectral information for speech and environmental sound identification due to the inefficient coding of acoustic cues through the CI sound processors. Conclusions This finding provides vital information in Korean for understanding how the frequency information received in speech and environmental sounds through a CI processor differs from that in normal hearing. PMID:29325391
Chang, Son-A; Won, Jong Ho; Kim, HyangHee; Oh, Seung-Ha; Tyler, Richard S; Cho, Chang Hyun
2017-12-01
It is important to understand the frequency region of cues used, and not used, by cochlear implant (CI) recipients. Speech and environmental sound recognition by individuals with CI and normal hearing (NH) was measured. Gradients were also computed to evaluate the pattern of change in identification performance with respect to the low-pass filtering or high-pass filtering cutoff frequencies. Frequency-limiting effects were implemented in the acoustic waveforms by passing the signals through low-pass filters (LPFs) or high-pass filters (HPFs) with seven different cutoff frequencies. Identification of Korean vowels and consonants produced by a male and a female speaker, and of environmental sounds, was measured. Crossover frequencies were determined for each identification test as the point where the LPF and HPF conditions show identical identification scores. CI and NH subjects showed changes in identification performance in a similar manner as a function of cutoff frequency for the LPF and HPF conditions, suggesting that the degraded spectral information in the acoustic signals may similarly constrain the identification performance for both subject groups. However, CI subjects were generally less efficient than NH subjects in using the limited spectral information for speech and environmental sound identification due to the inefficient coding of acoustic cues through the CI sound processors. This finding provides vital information in Korean for understanding how the frequency information received in speech and environmental sounds through a CI processor differs from that in normal hearing.
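Two of the analysis steps above lend themselves to a short sketch: band-limiting a signal with low- or high-pass Butterworth filters at a chosen cutoff, and locating the crossover frequency where the low-pass and high-pass score curves intersect. The sketch below uses illustrative cutoffs and scores, not the study's data.

```python
# Minimal sketch: band-limit a waveform and interpolate the crossover frequency.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_limit(x, fs, cutoff_hz, kind="low", order=6):
    """Low- or high-pass filter a waveform at cutoff_hz (kind: "low" or "high")."""
    sos = butter(order, cutoff_hz, btype=kind, fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def crossover_frequency(cutoffs, lp_scores, hp_scores):
    """Interpolate the cutoff where the LPF and HPF score curves meet."""
    diff = np.asarray(lp_scores, float) - np.asarray(hp_scores, float)
    sign_changes = np.where(np.diff(np.sign(diff)) != 0)[0]
    if sign_changes.size == 0:
        return None                       # the curves never cross in this range
    i = sign_changes[0]
    f0, f1, d0, d1 = cutoffs[i], cutoffs[i + 1], diff[i], diff[i + 1]
    return f0 + (f1 - f0) * (-d0) / (d1 - d0)
```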
Partially supervised speaker clustering.
Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S
2012-05-01
Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment to the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm—linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the “bag of acoustic features” representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
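The clustering step advocated above, cosine-distance clustering of GMM mean supervectors, can be sketched compactly with hierarchical clustering; supervector extraction and the discriminative projection (LSDA) are assumed to happen upstream, and the function names are illustrative.

```python
# Minimal sketch: agglomerative clustering of utterance supervectors under
# the cosine distance rather than the Euclidean distance.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_supervectors(supervectors, n_speakers):
    """supervectors: (n_utterances, dim) array; returns integer cluster labels."""
    d = pdist(supervectors, metric="cosine")       # pairwise cosine distances
    Z = linkage(d, method="average")               # average-linkage agglomeration
    return fcluster(Z, t=n_speakers, criterion="maxclust")
```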
Sound-symbolism boosts novel word learning.
Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter
2016-08-01
The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory information, to investigate how sensitive Dutch speakers are to sound-symbolism in Japanese in a learning task. Participants were taught 2 sets of Japanese ideophones; 1 set with the ideophones' real meanings in Dutch, the other set with their opposite meanings. In Experiment 1, participants learned the ideophones and their real meanings much better than the ideophones with their opposite meanings. Moreover, despite the learning rounds, participants were still able to guess the real meanings of the ideophones in a 2-alternative forced-choice test after they were informed of the manipulation. This shows that natural language sound-symbolism is robust beyond 2-alternative forced-choice paradigms and affects broader language processes such as word learning. In Experiment 2, participants learned regular Japanese adjectives with the same manipulation, and there was no difference between real and opposite conditions. This shows that natural language sound-symbolism is especially strong in ideophones, and that people learn words better when form and meaning match. The highlights of this study are as follows: (a) Dutch speakers learn real meanings of Japanese ideophones better than opposite meanings, (b) Dutch speakers accurately guess meanings of Japanese ideophones, (c) this sensitivity happens despite learning some opposite pairings, (d) no such learning effect exists for regular Japanese adjectives, and (e) this shows the importance of sound-symbolism in scaffolding language learning. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
ERIC Educational Resources Information Center
Subtirelu, Nicholas Close; Lindemann, Stephanie
2016-01-01
While most research in applied linguistics has focused on second language (L2) speakers and their language capabilities, the success of interaction between such speakers and first language (L1) speakers also relies on the positive attitudes and communication skills of the L1 speakers. However, some research has suggested that many L1 speakers lack…
Temporal and acoustic characteristics of Greek vowels produced by adults with cerebral palsy
NASA Astrophysics Data System (ADS)
Botinis, Antonis; Orfanidou, Ioanna; Fourakis, Marios
2005-09-01
The present investigation examined the temporal and spectral characteristics of Greek vowels as produced by speakers with intact (NO) versus cerebral palsy affected (CP) neuromuscular systems. Six NO and six CP native speakers of Greek produced the Greek vowels [i, e, a, o, u] in the first syllable of CVCV nonsense words in a short carrier phrase. Stress could be on either the first or second syllable. There were three female and three male speakers in each group. In terms of temporal characteristics, the results showed that: vowels produced by CP speakers were longer than vowels produced by NO speakers; stressed vowels were longer than unstressed vowels; vowels produced by female speakers were longer than vowels produced by male speakers. In terms of spectral characteristics the results showed that the vowel space of the CP speakers was smaller than that of the NO speakers. This is similar to the results recently reported by Liu et al. [J. Acoust. Soc. Am. 117, 3879-3889 (2005)] for CP speakers of Mandarin. There was also a reduction of the acoustic vowel space defined by unstressed vowels, but this reduction was much more pronounced in the vowel productions of CP speakers than NO speakers.
Consistency between verbal and non-verbal affective cues: a clue to speaker credibility.
Gillis, Randall L; Nilsen, Elizabeth S
2017-06-01
Listeners are exposed to inconsistencies in communication; for example, when speakers' words (i.e. verbal) are discrepant with their demonstrated emotions (i.e. non-verbal). Such inconsistencies introduce ambiguity, which may render a speaker to be a less credible source of information. Two experiments examined whether children make credibility discriminations based on the consistency of speakers' affect cues. In Experiment 1, school-age children (7- to 8-year-olds) preferred to solicit information from consistent speakers (e.g. those who provided a negative statement with negative affect), over novel speakers, to a greater extent than they preferred to solicit information from inconsistent speakers (e.g. those who provided a negative statement with positive affect) over novel speakers. Preschoolers (4- to 5-year-olds) did not demonstrate this preference. Experiment 2 showed that school-age children's ratings of speakers were influenced by speakers' affect consistency when the attribute being judged was related to information acquisition (speakers' believability, "weird" speech), but not general characteristics (speakers' friendliness, likeability). Together, findings suggest that school-age children are sensitive to, and use, the congruency of affect cues to determine whether individuals are credible sources of information.
Inferring speaker attributes in adductor spasmodic dysphonia: ratings from unfamiliar listeners.
Isetti, Derek; Xuereb, Linnea; Eadie, Tanya L
2014-05-01
To determine whether unfamiliar listeners' perceptions of speakers with adductor spasmodic dysphonia (ADSD) differ from control speakers on the parameters of relative age, confidence, tearfulness, and vocal effort and are related to speaker-rated vocal effort or voice-specific quality of life. Twenty speakers with ADSD (including 6 speakers with ADSD plus tremor) and 20 age- and sex-matched controls provided speech recordings, completed a voice-specific quality-of-life instrument (Voice Handicap Index; Jacobson et al., 1997), and rated their own vocal effort. Twenty listeners evaluated speech samples for relative age, confidence, tearfulness, and vocal effort using rating scales. Listeners judged speakers with ADSD as sounding significantly older, less confident, more tearful, and more effortful than control speakers (p < .01). Increased vocal effort was strongly associated with decreased speaker confidence (rs = .88-.89) and sounding more tearful (rs = .83-.85). Self-rated speaker effort was moderately related (rs = .45-.52) to listener impressions. Listeners' perceptions of confidence and tearfulness were also moderately associated with higher Voice Handicap Index scores (rs = .65-.70). Unfamiliar listeners judge speakers with ADSD more negatively than control speakers, with judgments extending beyond typical clinical measures. The results have implications for counseling and understanding the psychosocial effects of ADSD.
NASA Astrophysics Data System (ADS)
Dimakis, Nikolaos; Soldatos, John; Polymenakos, Lazaros; Sturm, Janienke; Neumann, Joachim; Casas, Josep R.
The CHIL Memory Jog service focuses on facilitating the collaboration of participants in meetings, lectures, presentations, and other human interactive events occurring in indoor CHIL spaces. It exploits the whole set of perceptual components developed by the CHIL Consortium partners (e.g., person tracking, face identification, audio source localization) along with a wide range of actuating devices such as projectors, displays, targeted audio devices, and speakers. The underlying set of perceptual components provides a constant flow of elementary contextual information, such as “person at location x0,y0” or “speech at location x0,y0”, which alone is of little use. However, the CHIL Memory Jog service is accompanied by powerful situation identification techniques that fuse the incoming information and create complex states that drive the actuating logic.
Waaramaa, Teija; Leisiö, Timo
2013-01-01
The present study focused on voice quality and the perception of the basic emotions from speech samples in cross-cultural conditions. It was examined whether voice quality, cultural or language background, age, or gender were related to the identification of the emotions. Professional actors (n = 2) and actresses (n = 2) produced nonsense sentences (n = 32) and protracted vowels (n = 8) expressing the six basic emotions, interest, and a neutral emotional state. The impact of musical interests on the ability to distinguish between emotions or valence (on an axis positivity – neutrality – negativity) from voice samples was studied. Listening tests were conducted on location in five countries: Estonia, Finland, Russia, Sweden, and the USA with 50 randomly chosen participants (25 males and 25 females) in each country. The participants (total N = 250) completed a questionnaire eliciting their background information and musical interests. The responses in the listening test and the questionnaires were statistically analyzed. Voice quality parameters and the share of the emotions and valence identified correlated significantly with each other for both genders. The percentage of emotions and valence identified was clearly above the chance level in each of the five countries studied; however, the countries differed significantly from each other in the emotions identified and the gender of the speaker. The samples produced by females were identified significantly better than those produced by males. Listener's age was a significant variable. Only minor gender differences were found for the identification. Perceptual confusion in the listening test between emotions seemed to be dependent on their similar voice production types. Musical interests tended to have a positive effect on the identification of the emotions. The results also suggest that identifying emotions from speech samples may be easier for those listeners who share a similar language or cultural background with the speaker. PMID:23801972
Robust Fault Detection and Isolation for Stochastic Systems
NASA Technical Reports Server (NTRS)
George, Jemin; Gregory, Irene M.
2010-01-01
This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.
Speaker Linking and Applications using Non-Parametric Hashing Methods
2016-09-08
clustering method based on hashing: canopy clustering. We apply this method to a large corpus of speaker recordings, demonstrate performance tradeoffs...and compare to other hashing methods. Index Terms: speaker recognition, clustering, hashing, locality sensitive hashing. 1. Introduction We assume...speaker in our corpus. Second, given a QBE method, how can we perform speaker clustering: each clustering should be a single speaker, and a cluster should
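The record above is only a truncated snippet, so the exact algorithm is not recoverable from it; as a rough stand-in, the sketch below shows the generic idea behind locality-sensitive hashing of speaker embeddings, where random hyperplanes assign nearby recordings to the same bucket before any finer clustering pass. All names, sizes, and parameters are illustrative assumptions, not details from the paper.

```python
# Rough illustration of random-hyperplane LSH bucketing for speaker embeddings.
import numpy as np

def lsh_buckets(embeddings: np.ndarray, n_bits: int = 16, seed: int = 0) -> dict:
    """Assign each embedding to a bucket keyed by the signs of random projections."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(embeddings.shape[1], n_bits))
    signatures = (embeddings @ planes) > 0            # (n, n_bits) boolean hash codes
    buckets = {}
    for i, sig in enumerate(map(tuple, signatures)):
        buckets.setdefault(sig, []).append(i)
    return buckets

emb = np.random.default_rng(1).normal(size=(1000, 64))   # placeholder speaker embeddings
buckets = lsh_buckets(emb)
print(len(buckets), max(len(v) for v in buckets.values()))
```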
On the Time Course of Vocal Emotion Recognition
Pell, Marc D.; Kotz, Sonja A.
2011-01-01
How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically-anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses looked at how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing. PMID:22087275
Robust detection-isolation-accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Weiss, J. L.; Pattipati, K. R.; Willsky, A. S.; Eterno, J. S.; Crawford, J. T.
1985-01-01
The results of a one-year study to: (1) develop a theory for Robust Failure Detection and Identification (FDI) in the presence of model uncertainty, (2) develop a design methodology which utilizes the robust FDI theory, (3) apply the methodology to a sensor FDI problem for the F-100 jet engine, and (4) demonstrate the application of the theory to the evaluation of alternative FDI schemes are presented. Theoretical results in statistical discrimination are used to evaluate the robustness of residual signals (or parity relations) in terms of their usefulness for FDI. Furthermore, optimally robust parity relations are derived through the optimization of robustness metrics. The result is viewed as decentralization of the FDI process. A general structure for decentralized FDI is proposed and robustness metrics are used for determining various parameters of the algorithm.
The effect of tonal changes on voice onset time in Mandarin esophageal speech.
Liu, Hanjun; Ng, Manwa L; Wan, Mingxi; Wang, Supin; Zhang, Yi
2008-03-01
The present study investigated the effect of tonal changes on voice onset time (VOT) between normal laryngeal (NL) and superior esophageal (SE) speakers of Mandarin Chinese. VOT values were measured from the syllables /pha/, /tha/, and /kha/ produced at four tone levels by eight NL and seven SE speakers who were native speakers of Mandarin. Results indicated that Mandarin tones were associated with significantly different VOT values for NL speakers, in which high-falling tone was associated with significantly shorter VOT values than mid-rising tone and falling-rising tone. Regarding speaker group, SE speakers showed significantly shorter VOT values than NL speakers across all tone levels. This may be related to their use of the pharyngoesophageal (PE) segment as an alternative sound source. SE speakers appear to take a shorter time to start PE segment vibration compared to NL speakers using the vocal folds for vibration.
Ng, Manwa L; Chen, Yang
2011-12-01
The present study examined English sentence stress produced by native Cantonese speakers who were speaking English as a second language (ESL). Cantonese ESL speakers' proficiency in English stress production as perceived by English-speaking listeners was also studied. Acoustical parameters associated with sentence stress including fundamental frequency (F0), vowel duration, and intensity were measured from the English sentences produced by 40 Cantonese ESL speakers. Data were compared with those obtained from 40 native speakers of American English. The speech samples were also judged by eight native listeners who were native speakers of American English for placement, degree, and naturalness of stress. Results showed that Cantonese ESL speakers were able to use F0, vowel duration, and intensity to differentiate sentence stress patterns. Yet, both female and male Cantonese ESL speakers exhibited consistently higher F0 in stressed words than English speakers. Overall, Cantonese ESL speakers were found to be proficient in using duration and intensity to signal sentence stress, in a way comparable with English speakers. In addition, F0 and intensity were found to correlate closely with perceptual judgement and the degree of stress with the naturalness of stress.
Audio fingerprint extraction for content identification
NASA Astrophysics Data System (ADS)
Shiu, Yu; Yeh, Chia-Hung; Kuo, C. C. J.
2003-11-01
In this work, we present an audio content identification system that identifies some unknown audio material by comparing its fingerprint with those extracted off-line and saved in the music database. We will describe in detail the procedure to extract audio fingerprints and demonstrate that they are robust to noise and content-preserving manipulations. The main feature in the proposed system is the zero-crossing rate extracted with the octave-band filter bank. The zero-crossing rate can be used to describe the dominant frequency in each subband with a very low computational cost. The size of audio fingerprint is small and can be efficiently stored along with the compressed files in the database. It is also robust to many modifications such as tempo change and time-alignment distortion. Besides, the octave-band filter bank is used to enhance the robustness to distortion, especially those localized on some frequency regions.
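A hedged sketch of the fingerprint idea described above: band-pass the signal into octave bands and keep one zero-crossing rate per band as a compact, noise-robust descriptor. The filter order, starting band, and number of bands below are assumptions for illustration, not the authors' configuration.

```python
# Sketch: per-octave-band zero-crossing rates as a compact audio fingerprint.
import numpy as np
from scipy.signal import butter, sosfilt

def octave_zcr_fingerprint(x: np.ndarray, sr: int, f_low: float = 125.0, n_bands: int = 6) -> np.ndarray:
    """Return one zero-crossing rate per octave band, starting at f_low Hz."""
    rates = []
    for b in range(n_bands):
        lo, hi = f_low * 2 ** b, min(f_low * 2 ** (b + 1), sr / 2 - 1)
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        y = sosfilt(sos, x)
        rates.append(np.mean(np.abs(np.diff(np.sign(y))) > 0))   # fraction of sign changes
    return np.array(rates)

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)          # toy one-second tone instead of real audio
print(octave_zcr_fingerprint(tone, sr))
```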
Smart phones: platform enabling modular, chemical, biological, and explosives sensing
NASA Astrophysics Data System (ADS)
Finch, Amethist S.; Coppock, Matthew; Bickford, Justin R.; Conn, Marvin A.; Proctor, Thomas J.; Stratis-Cullum, Dimitra N.
2013-05-01
Reliable, robust, and portable technologies are needed for the rapid identification and detection of chemical, biological, and explosive (CBE) materials. A key to addressing the persistent threat to U.S. troops in the current war on terror is the rapid detection and identification of the precursor materials used in the development of improvised explosive devices, homemade explosives, and bio-warfare agents. However, a universal methodology for detection and prevention of CBE materials in the use of these devices has proven difficult. Herein, we discuss our efforts towards the development of a modular, robust, inexpensive, pervasive, archival, and compact platform (an Android-based smartphone) enabling the rapid detection of these materials.
Leaf epidermis images for robust identification of plants
da Silva, Núbia Rosa; Oliveira, Marcos William da Silva; Filho, Humberto Antunes de Almeida; Pinheiro, Luiz Felipe Souza; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez
2016-01-01
This paper proposes a methodology for plant analysis and identification based on extracting texture features from microscopic images of leaf epidermis. All the experiments were carried out using 32 plant species with 309 epidermal samples captured by an optical microscope coupled to a digital camera. The results of the computational methods using texture features were compared to the conventional approach, where quantitative measurements of stomatal traits (density, length and width) were manually obtained. Epidermis image classification using texture has achieved a success rate of over 96%, while success rate was around 60% for quantitative measurements taken manually. Furthermore, we verified the robustness of our method accounting for natural phenotypic plasticity of stomata, analysing samples from the same species grown in different environments. Texture methods were robust even when considering phenotypic plasticity of stomatal traits with a decrease of 20% in the success rate, as quantitative measurements proved to be fully sensitive with a decrease of 77%. Results from the comparison between the computational approach and the conventional quantitative measurements lead us to discover how computational systems are advantageous and promising in terms of solving problems related to Botany, such as species identification. PMID:27217018
The Speaker Gender Gap at Critical Care Conferences.
Mehta, Sangeeta; Rose, Louise; Cook, Deborah; Herridge, Margaret; Owais, Sawayra; Metaxa, Victoria
2018-06-01
To review women's participation as faculty at five critical care conferences over 7 years. Retrospective analysis of five scientific programs to identify the proportion of females and each speaker's profession based on conference conveners, program documents, or internet research. Three international (European Society of Intensive Care Medicine, International Symposium on Intensive Care and Emergency Medicine, Society of Critical Care Medicine) and two national (Critical Care Canada Forum, U.K. Intensive Care Society State of the Art Meeting) annual critical care conferences held between 2010 and 2016. Female faculty speakers. None. Male speakers outnumbered female speakers at all five conferences, in all 7 years. Overall, women represented 5-31% of speakers, and female physicians represented 5-26% of speakers. Nursing and allied health professional faculty represented 0-25% of speakers; in general, more than 50% of allied health professionals were women. Over the 7 years, Society of Critical Care Medicine had the highest representation of female (27% overall) and nursing/allied health professional (16-25%) speakers; notably, male physicians substantially outnumbered female physicians in all years (62-70% vs 10-19%, respectively). Women's representation on conference program committees ranged from 0% to 40%, with Society of Critical Care Medicine having the highest representation of women (26-40%). The female proportions of speakers, physician speakers, and program committee members increased significantly over time at the Society of Critical Care Medicine and U.K. Intensive Care Society State of the Art Meeting conferences (p < 0.05), but there was no temporal change at the other three conferences. There is a speaker gender gap at critical care conferences, with male faculty outnumbering female faculty. This gap is more marked among physician speakers than those speakers representing nursing and allied health professionals. Several organizational strategies can address this gender gap.
Reflecting on Native Speaker Privilege
ERIC Educational Resources Information Center
Berger, Kathleen
2014-01-01
The issues surrounding native speakers (NSs) and nonnative speakers (NNSs) as teachers (NESTs and NNESTs, respectively) in the field of teaching English to speakers of other languages (TESOL) are a current topic of interest. In many contexts, the native speaker of English is viewed as the model teacher, thus putting the NEST into a position of…
ERIC Educational Resources Information Center
Kersten, Alan W.; Meissner, Christian A.; Lechuga, Julia; Schwartz, Bennett L.; Albrechtsen, Justin S.; Iglesias, Adam
2010-01-01
Three experiments provide evidence that the conceptualization of moving objects and events is influenced by one's native language, consistent with linguistic relativity theory. Monolingual English speakers and bilingual Spanish/English speakers tested in an English-speaking context performed better than monolingual Spanish speakers and bilingual…
The contribution of dynamic visual cues to audiovisual speech perception.
Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador
2015-08-01
Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues, two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.
Analytical redundancy and the design of robust failure detection systems
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Willsky, A. S.
1984-01-01
The Failure Detection and Identification (FDI) process is viewed as consisting of two stages: residual generation and decision making. It is argued that a robust FDI system can be achieved by designing a robust residual generation process. Analytical redundancy, the basis for residual generation, is characterized in terms of a parity space. Using the concept of parity relations, residuals can be generated in a number of ways, and the design of a robust residual generation process can be formulated as a minimax optimization problem. An example is included to illustrate this design methodology. Previously announced in STAR as N83-20653
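The parity-space notion of analytical redundancy can be illustrated with a small numerical example. The sketch below uses a made-up static measurement matrix rather than anything from the paper: the parity relations are rows of the left null space of the sensor matrix, so the residual stays near zero for healthy measurements and reacts to a sensor bias.

```python
# Sketch: residual generation from parity relations for a redundant sensor set.
import numpy as np
from scipy.linalg import null_space

# Four sensors measuring a two-dimensional state: two redundant parity relations exist.
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
V = null_space(C.T).T                        # rows satisfy V @ C = 0 (parity relations)

x_true = np.array([0.3, -1.2])
y_healthy = C @ x_true
y_faulty = y_healthy.copy()
y_faulty[2] += 0.5                           # bias fault on the third sensor

print(np.linalg.norm(V @ y_healthy))         # ~0: residual is silent when sensors agree
print(np.linalg.norm(V @ y_faulty))          # clearly nonzero: the fault excites the residual
```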
Robust recognition of loud and Lombard speech in the fighter cockpit environment
NASA Astrophysics Data System (ADS)
Stanton, Bill J., Jr.
1988-08-01
There are a number of challenges associated with incorporating speech recognition technology into the fighter cockpit. One of the major problems is the wide range of variability in the pilot's voice, which can result from changing levels of stress and workload. Increasing the training set to include abnormal speech is not an attractive option because of the innumerable conditions that would have to be represented and the inordinate amount of time required to collect such a training set. A more promising approach is to study subsets of abnormal speech that have been produced under controlled cockpit conditions with the purpose of characterizing reliable shifts that occur relative to normal speech. That was the aim of this research. Analyses were conducted for 18 features on 17671 phoneme tokens across eight speakers for normal, loud, and Lombard speech. It was discovered that there was a consistent migration of energy in the sonorants. This discovery of reliable energy shifts led to the development of a method to reduce or eliminate these shifts in the Euclidean distances between LPC log magnitude spectra. This method significantly improved recognition performance for loud and Lombard speech. Discrepancies in recognition error rates between normal and abnormal speech were reduced by approximately 50 percent for all eight speakers combined.
Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash
2015-01-01
The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490
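A much simpler stand-in for the decoding idea described above (not the authors' state-space/EM model): within sliding windows, correlate the recorded neural envelope response with each of the two speech envelopes and label the window with the more strongly correlated speaker. The sampling rate, window length, and synthetic signals below are placeholders.

```python
# Simplified correlation-based attention decoding over sliding windows.
import numpy as np

def decode_attention(neural: np.ndarray, env_a: np.ndarray, env_b: np.ndarray,
                     fs: float, win_s: float = 5.0) -> np.ndarray:
    """Label each window +1 if speaker A's envelope correlates better, else -1."""
    win = int(win_s * fs)
    labels = []
    for start in range(0, len(neural) - win + 1, win):
        seg = slice(start, start + win)
        r_a = np.corrcoef(neural[seg], env_a[seg])[0, 1]
        r_b = np.corrcoef(neural[seg], env_b[seg])[0, 1]
        labels.append(1 if r_a >= r_b else -1)
    return np.array(labels)

fs = 100.0                                    # envelope sampling rate (placeholder)
rng = np.random.default_rng(0)
env_a, env_b = np.abs(rng.normal(size=(2, int(60 * fs))))
neural = 0.8 * env_a + 0.2 * rng.normal(size=env_a.size)    # listener "attending" speaker A
print(decode_attention(neural, env_a, env_b, fs))           # mostly +1
```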
Sidhu, David M; Pexman, Penny M; Saint-Aubin, Jean
2016-09-01
Although it is often assumed that language involves an arbitrary relationship between form and meaning, many studies have demonstrated that nonwords like maluma are associated with round shapes, while nonwords like takete are associated with sharp shapes (i.e., the Maluma/Takete effect, Köhler, 1929/1947). The majority of the research on sound symbolism has used nonwords, but Sidhu and Pexman (2015) recently extended this effect to existing labels: real English first names (i.e., the Bob/Kirk effect). In the present research we tested whether the effects of name sound symbolism generalize to French speakers (Experiment 1) and French names (Experiment 2). In addition, we assessed the underlying mechanism of name sound symbolism, investigating the roles of phonology and orthography in the effect. Results showed that name sound symbolism does generalize to French speakers and French names. Further, this robust effect remained the same when names were presented in a curved vs. angular font (Experiment 3), or when the salience of orthographic information was reduced through auditory presentation (Experiment 4). Together these results suggest that the Bob/Kirk effect is pervasive, and that it is based on fundamental features of name phonemes. Copyright © 2016 Elsevier B.V. All rights reserved.
Analysis of wolves and sheep. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, J.; Papcun, G.; Zlokarnik, I.
1997-08-01
In evaluating speaker verification systems, asymmetries have been observed in the ease with which people are able to break into other people's voice locks. People who are good at breaking into voice locks are called wolves, and people whose locks are easy to break into are called sheep. (Goats are people that have a difficult time opening their own voice locks.) Analyses of speaker verification algorithms could be used to understand wolf/sheep asymmetries. Using the notion of a "speaker space", it is demonstrated that such asymmetries could arise even though the similarity of voice 1 to voice 2 is the same as the inverse similarity. This partially explains the wolf/sheep asymmetries, although there may be other factors. The speaker space can be computed from interspeaker similarity data using multidimensional scaling, and such a speaker space can be used to give a good approximation of the interspeaker similarities. The derived speaker space can be used to predict which of the enrolled speakers are likely to be wolves and which are likely to be sheep. However, a speaker must first enroll in the speaker key system and then be compared to each of the other speakers; a good estimate of a person's speaker space position could be obtained using only a speech sample.
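The "speaker space" construction from interspeaker similarity data can be sketched with off-the-shelf multidimensional scaling. In the example below the dissimilarity matrix is synthetic; in the report it would come from the verification system's interspeaker similarity scores, and the number of speakers and dimensions are arbitrary.

```python
# Sketch: recovering a low-dimensional speaker space from pairwise dissimilarities.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
true_positions = rng.normal(size=(12, 2))                    # pretend speaker space
diss = np.linalg.norm(true_positions[:, None] - true_positions[None, :], axis=-1)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
recovered = mds.fit_transform(diss)                          # coordinates up to rotation/shift
print(recovered.shape)                                       # (12, 2)
```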
Investigating Auditory Processing of Syntactic Gaps with L2 Speakers Using Pupillometry
ERIC Educational Resources Information Center
Fernandez, Leigh; Höhle, Barbara; Brock, Jon; Nickels, Lyndsey
2018-01-01
According to the Shallow Structure Hypothesis (SSH), second language (L2) speakers, unlike native speakers, build shallow syntactic representations during sentence processing. In order to test the SSH, this study investigated the processing of a syntactic movement in both native speakers of English and proficient late L2 speakers of English using…
A Model of Mandarin Tone Categories--A Study of Perception and Production
ERIC Educational Resources Information Center
Yang, Bei
2010-01-01
The current study lays the groundwork for a model of Mandarin tones based on both native speakers' and non-native speakers' perception and production. It demonstrates that there is variability in non-native speakers' tone productions and that there are differences in the perceptual boundaries in native speakers and non-native speakers. There…
Literacy Skill Differences between Adult Native English and Native Spanish Speakers
ERIC Educational Resources Information Center
Herman, Julia; Cote, Nicole Gilbert; Reilly, Lenore; Binder, Katherine S.
2013-01-01
The goal of this study was to compare the literacy skills of adult native English and native Spanish ABE speakers. Participants were 169 native English speakers and 124 native Spanish speakers recruited from five prior research projects. The results showed that the native Spanish speakers were less skilled on morphology and passage comprehension…
ERIC Educational Resources Information Center
Lee, Jiyeon; Yoshida, Masaya; Thompson, Cynthia K.
2015-01-01
Purpose: Grammatical encoding (GE) is impaired in agrammatic aphasia; however, the nature of such deficits remains unclear. We examined grammatical planning units during real-time sentence production in speakers with agrammatic aphasia and control speakers, testing two competing models of GE. We queried whether speakers with agrammatic aphasia…
Ahadi, Mohsen; Pourbakht, Akram; Jafari, Amir Homayoun; Shirjian, Zahra; Jafarpisheh, Amir Salar
2014-06-01
To investigate the influence of gender on the subcortical representation of speech acoustic parameters when presented simultaneously to both ears. Two-channel speech-evoked auditory brainstem responses were obtained in 25 female and 23 male normal hearing young adults by using binaural presentation of the 40 ms synthetic consonant-vowel /da/, and the encoding of the fast and slow elements of speech stimuli at the subcortical level was compared in the temporal and spectral domains between the sexes using independent-sample, two-tailed t-tests. Highly detectable responses were established in both groups. Analysis in the time domain revealed earlier and larger fast onset responses in females, but there was no gender-related difference in the sustained segment and offset of the response. Interpeak intervals between Frequency Following Response peaks were also invariant to sex. Based on shorter onset responses in females, composite onset measures were also sex dependent. Analysis in the spectral domain showed more robust and better representation of the fundamental frequency as well as the first formant and high frequency components of the first formant in females than in males. Anatomical, biological and biochemical distinctions between females and males could alter the neural encoding of the acoustic cues of speech stimuli at the subcortical level. Females have an advantage in binaural processing of the slow and fast elements of speech. This could be physiological evidence for women's better identification of the speaker and the emotional tone of voice, as well as better perception of the phonetic information of speech. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Development of panel loudspeaker system: design, evaluation and enhancement.
Bai, M R; Huang, T
2001-06-01
Panel speakers are investigated in terms of structural vibration and acoustic radiation. A panel speaker primarily consists of a panel and an inertia exciter. In contrast to conventional speakers, flexural resonance is encouraged such that the panel vibrates as randomly as possible. Simulation tools are developed to facilitate system integration of panel speakers. In particular, electro-mechanical analogy, finite element analysis, and fast Fourier transform are employed to predict panel vibration and the acoustic radiation. Design procedures are also summarized. In order to compare the panel speakers with the conventional speakers, experimental investigations were undertaken to evaluate frequency response, directional response, sensitivity, efficiency, and harmonic distortion of both speakers. The results revealed that the panel speakers suffered from a problem of sensitivity and efficiency. To alleviate the problem, a woofer using electronic compensation based on the H2 model matching principle is utilized to supplement the bass response. As indicated in the results, significant improvement over the panel speaker alone was achieved by using the combined panel-woofer system.
And then I saw her race: Race-based expectations affect infants' word processing.
Weatherhead, Drew; White, Katherine S
2018-08-01
How do our expectations about speakers shape speech perception? Adults' speech perception is influenced by social properties of the speaker (e.g., race). When in development do these influences begin? In the current study, 16-month-olds heard familiar words produced in their native accent (e.g., "dog") and in an unfamiliar accent involving a vowel shift (e.g., "dag"), in the context of an image of either a same-race speaker or an other-race speaker. Infants' interpretation of the words depended on the speaker's race. For the same-race speaker, infants only recognized words produced in the familiar accent; for the other-race speaker, infants recognized both versions of the words. Two additional experiments showed that infants only recognized an other-race speaker's atypical pronunciations when they differed systematically from the native accent. These results provide the first evidence that expectations driven by unspoken properties of speakers, such as race, influence infants' speech processing. Copyright © 2018 Elsevier B.V. All rights reserved.
Word Durations in Non-Native English
Baker, Rachel E.; Baese-Berk, Melissa; Bonnasse-Gahot, Laurent; Kim, Midam; Van Engen, Kristin J.; Bradlow, Ann R.
2010-01-01
In this study, we compare the effects of English lexical features on word duration for native and non-native English speakers and for non-native speakers with different L1s and a range of L2 experience. We also examine whether non-native word durations lead to judgments of a stronger foreign accent. We measured word durations in English paragraphs read by 12 American English (AE), 20 Korean, and 20 Chinese speakers. We also had AE listeners rate the 'accentedness' of these non-native speakers. AE speech had shorter durations, greater within-speaker word duration variance, greater reduction of function words, and less between-speaker variance than non-native speech. However, both AE and non-native speakers showed sensitivity to lexical predictability by reducing second mentions and high frequency words. Non-native speakers with more native-like word durations, greater within-speaker word duration variance, and greater function word reduction were perceived as less accented. Overall, these findings identify word duration as an important and complex feature of foreign-accented English. PMID:21516172
Experimental study on GMM-based speaker recognition
NASA Astrophysics Data System (ADS)
Ye, Wenxing; Wu, Dapeng; Nucci, Antonio
2010-04-01
Speaker recognition plays a very important role in the field of biometric security. In order to improve recognition performance, many pattern recognition techniques have been explored in the literature. Among these techniques, the Gaussian Mixture Model (GMM) has proved to be an effective statistical model for speaker recognition and is used in most state-of-the-art speaker recognition systems. The GMM is used to represent the 'voice print' of a speaker by modeling the spectral characteristics of the speaker's speech signals. In this paper, we implement a speaker recognition system, which consists of preprocessing, Mel-Frequency Cepstrum Coefficients (MFCCs) based feature extraction, and GMM based classification. We test our system with the TIDIGITS data set (325 speakers) and our own recordings of more than 200 speakers; our system achieves a 100% correct recognition rate. Moreover, we also test our system under the scenario that training samples are from one language but test samples are from a different language; our system also achieves a 100% correct recognition rate, which indicates that our system is language independent.
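A minimal sketch of the pipeline described above, assuming an MFCC front end and one GMM per speaker with a maximum-likelihood decision; the file paths, sampling rate, and mixture size are placeholders rather than the authors' exact settings.

```python
# Sketch: MFCC features + one GMM per speaker, maximum-likelihood identification.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path: str, sr: int = 16000) -> np.ndarray:
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T     # (frames, 13)

def train_speaker_models(files_by_speaker: dict) -> dict:
    """Fit one diagonal-covariance GMM per speaker on that speaker's MFCC frames."""
    models = {}
    for speaker, paths in files_by_speaker.items():
        feats = np.vstack([mfcc_features(p) for p in paths])
        models[speaker] = GaussianMixture(n_components=16, covariance_type="diag").fit(feats)
    return models

def identify(path: str, models: dict) -> str:
    """Pick the speaker whose model gives the test utterance the highest log-likelihood."""
    feats = mfcc_features(path)
    return max(models, key=lambda s: models[s].score_samples(feats).sum())

# Usage (paths are placeholders):
# models = train_speaker_models({"alice": ["alice_01.wav"], "bob": ["bob_01.wav"]})
# print(identify("unknown.wav", models))
```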
Steensberg, Alvilda T; Eriksen, Mette M; Andersen, Lars B; Hendriksen, Ole M; Larsen, Heinrich D; Laier, Gunnar H; Thougaard, Thomas
2017-06-01
The European Resuscitation Council Guidelines 2015 recommend that bystanders activate their mobile phone speaker function, if possible, in case of suspected cardiac arrest. This is to facilitate continuous dialogue with the dispatcher including (if required) cardiopulmonary resuscitation instructions. The aim of this study was to measure the bystander capability to activate speaker function in case of suspected cardiac arrest. Over 87 days, a systematic prospective registration of bystander capability to activate the speaker function, when cardiac arrest was suspected, was performed. For those asked, "can you activate your mobile phone's speaker function", audio recordings were examined and categorized into groups according to the bystanders' capability to activate speaker function on their own initiative, without instructions, or with instructions from the emergency medical dispatcher. Time delay was measured, in seconds, for the bystanders without pre-activated speaker function. 42.0% (58) were able to activate the speaker function without instructions, 2.9% (4) with instructions, 18.1% (25) on their own initiative, and 37.0% (51) were unable to activate the speaker function. The median time to activate speaker function was 19 s and 8 s, with and without instructions, respectively. Dispatcher-assisted cardiopulmonary resuscitation with activated speaker function, in cases of suspected cardiac arrest, allows for continuous dialogue between the emergency medical dispatcher and the bystander. In this study, we found a 63.0% success rate of activating the speaker function in such situations. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Ellis, Elizabeth M.
2016-01-01
Teacher linguistic identity has so far mainly been researched in terms of whether a teacher identifies (or is identified by others) as a native speaker (NEST) or nonnative speaker (NNEST) (Moussu & Llurda, 2008; Reis, 2011). Native speakers are presumed to be monolingual, and nonnative speakers, although by definition bilingual, tend to be…
How Cognitive Load Influences Speakers' Choice of Referring Expressions.
Vogels, Jorrig; Krahmer, Emiel; Maes, Alfons
2015-08-01
We report on two experiments investigating the effect of an increased cognitive load for speakers on the choice of referring expressions. Speakers produced story continuations to addressees, in which they referred to characters that were either salient or non-salient in the discourse. In Experiment 1, referents that were salient for the speaker were non-salient for the addressee, and vice versa. In Experiment 2, all discourse information was shared between speaker and addressee. Cognitive load was manipulated by the presence or absence of a secondary task for the speaker. The results show that speakers under load are more likely to produce pronouns, at least when referring to less salient referents. We take this finding as evidence that speakers under load have more difficulties taking discourse salience into account, resulting in the use of expressions that are more economical for themselves. © 2014 Cognitive Science Society, Inc.
Structure Computation of Quiet Spike[Trademark] Flight-Test Data During Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2008-01-01
System identification or mathematical modeling is used in the aerospace community for the development of simulation models for robust control law design. These models are often described as linear time-invariant processes. Nevertheless, it is well known that the underlying process is often nonlinear. The reason for using a linear approach has been the lack of a proper set of tools for the identification of nonlinear systems. Over the past several decades, the controls and biomedical communities have made great advances in developing tools for the identification of nonlinear systems. These approaches are robust and readily applicable to aerospace systems. In this paper, we show the application of one such nonlinear system identification technique, structure detection, for the analysis of F-15B Quiet Spike(TradeMark) aeroservoelastic flight-test data. Structure detection is concerned with the selection of a subset of candidate terms that best describe the observed output. This is a necessary procedure to compute an efficient system description that may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance for the development of robust parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion, which may save significant development time and costs. The objectives of this study are to demonstrate via analysis of F-15B Quiet Spike aeroservoelastic flight-test data for several flight conditions that 1) linear models are inefficient for modeling aeroservoelastic data, 2) nonlinear identification provides a parsimonious model description while providing a high percent fit for cross-validated data, and 3) the model structure and parameters vary as the flight condition is altered.
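Structure detection, as used above, amounts to picking a small subset of candidate model terms that best explains the measured output. The sketch below is a generic forward-selection illustration over made-up candidate terms and synthetic data, not the specific algorithm applied to the Quiet Spike flight-test records.

```python
# Generic illustration of structure detection by greedy forward selection of terms.
import numpy as np

def forward_select(candidates: np.ndarray, y: np.ndarray, max_terms: int = 2):
    """Greedily pick candidate columns that most reduce the residual, refitting each time."""
    chosen, residual, theta = [], y.copy(), None
    for _ in range(max_terms):
        scores = [abs(candidates[:, j] @ residual) / (np.linalg.norm(candidates[:, j]) + 1e-12)
                  for j in range(candidates.shape[1])]
        chosen.append(int(np.argmax(scores)))
        theta, *_ = np.linalg.lstsq(candidates[:, chosen], y, rcond=None)
        residual = y - candidates[:, chosen] @ theta
    return chosen, theta

rng = np.random.default_rng(0)
u1, u2 = rng.normal(size=500), rng.normal(size=500)
X = np.column_stack([u1, u2, u1 * u2, u1**2, u2**2])         # candidate model terms
y = 2.0 * u1 - 0.5 * u1 * u2 + 0.01 * rng.normal(size=500)   # output built from terms 0 and 2
print(forward_select(X, y))                                   # expected to recover terms [0, 2]
```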
ArcAtlas in the Classroom: Pattern Identification, Description, and Explanation
ERIC Educational Resources Information Center
DeMers, Michael N.; Vincent, Jeffrey S.
2007-01-01
The use of geographic information systems (GIS) in the classroom provides a robust and effective method of teaching the primary spatial skills of identification, description, and explanation of spatial pattern. A major handicap for the development of GIS-based learning experiences, especially for non-GIS specialist educators, is the availability…
Long-Term Experience with Chinese Language Shapes the Fusiform Asymmetry of English Reading
Mei, Leilei; Xue, Gui; Lu, Zhong-Lin; Chen, Chuansheng; Wei, Miao; He, Qinghua; Dong, Qi
2015-01-01
Previous studies have suggested differential engagement of the bilateral fusiform gyrus in the processing of Chinese and English. The present study tested the possibility that long-term experience with Chinese language affects the fusiform laterality of English reading by comparing three samples: Chinese speakers, English speakers with Chinese experience, and English speakers without Chinese experience. We found that, when reading words in their respective native language, Chinese and English speakers without Chinese experience differed in functional laterality of the posterior fusiform region (right laterality for Chinese speakers, but left laterality for English speakers). More importantly, compared with English speakers without Chinese experience, English speakers with Chinese experience showed more recruitment of the right posterior fusiform cortex for English words and pseudowords, which is similar to how Chinese speakers processed Chinese. These results suggest that long-term experience with Chinese shapes the fusiform laterality of English reading and have important implications for our understanding of the cross-language influences in terms of neural organization and of the functions of different fusiform subregions in reading. PMID:25598049
Statistical Evaluation of Biometric Evidence in Forensic Automatic Speaker Recognition
NASA Astrophysics Data System (ADS)
Drygajlo, Andrzej
Forensic speaker recognition is the process of determining if a specific individual (suspected speaker) is the source of a questioned voice recording (trace). This paper aims at presenting forensic automatic speaker recognition (FASR) methods that provide a coherent way of quantifying and presenting recorded voice as biometric evidence. In such methods, the biometric evidence consists of the quantified degree of similarity between speaker-dependent features extracted from the trace and speaker-dependent features extracted from recorded speech of a suspect. The interpretation of recorded voice as evidence in the forensic context presents particular challenges, including within-speaker (within-source) variability and between-speakers (between-sources) variability. Consequently, FASR methods must provide a statistical evaluation which gives the court an indication of the strength of the evidence given the estimated within-source and between-sources variabilities. This paper reports on the first ENFSI evaluation campaign through a fake case, organized by the Netherlands Forensic Institute (NFI), as an example, where an automatic method using the Gaussian mixture models (GMMs) and the Bayesian interpretation (BI) framework were implemented for the forensic speaker recognition task.
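The evidential logic of FASR described above can be reduced to a likelihood ratio: the probability of the trace features under the suspect's model against their probability under a background (between-sources) model. The sketch below uses GMMs for both and entirely synthetic features; feature extraction, within-source variability modeling, and score calibration are omitted.

```python
# Sketch: log likelihood ratio of a questioned recording under suspect vs. background GMMs.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
suspect_train = rng.normal(loc=0.5, size=(400, 13))    # placeholder suspect features
background = rng.normal(loc=0.0, size=(4000, 13))      # placeholder between-sources features
trace = rng.normal(loc=0.5, size=(120, 13))            # features of the questioned recording

suspect_gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(suspect_train)
background_gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(background)

log_lr = suspect_gmm.score(trace) - background_gmm.score(trace)   # mean per-frame log LR
print(log_lr)                                          # > 0 supports the same-source hypothesis
```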
Integration of system identification and robust controller designs for flexible structures in space
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Lew, Jiann-Shiun
1990-01-01
An approach is developed using experimental data to identify a reduced-order model and its model error for a robust controller design. There are three steps involved in the approach. First, an approximately balanced model is identified using the Eigensystem Realization Algorithm (ERA). Second, the model error is calculated and described in the frequency domain in terms of the H(infinity) norm. Third, a pole placement technique in combination with an H(infinity) control method is applied to design a controller for the considered system. A set of experimental data from an existing setup, namely the Mini-Mast system, is used to illustrate and verify the approach.
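The first step of the approach, identification with the Eigensystem Realization Algorithm, can be sketched compactly for a single-input single-output case. The impulse-response data below are synthetic, and the model-error quantification and H(infinity) design steps are not shown.

```python
# Compact ERA sketch: realize (A, B, C) from impulse-response (Markov parameter) data.
import numpy as np

def era(markov: np.ndarray, order: int, rows: int = 20, cols: int = 20):
    """Return a discrete-time (A, B, C) realization of the given order."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr = U[:, :order], Vt[:order, :]
    S_half = np.diag(np.sqrt(s[:order]))
    S_half_inv = np.diag(1.0 / np.sqrt(s[:order]))
    A = S_half_inv @ Ur.T @ H1 @ Vr.T @ S_half_inv
    B = (S_half @ Vr)[:, :1]
    C = (Ur @ S_half)[:1, :]
    return A, B, C

k = np.arange(60)
h = 0.95**k * np.sin(0.4 * k)            # impulse response of a mode with poles 0.95*exp(+/-0.4j)
A, B, C = era(h[1:], order=2)            # h[1:] are the Markov parameters Y_k = C A^(k-1) B
print(np.abs(np.linalg.eigvals(A)))      # recovered pole magnitudes, both close to 0.95
```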
The 32nd CDC: System identification using interval dynamic models
NASA Technical Reports Server (NTRS)
Keel, L. H.; Lew, J. S.; Bhattacharyya, S. P.
1992-01-01
Motivated by the recent explosive development of results in the area of parametric robust control, a new technique to identify a family of uncertain systems is presented. The new technique takes the frequency domain input and output data obtained from experimental test signals and produces an 'interval transfer function' that contains the complete frequency domain behavior with respect to the test signals. This interval transfer function is one of the key concepts in the parametric robust control approach, and identification with such an interval model allows one to predict the worst case performance and stability margins using recent results on interval systems. The algorithm is illustrated by applying it to an 18-bay Mini-Mast truss structure.
Precision pointing and control of flexible spacecraft
NASA Technical Reports Server (NTRS)
Bantell, M. H., Jr.
1987-01-01
The problem and long term objectives for the precision pointing and control of flexible spacecraft are given. The four basic objectives are stated in terms of two principle tasks. Under Task 1, robust low order controllers, improved structural modeling methods for control applications and identification methods for structural dynamics are being developed. Under Task 2, a lab test experiment for verification of control laws and system identification algorithms is being developed. For Task 1, work has focused on robust low order controller design and some initial considerations for structural modeling in control applications. For Task 2, work has focused on experiment design and fabrication, along with sensor selection and initial digital controller implementation. Conclusions are given.
Miyake, Tetsuaki; McDermott, John C.; Gramolini, Anthony O.
2011-01-01
Identification of differentiating muscle cells generally requires fixation, antibodies directed against muscle specific proteins, and lengthy staining processes or, alternatively, transfection of muscle specific reporter genes driving GFP expression. In this study, we examined the possibility of using the robust mitochondrial network seen in maturing muscle cells as a marker of cellular differentiation. The mitochondrial fluorescent tracking dye, MitoTracker, which is a cell-permeable, low toxicity, fluorescent dye, allowed us to distinguish and track living differentiating muscle cells visually by epi-fluorescence microscopy. MitoTracker staining provides a robust and simple detection strategy for living differentiating cells in culture without the need for fixation or biochemical processing. PMID:22174849
High performance data acquisition, identification, and monitoring for active magnetic bearings
NASA Technical Reports Server (NTRS)
Herzog, Raoul; Siegwart, Roland
1994-01-01
Future active magnetic bearing systems (AMB) must feature easier on-site tuning, higher stiffness and damping, better robustness with respect to undesirable vibrations in housing and foundation, and enhanced monitoring and identification abilities. To get closer to these goals we developed a fast parallel link from the digitally controlled AMB to Matlab, which is used on a host computer for data processing, identification, and controller layout. This enables the magnetic bearing to take its frequency responses without using any additional measurement equipment. These measurements can be used for AMB identification.
The Effects of Self-Disclosure on Male and Female Perceptions of Individuals Who Stutter.
Byrd, Courtney T; McGill, Megann; Gkalitsiou, Zoi; Cappellini, Colleen
2017-02-01
The purpose of this study was to examine the influence of self-disclosure on observers' perceptions of persons who stutter. Participants (N = 173) were randomly assigned to view 2 of 4 possible videos (i.e., male self-disclosure, male no self-disclosure, female self-disclosure, and female no self-disclosure). After viewing both videos, participants completed a survey assessing their perceptions of the speakers. Controlling for observer and speaker gender, listeners were more likely to select speakers who self-disclosed their stuttering as more friendly, outgoing, and confident compared with speakers who did not self-disclose. Observers were more likely to select speakers who did not self-disclose as unfriendly and shy compared with speakers who used a self-disclosure statement. Controlling for self-disclosure and observer gender, observers were less likely to choose the female speaker as friendlier, outgoing, and confident compared with the male speaker. Observers also were more likely to select the female speaker as unfriendly, shy, unintelligent, and insecure compared with the male speaker and were more likely to report that they were more distracted when viewing the videos. Results lend support to the effectiveness of self-disclosure as a technique that persons who stutter can use to positively influence the perceptions of listeners.
Law, Sam-Po; Chak, Gigi Wan-Chi
2017-01-01
Purpose Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Method Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Results Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. Conclusions The current results supported the sketch model of language–gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed. PMID:28609510
Kaland, Constantijn; Swerts, Marc; Krahmer, Emiel
2013-09-01
The present research investigates what drives the prosodic marking of contrastive information. For example, a typically developing speaker of a Germanic language like Dutch generally refers to a pink car as a "PINK car" (accented words in capitals) when a previously mentioned car was red. The main question addressed in this paper is whether contrastive intonation is produced with respect to the speaker's or (also) the listener's perspective on the preceding discourse. Furthermore, this research investigates the production of contrastive intonation by typically developing speakers and speakers with autism. The latter group is investigated because people with autism are argued to have difficulties accounting for another person's mental state and exhibit difficulties in the production and perception of accentuation and pitch range. To this end, utterances with contrastive intonation are elicited from both groups and analyzed in terms of function and form of prosody using production and perception measures. Contrary to expectations, typically developing speakers and speakers with autism produce functionally similar contrastive intonation as both groups account for both their own and their listener's perspective. However, typically developing speakers use a larger pitch range and are perceived as speaking more dynamically than speakers with autism, suggesting differences in their use of prosodic form.
Kong, Anthony Pak-Hin; Law, Sam-Po; Chak, Gigi Wan-Chi
2017-07-12
Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. The current results supported the sketch model of language-gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed.
Alpermann, Anke; Huber, Walter; Natke, Ulrich; Willmes, Klaus
2010-09-01
Improved fluency after stuttering therapy is usually measured by the percentage of stuttered syllables. However, outcome studies rarely evaluate the use of trained speech patterns that speakers use to manage stuttering. This study investigated whether the modified time interval analysis can distinguish between trained speech patterns, fluent speech, and stuttered speech. Seventeen German experts on stuttering judged a speech sample on two occasions. Speakers of the sample were stuttering adults, who were not undergoing therapy, as well as participants in a fluency shaping and a stuttering modification therapy. Results showed satisfactory inter-judge and intra-judge agreement above 80%. Intervals with trained speech patterns were identified as consistently as stuttered and fluent intervals. We discuss limitations of the study, as well as implications of our findings for the development of training for identification of trained speech patterns and future outcome studies. The reader will be able to (a) explain different methods to measure the use of trained speech patterns, (b) evaluate whether German experts are able to discriminate intervals with trained speech patterns reliably from fluent and stuttered intervals and (c) describe how the measurement of trained speech patterns can contribute to outcome studies.
The human genome: Some assembly required. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1994-12-31
The Human Genome Project promises to be one of the most rewarding endeavors in modern biology. The cost and the ethical and social implications, however, have made this project the source of considerable debate both in the scientific community and in the public at large. The 1994 Graduate Student Symposium addresses the scientific merits of the project, the technical issues involved in accomplishing the task, as well as the medical and social issues which stem from the wealth of knowledge which the Human Genome Project will help create. To this end, speakers were brought together who represent the diverse areas of expertise characteristic of this multidisciplinary project. The keynote speaker addresses the project's motivations and goals in the larger context of biological and medical sciences. The first two sessions address relevant technical issues, data collection with a focus on high-throughput sequencing methods and data analysis with an emphasis on identification of coding sequences. The third session explores recent advances in the understanding of genetic diseases and possible routes to treatment. Finally, the last session addresses some of the ethical, social and legal issues which will undoubtedly arise from having a detailed knowledge of the human genome.
Sentence durations and accentedness judgments
NASA Astrophysics Data System (ADS)
Bond, Z. S.; Stockmal, Verna; Markus, Dace
2003-04-01
Talkers in a second language can frequently be identified as speaking with a foreign accent. It is not clear to what degree a foreign accent represents specific deviations from a target language versus more general characteristics. We examined the identification of native and non-native talkers by listeners with various amounts of knowledge of the target language. Native and non-native speakers of Latvian provided materials. All the non-native talkers spoke Russian as their first language and were long-term residents of Latvia. A listening test, containing sentences excerpted from a short recorded passage, was presented to three groups of listeners: native speakers of Latvian, Russians for whom Latvian was a second language, and Americans with no knowledge of either of the two languages. The listeners were asked to judge whether each utterance was produced by a native or non-native talker. The Latvians identified the non-native talkers very accurately, 88%. The Russians were somewhat less accurate, 83%. The American listeners were least accurate, but still identified the non-native talkers at above chance levels, 62%. Sentence durations correlated with the judgments provided by the American listeners but not with the judgments provided by native or L2 listeners.
Design and implementation of robust controllers for a gait trainer.
Wang, F C; Yu, C H; Chou, T Y
2009-08-01
This paper applies robust algorithms to control an active gait trainer for children with walking disabilities. Compared with traditional rehabilitation procedures, in which two or three trainers are required to assist the patient, a motor-driven mechanism was constructed to improve the efficiency of the procedures. First, a six-bar mechanism was designed and constructed to mimic the trajectory of children's ankles in walking. Second, system identification techniques were applied to obtain system transfer functions at different operating points through experiments. Third, robust control algorithms were used to design H-infinity robust controllers for the system. Finally, the designed controllers were implemented to experimentally verify system performance. From the results, the proposed robust control strategies are shown to be effective.
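The study's own models and synthesis tools are not given in this record, so the sketch below illustrates only the identification step: fitting a discrete-time ARX model to input/output data at one operating point by ordinary least squares. The signal names, model orders, and simulated data are assumptions for illustration; an H-infinity controller would then be synthesized against such identified models, which is beyond this sketch.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX model
    y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        row = [-y[k - i] for i in range(1, na + 1)] + \
              [u[k - i] for i in range(1, nb + 1)]
        rows.append(row)
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta[:na], theta[na:]          # denominator and numerator coefficients

# Hypothetical motor-command / ankle-position data from one operating point.
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1] + 0.2 * u[k - 2]

a, b = fit_arx(u, y)
print("estimated a:", a, "estimated b:", b)
```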
How Do Speakers Avoid Ambiguous Linguistic Expressions?
ERIC Educational Resources Information Center
Ferreira, V.S.; Slevc, L.R.; Rogers, E.S.
2005-01-01
Three experiments assessed how speakers avoid linguistically and nonlinguistically ambiguous expressions. Speakers described target objects (a flying mammal, bat) in contexts including foil objects that caused linguistic (a baseball bat) and nonlinguistic (a larger flying mammal) ambiguity. Speakers sometimes avoided linguistic ambiguity, and they…
Zhang, Juan; Meng, Yaxuan; McBride, Catherine; Fan, Xitao; Yuan, Zhen
2018-01-01
The present study investigated the impact of Chinese dialects on McGurk effect using behavioral and event-related potential (ERP) methodologies. Specifically, intra-language comparison of McGurk effect was conducted between Mandarin and Cantonese speakers. The behavioral results showed that Cantonese speakers exhibited a stronger McGurk effect in audiovisual speech perception compared to Mandarin speakers, although both groups performed equally in the auditory and visual conditions. ERP results revealed that Cantonese speakers were more sensitive to visual cues than Mandarin speakers, though this was not the case for the auditory cues. Taken together, the current findings suggest that the McGurk effect generated by Chinese speakers is mainly influenced by segmental phonology during audiovisual speech integration.
Can you hear my age? Influences of speech rate and speech spontaneity on estimation of speaker age
Skoog Waller, Sara; Eriksson, Mårten; Sörqvist, Patrik
2015-01-01
Cognitive hearing science is mainly about the study of how cognitive factors contribute to speech comprehension, but cognitive factors also partake in speech processing to infer non-linguistic information from speech signals, such as the intentions of the talker and the speaker’s age. Here, we report two experiments on age estimation by “naïve” listeners. The aim was to study how speech rate influences estimation of speaker age by comparing the speakers’ natural speech rate with increased or decreased speech rate. In Experiment 1, listeners were presented with audio samples of read speech from three different speaker age groups (young, middle aged, and old adults). They estimated the speakers as younger when speech rate was faster than normal and as older when speech rate was slower than normal. This speech rate effect was slightly greater in magnitude for older (60–65 years) speakers in comparison with younger (20–25 years) speakers, suggesting that speech rate may gain greater importance as a perceptual age cue with increased speaker age. This pattern was more pronounced in Experiment 2, in which listeners estimated age from spontaneous speech. Faster speech rate was associated with lower age estimates, but only for older and middle aged (40–45 years) speakers. Taken together, speakers of all age groups were estimated as older when speech rate decreased, except for the youngest speakers in Experiment 2. The absence of a linear speech rate effect in estimates of younger speakers, for spontaneous speech, implies that listeners use different age estimation strategies or cues (possibly vocabulary) depending on the age of the speaker and the spontaneity of the speech. Potential implications for forensic investigations and other applied domains are discussed. PMID:26236259
Evitts, Paul; Gallop, Robert
2011-01-01
There is a large body of research demonstrating the impact of visual information on speaker intelligibility in both normal and disordered speaker populations. However, there is minimal information on which specific visual features listeners find salient during conversational discourse. To investigate listeners' eye-gaze behaviour during face-to-face conversation with normal, laryngeal and proficient alaryngeal speakers. Sixty participants individually participated in a 10-min conversation with one of four speakers (typical laryngeal, tracheoesophageal, oesophageal, electrolaryngeal; 15 participants randomly assigned to one mode of speech). All speakers were > 85% intelligible and were judged to be 'proficient' by two certified speech-language pathologists. Participants were fitted with a head-mounted eye-gaze tracking device (Mobile Eye, ASL) that calculated the region of interest and mean duration of eye-gaze. Self-reported gaze behaviour was also obtained following the conversation using a 10 cm visual analogue scale. While listening, participants viewed the lower facial region of the oesophageal speaker more than the normal or tracheoesophageal speaker. Results of non-hierarchical cluster analyses showed that while listening, the pattern of eye-gaze was predominantly directed at the lower face of the oesophageal and electrolaryngeal speaker and more evenly dispersed among the background, lower face, and eyes of the normal and tracheoesophageal speakers. Finally, results show a low correlation between self-reported eye-gaze behaviour and objective regions of interest data. Overall, results suggest similar eye-gaze behaviour when healthy controls converse with normal and tracheoesophageal speakers and that participants had significantly different eye-gaze patterns when conversing with an oesophageal speaker. Results are discussed in terms of existing eye-gaze data and its potential implications on auditory-visual speech perception. © 2011 Royal College of Speech & Language Therapists.
Reilly, Kevin J.; Spencer, Kristie A.
2013-01-01
The current study investigated the processes responsible for selection of sounds and syllables during production of speech sequences in 10 adults with hypokinetic dysarthria from Parkinson's disease, five adults with ataxic dysarthria, and 14 healthy control speakers. Speech production data from a choice reaction time task were analyzed to evaluate the effects of sequence length and practice on speech sound sequencing. Speakers produced sequences that were between one and five syllables in length over five experimental runs of 60 trials each. In contrast to the healthy speakers, speakers with hypokinetic dysarthria demonstrated exaggerated sequence length effects for both inter-syllable intervals (ISIs) and speech error rates. Conversely, speakers with ataxic dysarthria failed to demonstrate a sequence length effect on ISIs and were also the only group that did not exhibit practice-related changes in ISIs and speech error rates over the five experimental runs. The exaggerated sequence length effects in the hypokinetic speakers with Parkinson's disease are consistent with an impairment of action selection during speech sequence production. The absence of length effects in the speakers with ataxic dysarthria is consistent with previous findings that indicate a limited capacity to buffer speech sequences in advance of their execution. In addition, the lack of practice effects in these speakers suggests that learning-related improvements in the production rate and accuracy of speech sequences involve processing by structures of the cerebellum. Together, the current findings inform models of serial control for speech in healthy speakers and support the notion that sequencing deficits contribute to speech symptoms in speakers with hypokinetic or ataxic dysarthria. In addition, these findings indicate that speech sequencing is differentially impaired in hypokinetic and ataxic dysarthria. PMID:24137121
Structured Uncertainty Bound Determination From Data for Control and Performance Validation
NASA Technical Reports Server (NTRS)
Lim, Kyong B.
2003-01-01
This report attempts to document the broad scope of issues that must be satisfactorily resolved before one can expect to methodically obtain, with reasonable confidence, near-optimal robust closed-loop performance in physical applications. These include elements of signal processing, noise identification, system identification, model validation, and uncertainty modeling. Based on a recently developed methodology involving a parameterization of all model-validating uncertainty sets for a given linear fractional transformation (LFT) structure and noise allowance, a new software package, the Uncertainty Bound Identification (UBID) toolbox, which conveniently executes model validation tests and determines uncertainty bounds from data, has been designed and is currently available. This toolbox also serves to benchmark the current state of the art in uncertainty bound determination and in turn facilitates benchmarking of robust control technology. To help clarify the methodology and use of the new software, two tutorial examples are provided. The first involves the uncertainty characterization of flexible structure dynamics, and the second involves a closed-loop performance validation of a ducted fan based on an uncertainty bound from data. These examples, along with other simulation and experimental results, also help describe the many factors and assumptions that determine the degree of success in applying robust control theory to practical problems.
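The report's LFT-structured parameterization and the UBID toolbox itself are not reproduced here. Purely as a sketch of the underlying idea of deriving an uncertainty bound from data relative to a nominal model, the example below computes a frequency-wise multiplicative uncertainty bound from a set of measured frequency responses. The nominal plant, the perturbed "experiments", and the bound definition are assumptions, not the report's method.

```python
import numpy as np

def multiplicative_bound(G_nominal, G_measured):
    """Frequency-wise bound: max over experiments of |G_i(jw) - G0(jw)| / |G0(jw)|.

    G_nominal:  complex array, shape (n_freq,)
    G_measured: complex array, shape (n_experiments, n_freq)
    """
    rel_err = np.abs(G_measured - G_nominal) / np.abs(G_nominal)
    return rel_err.max(axis=0)

# Hypothetical data: a nominal second-order mode and a few perturbed measurements.
w = np.logspace(-1, 2, 200)
s = 1j * w
G0 = 1.0 / (s**2 + 0.2 * s + 4.0)
experiments = np.array([1.0 / (s**2 + d * s + k)
                        for d, k in [(0.15, 3.8), (0.25, 4.2), (0.2, 4.1)]])

bound = multiplicative_bound(G0, experiments)
print("peak multiplicative uncertainty:", bound.max())
```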
Fractal dimension based damage identification incorporating multi-task sparse Bayesian learning
NASA Astrophysics Data System (ADS)
Huang, Yong; Li, Hui; Wu, Stephen; Yang, Yongchao
2018-07-01
Sensitivity to damage and robustness to noise are critical requirements for the effectiveness of structural damage detection. In this study, a two-stage damage identification method based on fractal dimension analysis and multi-task Bayesian learning is presented. A Higuchi's fractal dimension (HFD) based damage index is first proposed, directly examining the time-frequency characteristics of local free-vibration data of structures, based on an analysis of the irregularity sensitivity and noise robustness of HFD. Katz's fractal dimension is then used to analyze the abrupt irregularity change of the spatial curve of the displacement mode shape along the structure. At the second stage, the multi-task sparse Bayesian learning technique is employed to infer the final damage localization vector; it borrows strength across the two fractal-dimension-based damage indicators and also incorporates the prior knowledge that, short of collapse, structural damage occurs at a limited number of locations in a structure. To validate the capability of the proposed method, a steel beam and a bridge, the Yonghe Bridge, are analyzed as illustrative examples. The damage identification results demonstrate that the proposed method is capable of localizing single and multiple damage sites regardless of severity, and shows superior robustness under heavy noise as well.
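The first-stage damage index builds on Higuchi's fractal dimension (HFD) of local free-vibration signals. The paper's exact index is not specified in this record, so the sketch below is a generic HFD implementation in the standard formulation; the choice of kmax and the test signals are arbitrary.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi's fractal dimension of a 1-D signal (standard formulation)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_k, log_L = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)              # subsampled series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            diff = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)  # normalization for unequal lengths
            lengths.append(diff * norm / k)
        log_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_k, log_L, 1)         # HFD is the slope of log L(k) vs log(1/k)
    return slope

# White noise should give an HFD near 2, a smooth sinusoid an HFD near 1.
rng = np.random.default_rng(1)
print(higuchi_fd(rng.standard_normal(2000)))
print(higuchi_fd(np.sin(np.linspace(0, 20 * np.pi, 2000))))
```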
Discourse comprehension in L2: Making sense of what is not explicitly said.
Foucart, Alice; Romero-Rivas, Carlos; Gort, Bernharda Lottie; Costa, Albert
2016-12-01
Using ERPs, we tested whether L2 speakers can integrate multiple sources of information (e.g., semantic, pragmatic information) during discourse comprehension. We presented native speakers and L2 speakers with three-sentence scenarios in which the final sentence was highly causally related, intermediately related, or causally unrelated to its context; its interpretation therefore required simple or complex inferences. Native speakers revealed a gradual N400-like effect, larger in the causally unrelated condition than in the highly related condition, and falling in between in the intermediately related condition, replicating previous results. In the crucial intermediately related condition, L2 speakers behaved like native speakers, although they showed additional processing in a later time window. Overall, the results show that, when reading, L2 speakers are able to process information from the local context and prior information (e.g., world knowledge) to build global coherence, suggesting that they process different sources of information to make inferences online during discourse comprehension, like native speakers. Copyright © 2016 Elsevier Inc. All rights reserved.
Gender differences in identifying emotions from auditory and visual stimuli.
Waaramaa, Teija
2017-12-01
The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to get a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.
Integrated structural control design of large space structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, J.J.; Lauffer, J.P.
1995-01-01
Active control of structures has been under intensive development for the last ten years. Reference 2 reviews much of the identification and control technology for structural control developed during this time. The technology was initially focused on space structure and weapon applications; however, recently the technology is also being directed toward applications in manufacturing and transportation. Much of this technology focused on multiple-input/multiple-output (MIMO) identification and control methodology because many of the applications require coordinated control involving multiple disturbances and control objectives, where multiple actuators and sensors are necessary for high performance. There have been many optimal robust control methods developed for the design of MIMO robust control laws; however, there appears to be a significant gap between the theoretical development and experimental evaluation of control and identification methods to address structural control applications. Many methods have been developed for MIMO identification and control of structures, such as the Eigensystem Realization Algorithm (ERA) and Q-Markov Covariance Equivalent Realization (Q-Markov COVER) for identification, and Linear Quadratic Gaussian (LQG), Frequency Weighted LQG and H-infinity/mu-synthesis methods for control. Upon implementation, many of the identification and control methods have shown limitations such as the excitation of unmodelled dynamics and sensitivity to system parameter variations. As a result, research on methods which address these problems has been conducted.
A robust star identification algorithm with star shortlisting
NASA Astrophysics Data System (ADS)
Mehta, Deval Samirbhai; Chen, Shoushun; Low, Kay Soon
2018-05-01
A star tracker provides the most accurate attitude solution in terms of arc seconds compared to the other existing attitude sensors. When no prior attitude information is available, it operates in "Lost-In-Space (LIS)" mode. Star pattern recognition, also known as star identification algorithm, forms the most crucial part of a star tracker in the LIS mode. Recognition reliability and speed are the two most important parameters of a star pattern recognition technique. In this paper, a novel star identification algorithm with star ID shortlisting is proposed. Firstly, the star IDs are shortlisted based on worst-case patch mismatch, and later stars are identified in the image by an initial match confirmed with a running sequential angular match technique. The proposed idea is tested on 16,200 simulated star images having magnitude uncertainty, noise stars, positional deviation, and varying size of the field of view. The proposed idea is also benchmarked with the state-of-the-art star pattern recognition techniques. Finally, the real-time performance of the proposed technique is tested on the 3104 real star images captured by a star tracker SST-20S currently mounted on a satellite. The proposed technique can achieve an identification accuracy of 98% and takes only 8.2 ms for identification on real images. Simulation and real-time results depict that the proposed technique is highly robust and achieves a high speed of identification suitable for actual space applications.
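The shortlisting and running sequential angular match are only described at a high level in this record, so the sketch below illustrates just the elementary building block such methods rely on: comparing an observed inter-star angular separation against a catalogue of pre-computed pair separations. The catalogue contents, star IDs, and tolerance are invented for illustration and are not from the paper.

```python
import numpy as np

def angle_between(u, v):
    """Angular separation (radians) between two unit vectors."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def match_pair(obs_u, obs_v, catalogue, tol=1e-3):
    """Return catalogue star-ID pairs whose separation matches the observed one."""
    theta = angle_between(obs_u, obs_v)
    hits = []
    for (id_a, id_b), cat_theta in catalogue.items():
        if abs(cat_theta - theta) < tol:
            hits.append((id_a, id_b))
    return hits

# Hypothetical mini-catalogue of pre-computed pair separations (radians).
catalogue = {(101, 205): 0.1745, (101, 317): 0.0873, (205, 317): 0.2618}

# Two observed unit vectors in the tracker frame, about 10 degrees apart.
a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(np.radians(10)), 0.0, np.cos(np.radians(10))])
print(match_pair(a, b, catalogue))   # expected to hit the (101, 205) pair
```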
Meeting review: Bioinformatics and Medicine - from molecules to humans, virtual and real.
Russell, Roslin
2002-01-01
The Industrialization Workshop Series aims to promote and discuss integration, automation, simulation, quality, availability and standards in the high-throughput life sciences. The main issues addressed are the transformation of bioinformatics and bioinformatics-based drug design into a robust discipline in industry, government, research institutes and academia. The latest workshop emphasized the influence of the post-genomic era on medicine and healthcare with reference to advanced biological systems modeling and simulation, protein structure research, protein-protein interactions, metabolism and physiology. Speakers included Michael Ashburner, Kenneth Buetow, Francois Cambien, Cyrus Chothia, Jean Garnier, Francois Iris, Matthias Mann, Maya Natarajan, Peter Murray-Rust, Richard Mushlin, Barry Robson, David Rubin, Kosta Steliou, John Todd, Janet Thornton, Pim van der Eijk, Michael Vieth and Richard Ward.
Noise-immune multisensor transduction of speech
NASA Astrophysics Data System (ADS)
Viswanathan, Vishu R.; Henry, Claudia M.; Derr, Alan G.; Roucos, Salim; Schwartz, Richard M.
1986-08-01
Two types of configurations of multiple sensors were developed, tested and evaluated in a speech recognition application for robust performance in high levels of acoustic background noise: one type combines the individual sensor signals to provide a single speech signal input, and the other provides several parallel inputs. For single-input systems, several configurations of multiple sensors were developed and tested. Results from formal speech intelligibility and quality tests in simulated fighter aircraft cockpit noise show that each of the two-sensor configurations tested outperforms the constituent individual sensors in high noise. Also presented are results comparing the performance of two-sensor configurations and individual sensors in speaker-dependent, isolated-word speech recognition tests performed using a commercial recognizer (Verbex 4000) in simulated fighter aircraft cockpit noise.
Mühler, Roland; Ziese, Michael; Rostalski, Dorothea
2009-01-01
The purpose of the study was to develop a speaker discrimination test for cochlear implant (CI) users. The speech material was drawn from the Oldenburg Logatome (OLLO) corpus, which contains 150 different logatomes read by 40 German and 10 French native speakers. The prototype test battery included 120 logatome pairs spoken by 5 male and 5 female speakers with balanced representations of the conditions 'same speaker' and 'different speaker'. Ten adult normal-hearing listeners and 12 adult postlingually deafened CI users were included in a study to evaluate the suitability of the test. The mean speaker discrimination score for the CI users was 67.3% correct and for the normal-hearing listeners 92.2% correct. A significant influence of voice gender and fundamental frequency difference on the speaker discrimination score was found in CI users as well as in normal-hearing listeners. Since the test results of the CI users were significantly above chance level and no ceiling effect was observed, we conclude that subsets of the OLLO corpus are very well suited to speaker discrimination experiments in CI users. Copyright 2008 S. Karger AG, Basel.
Speaker Clustering for a Mixture of Singing and Reading (Preprint)
2012-03-01
Speaker diarization [2, 3], which answers the question of "who spoke when?", is a combination of speaker segmentation and clustering. Although this work focuses on speaker clustering, the techniques developed here can be applied to speaker diarization.
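Since this record only frames the problem, the example below shows one generic way to perform speaker clustering: agglomerative clustering of per-segment speaker embeddings under a cosine distance, a common ingredient of diarization systems. The embeddings, linkage method, and threshold are assumptions and are not taken from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical fixed-length embeddings for six speech segments (e.g., i-vectors).
rng = np.random.default_rng(2)
spk_a = rng.standard_normal(16)
spk_b = rng.standard_normal(16)
segments = np.vstack([spk_a + 0.1 * rng.standard_normal(16) for _ in range(3)] +
                     [spk_b + 0.1 * rng.standard_normal(16) for _ in range(3)])

# Agglomerative clustering on cosine distances; the stopping threshold would
# normally be tuned on development data.
dists = pdist(segments, metric="cosine")
tree = linkage(dists, method="average")
labels = fcluster(tree, t=0.5, criterion="distance")
print(labels)   # segments from the same underlying speaker should share a label
```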
ERIC Educational Resources Information Center
Bowers, Jeffrey S.; Davis, Colin J.; Mattys, Sven L.; Damian, Markus F.; Hanley, Derek
2009-01-01
Three picture-word interference (PWI) experiments assessed the extent to which embedded subset words are activated during the identification of spoken superset words (e.g., "bone" in "trombone"). Participants named aloud pictures (e.g., "brain") while spoken distractors were presented. In the critical condition,…
Multicultural issues in test interpretation.
Langdon, Henriette W; Wiig, Elisabeth H
2009-11-01
Designing the ideal test or series of tests to assess individuals who speak languages other than English is difficult. This article first describes some of the roadblocks, one of which is the lack of identification criteria for language and learning disabilities in monolingual and bilingual populations in most countries of the non-English-speaking world. This lag exists, in part, because access to general education is often limited. The second section describes tests that have been developed in the United States, primarily for Spanish-speaking individuals because they now represent the largest first-language majority in the United States (80% of English-language learners [ELLs] speak Spanish at home). We discuss tests developed for monolingual and bilingual English-Spanish speakers in the United States and divide this coverage into two parts: the first addresses assessment of students' first language (L1) and second language (L2), usually English, with different versions of the same test; the second describes assessment of L1 and L2 using the same version of the test, administered in the two languages. Examples of tests that fit a priori-determined criteria are briefly discussed throughout the article. Suggestions on how to develop tests for speakers of languages other than English are also provided. In conclusion, we maintain that there will never be a perfect test or set of tests to adequately assess the communication skills of a bilingual individual. This is not surprising because we have yet to develop an ideal test or set of tests that fits monolingual Anglo speakers perfectly. Tests are tools, and the speech-language pathologist needs to know how to use those tools most effectively and equitably. The goal of this article is to provide such guidance. Thieme Medical Publishers.
Do Listeners Store in Memory a Speaker's Habitual Utterance-Final Phonation Type?
Bőhm, Tamás; Shattuck-Hufnagel, Stefanie
2009-01-01
Earlier studies report systematic differences across speakers in the occurrence of utterance-final irregular phonation; the work reported here investigated whether human listeners remember this speaker-specific information and can access it when necessary (a prerequisite for using this cue in speaker recognition). Listeners personally familiar with the voices of the speakers were presented with pairs of speech samples: one with the original and the other with transformed final phonation type. Asked to select the member of the pair that was closer to the talker's voice, most listeners tended to choose the unmanipulated token (even though they judged them to sound essentially equally natural). This suggests that utterance-final pitch period irregularity is part of the mental representation of individual speaker voices, although this may depend on the individual speaker and listener to some extent. PMID:19776665
Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T
2015-01-01
MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, the Robust Selection Algorithm (RSA), that addresses these important problems in big data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-values computed through intensive random resampling that accounts for any non-normality in the data, and by integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with the miRNAs selected by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.
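The RSA implementation itself is not given in the abstract. As a much-simplified sketch of the two ingredients it names, an optimal expression cutoff and a resampling-based p-value, the example below scans quantile cutoffs and attaches a permutation p-value to the best split. It ignores censoring and uses a mean-survival difference instead of a proper log-rank statistic, so it should be read as an illustration only, not as the authors' method.

```python
import numpy as np

def best_cutoff_with_permutation_p(expr, survival, n_perm=2000, seed=0):
    """Scan expression cutoffs, score each split by the absolute difference in
    mean survival, and attach a permutation p-value to the best cutoff.

    Simplification: censoring is ignored and a mean-difference statistic is
    used instead of a log-rank test.
    """
    rng = np.random.default_rng(seed)
    cuts = np.quantile(expr, np.linspace(0.2, 0.8, 25))

    def score(times, cut):
        hi, lo = times[expr > cut], times[expr <= cut]
        return abs(hi.mean() - lo.mean())

    best = max(cuts, key=lambda c: score(survival, c))
    observed = score(survival, best)

    # Permutation p-value: reshuffle survival times and redo the whole scan.
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(survival)
        null.append(max(score(perm, c) for c in cuts))
    p = (np.sum(np.array(null) >= observed) + 1) / (n_perm + 1)
    return best, observed, p

# Hypothetical miRNA expression and survival times for 100 patients.
rng = np.random.default_rng(3)
expr = rng.normal(size=100)
survival = np.where(expr > 0.2, rng.exponential(20, 100), rng.exponential(60, 100))
print(best_cutoff_with_permutation_p(expr, survival))
```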
Learning Words from Speakers with False Beliefs
ERIC Educational Resources Information Center
Papafragou, Anna; Fairchild, Sarah; Cohen, Matthew L.; Friedberg, Carlyn
2017-01-01
During communication, hearers try to infer the speaker's intentions to be able to understand what the speaker means. Nevertheless, whether (and how early) preschoolers track their interlocutors' mental states is still a matter of debate. Furthermore, there is disagreement about how children's ability to consult a speaker's belief in communicative…
International Student Speaker Programs: "Someone from Another World."
ERIC Educational Resources Information Center
Wilson, Angene
This study surveyed members of the Association of International Educators and community volunteers to find out how international student speaker programs actually work. An international student speaker program provides speakers (from the university foreign student population) for community organizations and schools. The results of the survey (49…
Linguistic "Mudes" and the De-Ethnicization of Language Choice in Catalonia
ERIC Educational Resources Information Center
Pujolar, Joan; Gonzalez, Isaac
2013-01-01
Catalan speakers have traditionally constructed the Catalan language as the main emblem of their identity even as migration filled the country with substantial numbers of speakers of Castilian. Although Catalan speakers have been bilingual in Catalan and Castilian for generations, sociolinguistic research has shown how speakers' bilingual…
Embodied Communication: Speakers' Gestures Affect Listeners' Actions
ERIC Educational Resources Information Center
Cook, Susan Wagner; Tanenhaus, Michael K.
2009-01-01
We explored how speakers and listeners use hand gestures as a source of perceptual-motor information during naturalistic communication. After solving the Tower of Hanoi task either with real objects or on a computer, speakers explained the task to listeners. Speakers' hand gestures, but not their speech, reflected properties of the particular…
Speech Breathing in Speakers Who Use an Electrolarynx
ERIC Educational Resources Information Center
Bohnenkamp, Todd A.; Stowell, Talena; Hesse, Joy; Wright, Simon
2010-01-01
Speakers who use an electrolarynx following a total laryngectomy no longer require pulmonary support for speech. Subsequently, chest wall movements may be affected; however, chest wall movements in these speakers are not well defined. The purpose of this investigation was to evaluate speech breathing in speakers who use an electrolarynx during…
NASA Technical Reports Server (NTRS)
Woodard, Mark; Rohrbaugh, Dave
1995-01-01
The Advanced Composition Explorer (ACE) spacecraft is designed to fly in a spin-stabilized attitude. The spacecraft will carry two attitude sensors - a digital fine Sun sensor and a charge coupled device (CCD) star tracker - to allow ground-based determination of the spacecraft attitude and spin rate. Part of the processing that must be performed on the CCD star tracker data is the star identification. Star data received from the spacecraft must be matched with star information in the SKYMAP catalog to determine exactly which stars the sensor is tracking. This information, along with the Sun vector measured by the Sun sensor, is used to determine the spacecraft attitude. Several existing star identification (star ID) systems were examined to determine whether they could be modified for use on the ACE mission. Star ID systems which exist for three-axis stabilized spacecraft tend to be complex in nature and many require fairly good knowledge of the spacecraft attitude, making their use for ACE excessive. Star ID systems used for spinners carrying traditional slit star sensors would have to be modified to model the CCD star tracker. The ACE star ID algorithm must also be robust, in that it will be able to correctly identify stars even though the attitude is not known to a high degree of accuracy, and must be very efficient to allow real-time star identification. The paper presents the star ID algorithm that was developed for ACE. Results from prototype testing are also presented to demonstrate the efficiency, accuracy, and robustness of the algorithm.
Linear control of oscillator and amplifier flows
NASA Astrophysics Data System (ADS)
Schmid, Peter J.; Sipp, Denis
2016-08-01
Linear control applied to fluid systems near an equilibrium point has important applications for many flows of industrial or fundamental interest. In this article we give an exposition of tools and approaches for the design of control strategies for globally stable or unstable flows. For unstable oscillator flows a feedback configuration and a model-based approach is proposed, while for stable noise-amplifier flows a feedforward setup and an approach based on system identification is advocated. Model reduction and robustness issues are addressed for the oscillator case; statistical learning techniques are emphasized for the amplifier case. Effective suppression of global and convective instabilities could be demonstrated for either case, even though the system-identification approach results in a superior robustness to off-design conditions.
The speakers' bureau system: a form of peer selling.
Reid, Lynette; Herder, Matthew
2013-01-01
In the speakers' bureau system, physicians are recruited and trained by pharmaceutical, biotechnology, and medical device companies to deliver information about products to other physicians, in exchange for a fee. Using publicly available disclosures, we assessed the thesis that speakers' bureau involvement is not a feature of academic medicine in Canada, by estimating the prevalence of participation in speakers' bureaus among Canadian faculty in one medical specialty, cardiology. We analyzed the relevant features of an actual contract made public by the physician addressee and applied the Canadian Medical Association (CMA) guidelines on physician-industry relations to participation in a speakers' bureau. We argue that speakers' bureau participation constitutes a form of peer selling that should be understood to contravene the prohibition on product endorsement in the CMA Code of Ethics. Academic medical institutions, in conjunction with regulatory colleges, should continue and strengthen their policies to address participation in speakers' bureaus.
Identification of Terrestrial Reflectance From Remote Sensing
NASA Technical Reports Server (NTRS)
Alter-Gartenberg, Rachel; Nolf, Scott R.; Stacy, Kathryn (Technical Monitor)
2000-01-01
Correcting for atmospheric effects is an essential part of surface-reflectance recovery from radiance measurements. Model-based atmospheric correction techniques enable an accurate identification and classification of terrestrial reflectances from multi-spectral imagery. Successful and efficient removal of atmospheric effects from remote-sensing data is a key factor in the success of Earth observation missions. This report assesses the performance, robustness and sensitivity of two atmospheric-correction and reflectance-recovery techniques as part of an end-to-end simulation of hyper-spectral acquisition, identification and classification.
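The report's model-based correction techniques are not detailed in this summary. As a minimal illustration of what "removing atmospheric effects" means for a single band, the sketch below inverts a simplified flat-terrain, Lambertian radiative-transfer relation; the path radiance, transmittances, and irradiance values are placeholders, and operational processing would derive them from a radiative transfer model rather than treat them as constants.

```python
import numpy as np

def surface_reflectance(L_sensor, L_path, E_sun, theta_s_deg,
                        T_view=0.9, T_sun=0.9, E_down=0.0, d_au=1.0):
    """Simplified single-band atmospheric correction (flat, Lambertian surface):
    rho = pi * (L_sensor - L_path) * d^2 / (T_view * (E_sun * cos(theta_s) * T_sun + E_down)).
    Radiances in W m^-2 sr^-1 um^-1, irradiances in W m^-2 um^-1.
    """
    cos_t = np.cos(np.radians(theta_s_deg))
    return (np.pi * (L_sensor - L_path) * d_au**2 /
            (T_view * (E_sun * cos_t * T_sun + E_down)))

# Hypothetical numbers for one visible band.
print(surface_reflectance(L_sensor=80.0, L_path=15.0, E_sun=1850.0, theta_s_deg=35.0))
```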
Simultaneous Talk--From the Perspective of Floor Management of English and Japanese Speakers.
ERIC Educational Resources Information Center
Hayashi, Reiko
1988-01-01
Investigates simultaneous talk in face-to-face conversation using the analytic framework of "floor" proposed by Edelsky (1981). Analysis of taped conversation among speakers of Japanese and among speakers of English shows that, while both groups use simultaneous talk, it is used more frequently by Japanese speakers. A reference list…
Respiratory Control in Stuttering Speakers: Evidence from Respiratory High-Frequency Oscillations.
ERIC Educational Resources Information Center
Denny, Margaret; Smith, Anne
2000-01-01
This study examined whether stuttering speakers (N=10) differed from fluent speakers in relations between the neural control systems for speech and life support. It concluded that in some stuttering speakers the relations between respiratory controllers are atypical, but that high participation by the high frequency oscillation-producing circuitry…
The Effects of Source Unreliability on Prior and Future Word Learning
ERIC Educational Resources Information Center
Faught, Gayle G.; Leslie, Alicia D.; Scofield, Jason
2015-01-01
Young children regularly learn words from interactions with other speakers, though not all speakers are reliable informants. Interestingly, children will reverse to trusting a reliable speaker when a previously endorsed speaker proves unreliable. When later asked to identify the referent of a novel word, children who reverse trust are less willing…
ERIC Educational Resources Information Center
Binder, Richard
The thesis of this paper is that the "do so" test described by Lakoff and Ross (1966) is a test of the speaker's belief system regarding the relationship of verbs to their surface subject, and that judgments of grammaticality concerning "do so" are based on the speaker's underlying semantic beliefs. ("Speaker" refers here to both speakers and…
Speaker Reliability Guides Children's Inductive Inferences about Novel Properties
ERIC Educational Resources Information Center
Kim, Sunae; Kalish, Charles W.; Harris, Paul L.
2012-01-01
Prior work shows that children can make inductive inferences about objects based on their labels rather than their appearance (Gelman, 2003). A separate line of research shows that children's trust in a speaker's label is selective. Children accept labels from a reliable speaker over an unreliable speaker (e.g., Koenig & Harris, 2005). In the…
Native-Speakerism and the Complexity of Personal Experience: A Duoethnographic Study
ERIC Educational Resources Information Center
Lowe, Robert J.; Kiczkowiak, Marek
2016-01-01
This paper presents a duoethnographic study into the effects of native-speakerism on the professional lives of two English language teachers, one "native", and one "non-native speaker" of English. The goal of the study was to build on and extend existing research on the topic of native-speakerism by investigating, through…
Research Timeline: Second Language Communication Strategies
ERIC Educational Resources Information Center
Kennedy, Sara; Trofimovich, Pavel
2016-01-01
Speakers of a second language (L2), regardless of proficiency level, communicate for specific purposes. For example, an L2 speaker of English may wish to build rapport with a co-worker by chatting about the weather. The speaker will draw on various resources to accomplish her communicative purposes. For instance, the speaker may say "falling…
Word Stress and Pronunciation Teaching in English as a Lingua Franca Contexts
ERIC Educational Resources Information Center
Lewis, Christine; Deterding, David
2018-01-01
Traditionally, pronunciation was taught by reference to native-speaker models. However, as speakers around the world increasingly interact in English as a lingua franca (ELF) contexts, there is less focus on native-speaker targets, and there is wide acceptance that achieving intelligibility is crucial while mimicking native-speaker pronunciation…
Defining "Native Speaker" in Multilingual Settings: English as a Native Language in Asia
ERIC Educational Resources Information Center
Hansen Edwards, Jette G.
2017-01-01
The current study examines how and why speakers of English from multilingual contexts in Asia are identifying as native speakers of English. Eighteen participants from different contexts in Asia, including Singapore, Malaysia, India, Taiwan, and The Philippines, who self-identified as native speakers of English participated in hour-long interviews…
Speaker Identity Supports Phonetic Category Learning
ERIC Educational Resources Information Center
Mani, Nivedita; Schneider, Signe
2013-01-01
Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech--it also provides a visual indexical cue of the identity of the speaker. The…
The Interpretability Hypothesis: Evidence from Wh-Interrogatives in Second Language Acquisition
ERIC Educational Resources Information Center
Tsimpli, Ianthi Maria; Dimitrakopoulou, Maria
2007-01-01
The second language acquisition (SLA) literature reports numerous studies of proficient second language (L2) speakers who diverge significantly from native speakers despite the evidence offered by the L2 input. Recent SLA theories have attempted to account for native speaker/non-native speaker (NS/NNS) divergence by arguing for the dissociation…
NASA Astrophysics Data System (ADS)
Smith, David R. R.; Patterson, Roy D.
2005-11-01
Glottal-pulse rate (GPR) and vocal-tract length (VTL) are related to the size, sex, and age of the speaker but it is not clear how the two factors combine to influence our perception of speaker size, sex, and age. This paper describes experiments designed to measure the effect of the interaction of GPR and VTL upon judgements of speaker size, sex, and age. Vowels were scaled to represent people with a wide range of GPRs and VTLs, including many well beyond the normal range of the population, and listeners were asked to judge the size and sex/age of the speaker. The judgements of speaker size show that VTL has a strong influence upon perceived speaker size. The results for the sex and age categorization (man, woman, boy, or girl) show that, for vowels with GPR and VTL values in the normal range, judgements of speaker sex and age are influenced about equally by GPR and VTL. For vowels with abnormal combinations of low GPRs and short VTLs, the VTL information appears to decide the sex/age judgement.
A robust set of black walnut microsatellites for parentage and clonal identification
Rodney L. Robichaud; Jeffrey C. Glaubitz; Olin E. Rhodes; Keith Woeste
2006-01-01
We describe the development of a robust and powerful suite of 12 microsatellite marker loci for use in genetic investigations of black walnut and related species. These 12 loci were chosen from a set of 17 candidate loci used to genotype 222 trees sampled from a 38-year-old black walnut progeny test. The 222 genotypes represent a sampling from the broad geographic...
Oliveira Barrichelo, V M; Heuer, R J; Dean, C M; Sataloff, R T
2001-09-01
Many studies have described and analyzed the singer's formant. A similar phenomenon produced by trained speakers led some authors to examine the speaker's ring. If we consider these phenomena as resonance effects associated with vocal tract adjustments and training, can we hypothesize that trained singers can carry over their singing formant ability into speech, also obtaining a speaker's ring? Can we find similar differences for energy distribution in continuous speech? Forty classically trained singers and forty untrained normal speakers performed an all-voiced reading task and produced a sample of a sustained spoken vowel /a/. The singers were also requested to perform a sustained sung vowel /a/ at a comfortable pitch. The reading was analyzed by the long-term average spectrum (LTAS) method. The sustained vowels were analyzed through power spectrum analysis. The data suggest that singers show more energy concentration in the singer's formant/speaker's ring region in both sung and spoken vowels. The singers' spoken vowel energy in the speaker's ring area was found to be significantly larger than that of the untrained speakers. The LTAS showed similar findings suggesting that those differences also occur in continuous speech. This finding supports the value of further research on the effect of singing training on the resonance of the speaking voice.
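As a rough illustration of the kind of measurement described, long-term average spectrum analysis with attention to the singer's formant/speaker's ring region, the sketch below estimates an LTAS with Welch's method and reports the energy in an assumed 2-4 kHz "ring" band relative to the band below it. The band edges, frame settings, and synthetic signals are assumptions, not the study's procedure.

```python
import numpy as np
from scipy.signal import welch

def ring_energy_ratio(signal, fs, band=(2000, 4000), lo_band=(0, 2000)):
    """Long-term average spectrum via Welch's method, then the ratio (in dB) of
    energy in the assumed speaker's-ring region to the energy below it."""
    f, pxx = welch(signal, fs=fs, nperseg=2048)
    ring = pxx[(f >= band[0]) & (f < band[1])].sum()
    low = pxx[(f >= lo_band[0]) & (f < lo_band[1])].sum()
    return 10 * np.log10(ring / low)    # higher = relatively more ring energy

# Hypothetical example: a synthetic "voiced" signal with and without a 3 kHz resonance.
fs = 16000
t = np.arange(fs * 2) / fs
base = np.sum([np.sin(2 * np.pi * 200 * k * t) / k for k in range(1, 9)], axis=0)
ringy = base + 0.5 * np.sin(2 * np.pi * 3000 * t)
print(ring_energy_ratio(base, fs), ring_energy_ratio(ringy, fs))
```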
Talker and accent variability effects on spoken word recognition
NASA Astrophysics Data System (ADS)
Nyang, Edna E.; Rogers, Catherine L.; Nishi, Kanae
2003-04-01
A number of studies have shown that words in a list are recognized less accurately in noise and with longer response latencies when they are spoken by multiple talkers, rather than a single talker. These results have been interpreted as support for an exemplar-based model of speech perception, in which it is assumed that detailed information regarding the speaker's voice is preserved in memory and used in recognition, rather than being eliminated via normalization. In the present study, the effects of varying both accent and talker are investigated using lists of words spoken by (a) a single native English speaker, (b) six native English speakers, (c) three native English speakers and three Japanese-accented English speakers. Twelve /hVd/ words were mixed with multi-speaker babble at three signal-to-noise ratios (+10, +5, and 0 dB) to create the word lists. Native English-speaking listeners' percent-correct recognition for words produced by native English speakers across the three talker conditions (single talker native, multi-talker native, and multi-talker mixed native and non-native) and three signal-to-noise ratios will be compared to determine whether sources of speaker variability other than voice alone add to the processing demands imposed by simple (i.e., single accent) speaker variability in spoken word recognition.
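The stimulus preparation step, mixing each word token with multi-speaker babble at +10, +5, and 0 dB SNR, can be sketched directly; the example below scales a noise segment so the mixture reaches a requested SNR. The synthetic "word" and "babble" signals are placeholders, not the study's materials.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the mixture has the requested speech-to-noise ratio (dB)."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

# Hypothetical word token and multi-speaker babble, mixed at +10, +5, and 0 dB.
rng = np.random.default_rng(4)
word = np.sin(2 * np.pi * 150 * np.arange(8000) / 16000)
babble = rng.standard_normal(8000)
for snr in (10, 5, 0):
    mixed = mix_at_snr(word, babble, snr)
    achieved = 10 * np.log10(np.mean(word ** 2) / np.mean((mixed - word) ** 2))
    print(snr, round(achieved, 2))
```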
Sulpizio, Simone; Fasoli, Fabio; Maass, Anne; Paladino, Maria Paola; Vespignani, Francesco; Eyssel, Friederike; Bentler, Dominik
2015-01-01
Empirical research had initially shown that English listeners are able to identify the speakers' sexual orientation based on voice cues alone. However, the accuracy of this voice-based categorization, as well as its generalizability to other languages (language-dependency) and to non-native speakers (language-specificity), has been questioned recently. Consequently, we address these open issues in 5 experiments: First, we tested whether Italian and German listeners are able to correctly identify sexual orientation of same-language male speakers. Then, participants of both nationalities listened to voice samples and rated the sexual orientation of both Italian and German male speakers. We found that listeners were unable to identify the speakers' sexual orientation correctly. However, speakers were consistently categorized as either heterosexual or gay on the basis of how they sounded. Moreover, a similar pattern of results emerged when listeners judged the sexual orientation of speakers of their own and of the foreign language. Overall, this research suggests that voice-based categorization of sexual orientation reflects the listeners' expectations of how gay voices sound rather than being an accurate detector of the speakers' actual sexual identity. Results are discussed with regard to accuracy, acoustic features of voices, language dependency and language specificity.
Speaker and Observer Perceptions of Physical Tension during Stuttering.
Tichenor, Seth; Leslie, Paula; Shaiman, Susan; Yaruss, J Scott
2017-01-01
Speech-language pathologists routinely assess physical tension during evaluation of those who stutter. If speakers experience tension that is not visible to clinicians, then judgments of severity may be inaccurate. This study addressed this potential discrepancy by comparing judgments of tension by people who stutter and expert clinicians to determine if clinicians could accurately identify the speakers' experience of physical tension. Ten adults who stutter were audio-video recorded in two speaking samples. Two board-certified specialists in fluency evaluated the samples using the Stuttering Severity Instrument-4 and a checklist adapted for this study. Speakers rated their tension using the same forms, and then discussed their experiences in a qualitative interview so that themes related to physical tension could be identified. The degree of tension reported by speakers was higher than that observed by specialists. Tension in parts of the body that were less visible to the observer (chest, abdomen, throat) was reported more by speakers than by specialists. The thematic analysis revealed that speakers' experience of tension changes over time and that these changes may be related to speakers' acceptance of stuttering. The lack of agreement between speaker and specialist perceptions of tension suggests that using self-reports is a necessary component for supporting the accurate diagnosis of tension in stuttering. © 2018 S. Karger AG, Basel.
Speech Prosody Across Stimulus Types for Individuals with Parkinson's Disease.
K-Y Ma, Joan; Schneider, Christine B; Hoffmann, Rüdiger; Storch, Alexander
2015-01-01
Up to 89% of individuals with Parkinson's disease (PD) experience speech problems over the course of the disease. Speech prosody and intelligibility are two of the most affected areas in hypokinetic dysarthria. However, assessment of these areas can be problematic because speech prosody and intelligibility may be affected by the type of speech materials employed. The aim was to comparatively explore the effects of different types of speech stimulus on speech prosody and intelligibility in PD speakers. Speech prosody and intelligibility of two groups of individuals with varying degrees of dysarthria resulting from PD were compared to those of a group of control speakers using sentence reading, passage reading and monologue. Acoustic analysis including measures of fundamental frequency (F0), intensity and speech rate was used to form a prosodic profile for each individual. Speech intelligibility was measured for the speakers with dysarthria using direct magnitude estimation. A difference in F0 variability between the speakers with dysarthria and the control speakers was only observed in the sentence reading task. A difference in average intensity level relative to the control speakers was observed for speakers with mild dysarthria. Additionally, there was a stimulus effect on both intelligibility and the prosodic profile. The prosodic profile of PD speakers differed from that of the control speakers in the more structured tasks, and lower intelligibility was found in the less structured task. This highlights the value of both structured and natural stimuli for evaluating speech production in PD speakers.
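A prosodic profile of the kind described (F0, intensity, and their variability per speaker) can be approximated from audio with very simple signal processing. The sketch below uses a crude frame-wise autocorrelation F0 estimate and RMS intensity; a real study would use a validated pitch tracker and voicing detection, so the frame settings, pitch range, and synthetic signal here are assumptions for illustration only.

```python
import numpy as np

def frame_f0_autocorr(frame, fs, fmin=75, fmax=300):
    """Crude autocorrelation F0 estimate (Hz) for one voiced frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

def prosodic_profile(signal, fs, frame_len=0.04, hop=0.02):
    """Per-frame F0 and intensity, summarized as means and F0 variability."""
    n, h = int(frame_len * fs), int(hop * fs)
    f0s, dbs = [], []
    for start in range(0, len(signal) - n, h):
        frame = signal[start:start + n]
        rms = np.sqrt(np.mean(frame ** 2))
        dbs.append(20 * np.log10(rms + 1e-12))
        f0s.append(frame_f0_autocorr(frame, fs))
    f0s = np.array(f0s)
    return {"mean_f0": float(f0s.mean()),
            "f0_sd_semitones": float((12 * np.log2(f0s / f0s.mean())).std()),
            "mean_intensity_db": float(np.mean(dbs))}

# Hypothetical sustained vowel-like signal with a slow F0 glide (120 -> 150 Hz).
fs = 16000
t = np.arange(fs * 2) / fs
f0_track = np.linspace(120, 150, len(t))
phase = 2 * np.pi * np.cumsum(f0_track) / fs
signal = np.sin(phase) + 0.3 * np.sin(2 * phase)
print(prosodic_profile(signal, fs))
```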
Social dominance orientation, nonnative accents, and hiring recommendations.
Hansen, Karolina; Dovidio, John F
2016-10-01
Discrimination against nonnative speakers is widespread and largely socially acceptable. Nonnative speakers are evaluated negatively because accent is a sign that they belong to an outgroup and because understanding their speech requires unusual effort from listeners. The present research investigated intergroup bias, based on stronger support for hierarchical relations between groups (social dominance orientation [SDO]), as a predictor of hiring recommendations of nonnative speakers. In an online experiment using an adaptation of the thin-slices methodology, 65 U.S. adults (54% women; 80% White; Mage = 35.91, range = 18-67) heard a recording of a job applicant speaking with an Asian (Mandarin Chinese) or a Latino (Spanish) accent. Participants indicated how likely they would be to recommend hiring the speaker, answered questions about the text, and indicated how difficult it was to understand the applicant. Independent of objective comprehension, participants high in SDO reported that it was more difficult to understand a Latino speaker than an Asian speaker. SDO predicted hiring recommendations of the speakers, but this relationship was mediated by the perception that nonnative speakers were difficult to understand. This effect was stronger for speakers from lower status groups (Latinos relative to Asians) and was not related to objective comprehension. These findings suggest a cycle of prejudice toward nonnative speakers: Not only do perceptions of difficulty in understanding cause prejudice toward them, but also prejudice toward low-status groups can lead to perceived difficulty in understanding members of these groups. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Goller, Florian; Lee, Donghoon; Ansorge, Ulrich; Choi, Soonja
2017-01-01
Languages differ in how they categorize spatial relations: while German differentiates between containment (in) and support (auf) with distinct spatial words—(a) den Kuli IN die Kappe stecken ("put pen in cap"); (b) die Kappe AUF den Kuli stecken ("put cap on pen")—Korean uses a single spatial word (kkita) collapsing (a) and (b) into one semantic category, particularly when the spatial enclosure is tight-fit. Korean uses a different word (i.e., netha) for loose-fits (e.g., apple in bowl). We tested whether these differences influence the attention of the speaker. In a crosslinguistic study, we compared native German speakers with native Korean speakers. Participants rated the similarity of two successive video clips of several scenes where two objects were joined or nested (either in a tight or loose manner). The rating data show that Korean speakers base their rating of similarity more on tight- versus loose-fit, whereas German speakers base their rating more on containment versus support (in vs. auf). Throughout the experiment, we also measured the participants' eye movements. Korean speakers looked equally long at the moving Figure object and at the stationary Ground object, whereas German speakers were more biased to look at the Ground object. Additionally, Korean speakers also looked more at the region where the two objects touched than did German speakers. We discuss our data in the light of crosslinguistic semantics and the extent of their influence on spatial cognition and perception. PMID:29362644
Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins
NASA Technical Reports Server (NTRS)
Brenner, Marty; Lind, Rick
1998-01-01
Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.
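To make the nonparametric wavelet-processing step more concrete, the following is a minimal illustrative sketch (not the NASA tool) of wavelet denoising applied to a flight-data-like signal, assuming the PyWavelets package is available; the synthetic signal, the 'db4' wavelet and the universal soft threshold are all assumptions made for illustration.

import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 2048)
clean = np.sin(2 * np.pi * 3.0 * t) * np.exp(-0.3 * t)   # decaying modal-like response
noisy = clean + 0.2 * rng.normal(size=t.size)            # added "external disturbance"

# Multilevel discrete wavelet decomposition of the measured signal.
coeffs = pywt.wavedec(noisy, "db4", level=5)

# Universal soft threshold estimated from the finest-scale detail coefficients.
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thresh = sigma * np.sqrt(2.0 * np.log(noisy.size))
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]

denoised = pywt.waverec(denoised_coeffs, "db4")[:noisy.size]
print("noise power before: %.3f  after: %.3f"
      % (np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2)))

In a stability-margin workflow, the denoised signal (rather than the raw measurement) would then feed the uncertainty description used for model validation, which is what reduces the conservatism described in the abstract.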
Factor analysis of auto-associative neural networks with application in speaker verification.
Garimella, Sri; Hermansky, Hynek
2013-04-01
An auto-associative neural network (AANN) is a fully connected feed-forward neural network, trained to reconstruct its input at its output through a hidden compression layer, which has fewer nodes than the dimensionality of the input. AANNs are used to model speakers in speaker verification, where a speaker-specific AANN model is obtained by adapting (or retraining) the universal background model (UBM) AANN, an AANN trained on multiple held-out speakers, using corresponding speaker data. When the amount of speaker data is limited, this adaptation procedure may lead to overfitting, as all the parameters of the UBM-AANN are adapted. In this paper, we introduce and develop the factor analysis theory of AANNs to alleviate this problem. We hypothesize that only the weight matrix connecting the last nonlinear hidden layer and the output layer is speaker-specific, and further restrict it to a common low-dimensional subspace during adaptation. The subspace is learned using large amounts of development data, and is held fixed during adaptation. Thus, only the coordinates in the subspace, also known as the i-vector, need to be estimated using speaker-specific data. The update equations are derived for learning both the common low-dimensional subspace and the i-vectors corresponding to speakers in the subspace. The resultant i-vector representation is used as a feature for the probabilistic linear discriminant analysis model. The proposed system shows promising results on the NIST-08 speaker recognition evaluation (SRE), and yields a 23% relative improvement in equal error rate over the previously proposed weighted least squares-based subspace AANNs system. The experiments on NIST-10 SRE confirm that these improvements are consistent and generalize across datasets.
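The subspace-constrained adaptation idea can be sketched numerically. The following is a minimal numpy illustration, not the authors' code or update equations: only the final weight matrix of a toy AANN is speaker specific, parameterized as W = W_ubm + unvec(T w), and the low-dimensional coordinate vector w (the "i-vector") is estimated by gradient descent on the reconstruction error. The network sizes, the random subspace T and the learning rate are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
D, H, Q = 20, 8, 5            # feature dim, hidden (compression) dim, i-vector dim

# Stand-ins for UBM-AANN parameters (in practice trained on held-out speakers).
W1 = rng.normal(scale=0.3, size=(H, D)); b1 = np.zeros(H)
W2_ubm = rng.normal(scale=0.3, size=(D, H)); b2 = np.zeros(D)
T = rng.normal(scale=0.1, size=(D * H, Q))   # common low-rank subspace (assumed given)

def reconstruct(X, W2):
    """Forward pass of the toy AANN: compress with tanh, then linearly reconstruct."""
    Hid = np.tanh(X @ W1.T + b1)
    return Hid @ W2.T + b2, Hid

def estimate_ivector(X, n_iter=200, lr=1e-2):
    """Gradient-descent estimate of the i-vector w from one speaker's frames X."""
    w = np.zeros(Q)
    for _ in range(n_iter):
        W2 = W2_ubm + (T @ w).reshape(D, H)    # speaker-specific output weights
        Y, Hid = reconstruct(X, W2)
        R = Y - X                              # reconstruction residual
        grad_W2 = R.T @ Hid / len(X)           # dLoss/dW2 for mean squared error
        w -= lr * (T.T @ grad_W2.ravel())      # chain rule through W2 = W2_ubm + unvec(T w)
    return w

X_speaker = rng.normal(size=(300, D))          # stand-in for one speaker's features
print(estimate_ivector(X_speaker)[:3])

The resulting low-dimensional w would then play the role of the per-speaker feature passed to a back-end classifier, analogous to the PLDA stage described in the abstract.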
Speaker and Accent Variation Are Handled Differently: Evidence in Native and Non-Native Listeners
Kriengwatana, Buddhamas; Terry, Josephine; Chládková, Kateřina; Escudero, Paola
2016-01-01
Listeners are able to cope with between-speaker variability in speech that stems from anatomical sources (i.e. individual and sex differences in vocal tract size) and sociolinguistic sources (i.e. accents). We hypothesized that listeners adapt to these two types of variation differently because prior work indicates that adapting to speaker/sex variability may occur pre-lexically while adapting to accent variability may require learning from attention to explicit cues (i.e. feedback). In Experiment 1, we tested our hypothesis by training native Dutch listeners and Australian-English (AusE) listeners without any experience with Dutch or Flemish to discriminate between the Dutch vowels /I/ and /ε/ from a single speaker. We then tested their ability to classify /I/ and /ε/ vowels of a novel Dutch speaker (i.e. speaker or sex change only), or vowels of a novel Flemish speaker (i.e. speaker or sex change plus accent change). We found that both Dutch and AusE listeners could successfully categorize vowels if the change involved a speaker/sex change, but not if the change involved an accent change. When AusE listeners were given feedback on their categorization responses to the novel speaker in Experiment 2, they were able to successfully categorize vowels involving an accent change. These results suggest that adapting to accents may be a two-step process, whereby the first step involves adapting to speaker differences at a pre-lexical level, and the second step involves adapting to accent differences at a contextual level, where listeners have access to word meaning or are given feedback that allows them to appropriately adjust their perceptual category boundaries. PMID:27309889
Performance study of LMS based adaptive algorithms for unknown system identification
NASA Astrophysics Data System (ADS)
Javed, Shazia; Ahmad, Noor Atinah
2014-07-01
Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe how the faster convergence of the improved LMS variants affects their robustness and misalignment.
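As a concrete illustration of the adaptive system identification setup compared in the paper, the following is a short numpy sketch (not the authors' simulation code) of LMS and NLMS identifying an unknown FIR system from its noisy output; the filter length, step sizes and noise level are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
L = 8                                   # filter length
h_true = rng.normal(size=L)             # unknown system to be identified
x = rng.normal(size=5000)               # random input signal
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.normal(size=len(x))  # noisy output

def adapt(x, d, L, mu, normalized=False, eps=1e-6):
    """Run (N)LMS and return the final weights and the misalignment curve."""
    w = np.zeros(L)
    misalignment = []
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]    # most recent L input samples, newest first
        e = d[n] - w @ u                # a-priori error
        step = mu / (eps + u @ u) if normalized else mu
        w = w + step * e * u            # stochastic-gradient update
        misalignment.append(np.sum((w - h_true) ** 2) / np.sum(h_true ** 2))
    return w, np.array(misalignment)

w_lms, m_lms = adapt(x, d, L, mu=0.01)
w_nlms, m_nlms = adapt(x, d, L, mu=0.5, normalized=True)
print("final misalignment  LMS: %.2e  NLMS: %.2e" % (m_lms[-1], m_nlms[-1]))

Misalignment here is the normalized squared error between the estimated and true filter weights, which is one of the comparison criteria named in the abstract.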
Schelinski, Stefanie; Riedel, Philipp; von Kriegstein, Katharina
2014-12-01
In auditory-only conditions, for example when we listen to someone on the phone, it is essential to recognize quickly and accurately what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice, but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this, we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developed controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned by a video showing their face and three others were learned in a matched control condition without face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, performance in speech recognition was improved for speakers known by face in comparison to speakers learned in the matched control condition without face. The ASD group lacked such a performance benefit. For the ASD group, auditory-only speech recognition was even worse for speakers known by face compared to speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group independent of whether the speakers were learned with or without face. Two additional visual experiments showed that the ASD group performed worse in lip-reading whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms. Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory-only speech recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.
Edmonds, Lisa A; Donovan, Neila J
2012-04-01
There is a pressing need for psychometrically sound naming materials for Spanish/English bilingual adults. To address this need, in this study the authors examined the psychometric properties of An Object and Action Naming Battery (An O&A Battery; Druks & Masterson, 2000) in bilingual speakers. Ninety-one Spanish/English bilinguals named O&A Battery items in English and Spanish. Responses underwent a Rasch analysis. Using correlation and regression analyses, the authors evaluated the effect of psycholinguistic (e.g., imageability) and participant (e.g., proficiency ratings) variables on accuracy. Rasch analysis determined unidimensionality across English and Spanish nouns and verbs and robust item-level psychometric properties, evidence for content validity. Few items did not fit the model, there were no ceiling or floor effects after uninformative and misfit items were removed, and items reflected a range of difficulty. Reliability coefficients were high, and the number of statistically different ability levels provided indices of sensitivity. Regression analyses revealed significant correlations between psycholinguistic variables and accuracy, providing preliminary construct validity. The participant variables that contributed most to accuracy were proficiency ratings and time of language use. Results suggest adequate content and construct validity of O&A items retained in the analysis for Spanish/English bilingual adults and support future efforts to evaluate naming in older bilinguals and persons with bilingual aphasia.
NASA Astrophysics Data System (ADS)
Flege, James; Mackay, Ian; Imai, Satomi
2003-04-01
This study evaluated potential causes of foreign accent (FA) by including native Italian (NI) speakers with a later age of arrival (AOA) in Canada than in previous studies. Three NI groups (n=18 each) differing in AOA (means=10, 18, and 26 years) participated. Listeners used a 9-point scale to rate sentences produced by the three NI groups and native English controls. The ratings obtained for all four groups differed significantly. The stronger foreign accents of the AOA-18 group compared to the AOA-10 group might be attributed to the passing of a critical period, or to stronger cross-language interference by more robust Italian phonetic categories. The difference might also be attributed to differences in language use. This is because the AOA-10 and AOA-18 groups (but not the AOA-18 and AOA-26 groups) differed significantly in percentage of English and Italian use, length of residence in Canada, and years of education in Canada. None of these explanations, however, appears to account for the stronger FAs of the AOA-26 group relative to the AOA-18 group. The difference between these groups might be attributed to cognitive aging [Hakuta et al., Appl. Psycholinguistics (in press)], which results in gradually less successful second-language acquisition across the adult life span. [Work supported by NIH.]
Rate of language evolution is affected by population size
Bromham, Lindell; Hua, Xia; Fitzpatrick, Thomas G.; Greenhill, Simon J.
2015-01-01
The effect of population size on patterns and rates of language evolution is controversial. Do languages with larger speaker populations change faster due to a greater capacity for innovation, or do smaller populations change faster due to more efficient diffusion of innovations? Do smaller populations suffer greater loss of language elements through founder effects or drift, or do languages with more speakers lose features due to a process of simplification? Revealing the influence of population size on the tempo and mode of language evolution not only will clarify underlying mechanisms of language change but also has practical implications for the way that language data are used to reconstruct the history of human cultures. Here, we provide, to our knowledge, the first empirical, statistically robust test of the influence of population size on rates of language evolution, controlling for the evolutionary history of the populations and formally comparing the fit of different models of language evolution. We compare rates of gain and loss of cognate words for basic vocabulary in Polynesian languages, an ideal test case with a well-defined history. We demonstrate that larger populations have higher rates of gain of new words whereas smaller populations have higher rates of word loss. These results show that demographic factors can influence rates of language evolution and that rates of gain and loss are affected differently. These findings are strikingly consistent with general predictions of evolutionary models. PMID:25646448
Shin, Young Hoon; Seo, Jiwon
2016-10-29
People with hearing or speaking disabilities are deprived of the benefits of conventional speech recognition technology because it is based on acoustic signals. Recent research has focused on silent speech recognition systems that are based on the motions of a speaker's vocal tract and articulators. Because most silent speech recognition systems use contact sensors that are very inconvenient to users or optical systems that are susceptible to environmental interference, a contactless and robust solution is hence required. Toward this objective, this paper presents a series of signal processing algorithms for a contactless silent speech recognition system using an impulse radio ultra-wide band (IR-UWB) radar. The IR-UWB radar is used to remotely and wirelessly detect motions of the lips and jaw. In order to extract the necessary features of lip and jaw motions from the received radar signals, we propose a feature extraction algorithm. The proposed algorithm noticeably improved speech recognition performance compared to the existing algorithm during our word recognition test with five speakers. We also propose a speech activity detection algorithm to automatically select speech segments from continuous input signals. Thus, speech recognition processing is performed only when speech segments are detected. Our testbed consists of commercial off-the-shelf radar products, and the proposed algorithms are readily applicable without designing specialized radar hardware for silent speech processing.
ERIC Educational Resources Information Center
Magis, David; De Boeck, Paul
2011-01-01
We focus on the identification of differential item functioning (DIF) when more than two groups of examinees are considered. We propose to consider items as elements of a multivariate space, where DIF items are outlying elements. Following this approach, the situation of multiple groups is a quite natural case. A robust statistics technique is…
Human Language Technology: Opportunities and Challenges
2005-01-01
because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ... to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using ... maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with ...
ERIC Educational Resources Information Center
Tsurutani, Chiharu
2012-01-01
Foreign-accented speakers are generally regarded as less educated, less reliable and less interesting than native speakers and tend to be associated with cultural stereotypes of their country of origin. This discrimination against foreign accents has, however, been discussed mainly using accented English in English-speaking countries. This study…
The Employability of Non-Native-Speaker Teachers of EFL: A UK Survey
ERIC Educational Resources Information Center
Clark, Elizabeth; Paran, Amos
2007-01-01
The native speaker still has a privileged position in English language teaching, representing both the model speaker and the ideal teacher. Non-native-speaker teachers of English are often perceived as having a lower status than their native-speaking counterparts, and have been shown to face discriminatory attitudes when applying for teaching…
Generic Language and Speaker Confidence Guide Preschoolers' Inferences about Novel Animate Kinds
ERIC Educational Resources Information Center
Stock, Hayli R.; Graham, Susan A.; Chambers, Craig G.
2009-01-01
We investigated the influence of speaker certainty on 156 four-year-old children's sensitivity to generic and nongeneric statements. An inductive inference task was implemented, in which a speaker described a nonobvious property of a novel creature using either a generic or a nongeneric statement. The speaker appeared to be confident, neutral, or…
Modern Greek Language: Acquisition of Morphology and Syntax by Non-Native Speakers
ERIC Educational Resources Information Center
Andreou, Georgia; Karapetsas, Anargyros; Galantomos, Ioannis
2008-01-01
This study investigated the performance of native and non native speakers of Modern Greek language on morphology and syntax tasks. Non-native speakers of Greek whose native language was English, which is a language with strict word order and simple morphology, made more errors and answered more slowly than native speakers on morphology but not…
ERIC Educational Resources Information Center
Kong, Anthony Pak-Hin; Law, Sam-Po; Chak, Gigi Wan-Chi
2017-01-01
Purpose: Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Method: Multimedia data of…
ERIC Educational Resources Information Center
Gorman, Kristen S.; Gegg-Harrison, Whitney; Marsh, Chelsea R.; Tanenhaus, Michael K.
2013-01-01
When referring to named objects, speakers can choose either a name ("mbira") or a description ("that gourd-like instrument with metal strips"); whether the name provides useful information depends on whether the speaker's knowledge of the name is shared with the addressee. But, how do speakers determine what is shared? In 2…
Accent Attribution in Speakers with Foreign Accent Syndrome
ERIC Educational Resources Information Center
Verhoeven, Jo; De Pauw, Guy; Pettinato, Michele; Hirson, Allen; Van Borsel, John; Marien, Peter
2013-01-01
Purpose: The main aim of this experiment was to investigate the perception of Foreign Accent Syndrome in comparison to speakers with an authentic foreign accent. Method: Three groups of listeners attributed accents to conversational speech samples of 5 FAS speakers which were embedded amongst those of 5 speakers with a real foreign accent and 5…
Race in Conflict with Heritage: "Black" Heritage Language Speaker of Japanese
ERIC Educational Resources Information Center
Doerr, Neriko Musha; Kumagai, Yuri
2014-01-01
"Heritage language speaker" is a relatively new term to denote minority language speakers who grew up in a household where the language was used or those who have a family, ancestral, or racial connection to the minority language. In research on heritage language speakers, overlap between these 2 definitions is often assumed--that is,…
ERIC Educational Resources Information Center
Montrul, Silvina; Davidson, Justin; De La Fuente, Israel; Foote, Rebecca
2014-01-01
We examined how age of acquisition in Spanish heritage speakers and L2 learners interacts with implicitness vs. explicitness of tasks in gender processing of canonical and non-canonical ending nouns. Twenty-three Spanish native speakers, 29 heritage speakers, and 33 proficiency-matched L2 learners completed three on-line spoken word recognition…
The Role of Interaction in Native Speaker Comprehension of Nonnative Speaker Speech.
ERIC Educational Resources Information Center
Polio, Charlene; Gass, Susan M.
1998-01-01
Because interaction gives language learners an opportunity to modify their speech upon a signal of noncomprehension, it should also have a positive effect on native speakers' (NS) comprehension of nonnative speakers (NNS). This study shows that interaction does help NSs comprehend NNSs, contrasting the claims of an earlier study that found no…
NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2003-10-01
Humans communicate with each other through speech by focusing on the target speech among environmental sounds in real acoustic environments. We can easily identify the target sound from other environmental sounds. For hands-free speech recognition, the identification of the target speech from environmental sounds is imperative. This mechanism may also be important for a self-moving robot to sense the acoustic environment and communicate with humans. Therefore, this paper first proposes hidden Markov model (HMM)-based environmental sound source identification. Environmental sounds are modeled by three-state HMMs and evaluated using 92 kinds of environmental sounds. The identification accuracy was 95.4%. This paper also proposes a new HMM composition method that composes speech HMMs and an HMM of categorized environmental sounds for robust environmental sound-added speech recognition. As a result of the evaluation experiments, we confirmed that the proposed HMM composition outperforms the conventional HMM composition with speech HMMs and a noise (environmental sound) HMM trained using noise periods prior to the target speech in a captured signal. [Work supported by Ministry of Public Management, Home Affairs, Posts and Telecommunications of Japan.]
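The per-class HMM identification step can be illustrated compactly. Below is a minimal sketch, assuming the hmmlearn package: one small three-state Gaussian HMM is trained per environmental sound class, and an unknown recording is assigned to the class whose model gives the highest log-likelihood. The feature generator and class names are placeholders, not the paper's data or feature set.

import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def fake_features(mean, n_frames=80, dim=13):
    """Stand-in for MFCC-like frames of one recording."""
    return mean + rng.normal(scale=0.5, size=(n_frames, dim))

# Training data: several recordings per (hypothetical) class, concatenated with lengths.
classes = {"door_slam": 0.0, "phone_ring": 2.0, "keyboard": -2.0}
models = {}
for name, mean in classes.items():
    seqs = [fake_features(mean) for _ in range(5)]
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20, random_state=0)
    m.fit(X, lengths)                      # three-state HMM per sound class
    models[name] = m

# Identification: pick the class whose HMM assigns the highest log-likelihood.
test = fake_features(2.0)                  # unseen "phone_ring"-like recording
scores = {name: m.score(test) for name, m in models.items()}
print(max(scores, key=scores.get), scores)

The HMM composition step described in the abstract would go one stage further, combining a speech HMM with the environmental-sound HMM so that noisy speech can be decoded directly; that stage is not shown here.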
Takamura, Ayari; Watanabe, Ken; Akutsu, Tomoko; Ozawa, Takeaki
2018-05-31
Body fluid (BF) identification is a critical part of a criminal investigation because of its ability to suggest how the crime was committed and to provide reliable origins of DNA. In contrast to current methods using serological and biochemical techniques, vibrational spectroscopic approaches provide alternative advantages for forensic BF identification, such as non-destructivity and versatility for various BF types and analytical interests. However, unexplored issues remain for its practical application to forensics; for example, a specific BF needs to be discriminated from all other suspicious materials as well as other BFs, and the method should be applicable even to aged BF samples. Herein, we describe an innovative modeling method for discriminating the ATR FT-IR spectra of various BFs, including peripheral blood, saliva, semen, urine and sweat, to meet the practical demands described above. Spectra from unexpected non-BF samples were efficiently excluded as outliers by adopting the Q-statistics technique. The robustness of the models against aged BFs was significantly improved by using the discrimination scheme of a dichotomous classification tree with hierarchical clustering. The present study advances the use of vibrational spectroscopy and a chemometric strategy for forensic BF identification.
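The Q-statistic screening mentioned in the abstract can be sketched with ordinary PCA residuals. The following is a small illustrative example (not the authors' pipeline), assuming scikit-learn: spectra of one class define a PCA model, the Q-statistic (squared prediction error) of a new spectrum measures how far it lies outside that model, and spectra above a chosen percentile threshold are rejected as outliers. The synthetic "spectra" and the 95th-percentile limit are assumptions.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavenumbers = np.linspace(600, 4000, 300)

def spectrum(peak):
    """Toy absorbance spectrum with one Gaussian band plus noise."""
    return np.exp(-((wavenumbers - peak) ** 2) / 5e4) + 0.01 * rng.normal(size=wavenumbers.size)

train = np.array([spectrum(1650) for _ in range(40)])       # e.g. one body-fluid class
pca = PCA(n_components=3).fit(train)

def q_statistic(X):
    """Squared residual after projecting onto the retained principal components."""
    recon = pca.inverse_transform(pca.transform(X))
    return np.sum((X - recon) ** 2, axis=1)

threshold = np.percentile(q_statistic(train), 95)            # in-model acceptance limit

in_class = spectrum(1650)[None, :]
outlier = spectrum(2900)[None, :]                             # unexpected non-BF-like material
print(q_statistic(in_class) <= threshold, q_statistic(outlier) <= threshold)

Spectra rejected at this stage never reach the classification tree, which is how unexpected non-BF samples are excluded before the body fluids are discriminated from one another.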
Aeroservoelastic Model Validation and Test Data Analysis of the F/A-18 Active Aeroelastic Wing
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Prazenica, Richard J.
2003-01-01
Model validation and flight test data analysis require careful consideration of the effects of uncertainty, noise, and nonlinearity. Uncertainty prevails in the data analysis techniques and results in a composite model uncertainty from unmodeled dynamics, assumptions and mechanics of the estimation procedures, noise, and nonlinearity. A fundamental requirement for reliable and robust model development is an attempt to account for each of these sources of error, in particular, for model validation, robust stability prediction, and flight control system development. This paper is concerned with data processing procedures for uncertainty reduction in model validation for stability estimation and nonlinear identification. F/A-18 Active Aeroelastic Wing (AAW) aircraft data is used to demonstrate signal representation effects on uncertain model development, stability estimation, and nonlinear identification. Data is decomposed using adaptive orthonormal best-basis and wavelet-basis signal decompositions for signal denoising into linear and nonlinear identification algorithms. Nonlinear identification from a wavelet-based Volterra kernel procedure is used to extract nonlinear dynamics from aeroelastic responses, and to assist model development and uncertainty reduction for model validation and stability prediction by removing a class of nonlinearity from the uncertainty.
The perception of syllable affiliation of singleton stops in repetitive speech.
de Jong, Kenneth J; Lim, Byung-Jin; Nagao, Kyoko
2004-01-01
Stetson (1951) noted that repeating singleton coda consonants at fast speech rates causes them to be perceived as onset consonants affiliated with a following vowel. The current study documents the perception of rate-induced resyllabification, as well as what temporal properties give rise to the perception of syllable affiliation. Stimuli were extracted from a previous study of repeated stop + vowel and vowel + stop syllables (de Jong, 2001a, 2001b). Forced-choice identification tasks show that slow repetitions are clearly distinguished. As speakers increase rate, they reach a point after which listeners disagree as to the affiliation of the stop. This pattern is found for voiced and voiceless consonants using different stimulus extraction techniques. Acoustic models of the identifications indicate that the sudden shift in syllabification occurs with the loss of an acoustic hiatus between successive syllables. Acoustic models of the fast rate identifications indicate that various other qualities, such as consonant voicing, affect the probability that the consonants will be perceived as onsets. These results indicate a model of syllabic affiliation where specific juncture-marking aspects of the signal dominate parsing, and in their absence other differences provide additional, weaker cues to syllabic affiliation.
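The kind of acoustic model of identifications described here can be illustrated with a simple logistic regression relating one cue to the binary percept. The sketch below is purely hypothetical (not the study's actual model or data), assuming scikit-learn: the probability of an "onset" response is modeled as a function of an invented hiatus duration between syllables.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hiatus_ms = rng.uniform(0, 120, size=400)                  # invented acoustic gap per token
# Invented listener behaviour: long hiatus -> "coda" (0), short/absent -> "onset" (1).
p_onset = 1.0 / (1.0 + np.exp((hiatus_ms - 40.0) / 10.0))
response = rng.binomial(1, p_onset)

model = LogisticRegression().fit(hiatus_ms[:, None], response)
for ms in (0, 20, 40, 80):
    p = model.predict_proba([[ms]])[0, 1]
    print(f"hiatus {ms:3d} ms -> P(onset) = {p:.2f}")

A steep transition around the fitted boundary corresponds to the "sudden shift in syllabification" that the abstract associates with the loss of the acoustic hiatus.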
Person authentication using brainwaves (EEG) and maximum a posteriori model adaptation.
Marcel, Sébastien; Millán, José Del R
2007-04-01
In this paper, we investigate the use of brain activity for person authentication. It has been shown in previous studies that the brain-wave pattern of every individual is unique and that the electroencephalogram (EEG) can be used for biometric identification. EEG-based biometry is an emerging research topic and we believe that it may open new research directions and applications in the future. However, very little work has been done in this area, and it has focused mainly on person identification rather than person authentication. Person authentication aims to accept or to reject a person claiming an identity, i.e., comparing biometric data to one template, while the goal of person identification is to match the biometric data against all the records in a database. We propose the use of a statistical framework based on Gaussian Mixture Models and Maximum A Posteriori model adaptation, successfully applied to speaker and face authentication, which can deal with only one training session. We perform intensive experimental simulations using several strict train/test protocols to show the potential of our method. We also show that there are some mental tasks that are more appropriate for person authentication than others.
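The GMM/MAP framework named in the abstract follows a standard recipe, sketched below under stated assumptions (this is not the paper's code; the feature values and relevance factor are illustrative): fit a universal background model on pooled data, MAP-adapt only its component means toward one client's enrollment data, then score trials with a log-likelihood ratio between the adapted model and the UBM.

import copy
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
background = rng.normal(size=(2000, 4))            # pooled non-client data (e.g. EEG-like features)
client = 0.8 + rng.normal(scale=0.7, size=(150, 4))

ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(background)

def map_adapt_means(ubm, X, relevance=16.0):
    """Return a copy of the UBM whose means are MAP-adapted towards X."""
    resp = ubm.predict_proba(X)                    # posterior responsibilities
    n_k = resp.sum(axis=0)                         # soft counts per component
    ex_k = resp.T @ X / np.maximum(n_k[:, None], 1e-10)
    alpha = (n_k / (n_k + relevance))[:, None]     # adaptation coefficient per component
    adapted = copy.deepcopy(ubm)
    adapted.means_ = alpha * ex_k + (1.0 - alpha) * ubm.means_
    return adapted

client_model = map_adapt_means(ubm, client)

def llr(x):
    """Average log-likelihood ratio of a trial against the client model vs. the UBM."""
    return client_model.score(x) - ubm.score(x)

print("genuine trial:", llr(0.8 + rng.normal(scale=0.7, size=(100, 4))))
print("impostor trial:", llr(rng.normal(size=(100, 4))))

Accept/reject decisions then reduce to thresholding the log-likelihood ratio, which is why the approach works with a single enrollment session as claimed.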
The cognitive neuroscience of person identification.
Biederman, Irving; Shilowich, Bryan E; Herald, Sarah B; Margalit, Eshed; Maarek, Rafael; Meschke, Emily X; Hacker, Catrina M
2018-02-14
We compare and contrast five differences between person identification by voice and face. 1. There is little or no cost when a familiar face is to be recognized from an unrestricted set of possible faces, even at Rapid Serial Visual Presentation (RSVP) rates, but the accuracy of familiar voice recognition declines precipitously when the set of possible speakers is increased from one to a mere handful. 2. Whereas deficits in face recognition are typically perceptual in origin, those with normal perception of voices can manifest severe deficits in their identification. 3. Congenital prosopagnosics (CPros) and congenital phonagnosics (CPhon) are generally unable to imagine familiar faces and voices, respectively. Only in CPros, however, is this deficit a manifestation of a general inability to form visual images of any kind. CPhons report no deficit in imaging non-voice sounds. 4. The prevalence of CPhons of 3.2% is somewhat higher than the reported prevalence of approximately 2.0% for CPros in the population. There is evidence that CPhon represents a distinct condition statistically and not just normal variation. 5. Face and voice recognition proficiency are uncorrelated rather than reflecting limitations of a general capacity for person individuation. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Pestel, Ann
1989-01-01
The author discusses working with speakers from business and industry to present career information at the secondary level. Advice for speakers is presented, as well as tips for program coordinators. (CH)
Wenning, Mareike; Breitenwieser, Franziska; Konrad, Regina; Huber, Ingrid; Busch, Ulrich; Scherer, Siegfried
2014-08-01
The food industry requires easy, accurate, and cost-effective techniques for microbial identification to ensure safe products and identify microbial contaminations. In this work, FTIR spectroscopy and MALDI-TOF mass spectrometry were assessed for their suitability and applicability for routine microbial diagnostics of food-related microorganisms by analyzing their robustness according to changes in incubation time and medium, identification accuracy and their ability to differentiate isolates down to the strain level. Changes in the protocol led to significantly impaired performance of FTIR spectroscopy, whereas they had only small effects on MALDI-TOF MS. Identification accuracy was tested using 174 food-related bacteria (93 species) from an in-house strain collection and 40 fresh isolates from routine food analyses. For MALDI-TOF MS, weaknesses in the identification of bacilli and pseudomonads were observed; FTIR spectroscopy had the most difficulty identifying pseudomonads and enterobacteria. In general, MALDI-TOF MS obtained better results (52-85% correct at species level), since the analysis of mainly ribosomal proteins is more robust and seems to be more reliable. FTIR spectroscopy suffers from the fact that it generates a whole-cell fingerprint, and intraspecies diversity may lead to overlapping species borders, which complicates identification. In the present study, values between 56% and 67% correct species identification were obtained. On the other hand, this high sensitivity offers the opportunity of typing below the species level, which was not possible using MALDI-TOF MS. Using fresh isolates from routine diagnostics, both techniques performed well with 88% (MALDI-TOF) and 75% (FTIR) correct identifications at species level, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.
Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution
NASA Astrophysics Data System (ADS)
Baldacchino, Tara; Worden, Keith; Rowson, Jennifer
2017-02-01
A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.
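The core robustness argument (heavier tails of the Student-t distribution) can be demonstrated in a few lines. This tiny example is not the paper's variational mixture-of-experts model; it simply fits a Gaussian and a Student-t by maximum likelihood to invented data containing one gross outlier, assuming scipy is available, and compares the estimated locations.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(loc=5.0, scale=1.0, size=200), [60.0]])  # one outlier

mu_gauss, _ = stats.norm.fit(data)                 # Gaussian ML location (the sample mean)
df, mu_t, scale_t = stats.t.fit(data)              # Student-t ML fit (df, loc, scale)

print(f"Gaussian location: {mu_gauss:.2f}")        # dragged towards the outlier
print(f"Student-t location: {mu_t:.2f} (df = {df:.1f})")  # stays near the bulk at 5

The same effect, applied at both the gates and the experts of the mixture model, is what gives the proposed method unbiased regression parameters in the presence of outliers.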
Lee, Jiyeon; Yoshida, Masaya; Thompson, Cynthia K
2015-08-01
Grammatical encoding (GE) is impaired in agrammatic aphasia; however, the nature of such deficits remains unclear. We examined grammatical planning units during real-time sentence production in speakers with agrammatic aphasia and control speakers, testing two competing models of GE. We queried whether speakers with agrammatic aphasia produce sentences word by word without advanced planning or whether hierarchical syntactic structure (i.e., verb argument structure; VAS) is encoded as part of the advanced planning unit. Experiment 1 examined production of sentences with a predefined structure (i.e., "The A and the B are above the C") using eye tracking. Experiment 2 tested production of transitive and unaccusative sentences without a predefined sentence structure in a verb-priming study. In Experiment 1, both speakers with agrammatic aphasia and young and age-matched control speakers used word-by-word strategies, selecting the first lemma (noun A) only prior to speech onset. However, in Experiment 2, unlike controls, speakers with agrammatic aphasia preplanned transitive and unaccusative sentences, encoding VAS before speech onset. Speakers with agrammatic aphasia show incremental, word-by-word production for structurally simple sentences, requiring retrieval of multiple noun lemmas. However, when sentences involve functional (thematic to grammatical) structure building, advanced planning strategies (i.e., VAS encoding) are used. This early use of hierarchical syntactic information may provide a scaffold for impaired GE in agrammatism.
Grammatical Encoding and Learning in Agrammatic Aphasia: Evidence from Structural Priming
Cho-Reyes, Soojin; Mack, Jennifer E.; Thompson, Cynthia K.
2017-01-01
The present study addressed open questions about the nature of sentence production deficits in agrammatic aphasia. In two structural priming experiments, 13 aphasic and 13 age-matched control speakers repeated visually- and auditorily-presented prime sentences, and then used visually-presented word arrays to produce dative sentences. Experiment 1 examined whether agrammatic speakers form structural and thematic representations during sentence production, whereas Experiment 2 tested the lasting effects of structural priming in lags of two and four sentences. Results of Experiment 1 showed that, like unimpaired speakers, the aphasic speakers evinced intact structural priming effects, suggesting that they are able to generate such representations. Unimpaired speakers also evinced reliable thematic priming effects, whereas agrammatic speakers did so in some experimental conditions, suggesting that access to thematic representations may be intact. Results of Experiment 2 showed structural priming effects of comparable magnitude for aphasic and unimpaired speakers. In addition, both groups showed lasting structural priming effects in both lag conditions, consistent with implicit learning accounts. In both experiments, aphasic speakers with more severe language impairments exhibited larger priming effects, consistent with the “inverse preference” prediction of implicit learning accounts. The findings indicate that agrammatic speakers are sensitive to structural priming across levels of representation and that such effects are lasting, suggesting that structural priming may be beneficial for the treatment of sentence production deficits in agrammatism. PMID:28924328
Shape detection of Gaborized outline versions of everyday objects
Sassi, Michaël; Machilsen, Bart; Wagemans, Johan
2012-01-01
We previously tested the identifiability of six versions of Gaborized outlines of everyday objects, differing in the orientations assigned to elements inside and outside the outline. We found significant differences in identifiability between the versions, and related a number of stimulus metrics to identifiability [Sassi, M., Vancleef, K., Machilsen, B., Panis, S., & Wagemans, J. (2010). Identification of everyday objects on the basis of Gaborized outline versions. i-Perception, 1(3), 121–142]. In this study, after retesting the identifiability of new variants of three of the stimulus versions, we tested their robustness to local orientation jitter in a detection experiment. In general, our results replicated the key findings from the previous study, and allowed us to substantiate our earlier interpretations of the effects of our stimulus metrics and of the performance differences between the different stimulus versions. The results of the detection task revealed a different ranking order of stimulus versions than the identification task. By examining the parallels and differences between the effects of our stimulus metrics in the two tasks, we found evidence for a trade-off between shape detectability and identifiability. The generally simple and smooth shapes that yield the strongest contour integration and most robust detectability tend to lack the distinguishing features necessary for clear-cut identification. Conversely, contours that do contain such identifying features tend to be inherently more complex and, therefore, yield weaker integration and less robust detectability. PMID:23483752
ERIC Educational Resources Information Center
Paul, Rhea; Shriberg, Lawrence D.; McSweeny, Jane; Cicchetti, Domenic; Klin, Ami; Volkmar, Fred
2005-01-01
Shriberg "et al." [Shriberg, L. "et al." (2001). "Journal of Speech, Language and Hearing Research, 44," 1097-1115] described prosody-voice features of 30 high functioning speakers with autistic spectrum disorder (ASD) compared to age-matched control speakers. The present study reports additional information on the speakers with ASD, including…
Investigating Holistic Measures of Speech Prosody
ERIC Educational Resources Information Center
Cunningham, Dana Aliel
2012-01-01
Speech prosody is a multi-faceted dimension of speech which can be measured and analyzed in a variety of ways. In this study, the speech prosody of Mandarin L1 speakers, English L2 speakers, and English L1 speakers was assessed by trained raters who listened to sound clips of the speakers responding to a graph prompt and reading a short passage.…
Young Children's Sensitivity to Speaker Gender When Learning from Others
ERIC Educational Resources Information Center
Ma, Lili; Woolley, Jacqueline D.
2013-01-01
This research explores whether young children are sensitive to speaker gender when learning novel information from others. Four- and 6-year-olds ("N" = 144) chose between conflicting statements from a male versus a female speaker (Studies 1 and 3) or decided which speaker (male or female) they would ask (Study 2) when learning about the functions…
ERIC Educational Resources Information Center
McNaughton, Stephanie; McDonough, Kim
2015-01-01
This exploratory study investigated second language (L2) French speakers' service encounters in the multilingual setting of Montreal, specifically whether switches to English during French service encounters were related to L2 speakers' willingness to communicate or motivation. Over a two-week period, 17 French L2 speakers in Montreal submitted…
ERIC Educational Resources Information Center
Gilbert, Harvey R.; Ferrand, Carole T.
1987-01-01
Respirometric quotients (RQ), the ratio of oral air volume expended to total volume expended, were obtained from the productions of oral and nasal airflow of 10 speakers with cleft palate, with and without their prosthetic appliances, and 10 normal speakers. Cleft palate speakers without their appliances exhibited the lowest RQ values. (Author/DB)
ERIC Educational Resources Information Center
Polio, Charlene; Gass, Susan; Chapin, Laura
2006-01-01
Implicit negative feedback has been shown to facilitate SLA, and the extent to which such feedback is given is related to a variety of task and interlocutor variables. The background of a native speaker (NS), in terms of amount of experience in interactions with nonnative speakers (NNSs), has been shown to affect the quantity of implicit negative…
ERIC Educational Resources Information Center
Tatsumi, Naofumi
2012-01-01
Previous research shows that American learners of Japanese (AJs) tend to differ from native Japanese speakers in their compliment responses (CRs). Yokota (1986) and Shimizu (2009) have reported that AJs tend to respond more negatively than native Japanese speakers. It has also been reported that AJs' CRs tend to lack the use of avoidance or…
Intelligibility of clear speech: effect of instruction.
Lam, Jennifer; Tjaden, Kris
2013-10-01
The authors investigated how clear speech instructions influence sentence intelligibility. Twelve speakers produced sentences in habitual, clear, hearing impaired, and overenunciate conditions. Stimuli were amplitude normalized and mixed with multitalker babble for orthographic transcription by 40 listeners. The main analysis investigated percentage-correct intelligibility scores as a function of the 4 conditions and speaker sex. Additional analyses included listener response variability, individual speaker trends, and an alternate intelligibility measure: proportion of content words correct. Relative to the habitual condition, the overenunciate condition was associated with the greatest intelligibility benefit, followed by the hearing impaired and clear conditions. Ten speakers followed this trend. The results indicated different patterns of clear speech benefit for male and female speakers. Greater listener variability was observed for speakers with inherently low habitual intelligibility compared to speakers with inherently high habitual intelligibility. Stable proportions of content words were observed across conditions. Clear speech instructions affected the magnitude of the intelligibility benefit. The instruction to overenunciate may be most effective in clear speech training programs. The findings may help explain the range of clear speech intelligibility benefit previously reported. Listener variability analyses suggested the importance of obtaining multiple listener judgments of intelligibility, especially for speakers with inherently low habitual intelligibility.
Smith, David R R; Walters, Thomas C; Patterson, Roy D
2007-12-01
A recent study [Smith and Patterson, J. Acoust. Soc. Am. 118, 3177-3186 (2005)] demonstrated that both the glottal-pulse rate (GPR) and the vocal-tract length (VTL) of vowel sounds have a large effect on the perceived sex and age (or size) of a speaker. The vowels for all of the "different" speakers in that study were synthesized from recordings of the sustained vowels of one adult male speaker. This paper presents a follow-up study in which a range of vowels were synthesized from recordings of four different speakers--an adult man, an adult woman, a young boy, and a young girl--to determine whether the sex and age of the original speaker would have an effect upon listeners' judgments of whether a vowel was spoken by a man, woman, boy, or girl, after they were equated for GPR and VTL. The sustained vowels of the four speakers were scaled to produce the same combinations of GPR and VTL, which covered the entire range normally encountered in everyday life. The results show that listeners readily distinguish children from adults based on their sustained vowels but that they struggle to distinguish the sex of the speaker.
Dikker, Suzanne; Silbert, Lauren J; Hasson, Uri; Zevin, Jason D
2014-04-30
Recent research has shown that the degree to which speakers and listeners exhibit similar brain activity patterns during human linguistic interaction is correlated with communicative success. Here, we used an intersubject correlation approach in fMRI to test the hypothesis that a listener's ability to predict a speaker's utterance increases such neural coupling between speakers and listeners. Nine subjects listened to recordings of a speaker describing visual scenes that varied in the degree to which they permitted specific linguistic predictions. In line with our hypothesis, the temporal profile of listeners' brain activity was significantly more synchronous with the speaker's brain activity for highly predictive contexts in left posterior superior temporal gyrus (pSTG), an area previously associated with predictive auditory language processing. In this region, predictability differentially affected the temporal profiles of brain responses in the speaker and listeners respectively, in turn affecting correlated activity between the two: whereas pSTG activation increased with predictability in the speaker, listeners' pSTG activity instead decreased for more predictable sentences. Listeners additionally showed stronger BOLD responses for predictive images before sentence onset, suggesting that highly predictable contexts lead comprehenders to preactivate predicted words.
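The intersubject correlation measure behind this result is, at its core, a correlation between a speaker's and a listener's regional time courses. The following is a minimal numpy sketch for illustration only (not the study's fMRI pipeline); the time series, region, and coupling strengths are all synthetic assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 240
speaker_ts = rng.normal(size=n_timepoints)                      # speaker's pSTG-like signal

def listener(coupling):
    """Synthetic listener signal sharing a given fraction of the speaker's signal."""
    return coupling * speaker_ts + np.sqrt(1 - coupling**2) * rng.normal(size=n_timepoints)

listeners = {"high-predictability run": listener(0.6),
             "low-predictability run": listener(0.1)}

for name, ts in listeners.items():
    r = np.corrcoef(speaker_ts, ts)[0, 1]                       # speaker-listener coupling
    print(f"{name}: r = {r:.2f}")

In the study itself, such correlations are computed per region across listeners and compared between predictability conditions, which is how the stronger pSTG coupling for predictive contexts was established.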
When speaker identity is unavoidable: Neural processing of speaker identity cues in natural speech.
Tuninetti, Alba; Chládková, Kateřina; Peter, Varghese; Schiller, Niels O; Escudero, Paola
2017-11-01
Speech sound acoustic properties vary largely across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences, in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word. Here we tested whether the process of normalizing speaker and accent differences, facilitating the recognition of linguistic information, is found at the level of neural processing, and whether it is modulated by the listeners' native language. In a multi-deviant oddball paradigm, native and nonnative speakers of Dutch were exposed to naturally-produced Dutch vowels varying in speaker, sex, accent, and phoneme identity. Unexpectedly, the analysis of mismatch negativity (MMN) amplitudes elicited by each type of change shows a large degree of early perceptual sensitivity to non-linguistic cues. This finding on perception of naturally-produced stimuli contrasts with previous studies examining the perception of synthetic stimuli wherein adult listeners automatically disregard acoustic cues to speaker identity. The present finding bears relevance to speech normalization theories, suggesting that at an unattended level of processing, listeners are indeed sensitive to changes in fundamental frequency in natural speech tokens. Copyright © 2017 Elsevier Inc. All rights reserved.
Patterns of lung volume use during an extemporaneous speech task in persons with Parkinson disease.
Bunton, Kate
2005-01-01
This study examined patterns of lung volume use in speakers with Parkinson disease (PD) during an extemporaneous speaking task. The performance of a control group was also examined. Behaviors described are based on acoustic, kinematic and linguistic measures. Group differences were found in breath group duration, lung volume initiation, and lung volume termination measures. Speakers in the control group alternated between longer and shorter breath groups, with starting lung volumes being higher for the longer breath groups and lower for the shorter breath groups. Speech production was terminated before reaching tidal end expiratory level (EEL). This pattern was also seen in 4 of 7 speakers with PD. The remaining 3 PD speakers initiated speech at low starting lung volumes and continued speaking below EEL. This subgroup of PD speakers ended breath groups at agrammatical boundaries, whereas control speakers ended at appropriate grammatical boundaries. As a result of participating in this exercise, the reader will (1) be able to describe the patterns of lung volume use in speakers with Parkinson disease and compare them with those employed by control speakers; and (2) obtain information about the influence of speaking task on speech breathing.
Hanulíková, Adriana; van Alphen, Petra M; van Goch, Merel M; Weber, Andrea
2012-04-01
How do native listeners process grammatical errors that are frequent in non-native speech? We investigated whether the neural correlates of syntactic processing are modulated by speaker identity. ERPs to gender agreement errors in sentences spoken by a native speaker were compared with the same errors spoken by a non-native speaker. In line with previous research, gender violations in native speech resulted in a P600 effect (larger P600 for violations in comparison with correct sentences), but when the same violations were produced by the non-native speaker with a foreign accent, no P600 effect was observed. Control sentences with semantic violations elicited comparable N400 effects for both the native and the non-native speaker, confirming no general integration problem in foreign-accented speech. The results demonstrate that the P600 is modulated by speaker identity, extending our knowledge about the role of speaker's characteristics on neural correlates of speech processing.
Factors affecting the perception of Korean-accented American English
NASA Astrophysics Data System (ADS)
Cho, Kwansun; Harris, John G.; Shrivastav, Rahul
2005-09-01
This experiment examines the relative contributions of two factors, intonation and articulation errors, to the perception of foreign accent in Korean-accented American English. Ten native speakers of Korean and ten native speakers of American English were asked to read ten English sentences. These sentences were then modified using high-quality speech resynthesis techniques [STRAIGHT Kawahara et al., Speech Commun. 27, 187-207 (1999)] to generate four sets of stimuli. In the first two sets of stimuli, the intonation patterns of the Korean speakers and American speakers were switched with one another. The articulatory errors for each speaker were not modified. In the final two sets, the sentences from the Korean and American speakers were resynthesized without any modifications. Fifteen listeners were asked to rate all the stimuli for the degree of foreign accent. Preliminary results show that, for native speakers of American English, articulation errors may play a greater role in the perception of foreign accent than errors in intonation patterns. [Work supported by KAIM.]
Eiesland, Eli Anne; Lind, Marianne
2012-03-01
Compounds are words that are made up of at least two other words (lexemes), featuring lexical and syntactic characteristics and thus particularly interesting for the study of language processing. Most studies of compounds and language processing have been based on data from experimental single word production and comprehension tasks. To enhance the ecological validity of morphological processing research, data from other contexts, such as discourse production, need to be considered. This study investigates the production of nominal compounds in semi-spontaneous spoken texts by a group of speakers with fluent types of aphasia compared to a group of neurologically healthy speakers. The speakers with aphasia produce significantly fewer nominal compound types in their texts than the non-aphasic speakers, and the compounds they produce exhibit fewer different types of semantic relations than the compounds produced by the non-aphasic speakers. The results are discussed in relation to theories of language processing.
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Belcastro, Christine; Khong, Thuan
2006-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems developed for failure detection, identification, and reconfiguration, as well as upset recovery, need to be evaluated over broad regions of the flight envelope or under extreme flight conditions, and should include various sources of uncertainty. To apply formal robustness analysis, formulation of linear fractional transformation (LFT) models of complex parameter-dependent systems is required, which represent system uncertainty due to parameter uncertainty and actuator faults. This paper describes a detailed LFT model formulation procedure from the nonlinear model of a transport aircraft by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The closed-loop system is evaluated over the entire flight envelope based on the generated LFT model which can cover nonlinear dynamics. The robustness analysis results of the closed-loop fault tolerant control system of a transport aircraft are presented. A reliable flight envelope (safe flight regime) is also calculated from the robust performance analysis results, over which the closed-loop system can achieve the desired performance of command tracking and failure detection.
Clancy, Cornelius J; Pappas, Peter; Vazquez, Jose; Judson, Marc A; Tobin, Ellis; Kontoyiannis, Dimitrios P; Thompson, George R; Reboli, Annette; Garey, Kevin W; Greenberg, Richard N; Ostrosky-Zeichner, Luis; Wu, Alan; Lyon, G Marshall; Apewokin, Senu; Nguyen, M Hong; Caliendo, Angela
2017-01-01
Background: Blood cultures (BC) are the diagnostic gold standard for candidemia, but sensitivity is <50%. T2 Candida (T2) is a novel, FDA-approved nanodiagnostic panel, which utilizes T2 magnetic resonance and a dedicated instrument to detect Candida within whole blood samples.
Methods: Candidemic adults were identified at 14 centers by diagnostic BC (dBC). Follow-up blood samples were collected from all patients (pts) for testing by T2 and companion BC (cBC). T2 was run in batch at a central lab; results are reported qualitatively for three groups of spp. (Candida albicans/C. tropicalis (CA/CT), C. glabrata/C. krusei (CG/CK), or C. parapsilosis (CP)). T2 and cBC were defined as positive (+) if they detected a sp. identified in dBC.
Results: 152 patients were enrolled (median age: 54 yrs (18–93); 54% (82) men). Candidemia risk factors included indwelling catheters (82%, 125), abdominal surgery (24%, 36), transplant (22%, 33), cancer (22%, 33), hemodialysis (17%, 26), and neutropenia (10%, 15). Mean times to Candida detection/spp. identification by dBC were 47/133 hours (2/5.5 d). dBC revealed CA (30%, 46), CG (29%, 45), CP (28%, 43), CT (11%, 17) and CK (3%, 4). Mean time to collection of T2/cBC was 62 hours (2.6 d). 74% (112) of patients received antifungal (AF) therapy prior to T2/cBC (mean: 55 hours (2.3 d)). Overall, T2 results were more likely than cBC to be + (P < 0.0001; see table below), a result driven by performance in AF-treated patients (P < 0.0001). T2 was more likely to be + among patients originally infected with CA (61% (28) vs. 20% (9); P = 0.001); there were trends toward higher positivity in patients infected with CT (59% (17) vs. 23% (4); P = 0.08) and CP (42% (18) vs. 28% (12); P = 0.26). T2 was + in 89% (32/36) of patients with + cBC.
Conclusion: T2 was sensitive for diagnosing candidemia at the time of + cBC, and it was significantly more likely to be + than cBC among AF-treated patients. T2 is an important advance in the diagnosis of candidemia, which is likely to be particularly useful in patients receiving prophylactic, pre-emptive or empiric AF therapy.
Test results, n (%):
Pt group (n)      T2+        T2-        cBC+       cBC-        T2+/cBC+   T2+/cBC-   T2-/cBC+   T2-/cBC-
All (152)         69 (45%)   83 (55%)   36 (24%)   116 (76%)   32 (21%)   37 (24%)   4 (3%)     79 (52%)
Prior AF (112)    55 (49%)   57 (51%)   23 (20%)   89 (80%)    20 (18%)   35 (31%)   3 (3%)     54 (48%)
No AF (40)        14 (35%)   26 (65%)   13 (32%)   27 (68%)    12 (30%)   2 (5%)     1 (2%)     25 (62%)
Disclosure: D. P. Kontoyiannis, Pfizer: Research Contractor, Research support and Speaker honorarium; Astellas: Research Contractor, Research support and Speaker honorarium; Merck: Honorarium, Speaker honorarium; Cidara: Honorarium, Speaker honorarium; Amplyx: Honorarium, Speaker honorarium; F2G: Honorarium, Speaker honorarium; L. Ostrosky-Zeichner, Astellas: Consultant and Grant Investigator, Consulting fee and Research grant; Merck: Scientific Advisor and Speaker’s Bureau, Consulting fee and Speaker honorarium; Pfizer: Grant Investigator and Speaker’s Bureau, Grant recipient and Speaker honorarium; Scynexis: Grant Investigator and Scientific Advisor, Consulting fee and Grant recipient; Cidara: Grant Investigator and Scientific Advisor, Consulting fee and Research grant; S. Apewokin, T2 biosystems: Investigator, Research support; Astellas: Scientific Advisor, Consulting fee
NASA Astrophysics Data System (ADS)
Adhi Pradana, Wisnu; Adiwijaya; Novia Wisesty, Untari
2018-03-01
Support Vector Machine (SVM) is a method that can be used for data classification; it separates data from two different classes with a hyperplane. In this study, an SVM-based system was built for Arabic speech recognition. Two kinds of speakers were tested: dependent speakers and independent speakers. The system achieved an accuracy of 85.32% for speaker-dependent recognition and 61.16% for speaker-independent recognition.
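A minimal sketch of such an SVM classifier, assuming MFCC utterance features and an RBF kernel (the abstract does not state which features or kernel were used); the file lists and labels below are hypothetical.

```python
# Illustrative SVM pipeline for utterance classification; feature choice and
# hyperparameters are assumptions, not the paper's exact setup.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Average MFCCs over time to get one fixed-length vector per utterance."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# 'train_files'/'train_labels' are hypothetical lists of utterance paths and
# word labels; speaker-dependent vs. speaker-independent evaluation only
# changes how speakers are split between training and test sets.
def train_svm(train_files, train_labels):
    X = np.vstack([mfcc_features(f) for f in train_files])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X, train_labels)
    return clf
```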
F-15B QuietSpike(TradeMark) Aeroservoelastic Flight Test Data Analysis
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
System identification or mathematical modelling is utilised in the aerospace community for the development of simulation models for robust control law design. These models are often described as linear, time-invariant processes and assumed to be uniform throughout the flight envelope. Nevertheless, it is well known that the underlying process is inherently nonlinear. The reason for utilising a linear approach has been due to the lack of a proper set of tools for the identification of nonlinear systems. Over the past several decades the controls and biomedical communities have made great advances in developing tools for the identification of nonlinear systems. These approaches are robust and readily applicable to aerospace systems. In this paper, we show the application of one such nonlinear system identification technique, structure detection, for the analysis of F-15B QuietSpike(TradeMark) aeroservoelastic flight test data. Structure detection is concerned with the selection of a subset of candidate terms that best describe the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modelling may be of critical importance for the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion which may save significant development time and costs. The objectives of this study are to demonstrate via analysis of F-15B QuietSpike(TradeMark) aeroservoelastic flight test data for several flight conditions (Mach number) that (i) linear models are inefficient for modelling aeroservoelastic data, (ii) nonlinear identification provides a parsimonious model description whilst providing a high percent fit for cross-validated data and (iii) the model structure and parameters vary as the flight condition is altered.
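As a rough illustration of structure detection, the sketch below builds candidate lagged and polynomial regressor terms from input/output data and retains only a small subset that best explains the measured output; orthogonal matching pursuit is used purely as a generic stand-in for the selection procedure used in the paper.

```python
# Sketch of structure detection: pick a sparse subset of candidate terms.
# The candidate set (lags, quadratic cross-products) and the selector are
# illustrative assumptions, not the authors' exact algorithm.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def build_candidates(u, y, lag=2):
    """Candidate terms: lagged inputs/outputs and their pairwise products."""
    rows = []
    for k in range(lag, len(y)):
        lin = [y[k - 1], y[k - 2], u[k - 1], u[k - 2]]
        quad = [a * b for i, a in enumerate(lin) for b in lin[i:]]
        rows.append(lin + quad)
    return np.array(rows), y[lag:]

def detect_structure(u, y, n_terms=4):
    X, target = build_candidates(u, y)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_terms).fit(X, target)
    selected = np.flatnonzero(omp.coef_)  # indices of retained candidate terms
    return selected, omp
```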
Watch what you say, your computer might be listening: A review of automated speech recognition
NASA Technical Reports Server (NTRS)
Degennaro, Stephen V.
1991-01-01
Spoken language is the most convenient and natural means by which people interact with each other and is, therefore, a promising candidate for human-machine interactions. Speech also offers an additional channel for hands-busy applications, complementing the use of motor output channels for control. Current speech recognition systems vary considerably across a number of important characteristics, including vocabulary size, speaking mode, training requirements for new speakers, robustness to acoustic environments, and accuracy. Algorithmically, these systems range from rule-based techniques through more probabilistic or self-learning approaches such as hidden Markov modeling and neural networks. This tutorial begins with a brief summary of the relevant features of current speech recognition systems and the strengths and weaknesses of the various algorithmic approaches.
NASA Technical Reports Server (NTRS)
Brenner, Malcolm; Shipp, Thomas
1988-01-01
In a study of the validity of eight candidate voice measures (fundamental frequency, amplitude, speech rate, frequency jitter, amplitude shimmer, Psychological Stress Evaluator scores, energy distribution, and a derived measure combining the above measures) for determining psychological stress, 17 males aged 21 to 35 were subjected to a tracking task on a microcomputer CRT while parameters of vocal production as well as heart rate were measured. Findings confirm those of earlier studies that increases in fundamental frequency, amplitude, and speech rate are found in speakers involved in extreme levels of stress. In addition, it was found that the same changes appear to occur in a regular fashion within a more subtle level of stress that may be characteristic, for example, of routine flying situations. None of the individual speech measures performed as robustly as did heart rate.
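For illustration, two of the candidate measures (mean fundamental frequency and a frame-based approximation of frequency jitter) could be computed from a recording roughly as follows; the study's actual measurement procedure is not reproduced here, and true jitter is normally measured cycle-to-cycle rather than from frame-level F0.

```python
# Rough, illustrative computation of mean F0 and a jitter-like measure.
import numpy as np
import librosa

def f0_and_jitter(path, fmin=75.0, fmax=400.0):
    y, sr = librosa.load(path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0 = f0[np.isfinite(f0)]            # keep voiced frames only
    periods = 1.0 / f0                  # approximate cycle durations per frame
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    return f0.mean(), jitter            # mean F0 in Hz, relative jitter
```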
NASA Astrophysics Data System (ADS)
Sachau, D.; Jukkert, S.; Hövelmann, N.
2016-08-01
This paper presents the development and experimental validation of an ANC (active noise control) system designed for a particular application in the exhaust line of a submarine. Tonal components of the exhaust noise in the frequency band from 75 Hz to 120 Hz are reduced by more than 30 dB. The ANC system is based on the feedforward leaky FxLMS algorithm. The observability of the sound pressure in the standing wave field is ensured by using two error microphones. The noninvasive online plant identification method is used to increase the robustness of the controller. Online plant identification is extended by a time-varying convergence gain to improve performance in the presence of slight errors in the frequency of the reference signal.
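A minimal single-channel leaky FxLMS loop is sketched below; the actual system uses two error microphones and noninvasive online plant identification, and here the secondary path acting on the anti-noise is idealized as unity, so this is illustrative only.

```python
# Single-channel leaky FxLMS sketch (numpy).
import numpy as np

def leaky_fxlms(x, d, s_hat, n_taps=64, mu=1e-3, leak=1e-4):
    """x: reference signal, d: disturbance at the error mic, s_hat: secondary-path estimate."""
    w = np.zeros(n_taps)                  # adaptive FIR weights
    xf = np.convolve(x, s_hat)[:len(x)]   # filtered-x: reference through s_hat
    xbuf = np.zeros(n_taps)               # recent reference samples
    fbuf = np.zeros(n_taps)               # recent filtered-x samples
    e_hist = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        fbuf = np.roll(fbuf, 1); fbuf[0] = xf[n]
        y = w @ xbuf                      # anti-noise sample
        e = d[n] - y                      # residual; secondary path on y idealized as unity
        w = (1.0 - mu * leak) * w + mu * e * fbuf   # leaky FxLMS weight update
        e_hist[n] = e
    return w, e_hist
```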
Identification and MS-assisted interpretation of genetically influenced NMR signals in human plasma
2013-01-01
Nuclear magnetic resonance spectroscopy (NMR) provides robust readouts of many metabolic parameters in one experiment. However, identification of clinically relevant markers in 1H NMR spectra is a major challenge. Association of NMR-derived quantities with genetic variants can uncover biologically relevant metabolic traits. Using NMR data of plasma samples from 1,757 individuals from the KORA study together with 655,658 genetic variants, we show that ratios between NMR intensities at two chemical shift positions can provide informative and robust biomarkers. We report seven loci of genetic association with NMR-derived traits (APOA1, CETP, CPS1, GCKR, FADS1, LIPC, PYROXD2) and characterize these traits biochemically using mass spectrometry. These ratios may now be used in clinical studies. PMID:23414815
The Research Triangle Park Speakers Bureau page is a free resource that schools, universities, and community groups in the Raleigh-Durham-Chapel Hill, N.C. area can use to request speakers and find educational resources.
ERIC Educational Resources Information Center
Mitchell, Peter; Robinson, Elizabeth J.; Thompson, Doreen E.
1999-01-01
Three experiments examined 3- to 6-year-olds' ability to use a speaker's utterance based on false belief to identify which of several referents was intended. Found that many 4- to 5-year-olds performed correctly only when it was unnecessary to consider the speaker's belief. When the speaker gave an ambiguous utterance, many 3- to 6-year-olds…
Speaker Introductions at Internal Medicine Grand Rounds: Forms of Address Reveal Gender Bias.
Files, Julia A; Mayer, Anita P; Ko, Marcia G; Friedrich, Patricia; Jenkins, Marjorie; Bryan, Michael J; Vegunta, Suneela; Wittich, Christopher M; Lyle, Melissa A; Melikian, Ryan; Duston, Trevor; Chang, Yu-Hui H; Hayes, Sharonne N
2017-05-01
Gender bias has been identified as one of the drivers of gender disparity in academic medicine. Bias may be reinforced by gender-subordinating language or differential use of formality in forms of address. Professional titles may influence the perceived expertise and authority of the referenced individual. The objective of this study was to examine how professional titles were used in same- and mixed-gender speaker introductions at Internal Medicine Grand Rounds (IMGR). A retrospective observational study of video-archived speaker introductions at consecutive IMGR was conducted at two different locations (Arizona, Minnesota) of an academic medical center. Introducers and speakers at IMGR were physician and scientist peers holding MD, PhD, or MD/PhD degrees. The primary outcome was whether or not a speaker's professional title was used during the first form of address during speaker introductions at IMGR. As secondary outcomes, we evaluated whether or not the speaker's professional title was used in any form of address during the introduction. Three hundred twenty-one forms of address were analyzed. Female introducers were more likely than male introducers to use professional titles when introducing any speaker during the first form of address (96.2% [102/106] vs. 65.6% [141/215]; p < 0.001). Female dyads utilized formal titles during the first form of address 97.8% (45/46) of the time, compared with 72.4% (110/152) for male dyads (p = 0.007). In mixed-gender dyads where the introducer was female and the speaker male, formal titles were used 95.0% (57/60) of the time. Male introducers of female speakers utilized professional titles 49.2% (31/63) of the time (p < 0.001). In this study, women introduced by men at IMGR were less likely to be addressed by professional title than were men introduced by men. Differential formality in speaker introductions may amplify isolation, marginalization, and professional discomfiture expressed by women faculty in academic medicine.
Improving Leishmania Species Identification in Different Types of Samples from Cutaneous Lesions
Cruz-Barrera, Mónica L.; Ovalle-Bracho, Clemencia; Ortegon-Vergara, Viviana; Pérez-Franco, Jairo E.
2015-01-01
The discrimination of Leishmania species from patient samples has epidemiological and clinical relevance. In this study, different gene target PCR-restriction fragment length polymorphism (RFLP) protocols were evaluated for their robustness as Leishmania species discriminators in 61 patients with cutaneous leishmaniasis. We modified the hsp70-PCR-RFLP protocol and found it to be the most reliable protocol for species identification. PMID:25609727
Substructure System Identification for Finite Element Model Updating
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.; Blades, Eric L.
1997-01-01
This report summarizes research conducted under a NASA grant on the topic 'Substructure System Identification for Finite Element Model Updating.' The research concerns ongoing development of the Substructure System Identification Algorithm (SSID Algorithm), a system identification algorithm that can be used to obtain mathematical models of substructures, like Space Shuttle payloads. In the present study, particular attention was given to the following topics: making the algorithm robust to noisy test data, extending the algorithm to accept experimental FRF data that covers a broad frequency bandwidth, and developing a test analytical model (TAM) for use in relating test data to reduced-order finite element models.
Implementation of facial recognition with Microsoft Kinect v2 sensor for patient verification.
Silverstein, Evan; Snyder, Michael
2017-06-01
The aim of this study was to present a straightforward implementation of facial recognition using the Microsoft Kinect v2 sensor for patient identification in a radiotherapy setting. A facial recognition system was created with the Microsoft Kinect v2 using a facial mapping library distributed with the Kinect v2 SDK as a basis for the algorithm. The system extracts 31 fiducial points representing various facial landmarks which are used in both the creation of a reference data set and subsequent evaluations of real-time sensor data in the matching algorithm. To test the algorithm, a database of 39 faces was created, each with 465 vectors derived from the fiducial points, and a one-to-one matching procedure was performed to obtain sensitivity and specificity data of the facial identification system. ROC curves were plotted to display system performance and identify thresholds for match determination. In addition, system performance as a function of ambient light intensity was tested. Using optimized parameters in the matching algorithm, the sensitivity of the system for 5299 trials was 96.5% and the specificity was 96.7%. The results indicate a fairly robust methodology for verifying, in real time, a specific face through comparison with a precollected reference data set. In its current implementation, the process of data collection for each face and subsequent matching session averaged approximately 30 s, which may be too onerous to provide a realistic supplement to patient identification in a clinical setting. Despite the time commitment, the data collection process was well tolerated by all participants and most robust when consistent ambient light conditions were maintained across both the reference recording session and subsequent real-time identification sessions. A facial recognition system can be implemented for patient identification using the Microsoft Kinect v2 sensor and the distributed SDK. In its present form, the system is accurate, if time consuming, and further iterations of the method could provide a robust, easy to implement, and cost-effective supplement to traditional patient identification methods. © 2017 American Association of Physicists in Medicine.
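A simplified sketch of the one-to-one matching step might look as follows, assuming the 31 landmark coordinates have already been extracted by the Kinect SDK; the distance metric and threshold handling are illustrative, not the authors' exact algorithm.

```python
# Illustrative matching on inter-landmark distance vectors (31 points -> 465 distances).
import numpy as np

def pairwise_distances(points):
    """points: (31, 3) array of facial landmark coordinates -> condensed distance vector."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return dists[iu]                      # 31*30/2 = 465 values, as in the paper

def is_match(live_points, reference_vector, threshold):
    live_vector = pairwise_distances(live_points)
    score = np.linalg.norm(live_vector - reference_vector)
    return score < threshold              # threshold chosen from the ROC operating point
```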
Koenig, Melissa A; Echols, Catharine H
2003-04-01
The four studies reported here examine whether 16-month-old infants' responses to true and false utterances interact with their knowledge of human agents. In Study 1, infants heard repeated instances either of true or false labeling of common objects; labels came from an active human speaker seated next to the infant. In Study 2, infants experienced the same stimuli and procedure; however, we replaced the human speaker of Study 1 with an audio speaker in the same location. In Study 3, labels came from a hidden audio speaker. In Study 4, a human speaker labeled the objects while facing away from them. In Study 1, infants looked significantly longer to the human agent when she falsely labeled than when she truthfully labeled the objects. Infants did not show a similar pattern of attention for the audio speaker of Study 2, the silent human of Study 3 or the facing-backward speaker of Study 4. In fact, infants who experienced truthful labeling looked significantly longer to the facing-backward labeler of Study 4 than to true labelers of the other three contexts. Additionally, infants were more likely to correct false labels when produced by the human labeler of Study 1 than in any of the other contexts. These findings suggest, first, that infants are developing a critical conception of other human speakers as truthful communicators, and second, that infants understand that human speakers may provide uniquely useful information when a word fails to match its referent. These findings are consistent with the view that infants can recognize differences in knowledge and that such differences can be based on differences in the availability of perceptual experience.
Zheng, Dandan; Todor, Dorin A
2011-01-01
In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements with simple and practical implementation to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor, to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection), based on the gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm of the conventional method. In gel phantom (more realistic and tissue-like), our method maintained its level of accuracy while the uncertainty of the conventional method was 3.4mm on average with maximum values of over 10mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
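A schematic version of the physical-measurement idea is sketched below, with a hypothetical affine transformation factor standing in for the calibration that the authors establish once from probe and template measurements; the real geometry and coordinate conventions are simplified.

```python
# Sketch: tip position from residual needle length plus a pre-established transform.
import numpy as np

def needle_tip_on_image(total_length, residual_length, template_hole_xy,
                        transform_scale, transform_offset):
    """Insertion depth = total needle length minus the residual length outside the template."""
    depth = total_length - residual_length
    # Map (template hole x, template hole y, depth) into TRUS image coordinates
    # using an affine factor established off-line from probe/template measurements.
    tip_template = np.array([template_hole_xy[0], template_hole_xy[1], depth])
    return transform_scale * tip_template + transform_offset
```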
Autonomous frequency domain identification: Theory and experiment
NASA Technical Reports Server (NTRS)
Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.
1989-01-01
The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p giving rise to the output y. Spectral estimation (ĥ = P_uy/P_uu) is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty δ_m = p − p̂ is then estimated by the cross-spectral estimate δ̂ = P_ue/P_uu, where e = y − ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve-fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate δ̂ of the additive uncertainty δ_m are subsequently available for optimization of robust controller performance and stability.
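The spectral-estimation steps above can be sketched with standard cross- and auto-spectral estimates; the parametric curve fit and the PMM order estimate are not shown, and the Welch settings are arbitrary.

```python
# Sketch of the nonparametric plant estimate and additive-uncertainty estimate.
import numpy as np
from scipy.signal import csd, welch

def spectral_plant_estimate(u, y, fs, nperseg=1024):
    f, Puu = welch(u, fs=fs, nperseg=nperseg)
    _, Puy = csd(u, y, fs=fs, nperseg=nperseg)
    return f, Puy / Puu                   # h_hat = P_uy / P_uu

def additive_uncertainty(u, y, y_model, fs, nperseg=1024):
    e = y - y_model                       # output error of the fitted parametric model
    f, Puu = welch(u, fs=fs, nperseg=nperseg)
    _, Pue = csd(u, e, fs=fs, nperseg=nperseg)
    return f, Pue / Puu                   # delta_hat = P_ue / P_uu
```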
Gu, Jinghua; Xuan, Jianhua; Riggins, Rebecca B; Chen, Li; Wang, Yue; Clarke, Robert
2012-08-01
Identification of transcriptional regulatory networks (TRNs) is of significant importance in computational biology for cancer research, providing a critical building block to unravel disease pathways. However, existing methods for TRN identification suffer from the inclusion of excessive 'noise' in microarray data and false-positives in binding data, especially when applied to human tumor-derived cell line studies. More robust methods that can counteract the imperfection of data sources are therefore needed for reliable identification of TRNs in this context. In this article, we propose to establish a link between the quality of one target gene to represent its regulator and the uncertainty of its expression to represent other target genes. Specifically, an outlier sum statistic was used to measure the aggregated evidence for regulation events between target genes and their corresponding transcription factors. A Gibbs sampling method was then developed to estimate the marginal distribution of the outlier sum statistic, hence, to uncover underlying regulatory relationships. To evaluate the effectiveness of our proposed method, we compared its performance with that of an existing sampling-based method using both simulation data and yeast cell cycle data. The experimental results show that our method consistently outperforms the competing method in different settings of signal-to-noise ratio and network topology, indicating its robustness for biological applications. Finally, we applied our method to breast cancer cell line data and demonstrated its ability to extract biologically meaningful regulatory modules related to estrogen signaling and action in breast cancer. The Gibbs sampler MATLAB package is freely available at http://www.cbil.ece.vt.edu/software.htm. xuan@vt.edu Supplementary data are available at Bioinformatics online.
Speech recognition: Acoustic-phonetic knowledge acquisition and representation
NASA Astrophysics Data System (ADS)
Zue, Victor W.
1988-09-01
The long-term research goal is to develop and implement speaker-independent continuous speech recognition systems. It is believed that the proper utilization of speech-specific knowledge is essential for such advanced systems. This research is thus directed toward the acquisition, quantification, and representation of acoustic-phonetic and lexical knowledge, and the application of this knowledge to speech recognition algorithms. In addition, we are exploring new speech recognition alternatives based on artificial intelligence and connectionist techniques. We developed a statistical model for predicting the acoustic realization of stop consonants in various positions in the syllable template. A unification-based grammatical formalism was developed for incorporating this model into the lexical access algorithm. We provided an information-theoretic justification for the hierarchical structure of the syllable template. We analyzed segment durations for vowels and fricatives in continuous speech. Based on contextual information, we developed durational models for vowels and fricatives that account for over 70 percent of the variance, using data from multiple, unknown speakers. We rigorously evaluated the ability of human spectrogram readers to identify stop consonants spoken by many talkers and in a variety of phonetic contexts. Incorporating the declarative knowledge used by the readers, we developed a knowledge-based system for stop identification. We achieved system performance comparable to that of the readers.
Wong, Raymond
2013-01-01
Voice biometrics is a physiological characteristic: each individual's voice is distinct. Due to this uniqueness, voice classification has found useful applications in classifying speakers' gender, mother tongue or ethnicity (accent), emotional state, identity verification, verbal command control, and so forth. In this paper, we adopt a new preprocessing method named Statistical Feature Extraction (SFX) for extracting important features for training a classification model, based on piecewise transformation treating an audio waveform as a time-series. Using SFX we can faithfully remodel statistical characteristics of the time-series; together with spectral analysis, a substantial number of features is extracted in combination. An ensemble is utilized in selecting only the influential features to be used in classification model induction. We focus on the comparison of effects of various popular data mining algorithms on multiple datasets. Our experiment consists of classification tests over four typical categories of human voice data, namely Female and Male, Emotional Speech, Speaker Identification, and Language Recognition. The experiments yield encouraging results supporting the fact that heuristically choosing significant features from both time and frequency domains indeed produces better performance in voice classification than traditional signal processing techniques alone, like wavelets and LPC-to-CC. PMID:24288684
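The exact SFX definition is not given in the abstract, so the sketch below only approximates the idea: piecewise statistics over segments of the waveform combined with a couple of spectral features, with tree-ensemble importance ranking standing in for the ensemble-based feature selection.

```python
# Illustrative approximation of piecewise statistical + spectral features
# and ensemble feature ranking; not the paper's exact SFX pipeline.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def piecewise_stats(y, n_pieces=10):
    feats = []
    for piece in np.array_split(y, n_pieces):
        feats += [piece.mean(), piece.std(), piece.min(), piece.max()]
    return feats

def voice_features(path):
    y, sr = librosa.load(path, sr=None)
    spectral = [
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(),
        librosa.feature.zero_crossing_rate(y).mean(),
    ]
    return np.array(piecewise_stats(y) + spectral)

def rank_features(X, labels):
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    return np.argsort(forest.feature_importances_)[::-1]   # most influential first
```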
Pitch perception and production in congenital amusia: Evidence from Cantonese speakers.
Liu, Fang; Chan, Alice H D; Ciocca, Valter; Roquet, Catherine; Peretz, Isabelle; Wong, Patrick C M
2016-07-01
This study investigated pitch perception and production in speech and music in individuals with congenital amusia (a disorder of musical pitch processing) who are native speakers of Cantonese, a tone language with a highly complex tonal system. Sixteen Cantonese-speaking congenital amusics and 16 controls performed a set of lexical tone perception, production, singing, and psychophysical pitch threshold tasks. Their tone production accuracy and singing proficiency were subsequently judged by independent listeners, and subjected to acoustic analyses. Relative to controls, amusics showed impaired discrimination of lexical tones in both speech and non-speech conditions. They also received lower ratings for singing proficiency, producing larger pitch interval deviations and making more pitch interval errors compared to controls. Demonstrating higher pitch direction identification thresholds than controls for both speech syllables and piano tones, amusics nevertheless produced native lexical tones with comparable pitch trajectories and intelligibility as controls. Significant correlations were found between pitch threshold and lexical tone perception, music perception and production, but not between lexical tone perception and production for amusics. These findings provide further evidence that congenital amusia is a domain-general language-independent pitch-processing deficit that is associated with severely impaired music perception and production, mildly impaired speech perception, and largely intact speech production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouchard, Kristofer E.; Conant, David F.; Anumanchipalli, Gopala K.
A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial-especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics.
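A hedged sketch of the NMF step follows, assuming the vocal tract measurements have been arranged as a nonnegative trial-by-feature matrix; the paper's preprocessing and downstream classifier are not specified here, so logistic regression is used only as a placeholder.

```python
# Sketch: factor vocal-tract measurements into shape bases, classify vowels
# from the per-trial weights. Preprocessing and classifier are assumptions.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

def vowel_classifier_from_tract(X_tract, vowel_labels, n_bases=8):
    X = X_tract - X_tract.min()           # NMF requires nonnegative data
    nmf = NMF(n_components=n_bases, init="nndsvda", max_iter=500)
    W = nmf.fit_transform(X)              # per-trial weights on the shape bases
    clf = LogisticRegression(max_iter=1000).fit(W, vowel_labels)
    return nmf, clf
```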
Flanagan, Sheila; Zorilă, Tudor-Cătălin; Stylianou, Yannis; Moore, Brian C J
2018-01-01
Auditory processing disorder (APD) may be diagnosed when a child has listening difficulties but has normal audiometric thresholds. For adults with normal hearing and with mild-to-moderate hearing impairment, an algorithm called spectral shaping with dynamic range compression (SSDRC) has been shown to increase the intelligibility of speech when background noise is added after the processing. Here, we assessed the effect of such processing using 8 children with APD and 10 age-matched control children. The loudness of the processed and unprocessed sentences was matched using a loudness model. The task was to repeat back sentences produced by a female speaker when presented with either speech-shaped noise (SSN) or a male competing speaker (CS) at two signal-to-background ratios (SBRs). Speech identification was significantly better with SSDRC processing than without, for both groups. The benefit of SSDRC processing was greater for the SSN than for the CS background. For the SSN, scores were similar for the two groups at both SBRs. For the CS, the APD group performed significantly more poorly than the control group. The overall improvement produced by SSDRC processing could be useful for enhancing communication in a classroom where the teacher's voice is broadcast using a wireless system.
An acoustic comparison of two women's infant- and adult-directed speech
NASA Astrophysics Data System (ADS)
Andruski, Jean; Katz-Gershon, Shiri
2003-04-01
In addition to having prosodic characteristics that are attractive to infant listeners, infant-directed (ID) speech shares certain characteristics of adult-directed (AD) clear speech, such as increased acoustic distance between vowels, that might be expected to make ID speech easier for adults to perceive in noise than AD conversational speech. However, perceptual tests of two women's ID productions by Andruski and Bessega [J. Acoust. Soc. Am. 112, 2355] showed that is not always the case. In a word identification task that compared ID speech with AD clear and conversational speech, one speaker's ID productions were less well-identified than AD clear speech, but better identified than AD conversational speech. For the second woman, ID speech was the least accurately identified of the three speech registers. For both speakers, hard words (infrequent words with many lexical neighbors) were also at an increased disadvantage relative to easy words (frequent words with few lexical neighbors) in speech registers that were less accurately perceived. This study will compare several acoustic properties of these women's productions, including pitch and formant-frequency characteristics. Results of the acoustic analyses will be examined with the original perceptual results to suggest reasons for differences in listener's accuracy in identifying these two women's ID speech in noise.
Crossmodal plasticity in the fusiform gyrus of late blind individuals during voice recognition.
Hölig, Cordula; Föcker, Julia; Best, Anna; Röder, Brigitte; Büchel, Christian
2014-12-01
Blind individuals are trained in identifying other people through voices. In congenitally blind adults the anterior fusiform gyrus has been shown to be active during voice recognition. Such crossmodal changes have been associated with a superiority of blind adults in voice perception. The key question of the present functional magnetic resonance imaging (fMRI) study was whether visual deprivation that occurs in adulthood is followed by similar adaptive changes of the voice identification system. Late blind individuals and matched sighted participants were tested in a priming paradigm, in which two voice stimuli were subsequently presented. The prime (S1) and the target (S2) were either from the same speaker (person-congruent voices) or from two different speakers (person-incongruent voices). Participants had to classify the S2 as either coming from an old or a young person. Only in late blind but not in matched sighted controls, the activation in the anterior fusiform gyrus was modulated by voice identity: late blind volunteers showed an increase of the BOLD signal in response to person-incongruent compared with person-congruent trials. These results suggest that the fusiform gyrus adapts to input of a new modality even in the mature brain and thus demonstrate an adult type of crossmodal plasticity. Copyright © 2014 Elsevier Inc. All rights reserved.
Brain systems mediating voice identity processing in blind humans.
Hölig, Cordula; Föcker, Julia; Best, Anna; Röder, Brigitte; Büchel, Christian
2014-09-01
Blind people rely more on vocal cues when they recognize a person's identity than sighted people. Indeed, a number of studies have reported better voice recognition skills in blind than in sighted adults. The present functional magnetic resonance imaging study investigated changes in the functional organization of neural systems involved in voice identity processing following congenital blindness. A group of congenitally blind individuals and matched sighted control participants were tested in a priming paradigm, in which two voice stimuli (S1, S2) were subsequently presented. The prime (S1) and the target (S2) were either from the same speaker (person-congruent voices) or from two different speakers (person-incongruent voices). Participants had to classify the S2 as either an old or a young person. Person-incongruent voices (S2) compared with person-congruent voices elicited an increased activation in the right anterior fusiform gyrus in congenitally blind individuals but not in matched sighted control participants. In contrast, only matched sighted controls showed a higher activation in response to person-incongruent compared with person-congruent voices (S2) in the right posterior superior temporal sulcus. These results provide evidence for crossmodal plastic changes of the person identification system in the brain after visual deprivation. Copyright © 2014 Wiley Periodicals, Inc.
DARPA super resolution vision system (SRVS) robust turbulence data collection and analysis
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Thompson, Roger; Tofsted, David; D'Arcy, Sean
2014-05-01
Atmospheric turbulence degrades the range performance of military imaging systems, specifically those intended for long range, ground-to-ground target identification. The recent Defense Advanced Research Projects Agency (DARPA) Super Resolution Vision System (SRVS) program developed novel post-processing system components to mitigate turbulence effects on visible and infrared sensor systems. As part of the program, the US Army RDECOM CERDEC NVESD and the US Army Research Laboratory Computational & Information Sciences Directorate (CISD) collaborated on a field collection and atmospheric characterization of a two-handed weapon identification dataset through a diurnal cycle for a variety of ranges and sensor systems. The robust dataset is useful in developing new models and simulations of turbulence, as well as providing a standard baseline for comparison of sensor systems in the presence of turbulence degradation and mitigation. In this paper, we describe the field collection and atmospheric characterization and present the robust dataset to the defense, sensing, and security community. In addition, we present an expanded model validation of turbulence degradation using the field collected video sequences.
Performance analysis of robust road sign identification
NASA Astrophysics Data System (ADS)
Ali, Nursabillilah M.; Mustafah, Y. M.; Rashid, N. K. A. M.
2013-12-01
This study describes a performance analysis of a robust system for road sign identification that incorporates two stages with different algorithms: HSV color filtering for detection and PCA for recognition. The system is able to detect the three standard sign colors, namely red, yellow and blue. The hypothesis of the study is that road signs can be detected and identified even in the presence of occlusions and rotational changes. PCA is a feature extraction technique that reduces dimensionality; it has been used in many application areas, so the sign image can be readily recognized and identified by the PCA method. The experimental results show that the HSV stage is robust for road sign detection, with minimum success rates of 88% for non-occluded images and 77% for partially occluded images. Successful recognition rates using PCA are in the range of 94-98%, with all classes recognized successfully at occlusion levels between 5% and 10%.
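A two-stage sketch matching this description is shown below, with illustrative HSV thresholds for red signs and a nearest-neighbour classifier on PCA features; the paper's actual thresholds and classifier are not given in the abstract.

```python
# Stage 1: HSV color filtering for candidate detection; Stage 2: PCA features
# for recognition. Threshold values and classifier are assumptions.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def red_sign_mask(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges (illustrative values).
    lower = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    return cv2.bitwise_or(lower, upper)

def train_recognizer(sign_images, labels, n_components=30):
    """sign_images: list of equally sized grayscale crops of detected signs."""
    X = np.array([img.flatten() for img in sign_images], dtype=float)
    pca = PCA(n_components=n_components).fit(X)
    clf = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(X), labels)
    return pca, clf
```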
Northern Command Speakers Program: The U.S. Northern Command Speaker's Program works to increase face-to-face contact with our public to help build and sustain public understanding of our command missions and...
Speakers of Different Languages Process the Visual World Differently
Chabal, Sarah; Marian, Viorica
2015-01-01
Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct linguistic input, showing that language is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. PMID:26030171
Learning foreign labels from a foreign speaker: the role of (limited) exposure to a second language.
Akhtar, Nameera; Menjivar, Jennifer; Hoicka, Elena; Sabbagh, Mark A
2012-11-01
Three- and four-year-olds (N = 144) were introduced to novel labels by an English speaker and a foreign speaker (of Nordish, a made-up language), and were asked to endorse one of the speaker's labels. Monolingual English-speaking children were compared to bilingual children and English-speaking children who were regularly exposed to a language other than English. All children tended to endorse the English speaker's labels when asked 'What do you call this?', but when asked 'What do you call this in Nordish?', children with exposure to a second language were more likely to endorse the foreign label than monolingual and bilingual children. The findings suggest that, at this age, exposure to, but not necessarily immersion in, more than one language may promote the ability to learn foreign words from a foreign speaker.
Byers-Heinlein, Krista; Chen, Ke Heng; Xu, Fei
2014-03-01
Languages function as independent and distinct conventional systems, and so each language uses different words to label the same objects. This study investigated whether 2-year-old children recognize that speakers of their native language and speakers of a foreign language do not share the same knowledge. Two groups of children unfamiliar with Mandarin were tested: monolingual English-learning children (n=24) and bilingual children learning English and another language (n=24). An English speaker taught children the novel label fep. On English mutual exclusivity trials, the speaker asked for the referent of a novel label (wug) in the presence of the fep and a novel object. Both monolingual and bilingual children disambiguated the reference of the novel word using a mutual exclusivity strategy, choosing the novel object rather than the fep. On similar trials with a Mandarin speaker, children were asked to find the referent of a novel Mandarin label kuò. Monolinguals again chose the novel object rather than the object with the English label fep, even though the Mandarin speaker had no access to conventional English words. Bilinguals did not respond systematically to the Mandarin speaker, suggesting that they had enhanced understanding of the Mandarin speaker's ignorance of English words. The results indicate that monolingual children initially expect words to be conventionally shared across all speakers-native and foreign. Early bilingual experience facilitates children's discovery of the nature of foreign language words. Copyright © 2013 Elsevier Inc. All rights reserved.
Content-specific coordination of listeners' to speakers' EEG during communication.
Kuhlen, Anna K; Allefeld, Carsten; Haynes, John-Dylan
2012-01-01
Cognitive neuroscience has recently begun to extend its focus from the isolated individual mind to two or more individuals coordinating with each other. In this study we uncover a coordination of neural activity between the ongoing electroencephalogram (EEG) of two people-a person speaking and a person listening. The EEG of one set of twelve participants ("speakers") was recorded while they were narrating short stories. The EEG of another set of twelve participants ("listeners") was recorded while watching audiovisual recordings of these stories. Specifically, listeners watched the superimposed videos of two speakers simultaneously and were instructed to attend either to one or the other speaker. This allowed us to isolate neural coordination due to processing the communicated content from the effects of sensory input. We find several neural signatures of communication: First, the EEG is more similar among listeners attending to the same speaker than among listeners attending to different speakers, indicating that listeners' EEG reflects content-specific information. Secondly, listeners' EEG activity correlates with the attended speakers' EEG, peaking at a time delay of about 12.5 s. This correlation takes place not only between homologous, but also between non-homologous brain areas in speakers and listeners. A semantic analysis of the stories suggests that listeners coordinate with speakers at the level of complex semantic representations, so-called "situation models". With this study we link a coordination of neural activity between individuals directly to verbally communicated information.
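The delayed speaker-listener coupling described above can be illustrated with a simple lagged correlation between two equally long, preprocessed signals; the study's actual preprocessing and statistics are not reproduced here.

```python
# Illustrative lagged-correlation analysis; assumes both signals share the
# same sampling rate and are longer than the maximum lag considered.
import numpy as np

def peak_lag_correlation(speaker_sig, listener_sig, fs, max_lag_s=20.0):
    lags = np.arange(0, int(max_lag_s * fs))
    corrs = []
    for lag in lags:
        a = speaker_sig[:len(speaker_sig) - lag]
        b = listener_sig[lag:lag + len(a)]
        corrs.append(np.corrcoef(a, b)[0, 1])
    best = int(np.argmax(corrs))
    return lags[best] / fs, corrs[best]   # delay in seconds, peak correlation
```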
Choi, Yaelin
2017-01-01
Purpose The present study aimed to compare acoustic models of speech intelligibility in individuals with the same disease (Parkinson's disease [PD]) and presumably similar underlying neuropathologies but with different native languages (American English [AE] and Korean). Method A total of 48 speakers from the 4 speaker groups (AE speakers with PD, Korean speakers with PD, healthy English speakers, and healthy Korean speakers) were asked to read a paragraph in their native languages. Four acoustic variables were analyzed: acoustic vowel space, voice onset time contrast scores, normalized pairwise variability index, and articulation rate. Speech intelligibility scores were obtained from scaled estimates of sentences extracted from the paragraph. Results The findings indicated that the multiple regression models of speech intelligibility were different in Korean and AE, even with the same set of predictor variables and with speakers matched on speech intelligibility across languages. Analysis of the descriptive data for the acoustic variables showed the expected compression of the vowel space in speakers with PD in both languages, lower normalized pairwise variability index scores in Korean compared with AE, and no differences within or across language in articulation rate. Conclusions The results indicate that the basis of an intelligibility deficit in dysarthria is likely to depend on the native language of the speaker and listener. Additional research is required to explore other potential predictor variables, as well as additional language comparisons to pursue cross-linguistic considerations in classification and diagnosis of dysarthria types. PMID:28821018
An adaptive deep learning approach for PPG-based identification.
Jindal, V; Birjandtalab, J; Pouyan, M Baran; Nourani, M
2016-08-01
Wearable biosensors have become increasingly popular in healthcare due to their capabilities for low cost and long term biosignal monitoring. This paper presents a novel two-stage technique to offer biometric identification using these biosensors through Deep Belief Networks and Restricted Boltzman Machines. Our identification approach improves robustness in current monitoring procedures within clinical, e-health and fitness environments using Photoplethysmography (PPG) signals through deep learning classification models. The approach is tested on TROIKA dataset using 10-fold cross validation and achieved an accuracy of 96.1%.
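A hedged sketch of the two-stage idea follows, using scikit-learn's BernoulliRBM as a single-layer stand-in for the Deep Belief Network; PPG beat segmentation and the TROIKA data handling are assumed to have been done already.

```python
# Unsupervised RBM feature learning followed by a supervised classifier;
# a simplified stand-in for the paper's DBN/RBM architecture.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import minmax_scale

def ppg_identifier(X_beats, subject_ids):
    """X_beats: (n_beats, n_samples) segmented PPG beats; subject_ids: identity labels."""
    X = minmax_scale(X_beats, axis=1)     # RBM expects values in [0, 1]
    model = Pipeline([
        ("rbm", BernoulliRBM(n_components=100, learning_rate=0.05, n_iter=20)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    return model.fit(X, subject_ids)
```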
The ICSI+ Multilingual Sentence Segmentation System
2006-01-01
The ASR output needs to be enriched with information additional to words, such as speaker diarization, sentence segmentation, or story segmentation... The output of a speaker diarization system is considered as well. We first detail extraction of the prosodic features, and then describe the classification..., which also takes into account the speaker turns estimated by the diarization system. In addition to the ... model, speaker turn unigrams, trigram...
Speaker Segmentation and Clustering Using Gender Information
2006-02-01
Gender information is used in the first stages of segmentation and in the clustering of opposite-gender speaker files for diarization of news broadcasts. (Brian M. Ore, General Dynamics; Air Force Research Laboratory report AFRL-HE-WP-TP-2006-0026, February 2006.)
The 2016 NIST Speaker Recognition Evaluation
2017-08-20
The 2016 NIST Speaker Recognition Evaluation. Seyed Omid Sadjadi, Timothée Kheyrkhah, Audrey Tong, Craig Greenberg, Douglas Reynolds, Elliot... recent in an ongoing series of speaker recognition evaluations (SRE) to foster research in robust text-independent speaker recognition, as well as... online evaluation platform, a fixed training data condition, more variability in test segment duration (uniformly distributed between 10 s and 60 s)...
Magnetic Fluids Deliver Better Speaker Sound Quality
NASA Technical Reports Server (NTRS)
2015-01-01
In the 1960s, Glenn Research Center developed a magnetized fluid to draw rocket fuel into spacecraft engines while in space. Sony has incorporated the technology into its line of slim speakers by using the fluid as a liquid stand-in for the speaker's dampers, which prevent the speaker from blowing out while adding stability. The fluid helps to deliver more volume and hi-fidelity sound while reducing distortion.
Special Observance Planning Guide
2015-11-01
Finding the right speaker for an event can be a challenge. Many speakers are recommended based on word-of-mouth or through a group connected to...An unprepared, rambling speaker or one who intentionally or unintentionally attacks a group or its members can be extremely damaging to a program...Don’t assume that an organizational senior leader is an adequate speaker based on position, rank, and/or affiliation with a reference group
ERIC Educational Resources Information Center
Bressmann, Tim; Flowers, Heather; Wong, Willy; Irish, Jonathan C.
2010-01-01
The goal of this study was to quantitatively describe aspects of coronal tongue movement in different anatomical regions of the tongue. Four normal speakers and a speaker with partial glossectomy read four repetitions of a metronome-paced poem. Their tongue movement was recorded in four coronal planes using two-dimensional B-mode ultrasound…
ERIC Educational Resources Information Center
McKain, Danielle R.
2012-01-01
The term real world is often used in mathematics education, yet the definition of real-world problems and how to incorporate them in the classroom remains ambiguous. One way real-world connections can be made is through guest speakers. Guest speakers can offer different perspectives and share knowledge about various subject areas, yet the impact…
When pitch Accents Encode Speaker Commitment: Evidence from French Intonation.
Michelas, Amandine; Portes, Cristel; Champagne-Lavau, Maud
2016-06-01
Recent studies on a variety of languages have shown that a speaker's commitment to the propositional content of his or her utterance can be encoded, among other strategies, by pitch accent types. Since prior research mainly relied on lexical-stress languages, our understanding of how speakers of a non-lexical-stress language encode speaker commitment is limited. This paper explores the contribution of the last pitch accent of an intonation phrase to convey speaker commitment in French, a language that has stress at the phrasal level as well as a restricted set of pitch accents. In a production experiment, participants had to produce sentences in two pragmatic contexts: unbiased questions (the speaker had no particular belief with respect to the expected answer) and negatively biased questions (the speaker believed the proposition to be false). Results revealed that negatively biased questions consistently exhibited an additional unaccented F0 peak in the preaccentual syllable (an H+!H* pitch accent) while unbiased questions were often realized with a rising pattern across the accented syllable (an H* pitch accent). These results provide evidence that pitch accent types in French can signal the speaker's belief about the certainty of the proposition expressed in French. It also has implications for the phonological model of French intonation.
Sociological effects on vocal aging: Age related F0 effects in two languages
NASA Astrophysics Data System (ADS)
Nagao, Kyoko
2005-04-01
Listeners can estimate the age of a speaker fairly accurately from their speech (Ptacek and Sander, 1966). It is generally considered that this perception is based on physiologically determined aspects of the speech. However, the degree to which it is due to conventional sociolinguistic aspects of speech is unknown. The current study examines the degree to which fundamental frequency (F0) changes due to advanced aging across two language groups of speakers. It also examines the degree to which the speakers associate these changes with aging in a voice disguising task. Thirty native speakers each of English and Japanese, taken from three age groups, read a target phrase embedded in a carrier sentence in their native language. Each speaker also read the sentence pretending to be 20-years younger or 20-years older than their own age. Preliminary analysis of eighteen Japanese speakers indicates that the mean and maximum F0 values increase when the speakers pretended to be younger than when they pretended to be older. Some previous studies on age perception, however, suggested that F0 has minor effects on listeners' age estimation. The acoustic results will also be discussed in conjunction with the results of the listeners' age estimation of the speakers.
Brener, Loren; Wilson, Hannah; Rose, Grenville; Mackenzie, Althea; de Wit, John
2013-01-01
Positive Speakers programs consist of people who are trained to speak publicly about their illness. The focus of these programs, especially with stigmatised illnesses such as hepatitis C (HCV), is to inform others of the speakers' experiences, thereby humanising the illness and reducing ignorance associated with the disease. This qualitative research aimed to understand the perceived impact of Positive Speakers programs on changing audience members' attitudes towards people with HCV. Interviews were conducted with nine Positive Speakers and 16 of their audience members to assess the way in which these sessions were perceived by both speakers and the audience to challenge stereotypes and stigma associated with HCV and promote positive attitude change amongst the audience. Data were analysed using Intergroup Contact Theory to frame the analysis with a focus on whether the program met the optimal conditions to promote attitude change. Findings suggest that there are a number of vital components to this Positive Speakers program which ensures that the program meets the requirements for successful and equitable intergroup contact. This Positive Speakers program thereby helps to deconstruct stereotypes about people with HCV, while simultaneously increasing positive attitudes among audience members with the ultimate aim of improving quality of health care and treatment for people with HCV.
Maass, Anne; Paladino, Maria Paola; Vespignani, Francesco; Eyssel, Friederike; Bentler, Dominik
2015-01-01
Empirical research had initially shown that English listeners are able to identify the speakers' sexual orientation based on voice cues alone. However, the accuracy of this voice-based categorization, as well as its generalizability to other languages (language-dependency) and to non-native speakers (language-specificity), has been questioned recently. Consequently, we address these open issues in 5 experiments: First, we tested whether Italian and German listeners are able to correctly identify sexual orientation of same-language male speakers. Then, participants of both nationalities listened to voice samples and rated the sexual orientation of both Italian and German male speakers. We found that listeners were unable to identify the speakers' sexual orientation correctly. However, speakers were consistently categorized as either heterosexual or gay on the basis of how they sounded. Moreover, a similar pattern of results emerged when listeners judged the sexual orientation of speakers of their own and of the foreign language. Overall, this research suggests that voice-based categorization of sexual orientation reflects the listeners' expectations of how gay voices sound rather than being an accurate detector of the speakers' actual sexual identity. Results are discussed with regard to accuracy, acoustic features of voices, language dependency and language specificity. PMID:26132820
NASA Technical Reports Server (NTRS)
Costanza, Bryan T.; Horne, William C.; Schery, S. D.; Babb, Alex T.
2011-01-01
The Aero-Physics Branch at NASA Ames Research Center utilizes a 32- by 48-inch subsonic wind tunnel for aerodynamics research. The feasibility of acquiring acoustic measurements with a phased microphone array was recently explored. Acoustic characterization of the wind tunnel was carried out with a floor-mounted 24-element array and two ceiling-mounted speakers. The minimum speaker level for accurate level measurement was evaluated for various tunnel speeds up to a Mach number of 0.15 and streamwise speaker locations. A variety of post-processing procedures, including conventional beamforming and deconvolutional processing such as TIDY, were used. The speaker measurements, with and without flow, were used to compare actual versus simulated in-flow speaker calibrations. Data for wind-off speaker sound and wind-on tunnel background noise were found valuable for predicting sound levels for which the speakers were detectable when the wind was on. Speaker sources were detectable 2 - 10 dB below the peak background noise level with conventional data processing. The effectiveness of background noise cross-spectral matrix subtraction was assessed and found to improve the detectability of test sound sources by approximately 10 dB over a wide frequency range.
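The background-noise cross-spectral matrix (CSM) subtraction mentioned above is a generic array-processing step; the sketch below shows one common way it is combined with conventional frequency-domain beamforming. This is a minimal illustration under assumed conventions, not the NASA Ames processing chain; the function name, diagonal removal, and normalization are choices made for this example.

```python
import numpy as np

def beamform_map(csm_with_flow, csm_background, steering_vectors):
    """Conventional frequency-domain beamforming with background CSM subtraction.

    csm_with_flow, csm_background : (M, M) cross-spectral matrices at one frequency
    steering_vectors              : (G, M) array, one steering vector per focus point
    Returns the beamform power at each of the G focus points.
    """
    csm = csm_with_flow - csm_background          # remove wind-tunnel background noise
    np.fill_diagonal(csm, 0.0)                    # optional: suppress microphone self-noise
    g = steering_vectors
    power = np.einsum('gm,mn,gn->g', g.conj(), csm, g).real
    m = csm.shape[0]
    return power / (m * m - m)                    # normalization consistent with diagonal removal
```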
Engaging spaces: Intimate electro-acoustic display in alternative performance venues
NASA Astrophysics Data System (ADS)
Bahn, Curtis; Moore, Stephan
2004-05-01
In past presentations to the ASA, we have described the design and construction of four generations of unique spherical speakers (multichannel, outward-radiating geodesic speaker arrays) and Sensor-Speaker-Arrays, (SenSAs: combinations of various sensor devices with outward-radiating multichannel speaker arrays). This presentation will detail the ways in which arrays of these speakers have been employed in alternative performance venues-providing presence and intimacy in the performance of electro-acoustic chamber music and sound installation, while engaging natural and unique acoustical qualities of various locations. We will present documentation of the use of multichannel sonic diffusion arrays in small clubs, ``black-box'' theaters, planetariums, and art galleries.
Speaker diarization system on the 2007 NIST rich transcription meeting recognition evaluation
NASA Astrophysics Data System (ADS)
Sun, Hanwu; Nwe, Tin Lay; Koh, Eugene Chin Wei; Bin, Ma; Li, Haizhou
2007-09-01
This paper presents a speaker diarization system developed at the Institute for Infocomm Research (I2R) for the NIST Rich Transcription 2007 (RT-07) evaluation task. We describe in detail our primary approaches to speaker diarization under the Multiple Distant Microphones (MDM) condition in the conference room scenario. Our proposed system consists of six modules: 1) a normalized least-mean-square (NLMS) adaptive filter for speaker direction estimation via Time Difference of Arrival (TDOA); 2) initial speaker clustering via a two-stage TDOA histogram distribution quantization approach; 3) multiple-microphone speaker data alignment via GCC-PHAT Time Delay Estimation (TDE) across all distant microphone channel signals; 4) a speaker clustering algorithm based on GMM modeling; 5) non-speech removal via a speech/non-speech verification mechanism; and 6) silence removal via a "Double-Layer Windowing" (DLW) method. We achieve an error rate of 31.02% on the 2006 Spring (RT-06s) MDM evaluation task and a competitive overall error rate of 15.32% on the NIST Rich Transcription 2007 (RT-07) MDM evaluation task.
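As a concrete illustration of the GCC-PHAT time-delay estimation used in module 3 above, the following minimal sketch estimates the delay between two distant-microphone channels. It is a generic textbook implementation, not the I2R system; the function name, zero-padding length, and regularization constant are assumptions.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay (seconds) between two microphone signals with GCC-PHAT.

    sig, ref : 1-D numpy arrays holding the same time frame from two channels.
    The delay maximizing the phase-transform weighted cross-correlation is returned,
    as commonly used for TDOA-based speaker localization.
    """
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12                 # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(fs)
```

In a diarization front end, a function like this would typically be applied frame by frame to each microphone pair, and the resulting delay tracks fed to the clustering stages described in the abstract.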
Intonation and gender perception: applications for transgender speakers.
Hancock, Adrienne; Colton, Lindsey; Douglas, Fiacre
2014-03-01
Intonation is commonly addressed in voice and communication feminization therapy, yet empirical evidence of gender differences for intonation is scarce and rarely do studies examine how it relates to gender perception of transgender speakers. This study examined intonation of 12 males, 12 females, six female-to-male, and 14 male-to-female transgender speakers describing a Norman Rockwell image. Several intonation measures were compared between biological gender groups, between perceived gender groups, and between male-to-female (MTF) speakers who were perceived as male, female, or ambiguous gender. Speakers with a larger percentage of utterances with upward intonation and a larger utterance semitone range were perceived as female by listeners, despite no significant differences between the actual intonation of the four gender groups. MTF speakers who do not pass as female appear to use less upward and more downward intonations than female and passing MTF speakers. Intonation has potential for use in transgender communication therapy because it can influence perception to some degree. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Kamimura, Akiko; Ashby, Jeanie; Tabler, Jennifer; Nourian, Maziar M; Trinh, Ha Ngoc; Chen, Jason; Reel, Justine J
2017-01-01
The abuse of substances is a significant public health issue. Perceived stress and depression have been found to be related to the abuse of substances. The purpose of this study is to examine the prevalence of substance use (i.e., alcohol problems, smoking, and drug use) and the association between substance use, perceived stress, and depression among free clinic patients. Patients completed a self-administered survey in 2015 (N = 504). The overall prevalence of substance use among free clinic patients was not high compared to the U.S. general population. U.S.-born English speakers reported a higher prevalence rate of tobacco smoking and drug use than did non-U.S.-born English speakers and Spanish speakers. Alcohol problems and smoking were significantly related to higher levels of perceived stress and depression. Substance use prevention and education should be included in general health education programs. U.S.-born English speakers would need additional attention. Mental health intervention would be essential to prevention and intervention.
ERIC Educational Resources Information Center
Köroglu, Zehra; Tüm, Gülden
2017-01-01
This study has been conducted to evaluate the TM usage in the MA theses written by the native speakers (NSs) of English and the Turkish speakers (TSs) of English. The purpose is to compare the TM usage in the introduction, results and discussion, and conclusion sections by both groups' randomly selected MA theses in the field of ELT between the…
Improving the Effectiveness of Speaker Verification Domain Adaptation With Inadequate In-Domain Data
2017-08-20
Bengt J. Borgström, Elliot Singer, Douglas...
This paper addresses speaker verification domain adaptation with...contain speakers with low channel diversity. Existing domain adaptation methods are reviewed, and their shortcomings are discussed. We derive an...
Mortality inequality in two native population groups.
Saarela, Jan; Finnäs, Fjalar
2005-11-01
A sample of people aged 40-67 years, taken from a longitudinal register compiled by Statistics Finland, is used to analyse mortality differences between Swedish speakers and Finnish speakers in Finland. Finnish speakers are known to have higher death rates than Swedish speakers. The purpose is to explore whether labour-market experience and partnership status, treated as proxies for measures of variation in health-related characteristics, are related to the mortality differential. Persons who are single, disability pensioners, and those having experienced unemployment are found to have substantially higher death rates than those with a partner and employed persons. Swedish speakers have a more favourable distribution on both variables, which thus notably helps to reduce the Finnish-Swedish mortality gradient. A conclusion from this study is that future analyses on the topic should focus on mechanisms that bring a greater proportion of Finnish speakers into the groups with poor health or supposed unhealthy behaviour.
How Psychological Stress Affects Emotional Prosody.
Paulmann, Silke; Furnes, Desire; Bøkenes, Anne Ming; Cozzolino, Philip J
2016-01-01
We explored how experimentally induced psychological stress affects the production and recognition of vocal emotions. In Study 1a, we demonstrate that sentences spoken by stressed speakers are judged by naïve listeners as sounding more stressed than sentences uttered by non-stressed speakers. In Study 1b, negative emotions produced by stressed speakers are generally less well recognized than the same emotions produced by non-stressed speakers. Multiple mediation analyses suggest this poorer recognition of negative stimuli was due to a mismatch between the variation of volume voiced by speakers and the range of volume expected by listeners. Together, this suggests that the stress level of the speaker affects judgments made by the receiver. In Study 2, we demonstrate that participants who were induced with a feeling of stress before carrying out an emotional prosody recognition task performed worse than non-stressed participants. Overall, findings suggest detrimental effects of induced stress on interpersonal sensitivity.
In the eye of the beholder: eye contact increases resistance to persuasion.
Chen, Frances S; Minson, Julia A; Schöne, Maren; Heinrichs, Markus
2013-11-01
Popular belief holds that eye contact increases the success of persuasive communication, and prior research suggests that speakers who direct their gaze more toward their listeners are perceived as more persuasive. In contrast, we demonstrate that more eye contact between the listener and speaker during persuasive communication predicts less attitude change in the direction advocated. In Study 1, participants freely watched videos of speakers expressing various views on controversial sociopolitical issues. Greater direct gaze at the speaker's eyes was associated with less attitude change in the direction advocated by the speaker. In Study 2, we instructed participants to look at either the eyes or the mouths of speakers presenting arguments counter to participants' own attitudes. Intentionally maintaining direct eye contact led to less persuasion than did gazing at the mouth. These findings suggest that efforts at increasing eye contact may be counterproductive across a variety of persuasion contexts.
Don't Underestimate the Benefits of Being Misunderstood.
Gibson, Edward; Tan, Caitlin; Futrell, Richard; Mahowald, Kyle; Konieczny, Lars; Hemforth, Barbara; Fedorenko, Evelina
2017-06-01
Being a nonnative speaker of a language poses challenges. Individuals often feel embarrassed by the errors they make when talking in their second language. However, here we report an advantage of being a nonnative speaker: Native speakers give foreign-accented speakers the benefit of the doubt when interpreting their utterances; as a result, apparently implausible utterances are more likely to be interpreted in a plausible way when delivered in a foreign than in a native accent. Across three replicated experiments, we demonstrated that native English speakers are more likely to interpret implausible utterances, such as "the mother gave the candle the daughter," as similar plausible utterances ("the mother gave the candle to the daughter") when the speaker has a foreign accent. This result follows from the general model of language interpretation in a noisy channel, under the hypothesis that listeners assume a higher error rate in foreign-accented than in nonaccented speech.
Rhythmic patterning in Malaysian and Singapore English.
Tan, Rachel Siew Kuang; Low, Ee-Ling
2014-06-01
Previous work on the rhythm of Malaysian English has been based on impressionistic observations. This paper utilizes acoustic analysis to measure the rhythmic patterns of Malaysian English. Recordings of the read speech and spontaneous speech of 10 Malaysian English speakers were analyzed and compared with recordings of an equivalent sample of Singaporean English speakers. Analysis was done using two rhythmic indexes, the PVI and VarcoV. It was found that although the rhythm of read speech of the Singaporean speakers was syllable-based as described by previous studies, the rhythm of the Malaysian speakers was even more syllable-based. Analysis of the syllables in specific utterances showed that Malaysian speakers did not reduce vowels as much as Singaporean speakers in cases of syllables in utterances. Results of the spontaneous speech confirmed the findings for the read speech; that is, the same rhythmic patterning was found which normally triggers vowel reductions.
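For readers unfamiliar with the two rhythm indexes named above, the sketch below shows how the normalized Pairwise Variability Index (nPVI) and VarcoV are commonly computed from a sequence of vocalic interval durations. The example durations are hypothetical, and this is not the authors' measurement script.

```python
import numpy as np

def npvi(durations):
    """Normalized Pairwise Variability Index over successive vocalic intervals."""
    d = np.asarray(durations, dtype=float)
    pairs = np.abs(d[1:] - d[:-1]) / ((d[1:] + d[:-1]) / 2.0)
    return 100.0 * pairs.mean()

def varco_v(durations):
    """VarcoV: coefficient of variation of vocalic interval durations (x100)."""
    d = np.asarray(durations, dtype=float)
    return 100.0 * d.std() / d.mean()

# hypothetical vocalic interval durations (ms) for one utterance
vowel_durations = [62, 118, 74, 95, 60, 140, 83]
print(npvi(vowel_durations), varco_v(vowel_durations))
```

Lower values of both indexes indicate more syllable-based (less durationally variable) rhythm, which is the direction of the Malaysian vs. Singaporean comparison reported above.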
Speakers of different languages process the visual world differently.
Chabal, Sarah; Marian, Viorica
2015-06-01
Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. (c) 2015 APA, all rights reserved).
Robust nonlinear control of vectored thrust aircraft
NASA Technical Reports Server (NTRS)
Doyle, John C.; Murray, Richard; Morris, John
1993-01-01
An interdisciplinary program in robust control for nonlinear systems with applications to a variety of engineering problems is outlined. Major emphasis will be placed on flight control, with both experimental and analytical studies. This program builds on recent new results in control theory for stability, stabilization, robust stability, robust performance, synthesis, and model reduction in a unified framework using Linear Fractional Transformations (LFT's), Linear Matrix Inequalities (LMI's), and the structured singular value μ. Most of these new advances have been accomplished by the Caltech controls group independently or in collaboration with researchers in other institutions. These recent results offer a new and remarkably unified framework for all aspects of robust control, but what is particularly important for this program is that they also have important implications for system identification and control of nonlinear systems. This combines well with Caltech's expertise in nonlinear control theory, both in geometric methods and methods for systems with constraints and saturations.
Processing ser and estar to locate objects and events: An ERP study with L2 speakers of Spanish.
Dussias, Paola E; Contemori, Carla; Román, Patricia
2014-01-01
In Spanish locative constructions, a different form of the copula is selected in relation to the semantic properties of the grammatical subject: sentences that locate objects require estar while those that locate events require ser (both translated in English as 'to be'). In an ERP study, we examined whether second language (L2) speakers of Spanish are sensitive to the selectional restrictions that the different types of subjects impose on the choice of the two copulas. Twenty-four native speakers of Spanish and two groups of L2 Spanish speakers (24 beginners and 18 advanced speakers) were recruited to investigate the processing of 'object/event + estar/ser ' permutations. Participants provided grammaticality judgments on correct (object + estar ; event + ser ) and incorrect (object + ser ; event + estar ) sentences while their brain activity was recorded. In line with previous studies (Leone-Fernández, Molinaro, Carreiras, & Barber, 2012; Sera, Gathje, & Pintado, 1999), the results of the grammaticality judgment for the native speakers showed that participants correctly accepted object + estar and event + ser constructions. In addition, while 'object + ser ' constructions were considered grossly ungrammatical, 'event + estar ' combinations were perceived as unacceptable to a lesser degree. For these same participants, ERP recording time-locked to the onset of the critical word ' en ' showed a larger P600 for the ser predicates when the subject was an object than when it was an event (*La silla es en la cocina vs. La fiesta es en la cocina). This P600 effect is consistent with syntactic repair of the defining predicate when it does not fit with the adequate semantic properties of the subject. For estar predicates (La silla está en la cocina vs. *La fiesta está en la cocina), the findings showed a central-frontal negativity between 500-700 ms. Grammaticality judgment data for the L2 speakers of Spanish showed that beginners were significantly less accurate than native speakers in all conditions, while the advanced speakers only differed from the natives in the event+ ser and event+ estar conditions. For the ERPs, the beginning learners did not show any effects in the time-windows under analysis. The advanced speakers showed a pattern similar to that of native speakers: (1) a P600 response to 'object + ser ' violation more central and frontally distributed, and (2) a central-frontal negativity between 500-700 ms for 'event + estar ' violation. Findings for the advanced speakers suggest that behavioral methods commonly used to assess grammatical knowledge in the L2 may be underestimating what L2 speakers have actually learned.
Reasoning about knowledge: Children's evaluations of generality and verifiability.
Koenig, Melissa A; Cole, Caitlin A; Meyer, Meredith; Ridge, Katherine E; Kushnir, Tamar; Gelman, Susan A
2015-12-01
In a series of experiments, we examined 3- to 8-year-old children's (N=223) and adults' (N=32) use of two properties of testimony to estimate a speaker's knowledge: generality and verifiability. Participants were presented with a "Generic speaker" who made a series of 4 general claims about "pangolins" (a novel animal kind), and a "Specific speaker" who made a series of 4 specific claims about "this pangolin" as an individual. To investigate the role of verifiability, we systematically varied whether the claim referred to a perceptually-obvious feature visible in a picture (e.g., "has a pointy nose") or a non-evident feature that was not visible (e.g., "sleeps in a hollow tree"). Three main findings emerged: (1) young children showed a pronounced reliance on verifiability that decreased with age. Three-year-old children were especially prone to credit knowledge to speakers who made verifiable claims, whereas 7- to 8-year-olds and adults credited knowledge to generic speakers regardless of whether the claims were verifiable; (2) children's attributions of knowledge to generic speakers was not detectable until age 5, and only when those claims were also verifiable; (3) children often generalized speakers' knowledge outside of the pangolin domain, indicating a belief that a person's knowledge about pangolins likely extends to new facts. Findings indicate that young children may be inclined to doubt speakers who make claims they cannot verify themselves, as well as a developmentally increasing appreciation for speakers who make general claims. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Florian, Michael K.; Gladders, Michael D.; Li, Nan; Sharon, Keren
2016-01-01
The sample of cosmological strong lensing systems has been steadily growing in recent years and with the advent of the next generation of space-based survey telescopes, the sample will reach into the thousands. The accuracy of strong lens models relies on robust identification of multiple image families of lensed galaxies. For the most massive lenses, often more than one background galaxy is magnified and multiply imaged, and even in the cases of only a single lensed source, identification of counter images is not always robust. Recently, we have shown that the Gini coefficient in space-telescope-quality imaging is a measurement of galaxy morphology that is relatively well-preserved by strong gravitational lensing. Here, we investigate its usefulness as a diagnostic for the purposes of image family identification and show that it can remove some of the degeneracies encountered when using color as the sole diagnostic, and can do so without the need for additional observations since whenever a color is available, two Gini coefficients are as well.
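The Gini coefficient used above as a lensing-robust morphology diagnostic has a standard definition in galaxy morphology work; the sketch below follows the usual Lotz et al. (2004) formulation and may differ in detail from the authors' measurement (for example, in how the pixel segmentation map is chosen).

```python
import numpy as np

def gini_coefficient(pixel_fluxes):
    """Gini coefficient of a galaxy's pixel flux distribution.

    Sort the absolute pixel values and apply
    G = sum((2i - n - 1) * |x_i|) / (mean(|x|) * n * (n - 1)).
    G ~ 0 when flux is spread evenly over the pixels,
    G ~ 1 when it is concentrated in a few pixels.
    """
    x = np.sort(np.abs(np.asarray(pixel_fluxes, dtype=float).ravel()))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1))
```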
A Semiautomated Framework for Integrating Expert Knowledge into Disease Marker Identification
Wang, Jing; Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Varnum, Susan M.; Brown, Joseph N.; Riensche, Roderick M.; Adkins, Joshua N.; Jacobs, Jon M.; Hoidal, John R.; Scholand, Mary Beth; Pounds, Joel G.; Blackburn, Michael R.; Rodland, Karin D.; McDermott, Jason E.
2013-01-01
Background. The availability of large complex data sets generated by high throughput technologies has enabled the recent proliferation of disease biomarker studies. However, a recurring problem in deriving biological information from large data sets is how to best incorporate expert knowledge into the biomarker selection process. Objective. To develop a generalizable framework that can incorporate expert knowledge into data-driven processes in a semiautomated way while providing a metric for optimization in a biomarker selection scheme. Methods. The framework was implemented as a pipeline consisting of five components for the identification of signatures from integrated clustering (ISIC). Expert knowledge was integrated into the biomarker identification process using the combination of two distinct approaches; a distance-based clustering approach and an expert knowledge-driven functional selection. Results. The utility of the developed framework ISIC was demonstrated on proteomics data from a study of chronic obstructive pulmonary disease (COPD). Biomarker candidates were identified in a mouse model using ISIC and validated in a study of a human cohort. Conclusions. Expert knowledge can be introduced into a biomarker discovery process in different ways to enhance the robustness of selected marker candidates. Developing strategies for extracting orthogonal and robust features from large data sets increases the chances of success in biomarker identification. PMID:24223463
F-15B Quiet Spike(TradeMark) Aeroservoelastic Flight-Test Data Analysis
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
System identification is utilized in the aerospace community for development of simulation models for robust control law design. These models are often described as linear, time-invariant processes and assumed to be uniform throughout the flight envelope. Nevertheless, it is well known that the underlying process is inherently nonlinear. Over the past several decades the controls and biomedical communities have made great advances in developing tools for the identification of nonlinear systems. In this report, we show the application of one such nonlinear system identification technique, structure detection, for the analysis of Quiet Spike(TradeMark) (Gulfstream Aerospace Corporation, Savannah, Georgia) aeroservoelastic flight-test data. Structure detection is concerned with the selection of a subset of candidate terms that best describe the observed output. Structure computation as a tool for black-box modeling may be of critical importance for the development of robust, parsimonious models for the flight-test community. The objectives of this study are to demonstrate via analysis of Quiet Spike(TradeMark) aeroservoelastic flight-test data for several flight conditions that: linear models are inefficient for modelling aeroservoelastic data, nonlinear identification provides a parsimonious model description whilst providing a high percent fit for cross-validated data, and the model structure and parameters vary as the flight condition is altered.
Robust volcano plot: identification of differential metabolites in the presence of outliers.
Kumar, Nishith; Hoque, Md Aminul; Sugimoto, Masahiro
2018-04-11
The identification of differential metabolites in metabolomics is still a big challenge and plays a prominent role in metabolomics data analyses. Metabolomics datasets often contain outliers because of analytical, experimental, and biological ambiguity, but the currently available differential metabolite identification techniques are sensitive to outliers. We propose a kernel weight based outlier-robust volcano plot for identifying differential metabolites from noisy metabolomics datasets. Two numerical experiments are used to evaluate the performance of the proposed technique against nine existing techniques, including the t-test and the Kruskal-Wallis test. Artificially generated data with outliers reveal that the proposed method results in a lower misclassification error rate and a greater area under the receiver operating characteristic curve compared with existing methods. An experimentally measured breast cancer dataset to which outliers were artificially added reveals that our proposed method produces only two non-overlapping differential metabolites whereas the other nine methods produced between seven and 57 non-overlapping differential metabolites. Our data analyses show that the performance of the proposed differential metabolite identification technique is better than that of existing methods. Thus, the proposed method can contribute to analysis of metabolomics data with outliers. The R package and user manual of the proposed method are available at https://github.com/nishithkumarpaul/Rvolcano .
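The abstract above contrasts a kernel-weighted, outlier-robust volcano plot with classical tests. The sketch below illustrates only the generic volcano-plot idea using simple outlier-resistant statistics (median log fold change and a rank-based test); it is not the authors' kernel-weight method, and the thresholds and function name are assumptions.

```python
import numpy as np
from scipy import stats

def robust_volcano(group_a, group_b, fc_cut=1.0, p_cut=0.05):
    """Flag differential metabolites with outlier-resistant statistics.

    group_a, group_b : (samples, metabolites) arrays of log-intensities.
    Uses the difference of medians as the log fold change and the
    Mann-Whitney U test for significance, then applies the usual
    volcano-plot thresholds on |log FC| and p-value.
    """
    log_fc = np.median(group_b, axis=0) - np.median(group_a, axis=0)
    pvals = np.array([stats.mannwhitneyu(group_a[:, j], group_b[:, j],
                                         alternative='two-sided').pvalue
                      for j in range(group_a.shape[1])])
    selected = (np.abs(log_fc) >= fc_cut) & (pvals <= p_cut)
    return log_fc, pvals, selected
```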
Comparison of frequency-domain and time-domain rotorcraft vibration control methods
NASA Technical Reports Server (NTRS)
Gupta, N. K.
1984-01-01
Active control of rotor-induced vibration in rotorcraft has received significant attention recently. Two classes of techniques have been proposed. The more developed approach works with harmonic analysis of measured time histories and is called the frequency-domain approach. The more recent approach computes the control input directly using the measured time history data and is called the time-domain approach. The report summarizes the results of a theoretical investigation to compare the two approaches. Five specific areas were addressed: (1) techniques to derive models needed for control design (system identification methods), (2) robustness with respect to errors, (3) transient response, (4) susceptibility to noise, and (5) implementation difficulties. The system identification methods are more difficult for the time-domain models. The time-domain approach is more robust (e.g., has higher gain and phase margins) than the frequency-domain approach. It might thus be possible to avoid doing real-time system identification in the time-domain approach by storing models at a number of flight conditions. The most significant error source is the variation in open-loop vibrations caused by pilot inputs, maneuvers or gusts. The implementation requirements are similar except that the time-domain approach can be much simpler to implement if real-time system identification were not necessary.
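To make the frequency-domain approach discussed above concrete, here is a minimal sketch of the classical quasi-static harmonic control update, assuming a linear transfer matrix T relating control harmonics to measured vibration harmonics and a quadratic cost. The symbols, default weights, and function name are illustrative and not taken from the report.

```python
import numpy as np

def harmonic_control_update(T, z_measured, u_current, Wz=None, Wu=None):
    """One update of a generic frequency-domain (harmonic) vibration controller.

    Assumes the quasi-static linear model z = z0 + T u relating control
    harmonics u to measured vibration harmonics z, and minimizes the
    quadratic cost z'Wz z + u'Wu u.  The matrix T must be identified
    beforehand (the system identification step discussed in the report).
    """
    nz, nu = T.shape
    Wz = np.eye(nz) if Wz is None else Wz
    Wu = np.zeros((nu, nu)) if Wu is None else Wu
    z0 = z_measured - T @ u_current            # estimate of the uncontrolled vibration
    u_next = -np.linalg.solve(T.T @ Wz @ T + Wu, T.T @ Wz @ z0)
    return u_next
```

The time-domain approach compared in the report instead computes the control input directly from measured time histories, avoiding the harmonic analysis step sketched here.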
Becker, Pierre T; de Bel, Annelies; Martiny, Delphine; Ranque, Stéphane; Piarroux, Renaud; Cassagne, Carole; Detandt, Monique; Hendrickx, Marijke
2014-11-01
The identification of filamentous fungi by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) relies mainly on a robust and extensive database of reference spectra. To this end, a large in-house library containing 760 strains and representing 472 species was built and evaluated on 390 clinical isolates by comparing MALDI-TOF MS with the classical identification method based on morphological observations. The use of MALDI-TOF MS resulted in the correct identification of 95.4% of the isolates at species level, without considering LogScore values. Taking into account Bruker's cutoff value for reliability (LogScore >1.70), 85.6% of the isolates were correctly identified. For a number of isolates, microscopic identification was limited to the genus, resulting in only 61.5% of the isolates correctly identified at species level while the correctness reached 94.6% at genus level. Using this extended in-house database, MALDI-TOF MS thus appears superior to morphology in order to obtain a robust and accurate identification of filamentous fungi. A continuous extension of the library is however necessary to further improve its reliability. Indeed, 15 isolates were still not represented while an additional three isolates were not recognized, probably because of a lack of intraspecific variability of the corresponding species in the database. © The Author 2014. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved.
Why We Serve - U.S. Department of Defense Official Website
Formant transitions in the fluent speech of Farsi-speaking people who stutter.
Dehqan, Ali; Yadegari, Fariba; Blomgren, Michael; Scherer, Ronald C
2016-06-01
Second formant (F2) transitions can be used to infer attributes of articulatory transitions. This study compared formant transitions during fluent speech segments of Farsi (Persian) speaking people who stutter and normally fluent Farsi speakers. Ten Iranian males who stutter and 10 normally fluent Iranian males participated. Sixteen different "CVt" tokens were embedded within the phrase "Begu CVt an". Measures included overall F2 transition frequency extents, durations, and derived overall slopes, initial F2 transition slopes at 30ms and 60ms, and speaking rate. (1) Mean overall formant frequency extent was significantly greater in 14 of the 16 CVt tokens for the group of stuttering speakers. (2) Stuttering speakers exhibited significantly longer overall F2 transitions for all 16 tokens compared to the nonstuttering speakers. (3) The overall F2 slopes were similar between the two groups. (4) The stuttering speakers exhibited significantly greater initial F2 transition slopes (positive or negative) for five of the 16 tokens at 30ms and six of the 16 tokens at 60ms. (5) The stuttering group produced a slower syllable rate than the non-stuttering group. During perceptually fluent utterances, the stuttering speakers had greater F2 frequency extents during transitions, took longer to reach vowel steady state, exhibited some evidence of steeper slopes at the beginning of transitions, had overall similar F2 formant slopes, and had slower speaking rates compared to nonstuttering speakers. Findings support the notion of different speech motor timing strategies in stuttering speakers. Findings are likely to be independent of the language spoken. Educational objectives This study compares aspects of F2 formant transitions between 10 stuttering and 10 nonstuttering speakers. Readers will be able to describe: (a) characteristics of formant frequency as a specific acoustic feature used to infer speech movements in stuttering and nonstuttering speakers, (b) two methods of measuring second formant (F2) transitions: the visual criteria method and fixed time criteria method, (c) characteristics of F2 transitions in the fluent speech of stuttering speakers and how those characteristics appear to differ from normally fluent speakers, and (d) possible cross-linguistic effects on acoustic analyses of stuttering. Copyright © 2016 Elsevier Inc. All rights reserved.
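A small sketch of how the F2 transition measures reported above (overall extent, duration, overall slope, and initial slopes over fixed 30 ms and 60 ms windows) could be computed from a formant track. The track values are hypothetical and this is not the authors' measurement procedure.

```python
import numpy as np

def f2_transition_measures(times_ms, f2_hz):
    """Summarize an F2 transition track (times in ms, F2 in Hz).

    Returns overall extent (Hz), duration (ms), overall slope (Hz/ms),
    and initial slopes over the first 30 ms and 60 ms, mirroring the
    fixed-time-criteria measures described in the abstract.
    """
    t = np.asarray(times_ms, dtype=float)
    f2 = np.asarray(f2_hz, dtype=float)
    extent = f2[-1] - f2[0]
    duration = t[-1] - t[0]

    def initial_slope(window_ms):
        mask = t <= t[0] + window_ms
        return (f2[mask][-1] - f2[0]) / (t[mask][-1] - t[0])

    return {
        "extent_hz": extent,
        "duration_ms": duration,
        "overall_slope_hz_per_ms": extent / duration,
        "slope_30ms": initial_slope(30.0),
        "slope_60ms": initial_slope(60.0),
    }

# hypothetical F2 track sampled every 10 ms during a CV transition
print(f2_transition_measures(range(0, 130, 10),
                             [1100, 1180, 1300, 1420, 1520, 1600, 1660,
                              1700, 1730, 1750, 1760, 1765, 1768]))
```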
Referential first mention in narratives by mildly mentally retarded adults.
Kernan, K T; Sabsay, S
1987-01-01
Referential first mentions in narrative reports of a short film by 40 mildly mentally retarded adults and 20 nonretarded adults were compared. The mentally retarded sample included equal numbers of male and female, and black and white speakers. The mentally retarded speakers made significantly fewer first mentions and significantly more errors in the form of the first mentions than did nonretarded speakers. A pattern of better performance by black males than by other mentally retarded speakers was found. It is suggested that task difficulty and incomplete mastery of the use of definite and indefinite forms for encoding old and new information, rather than some global type of egocentrism, accounted for the poorer performance by mentally retarded speakers.
Huynh, Que-Lam; Devos, Thierry; Goldberg, Robyn
2013-01-01
A robust relationship between perceived racial discrimination and psychological distress has been established. Yet, mixed evidence exists regarding the extent to which ethnic identification moderates this relationship, and scarce attention has been paid to the moderating role of national identification. We propose that the role of group identifications in the perceived discrimination–psychological distress relationship is best understood by simultaneously and interactively considering ethnic and national identifications. A sample of 259 Asian American students completed measures of perceived discrimination, group identifications (specific ethnic identification stated by respondents and national or “mainstream American” identification), and psychological distress (anxiety and depression symptoms). Regression analyses revealed a significant three-way interaction of perceived discrimination, ethnic identification, and national identification on psychological distress. Simple-slope analyses indicated that dual identification (strong ethnic and national identifications) was linked to a weaker relationship between perceived discrimination and psychological distress compared with other group identification configurations. These findings underscore the need to consider the interconnections between ethnic and national identifications to better understand the circumstances under which group identifications are likely to buffer individuals against the adverse effects of racial discrimination. PMID:25258674
Entropy Based Classifier Combination for Sentence Segmentation
2007-01-01
speaker diarization system to divide the audio data into hypothetical speakers [17]...the prosodic feature also includes turn-based features which describe the position of a word in relation to the diarization segmentation. The speaker ...robust speaker segmentation: the ICSI-SRI fall 2004 diarization system," in Proc. RT-04F Workshop, 2004. [18] "The rich transcription fall 2003," http://nist.gov/speech/tests/rt/rt2003/fall/docs/rt03-fall-eval-plan-v9.pdf.
Somatotype and Body Composition of Normal and Dysphonic Adult Speakers.
Franco, Débora; Fragoso, Isabel; Andrea, Mário; Teles, Júlia; Martins, Fernando
2017-01-01
Voice quality provides information about the anatomical characteristics of the speaker. The patterns of somatotype and body composition can provide essential knowledge to characterize the individuality of voice quality. The aim of this study was to verify if there were significant differences in somatotype and body composition between normal and dysphonic speakers. Cross-sectional study. Anthropometric measurements were taken of a sample of 72 adult participants (40 normal speakers and 32 dysphonic speakers) according to International Society for the Advancement of Kinanthropometry standards, which allowed the calculation of endomorphism, mesomorphism, ectomorphism components, body density, body mass index, fat mass, percentage fat, and fat-free mass. Perception and acoustic evaluations as well as nasoendoscopy were used to assign speakers into normal or dysphonic groups. There were no significant differences between normal and dysphonic speakers in the mean somatotype attitudinal distance and somatotype dispersion distance (in spite of marginally significant differences [P < 0.10] in somatotype attitudinal distance and somatotype dispersion distance between groups) and in the mean vector of the somatotype components. Furthermore, no significant differences were found between groups concerning the mean of percentage fat, fat mass, fat-free mass, body density, and body mass index after controlling by sex. The findings suggested no significant differences in the somatotype and body composition variables, between normal and dysphonic speakers. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Strength of German accent under altered auditory feedback
HOWELL, PETER; DWORZYNSKI, KATHARINA
2007-01-01
Borden’s (1979, 1980) hypothesis that speakers with vulnerable speech systems rely more heavily on feedback monitoring than do speakers with less vulnerable systems was investigated. The second language (L2) of a speaker is vulnerable, in comparison with the native language, so alteration to feedback should have a detrimental effect on it, according to this hypothesis. Here, we specifically examined whether altered auditory feedback has an effect on accent strength when speakers speak L2. There were three stages in the experiment. First, 6 German speakers who were fluent in English (their L2) were recorded under six conditions—normal listening, amplified voice level, voice shifted in frequency, delayed auditory feedback, and slowed and accelerated speech rate conditions. Second, judges were trained to rate accent strength. Training was assessed by whether it was successful in separating German speakers speaking English from native English speakers, also speaking English. In the final stage, the judges ranked recordings of each speaker from the first stage as to increasing strength of German accent. The results show that accents were more pronounced under frequency-shifted and delayed auditory feedback conditions than under normal or amplified feedback conditions. Control tests were done to ensure that listeners were judging accent, rather than fluency changes caused by altered auditory feedback. The findings are discussed in terms of Borden’s hypothesis and other accounts about why altered auditory feedback disrupts speech control. PMID:11414137
Huang, Laura; Frideger, Marcia; Pearce, Jone L
2013-11-01
We propose and test a new theory explaining glass-ceiling bias against nonnative speakers as driven by perceptions that nonnative speakers have weak political skill. Although nonnative accent is a complex signal, its effects on assessments of the speakers' political skill are something that speakers can actively mitigate; this makes it an important bias to understand. In Study 1, White and Asian nonnative speakers using the same scripted responses as native speakers were found to be significantly less likely to be recommended for a middle-management position, and this bias was fully mediated by assessments of their political skill. The alternative explanations of race, communication skill, and collaborative skill were nonsignificant. In Study 2, entrepreneurial start-up pitches from national high-technology, new-venture funding competitions were shown to experienced executive MBA students. Nonnative speakers were found to have a significantly lower likelihood of receiving new-venture funding, and this was fully mediated by the coders' assessments of their political skill. The entrepreneurs' race, communication skill, and collaborative skill had no effect. We discuss the value of empirically testing various posited reasons for glass-ceiling biases, how the importance and ambiguity of political skill for executive success serve as an ostensibly meritocratic cover for nonnative speaker bias, and other theoretical and practical implications of this work. (c) 2013 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Li, Wei; Xiao, Chuan; Liu, Yaduo
2013-12-01
Audio identification via fingerprint has been an active research field for years. However, most previously reported methods work on the raw audio format in spite of the fact that nowadays compressed format audio, especially MP3 music, has grown into the dominant way to store music on personal computers and/or transmit it over the Internet. It will be interesting if a compressed unknown audio fragment could be directly recognized from the database without decompressing it into the wave format at first. So far, very few algorithms run directly on the compressed domain for music information retrieval, and most of them take advantage of the modified discrete cosine transform coefficients or derived cepstrum and energy type of features. As a first attempt, we propose in this paper utilizing compressed domain auditory Zernike moment adapted from image processing techniques as the key feature to devise a novel robust audio identification algorithm. Such fingerprint exhibits strong robustness, due to its statistically stable nature, against various audio signal distortions such as recompression, noise contamination, echo adding, equalization, band-pass filtering, pitch shifting, and slight time scale modification. Experimental results show that in a music database which is composed of 21,185 MP3 songs, a 10-s long music segment is able to identify its original near-duplicate recording, with average top-5 hit rate up to 90% or above even under severe audio signal distortions.
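The abstract concentrates on the compressed-domain feature itself; as a complement, the sketch below shows only the retrieval step, matching a query fingerprint sequence (e.g., per-frame feature vectors such as the auditory Zernike moments described above) against stored song fingerprints by a sliding-window distance. The data layout and function name are assumptions, and the brute-force search is for illustration only.

```python
import numpy as np

def match_fingerprint(query, database):
    """Find the best-matching song for a query fingerprint sequence.

    query    : (T, D) array of per-frame feature vectors for the unknown clip
    database : dict mapping song_id -> (N, D) array of stored features (N >= T)
    Returns (song_id, frame_offset, distance) of the closest sliding-window match.
    """
    best = (None, -1, np.inf)
    t = query.shape[0]
    for song_id, feats in database.items():
        for offset in range(feats.shape[0] - t + 1):
            d = np.linalg.norm(feats[offset:offset + t] - query)
            if d < best[2]:
                best = (song_id, offset, d)
    return best
```

A production system would replace this exhaustive scan with an indexed lookup, but the matching criterion is the same.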
NASA Astrophysics Data System (ADS)
Ammazzalorso, F.; Bednarz, T.; Jelen, U.
2014-03-01
We demonstrate acceleration on graphic processing units (GPU) of automatic identification of robust particle therapy beam setups, minimizing negative dosimetric effects of Bragg peak displacement caused by treatment-time patient positioning errors. Our particle therapy research toolkit, RobuR, was extended with OpenCL support and used to implement calculation on GPU of the Port Homogeneity Index, a metric scoring irradiation port robustness through analysis of tissue density patterns prior to dose optimization and computation. Results were benchmarked against an independent native CPU implementation. Numerical results were in agreement between the GPU implementation and native CPU implementation. For 10 skull base cases, the GPU-accelerated implementation was employed to select beam setups for proton and carbon ion treatment plans, which proved to be dosimetrically robust, when recomputed in presence of various simulated positioning errors. From the point of view of performance, average running time on the GPU decreased by at least one order of magnitude compared to the CPU, rendering the GPU-accelerated analysis a feasible step in a clinical treatment planning interactive session. In conclusion, selection of robust particle therapy beam setups can be effectively accelerated on a GPU and become an unintrusive part of the particle therapy treatment planning workflow. Additionally, the speed gain opens new usage scenarios, like interactive analysis manipulation (e.g. constraining of some setup) and re-execution. Finally, through OpenCL portable parallelism, the new implementation is suitable also for CPU-only use, taking advantage of multiple cores, and can potentially exploit types of accelerators other than GPUs.
Naming Game with Multiple Hearers
NASA Astrophysics Data System (ADS)
Li, Bing; Chen, Guanrong; Chow, Tommy W. S.
2013-05-01
A new model called Naming Game with Multiple Hearers (NGMH) is proposed in this paper. A naming game over a population of individuals aims to reach consensus on the name of an object through pair-wise local interactions among all the individuals. The proposed NGMH model describes the learning process of a new word in a population, with one speaker and multiple hearers at each interaction, towards convergence. The characteristics of NGMH are examined on three types of network topologies, namely the ER random-graph network, the WS small-world network, and the BA scale-free network. A comparative analysis of the convergence time is performed, revealing that the topology with a larger average (node) degree can reach consensus faster than the others over the same population. It is found that, for a homogeneous network, the average degree is the limiting value of the number of hearers, which reduces the individual ability to learn new words, consequently decreasing the convergence time; for a scale-free network, this limiting value is the deviation of the average degree. It is also found that a network with a larger clustering coefficient takes a longer time to converge; in particular, a small-world network with the smallest rewiring probability takes the longest time to reach convergence. As more new nodes are added to scale-free networks with different degree distributions, their convergence time appears to be robust against the network-size variation. Most new findings reported in this paper are different from those of the single-speaker/single-hearer naming games documented in the literature.
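A toy simulation of a naming game with one speaker and several hearers per interaction, in the spirit of the NGMH model described above. The update rule coded here (hearers that know the uttered word collapse to it, hearers that do not simply add it, and the speaker collapses only if at least one hearer succeeded) is an assumption made for illustration; the paper's exact NGMH rules and network parameters may differ.

```python
import random
from collections import defaultdict

def ng_multiple_hearers(n=200, p_edge=0.05, n_hearers=3, max_steps=200000, seed=1):
    """Run a toy multi-hearer naming game on an ER-style random graph.

    Returns the step at which global consensus (every node holds the same
    single word) is reached, or max_steps if it is not reached.
    """
    rng = random.Random(seed)
    neighbors = defaultdict(list)
    for i in range(n):                       # Erdos-Renyi style neighbor lists
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                neighbors[i].append(j)
                neighbors[j].append(i)
    inventory = [set() for _ in range(n)]
    next_word = 0
    for step in range(max_steps):
        speaker = rng.randrange(n)
        if not neighbors[speaker]:
            continue
        hearers = rng.sample(neighbors[speaker],
                             min(n_hearers, len(neighbors[speaker])))
        if not inventory[speaker]:
            inventory[speaker].add(next_word)  # speaker invents a new name
            next_word += 1
        word = rng.choice(tuple(inventory[speaker]))
        any_success = False
        for h in hearers:
            if word in inventory[h]:
                inventory[h] = {word}          # successful hearer collapses
                any_success = True
            else:
                inventory[h].add(word)         # unsuccessful hearer learns the word
        if any_success:
            inventory[speaker] = {word}
        if all(len(inv) == 1 for inv in inventory) and \
           len({next(iter(inv)) for inv in inventory}) == 1:
            return step
    return max_steps

print(ng_multiple_hearers())
```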
On how the brain decodes vocal cues about speaker confidence.
Jiang, Xiaoming; Pell, Marc D
2015-05-01
In speech communication, listeners must accurately decode vocal cues that refer to the speaker's mental state, such as their confidence or 'feeling of knowing'. However, the time course and neural mechanisms associated with online inferences about speaker confidence are unclear. Here, we used event-related potentials (ERPs) to examine the temporal neural dynamics underlying a listener's ability to infer speaker confidence from vocal cues during speech processing. We recorded listeners' real-time brain responses while they evaluated statements wherein the speaker's tone of voice conveyed one of three levels of confidence (confident, close-to-confident, unconfident) or were spoken in a neutral manner. Neural responses time-locked to event onset show that the perceived level of speaker confidence could be differentiated at distinct time points during speech processing: unconfident expressions elicited a weaker P2 than all other expressions of confidence (or neutral-intending utterances), whereas close-to-confident expressions elicited a reduced negative response in the 330-500 msec and 550-740 msec time window. Neutral-intending expressions, which were also perceived as relatively confident, elicited a more delayed, larger sustained positivity than all other expressions in the 980-1270 msec window for this task. These findings provide the first piece of evidence of how quickly the brain responds to vocal cues signifying the extent of a speaker's confidence during online speech comprehension; first, a rough dissociation between unconfident and confident voices occurs as early as 200 msec after speech onset. At a later stage, further differentiation of the exact level of speaker confidence (i.e., close-to-confident, very confident) is evaluated via an inferential system to determine the speaker's meaning under current task settings. These findings extend three-stage models of how vocal emotion cues are processed in speech comprehension (e.g., Schirmer & Kotz, 2006) by revealing how a speaker's mental state (i.e., feeling of knowing) is simultaneously inferred from vocal expressions. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Dillon, Christina
2013-01-01
The goal of this project was to design, model, build, and test a flat panel speaker and frame for a spherical dome structure being made into a simulator. The simulator will be a test bed for evaluating an immersive environment for human interfaces. This project focused on the loud speakers and a sound diffuser for the dome. The rest of the team worked on an Ambisonics 3D sound system, video projection system, and multi-direction treadmill to create the most realistic scene possible. The main programs utilized in this project were Pro-E and COMSOL. Pro-E was used for creating detailed figures for the fabrication of a frame that held a flat panel loud speaker. The loud speaker was made from a thin sheet of Plexiglas and 4 acoustic exciters. COMSOL, a multiphysics finite element analysis simulator, was used to model and evaluate all stages of the loud speaker, frame, and sound diffuser. Acoustical testing measurements were used to create polar plots from the working prototype, which were then compared to the COMSOL simulations to select the optimal design for the dome. The final goal of the project was to install the flat panel loud speaker design, along with a sound diffuser, onto the wall of the dome. After running tests in COMSOL on various speaker configurations, including a warped Plexiglas version, the optimal speaker design included a flat piece of Plexiglas with a rounded frame to match the curvature of the dome. Eight of these loud speakers will be mounted into an inch and a half of high-performance acoustic insulation, or Thinsulate, that will cover the inside of the dome. The following technical paper discusses these projects and explains the engineering processes used, knowledge gained, and the projected future goals of this project.
Perception of speaker size and sex of vowel sounds
NASA Astrophysics Data System (ADS)
Smith, David R. R.; Patterson, Roy D.
2005-04-01
Glottal-pulse rate (GPR) and vocal-tract length (VTL) are both related to speaker size and sex; however, it is unclear how they interact to determine our perception of speaker size and sex. Experiments were designed to measure the relative contribution of GPR and VTL to judgements of speaker size and sex. Vowels were scaled to represent people with different GPRs and VTLs, including many well beyond the normal population values. In a single-interval, two-response rating paradigm, listeners judged the size (using a 7-point scale) and sex/age of the speaker (man, woman, boy, or girl) of these scaled vowels. Results from the size-rating experiments show that VTL has a much greater influence upon judgements of speaker size than GPR. Results from the sex-categorization experiments show that judgements of speaker sex are influenced about equally by GPR and VTL for vowels with normal GPR and VTL values. For abnormal combinations of GPR and VTL, where low GPRs are combined with short VTLs, VTL has more influence than GPR in sex judgements. [Work supported by the UK MRC (G9901257) and the German Volkswagen Foundation (VWF 1/79 783).]
Voice Handicap Index in Persian Speakers with Various Severities of Hearing Loss.
Aghadoost, Ozra; Moradi, Negin; Dabirmoghaddam, Payman; Aghadoost, Alireza; Naderifar, Ehsan; Dehbokri, Siavash Mohammadi
2016-01-01
The purpose of this study was to assess and compare the total score and subscale scores of the Voice Handicap Index (VHI) in speakers with and without hearing loss. A further aim was to determine whether a correlation exists between severity of hearing loss and the total and subscale VHI scores. In this cross-sectional, descriptive analytical study, 100 participants, divided into 2 groups of participants with and without hearing loss, were studied. Background information was gathered by interview, and VHI questionnaires were filled in by all participants. For all variables, including mean total score and VHI subscale scores, there was a considerable difference between speakers with and without hearing loss (p < 0.05). The correlation between severity of hearing loss and the total and subscale VHI scores was significant. Speakers with hearing loss were found to have higher mean VHI scores than speakers with normal hearing, indicating a high voice-related handicap in speakers with hearing loss. In addition, increased severity of hearing loss leads to more severe voice handicap. This finding emphasizes the need for a multilateral assessment and treatment of voice disorders in speakers with hearing loss. © 2017 S. Karger AG, Basel.
Understanding speaker attitudes from prosody by adults with Parkinson's disease.
Monetta, Laura; Cheang, Henry S; Pell, Marc D
2008-09-01
The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease, with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical 'pseudo-utterances' were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo-utterances. The results showed that PD patients were significantly less able than healthy control (HC) participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).
Liu, Hanjun; Wang, Emily Q.; Chen, Zhaocong; Liu, Peng; Larson, Charles R.; Huang, Dongfeng
2010-01-01
The purpose of this cross-language study was to examine whether the online control of voice fundamental frequency (F0) during vowel phonation is influenced by language experience. Native speakers of Cantonese and Mandarin, both tonal languages spoken in China, participated in the experiments. Subjects were asked to vocalize a vowel sound /u/ at their comfortable habitual F0, during which their voice pitch was unexpectedly shifted (±50, ±100, ±200, or ±500 cents, 200 ms duration) and fed back instantaneously to them over headphones. The results showed that Cantonese speakers produced significantly smaller responses than Mandarin speakers when the stimulus magnitude varied from 200 to 500 cents. Further, response magnitudes decreased along with the increase in stimulus magnitude in Cantonese speakers, which was not observed in Mandarin speakers. These findings suggest that online control of voice F0 during vocalization is sensitive to language experience. Further, systematic modulations of vocal responses across stimulus magnitude were observed in Cantonese speakers but not in Mandarin speakers, which indicates that this highly automatic feedback mechanism is sensitive to the specific tonal system of each language. PMID:21218905
Inside-in, alternative paradigms for sound spatialization
NASA Astrophysics Data System (ADS)
Bahn, Curtis; Moore, Stephan
2003-04-01
Arrays of widely spaced mono-directional loudspeakers (P.A.-style stereo configurations or ``outside-in'' surround-sound systems) have long provided the dominant paradigms for electronic sound diffusion. So prevalent are these models that alternatives have largely been ignored and electronic sound, regardless of musical aesthetic, has come to be inseparably associated with single-channel speakers or headphones. We recognize the value of these familiar paradigms, but believe that electronic sound can and should have many alternative, idiosyncratic voices. Through the design and construction of unique sound diffusion structures, one can reinvent the nature of electronic sound; when allied with new sensor technologies, these structures offer alternative modes of interaction with techniques of sonic computation. This paper describes several recent applications of spherical speakers (multichannel, outward-radiating geodesic speaker arrays) and Sensor-Speaker-Arrays (SenSAs: combinations of various sensor devices with outward-radiating multi-channel speaker arrays). This presentation introduces the development of four generations of spherical speakers (over a hundred individual speakers of various configurations) and their use in many different musical situations including live performance, recording, and sound installation. We describe the design and construction of these systems and, more generally, the new ``voices'' they give to electronic sound.
von Lochow, Heike; Lyberg-Åhlander, Viveka; Sahlén, Birgitta; Kastberg, Tobias; Brännström, K Jonas
2018-04-01
This study explores the effect of voice quality and competing speaker(s) on children's performance in a passage comprehension task. Furthermore, it explores the interaction between passage comprehension and cognitive functioning. Forty-nine children (27 girls and 22 boys) with normal hearing (aged 7-12 years) participated. Passage comprehension was tested in six different listening conditions: a typical (non-dysphonic) voice in quiet, a typical voice with one competing speaker, a typical voice with four competing speakers, a dysphonic voice in quiet, a dysphonic voice with one competing speaker, and a dysphonic voice with four competing speakers. The children's working memory capacity and executive functioning were also assessed. The findings indicate no direct effect of voice quality on the children's performance, but a significant effect of background listening condition. Interaction effects were seen between voice quality, background listening condition, and executive functioning. The children's susceptibility to the effects of the dysphonic voice and the background listening conditions is related to the individual's executive functions. The findings have several implications for the design of interventions in language learning environments such as classrooms.
San Juan, Valerie; Chambers, Craig G; Berman, Jared; Humphry, Chelsea; Graham, Susan A
2017-10-01
Two experiments examined whether 5-year-olds draw inferences about desire outcomes that constrain their online interpretation of an utterance. Children were informed of a speaker's positive (Experiment 1) or negative (Experiment 2) desire to receive a specific toy as a gift before hearing a referentially ambiguous statement ("That's my present") spoken with either a happy or sad voice. After hearing the speaker express a positive desire, children (N=24) showed an implicit (i.e., eye gaze) and explicit ability to predict reference to the desired object when the speaker sounded happy, but they showed only implicit consideration of the alternate object when the speaker sounded sad. After hearing the speaker express a negative desire, children (N=24) used only happy prosodic cues to predict the intended referent of the statement. Taken together, the findings indicate that the efficiency with which 5-year-olds integrate desire reasoning with language processing depends on the emotional valence of the speaker's voice but not on the type of desire representations (i.e., positive vs. negative) that children must reason about online. Copyright © 2017 Elsevier Inc. All rights reserved.
Four S's to Turn Your "Sex Talk" into a Super Program.
ERIC Educational Resources Information Center
Friedman, Jay
1995-01-01
Selection of campus speakers on sexuality is discussed, including assessment of speaker qualifications, the importance of teaching style and tone, choice of subject, program design for a meaningful event, and the sensitivity of both the speaker and the institution. (MSE)
NREL: International Activities - Fourth Renewable Energy Industries Forum
Speakers and presentations from the Fourth Renewable Energy Industries Forum (REIF), covering practices, opportunities and challenges of utility and distributed projects, and renewable energy integration.
Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm
NASA Astrophysics Data System (ADS)
Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.
2014-08-01
This study introduces a novel identification method for recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant's normal states from faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary data. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to a modular identifier developed using the latest advances in the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to the reference one, rather than on the values of the input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to identification of more transients without unfavorable effects are other merits of the proposed identifier.
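The pipeline described above (stationarizing the plant variables with ARIMA's integration step, forecasting them, and classifying the forecasts with a backpropagation-trained network) can be illustrated with a minimal sketch. The toy time series, the ARIMA order, the network size, and all variable names below are assumptions for illustration only, not the authors' identifier.

```python
# Minimal sketch (not the authors' code): ARIMA forecasts of plant variables
# feed a backpropagation-trained classifier that labels the transient type.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPClassifier

def forecast_features(series, order=(2, 1, 2), steps=10):
    """Fit an ARIMA(p, d, q) model (the 'I' step handles non-stationarity)
    and return a short forecast used as the classifier's input pattern."""
    fit = ARIMA(series, order=order).fit()
    return fit.forecast(steps=steps)

rng = np.random.default_rng(0)
X, y = [], []
# Toy "plant variable" time series for two hypothetical transient classes.
for label, drift in [(0, 0.02), (1, -0.05)]:
    for _ in range(10):
        series = np.cumsum(drift + 0.1 * rng.standard_normal(200))
        X.append(forecast_features(series))
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(np.array(X), np.array(y))          # error-backpropagation training
print(clf.predict(np.array(X[:3])))        # identify transient type
```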
Electrophysiology of subject-verb agreement mediated by speakers' gender.
Hanulíková, Adriana; Carreiras, Manuel
2015-01-01
An important property of speech is that it explicitly conveys features of a speaker's identity such as age or gender. This event-related potential (ERP) study examined the effects of social information provided by a speaker's gender, i.e., the conceptual representation of gender, on subject-verb agreement. Despite numerous studies on agreement, little is known about syntactic computations generated by speaker characteristics extracted from the acoustic signal. Slovak is well suited to investigate this issue because it is a morphologically rich language in which agreement involves features for number, case, and gender. Grammaticality of a sentence can be evaluated by checking a speaker's gender as conveyed by his/her voice. We examined how conceptual information about speaker gender, which is not syntactic but rather social and pragmatic in nature, is interpreted for the computation of agreement patterns. ERP responses to verbs disagreeing with the speaker's gender (e.g., a sentence including a masculine verbal inflection spoken by a female person: 'the neighbors were upset because I *stole[MASC] plums') elicited a larger early posterior negativity compared to correct sentences. When the agreement was purely syntactic and did not depend on the speaker's gender, a disagreement between a formally marked subject and the verb inflection (e.g., 'the woman[FEM] *stole[MASC] plums') resulted in a larger P600 preceded by a larger anterior negativity compared to the control sentences. This result is in line with proposals according to which the recruitment of non-syntactic information such as the gender of the speaker results in N400-like effects, while formally marked syntactic features lead to structural integration as reflected in a LAN/P600 complex.
Groenewold, Rimke; Armstrong, Elizabeth
2018-05-14
Previous research has shown that speakers with aphasia rely on enactment more often than non-brain-damaged language users. Several studies have been conducted to explain this observed increase, demonstrating that spoken language containing enactment is easier to produce and is more engaging to the conversation partner. This paper describes the effects of the occurrence of enactment in casual conversation involving individuals with aphasia on the level of conversational assertiveness. The aim was to evaluate whether and to what extent the occurrence of enactment in the speech of individuals with aphasia contributes to its conversational assertiveness. Conversations between a speaker with aphasia and his wife (drawn from AphasiaBank) were analysed in several steps. First, the transcripts were divided into moves, and all moves were coded according to the systemic functional linguistics (SFL) framework. Next, all moves were labelled in terms of their level of conversational assertiveness, as defined in the previous literature. Finally, all enactments were identified and their level of conversational assertiveness was compared with that of non-enactments. Throughout their conversations, the non-brain-damaged speaker was more assertive than the speaker with aphasia. However, the speaker with aphasia produced more enactments than the non-brain-damaged speaker, and the moves of the speaker with aphasia containing enactment were more assertive than those without enactment. The use of enactment in the conversations under study positively affected the level of conversational assertiveness of the speaker with aphasia, a competence that is important for speakers with aphasia because it contributes to their floor time, their chances of being heard seriously, and their degree of control over the conversation topic. © 2018 The Authors International Journal of Language & Communication Disorders published by John Wiley & Sons Ltd on behalf of Royal College of Speech and Language Therapists.
Speaker normalization and adaptation using second-order connectionist networks.
Watrous, R L
1993-01-01
A method for speaker normalization and adaptation using connectionist networks is developed. A speaker-specific linear transformation of observations of the speech signal is computed using second-order network units. Classification is accomplished by a multilayer feedforward network that operates on the normalized speech data. The network is adapted for a new talker by modifying the transformation parameters while leaving the classifier fixed. This is accomplished by backpropagating classification error through the classifier to the second-order transformation units. This method was evaluated for the classification of ten vowels for 76 speakers using the first two formant values of the Peterson-Barney data. The results suggest that rapid speaker adaptation resulting in high classification accuracy can be accomplished by this method.
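The adaptation scheme described above, in which only a speaker-specific transform is updated by backpropagating error through a frozen classifier, is sketched below. As a simplification, a plain linear layer stands in for the second-order transformation units, and the network sizes and toy formant data are assumptions, not the paper's configuration.

```python
# Minimal sketch (assumptions, not Watrous's implementation): a per-speaker
# linear transform of formant features is adapted by backpropagating the
# classification loss through a frozen vowel classifier.
import torch
import torch.nn as nn

n_features, n_vowels = 2, 10           # e.g., first two formants -> 10 vowels

classifier = nn.Sequential(            # speaker-independent classifier
    nn.Linear(n_features, 32), nn.Tanh(), nn.Linear(32, n_vowels))
transform = nn.Linear(n_features, n_features)   # speaker-specific normalization

for p in classifier.parameters():      # keep the classifier fixed ...
    p.requires_grad = False
opt = torch.optim.SGD(transform.parameters(), lr=0.01)  # ... adapt only the transform

def adapt_step(formants, labels):
    """One adaptation step on a new talker's labelled vowels (toy data)."""
    logits = classifier(transform(formants))
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()                    # gradient flows through the frozen classifier
    opt.step()
    return loss.item()

# Hypothetical adaptation data: a batch of (F1, F2) values and vowel labels.
formants = torch.randn(16, n_features)
labels = torch.randint(0, n_vowels, (16,))
print(adapt_step(formants, labels))
```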
EFL Teachers' Responses to L2 Writing.
ERIC Educational Resources Information Center
Chang, Yuh-Fang
This study investigated differences in the product and process of evaluating second language compositions by Taiwanese speakers of English. It examined whether such factors as language background (native English speaker versus native Chinese speaker), academic discipline, and educational background affected raters' scoring outcomes; whether rating…
Russian Emotion Vocabulary in American Learners' Narratives
ERIC Educational Resources Information Center
Pavlenko, Aneta; Driagina, Viktoria
2007-01-01
This study compared the uses of emotion vocabulary in narratives elicited from monolingual speakers of Russian and English and advanced American learners of Russian. Monolingual speakers differed significantly in the distribution of emotion terms across morphosyntactic categories: English speakers favored an adjectival pattern of emotion…
Motion cues that make an impression: Predicting perceived personality by minimal motion information.
Koppensteiner, Markus
2013-11-01
The current study presents a methodology to analyze first impressions on the basis of minimal motion information. In order to test the applicability of the approach, brief silent video clips of 40 speakers were presented to independent observers (i.e., observers who did not know the speakers), who rated them on measures of the Big Five personality traits. The body movements of the speakers were then captured by placing landmarks on the speakers' forehead, one shoulder and the hands. Analysis revealed that observers ascribe extraversion to variations in the speakers' overall activity, emotional stability to the movements' relative velocity, and openness to variation in motion direction. Although ratings of openness and conscientiousness were related to biographical data of the speakers (i.e., measures of career progress), measures of body motion failed to provide similar results. In conclusion, analysis of motion behavior might be done on the basis of a small set of landmarks that seem to capture important parts of relevant nonverbal information.
NASA Astrophysics Data System (ADS)
Kim, Yunjung; Weismer, Gary; Kent, Ray D.
2005-09-01
In previous work [J. Acoust. Soc. Am. 117, 2605 (2005)], we reported on formant trajectory characteristics of a relatively large number of speakers with dysarthria and near-normal speech intelligibility. The purpose of that analysis was to begin a documentation of the variability, within relatively homogeneous speech-severity groups, of acoustic measures commonly used to predict across-speaker variation in speech intelligibility. In that study we found that even with near-normal speech intelligibility (90%-100%), many speakers had reduced formant slopes for some words and distributional characteristics of acoustic measures that were different from values obtained from normal speakers. In the current report we extend those findings to a group of speakers with dysarthria with somewhat poorer speech intelligibility than the original group. Results are discussed in terms of the utility of certain acoustic measures as indices of speech intelligibility, and as explanatory data for theories of dysarthria. [Work supported by NIH Award R01 DC00319.]
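One of the acoustic measures referred to above, formant slope, is simply the rate of change of a formant trajectory. A minimal sketch of how such a slope might be estimated from a formant track is given below; the sampling interval and the synthetic F2 values are assumptions for illustration, not data from the study.

```python
# Minimal sketch (an assumption, not the authors' analysis) of estimating the
# slope of an F2 trajectory over a vowel transition by linear regression.
import numpy as np

def formant_slope(times_s, formant_hz):
    """Return the least-squares slope (Hz/s) of a formant trajectory."""
    slope, _intercept = np.polyfit(times_s, formant_hz, deg=1)
    return slope

# Hypothetical F2 track sampled every 10 ms during a diphthong transition.
t = np.arange(0.0, 0.12, 0.01)
f2 = 1200 + 3000 * t + 20 * np.random.default_rng(0).standard_normal(t.size)
print(round(formant_slope(t, f2), 1), "Hz/s")
```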
Speaker Invariance for Phonetic Information: an fMRI Investigation
Salvata, Caden; Blumstein, Sheila E.; Myers, Emily B.
2012-01-01
The current study explored how listeners map variable acoustic input onto a common sound structure representation while retaining enough phonetic detail to distinguish the identities of talkers. An adaptation paradigm was utilized to examine areas which showed an equal neural response (equal release from adaptation) to phonetic change whether spoken by the same speaker or by two different speakers, and insensitivity (failure to show release from adaptation) when the same phonetic input was spoken by a different speaker. Neural areas which showed speaker invariance were located in the anterior portion of the middle superior temporal gyrus bilaterally. These findings provide support for the view that speaker normalization processes allow for the translation of a variable speech input to a common abstract sound structure. That this process appears to occur early in the processing stream, recruiting temporal structures, suggests that this mapping takes place prelexically, before sound structure input is mapped onto lexical representations. PMID:23264714
Long short-term memory for speaker generalization in supervised speech separation
Chen, Jitong; Wang, DeLiang
2017-01-01
Speech separation can be formulated as learning to estimate a time-frequency mask from acoustic features extracted from noisy speech. For supervised speech separation, generalization to unseen noises and unseen speakers is a critical issue. Although deep neural networks (DNNs) have been successful in noise-independent speech separation, DNNs are limited in modeling a large number of speakers. To improve speaker generalization, a separation model based on long short-term memory (LSTM) is proposed, which naturally accounts for the temporal dynamics of speech. Systematic evaluation shows that the proposed model substantially outperforms a DNN-based model on unseen speakers and unseen noises in terms of objective speech intelligibility. Analyzing LSTM internal representations reveals that the LSTM captures long-term speech contexts. It is also found that the LSTM model is more advantageous for low-latency speech separation: even without future frames, it performs better than the DNN model with future frames. The proposed model represents an effective approach for speaker- and noise-independent speech separation. PMID:28679261
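The formulation described above, estimating a time-frequency mask from noisy-speech features with an LSTM, can be sketched as follows. The feature dimension, number of frequency bins, layer sizes, and toy training batch are assumptions for illustration, not the authors' network.

```python
# Minimal sketch (assumptions throughout): an LSTM maps acoustic features of
# noisy speech to a time-frequency mask trained against an ideal mask target.
import torch
import torch.nn as nn

class MaskEstimator(nn.Module):
    def __init__(self, n_features=64, n_freq_bins=161, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_freq_bins)

    def forward(self, feats):                  # feats: (batch, frames, features)
        h, _ = self.lstm(feats)
        return torch.sigmoid(self.out(h))      # mask in [0, 1] per T-F unit

model = MaskEstimator()
noisy_feats = torch.randn(4, 100, 64)          # hypothetical feature frames
ideal_mask = torch.rand(4, 100, 161)           # hypothetical training target
loss = nn.functional.mse_loss(model(noisy_feats), ideal_mask)
loss.backward()                                # one supervised training step
print(loss.item())
```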
Enhanced echolocation via robust statistics and super-resolution of sonar images
NASA Astrophysics Data System (ADS)
Kim, Kio
Echolocation is a process in which an animal uses acoustic signals to exchange information with its environment. In a recent study, Neretti et al. have shown that the use of robust statistics can significantly improve the resiliency of echolocation against noise and enhance its accuracy by suppressing the development of sidelobes in the processing of an echo signal. In this research, the use of robust statistics is extended to problems in underwater exploration. The dissertation consists of two parts. Part I describes how robust statistics can enhance the identification of target objects, which in this case are cylindrical containers filled with four different liquids. Particularly, this work employs a variation of an existing robust estimator called an L-estimator, which was first suggested by Koenker and Bassett. As pointed out by Au et al., a 'highlight interval' is an important feature, and it is closely related to many other important features that are known to be crucial for dolphin echolocation. The varied L-estimator described in this text is used to enhance the detection of highlight intervals, which eventually leads to a successful classification of echo signals. Part II extends the problem into two dimensions. Thanks to advances in material and computer technology, various sonar imaging modalities are available on the market. By registering acoustic images from such video sequences, one can extract more information about the region of interest. Computer vision and image processing allowed the application of robust statistics to the acoustic images produced by forward-looking sonar systems, such as Dual-frequency Identification Sonar and ProViewer. The first use of robust statistics for sonar image enhancement in this text is in image registration. Random Sample Consensus (RANSAC) is widely used for image registration. The registration algorithm using RANSAC is optimized for sonar image registration, and its performance is studied. The second use of robust statistics is in fusing the images. It is shown that the maximum a posteriori fusion method can be formulated in a Kalman filter-like manner, and also that the resulting expression is identical to a W-estimator with a specific weight function.
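An L-estimator of the kind attributed to Koenker and Bassett is a weighted combination of order statistics. The sketch below shows a simple instance (a symmetric trimmed mean) applied to a toy echo window; the trimming fraction and the data are illustrative assumptions, not the dissertation's estimator.

```python
# Minimal sketch (an illustration, not the dissertation's estimator) of an
# L-estimator: a location estimate formed as a weighted sum of order
# statistics, here a trimmed mean that suppresses outlier samples.
import numpy as np

def l_estimator(samples, trim=0.2):
    """Weighted sum of order statistics with zero weight on the tails."""
    x = np.sort(np.asarray(samples, dtype=float))
    k = int(trim * x.size)
    weights = np.zeros_like(x)
    weights[k:x.size - k] = 1.0 / (x.size - 2 * k)   # uniform interior weights
    return float(np.dot(weights, x))

echo_window = [0.9, 1.1, 1.0, 1.05, 0.95, 7.5]       # toy echo amplitudes, one spike
print(l_estimator(echo_window))                       # robust to the outlier
```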
Applying Rasch model analysis in the development of the cantonese tone identification test (CANTIT).
Lee, Kathy Y S; Lam, Joffee H S; Chan, Kit T Y; van Hasselt, Charles Andrew; Tong, Michael C F
2017-01-01
Applying Rasch analysis to evaluate the internal structure of a lexical tone perception test known as the Cantonese Tone Identification Test (CANTIT). A 75-item pool (CANTIT-75) with pictures and sound tracks was developed. Respondents were required to make a four-alternative forced choice on each item. A short version of 30 items (CANTIT-30) was developed based on fit statistics, difficulty estimates, and content evaluation. Internal structure was evaluated by fit statistics and Rasch Factor Analysis (RFA). 200 children with normal hearing and 141 children with hearing impairment were recruited. For CANTIT-75, all infit and 97% of outfit values were < 2.0. RFA revealed that 40.1% of total variance was explained by the Rasch measure. The first residual component explained 2.5% of total variance, with an eigenvalue of 3.1. For CANTIT-30, all infit and outfit values were < 2.0. The Rasch measure explained 38.8% of total variance, and the first residual component explained 3.9% of total variance, with an eigenvalue of 1.9. The Rasch model provides excellent guidance for the development of short forms. Both CANTIT-75 and CANTIT-30 possess satisfactory internal structure as construct validity evidence in measuring the lexical tone identification ability of Cantonese speakers.
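The Rasch analysis above rests on a simple item response model in which the probability of a correct answer depends only on the difference between a respondent's ability and an item's difficulty. The sketch below illustrates that item characteristic curve with simulated data; the ability distribution, the 30 item difficulties, and all names are assumptions for illustration, not the CANTIT calibration.

```python
# Minimal sketch (illustrative assumptions only) of the Rasch model behind the
# CANTIT analysis: P(correct) depends on ability (theta) minus difficulty (b).
import numpy as np

def rasch_p(theta, b):
    """Rasch item characteristic curve: P(correct | theta, b)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

rng = np.random.default_rng(0)
abilities = rng.normal(0.0, 1.0, size=200)        # hypothetical children
difficulties = np.linspace(-2.0, 2.0, 30)         # hypothetical 30-item short form
responses = rng.random((200, 30)) < rasch_p(abilities[:, None], difficulties[None, :])
print(responses.mean(axis=0)[:5])                 # easier items -> higher proportion correct
```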
Nuclear Magnetic Resonance Spectroscopy-Based Identification of Yeast.
Himmelreich, Uwe; Sorrell, Tania C; Daniel, Heide-Marie
2017-01-01
Rapid and robust high-throughput identification of environmental, industrial, or clinical yeast isolates is important whenever relatively large numbers of samples need to be processed in a cost-efficient way. Nuclear magnetic resonance (NMR) spectroscopy generates complex data based on metabolite profiles, chemical composition and possibly on medium consumption, which can be used not only for the assessment of metabolic pathways but also for accurate identification of yeast down to the subspecies level. Initial results on NMR-based yeast identification were comparable with conventional and DNA-based identification. Potential advantages of NMR spectroscopy in mycological laboratories include not only accurate identification but also the potential for automated sample delivery, automated analysis using computer-based methods, rapid turnaround time, high throughput, and low running costs. We describe here the sample preparation, data acquisition and analysis for NMR-based yeast identification. In addition, a roadmap for the development of classification strategies is given that will result in the acquisition of a database and analysis algorithms for yeast identification in different environments.
Does language shape thought? Mandarin and English speakers' conceptions of time.
Boroditsky, L
2001-08-01
Does the language you speak affect how you think about the world? This question is taken up in three experiments. English and Mandarin talk about time differently--English predominantly talks about time as if it were horizontal, while Mandarin also commonly describes time as vertical. This difference between the two languages is reflected in the way their speakers think about time. In one study, Mandarin speakers tended to think about time vertically even when they were thinking for English (Mandarin speakers were faster to confirm that March comes earlier than April if they had just seen a vertical array of objects than if they had just seen a horizontal array, and the reverse was true for English speakers). Another study showed that the extent to which Mandarin-English bilinguals think about time vertically is related to how old they were when they first began to learn English. In another experiment native English speakers were taught to talk about time using vertical spatial terms in a way similar to Mandarin. On a subsequent test, this group of English speakers showed the same bias to think about time vertically as was observed with Mandarin speakers. It is concluded that (1) language is a powerful tool in shaping thought about abstract domains and (2) one's native language plays an important role in shaping habitual thought (e.g., how one tends to think about time) but does not entirely determine one's thinking in the strong Whorfian sense. Copyright 2001 Academic Press.
An oscillator model of the timing of turn-taking.
Wilson, Margaret; Wilson, Thomas P
2005-12-01
When humans talk without conventionalized arrangements, they engage in conversation--that is, a continuous and largely nonsimultaneous exchange in which speakers take turns. Turn-taking is ubiquitous in conversation and is the normal case against which alternatives, such as interruptions, are treated as violations that warrant repair. Furthermore, turn-taking involves highly coordinated timing, including a cyclic rise and fall in the probability of initiating speech during brief silences, and involves the notable rarity, especially in two-party conversations, of two speakers' breaking a silence at once. These phenomena, reported by conversation analysts, have been neglected by cognitive psychologists, and to date there has been no adequate cognitive explanation. Here, we propose that, during conversation, endogenous oscillators in the brains of the speaker and the listeners become mutually entrained, on the basis of the speaker's rate of syllable production. This entrained cyclic pattern governs the potential for initiating speech at any given instant for the speaker and also for the listeners (as potential next speakers). Furthermore, the readiness functions of the listeners are counterphased with that of the speaker, minimizing the likelihood of simultaneous starts by a listener and the previous speaker. This mutual entrainment continues for a brief period when the speech stream ceases, accounting for the cyclic property of silences. This model not only captures the timing phenomena observed in the literature on conversation analysis, but also converges with findings from the literatures on phoneme timing, syllable organization, and interpersonal coordination.
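The proposed mechanism, that listener readiness oscillates at the speaker's syllable rate but in counterphase with the speaker's own readiness, can be illustrated with a toy calculation. The 4 Hz rate and the cosine readiness functions below are assumptions chosen only to show why counterphasing makes simultaneous starts rare; they are not the authors' model.

```python
# Minimal sketch (a toy illustration, not the authors' oscillator model) of
# counterphased speaker/listener readiness functions at the syllable rate.
import numpy as np

syllable_rate_hz = 4.0                       # assumed speech rate
t = np.linspace(0.0, 2.0, 1000)              # two seconds of conversation
speaker = 0.5 * (1 + np.cos(2 * np.pi * syllable_rate_hz * t))       # readiness to speak
listener_counterphase = 0.5 * (1 + np.cos(2 * np.pi * syllable_rate_hz * t + np.pi))
listener_inphase = speaker.copy()

# Mean joint readiness approximates how often both parties would start at once.
print(round(np.mean(speaker * listener_counterphase), 3))   # low  (~0.125)
print(round(np.mean(speaker * listener_inphase), 3))        # high (~0.375)
```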
Tebb, Kathleen P; Pollack, Lance M; Millstein, Shana; Otero-Sabogal, Regina; Wibbelsman, Charles J
2014-09-01
To explore parental beliefs and attitudes about confidential services for their teenagers; and to develop an instrument to assess these beliefs and attitudes that could be used among English and Spanish speakers. The long-term goal is to use this research to better understand and evaluate interventions to improve parental knowledge and attitudes toward their adolescent's access and utilization of comprehensive confidential health services. The instrument was developed using an extensive literature review and theoretical framework followed by qualitative data from focus groups and in-depth interviews. It was then pilot tested with a random sample of English- and Spanish-speaking parents and further revised. The final instrument was administered to a random sample of 1,000 mothers. The psychometric properties of the instrument were assessed for Spanish and English speakers. The instrument consisted of 12 scales. Most Cronbach alphas were >.70 for Spanish and English speakers. Fewer items for Spanish speakers "loaded" for the Responsibility and Communication scales. Parental Control of Health Information failed for Spanish speakers. The Parental Attitudes of Adolescent Confidential Health Services Questionnaire (PAACS-Q) contains 12 scales and is a valid and reliable instrument to assess parental knowledge and attitudes toward confidential health services for adolescents among English speakers and all but one scale was applicable for Spanish speakers. More research is needed to understand key constructs with Spanish speakers. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
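The scale reliabilities reported above are Cronbach alphas. As a reference point, a minimal sketch of that statistic on simulated item responses is given below; the simulated data and the > .70 benchmark commentary are illustrative assumptions, not the PAACS-Q analysis itself.

```python
# Minimal sketch (illustrative, not the study's analysis) of Cronbach's alpha,
# the internal-consistency statistic reported for the PAACS-Q scales.
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: (respondents, items) matrix of scale item responses."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))                    # hypothetical construct
items = latent + 0.8 * rng.normal(size=(100, 5))      # five noisy scale items
print(round(cronbach_alpha(items), 2))                 # typically above .70 here
```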
Cartei, Valentina; Bond, Rod; Reby, David
2014-09-01
Men's voices contain acoustic cues to body size and hormonal status, which have been found to affect women's ratings of speaker size, masculinity and attractiveness. However, the extent to which these voice parameters mediate the relationship between speakers' fitness-related features and listener's judgments of their masculinity has not yet been investigated. We audio-recorded 37 adult heterosexual males performing a range of speech tasks and asked 20 adult heterosexual female listeners to rate speakers' masculinity on the basis of their voices only. We then used a two-level (speaker within listener) path analysis to examine the relationships between the physiological (testosterone, height), acoustic (fundamental frequency or F0, and resonances or ΔF) and perceptual dimensions (listeners' ratings) of speakers' masculinity. Overall, results revealed that male speakers who were taller and had higher salivary testosterone levels also had lower F0 and ΔF, and were in turn rated as more masculine. The relationship between testosterone and perceived masculinity was essentially mediated by F0, while that of height and perceived masculinity was partially mediated by both F0 and ΔF. These observations confirm that women listeners attend to sexually dimorphic voice cues to assess the masculinity of unseen male speakers. In turn, variation in these voice features correlate with speakers' variation in stature and hormonal status, highlighting the interdependence of these physiological, acoustic and perceptual dimensions. Copyright © 2014. Published by Elsevier Inc.
The artful dodger: answering the wrong question the right way.
Rogers, Todd; Norton, Michael I
2011-06-01
What happens when speakers try to "dodge" a question they would rather not answer by answering a different question? In 4 studies, we show that listeners can fail to detect dodges when speakers answer similar, but objectively incorrect, questions (the "artful dodge"), a detection failure that goes hand-in-hand with a failure to rate dodgers more negatively. We propose that dodges go undetected because listeners' attention is not usually directed toward a goal of dodge detection (i.e., Is this person answering the question?) but rather toward a goal of social evaluation (i.e., Do I like this person?). Listeners were not blind to all dodge attempts, however. Dodge detection increased when listeners' attention was diverted from social goals toward determining the relevance of the speaker's answers (Study 1), when speakers answered a question egregiously dissimilar to the one asked (Study 2), and when listeners' attention was directed to the question asked by keeping it visible during speakers' answers (Study 4). We also examined the interpersonal consequences of dodge attempts: When listeners were guided to detect dodges, they rated speakers more negatively (Study 2), and listeners rated speakers who answered a similar question in a fluent manner more positively than speakers who answered the actual question but disfluently (Study 3). These results add to the literatures on both Gricean conversational norms and goal-directed attention. We discuss the practical implications of our findings in the contexts of interpersonal communication and public debates.
Content-specific coordination of listeners' to speakers' EEG during communication
Kuhlen, Anna K.; Allefeld, Carsten; Haynes, John-Dylan
2012-01-01
Cognitive neuroscience has recently begun to extend its focus from the isolated individual mind to two or more individuals coordinating with each other. In this study we uncover a coordination of neural activity between the ongoing electroencephalogram (EEG) of two people—a person speaking and a person listening. The EEG of one set of twelve participants (“speakers”) was recorded while they were narrating short stories. The EEG of another set of twelve participants (“listeners”) was recorded while watching audiovisual recordings of these stories. Specifically, listeners watched the superimposed videos of two speakers simultaneously and were instructed to attend either to one or the other speaker. This allowed us to isolate neural coordination due to processing the communicated content from the effects of sensory input. We find several neural signatures of communication: First, the EEG is more similar among listeners attending to the same speaker than among listeners attending to different speakers, indicating that listeners' EEG reflects content-specific information. Secondly, listeners' EEG activity correlates with the attended speakers' EEG, peaking at a time delay of about 12.5 s. This correlation takes place not only between homologous, but also between non-homologous brain areas in speakers and listeners. A semantic analysis of the stories suggests that listeners coordinate with speakers at the level of complex semantic representations, so-called “situation models”. With this study we link a coordination of neural activity between individuals directly to verbally communicated information. PMID:23060770
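The reported speaker-listener coupling peaking at roughly a 12.5 s delay suggests a lagged correlation analysis. The sketch below illustrates that kind of measure on synthetic signals; the sampling rate, the toy envelope signals, and the function name are assumptions for illustration, not the study's EEG pipeline.

```python
# Minimal sketch (an assumption, not the study's analysis): correlate a
# listener's signal with a speaker's signal across a range of time delays.
import numpy as np

def lagged_correlation(speaker_sig, listener_sig, fs_hz, max_lag_s=20.0):
    """Return (lags in s, Pearson r of listener relative to speaker at each lag)."""
    max_lag = int(max_lag_s * fs_hz)
    lags = np.arange(0, max_lag + 1)
    r = [np.corrcoef(speaker_sig[:len(speaker_sig) - lag],
                     listener_sig[lag:])[0, 1] for lag in lags]
    return lags / fs_hz, np.array(r)

fs = 10.0                                        # hypothetical 10 Hz envelope sampling
rng = np.random.default_rng(0)
speaker = rng.standard_normal(3000)
listener = np.roll(speaker, int(12.5 * fs)) + rng.standard_normal(3000)  # delayed copy + noise
lags_s, r = lagged_correlation(speaker, listener, fs)
print(round(lags_s[np.argmax(r)], 1), "s")       # peaks near the imposed 12.5 s delay
```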
Bornkessel-Schlesewsky, Ina; Krauspenhaar, Sylvia; Schlesewsky, Matthias
2013-01-01
Evidence is accruing that, in comprehending language, the human brain rapidly integrates a wealth of information sources–including the reader or hearer’s knowledge about the world and even his/her current mood. However, little is known to date about how language processing in the brain is affected by the hearer’s knowledge about the speaker. Here, we investigated the impact of social attributions to the speaker by measuring event-related brain potentials while participants watched videos of three speakers uttering true or false statements pertaining to politics or general knowledge: a top political decision maker (the German Federal Minister of Finance at the time of the experiment), a well-known media personality and an unidentifiable control speaker. False versus true statements engendered an N400 - late positivity response, with the N400 (150–450 ms) constituting the earliest observable response to message-level meaning. Crucially, however, the N400 was modulated by the combination of speaker and message: for false versus true political statements, an N400 effect was only observable for the politician, but not for either of the other two speakers; for false versus true general knowledge statements, an N400 was engendered by all three speakers. We interpret this result as demonstrating that the neurophysiological response to message-level meaning is immediately influenced by the social status of the speaker and whether he/she has the power to bring about the state of affairs described. PMID:23894425
A scoring scheme for evaluating magnetofossil identifications
NASA Astrophysics Data System (ADS)
Kopp, R. E.; Kirschvink, J. L.
2007-12-01
In many Quaternary lacustrine and marine settings, fossil magnetotactic bacteria are a major contributor to sedimentary magnetization [1]. Magnetite particles produced by magnetotactic bacteria have traits, shaped by natural selection, that increase the efficiency with which the bacteria utilize iron and also facilitate the recognition of the particles' biological origin. In particular, magnetotactic bacteria generally produce particles with characteristic shapes and narrow size and shape distributions that lie within the single domain stability field. The particles have effective positive magnetic anisotropy, produced by alignment in chains and frequently by particle elongation. In addition, the crystals are often nearly stoichiometric and have few crystallographic defects. Yet, despite these distinctive traits, there are few identified magnetofossils that predate the Quaternary, and many putative identifications are highly controversial. We propose a six-criteria scoring scheme for evaluating identifications based on the quality of the geological, magnetic, and electron microscopic evidence. Our criteria are: (1) whether the geological context is well-constrained stratigraphically, and whether paleomagnetic evidence suggests a primary magnetization; (2) whether magnetic or microscopic evidence supports the presence of significant single-domain magnetite; (3) whether magnetic or ferromagnetic resonance evidence indicates narrow size and shape distributions, and whether microscopic evidence reveals single-domain particles with truncated edges, elongate single-domain particles, and/or narrow size and shape distributions; (4) whether ferromagnetic resonance, low-temperature magnetic, or electron microscopic evidence reveals the presence of chains; (5) whether low-temperature magnetometry, energy dispersive X-ray spectroscopy, or other techniques demonstrate the near-stoichiometry of the particles; and (6) whether high-resolution TEM indicates the near-absence of crystallographic defects. We use criterion 1 to set the threshold for determining whether a magnetofossil identification is robust. Criteria 3 and 4 are assigned numerical scores that range from 0 to 4, while criteria 2, 5, and 6 are evaluated based on presence or absence. Based on this scheme, the oldest robust magnetofossils yet found come from the Cretaceous chalk beds of southern England [2], though Lower Cambrian limestones of the Pestrotsvet Formation, Siberian Platform, only marginally fail to meet our robust criteria [3]. Although magnetofossils have also been reported from Proterozoic, Archean, and Martian rocks, none of these identifications are robust. References: [1] R. E. Kopp and J. L. Kirschvink (2007). Earth Sci. Rev. doi:10.1016/j.earscirev.2007.08.001. [2] P. Montgomery et al. (1998). Earth Planet. Sci. Lett. 156: 209-224. [3] S. B. R. Chang et al. (1987). Phys. Earth Planet. Int. 46: 289-303.
Gayle, Alberto Alexander; Shimaoka, Motomu
2017-01-01
Introduction: The predominance of English in scientific research has created hurdles for “non-native speakers” of English. Here we present a novel application of native language identification (NLI) for the assessment of medical-scientific writing. For this purpose, we created a novel classification system whereby scoring would be based solely on text features found to be distinctive among native English speakers (NS) within a given context. We dubbed this the “Genuine Index” (GI). Methodology: This methodology was validated using a small set of journals in the field of pediatric oncology. Our dataset consisted of 5,907 abstracts, representing work from 77 countries. A support vector machine (SVM) was used to generate our model and for scoring. Results: Accuracy, precision, and recall of the classification model were 93.3%, 93.7%, and 99.4%, respectively. Class-specific F-scores were 96.5% for NS and 39.8% for our benchmark class, Japan. Overall kappa was calculated to be 37.2%. We found significant differences between countries with respect to the GI score. Significant correlation was found between GI scores and two validated objective measures of writing proficiency and readability. Two sets of key terms and phrases differentiating NS and non-native writing were identified. Conclusions: Our GI model was able to detect, with a high degree of reliability, subtle differences between the terms and phrasing used by native and non-native speakers in peer-reviewed journals in the field of pediatric oncology. In addition, L1 language transfer was found to be very likely to survive revision, especially in non-Western countries such as Japan. These findings show that even when the language used is technically correct, there may still be some phrasing or usage that impacts quality. PMID:28212419
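The abstract specifies only that an SVM was trained on text features distinctive of native English writing, so the sketch below shows one plausible shape such a classifier could take. The toy snippets, the TF-IDF features, and the use of the decision value as a score are assumptions, not the authors' Genuine Index pipeline.

```python
# Minimal sketch (assumptions only; not the authors' pipeline): a linear SVM
# over word n-grams separating native-speaker (NS) text from non-native text,
# with the signed decision value serving as a "nativeness" score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training snippets standing in for journal abstracts.
texts = ["outcomes were assessed at relapse", "the patients was treated with",
         "we evaluated event-free survival", "in this study it is described that"]
labels = ["NS", "non-NS", "NS", "non-NS"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), analyzer="word"),
    LinearSVC())
model.fit(texts, labels)

# Distance to the separating hyperplane as a score in the spirit of the GI.
print(model.decision_function(["survival was evaluated at relapse"]))
```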
ERIC Educational Resources Information Center
Yow, W. Quin; Markman, Ellen M.
2016-01-01
Bilingual children regularly face communicative challenges when speakers switch languages. To cope with such challenges, children may attempt to discern a speaker's communicative intent, thereby heightening their sensitivity to nonverbal communicative cues. Two studies examined whether such communication breakdowns increase sensitivity to…
Turn-Taking, Turn-Giving, and Alzheimer's Disease.
ERIC Educational Resources Information Center
Sabat, Steven R.
1991-01-01
Analysis of a conversation with an Alzheimer's disease sufferer with word-finding problems revealed that social context, speaker characteristics, and awareness of the other speaker's perspective governed such conversational aspects of turn taking and turn giving, which allowed full development of both speakers' personas. (23 references) (CB)
The Acquisition of Clitic Pronouns in the Spanish Interlanguage of Peruvian Quechua Speakers.
ERIC Educational Resources Information Center
Klee, Carol A.
1989-01-01
Analysis of four adult Quechua speakers' acquisition of clitic pronouns in Spanish revealed that educational attainment and amount of contact with monolingual Spanish speakers were positively related to native-like norms of competence in the use of object pronouns in Spanish. (CB)
Arctic Visiting Speakers Series (AVS)
NASA Astrophysics Data System (ADS)
Fox, S. E.; Griswold, J.
2011-12-01
The Arctic Visiting Speakers (AVS) Series funds researchers and other arctic experts to travel and share their knowledge in communities where they might not otherwise connect. Speakers cover a wide range of arctic research topics and can address a variety of audiences including K-12 students, graduate and undergraduate students, and the general public. Host applications are accepted on an on-going basis, depending on funding availability. Applications need to be submitted at least 1 month prior to the expected tour dates. Interested hosts can choose speakers from an online Speakers Bureau or invite a speaker of their choice. Preference is given to individuals and organizations hosting speakers that reach a broad audience and the general public. AVS tours are encouraged to span several days, allowing ample time for interactions with faculty, students, local media, and community members. Applications for both domestic and international visits will be considered. Applications for international visits should involve participation of more than one host organization and must include either a US-based speaker or a US-based organization. This is a small but important program that educates the public about Arctic issues. There have been 27 tours since 2007 that have impacted communities across the globe including: Gatineau, Quebec, Canada; St. Petersburg, Russia; Piscataway, New Jersey; Cordova, Alaska; Nuuk, Greenland; Elizabethtown, Pennsylvania; Oslo, Norway; Inari, Finland; Borgarnes, Iceland; San Francisco, California; and Wolcott, Vermont, to name a few. Tours have included lectures to K-12 schools, college and university students, tribal organizations, Boy Scout troops, science center and museum patrons, and the general public. There are approximately 300 attendees at each AVS tour; roughly 4,100 people have been reached since 2007. The expectations for each tour are extremely manageable. Hosts must submit a schedule of events and a tour summary to be posted online. Hosts must acknowledge the National Science Foundation Office of Polar Programs and ARCUS in all promotional materials. Hosts agree to send ARCUS photographs, fliers, and if possible a video of the main lecture. Host and speaker agree to collect data on the number of attendees in each audience to submit as part of a post-tour evaluation. The grants can generally cover all the expenses of a tour, depending on the location. A maximum of $2,000 will be provided for the travel-related expenses of a speaker on a domestic visit. A maximum of $2,500 will be provided for the travel-related expenses of a speaker on an international visit. Each speaker will receive an honorarium of $300.
Design and experimental evaluation of robust controllers for a two-wheeled robot
NASA Astrophysics Data System (ADS)
Kralev, J.; Slavov, Ts.; Petkov, P.
2016-11-01
The paper presents the design and experimental evaluation of two alternative μ-controllers for robust vertical stabilisation of a two-wheeled self-balancing robot. The controller design is based on models derived by identification from closed-loop experimental data. In the first design, a signal-based uncertainty representation obtained directly from the identification procedure is used, which leads to a controller of order 29. In the second design, the signal uncertainty is approximated by an input multiplicative uncertainty, which leads to a controller of order 50, subsequently reduced to 30. The performance of the two μ-controllers is compared with the performance of a conventional linear quadratic controller with a 17th-order Kalman filter. A proportional-integral controller for the rotational motion around the vertical axis is implemented as well. The control code is generated from Simulink® controller models and is embedded in a digital signal processor. Results from the simulation of the closed-loop system as well as experimental results obtained during the real-time implementation of the designed controllers are given. The theoretical investigation and experimental results confirm that the closed-loop system achieves robust performance with respect to the uncertainties related to the identified robot model.
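The conventional linear quadratic controller used for comparison above follows a standard design step: solve an algebraic Riccati equation for a state-feedback gain. The sketch below shows that step on a crude, made-up linearization of a two-wheeled balancing robot; the model matrices and weights are assumptions for illustration, not the identified model from the paper.

```python
# Minimal sketch (all numbers are assumptions, not the identified robot model):
# a linear-quadratic regulator for a toy "body tilt + wheel" linearization.
import numpy as np
from scipy.linalg import solve_continuous_are

# States: [tilt, tilt rate, wheel position, wheel speed]; input: motor torque.
A = np.array([[0., 1., 0., 0.],
              [25., 0., 0., 0.],      # unstable pendulum-like tilt dynamics
              [0., 0., 0., 1.],
              [-2., 0., 0., 0.]])
B = np.array([[0.], [-8.], [0.], [4.]])
Q = np.diag([100., 1., 10., 1.])      # penalize tilt most heavily
R = np.array([[0.1]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)       # state-feedback gain, u = -K x
print(np.round(K, 2))
closed_loop_poles = np.linalg.eigvals(A - B @ K)
print(np.round(closed_loop_poles.real, 2))   # all negative -> stabilized
```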
Vyzantiadis, Timoleon-Achilleas A; Johnson, Elizabeth M; Kibbler, Christopher C
2012-06-01
The identification of fungi relies mainly on morphological criteria. However, there is a need for robust and definitive phenotypic identification procedures in order to evaluate continuously evolving molecular methods. For the future, there is an emerging consensus that a combined (phenotypic and molecular) approach is more powerful for fungal identification, especially for moulds. Most of the procedures used for phenotypic identification are based on experience rather than comparative studies of effectiveness or performance and there is a need for standardisation among mycology laboratories. This review summarises and evaluates the evidence for the major existing phenotypic identification procedures for the predominant causes of opportunistic mould infection. We have concentrated mainly on Aspergillus, Fusarium and mucoraceous mould species, as these are the most important clinically and the ones for which there are the most molecular taxonomic data.
Co-Construction of Nonnative Speaker Identity in Cross-Cultural Interaction
ERIC Educational Resources Information Center
Park, Jae-Eun
2007-01-01
Informed by Conversation Analysis, this paper examines discursive practices through which nonnative speaker (NNS) identity is constituted in relation to native speaker (NS) identity in naturally occurring English conversations. Drawing on studies of social interaction that view identity as intrinsically a social, dialogic, negotiable entity, I…
Guest Speakers in School-Based Sexuality Education
ERIC Educational Resources Information Center
McRee, Annie-Laurie; Madsen, Nikki; Eisenberg, Marla E.
2014-01-01
This study, using data from a statewide survey (n = 332), examined teachers' practices regarding the inclusion of guest speakers to cover sexuality content. More than half of teachers (58%) included guest speakers. In multivariate analyses, teachers who taught high school, had professional preparation in health education, or who received…
Using Word Clouds to Teach about Speaking Style
ERIC Educational Resources Information Center
Perry, Lisa
2012-01-01
Good public speaking style requires, among other skills, "effective management of the resources of language." Good speakers choose language carefully to create credibility, emotional impact, and logical appeal. If a speaker's language is wishy-washy, dull, vague, or long-winded, the speaker appears less trustworthy. Audience distrust of a speaker…
Phase Asymmetries in Normophonic Speakers: Visual Judgments and Objective Findings
ERIC Educational Resources Information Center
Bonilha, Heather Shaw; Deliyski, Dimitar D.; Gerlach, Terri Treman
2008-01-01
Purpose: To ascertain the amount of phase asymmetry of the vocal fold vibration in normophonic speakers via visualization techniques and compare findings for habitual and pressed phonations. Method: Fifty-two normophonic speakers underwent stroboscopy and high-speed videoendoscopy (HSV). The HSV images were further processed into 4 visual…
Mitigating U.S. Undergraduates' Attitudes toward International Teaching Assistants
ERIC Educational Resources Information Center
Kang, Okim; Rubin, Donald; Lindemann, Stephanie
2015-01-01
Intelligibility problems between native speakers (NSs) and nonnative speakers (NNSs) of English are often attributed to some perceived inadequacy of the NNSs. This emphasis on the NNSs' role in successful communication is highly problematic, given that intelligibility is a negotiated process between speaker and listener. In some cases, NSs have…
Dysprosody and Stimulus Effects in Cantonese Speakers with Parkinson's Disease
ERIC Educational Resources Information Center
Ma, Joan K.-Y.; Whitehill, Tara; Cheung, Katherine S.-K.
2010-01-01
Background: Dysprosody is a common feature in speakers with hypokinetic dysarthria. However, speech prosody varies across different types of speech materials. This raises the question of what is the most appropriate speech material for the evaluation of dysprosody. Aims: To characterize the prosodic impairment in Cantonese speakers with…
Clear Speech Variants: An Acoustic Study in Parkinson's Disease
ERIC Educational Resources Information Center
Lam, Jennifer; Tjaden, Kris
2016-01-01
Purpose: The authors investigated how different variants of clear speech affect segmental and suprasegmental acoustic measures of speech in speakers with Parkinson's disease and a healthy control group. Method: A total of 14 participants with Parkinson's disease and 14 control participants served as speakers. Each speaker produced 18 different…
The Interaction of Lexical Characteristics and Speech Production in Parkinson's Disease
ERIC Educational Resources Information Center
Chiu, Yi-Fang; Forrest, Karen
2017-01-01
Purpose: This study sought to investigate the interaction of speech movement execution with higher order lexical parameters. The authors examined how lexical characteristics affect speech output in individuals with Parkinson's disease (PD) and healthy control (HC) speakers. Method: Twenty speakers with PD and 12 healthy speakers read sentences…
Native Reactions to Non-Native Speech: A Review of Empirical Research.
ERIC Educational Resources Information Center
Eisenstein, Miriam
1983-01-01
Recent research on native speakers' reactions to nonnative speech that views listeners, speakers, and language from a variety of perspectives using both objective and subjective research paradigms is reviewed. Studies of error gravity, relative intelligibility of language samples, the role of accent, speakers' characteristics, and context in which…