Validation of Automated Scoring of Oral Reading
ERIC Educational Resources Information Center
Balogh, Jennifer; Bernstein, Jared; Cheng, Jian; Van Moere, Alistair; Townshend, Brent; Suzuki, Masanori
2012-01-01
A two-part experiment is presented that validates a new measurement tool for scoring oral reading ability. Data collected by the U.S. government in a large-scale literacy assessment of adults were analyzed by a system called VersaReader that uses automatic speech recognition and speech processing technologies to score oral reading fluency. In the…
A Survey of Speech Programs in Community Colleges.
ERIC Educational Resources Information Center
Meyer, Arthur C.
The rapid growth of community colleges in the last decade resulted in large numbers of students enrolled in programs previously unavailable to them in a single comprehensive institution. The purpose of this study was to gather and analyze data to provide information about the speech programs that community colleges created or expanded as a result…
Long short-term memory for speaker generalization in supervised speech separation
Chen, Jitong; Wang, DeLiang
2017-01-01
Speech separation can be formulated as learning to estimate a time-frequency mask from acoustic features extracted from noisy speech. For supervised speech separation, generalization to unseen noises and unseen speakers is a critical issue. Although deep neural networks (DNNs) have been successful in noise-independent speech separation, DNNs are limited in modeling a large number of speakers. To improve speaker generalization, a separation model based on long short-term memory (LSTM) is proposed, which naturally accounts for temporal dynamics of speech. Systematic evaluation shows that the proposed model substantially outperforms a DNN-based model on unseen speakers and unseen noises in terms of objective speech intelligibility. Analyzing LSTM internal representations reveals that LSTM captures long-term speech contexts. It is also found that the LSTM model is more advantageous for low-latency speech separation: even without future frames, it performs better than the DNN model with future frames. The proposed model represents an effective approach for speaker- and noise-independent speech separation. PMID:28679261
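To make the masking formulation above concrete, here is a minimal sketch (not the authors' code) of an LSTM that maps noisy-speech feature frames to a time-frequency mask; the feature dimension, layer sizes, and mean-squared-error training against an ideal mask are illustrative assumptions.

```python
# Hedged sketch: LSTM-based time-frequency mask estimation for speech separation.
import torch
import torch.nn as nn

class MaskEstimator(nn.Module):
    def __init__(self, n_features=257, hidden=256, layers=2):
        super().__init__()
        # Unidirectional LSTM keeps the model causal for low-latency use.
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                  # x: (batch, frames, n_features)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h))  # mask in [0, 1] per T-F unit

model = MaskEstimator()
noisy = torch.randn(4, 100, 257)           # dummy batch of feature frames
mask = model(noisy)                         # multiply with the noisy spectrogram
loss = nn.functional.mse_loss(mask, torch.rand_like(mask))  # vs. an ideal mask
loss.backward()
```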
Accurate visible speech synthesis based on concatenating variable length motion capture data.
Ma, Jiyong; Cole, Ron; Pellom, Bryan; Ward, Wayne; Wise, Barbara
2006-01-01
We present a novel approach to synthesizing accurate visible speech based on searching and concatenating optimal variable-length units in a large corpus of motion capture data. Based on a set of visual prototypes selected on a source face and a corresponding set designated for a target face, we propose a machine learning technique to automatically map the facial motions observed on the source face to the target face. In order to model the long distance coarticulation effects in visible speech, a large-scale corpus that covers the most common syllables in English was collected, annotated and analyzed. For any input text, a search algorithm to locate the optimal sequences of concatenated units for synthesis is described. A new algorithm to adapt lip motions from a generic 3D face model to a specific 3D face model is also proposed. A complete, end-to-end visible speech animation system is implemented based on the approach. This system is currently used in more than 60 kindergarten through third grade classrooms to teach students to read using a lifelike conversational animated agent. To evaluate the quality of the visible speech produced by the animation system, both subjective evaluation and objective evaluation are conducted. The evaluation results show that the proposed approach is accurate and powerful for visible speech synthesis.
Pathological speech signal analysis and classification using empirical mode decomposition.
Kaleem, Muhammad; Ghoraani, Behnaz; Guergachi, Aziz; Krishnan, Sridhar
2013-07-01
Automated classification of normal and pathological speech signals can provide an objective and accurate mechanism for pathological speech diagnosis, and is an active area of research. A large part of this research is based on analysis of acoustic measures extracted from sustained vowels. However, sustained vowels do not reflect real-world attributes of voice as effectively as continuous speech, which can take into account important attributes of speech such as rapid voice onset and termination, changes in voice frequency and amplitude, and sudden discontinuities in speech. This paper presents a methodology based on empirical mode decomposition (EMD) for classification of continuous normal and pathological speech signals obtained from a well-known database. EMD is used to decompose randomly chosen portions of speech signals into intrinsic mode functions, which are then analyzed to extract meaningful temporal and spectral features, including true instantaneous features which can capture discriminative information in signals hidden at local time-scales. A total of six features are extracted, and a linear classifier is used with the feature vector to classify continuous speech portions obtained from a database consisting of 51 normal and 161 pathological speakers. A classification accuracy of 95.7% is obtained, thus demonstrating the effectiveness of the methodology.
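A hedged sketch of the pipeline the abstract describes: EMD into intrinsic mode functions, simple temporal/spectral features per IMF, then a linear classifier. PyEMD is assumed as the decomposition library, and the two features per IMF (log energy and mean instantaneous frequency via the Hilbert transform) are illustrative stand-ins for the paper's six features.

```python
# Hedged sketch: EMD -> per-IMF features -> linear classifier.
import numpy as np
from PyEMD import EMD                       # assumed library choice
from scipy.signal import hilbert
from sklearn.linear_model import LogisticRegression

def emd_features(x, fs, n_imfs=3):
    imfs = EMD().emd(x)[:n_imfs]            # intrinsic mode functions
    feats = []
    for imf in imfs:
        analytic = hilbert(imf)
        inst_phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)  # instantaneous frequency
        feats += [np.log(np.sum(imf ** 2) + 1e-12), np.mean(inst_freq)]
    return np.array(feats)

# Toy data; y: 0 = normal, 1 = pathological.
fs = 25000
rng = np.random.default_rng(0)
X = np.array([emd_features(rng.standard_normal(2048), fs) for _ in range(20)])
y = rng.integers(0, 2, size=20)
clf = LogisticRegression().fit(X, y)        # linear classifier, as in the paper
```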
Fidelity of Automatic Speech Processing for Adult and Child Talker Classifications.
VanDam, Mark; Silbert, Noah H
2016-01-01
Automatic speech processing (ASP) has recently been applied to very large datasets of naturalistically collected, daylong recordings of child speech via an audio recorder worn by young children. The system developed by the LENA Research Foundation analyzes children's speech for research and clinical purposes, with special focus on identifying and tagging family speech dynamics and the at-home acoustic environment from the auditory perspective of the child. A primary issue for researchers, clinicians, and families using the Language ENvironment Analysis (LENA) system is to what degree the segment labels are valid. This classification study evaluates the performance of the computer ASP output against 23 trained human judges who made about 53,000 judgements of classification of segments tagged by the LENA ASP. Results indicate performance consistent with modern ASP systems such as those using HMM methods, with acoustic characteristics of fundamental frequency and segment duration most important for both human and machine classifications. Results are likely to be important for interpreting and improving ASP output.
Analysis of 3-D Tongue Motion From Tagged and Cine Magnetic Resonance Images
Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.
2016-01-01
Purpose Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during speech in order to estimate 3-dimensional tissue displacement and deformation over time. Method The method involves computing 2-dimensional motion components using a standard tag-processing method called harmonic phase, constructing superresolution tongue volumes using cine magnetic resonance images, segmenting the tongue region using a random-walker algorithm, and estimating 3-dimensional tongue motion using an incompressible deformation estimation algorithm. Results Evaluation of the method is presented with a control group and a group of people who had received a glossectomy, both carrying out a speech task. A 2-step principal-components analysis is then used to reveal the unique motion patterns of the subjects. Azimuth motion angles and motion on the mirrored hemi-tongues are analyzed. Conclusion Tests of the method with a varied collection of subjects show its capability of capturing patient motion patterns and indicate its potential value in future speech studies. PMID:27295428
The "listener" in the modeling of speech prosody
NASA Astrophysics Data System (ADS)
Kohler, Klaus J.
2004-05-01
Autosegmental-metrical modeling of speech prosody is principally speaker-oriented. The production of pitch patterns, in systematic lab speech experiments as well as in spontaneous speech corpora, is analyzed in f0 tracings, from which sequences of H(igh) and L(ow) are abstracted. The perceptual relevance of these pitch categories in the transmission from speakers to listeners is largely not conceptualized; thus their modeling in speech communication lacks an essential component. In the metalinguistic task of labeling speech data with the annotation system ToBI, the "listener" plays a subordinate role as well: H and L, being suggestive of signal values, are allocated with reference to f0 curves and little or no concern for perceptual classification by the trained labeler. The seriousness of this theoretical gap in the modeling of speech prosody is demonstrated by experimental data concerning f0-peak alignment. A number of papers in JASA have dealt with this topic from the standpoint of synchronizing f0 with the vocal tract time course in acoustic output. However, perceptual experiments within the Kiel intonation model show that "early," "medial," and "late" peak alignments need to be defined perceptually and that in doing so microprosodic variation has to be filtered out from the surface signal.
NASA Technical Reports Server (NTRS)
1973-01-01
The users manual for the word recognition computer program contains flow charts of the logical diagram, the memory map for templates, the speech analyzer card arrangement, minicomputer input/output routines, and assembly language program listings.
Multilevel Analysis in Analyzing Speech Data
ERIC Educational Resources Information Center
Guddattu, Vasudeva; Krishna, Y.
2011-01-01
The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…
Classification Influence of Features on Given Emotions and Its Application in Feature Selection
NASA Astrophysics Data System (ADS)
Xing, Yin; Chen, Chuang; Liu, Li-Long
2018-04-01
In order to solve the problem that there is a large amount of redundant data in high-dimensional speech emotion features, we deeply analyze the extracted speech emotion features and select the better ones. Firstly, a given emotion is classified by each feature. Secondly, the recognition rates are ranked in descending order. Then, the optimal threshold over features is determined by a recognition-rate criterion. Finally, the better features are obtained. When applied to the Berlin and Chinese emotional data sets, the experimental results show that the feature selection method outperforms the other traditional methods.
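The selection scheme reads as: score each feature by its single-feature recognition rate, sort descending, keep those above a threshold. A minimal sketch under those assumptions follows (a decision stump stands in for whatever per-feature classifier the authors used):

```python
# Hedged sketch: rank features by single-feature recognition rate, keep the best.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def select_features(X, y, threshold=0.6):
    rates = []
    for j in range(X.shape[1]):
        clf = DecisionTreeClassifier(max_depth=1)          # per-feature classifier
        rate = cross_val_score(clf, X[:, [j]], y, cv=5).mean()
        rates.append((rate, j))
    rates.sort(reverse=True)                               # descending recognition rate
    return [j for rate, j in rates if rate >= threshold]   # features above threshold

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 40))                         # 40 candidate emotion features
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)              # toy labels
print(select_features(X, y))                               # indices of retained features
```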
Speech and motor disturbances in Rett syndrome.
Bashina, V M; Simashkova, N V; Grachev, V V; Gorbachevskaya, N L
2002-01-01
Rett syndrome is a severe, genetically determined disease of early childhood which produces a defined clinical phenotype in girls. The main clinical manifestations include lesions affecting speech functions, involving both expressive and receptive speech, as well as motor functions, producing apraxia of the arms and profound abnormalities of gait in the form of ataxia-apraxia. Most investigators note that patients have variability in the severity of derangement to large motor acts and in the damage to fine hand movements and speech functions. The aims of the present work were to study disturbances of speech and motor functions over 2-5 years in 50 girls aged 12 months to 14 years with Rett syndrome and to analyze the correlations between these disturbances. The results of comparing clinical data and EEG traces supported the stepwise involvement of frontal and parietal-temporal cortical structures in the pathological process. The ability to organize speech and motor activity is affected first, with subsequent development of lesions to gnostic functions, which are in turn followed by derangement of subcortical structures and the cerebellum and later by damage to structures in the spinal cord. A clear correlation was found between the severity of lesions to motor and speech functions and neurophysiological data: the higher the level of preservation of elements of speech and motor functions, the smaller were the contributions of theta activity and the greater the contributions of alpha and beta activities to the EEG. The possible pathogenetic mechanisms underlying the motor and speech disturbances in Rett syndrome are discussed.
Articulatory mediation of speech perception: a causal analysis of multi-modal imaging data.
Gow, David W; Segawa, Jennifer A
2009-02-01
The inherent confound between the organization of articulation and the acoustic-phonetic structure of the speech signal makes it exceptionally difficult to evaluate the competing claims of motor and acoustic-phonetic accounts of how listeners recognize coarticulated speech. Here we use Granger causality analyses of high spatiotemporal resolution neural activation data derived from the integration of magnetic resonance imaging, magnetoencephalography and electroencephalography, to examine the role of lexical and articulatory mediation in listeners' ability to use phonetic context to compensate for place assimilation. Listeners heard two-word phrases such as pen pad and then saw two pictures, from which they had to select the one that depicted the phrase. Assimilation, lexical competitor environment and the phonological validity of assimilation context were all manipulated. Behavioral data showed an effect of context on the interpretation of assimilated segments. Analysis of 40 Hz gamma phase locking patterns identified a large distributed neural network including 16 distinct regions of interest (ROIs) spanning portions of both hemispheres in the first 200 ms of post-assimilation context. Granger analyses of individual conditions showed differing patterns of causal interaction between ROIs during this interval, with hypothesized lexical and articulatory structures and pathways driving phonetic activation in the posterior superior temporal gyrus in assimilation conditions, but not in phonetically unambiguous conditions. These results lend strong support to the motor theory of speech perception, and clarify the role of lexical mediation in the phonetic processing of assimilated speech.
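For readers unfamiliar with the statistical machinery, a toy two-series Granger-causality example using statsmodels is shown below; the study's actual analysis over 16 ROIs and gamma phase-locking data is far richer, so this only illustrates the test itself.

```python
# Toy illustration of Granger-causal analysis between two activation time series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 500
source = rng.standard_normal(n)                              # e.g., an articulatory ROI
target = np.roll(source, 3) + 0.5 * rng.standard_normal(n)   # lagged influence + noise

# Column order convention: does the 2nd column Granger-cause the 1st?
data = np.column_stack([target, source])
results = grangercausalitytests(data, maxlag=5)              # F-tests per lag
```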
Speech Comprehension Difficulties in Chronic Tinnitus and Its Relation to Hyperacusis
Vielsmeier, Veronika; Kreuzer, Peter M.; Haubner, Frank; Steffens, Thomas; Semmler, Philipp R. O.; Kleinjung, Tobias; Schlee, Winfried; Langguth, Berthold; Schecklmann, Martin
2016-01-01
Objective: Many tinnitus patients complain about difficulties regarding speech comprehension. In spite of the high clinical relevance little is known about underlying mechanisms and predisposing factors. Here, we performed an exploratory investigation in a large sample of tinnitus patients to (1) estimate the prevalence of speech comprehension difficulties among tinnitus patients, to (2) compare subjective reports of speech comprehension difficulties with behavioral measurements in a standardized speech comprehension test and to (3) explore underlying mechanisms by analyzing the relationship between speech comprehension difficulties and peripheral hearing function (pure tone audiogram), as well as with co-morbid hyperacusis as a central auditory processing disorder. Subjects and Methods: Speech comprehension was assessed in 361 tinnitus patients presenting between 07/2012 and 08/2014 at the Interdisciplinary Tinnitus Clinic at the University of Regensburg. The assessment included standard audiological assessments (pure tone audiometry, tinnitus pitch, and loudness matching), the Goettingen sentence test (in quiet) for speech audiometric evaluation, two questions about hyperacusis, and two questions about speech comprehension in quiet and noisy environments (“How would you rate your ability to understand speech?”; “How would you rate your ability to follow a conversation when multiple people are speaking simultaneously?”). Results: Subjectively-reported speech comprehension deficits are frequent among tinnitus patients, especially in noisy environments (cocktail party situation). 74.2% of all investigated patients showed disturbed speech comprehension (indicated by values above 21.5 dB SPL in the Goettingen sentence test). Subjective speech comprehension complaints (both for general and in noisy environment) were correlated with hearing level and with audiologically-assessed speech comprehension ability. In contrast, co-morbid hyperacusis was only correlated with speech comprehension difficulties in noisy environments, but not with speech comprehension difficulties in general. Conclusion: Speech comprehension deficits are frequent among tinnitus patients. Whereas speech comprehension deficits in quiet environments are primarily due to peripheral hearing loss, speech comprehension deficits in noisy environments are related to both peripheral hearing loss and dysfunctional central auditory processing. Disturbed speech comprehension in noisy environments might be modulated by a central inhibitory deficit. In addition, attentional and cognitive aspects may play a role. PMID:28018209
Methods for eliciting, annotating, and analyzing databases for child speech development.
Beckman, Mary E; Plummer, Andrew R; Munson, Benjamin; Reidy, Patrick F
2017-09-01
Methods from automatic speech recognition (ASR), such as segmentation and forced alignment, have facilitated the rapid annotation and analysis of very large adult speech databases and databases of caregiver-infant interaction, enabling advances in speech science that were unimaginable just a few decades ago. This paper centers on two main problems that must be addressed in order to have analogous resources for developing and exploiting databases of young children's speech. The first problem is to understand and appreciate the differences between adult and child speech that cause ASR models developed for adult speech to fail when applied to child speech. These differences include the fact that children's vocal tracts are smaller than those of adult males and also changing rapidly in size and shape over the course of development, leading to between-talker variability across age groups that dwarfs the between-talker differences between adult men and women. Moreover, children do not achieve fully adult-like speech motor control until they are young adults, and their vocabularies and phonological proficiency are developing as well, leading to considerably more within-talker variability as well as more between-talker variability. The second problem then is to determine what annotation schemas and analysis techniques can most usefully capture relevant aspects of this variability. Indeed, standard acoustic characterizations applied to child speech reveal that adult-centered annotation schemas fail to capture phenomena such as the emergence of covert contrasts in children's developing phonological systems, while also revealing children's nonuniform progression toward community speech norms as they acquire the phonological systems of their native languages. Both problems point to the need for more basic research into the growth and development of the articulatory system (as well as of the lexicon and phonological system) that is oriented explicitly toward the construction of age-appropriate computational models.
Motor laterality as an indicator of speech laterality.
Flowers, Kenneth A; Hudson, John M
2013-03-01
The determination of speech laterality, especially where it is anomalous, is both a theoretical issue and a practical problem for brain surgery. Handedness is commonly thought to be related to speech representation, but exactly how is not clearly understood. This investigation analyzed handedness by preference rating and performance on a reliable task of motor laterality in 34 patients undergoing a Wada test, to see if they could provide an indicator of speech laterality. Hand usage preference ratings divided patients into left, right, and mixed in preference. Between-hand differences in movement time on a pegboard task determined motor laterality. Results were correlated (χ2) with speech representation as determined by a standard Wada test. It was found that patients whose between-hand difference in speed on the motor task was small or inconsistent were the ones whose Wada test speech representation was likely to be ambiguous or anomalous, whereas all those with a consistently large between-hand difference showed clear unilateral speech representation in the hemisphere controlling the better hand (χ2 = 10.45, df = 1, p < .01, η2 = 0.55). This relationship prevailed across hand preference and level of skill in the hands themselves. We propose that motor and speech laterality are related where they both involve a central control of motor output sequencing and that a measure of that aspect of the former will indicate the likely representation of the latter. A between-hand measure of motor laterality based on such a measure may indicate the possibility of anomalous speech representation.
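The reported association can be illustrated with a chi-square test on a 2x2 contingency table; the counts below are invented for illustration, not the study's data.

```python
# Toy contingency analysis: motor laterality (large vs. small/inconsistent
# between-hand difference) against Wada speech laterality (clear vs. anomalous).
import numpy as np
from scipy.stats import chi2_contingency

#                  clear speech lat.   ambiguous/anomalous
table = np.array([[18,                 1],    # large hand difference
                  [ 5,                10]])   # small/inconsistent difference
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, df={dof}, p={p:.4f}")
```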
[Speech fluency developmental profile in Brazilian Portuguese speakers].
Martins, Vanessa de Oliveira; Andrade, Claudia Regina Furquim de
2008-01-01
Speech fluency varies from one individual to the next, whether fluent or stuttering, depending on several factors. Studies that investigate the influence of age on fluency patterns have been identified; however, these differences were investigated in isolated age groups. Studies about life-span fluency variations were not found. To verify the speech fluency developmental profile, speech samples of 594 fluent participants of both genders, with ages between 2:0 and 99:11 years, speakers of the Brazilian Portuguese language, were analyzed. Participants were grouped as follows: pre-scholars, scholars, early adolescence, late adolescence, adults, and elderly people. Speech samples were analyzed according to the Speech Fluency Profile variables and were compared regarding: typology of speech disruptions (typical and less typical), speech rate (words and syllables per minute), and frequency of speech disruptions (percentage of speech discontinuity). Although isolated variations were identified, overall there was no significant difference between the age groups for the speech disruption indexes (typical and less typical speech disruptions and percentage of speech discontinuity). Significant differences were observed between the groups when considering speech rate. The development of the neurolinguistic system for speech fluency, in terms of speech disruptions, seems to stabilize itself during the first years of life, presenting no alterations during the life span. Indexes of speech rate present variations across the age groups, indicating patterns of acquisition, development, stabilization, and degeneration.
Speech Patterns and Racial Wage Inequality
ERIC Educational Resources Information Center
Grogger, Jeffrey
2011-01-01
Speech patterns differ substantially between whites and many African Americans. I collect and analyze speech data to understand the role that speech may play in explaining racial wage differences. Among blacks, speech patterns are highly correlated with measures of skill such as schooling and AFQT scores. They are also highly correlated with the…
Advanced Persuasive Speaking, English, Speech: 5114.112.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
Developed as a high school quinmester unit on persuasive speaking, this guide provides the teacher with teaching strategies for a course which analyzes speeches from "Vital Speeches of the Day," political speeches, TV commercials, and other types of speeches. Practical use of persuasive methods for school, community, county, state, and…
Analysis of 3-D Tongue Motion from Tagged and Cine Magnetic Resonance Images
ERIC Educational Resources Information Center
Xing, Fangxu; Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.
2016-01-01
Purpose: Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during…
ERIC Educational Resources Information Center
Gelfand, Jessica T.; Christie, Robert E.; Gelfand, Stanley A.
2014-01-01
Purpose: Speech recognition may be analyzed in terms of recognition probabilities for perceptual wholes (e.g., words) and parts (e.g., phonemes), where j or the j-factor reveals the number of independent perceptual units required for recognition of the whole (Boothroyd, 1968b; Boothroyd & Nittrouer, 1988; Nittrouer & Boothroyd, 1990). For…
Stereotype and Tradition: White Folklore About Blacks (Volumes 1 and 2).
ERIC Educational Resources Information Center
Rosenberg, Neil Vandraegen
The forms of white folklore about blacks in the United States are described. This folklore appears in most folklore genres, including folk speech, proverbs, riddles, beliefs, songs and narratives. Using texts submitted to various folklore archives over a 20-year period, this study analyzes the content and context of a large group of jokes and…
ERIC Educational Resources Information Center
Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.
2016-01-01
Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…
Goldrick, Matthew; Keshet, Joseph; Gustafson, Erin; Heller, Jordana; Needle, Jeremy
2016-04-01
Traces of the cognitive mechanisms underlying speaking can be found within subtle variations in how we pronounce sounds. While speech errors have traditionally been seen as categorical substitutions of one sound for another, acoustic/articulatory analyses show they partially reflect the intended sound. When "pig" is mispronounced as "big," the resulting /b/ sound differs from correct productions of "big," moving towards the intended "pig" and revealing the role of graded sound representations in speech production. Investigating the origins of such phenomena requires detailed estimation of speech sound distributions; this has been hampered by reliance on subjective, labor-intensive manual annotation. Computational methods can address these issues by providing for objective, automatic measurements. We develop a novel high-precision computational approach, based on a set of machine learning algorithms, for measurement of elicited speech. The algorithms are trained on existing manually labeled data to detect and locate linguistically relevant acoustic properties with high accuracy. Our approach is robust, is designed to handle mis-productions, and overall matches the performance of expert coders. It allows us to analyze a very large dataset of speech errors (containing far more errors than the total in the existing literature), illuminating properties of speech sound distributions previously impossible to reliably observe. We argue that this provides novel evidence that two sources both contribute to deviations in speech errors: planning processes specifying the targets of articulation and articulatory processes specifying the motor movements that execute this plan. These findings illustrate how a much richer picture of speech provides an opportunity to gain novel insights into language processing.
Analyzing crowdsourced ratings of speech-based take-over requests for automated driving.
Bazilinskyy, P; de Winter, J C F
2017-10-01
Take-over requests in automated driving should fit the urgency of the traffic situation. The robustness of various published research findings on the valuations of speech-based warning messages is unclear. This research aimed to establish how people value speech-based take-over requests as a function of speech rate, background noise, spoken phrase, and speaker's gender and emotional tone. By means of crowdsourcing, 2669 participants from 95 countries listened to a random 10 out of 140 take-over requests, and rated each take-over request on urgency, commandingness, pleasantness, and ease of understanding. Our results replicate several published findings, in particular that an increase in speech rate results in a monotonic increase of perceived urgency. The female voice was easier to understand than a male voice when there was a high level of background noise, a finding that contradicts the literature. Moreover, a take-over request spoken with Indian accent was found to be easier to understand by participants from India than by participants from other countries. Our results replicate effects in the literature regarding speech-based warnings, and shed new light on effects of background noise, gender, and nationality. The results may have implications for the selection of appropriate take-over requests in automated driving. Additionally, our study demonstrates the promise of crowdsourcing for testing human factors and ergonomics theories with large sample sizes.
Guidi, Andrea; Salvi, Sergio; Ottaviano, Manuel; Gentili, Claudio; Bertschy, Gilles; de Rossi, Danilo; Scilingo, Enzo Pasquale; Vanello, Nicola
2015-11-06
Bipolar disorder is one of the most common mood disorders characterized by large and invalidating mood swings. Several projects focus on the development of decision support systems that monitor and advise patients, as well as clinicians. Voice monitoring and speech signal analysis can be exploited to reach this goal. In this study, an Android application was designed for analyzing running speech using a smartphone device. The application can record audio samples and estimate speech fundamental frequency, F0, and its changes. F0-related features are estimated locally on the smartphone, with some advantages with respect to remote processing approaches in terms of privacy protection and reduced upload costs. The raw features can be sent to a central server and further processed. The quality of the audio recordings, algorithm reliability and performance of the overall system were evaluated in terms of voiced segment detection and features estimation. The results demonstrate that mean F0 from each voiced segment can be reliably estimated, thus describing prosodic features across the speech sample. Instead, features related to F0 variability within each voiced segment performed poorly. A case study performed on a bipolar patient is presented.
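A desktop sketch of the per-segment F0 summary the app computes on-device (librosa's pYIN tracker is assumed as a stand-in for the app's own estimator): detect voiced frames, then take the mean F0 over each contiguous voiced run.

```python
# Hedged sketch: per-voiced-segment mean F0, the feature the abstract
# reports as reliable (within-segment F0 variability was not).
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))        # stand-in for a speech sample
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"), sr=sr)

# Mean F0 per contiguous voiced run.
runs = np.split(np.arange(len(f0)), np.where(np.diff(voiced.astype(int)))[0] + 1)
segment_means = [np.nanmean(f0[r]) for r in runs if voiced[r[0]]]
print(segment_means[:5])
```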
Speech to Faculty of Harvard-Boston Summer Program at Preplanning Meetings.
ERIC Educational Resources Information Center
HAIZLIP, HAROLD
The area to which this group of teachers will be sent is characterized by its large influx of poor Negro families with poor cultural backgrounds. Some of the problems of this area will require a broad-scale, carefully analyzed, and planned attack within and by public schools. Every school has a hidden or subliminal curriculum which teaches, in…
ERIC Educational Resources Information Center
Kleinert, Jane O'Regan; Harrison, Elizabeth; Mills, Karen Rose; Dueppen, Brittney M.; Trailor, Ann Michelle
2014-01-01
The purpose of this study was to analyze a large database of self-selected/self-determined goals developed by students aged 7-21 who had developmental disabilities. Data included 288 goals developed by 205 students involved in the Kentucky Youth Advocacy Project. Students, teachers, speech-language pathologists, and families utilized the Self…
Effects and modeling of phonetic and acoustic confusions in accented speech.
Fung, Pascale; Liu, Yi
2005-11-01
Accented speech recognition is more challenging than standard speech recognition due to the effects of phonetic and acoustic confusions. Phonetic confusion in accented speech occurs when an expected phone is pronounced as a different one, which leads to erroneous recognition. Acoustic confusion occurs when the pronounced phone is found to lie acoustically between two baseform models and can be equally recognized as either one. We propose that it is necessary to analyze and model these confusions separately in order to improve accented speech recognition without degrading standard speech recognition. Since low phonetic confusion units in accented speech do not give rise to automatic speech recognition errors, we focus on analyzing and reducing phonetic and acoustic confusability under high phonetic confusion conditions. We propose using likelihood ratio test to measure phonetic confusion, and asymmetric acoustic distance to measure acoustic confusion. Only accent-specific phonetic units with low acoustic confusion are used in an augmented pronunciation dictionary, while phonetic units with high acoustic confusion are reconstructed using decision tree merging. Experimental results show that our approach is effective and superior to methods modeling phonetic confusion or acoustic confusion alone in accented speech, with a significant 5.7% absolute WER reduction, without degrading standard speech recognition.
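A hedged sketch of the likelihood-ratio idea for measuring phonetic confusion: score frames of a pronounced phone against the expected baseform model and a confusable alternative, and flag high confusion when the ratio favors the alternative. The GMM phone models and the threshold below are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: likelihood ratio between a baseform phone model and a
# confusable alternative, computed over frames of an accented phone.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
baseform = GaussianMixture(4, random_state=0).fit(rng.normal(0, 1, (500, 13)))
confusable = GaussianMixture(4, random_state=0).fit(rng.normal(1, 1, (500, 13)))

frames = rng.normal(0.8, 1, (40, 13))              # MFCC-like frames of the spoken phone
llr = confusable.score(frames) - baseform.score(frames)  # mean log-likelihood ratio
print("high phonetic confusion" if llr > 0.5 else "low phonetic confusion")
```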
Inner Speech's Relationship with Overt Speech in Poststroke Aphasia
ERIC Educational Resources Information Center
Stark, Brielle C.; Geva, Sharon; Warburton, Elizabeth A.
2017-01-01
Purpose: Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech…
Collaborative Signaling of Informational Structures by Dynamic Speech Rate.
ERIC Educational Resources Information Center
Koiso, Hanae; Shimojima, Atsushi; Katagiri, Yasuhiro
1998-01-01
Investigated the functions of dynamic speech rates as contextualization cues in conversational Japanese, examining five spontaneous task-oriented dialogs and analyzing the potential of speech-rate changes in signaling the structure of the information being exchanged. Results found a correlation between speech decelerations and the openings of new…
A Clinician Survey of Speech and Non-Speech Characteristics of Neurogenic Stuttering
ERIC Educational Resources Information Center
Theys, Catherine; van Wieringen, Astrid; De Nil, Luc F.
2008-01-01
This study presents survey data on 58 Dutch-speaking patients with neurogenic stuttering following various neurological injuries. Stroke was the most prevalent cause of stuttering in our patients, followed by traumatic brain injury, neurodegenerative diseases, and other causes. Speech and non-speech characteristics were analyzed separately for…
Analysis of Factors Affecting System Performance in the ASpIRE Challenge
2015-12-13
This paper analyzes factors affecting system performance in the ASpIRE (Automatic Speech recognition In Reverberant Environments) challenge. In particular, overall word error rate (WER) of the solver systems is analyzed as a function of room, distance between talker and microphone, and microphone type. We also analyze speech activity detection performance. This analysis will inform the design of future challenges and provide insight into the efficacy of current solutions addressing noisy reverberant speech.
A software tool for analyzing multichannel cochlear implant signals.
Lai, Wai Kong; Bögli, Hans; Dillier, Norbert
2003-10-01
A useful and convenient means to analyze the radio frequency (RF) signals being sent by a speech processor to a cochlear implant would be to actually capture and display them with appropriate software. This is particularly useful for development or diagnostic purposes. sCILab (Swiss Cochlear Implant Laboratory) is such a PC-based software tool intended for the Nucleus family of Multichannel Cochlear Implants. Its graphical user interface provides a convenient and intuitive means for visualizing and analyzing the signals encoding speech information. Both numerical and graphic displays are available for detailed examination of the captured CI signals, as well as an acoustic simulation of these CI signals. sCILab has been used in the design and verification of new speech coding strategies, and has also been applied as an analytical tool in studies of how different parameter settings of existing speech coding strategies affect speech perception. As a diagnostic tool, it is also useful for troubleshooting problems with the external equipment of the cochlear implant systems.
Dias, Roberta Freitas; Melo, Roberta Michelon; Mezzomo, Carolina Lisbôa; Mota, Helena Bolli
2013-01-01
To analyze the possible relationship between the awareness of one's own speech disorder and some aspects of the phonological system, such as the number and the type of changed distinctive features, as well as the interaction between the severity of the disorder and the non-specification of distinctive features. The analyzed group comprised 23 children with a diagnosis of speech disorder, aged 5:0 to 7:7. The speech data were analyzed through the Distinctive Features Analysis and classified by the Percentage of Correct Consonants. The Awareness of One's Own Speech Disorder test was also applied. The children were separated into two groups: with awareness of their own speech disorder established (more than 50% of correct identification) and without awareness of their own speech disorder established (less than 50% of correct identification). Finally, the variables of this research were submitted to analysis using descriptive and inferential statistics. The type of changed distinctive features did not differ between the groups, nor did the total of changed features or the severity of the disorder. However, a correlation between the severity of the disorder and the non-specification of distinctive features was verified, because the more severe disorders involve more changes in these linguistic variables. The awareness of one's own speech disorder does not seem to be directly influenced by the type or the number of changed distinctive features, nor by the severity of the speech disorder. Moreover, the greater the severity of the phonological disorder, the greater the number of changed distinctive features.
Ictal speech and language dysfunction in adult epilepsy: Clinical study of 95 seizures.
Dussaule, C; Cauquil, C; Flamand-Roze, C; Gagnepain, J-P; Bouilleret, V; Denier, C; Masnou, P
2017-04-01
To analyze the semiological characteristics of the language and speech disorders arising during epileptic seizures, and to describe the patterns of language and speech disorders that can predict laterality of the epileptic focus. This study retrospectively analyzed 95 consecutive videos of seizures with language and/or speech disorders in 44 patients admitted for diagnostic video-EEG monitoring. Laterality of the epileptic focus was defined according to electro-clinical correlation studies and structural and functional neuroimaging findings. Language and speech disorders were analyzed by a neurologist and a speech therapist blinded to these data. Language and/or speech disorders were subdivided into eight dynamic patterns: pure anterior aphasia; anterior aphasia and vocal; anterior aphasia and "arthria"; pure posterior aphasia; posterior aphasia and vocal; pure vocal; vocal and arthria; and pure arthria. The epileptic focus was in the left hemisphere in more than 4/5 of seizures presenting with pure anterior aphasia or pure posterior aphasia patterns, while discharges originated in the right hemisphere in almost 2/3 of seizures presenting with a pure vocal pattern. No laterality value was found for the other patterns. Classification of the language and speech disorders arising during epileptic seizures into dynamic patterns may be useful for the optimal analysis of anatomo-electro-clinical correlations. In addition, our research has led to the development of standardized tests for analyses of language and speech disorders arising during seizures that can be conducted during video-EEG sessions.
ERIC Educational Resources Information Center
Iuzzini-Seigel, Jenya; Hogan, Tiffany P.; Green, Jordan R.
2017-01-01
Purpose: The current research sought to determine (a) if speech inconsistency is a core feature of childhood apraxia of speech (CAS) or if it is driven by comorbid language impairment that affects a large subset of children with CAS and (b) if speech inconsistency is a sensitive and specific diagnostic marker that can differentiate between CAS and…
Ethnography of Communication: Cultural Codes and Norms.
ERIC Educational Resources Information Center
Carbaugh, Donal
The primary tasks of the ethnographic researcher are to discover, describe, and comparatively analyze different speech communities' ways of speaking. Two general abstractions occurring in ethnographic analyses are normative and cultural. Communicative norms are formulated in analyzing and explaining the "patterned use of speech."…
Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition
Rigoulot, Simon; Wassiliwizky, Eugen; Pell, Marc D.
2013-01-01
Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and Kotz, 2011). To investigate whether vocal emotion recognition is largely dictated by the amount of time listeners are exposed to speech or the position of critical emotional cues in the utterance, 40 English participants judged the meaning of emotionally-inflected pseudo-utterances presented in a gating paradigm, where utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated syllable-by-syllable from the offset rather than the onset of the stimulus). Accuracy for detecting six target emotions in each gate condition and the mean identification point for each emotion in milliseconds were analyzed and compared to results from Pell and Kotz (2011). We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, and new evidence that utterance-final syllables tended to facilitate listeners' accuracy in many conditions when compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral from speech cues was not influenced by how utterances were gated, although happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally-expressed emotions within the 400–1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech. PMID:23805115
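The offset-anchored gating design can be sketched in a few lines: each gate keeps a span of the waveform measured backward from the end of the utterance (the syllable boundary times below are assumed inputs, not the study's stimuli).

```python
# Sketch of building offset-anchored gates from assumed syllable boundaries.
import numpy as np

def offset_gates(signal, sr, boundaries):
    """boundaries: syllable boundary times in seconds, ascending."""
    n = len(signal)
    gates = []
    for t in reversed(boundaries):                 # grow from the utterance end
        start = int(t * sr)
        gates.append(signal[start:n])
    gates.append(signal)                           # final gate: full utterance
    return gates

sr = 16000
utterance = np.random.randn(sr * 2)                # 2-s pseudo-utterance
for g in offset_gates(utterance, sr, [0.4, 0.9, 1.4]):
    print(len(g) / sr, "s")
```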
Cleft Audit Protocol for Speech (CAPS-A): A Comprehensive Training Package for Speech Analysis
ERIC Educational Resources Information Center
Sell, D.; John, A.; Harding-Bell, A.; Sweeney, T.; Hegarty, F.; Freeman, J.
2009-01-01
Background: The previous literature has largely focused on speech analysis systems and ignored process issues, such as the nature of adequate speech samples, data acquisition, recording and playback. Although there has been recognition of the need for training on tools used in speech analysis associated with cleft palate, little attention has been…
Investigating Holistic Measures of Speech Prosody
ERIC Educational Resources Information Center
Cunningham, Dana Aliel
2012-01-01
Speech prosody is a multi-faceted dimension of speech which can be measured and analyzed in a variety of ways. In this study, the speech prosody of Mandarin L1 speakers, English L2 speakers, and English L1 speakers was assessed by trained raters who listened to sound clips of the speakers responding to a graph prompt and reading a short passage.…
Understanding Freedom of Speech in America: The Origin & Evolution of the 1st Amendment.
ERIC Educational Resources Information Center
Barnes, Judy
In this booklet the content and implications of the First Amendment are analyzed. Historical origins of free speech from ancient Greece to England before the discovery of America, free speech in colonial America, and the Bill of Rights and its meaning for free speech are outlined. The evolution of the First Amendment is described, and the…
Using the Pecha Kucha Speech to Analyze and Train Humor Skills
ERIC Educational Resources Information Center
Waisanen, Don
2018-01-01
Courses: Public speaking; communication courses requiring speeches. Objective: Students will learn how to apply humor principles to speeches through a slideshow method supportive of this goal, and to become more discerning about the possibilities and pitfalls of humorous communication.
An analysis of the masking of speech by competing speech using self-report data.
Agus, Trevor R; Akeroyd, Michael A; Noble, William; Bhullar, Navjot
2009-01-01
Many of the items in the "Speech, Spatial, and Qualities of Hearing" scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85-99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study if this self-report data reflected informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively higher scores for competing speech (six items), energetic masking (one item), and no masking (three items). The results suggest significant masking by competing speech in everyday listening situations.
Speech outcomes in Cantonese patients after glossectomy.
Wong, Ripley Kit; Poon, Esther Sok-Man; Woo, Cynthia Yuen-Man; Chan, Sabina Ching-Shun; Wong, Elsa Siu-Ping; Chu, Ada Wai-Sze
2007-08-01
We sought to determine the major factors affecting speech production of Cantonese-speaking glossectomized patients. Error pattern was analyzed. Forty-one Cantonese-speaking subjects who had undergone glossectomy ≥ 6 months previously were recruited. Speech production evaluation included (1) phonetic error analysis in nonsense syllable; (2) speech intelligibility in sentences evaluated by naive listeners; (3) overall speech intelligibility in conversation evaluated by experienced speech therapists. Patients receiving adjuvant radiotherapy had significantly poorer segmental and connected speech production. Total or subtotal glossectomy also resulted in poor speech outcomes. Patients having free flap reconstruction showed the best speech outcomes. Patients without lymph node metastasis had significantly better speech scores when compared with patients with lymph node metastasis. Initial consonant production had the worst scores, while vowel production was the least affected. Speech outcomes of Cantonese-speaking glossectomized patients depended on the severity of the disease. Initial consonants had the greatest effect on speech intelligibility.
NASA Technical Reports Server (NTRS)
Lokerson, D. C. (Inventor)
1977-01-01
A speech signal is analyzed by applying the signal to formant filters which derive first, second and third signals respectively representing the frequency of the speech waveform in the first, second and third formants. A first pulse train having approximately a pulse rate representing the average frequency of the first formant is derived; second and third pulse trains having pulse rates respectively representing zero crossings of the second and third formants are derived. The first formant pulse train is derived by establishing N signal level bands, where N is an integer at least equal to two. Adjacent ones of the signal bands have common boundaries, each of which is a predetermined percentage of the peak level of a complete cycle of the speech waveform.
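One element of the scheme, deriving a pulse train from the zero crossings of a band-limited formant signal, can be sketched as follows; the band edges are placeholders for a second-formant region, not values from the patent.

```python
# Rough sketch: pulses at positive-going zero crossings of a band-pass-filtered
# (formant-region) signal.
import numpy as np
from scipy.signal import butter, sosfiltfilt

sr = 10000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 1500 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)

sos = butter(4, [900, 2500], btype="bandpass", fs=sr, output="sos")
formant2 = sosfiltfilt(sos, speech)                # crude 2nd-formant band

# One pulse per positive-going zero crossing of the formant waveform.
crossings = np.where((formant2[:-1] < 0) & (formant2[1:] >= 0))[0]
pulse_train = np.zeros_like(speech)
pulse_train[crossings] = 1.0
print(len(crossings), "pulses, tracking the zero-crossing rate of the band")
```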
Analyzing Distributional Learning of Phonemic Categories in Unsupervised Deep Neural Networks
Räsänen, Okko; Nagamine, Tasha; Mesgarani, Nima
2017-01-01
Infants’ speech perception adapts to the phonemic categories of their native language, a process assumed to be driven by the distributional properties of speech. This study investigates whether deep neural networks (DNNs), the current state-of-the-art in distributional feature learning, are capable of learning phoneme-like representations of speech in an unsupervised manner. We trained DNNs with unlabeled and labeled speech and analyzed the activations of each layer with respect to the phones in the input segments. The analyses reveal that the emergence of phonemic invariance in DNNs is dependent on the availability of phonemic labeling of the input during the training. No increased phonemic selectivity of the hidden layers was observed in the purely unsupervised networks despite successful learning of low-dimensional representations for speech. This suggests that additional learning constraints or more sophisticated models are needed to account for the emergence of phone-like categories in distributional learning operating on natural speech. PMID:29359204
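One way to quantify the layer-wise phonemic selectivity the authors analyze is a linear probe on hidden activations; the sketch below uses fake activations and labels purely to show the measurement, not the paper's DNNs or data.

```python
# Illustrative analysis step: probe each layer's activations for phone identity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_frames, n_phones = 2000, 10
labels = rng.integers(0, n_phones, n_frames)       # frame-level phone labels

for layer, dim in enumerate([64, 32, 16]):         # fake activations per layer
    acts = rng.standard_normal((n_frames, dim)) + labels[:, None] * 0.05 * layer
    probe = LogisticRegression(max_iter=1000)
    acc = cross_val_score(probe, acts, labels, cv=3).mean()
    print(f"layer {layer}: phone-probe accuracy {acc:.2f}")
```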
Speech recognition: Acoustic phonetic and lexical knowledge representation
NASA Astrophysics Data System (ADS)
Zue, V. W.
1983-02-01
The purpose of this program is to develop a speech data base facility under which the acoustic characteristics of speech sounds in various contexts can be studied conveniently; investigate the phonological properties of a large lexicon of, say, 10,000 words, and determine to what extent the phonotactic constraints can be utilized in speech recognition; study the acoustic cues that are used to mark word boundaries; develop a test bed in the form of a large-vocabulary, IWR system to study the interactions of acoustic, phonetic and lexical knowledge; and develop a limited continuous speech recognition system with the goal of recognizing any English word from its spelling in order to assess the interactions of higher-level knowledge sources.
Sobol-Shikler, Tal; Robinson, Peter
2010-07-01
We present a classification algorithm for inferring affective states (emotions, mental states, attitudes, and the like) from their nonverbal expressions in speech. It is based on the observations that affective states can occur simultaneously and different sets of vocal features, such as intonation and speech rate, distinguish between nonverbal expressions of different affective states. The input to the inference system was a large set of vocal features and metrics that were extracted from each utterance. The classification algorithm conducted independent pairwise comparisons between nine affective-state groups. The classifier used various subsets of metrics of the vocal features and various classification algorithms for different pairs of affective-state groups. Average classification accuracy of the 36 pairwise machines was 75 percent, using 10-fold cross validation. The comparison results were consolidated into a single ranked list of the nine affective-state groups. This list was the output of the system and represented the inferred combination of co-occurring affective states for the analyzed utterance. The inference accuracy of the combined machine was 83 percent. The system automatically characterized over 500 affective state concepts from the Mind Reading database. The inference of co-occurring affective states was validated by comparing the inferred combinations to the lexical definitions of the labels of the analyzed sentences. The distinguishing capabilities of the system were comparable to human performance.
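The consolidation step, 36 pairwise machines voting into one ranked list of nine affective-state groups, can be sketched as a simple win count (the paper's actual consolidation rule, group labels, and per-pair classifiers are abstracted into placeholders):

```python
# Sketch: consolidate pairwise comparisons into a single ranked list.
from itertools import combinations

groups = ["joy", "anger", "fear", "boredom", "interest",
          "surprise", "stress", "calm", "doubt"]        # placeholder group names

def pairwise_winner(a, b, utterance):
    # Placeholder for the per-pair classifier over vocal-feature metrics.
    return a if (hash((a, b, utterance)) & 1) else b

def rank_affective_states(utterance):
    wins = {g: 0 for g in groups}
    for a, b in combinations(groups, 2):                # 36 pairwise machines
        wins[pairwise_winner(a, b, utterance)] += 1
    return sorted(groups, key=lambda g: wins[g], reverse=True)

print(rank_affective_states("utt_001"))                 # ranked co-occurring states
```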
Method and apparatus for obtaining complete speech signals for speech recognition applications
NASA Technical Reports Server (NTRS)
Abrash, Victor (Inventor); Cesari, Federico (Inventor); Franco, Horacio (Inventor); George, Christopher (Inventor); Zheng, Jing (Inventor)
2009-01-01
The present invention relates to a method and apparatus for obtaining complete speech signals for speech recognition applications. In one embodiment, the method continuously records an audio stream comprising a sequence of frames to a circular buffer. When a user command to commence or terminate speech recognition is received, the method obtains a number of frames of the audio stream occurring before or after the user command in order to identify an augmented audio signal for speech recognition processing. In further embodiments, the method analyzes the augmented audio signal in order to locate starting and ending speech endpoints that bound at least a portion of speech to be processed for recognition. At least one of the speech endpoints is located using a Hidden Markov Model.
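A minimal sketch of the circular-buffer mechanism the patent abstract describes, with illustrative frame and buffer sizes:

```python
# Sketch of the circular-buffer idea in the abstract: audio frames are
# recorded continuously, so frames from before a user command can be
# recovered to augment the signal. Sizes are illustrative assumptions.
from collections import deque

class CircularAudioBuffer:
    def __init__(self, max_frames=500):          # e.g., ~5 s of 10 ms frames
        self.frames = deque(maxlen=max_frames)   # oldest frames drop automatically

    def push(self, frame):
        self.frames.append(frame)                # called for every captured frame

    def preceding(self, n_frames):
        """Return the last n_frames, e.g., audio captured just before a
        push-to-talk press, for endpoint detection downstream."""
        return list(self.frames)[-n_frames:]
```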
Building Searchable Collections of Enterprise Speech Data.
ERIC Educational Resources Information Center
Cooper, James W.; Viswanathan, Mahesh; Byron, Donna; Chan, Margaret
The study has applied speech recognition and text-mining technologies to a set of recorded outbound marketing calls and analyzed the results. Since speaker-independent speech recognition technology results in a significantly lower recognition rate than that found when the recognizer is trained for a particular speaker, a number of post-processing…
ERIC Educational Resources Information Center
Overby, Megan S.
2018-01-01
Background: Academic programmes in speech-language pathology are increasingly providing telehealth/telepractice clinical education to students. Despite this growth, there is little information describing effective ways to teach it. Aims: The current exploratory study analyzed the perceptions of speech-language pathology/therapy (SLP/SLT) faculty,…
Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles
2012-01-01
We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual-speech signal is more salient, temporal perception of speech would be modulated by the visual-speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly-salient speech-signals with the visual-signals requiring smaller visual-leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual-speech signal may lead to higher probabilities regarding the identity of the auditory-signal that modulate the temporal window of multisensory integration of the speech-stimulus. PMID:23060756
Study of acoustic correlates associate with emotional speech
NASA Astrophysics Data System (ADS)
Yildirim, Serdar; Lee, Sungbok; Lee, Chul Min; Bulut, Murtaza; Busso, Carlos; Kazemzadeh, Ebrahim; Narayanan, Shrikanth
2004-10-01
This study investigates the acoustic characteristics of four different emotions expressed in speech. The aim is to obtain detailed acoustic knowledge on how a speech signal is modulated by changes from neutral to a certain emotional state. Such knowledge is necessary for automatic emotion recognition and classification and emotional speech synthesis. Speech data obtained from two semi-professional actresses are analyzed and compared. Each subject produces 211 sentences with four different emotions: neutral, sad, angry, happy. We analyze changes in temporal and acoustic parameters such as magnitude and variability of segmental duration, fundamental frequency and the first three formant frequencies as a function of emotion. Acoustic differences among the emotions are also explored with mutual information computation, multidimensional scaling and acoustic likelihood comparison with normal speech. Results indicate that speech associated with anger and happiness is characterized by longer duration, shorter interword silence, and higher pitch and rms energy with wider ranges. Sadness is distinguished from other emotions by lower rms energy and longer interword silence. Interestingly, the differences in formant pattern between [happiness/anger] and [neutral/sadness] are better reflected in back vowels such as /a/ (as in 'father') than in front vowels. Detailed results on intra- and interspeaker variability will be reported.
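Two of the acoustic parameters analyzed above (fundamental frequency and rms energy) can be extracted with off-the-shelf tools. The sketch below uses librosa and is a generic illustration, not the authors' pipeline; the audio filename is a placeholder.

```python
# Generic extraction of f0 and rms energy with librosa; not the authors'
# analysis pipeline. "utterance.wav" is a placeholder filename.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=None)
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
rms = librosa.feature.rms(y=y)[0]

print("median f0 (Hz):", np.nanmedian(f0))               # pitch level
print("f0 range (Hz):", np.nanmax(f0) - np.nanmin(f0))   # pitch variability
print("mean rms energy:", rms.mean())
```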
Speech Analysis of Bengali Speaking Children with Repaired Cleft Lip & Palate
ERIC Educational Resources Information Center
Chakrabarty, Madhushree; Kumar, Suman; Chatterjee, Indranil; Maheshwari, Neha
2012-01-01
The present study aims at analyzing speech samples of four Bengali speaking children with repaired cleft palates with a view to differentiate between the misarticulations arising out of a deficit in linguistic skills and structural or motoric limitations. Spontaneous speech samples were collected and subjected to a number of linguistic analyses…
Rhetorical and Linguistic Analysis of Bush's Second Inaugural Speech
ERIC Educational Resources Information Center
Sameer, Imad Hayif
2017-01-01
This study attempts to analyze Bush's second inaugural speech. It aims at investigating the use of linguistic strategies in it. It resorts to two models which are Aristotle's model while the second is that of Atkinson's (1984) to draw the attention towards linguistic strategies. The analysis shows that Bush's second inaugural speech is successful…
Applications of Text Analysis Tools for Spoken Response Grading
ERIC Educational Resources Information Center
Crossley, Scott; McNamara, Danielle
2013-01-01
This study explores the potential for automated indices related to speech delivery, language use, and topic development to model human judgments of TOEFL speaking proficiency in second language (L2) speech samples. For this study, 244 transcribed TOEFL speech samples taken from 244 L2 learners were analyzed using automated indices taken from…
Automatic speech recognition technology development at ITT Defense Communications Division
NASA Technical Reports Server (NTRS)
White, George M.
1977-01-01
An assessment of the applications of automatic speech recognition to defense communication systems is presented. Future research efforts include investigations into the following areas: (1) dynamic programming; (2) recognition of speech degraded by noise; (3) speaker independent recognition; (4) large vocabulary recognition; (5) word spotting and continuous speech recognition; and (6) isolated word recognition.
ERIC Educational Resources Information Center
Oliveira, Carla; Lousada, Marisa; Jesus, Luis M. T.
2015-01-01
Children with speech sound disorders (SSD) represent a large number of speech and language therapists' caseloads. The intervention with children who have SSD can involve different therapy approaches, and these may be articulatory or phonologically based. Some international studies reveal a widespread application of articulatory based approaches in…
Comprehending idioms cross-linguistically.
Bortfeld, Heather
2003-01-01
Speakers of three different languages (English, Latvian, and Mandarin) rated sets of idioms from their language for the analyzability of the relationship between each phrase's literal and figurative meaning. For each language, subsets of idioms were selected based on these ratings. Latvian and Mandarin idioms were literally translated into English. Across three experiments, people classified idioms from the three languages according to their figurative meanings. Response times and error rates indicate that participants were able to interpret unfamiliar (e.g., other languages') idioms depending largely on the degree to which they were analyzable, and that different forms of processing were used both within and between languages depending on this analyzability. Results support arguments for a continuum of analyzability (Bortfeld & McGlone, 2001), along which figurative speech ranges from reflecting general conceptual structures to specific cultural and historical references.
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Karnofsky, K. F.; Stevens, K. N.; Alakel, M. N.
1983-12-01
The use of multiple sensors to transduce speech was investigated. A data base of speech and noise was collected from a number of transducers located on and around the head of the speaker. The transducers included pressure microphones, first-order gradient and second-order gradient microphones, and an accelerometer. The effort analyzed these data and evaluated the performance of a multiple-sensor configuration. The conclusion was that multiple-transducer configurations can provide a signal containing more usable speech information than that provided by a single microphone.
The Acquisition of Relative Clauses in Spontaneous Child Speech in Mandarin Chinese
ERIC Educational Resources Information Center
Chen, Jidong; Shirai, Yasuhiro
2015-01-01
This study investigates the developmental trajectory of relative clauses (RCs) in Mandarin-learning children's speech. We analyze the spontaneous production of RCs by four monolingual Mandarin-learning children (0;11 to 3;5) and their input from a longitudinal naturalistic speech corpus (Min, 1994). The results reveal that in terms of the…
ERIC Educational Resources Information Center
Fukuda, Makiko
2014-01-01
The present study revealed the dynamic process of speech development in a domestic immersion program by seven adult beginning learners of Japanese. The speech data were analyzed with fluency, accuracy, and complexity measurements at group, interindividual, and intraindividual levels. The results revealed the complex nature of language development…
Neural networks supporting audiovisual integration for speech: A large-scale lesion study.
Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius
2018-06-01
Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched (the McGurk-MacDonald effect). Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure (auditory vs. visual capture) can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor-related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.
Peng, Jianxin; Yan, Nanjie; Wang, Dan
2015-01-01
The present study investigated Chinese speech intelligibility in 28 classrooms from nine different elementary schools in Guangzhou, China. The subjective Chinese speech intelligibility in the classrooms was evaluated with children in grades 2, 4, and 6 (7 to 12 years old). Acoustical measurements were also performed in these classrooms. Subjective Chinese speech intelligibility scores and objective speech intelligibility parameters, such as the speech transmission index (STI), were obtained at each listening position for all tests. The relationship between subjective Chinese speech intelligibility scores and STI was revealed and analyzed. The effects of age on Chinese speech intelligibility scores were compared. Results indicate high correlations between subjective Chinese speech intelligibility scores and STI for grades 2, 4, and 6 children. Chinese speech intelligibility scores increase with age under the same STI condition. The differences in scores among different age groups decrease as STI increases. To achieve 95% Chinese speech intelligibility scores, the STIs required for grades 2, 4, and 6 children are 0.75, 0.69, and 0.63, respectively.
Effect of classroom acoustics on the speech intelligibility of students.
Rabelo, Alessandra Terra Vasconcelos; Santos, Juliana Nunes; Oliveira, Rafaella Cristina; Magalhães, Max de Castro
2014-01-01
To analyze the acoustic parameters of classrooms and the relationship among equivalent sound pressure level (Leq), reverberation time (T₃₀), the Speech Transmission Index (STI), and the performance of students in speech intelligibility testing. A cross-sectional descriptive study, which analyzed the acoustic performance of 18 classrooms in 9 public schools in Belo Horizonte, Minas Gerais, Brazil, was conducted. The following acoustic parameters were measured: Leq, T₃₀, and the STI. In the schools evaluated, a speech intelligibility test was performed on 273 students, 45.4% of whom were boys, with an average age of 9.4 years. The results of the speech intelligibility test were compared to the values of the acoustic parameters with the help of Student's t-test. The Leq, T₃₀, and STI tests were conducted in empty and furnished classrooms. Children showed better results in speech intelligibility tests conducted in classrooms with less noise, a lower T₃₀, and greater STI values. The majority of classrooms did not meet the recommended regulatory standards for good acoustic performance. Acoustic parameters have a direct effect on the speech intelligibility of students. Noise contributes to a decrease in their understanding of information presented orally, which can lead to negative consequences in their education and their social integration as future professionals.
Speech emotion recognition methods: A literature review
NASA Astrophysics Data System (ADS)
Basharirad, Babak; Moradhaseli, Mohammadreza
2017-10-01
Recently, research attention to emotional speech signals in human-machine interfaces has increased owing to the availability of high computational capability. Many systems have been proposed in the literature to identify emotional states through speech. Selecting suitable feature sets, designing proper classification methods, and preparing an appropriate dataset are the main key issues of speech emotion recognition systems. This paper critically analyzes the currently available approaches to speech emotion recognition with respect to three evaluation parameters (feature set, feature classification, and accuracy). In addition, this paper evaluates the performance and limitations of the available methods. Furthermore, it highlights promising directions for the improvement of speech emotion recognition systems.
An investigation of articulatory setting using real-time magnetic resonance imaging
Ramanarayanan, Vikram; Goldstein, Louis; Byrd, Dani; Narayanan, Shrikanth S.
2013-01-01
This paper presents an automatic procedure to analyze articulatory setting in speech production using real-time magnetic resonance imaging of the moving human vocal tract. The procedure extracts frames corresponding to inter-speech pauses, speech-ready intervals and absolute rest intervals from magnetic resonance imaging sequences of read and spontaneous speech elicited from five healthy speakers of American English and uses automatically extracted image features to quantify vocal tract posture during these intervals. Statistical analyses show significant differences between vocal tract postures adopted during inter-speech pauses and those at absolute rest before speech; the latter also exhibits a greater variability in the adopted postures. In addition, the articulatory settings adopted during inter-speech pauses in read and spontaneous speech are distinct. The results suggest that adopted vocal tract postures differ on average during rest positions, ready positions and inter-speech pauses, and might, in that order, involve an increasing degree of active control by the cognitive speech planning mechanism. PMID:23862826
Phonetic Modification of Vowel Space in Storybook Speech to Infants up to 2 Years of Age
ERIC Educational Resources Information Center
Burnham, Evamarie B.; Wieland, Elizabeth A.; Kondaurova, Maria V.; McAuley, J. Devin; Bergeson, Tonya R.; Dilley, Laura C.
2015-01-01
Purpose: A large body of literature has indicated vowel space area expansion in infant-directed (ID) speech compared with adult-directed (AD) speech, which may promote language acquisition. The current study tested whether this expansion occurs in storybook speech read to infants at various points during their first 2 years of life. Method: In 2…
Free Speech and the Rights of Congress: Robert M. LaFollette and the Argument from Principle.
ERIC Educational Resources Information Center
Schliessmann, Michael R.
Senator Robert LaFollette's speech to the United States Senate on "Free Speech and the Right of Congress to Declare the Objects of War," given October 6, 1917, epitomized his opposition to the war and the Wilson administration's largely successful moves to suppress public criticism of the war. In the speech he asserted his position on…
Chyl, Katarzyna; Kossowski, Bartosz; Dębska, Agnieszka; Łuniewska, Magdalena; Banaszkiewicz, Anna; Żelechowska, Agata; Frost, Stephen J; Mencl, William Einar; Wypych, Marek; Marchewka, Artur; Pugh, Kenneth R; Jednoróg, Katarzyna
2018-01-01
Literacy acquisition is a demanding process that induces significant changes in the brain, especially in the spoken and written language networks. Nevertheless, large-scale paediatric fMRI studies are still limited. We analyzed fMRI data to show how individual differences in reading performance correlate with brain activation for speech and print in 111 children attending kindergarten or first grade and examined group differences between a matched subset of emergent-readers and prereaders. Across the entire cohort, individual differences analysis revealed that reading skill was positively correlated with the magnitude of activation difference between words and symbol strings in left superior temporal, inferior frontal and fusiform gyri. Group comparisons of the matched subset of pre- and emergent-readers showed higher activity for emergent-readers in left inferior frontal, precentral, and postcentral gyri. Individual differences in activation for natural versus vocoded speech were also positively correlated with reading skill, primarily in the left temporal cortex. However, in contrast to studies on adult illiterates, group comparisons revealed higher activity in prereaders compared to readers in the frontal lobes. Print-speech coactivation was observed only in readers and individual differences analyses revealed a positive correlation between convergence and reading skill in the left superior temporal sulcus. These results emphasise that a child's brain undergoes several modifications to both visual and oral language systems in the process of learning to read. They also suggest that print-speech convergence is a hallmark of acquiring literacy. © 2017 Association for Child and Adolescent Mental Health.
The Semiotics of Two Speech Styles in Shokleng. Sociolinguistic Working Paper Number 103.
ERIC Educational Resources Information Center
Urban, Greg
Two speech styles, origin-myth telling and ritual wailing, found among the Shokleng Indians of south Brazil are analyzed from the perspective of two specific functions of speech style: (1) for indexing or highlighting the subject matter in certain contexts, and (2) for relating the contexts and subject matters to other contexts and subject matters…
Real-time spectrum estimation–based dual-channel speech-enhancement algorithm for cochlear implant
2012-01-01
Background Improvement of the cochlear implant (CI) front-end signal acquisition is needed to increase speech recognition in noisy environments. To suppress directional noise, we introduce a speech-enhancement algorithm based on microphone array beamforming and spectral estimation. The experimental results indicate that this method is robust to directional mobile noise and strongly enhances the desired speech, thereby improving the performance of CI devices in a noisy environment. Methods The spectrum estimation and array beamforming methods were combined to suppress ambient noise. The directivity coefficient was estimated in the noise-only intervals and was updated to adapt to mobile noise. Results The proposed algorithm was realized in the CI speech strategy. For the actual parameters, we use a Maxflat filter to obtain fractional sampling points and a cepstrum method to differentiate desired speech frames from noise frames. Broadband adjustment coefficients were added to compensate for the energy loss in the low-frequency band. Discussion The approximation of the directivity coefficient is tested and the errors are discussed. We also analyze the algorithm's constraints for noise estimation and distortion in CI processing. The performance of the proposed algorithm is analyzed and further compared with other prevalent methods. Conclusions A hardware platform was constructed for the experiments. The speech-enhancement results showed that our algorithm can suppress non-stationary noise with high SNR improvement. Excellent performance of the proposed algorithm was obtained in the speech-enhancement experiments and mobile testing, and signal-distortion results indicate that this algorithm is robust, with high SNR improvement and low speech distortion. PMID:23006896
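The core beamforming step can be illustrated with a generic two-microphone delay-and-sum sketch. This is not the paper's algorithm: the directivity-coefficient estimation and the Maxflat fractional-delay filter are not reproduced, and the steering delay tau is assumed known.

```python
# Generic two-microphone delay-and-sum beamformer, standing in for the
# beamforming step; the paper's directivity-coefficient estimation and
# Maxflat fractional-delay filter are not reproduced. The steering
# delay tau (seconds) is assumed known.
import numpy as np

def delay_and_sum(x1, x2, tau, fs):
    """Align channel 2 to channel 1 by a fractional delay, then average."""
    n = len(x1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    shift = np.exp(-2j * np.pi * freqs * tau)   # fractional delay applied in the frequency domain
    x2_aligned = np.fft.irfft(np.fft.rfft(x2) * shift, n)
    return 0.5 * (x1 + x2_aligned)              # coherent speech adds; diffuse noise averages down
```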
An Analysis of The Parameters Used In Speech ABR Assessment Protocols.
Sanfins, Milaine D; Hatzopoulos, Stavros; Donadon, Caroline; Diniz, Thais A; Borges, Leticia R; Skarzynski, Piotr H; Colella-Santos, Maria Francisca
2018-04-01
The aim of this study was to assess the parameters of choice, such as duration, intensity, rate, polarity, number of sweeps, window length, stimulated ear, fundamental frequency, first formant, and second formant, from previously published speech ABR studies. To identify candidate articles, five databases were assessed using the following keyword descriptors: speech ABR, ABR-speech, speech auditory brainstem response, auditory evoked potential to speech, speech-evoked brainstem response, and complex sounds. The search identified 1288 articles published between 2005 and 2015. After filtering the total number of papers according to the inclusion and exclusion criteria, 21 studies were selected. Analyzing the protocol details used in 21 studies suggested that there is no consensus to date on a speech-ABR protocol and that the parameters of analysis used are quite variable between studies. This inhibits the wider generalization and extrapolation of data across languages and studies.
Calandruccio, Lauren; Zhou, Haibo
2014-01-01
Purpose To examine whether improved speech recognition during linguistically mismatched target–masker experiments is due to linguistic unfamiliarity of the masker speech or linguistic dissimilarity between the target and masker speech. Method Monolingual English speakers (n = 20) and English–Greek simultaneous bilinguals (n = 20) listened to English sentences in the presence of competing English and Greek speech. Data were analyzed using mixed-effects regression models to determine differences in English recognition performance between the 2 groups and 2 masker conditions. Results Results indicated that English sentence recognition for monolinguals and simultaneous English–Greek bilinguals improved when the masker speech changed from competing English to competing Greek speech. Conclusion The improvement in speech recognition that has been observed for linguistically mismatched target–masker experiments cannot be simply explained by the masker language being linguistically unknown or unfamiliar to the listeners. Listeners can improve their speech recognition in linguistically mismatched target–masker experiments even when the listener is able to obtain meaningful linguistic information from the masker speech. PMID:24167230
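The mixed-effects analysis described in the Method can be sketched with statsmodels; the data file and column names (accuracy, masker, group, subject) are hypothetical.

```python
# Sketch of a mixed-effects regression of recognition accuracy on masker
# language and listener group, with subjects as random effects. The data
# file and column names are hypothetical, not from the study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("recognition_scores.csv")
model = smf.mixedlm("accuracy ~ masker * group", df, groups=df["subject"]).fit()
print(model.summary())
```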
Mispronunciation Detection for Language Learning and Speech Recognition Adaptation
ERIC Educational Resources Information Center
Ge, Zhenhao
2013-01-01
The areas of "mispronunciation detection" (or "accent detection" more specifically) within the speech recognition community are receiving increased attention now. Two application areas, namely language learning and speech recognition adaptation, are largely driving this research interest and are the focal points of this work.…
Speech Appliances in the Treatment of Phonological Disorders.
ERIC Educational Resources Information Center
Ruscello, Dennis M.
1995-01-01
This article addresses the rationale for and issues related to the use of speech appliances, especially a removable speech appliance that positions the tongue to produce the correct /r/ phoneme. Research results suggest that this appliance was successful with a large group of clients. (Author/DB)
ERIC Educational Resources Information Center
Johnston, Dale
2006-01-01
Authoritarian teaching practices in ballet inhibit the use of private speech. This paper highlights the critical importance of private speech in the cognitive development of young ballet students, within what is largely a non-verbal art form. It draws upon research by Russian psychologist Lev Vygotsky and contemporary socioculturalists, to…
Nixon's Checkers: A Rhetoric-Communication Criticism.
ERIC Educational Resources Information Center
Kneupper, Charles W.; Mabry, Edward A.
Richard Nixon's "Checkers" speech, a response to charges brought against the "Nixon fund," was primarily an effort to explain the behavior of Eisenhower's 1952 presidential-campaign staff. The effectiveness of this speech was largely due to Nixon's self-disclosure within the context of the speech's narrative mode. In…
Gender Disparities in Speech-evoked Auditory Brainstem Response in Healthy Adults.
Jalaei, Bahram; Zakaria, Mohd Normani; Mohd Azmi, Mohd Hafiz Afifi; Nik Othman, Nik Adilah; Sidek, Dinsuhaimi
2017-04-01
Gender disparities in speech-evoked auditory brainstem response (speech-ABR) outcomes have been reported, but the literature is limited. The present study was performed to further verify this issue and determine the influence of head size on speech-ABR results between genders. Twenty-nine healthy Malaysian subjects (14 males and 15 females) aged 19 to 30 years participated in this study. After measuring the head circumference, speech-ABR was recorded using the synthesized syllable /da/ from the right ear of each participant. Speech-ABR peak amplitudes, peak latencies, and composite onset measures were computed and analyzed. Significant gender disparities were noted in the transient component but not in the sustained component of speech-ABR. Statistically higher V/A amplitudes and less steep V/A slopes were found in females. These gender differences were partially affected after controlling for head size. Head size is not the main contributing factor for gender disparities in speech-ABR outcomes. Gender-specific normative data can be useful when recording speech-ABR for clinical purposes.
Representative American Speeches 1996-1997. The Reference Shelf Volume 69 Number 6.
ERIC Educational Resources Information Center
Logue, Calvin McLeod, Ed.; DeHart, Jean, Ed.
This collection of representative speeches delivered by public officials and other prominent persons contains addresses to both large and small organizations, given both on ceremonial occasions and on less formal occasions. The collection contains school commencement addresses, addresses to government bodies, speeches to international…
Speech Perception as a Cognitive Process: The Interactive Activation Model.
ERIC Educational Resources Information Center
Elman, Jeffrey L.; McClelland, James L.
Research efforts to model speech perception in terms of a processing system in which knowledge and processing are distributed over large numbers of highly interactive--but computationally primitive--elements are described in this report. After discussing the properties of speech that demand a parallel interactive processing system, the report…
Imitation of Para-Phonological Detail Following Left Hemisphere Lesions
ERIC Educational Resources Information Center
Kappes, Juliane; Baumgaertner, Annette; Peschke, Claudia; Goldenberg, Georg; Ziegler, Wolfram
2010-01-01
Imitation in speech refers to the unintentional transfer of phonologically irrelevant acoustic-phonetic information of auditory input into speech motor output. Evidence for such imitation effects has been explained within the framework of episodic theories. However, it is largely unclear, which neural structures mediate speech imitation and how…
Acoustic correlates of sexual orientation and gender-role self-concept in women's speech.
Kachel, Sven; Simpson, Adrian P; Steffens, Melanie C
2017-06-01
Compared to studies of male speakers, relatively few studies have investigated acoustic correlates of sexual orientation in women. The present investigation focuses on shedding more light on intra-group variability in lesbians and straight women by using a fine-grained analysis of sexual orientation and collecting data on psychological characteristics (e.g., gender-role self-concept). For a large-scale women's sample (overall n = 108), recordings of spontaneous and read speech were analyzed for median fundamental frequency and acoustic vowel space features. Two studies showed no acoustic differences between lesbians and straight women, but there was evidence of acoustic differences within sexual orientation groups. Intra-group variability in median f0 was found to depend on the exclusivity of sexual orientation; F1 and F2 in /iː/ (study 1) and median f0 (study 2) were acoustic correlates of gender-role self-concept, at least for lesbians. Other psychological characteristics (e.g., sexual orientation of female friends) were also reflected in lesbians' speech. Findings suggest that acoustic features indexicalizing sexual orientation can only be successfully interpreted in combination with a fine-grained analysis of psychological characteristics.
ERIC Educational Resources Information Center
Szagun, Gisela
2011-01-01
The acquisition of German participle inflection was investigated using spontaneous speech samples from six children between 1;4 and 3;8 and ten children between 1;4 and 2;10 recorded longitudinally at regular intervals. Child-directed speech was also analyzed. In adult and child speech weak participles were significantly more frequent than…
ERIC Educational Resources Information Center
Neeley, Richard A.; Pulliam, Mary Hannah; Catt, Merrill; McDaniel, D. Mike
2015-01-01
This case study examined the initial and renewed impact of speech generating devices on the expressive communication behaviors of a child with autism spectrum disorder. The study spanned six years of interrupted use of two speech generating devices. The child's communication behaviors were analyzed from video recordings and included communication…
Hong Kong: Ten Years After the Handover
2007-06-29
Freedom of Speech and Assembly: ...surrounding the proposed anti-sedition legislation in 2003. The freedoms of speech and assembly appear to have been largely respected by the Chinese and...undermined the civil liberties of Hong Kong residents. However, there have been perceived threats to the freedom of speech and assembly, and erosion of press
Talker Differences in Clear and Conversational Speech: Acoustic Characteristics of Vowels
ERIC Educational Resources Information Center
Ferguson, Sarah Hargus; Kewley-Port, Diane
2007-01-01
Purpose: To determine the specific acoustic changes that underlie improved vowel intelligibility in clear speech. Method: Seven acoustic metrics were measured for conversational and clear vowels produced by 12 talkers--6 who previously were found (S. H. Ferguson, 2004) to produce a large clear speech vowel intelligibility effect for listeners with…
ERIC Educational Resources Information Center
Altvater-Mackensen, Nicole; Grossmann, Tobias
2015-01-01
Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…
Modulation, Adaptation, and Control of Orofacial Pathways in Healthy Adults
ERIC Educational Resources Information Center
Estep, Meredith E.
2009-01-01
Although the healthy adult possesses a large repertoire of coordinative strategies for oromotor behaviors, a range of nonverbal, speech-like movements can be observed during speech. The extent of overlap among sensorimotor speech and nonspeech neural correlates and the role of neuromodulatory inputs generated during oromotor behaviors are unknown.…
Discourse Analysis and Language Learning [Summary of a Symposium].
ERIC Educational Resources Information Center
Hatch, Evelyn
1981-01-01
A symposium on discourse analysis and language learning is summarized. Discourse analysis can be divided into six fields of research: syntax, the amount of syntactic organization required for different types of discourse, large speech events, intra-sentential cohesion in text, speech acts, and unequal power discourse. Research on speech events and…
Some articulatory details of emotional speech
NASA Astrophysics Data System (ADS)
Lee, Sungbok; Yildirim, Serdar; Bulut, Murtaza; Kazemzadeh, Abe; Narayanan, Shrikanth
2005-09-01
Differences in speech articulation among four emotion types, neutral, anger, sadness, and happiness, are investigated by analyzing tongue tip, jaw, and lip movement data collected from one male and one female speaker of American English. The data were collected using an electromagnetic articulography (EMA) system while the subjects produced simulated emotional speech. Pitch, root-mean-square (rms) energy and the first three formants were estimated for vowel segments. For both speakers, angry speech exhibited the largest rms energy and the largest articulatory activity in terms of displacement range and movement speed. Happy speech is characterized by the largest pitch variability. It has higher rms energy than neutral speech, but its articulatory activity is comparable to, or less than, that of neutral speech. That is, happy speech is more prominent in voicing activity than in articulation. Sad speech exhibits the longest sentence duration and lower rms energy. However, its articulatory activity is no less than that of neutral speech. Interestingly, for the male speaker, articulation for vowels in sad speech is consistently more peripheral (i.e., more forward displacements) when compared to other emotions. However, this does not hold for the female subject. These and other results will be discussed in detail with associated acoustics and perceived emotional qualities. [Work supported by NIH.]
Developing a Weighted Measure of Speech Sound Accuracy
Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.
2010-01-01
Purpose The purpose is to develop a system for numerically quantifying a speaker’s phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners’ judgments of severity of a child’s speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddler’s speech over time. Conclusion Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children’s speech. PMID:20699344
A clinician survey of speech and non-speech characteristics of neurogenic stuttering.
Theys, Catherine; van Wieringen, Astrid; De Nil, Luc F
2008-03-01
This study presents survey data on 58 Dutch-speaking patients with neurogenic stuttering following various neurological injuries. Stroke was the most prevalent cause of stuttering in our patients, followed by traumatic brain injury, neurodegenerative diseases, and other causes. Speech and non-speech characteristics were analyzed separately for these four etiology groups. Results suggested possible group differences, including site of lesion and influence of speech conditions on stuttering. Other characteristics, such as within-word localization of disfluencies and presence of secondary behaviors were comparable across the etiology groups. The implications of our results for the diagnosis of neurogenic stuttering will be discussed. After reading this article, the reader will be able to: (1) provide a concise overview of the main literature on neurogenic stuttering; (2) discuss the speech and non-speech characteristics of neurogenic stuttering; (3) provide an overview of current clinical practices for intervention with neurogenic stuttering patients and their perceived outcome.
From disgust to contempt-speech: The nature of contempt on the map of prejudicial emotions.
Bilewicz, Michal; Kamińska, Olga Katarzyna; Winiewski, Mikołaj; Soral, Wiktor
2017-01-01
Analyzing contempt as an intergroup emotion, we suggest that contempt and anger are not built upon each other, whereas disgust seems to be the most elementary and specific basic-emotional antecedent of contempt. Concurring with Gervais & Fessler, we suggest that many instances of "hate speech" are in fact instances of "contempt speech" - being based on disgust-driven contempt rather than hate.
ERIC Educational Resources Information Center
Williams, Frederick, Ed.; And Others
In this second of two studies conducted with portions of the National Speech and Hearing Survey data, the investigators analyzed the phonetic variants from standard American English in the speech of two groups of nonstandard English speaking children. The study used samples of free speech and performance on the Goldman-Fristoe Test of Articulation…
The impact of extrinsic demographic factors on Cantonese speech acquisition.
To, Carol K S; Cheung, Pamela S P; McLeod, Sharynne
2013-05-01
This study modeled the associations between extrinsic demographic factors and children's speech acquisition in Hong Kong Cantonese. The speech of 937 Cantonese-speaking children aged 2;4 to 6;7 in Hong Kong was assessed using a standardized speech test. Demographic information regarding household income, paternal education, maternal education, presence of siblings and having a domestic helper as the main caregiver was collected via parent questionnaires. After controlling for age and sex, higher maternal education and higher household income were significantly associated with better speech skills; however, these variables explained a negligible amount of variance. Paternal education, number of siblings and having a foreign domestic helper were not associated with a child's speech acquisition. Extrinsic factors exerted only minimal influence on children's speech acquisition. A large amount of unexplained variance in speech ability still warrants further research.
Temporal modulations in speech and music.
Ding, Nai; Patel, Aniruddh D; Chen, Lin; Butler, Henry; Luo, Cheng; Poeppel, David
2017-10-01
Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and its neural processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
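A common recipe for estimating a modulation spectrum, broadly in the spirit of the analysis above though not necessarily the authors' exact procedure, is to take the amplitude envelope via the Hilbert transform and then the power spectrum of the downsampled envelope:

```python
# One common modulation-spectrum recipe (not necessarily the authors'
# exact procedure): amplitude envelope via the Hilbert transform, then
# the power spectrum of the downsampled envelope. Assumes a mono file;
# "recording.wav" is a placeholder filename.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, resample_poly

fs, x = wavfile.read("recording.wav")
env = np.abs(hilbert(x.astype(float)))   # amplitude envelope
env_fs = 100                             # Hz; ample for 0.25-32 Hz modulations
env = resample_poly(env, env_fs, fs)
spec = np.abs(np.fft.rfft(env - env.mean())) ** 2
freqs = np.fft.rfftfreq(len(env), d=1.0 / env_fs)
band = (freqs >= 0.25) & (freqs <= 32)
print(f"dominant modulation frequency: {freqs[band][np.argmax(spec[band])]:.2f} Hz")
```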
The somatotopy of speech: Phonation and articulation in the human motor cortex
Brown, Steven; Laird, Angela R.; Pfordresher, Peter Q.; Thelen, Sarah M.; Turkeltaub, Peter; Liotti, Mario
2010-01-01
A sizable literature on the neuroimaging of speech production has reliably shown activations in the orofacial region of the primary motor cortex. These activations have invariably been interpreted as reflecting “mouth” functioning and thus articulation. We used functional magnetic resonance imaging to compare an overt speech task with tongue movement, lip movement, and vowel phonation. The results showed that the strongest motor activation for speech was in the somatotopic larynx area of the motor cortex, thus reflecting the significant contribution of phonation to speech production. In order to analyze further the phonatory component of speech, we performed a voxel-based meta-analysis of neuroimaging studies of syllable-singing (11 studies) and compared the results with a previously published meta-analysis of oral reading (11 studies), showing again a strong overlap in the larynx motor area. Overall, these findings highlight the under-recognized presence of phonation in imaging studies of speech production, and support the role of the larynx motor cortex in mediating the “melodicity” of speech. PMID:19162389
Huh, Young Eun; Park, Jongkyu; Suh, Mee Kyung; Lee, Sang Eun; Kim, Jumin; Jeong, Yuri; Kim, Hee-Tae; Cho, Jin Whan
2015-08-01
In Parkinson variant of multiple system atrophy (MSA-P), patterns of early speech impairment and their distinguishing features from Parkinson's disease (PD) require further exploration. Here, we compared speech data among patients with early-stage MSA-P, PD, and healthy subjects using quantitative acoustic and perceptual analyses. Variables were analyzed for men and women in view of gender-specific features of speech. Acoustic analysis revealed that male patients with MSA-P exhibited more profound speech abnormalities than those with PD, regarding increased voice pitch, prolonged pause time, and reduced speech rate. This might be due to widespread pathology of MSA-P in nigrostriatal or extra-striatal structures related to speech production. Although several perceptual measures were mildly impaired in MSA-P and PD patients, none of these parameters showed a significant difference between patient groups. Detailed speech analysis using acoustic measures may help distinguish between MSA-P and PD early in the disease process. Copyright © 2015 Elsevier Inc. All rights reserved.
A social feedback loop for speech development and its reduction in autism
Warlaumont, Anne S.; Richards, Jeffrey A.; Gilkerson, Jill; Oller, D. Kimbrough
2014-01-01
We analyze the microstructure of child-adult interaction during naturalistic, daylong, automatically labeled audio recordings (13,836 hours total) of children (8- to 48-month-olds) with and without autism. We find that adult responses are more likely when child vocalizations are speech-related. In turn, a child vocalization is more likely to be speech-related if the previous speech-related child vocalization received an immediate adult response. Taken together, these results are consistent with the idea that there is a social feedback loop between child and caregiver that promotes speech-language development. Although this feedback loop applies in both typical development and autism, children with autism produce proportionally fewer speech-related vocalizations and the responses they receive are less contingent on whether their vocalizations are speech-related. We argue that such differences will diminish the strength of the social feedback loop with cascading effects on speech development over time. Differences related to socioeconomic status are also reported. PMID:24840717
Hearing impaired speech in noisy classrooms
NASA Astrophysics Data System (ADS)
Shahin, Kimary; McKellin, William H.; Jamieson, Janet; Hodgson, Murray; Pichora-Fuller, M. Kathleen
2005-04-01
Noisy classrooms have been shown to induce among students patterns of interaction similar to those used by hearing impaired people [W. H. McKellin et al., GURT (2003)]. In this research, the speech of children in a noisy classroom setting was investigated to determine if noisy classrooms have an effect on students' speech. Audio recordings were made of the speech of students during group work in their regular classrooms (grades 1-7), and of the speech of the same students in a sound booth. Noise level readings in the classrooms were also recorded. Each student's noisy and quiet environment speech samples were acoustically analyzed for prosodic and segmental properties (f0, pitch range, pitch variation, phoneme duration, vowel formants), and compared. The analysis showed that the students' speech in the noisy classrooms had characteristics of the speech of hearing-impaired persons [e.g., R. O'Halpin, Clin. Ling. and Phon. 15, 529-550 (2001)]. Some educational implications of our findings were identified. [Work supported by the Peter Wall Institute for Advanced Studies, University of British Columbia.]
Crosse, Michael J; Lalor, Edmund C
2014-04-01
Visual speech can greatly enhance a listener's comprehension of auditory speech when they are presented simultaneously. Efforts to determine the neural underpinnings of this phenomenon have been hampered by the limited temporal resolution of hemodynamic imaging and the fact that EEG and magnetoencephalographic data are usually analyzed in response to simple, discrete stimuli. Recent research has shown that neuronal activity in human auditory cortex tracks the envelope of natural speech. Here, we exploit this finding by estimating a linear forward-mapping between the speech envelope and EEG data and show that the latency at which the envelope of natural speech is represented in cortex is shortened by >10 ms when continuous audiovisual speech is presented compared with audio-only speech. In addition, we use a reverse-mapping approach to reconstruct an estimate of the speech stimulus from the EEG data and, by comparing the bimodal estimate with the sum of the unimodal estimates, find no evidence of any nonlinear additive effects in the audiovisual speech condition. These findings point to an underlying mechanism that could account for enhanced comprehension during audiovisual speech. Specifically, we hypothesize that low-level acoustic features that are temporally coherent with the preceding visual stream may be synthesized into a speech object at an earlier latency, which may provide an extended period of low-level processing before extraction of semantic information.
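The forward mapping from the speech envelope to EEG can be sketched as time-lagged ridge regression, a standard temporal-response-function estimator. Shapes and the regularization value below are illustrative assumptions, not the authors' settings.

```python
# Time-lagged ridge regression as a forward mapping (temporal response
# function) from the speech envelope to EEG. Shapes and regularization
# are illustrative assumptions, not the authors' settings.
import numpy as np

def estimate_trf(envelope, eeg, n_lags, lam=1e2):
    """envelope: (n_samples,); eeg: (n_samples, n_channels).
    Returns ridge weights of shape (n_lags, n_channels)."""
    n = len(envelope)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):              # column k = envelope delayed by k samples
        X[k:, k] = envelope[:n - k]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)
```

The lag at which the estimated weights peak gives a cortical tracking latency; comparing fits for audiovisual versus audio-only recordings is one way to probe the latency shift reported above.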
Contextual modulation of reading rate for direct versus indirect speech quotations.
Yao, Bo; Scheepers, Christoph
2011-12-01
In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, the processing consequences of this distinction are largely unclear. In two experiments, participants were asked to either orally (Experiment 1) or silently (Experiment 2, eye-tracking) read written stories that contained either a direct speech or an indirect speech quotation. The context preceding those quotations described a situation that implied either a fast-speaking or a slow-speaking quoted protagonist. It was found that this context manipulation affected reading rates (in both oral and silent reading) for direct speech quotations, but not for indirect speech quotations. This suggests that readers are more likely to engage in perceptual simulations of the reported speech act when reading direct speech as opposed to meaning-equivalent indirect speech quotations, as part of a more vivid representation of the former. Copyright © 2011 Elsevier B.V. All rights reserved.
Characterizing Articulation in Apraxic Speech Using Real-Time Magnetic Resonance Imaging.
Hagedorn, Christina; Proctor, Michael; Goldstein, Louis; Wilson, Stephen M; Miller, Bruce; Gorno-Tempini, Maria Luisa; Narayanan, Shrikanth S
2017-04-14
Real-time magnetic resonance imaging (MRI) and accompanying analytical methods are shown to capture and quantify salient aspects of apraxic speech, substantiating and expanding upon evidence provided by clinical observation and acoustic and kinematic data. Analysis of apraxic speech errors within a dynamic systems framework is provided and the nature of pathomechanisms of apraxic speech discussed. One adult male speaker with apraxia of speech was imaged using real-time MRI while producing spontaneous speech, repeated naming tasks, and self-paced repetition of word pairs designed to elicit speech errors. Articulatory data were analyzed, and speech errors were detected using time series reflecting articulatory activity in regions of interest. Real-time MRI captured two types of apraxic gestural intrusion errors in a word pair repetition task. Gestural intrusion errors in nonrepetitive speech, multiple silent initiation gestures at the onset of speech, and covert (unphonated) articulation of entire monosyllabic words were also captured. Real-time MRI and accompanying analytical methods capture and quantify many features of apraxic speech that have been previously observed using other modalities while offering high spatial resolution. This patient's apraxia of speech affected the ability to select only the appropriate vocal tract gestures for a target utterance, suppressing others, and to coordinate them in time.
A social feedback loop for speech development and its reduction in autism.
Warlaumont, Anne S; Richards, Jeffrey A; Gilkerson, Jill; Oller, D Kimbrough
2014-07-01
We analyzed the microstructure of child-adult interaction during naturalistic, daylong, automatically labeled audio recordings (13,836 hr total) of children (8- to 48-month-olds) with and without autism. We found that an adult was more likely to respond when the child's vocalization was speech related rather than not speech related. In turn, a child's vocalization was more likely to be speech related if the child's previous speech-related vocalization had received an immediate adult response rather than no response. Taken together, these results are consistent with the idea that there is a social feedback loop between child and caregiver that promotes speech development. Although this feedback loop applies in both typical development and autism, children with autism produced proportionally fewer speech-related vocalizations, and the responses they received were less contingent on whether their vocalizations were speech related. We argue that such differences will diminish the strength of the social feedback loop and have cascading effects on speech development over time. Differences related to socioeconomic status are also reported. © The Author(s) 2014.
Tilsen, Sam; Arvaniti, Amalia
2013-07-01
This study presents a method for analyzing speech rhythm using empirical mode decomposition of the speech amplitude envelope, which allows for extraction and quantification of syllabic- and supra-syllabic time-scale components of the envelope. The method of empirical mode decomposition of a vocalic energy amplitude envelope is illustrated in detail, and several types of rhythm metrics derived from this method are presented. Spontaneous speech extracted from the Buckeye Corpus is used to assess the effect of utterance length on metrics, and it is shown how metrics representing variability in the supra-syllabic time-scale components of the envelope can be used to identify stretches of speech with targeted rhythmic characteristics. Furthermore, the envelope-based metrics are used to characterize cross-linguistic differences in speech rhythm in the UC San Diego Speech Lab corpus of English, German, Greek, Italian, Korean, and Spanish speech elicited in read sentences, read passages, and spontaneous speech. The envelope-based metrics exhibit significant effects of language and elicitation method that argue for a nuanced view of cross-linguistic rhythm patterns.
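A minimal sketch of the decomposition step, assuming the PyEMD package; the paper's vocalic-energy envelope extraction and specific rhythm metrics are not reproduced here.

```python
# Empirical mode decomposition of a speech amplitude envelope, assuming
# the PyEMD package; the paper's vocalic-energy envelope and specific
# rhythm metrics are not reproduced.
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD

def envelope_imfs(x):
    env = np.abs(hilbert(x))      # amplitude envelope of the waveform
    return EMD().emd(env)         # intrinsic mode functions, fastest first

# Variability of a slow (supra-syllabic) component can then serve as a
# rhythm metric, e.g., np.std(imfs[-2]) for an assumed slow IMF.
```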
Design of an efficient music-speech discriminator.
Tardón, Lorenzo J; Sammartino, Simone; Barbancho, Isabel
2010-01-01
In this paper, the problem of the design of a simple and efficient music-speech discriminator for large audio data sets, in which advanced music playing techniques are taught and voice and music are intrinsically interleaved, is addressed. In the process, a number of features used in speech-music discrimination are defined and evaluated over the available data set. Specifically, the data set contains pieces of classical music played with different and unspecified instruments (or even lyrics) and the voice of a teacher (a top music performer) or even the overlapped voice of the translator and other persons. After an initial test of the performance of the features implemented, a selection process is started, which takes into account the type of classifier selected beforehand, to achieve good discrimination performance and computational efficiency, as shown in the experiments. The discrimination application has been defined and tested on a large data set supplied by Fundacion Albeniz, containing a large variety of classical music pieces played with different instruments, together with comments and speeches by famous performers.
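Typical frame-level features of the kind evaluated in such discriminators can be computed in a few lines; the sketch below (using librosa) is illustrative and does not reproduce the paper's selected feature set or classifier.

```python
# Illustrative frame-level speech/music features; the paper's selected
# feature set and classifier are not reproduced. Variability of the
# zero-crossing rate and energy tends to be higher for speech than music.
import numpy as np
import librosa

def clip_features(path):
    y, sr = librosa.load(path, sr=None)
    zcr = librosa.feature.zero_crossing_rate(y)[0]
    rms = librosa.feature.rms(y=y)[0]
    return np.array([zcr.std(), rms.std() / (rms.mean() + 1e-9)])

# The resulting 2-D feature vectors could feed any lightweight classifier
# (e.g., sklearn's LogisticRegression) trained on labeled clips.
```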
Prototype app for voice therapy: a peer review.
Lavaissiéri, Paula; Melo, Paulo Eduardo Damasceno
2017-03-09
Voice speech therapy promotes changes in patients' voice-related habits and rehabilitation. Speech-language therapists use a host of materials, ranging from pictures to electronic resources and computer tools, as aids in this process. Mobile technology is attractive, interactive, and a nearly constant feature in the daily routine of a large part of the population, and has a growing application in healthcare. The aims of this study were to develop a prototype application for voice therapy, submit it to peer assessment, and improve the initial prototype based on these assessments. A prototype of the Q-Voz application was developed based on Apple's Human Interface Guidelines. The prototype was analyzed by seven speech therapists who work in the voice area, and improvements to the product were made based on these assessments. All features of the application were considered satisfactory by most evaluators. All evaluators found the application very useful; evaluators reported that patients would find it easier to make changes in voice behavior with the application than without it; and the evaluators stated that they would use this application with their patients with dysphonia undergoing rehabilitation and that the application offers useful tools for voice self-management. Based on the suggestions provided, six improvements were made to the prototype. In conclusion, the prototype Q-Voz application was developed, evaluated by seven judges, and subsequently improved. All evaluators stated they would use the application with their patients undergoing rehabilitation, indicating that the Q-Voz application for mobile devices can be considered an auxiliary tool for voice speech therapy.
What Makes a Caseload (Un)Manageable? School-Based Speech-Language Pathologists Speak
ERIC Educational Resources Information Center
Katz, Lauren A.; Maag, Abby; Fallon, Karen A.; Blenkarn, Katie; Smith, Megan K.
2010-01-01
Purpose: Large caseload sizes and a shortage of speech-language pathologists (SLPs) are ongoing concerns in the field of speech and language. This study was conducted to identify current mean caseload size for school-based SLPs, a threshold at which caseload size begins to be perceived as unmanageable, and variables contributing to school-based…
ERIC Educational Resources Information Center
Bailey, Dallin J.; Dromey, Christopher
2015-01-01
Purpose: The purpose of this study was to examine divided attention over a large age range by looking at the effects of 3 nonspeech tasks on concurrent speech motor performance. The nonspeech tasks were designed to facilitate measurement of bidirectional interference, allowing examination of their sensitivity to speech activity. A cross-sectional…
Contextual Modulation of Reading Rate for Direct versus Indirect Speech Quotations
ERIC Educational Resources Information Center
Yao, Bo; Scheepers, Christoph
2011-01-01
In human communication, direct speech (e.g., "Mary said: 'I'm hungry'") is perceived to be more vivid than indirect speech (e.g., "Mary said [that] she was hungry"). However, the processing consequences of this distinction are largely unclear. In two experiments, participants were asked to either orally (Experiment 1) or silently (Experiment 2,…
Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo
2015-05-01
The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although a distributed brain network is presumed to be engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual or unimodal speech perception accompanied by irrelevant noise in the other modality. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework based on hierarchical clustering (single linkage distance) to analyze the connected components of the functional network during speech perception using functional magnetic resonance imaging. Bimodal (audiovisual speech cue) or unimodal speech cues with irrelevant noise in the other modality (auditory white noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception with congruent audiovisual stimuli, tighter coupling of a left anterior temporal gyrus-anterior insula component and of right premotor-visual components was observed than in the auditory-only or visual-only speech cue conditions, respectively. Interestingly, visual speech perceived under white noise showed tight negative coupling among the left inferior frontal region, right anterior cingulate, left anterior insula, and bilateral visual regions, including right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected, reflecting efficient or effortful processing during natural audiovisual integration or lip-reading, respectively, in speech perception.
Determining the energetic and informational components of speech-on-speech masking
Kidd, Gerald; Mason, Christine R.; Swaminathan, Jayaganesh; Roverud, Elin; Clayton, Kameron K.; Best, Virginia
2016-01-01
Identification of target speech was studied under masked conditions consisting of two or four independent speech maskers. In the reference conditions, the maskers were colocated with the target, the masker talkers were the same sex as the target, and the masker speech was intelligible. The comparison conditions, intended to provide release from masking, included different-sex target and masker talkers, time-reversal of the masker speech, and spatial separation of the maskers from the target. Significant release from masking was found for all comparison conditions. To determine whether these reductions in masking could be attributed to differences in energetic masking, ideal time-frequency segregation (ITFS) processing was applied so that the time-frequency units where the masker energy dominated the target energy were removed. The remaining target-dominated “glimpses” were reassembled as the stimulus. Speech reception thresholds measured using these resynthesized ITFS-processed stimuli were the same for the reference and comparison conditions, supporting the conclusion that the amount of energetic masking across conditions was the same. These results indicated that the large release from masking found under all comparison conditions was due primarily to a reduction in informational masking. Furthermore, the large individual differences observed were generally correlated across the three masking-release conditions. PMID:27475139
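The ITFS manipulation itself is straightforward to sketch: with separate access to the target and masker signals, an ideal binary mask keeps only the time-frequency units where the target dominates the masker, and the masked units are discarded before resynthesis. The 0 dB local criterion and STFT settings below are assumed conventions, not the study's exact parameters.

```python
# Sketch of ideal time-frequency segregation (ideal binary masking).
import numpy as np
from scipy.signal import stft, istft

def itfs(target, masker, fs, lc_db=0.0, nperseg=512):
    _, _, T = stft(target, fs, nperseg=nperseg)
    _, _, M = stft(masker, fs, nperseg=nperseg)
    # Keep units where the target-to-masker ratio exceeds the local criterion.
    mask = (20 * np.log10(np.abs(T) + 1e-12)
            - 20 * np.log10(np.abs(M) + 1e-12)) > lc_db
    # STFT is linear, so T + M is the mixture; mask it and resynthesize.
    _, glimpsed = istft((T + M) * mask, fs, nperseg=nperseg)
    return glimpsed
```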
Values most extolled in Nobel Peace Prize speeches.
Kinnier, Richard T; Kernes, Jerry L; Hayman, Jessie Wetherbe; Flynn, Patricia N; Simon, Elia; Kilian, Laura A
2007-11-01
The authors randomly selected 50 Nobel Peace Prize speeches and content analyzed them to determine which values the speakers extolled most frequently. The 10 most frequently mentioned values were peace (in 100% of the speeches), hope (92%), security (86%), justice (85%), responsibility (81%), liberty (80%), tolerance (79%), altruism (75%), God (49%), and truth (38%). The authors discuss the interplay of these values in the modern world and implications regarding the search for universal moral values.
SAM: speech-aware applications in medicine to support structured data entry.
Wormek, A. K.; Ingenerf, J.; Orthner, H. F.
1997-01-01
In the last two years, improvement in speech recognition technology has directed the medical community's interest to porting and using such innovations in clinical systems. The acceptance of speech recognition systems in clinical domains increases with recognition speed, large medical vocabulary, high accuracy, continuous speech recognition, and speaker independence. Although some commercial speech engines approach these requirements, the greatest benefit can be achieved in adapting a speech recognizer to a specific medical application. The goals of our work are first, to develop a speech-aware core component which is able to establish connections to speech recognition engines of different vendors. This is realized in SAM. Second, with applications based on SAM we want to support the physician in his/her routine clinical care activities. Within the STAMP project (STAndardized Multimedia report generator in Pathology), we extend SAM by combining a structured data entry approach with speech recognition technology. Another speech-aware application in the field of Diabetes care is connected to a terminology server. The server delivers a controlled vocabulary which can be used for speech recognition. PMID:9357730
ERIC Educational Resources Information Center
Leahy, Margaret M.; Walsh, Irene P.
2008-01-01
The importance of learning about and applying clinical discourse analysis to enhance talk-in-interaction in the speech-language pathology clinic is discussed. The benefits of analyzing clinical discourse to explicate therapy dynamics are described.
Multimodal Speech Capture System for Speech Rehabilitation and Learning.
Sebkhi, Nordine; Desai, Dhyey; Islam, Mohammad; Lu, Jun; Wilson, Kimberly; Ghovanloo, Maysam
2017-11-01
Speech-language pathologists (SLPs) are trained to correct the articulation of people diagnosed with motor speech disorders by analyzing articulators' motion and assessing speech outcomes while patients speak. To assist SLPs in this task, we present the multimodal speech capture system (MSCS), which records and displays the kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. The collected speech modalities, tongue motion, lip gestures, and voice are visualized not only in real time to provide patients with instant feedback but also offline to allow SLPs to perform post-analysis of articulators' motion, particularly the tongue, with its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components and demonstrate its basic visualization capabilities with a healthy individual repeating the words "Hello World." A proof-of-concept prototype has been successfully developed for this purpose and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak naturally. Pattern matching algorithms applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods, which are mostly subjective and may vary from one SLP to another.
Inner Speech: Development, Cognitive Functions, Phenomenology, and Neurobiology
2015-01-01
Inner speech—also known as covert speech or verbal thinking—has been implicated in theories of cognitive development, speech monitoring, executive function, and psychopathology. Despite a growing body of knowledge on its phenomenology, development, and function, approaches to the scientific study of inner speech have remained diffuse and largely unintegrated. This review examines prominent theoretical approaches to inner speech and methodological challenges in its study, before reviewing current evidence on inner speech in children and adults from both typical and atypical populations. We conclude by considering prospects for an integrated cognitive science of inner speech, and present a multicomponent model of the phenomenon informed by developmental, cognitive, and psycholinguistic considerations. Despite its variability among individuals and across the life span, inner speech appears to perform significant functions in human cognition, which in some cases reflect its developmental origins and its sharing of resources with other cognitive processes. PMID:26011789
Developing a weighted measure of speech sound accuracy.
Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J
2011-02-01
To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.
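The core idea of differential weighting can be illustrated with a toy score: each target sound earns credit according to the closeness of the production. The error categories and weights below are hypothetical placeholders; the published WSSA defines its own weighting scheme.

```python
# Toy weighted accuracy score in the spirit of the WSSA (weights hypothetical).
WEIGHTS = {
    "correct": 1.0,
    "distortion": 0.75,     # near miss: partial credit
    "substitution": 0.5,
    "omission": 0.0,
}

def weighted_accuracy(error_labels):
    """error_labels: one label per target sound in the transcribed sample."""
    return sum(WEIGHTS[e] for e in error_labels) / len(error_labels)

# weighted_accuracy(["correct"] * 6 + ["distortion"] * 2
#                   + ["substitution", "omission"])  # -> 0.8
```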
Automatic Speech Recognition from Neural Signals: A Focused Review.
Herff, Christian; Schultz, Tanja
2016-01-01
Speech interfaces have become widely accepted and are nowadays integrated in various real-life applications and devices. They have become a part of our daily life. However, speech interfaces presume the ability to produce intelligible speech, which might be impossible because of loud environments, the need not to disturb bystanders, or an inability to produce speech (e.g., in patients suffering from locked-in syndrome). For these reasons it would be highly desirable not to speak but simply to imagine saying words or sentences. Interfaces based on imagined speech would enable fast and natural communication without the need for audible speech and would give a voice to otherwise mute people. This focused review analyzes the potential of different brain imaging techniques to recognize speech from neural signals by applying Automatic Speech Recognition technology. We argue that modalities based on metabolic processes, such as functional Near Infrared Spectroscopy and functional Magnetic Resonance Imaging, are less suited for Automatic Speech Recognition from neural signals because of their low temporal resolution but are very useful for investigating the underlying neural mechanisms involved in speech processes. In contrast, electrophysiologic activity is fast enough to capture speech processes and is therefore better suited for ASR. Our experimental results indicate the potential of these signals for speech recognition from neural data, with a focus on invasively measured brain activity (electrocorticography). As a first example of Automatic Speech Recognition techniques used with neural signals, we discuss the Brain-to-text system.
Zeng, Yin-Ting; Hwu, Wuh-Liang; Torng, Pao-Chuan; Lee, Ni-Chung; Shieh, Jeng-Yi; Lu, Lu; Chien, Yin-Hsiu
2017-05-01
Patients with infantile-onset Pompe disease (IOPD) can be treated with recombinant human acid alpha-glucosidase (rhGAA) replacement beginning at birth, with excellent survival rates, but they still commonly present with speech disorders. This study investigated the progress of speech disorders in these early-treated patients and ascertained the relationship with treatments. Speech disorders, including hypernasal resonance, articulation disorders, and reduced speech intelligibility, were scored by speech-language pathologists using auditory perception in seven early-treated patients over a period of 6 years. Statistical analysis of the first and last evaluations of the patients was performed with the Wilcoxon signed-rank test. A total of 29 speech samples were analyzed. All the patients suffered from hypernasality, articulation disorder, and impaired speech intelligibility at the age of 3 years. The conditions were stable, and 2 patients developed normal or near-normal speech during follow-up. Speech therapy and a high dose of rhGAA appeared to improve articulation in 6 of the 7 patients (86%, p = 0.028) by decreasing the omission of consonants, which consequently increased speech intelligibility (p = 0.041). The severity of hypernasality was greatly reduced in only 2 patients (29%, p = 0.131). Speech disorders were common even in early and successfully treated patients with IOPD; however, aggressive speech therapy and high-dose rhGAA could improve these disorders.
Quality of communication in interpreted versus noninterpreted PICU family meetings.
Van Cleave, Alisa C; Roosen-Runge, Megan U; Miller, Alison B; Milner, Lauren C; Karkazis, Katrina A; Magnus, David C
2014-06-01
To describe the quality of physician-family communication during interpreted and noninterpreted family meetings in the PICU. Prospective, exploratory, descriptive observational study of noninterpreted English family meetings and interpreted Spanish family meetings in the pediatric intensive care setting. A single, university-based, tertiary children's hospital. Participants in PICU family meetings, including medical staff, family members, ancillary staff, and interpreters. Thirty family meetings (21 English and nine Spanish) were audio-recorded, transcribed, de-identified, and analyzed using the qualitative method of directed content analysis. Quality of communication was analyzed in three ways: 1) presence of elements of shared decision-making, 2) balance between physician and family speech, and 3) complexity of physician speech. Of the 11 elements of shared decision-making, only four occurred in more than half of English meetings, and only three occurred in more than half of Spanish meetings. Physicians spoke for a mean of 20.7 minutes, while families spoke for 9.3 minutes during English meetings. During Spanish meetings, physicians spoke for a mean of 14.9 minutes versus just 3.7 minutes of family speech. Physician speech complexity received a mean grade level score of 8.2 in English meetings compared to 7.2 in Spanish meetings. The quality of physician-family communication during PICU family meetings is poor overall. Interpreted meetings had poorer communication quality as evidenced by fewer elements of shared decision-making and greater imbalance between physician and family speech. However, physician speech may be less complex during interpreted meetings. Our data suggest that physicians can improve communication in both interpreted and noninterpreted family meetings by increasing the use of elements of shared decision-making, improving the balance between physician and family speech, and decreasing the complexity of physician speech.
Multi-time resolution analysis of speech: evidence from psychophysics
Chait, Maria; Greenberg, Steven; Arai, Takayuki; Simon, Jonathan Z.; Poeppel, David
2015-01-01
How speech signals are analyzed and represented remains a foundational challenge both for cognitive science and neuroscience. A growing body of research, employing various behavioral and neurobiological experimental techniques, now points to the perceptual relevance of both phoneme-sized (10–40 Hz modulation frequency) and syllable-sized (2–10 Hz modulation frequency) units in speech processing. However, it is not clear how information associated with such different time scales interacts in a manner relevant for speech perception. We report behavioral experiments on speech intelligibility employing a stimulus that allows us to investigate how distinct temporal modulations in speech are treated separately and whether they are combined. We created sentences in which the slow (~4 Hz; S_low) and rapid (~33 Hz; S_high) modulations—corresponding to ~250 and ~30 ms, the average duration of syllables and certain phonetic properties, respectively—were selectively extracted. Although S_low and S_high have low intelligibility when presented separately, dichotic presentation of S_high with S_low results in supra-additive performance, suggesting a synergistic relationship between low- and high-modulation frequencies. A second experiment desynchronized presentation of the S_low and S_high signals. Desynchronizing signals relative to one another had no impact on intelligibility when delays were less than ~45 ms. Longer delays resulted in a steep intelligibility decline, providing further evidence of integration or binding of information within restricted temporal windows. Our data suggest that human speech perception uses multi-time resolution processing. Signals are concurrently analyzed on at least two separate time scales, the intermediate representations of these analyses are integrated, and the resulting bound percept has significant consequences for speech intelligibility—a view compatible with recent insights from neuroscience implicating multi-timescale auditory processing. PMID:26136650
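The band-splitting of temporal modulations can be approximated by filtering the amplitude envelope into subbands, as sketched below; the band edges follow the modulation rates quoted above, and the study's actual stimulus resynthesis is not reproduced.

```python
# Sketch: band-limited amplitude-modulation components of a speech signal.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def modulation_component(x, fs, lo_hz, hi_hz):
    """Envelope of x, band-pass filtered to one modulation band."""
    env = np.abs(hilbert(x))
    b, a = butter(2, [lo_hz / (fs / 2.0), hi_hz / (fs / 2.0)], btype="band")
    return filtfilt(b, a, env)

# s_low  = modulation_component(sentence, fs, 2.0, 10.0)   # syllabic, ~4 Hz
# s_high = modulation_component(sentence, fs, 10.0, 40.0)  # phonemic, ~33 Hz
```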
Campus Free Speech Presents Both Legal and PR Challenges for Colleges
ERIC Educational Resources Information Center
Nguyen, AiVi; Dragga, Anthony
2016-01-01
Free speech is fast becoming a hot-button issue at colleges across the country, with campus protests often mirroring those of the public-at-large on issues such as racism or tackling institution-specific matters such as college governance. On the surface, the issue of campus free speech may seem like a purely legal concern, yet in reality,…
ERIC Educational Resources Information Center
Ferati, Mexhid Adem
2012-01-01
To access interactive systems, blind and visually impaired users can leverage their auditory senses by using non-speech sounds. The current structure of non-speech sounds, however, is geared toward conveying user interface operations (e.g., opening a file) rather than large theme-based information (e.g., a history passage) and, thus, is ill-suited…
When speaker identity is unavoidable: Neural processing of speaker identity cues in natural speech.
Tuninetti, Alba; Chládková, Kateřina; Peter, Varghese; Schiller, Niels O; Escudero, Paola
2017-11-01
Speech sound acoustic properties vary greatly across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word. Here we tested whether the process of normalizing speaker and accent differences, facilitating the recognition of linguistic information, is found at the level of neural processing, and whether it is modulated by the listeners' native language. In a multi-deviant oddball paradigm, native and nonnative speakers of Dutch were exposed to naturally-produced Dutch vowels varying in speaker, sex, accent, and phoneme identity. Unexpectedly, the analysis of mismatch negativity (MMN) amplitudes elicited by each type of change shows a large degree of early perceptual sensitivity to non-linguistic cues. This finding on the perception of naturally-produced stimuli contrasts with previous studies examining the perception of synthetic stimuli, wherein adult listeners automatically disregard acoustic cues to speaker identity. The present finding bears relevance to speech normalization theories, suggesting that at an unattended level of processing, listeners are indeed sensitive to changes in fundamental frequency in natural speech tokens.
Inner Speech's Relationship With Overt Speech in Poststroke Aphasia.
Stark, Brielle C; Geva, Sharon; Warburton, Elizabeth A
2017-09-18
Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8-111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004). The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95, p < .01) and with mean length of utterance produced during a written picture description (r = .96, p < .01). Correlations between inner speech and language and cognition factors were not significant for the group with relatively good overt speech. As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to reliance on inner speech due to perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile. https://doi.org/10.23641/asha.5303542.
Application of advanced speech technology in manned penetration bombers
NASA Astrophysics Data System (ADS)
North, R.; Lea, W.
1982-03-01
This report documents research on the potential use of speech technology in a manned penetration bomber aircraft (B-52/G and H). The objectives of the project were to analyze the pilot/copilot crewstation tasks over a three-hour-and-forty-minute mission and determine the tasks that would benefit the most from conversion to speech recognition/generation, determine the technological feasibility of each of the identified tasks, and prioritize these tasks based on these criteria. Secondary objectives of the program were to enunciate research strategies for the application of speech technologies in airborne environments and to develop guidelines for briefing user commands on the potential of using speech technologies in the cockpit. The results of this study indicated that for the B-52 crewmember, speech recognition would be most beneficial for retrieving chart and procedural data contained in the flight manuals. The technological feasibility analysis indicated that the checklist and procedural retrieval tasks would be highly feasible for a speech recognition system.
Communication in a noisy environment: Perception of one's own voice and speech enhancement
NASA Astrophysics Data System (ADS)
Le Cocq, Cecile
Workers in noisy industrial environments are often confronted with communication problems. Many workers complain about not being able to communicate easily with their coworkers when they wear hearing protectors. In consequence, they tend to remove their protectors, which exposes them to the risk of hearing loss. In fact this communication problem is a double one: first, the hearing protectors modify one's own voice perception; second, they interfere with understanding speech from others. This double problem is examined in this thesis. When wearing hearing protectors, the modification of one's own voice perception is partly due to the occlusion effect which is produced when an earplug is inserted in the ear canal. This occlusion effect has two main consequences: first, the physiological noises in low frequencies are better perceived; second, the perception of one's own voice is modified. In order to have a better understanding of this phenomenon, the literature results are analyzed systematically, and a new method to quantify the occlusion effect is developed. Instead of stimulating the skull with a bone vibrator or asking the subject to speak as is usually done in the literature, it has been decided to excite the buccal cavity with an acoustic wave. The experiment has been designed in such a way that the acoustic wave which excites the buccal cavity does not excite the external ear or the rest of the body directly. The measurement of the hearing threshold in the open and occluded ear has been used to quantify the subjective occlusion effect for an acoustic wave in the buccal cavity. These experimental results as well as those reported in the literature have led to a better understanding of the occlusion effect and an evaluation of the role of each internal path from the acoustic source to the inner ear. The intelligibility of speech from others is degraded by both the high sound levels of noisy industrial environments and the speech signal attenuation due to hearing protectors. A possible solution to this problem is to denoise the speech signal and transmit it under the hearing protector. Many denoising techniques are available and are often used for denoising speech in telecommunications. In the framework of this thesis, denoising by wavelet thresholding is considered. A first study on "classical" wavelet denoising techniques is conducted in order to evaluate their performance in noisy industrial environments. The tested speech signals are altered by industrial noises across a wide range of signal-to-noise ratios. The denoised speech signals are evaluated with four criteria. A large database is obtained and analyzed with a selection algorithm which has been designed for this purpose. This first study has led to the identification of the influence of the different parameters of the wavelet denoising method on its quality and has identified the "classical" method which gives the best performance in terms of denoising quality. This first study has also generated ideas for designing a new thresholding rule suitable for speech wavelet denoising in a noisy industrial environment. In a second study, this new thresholding rule is presented and evaluated. It performs better than the "classical" method found in the first study when the signal-to-noise ratio of the speech signal is between -10 dB and 15 dB.
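The "classical" baseline that such a study starts from can be sketched with PyWavelets: decompose, soft-threshold the detail coefficients with the universal threshold, and reconstruct. The wavelet choice and decomposition depth are assumptions; the thesis's improved thresholding rule is not reproduced here.

```python
# Sketch of classical speech denoising by wavelet soft thresholding.
import numpy as np
import pywt  # pip install PyWavelets

def wavelet_denoise(noisy, wavelet="db8", level=5):
    coeffs = pywt.wavedec(noisy, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients (MAD rule).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(noisy)))  # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(noisy)]
```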
Iuzzini-Seigel, Jenya; Hogan, Tiffany P; Green, Jordan R
2017-05-24
The current research sought to determine (a) if speech inconsistency is a core feature of childhood apraxia of speech (CAS) or if it is driven by the comorbid language impairment that affects a large subset of children with CAS and (b) if speech inconsistency is a sensitive and specific diagnostic marker that can differentiate between CAS and speech delay. Participants included 48 children ranging in age from 4;7 to 17;8 (years;months) with CAS (n = 10), CAS + language impairment (n = 10), speech delay (n = 10), language impairment (n = 9), or typical development (n = 9). Speech inconsistency was assessed at phonemic and token-to-token levels using a variety of stimuli. Children with CAS and CAS + language impairment performed equivalently on all inconsistency assessments. Children with language impairment evidenced high levels of speech inconsistency on the phrase "buy Bobby a puppy." Token-to-token inconsistency of monosyllabic words and the phrase "buy Bobby a puppy" was sensitive and specific in differentiating children with CAS and speech delay, whereas inconsistency calculated on other stimuli (e.g., multisyllabic words) was less efficacious in differentiating between these disorders. Speech inconsistency is a core feature of CAS and is efficacious in differentiating between children with CAS and speech delay; however, sensitivity and specificity are stimuli dependent.
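One common formulation of token-to-token inconsistency is the proportion of mismatching transcription pairs across repeated productions of the same item; the sketch below states that formulation as an assumption rather than the study's exact computation.

```python
# Sketch of a token-to-token inconsistency measure over repeated productions.
from itertools import combinations

def token_inconsistency(productions):
    """productions: phonetic transcriptions of repeated attempts at one item."""
    pairs = list(combinations(productions, 2))
    return sum(a != b for a, b in pairs) / len(pairs)

# token_inconsistency(["papi", "babi", "papi"])  # -> 2/3 (two mismatching pairs)
```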
Cortical Integration of Audio-Visual Information
Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.
2013-01-01
We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442
Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.
Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W
2017-06-22
Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.
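The spatiotemporal index used here for validation has a standard formulation in the kinematics literature: each movement record is linearly time-normalized and amplitude-normalized, and the standard deviations across repetitions are summed at fixed relative time points. A minimal sketch, with the conventional 50-point resampling assumed:

```python
# Sketch of the spatiotemporal index (STI) over repeated kinematic records.
import numpy as np

def spatiotemporal_index(trajectories, n_points=50):
    """trajectories: list of 1-D kinematic records (e.g., lip aperture)."""
    normed = []
    for traj in trajectories:
        t = np.linspace(0.0, 1.0, len(traj))
        resampled = np.interp(np.linspace(0.0, 1.0, n_points), t, traj)
        normed.append((resampled - resampled.mean()) / resampled.std())
    # Sum of across-repetition standard deviations at each relative time point.
    return float(np.sum(np.std(np.vstack(normed), axis=0)))
```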
Stanton's "The Solitude of Self": A Rationale for Feminism.
ERIC Educational Resources Information Center
Campbell, Karlyn Kohrs
1980-01-01
Examines Elizabeth Cady Stanton's speech, "The Solitude of Self," as a philosophical statement of the principles underlying the nineteenth century struggle for woman's rights in the United States. Analyzes the lyric structure and tone of the speech and its tragic, existential rationale for feminism. (JMF)
The Sad Case of the Columbine Tiles.
ERIC Educational Resources Information Center
Dowling-Sendor, Benjamin
2003-01-01
Analyzes free-speech challenge to school district's guidelines for acceptable expressions on ceramic tiles painted by Columbine High School students to express their feelings about the massacre. Tenth Circuit found that tile painting constituted school-sponsored speech and thus district had the constitutional authority under "Hazelwood School…
Cortical activity patterns predict robust speech discrimination ability in noise
Shetake, Jai A.; Wolf, Jordan T.; Cheung, Ryan J.; Engineer, Crystal T.; Ram, Satyananda K.; Kilgard, Michael P.
2012-01-01
The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem. PMID:22098331
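The classifier's core operation, matching a single-trial spatiotemporal pattern against trial-averaged templates, can be sketched as below. The Euclidean distance and data layout are assumptions, and the onset-free search over candidate alignment times described in the study is omitted for brevity.

```python
# Sketch of a nearest-template classifier for neural spatiotemporal patterns.
import numpy as np

def classify_trial(trial, templates):
    """trial: (n_sites, n_bins) response; templates: dict sound -> same shape."""
    dists = {sound: np.linalg.norm(trial - tmpl)
             for sound, tmpl in templates.items()}
    return min(dists, key=dists.get)  # label of the closest average pattern

# Hypothetical usage:
# templates = {"dad": mean_resp_dad, "sad": mean_resp_sad}
# guess = classify_trial(single_trial_response, templates)
```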
Somanath, Keerthan; Mau, Ted
2016-11-01
(1) To develop an automated algorithm to analyze the electroglottographic (EGG) signal in continuous dysphonic speech, and (2) to identify EGG waveform parameters that correlate with the auditory-perceptual quality of strain in the speech of patients with adductor spasmodic dysphonia (ADSD). Software development with application in a prospective controlled study. EGG was recorded from 12 normal speakers and 12 subjects with ADSD reading excerpts from the Rainbow Passage. Data were processed by a new algorithm developed with the specific goal of analyzing continuous dysphonic speech. The contact quotient, pulse width, a new parameter termed peak skew, and various contact closing slope quotient and contact opening slope quotient measures were extracted. EGG parameters were compared between normal and ADSD speech. Within the ADSD group, intra-subject comparison was also made between perceptually strained syllables and unstrained syllables. The opening slope quotient SO7525 distinguished strained syllables from unstrained syllables in continuous speech within individual subjects with ADSD. The standard deviations, but not the means, of contact quotient, EGGW50, peak skew, and SO7525 differed between normal and ADSD speakers. The strain-stress pattern in continuous speech can be visualized as color gradients based on the variation of EGG parameter values. EGG parameters may provide a within-subject measure of vocal strain and serve as a marker for treatment response. The addition of EGG to multidimensional assessment may lead to improved characterization of the voice disturbance in ADSD.
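A criterion-level contact quotient of the kind extracted here can be computed per glottal cycle as in this sketch; the 25% criterion level, the polarity convention (larger values meaning greater vocal fold contact), and the assumption that cycles have already been segmented are all illustrative.

```python
# Sketch of a per-cycle contact quotient (CQ) from a segmented EGG signal.
import numpy as np

def contact_quotient(egg_cycle, criterion=0.25):
    """egg_cycle: one glottal cycle; larger values = greater contact."""
    x = egg_cycle - egg_cycle.min()
    level = criterion * x.max()          # criterion-level threshold
    return float(np.mean(x > level))     # fraction of cycle above threshold

# cq_track = [contact_quotient(c) for c in detected_cycles]  # cycles hypothetical
```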
A systematic review of treatment intensity in speech disorders.
Kaipa, Ramesh; Peterson, Abigail Marie
2016-12-01
Treatment intensity (sometimes referred to as "practice amount") has been well investigated in the learning of non-speech tasks, but its role in treating speech disorders has not been analysed as extensively. This study reviewed the literature regarding treatment intensity in speech disorders. A systematic search was conducted in four databases using appropriate search terms. Seven articles from a total of 580 met the inclusion criteria. The speech disorders investigated included speech sound disorders, dysarthria, acquired apraxia of speech and childhood apraxia of speech. All seven studies were evaluated for their methodological quality, research phase and evidence level. The evidence level of the reviewed studies ranged from moderate to strong. With regard to research phase, only one study was considered to be phase III research, which corresponds to the controlled trial phase. The remaining studies were considered to be phase II research, which corresponds to the phase in which the magnitude of the therapeutic effect is assessed. Results suggested that higher treatment intensity was preferable to lower treatment intensity for specific treatment technique(s) when treating childhood apraxia of speech and speech sound (phonological) disorders. Future research should incorporate randomised controlled designs to establish the optimal treatment intensity specific to each of the speech disorders.
Recovering With Acquired Apraxia of Speech: The First 2 Years.
Haley, Katarina L; Shafer, Jennifer N; Harmon, Tyson G; Jacks, Adam
2016-12-01
This study was intended to document speech recovery for 1 person with acquired apraxia of speech quantitatively and on the basis of her lived experience. The second author sustained a traumatic brain injury that resulted in acquired apraxia of speech. Over a 2-year period, she documented her recovery through 22 video-recorded monologues. We analyzed these monologues using a combination of auditory perceptual, acoustic, and qualitative methods. Recovery was evident for all quantitative variables examined. For speech sound production, the recovery was most prominent during the first 3 months, but slower improvement was evident for many months. Measures of speaking rate, fluency, and prosody changed more gradually throughout the entire period. A qualitative analysis of topics addressed in the monologues was consistent with the quantitative speech recovery and indicated a subjective dynamic relationship between accuracy and rate, an observation that several factors made speech sound production variable, and a persisting need for cognitive effort while speaking. Speech features improved over an extended time, but the recovery trajectories differed, indicating dynamic reorganization of the underlying speech production system. The relationship among speech dimensions should be examined in other cases and in population samples. The combination of quantitative and qualitative analysis methods offers advantages for understanding clinically relevant aspects of recovery.
Ertmer, David J.; Jung, Jongmin
2012-01-01
Background: Evidence of auditory-guided speech development can be heard as the prelinguistic vocalizations of young cochlear implant recipients become increasingly complex, phonetically diverse, and speech-like. In research settings, these changes are most often documented by collecting and analyzing speech samples. Sampling, however, may be too time-consuming and impractical for widespread use in clinical settings. The Conditioned Assessment of Speech Production (CASP; Ertmer & Stoel-Gammon, 2008) is an easily administered and time-efficient alternative to speech sample analysis. The current investigation examined the concurrent validity of the CASP and data obtained from speech samples recorded at the same intervals. Methods: Nineteen deaf children who received CIs before their third birthdays participated in the study. Speech samples and CASP scores were gathered at 6, 12, 18, and 24 months post-activation. Correlation analyses were conducted to assess the concurrent validity of CASP scores and data from samples. Results: CASP scores showed strong concurrent validity with scores from speech samples gathered across all recording sessions (6-24 months). Conclusions: The CASP was found to be a valid, reliable, and time-efficient tool for assessing progress in vocal development during young CI recipients' first 2 years of device experience. PMID:22628109
Speech-Enabled Interfaces for Travel Information Systems with Large Grammars
NASA Astrophysics Data System (ADS)
Zhao, Baoli; Allen, Tony; Bargiela, Andrzej
This paper introduces three grammar-segmentation methods capable of handling the large grammar issues associated with producing a real-time speech-enabled VXML bus travel application for London. Large grammars tend to produce relatively slow recognition interfaces and this work shows how this limitation can be successfully addressed. Comparative experimental results show that the novel last-word recognition based grammar segmentation method described here achieves an optimal balance between recognition rate, speed of processing and naturalness of interaction.
Integrated Speech and Language Technology for Intelligence, Surveillance, and Reconnaissance (ISR)
2017-07-01
applying submodularity techniques to address computing challenges posed by large datasets in speech and language processing. MT and speech tools were...aforementioned research-oriented activities, the IT system administration team provided necessary support to laboratory computing and network operations...operations of SCREAM Lab computer systems and networks. Other miscellaneous activities in relation to Task Order 29 are presented in an additional fourth
Development and Perceptual Evaluation of Amplitude-Based F0 Control in Electrolarynx Speech
ERIC Educational Resources Information Center
Saikachi, Yoko; Stevens, Kenneth N.; Hillman, Robert E.
2009-01-01
Purpose: Current electrolarynx (EL) devices produce a mechanical speech quality that has been largely attributed to the lack of natural fundamental frequency (F0) variation. In order to improve the quality of EL speech, in the present study the authors aimed to develop and evaluate an automatic F0 control scheme, in which F0 was modulated based on…
ERIC Educational Resources Information Center
McConnellogue, Sheila
2011-01-01
There is a large population of children with speech, language and communication needs who have additional special educational needs (SEN). Whilst professional collaboration between education and health professionals is recommended to ensure an integrated delivery of statutory services for this population of children, formal frameworks should be…
Bhuskute, Aditi; Skirko, Jonathan R; Roth, Christina; Bayoumi, Ahmed; Durbin-Johnson, Blythe; Tollefson, Travis T
2017-09-01
Patients with cleft palate and other causes of velopharyngeal insufficiency (VPI) suffer adverse effects on social interactions and communication. Measurement of these patient-reported outcomes is needed to help guide surgical and nonsurgical care. To further validate the VPI Effects on Life Outcomes (VELO) instrument, measure the change in quality of life (QOL) after speech surgery, and test the association of change in speech with change in QOL. Prospective descriptive cohort including children and young adults undergoing speech surgery for VPI in a tertiary academic center. Participants completed the validated VELO instrument before and after surgical treatment. The main outcome measures were preoperative and postoperative VELO scores and the perceptual speech assessment of speech intelligibility. The VELO scores are divided into subscale domains. Changes in VELO after surgery were analyzed using linear regression models. VELO scores were analyzed as a function of speech intelligibility adjusting for age and cleft type. The correlation between speech intelligibility rating and VELO scores was estimated using the polyserial correlation. Twenty-nine patients (13 males and 16 females) were included. Mean (SD) age was 7.9 (4.1) years (range, 4-20 years). Pharyngeal flap was used in 14 (48%) cases, Furlow palatoplasty in 12 (41%), and sphincter pharyngoplasty in 1 (3%). The mean (SD) preoperative speech intelligibility rating was 1.71 (1.08), which decreased postoperatively to 0.79 (0.93) in the 24 patients who completed the protocol (P < .01). The VELO scores improved after surgery (P < .001), as did most subscale scores. Caregiver impact did not change after surgery (P = .36). Speech intelligibility was correlated with preoperative and postoperative total VELO scores (P < .01), with the preoperative subscale domains of situational difficulty (VELO-SiD, P = .005) and perception by others (VELO-PO, P = .05), and with the postoperative subscale domains (VELO-SiD [P = .03], VELO-PO [P = .003]). Neither the change in VELO total score nor the change in subscale scores after surgery was correlated with change in speech intelligibility. Speech surgery improves VPI-specific quality of life. We confirmed validation in a population of untreated patients with VPI and included pharyngeal flap surgery, which had not previously been included in validation studies. The VELO instrument provides patient-specific outcomes, which allows a broader understanding of the social, emotional, and physical effects of VPI. Level of evidence: 2.
Siblings of Children with Speech Impairment: Cavalry on the Hill
ERIC Educational Resources Information Center
Barr, Jacqueline; McLeod, Sharynne; Daniel, Graham
2008-01-01
Purpose: The purpose of this article was to examine the experiences of siblings of children with speech impairment, an underresearched area of family-centered practice. Method: Using naturalistic inquiry, we interviewed 6 siblings and 15 significant others. Interview transcripts were analyzed for meaning statements, and meaning statements were…
A Review of Traditional Cloze Testing Methodology.
ERIC Educational Resources Information Center
Heerman, Charles E.
To analyze the validity of W. L. Taylor's cloze testing methodology, this paper first examines three areas contributing to Taylor's thinking: communications theory, the psychology of speech and communication, and the theory of dispositional mechanisms--or nonessential words--in speech. It then evaluates Taylor's research to determine how he…
The Peril of Ignoring Middle School Student Speech Rights
ERIC Educational Resources Information Center
Hassenpflug, Ann
2016-01-01
Analysis of two recent federal court cases in which principals violated student speech rights offers guidance to middle school administrators as they attempt to address student expression. Characteristics of a successful school from the Association for Middle Level Education provide a framework for analyzing these cases in order to prevent…
Analyzing Stimulus-Stimulus Pairing Effects on Preferences for Speech Sounds
ERIC Educational Resources Information Center
Petursdottir, Anna Ingeborg; Carp, Charlotte L.; Matthies, Derek W.; Esch, Barbara E.
2011-01-01
Several studies have demonstrated effects of stimulus-stimulus pairing (SSP) on children's vocalizations, but numerous treatment failures have also been reported. The present study attempted to isolate procedural variables related to failures of SSP to condition speech sounds as reinforcers. Three boys diagnosed with autism-spectrum disorders…
Speech-Language Pathologists' Attitudes and Involvement Regarding Language and Reading.
ERIC Educational Resources Information Center
Casby, Michael W.
1988-01-01
The study analyzed responses of 105 public school speech-language pathologists to a survey of perceptions of their knowledge, competencies, educational needs, and involvement with children regarding the relationship between oral language and reading disorders. Most reported they were not very involved with children with reading disorders though…
Spanish Native-Speaker Perception of Accentedness in Learner Speech
ERIC Educational Resources Information Center
Moranski, Kara
2012-01-01
Building upon current research in native-speaker (NS) perception of L2 learner phonology (Zielinski, 2008; Derwing & Munro, 2009), the present investigation analyzed multiple dimensions of NS speech perception in order to achieve a more complete understanding of the specific linguistic elements and attitudinal variables that contribute to…
ERIC Educational Resources Information Center
Arcaini, Giovanna
1988-01-01
The political speech is a unique kind of document that reflects the socio-historic climate of its time. Two historical events (Dunkirk and the Falkland Islands Crisis) and a principal protagonist from each are discussed, and the speeches of these two individuals are analyzed in order to find similarities and differences, and to find their basic…
New Developments in Understanding the Complexity of Human Speech Production.
Simonyan, Kristina; Ackermann, Hermann; Chang, Edward F; Greenlee, Jeremy D
2016-11-09
Speech is one of the most distinctive features of human communication. Our ability to articulate our thoughts by means of speech production depends critically on the integrity of the motor cortex. Although the motor cortex was long thought to be a low-order brain region, exciting work in recent years is overturning this notion. Here, we highlight some of the major experimental advances in speech motor control research and discuss the emerging findings about the complexity of speech motor cortical organization and its large-scale networks. This review summarizes the talks presented at a symposium at the Annual Meeting of the Society for Neuroscience; it does not represent a comprehensive review of contemporary literature in the broader field of speech motor control.
Simonyan, Kristina; Fuertinger, Stefan
2015-04-01
Speech production is one of the most complex human behaviors. Although brain activation during speaking has been well investigated, our understanding of the interactions between brain regions and neural networks remains limited. We combined seed-based interregional correlation analysis with graph theoretical analysis of functional MRI data during the resting state and sentence production in healthy subjects to investigate the interface and topology of functional networks originating from the key brain regions controlling speech, i.e., the laryngeal/orofacial motor cortex, inferior frontal and superior temporal gyri, supplementary motor area, cingulate cortex, putamen, and thalamus. During both resting and speaking, the interactions between these networks were bilaterally distributed and centered on the sensorimotor brain regions. However, speech production preferentially recruited the inferior parietal lobule (IPL) and cerebellum into the large-scale network, suggesting the importance of these regions in facilitating the transition from the resting state to speaking. Furthermore, the cerebellum (lobule VI) was the most prominent region showing functional influences on speech-network integration and segregation. Although the networks were bilaterally distributed, interregional connectivity during speaking was stronger in the left vs. right hemisphere, which may underlie a more homogeneous overlap between the examined networks in the left hemisphere. Among these, the laryngeal motor cortex (LMC) established a core network that fully overlapped with all other speech-related networks, determining the extent of network interactions. Our data demonstrate complex interactions of large-scale brain networks controlling speech production and point to the critical role of the LMC, IPL, and cerebellum in the formation of the speech production network.
Clear Speech Modifications in Children Aged 6-10
NASA Astrophysics Data System (ADS)
Taylor, Griffin Lijding
Modifications to speech production made by adult talkers in response to instructions to speak clearly have been well documented in the literature. Targeting adult populations has been motivated by efforts to improve speech production for the benefit of communication partners; however, many adults also have communication partners who are children. Surprisingly, there is limited literature on whether children can change their speech production when cued to speak clearly. Pettinato, Tuomainen, Granlund, and Hazan (2016) showed that by age 12, children exhibited enlarged vowel space areas and reduced articulation rate when prompted to speak clearly, but did not produce any other adult-like clear speech modifications in connected speech. Moreover, Syrett and Kawahara (2013) suggested that preschoolers produced longer and more intense vowels when prompted to speak clearly at the word level. These findings contrast with adult talkers, who show significant temporal and spectral differences between speech produced in control and clear speech conditions. The purpose of this study was therefore to analyze the changes in temporal and spectral characteristics of speech production that children aged 6-10 made in these experimental conditions. It is important to elucidate the clear speech profile of this population to better understand which adult-like clear speech modifications they make spontaneously and which modifications are still developing. Understanding these baselines will advance future studies that measure the impact of more explicit instructions and children's abilities to better accommodate their interlocutors, which is a critical component of children's pragmatic and speech-motor development.
Detection and Identification of Speech Sounds Using Cortical Activity Patterns
Centanni, T.M.; Sloan, A.M.; Reed, A.C.; Engineer, C.T.; Rennaker, R.; Kilgard, M.P.
2014-01-01
We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/sec), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech processing disorders. PMID:24286757
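As a conceptual illustration of how a classifier might both locate and identify a sound from ongoing cortical activity, the sketch below slides per-consonant spatiotemporal templates along a recording and reports the best-matching template and offset. This is not the authors' classifier; the data shapes, the Euclidean matching score, and all names are our own assumptions.

```python
import numpy as np

def classify_stream(activity, templates):
    """activity: (n_sites, n_bins); templates: dict name -> (n_sites, w) array."""
    best_name, best_score, best_t0 = None, -np.inf, 0
    for name, tpl in templates.items():
        w = tpl.shape[1]
        for t0 in range(activity.shape[1] - w + 1):
            window = activity[:, t0:t0 + w]
            score = -np.sum((window - tpl) ** 2)  # negative squared distance
            if score > best_score:
                best_name, best_score, best_t0 = name, score, t0
    return best_name, best_t0

rng = np.random.default_rng(4)
templates = {c: rng.normal(size=(16, 8)) for c in "dbgt"}  # fake mean responses
stream = 0.1 * rng.normal(size=(16, 100))                  # fake ongoing activity
stream[:, 42:50] += templates["g"]                         # embed a /g/ response
print(classify_stream(stream, templates))                  # -> ('g', 42)
```

Note that nothing in the matching step uses a known stimulus onset; the offset is recovered by the search itself, which is the property the abstract emphasizes.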
The predictive roles of neural oscillations in speech motor adaptability.
Sengupta, Ranit; Nasir, Sazzad M
2016-06-01
The human speech system exhibits remarkable flexibility by adapting to alterations in speaking environments. While it is believed that speech motor adaptation under altered sensory feedback involves rapid reorganization of speech motor networks, the mechanisms by which different brain regions communicate and coordinate their activity to mediate adaptation remain unknown, and explanations of outcome differences in adaptation remain largely elusive. In this study, under the paradigm of altered auditory feedback with continuous EEG recordings, the differential roles of oscillatory neural processes in motor speech adaptability were investigated. The predictive capacities of different EEG frequency bands were assessed, and it was found that theta-, beta-, and gamma-band activities during speech planning and production contained significant and reliable information about motor speech adaptability. It was further observed that these bands do not work independently but interact with each other, suggesting an underlying brain network operating across hierarchically organized frequency bands to support motor speech adaptation. These results provide novel insights into both learning and disorders of speech using time-frequency analysis of neural oscillations. Copyright © 2016 the American Physiological Society.
Scaling and universality in the human voice.
Luque, Jordi; Luque, Bartolo; Lacasa, Lucas
2015-04-06
Speech is a distinctive complex feature of human capabilities. In order to understand the physics underlying speech production, in this work we empirically analyse the statistics of large human speech datasets spanning several languages. We first show that during speech the energy is unevenly released and power-law distributed, reporting a universal, robust Gutenberg-Richter-like law in speech. We further show that such 'earthquakes in speech' show temporal correlations, as the interevent statistics are again power-law distributed. As this feature takes place in the intraphoneme range, we conjecture that the process responsible for this complex phenomenon is not cognitive, but resides in the physiological (mechanical) mechanisms of speech production. Moreover, we show that these waiting-time distributions are scale invariant under a renormalization group transformation, suggesting that the process of speech generation is indeed operating close to a critical point. These results are put in contrast with current paradigms in speech processing, which point towards low-dimensional deterministic chaos as the origin of nonlinear traits in speech fluctuations. As these latter fluctuations are indeed the aspects that humanize synthetic speech, these findings may have an impact on future speech synthesis technologies. Results are robust and independent of the communication language or the number of speakers, pointing towards a universal pattern and yet another hint of complexity in human speech. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
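As an illustrative sketch (not the authors' code) of the kind of analysis described here, one can segment a waveform into energy-release "events" and estimate a power-law tail exponent for the event energies. Names, thresholds, and the test signal below are our own assumptions.

```python
import numpy as np

def event_energies(x, fs, frame_ms=5.0, threshold_db=-10.0):
    """Short-time frame energies; runs of frames above threshold form events."""
    n = int(fs * frame_ms / 1000)
    energy = (x[: len(x) // n * n].reshape(-1, n) ** 2).sum(axis=1)
    active = energy > energy.max() * 10 ** (threshold_db / 10)
    events, run = [], 0.0
    for e, a in zip(energy, active):
        if a:
            run += e
        elif run > 0.0:
            events.append(run)   # a run of active frames ends: one event
            run = 0.0
    if run > 0.0:
        events.append(run)
    return np.asarray(events)

def tail_exponent(values):
    """Maximum-likelihood (Hill) estimate of a power-law tail exponent."""
    v = np.asarray(values, dtype=float)
    xmin = np.median(v)
    tail = v[v >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

fs = 16000
t = np.arange(fs * 10) / fs
x = np.random.randn(len(t)) * np.sin(2 * np.pi * 3 * t) ** 2  # stand-in signal
e = event_energies(x, fs)
print(f"{len(e)} events, estimated tail exponent ~ {tail_exponent(e):.2f}")
```

The same machinery applied to interevent waiting times, rather than energies, would mirror the paper's second analysis.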
Kriboy, M; Tarasiuk, A; Zigel, Y
2014-01-01
Obstructive sleep apnea (OSA) is a common sleep disorder. OSA is associated with several anatomical and functional abnormalities of the upper airway. It has been shown that these upper-airway abnormalities are also likely to be the reason for the increased rate of apneic events in the supine position. Functional and structural changes in the vocal tract can affect the acoustic properties of speech. We hypothesize that acoustic properties of speech that are affected by body position may aid in distinguishing between OSA and non-OSA patients. We aimed to explore the possibility of differentiating OSA and non-OSA patients by analyzing the acoustic properties of their speech signals in the upright sitting and supine positions. Thirty-five awake patients were recorded while pronouncing sustained vowels in the upright sitting and supine positions. Using a linear discriminant analysis (LDA) classifier, an accuracy of 84.6%, a sensitivity of 92.7%, and a specificity of 80.0% were achieved. This study provides proof of concept that it is possible to screen for OSA by analyzing and comparing speech properties acquired in the upright sitting vs. supine positions. An acoustic-based screening system used during wakefulness may address the growing need for a reliable OSA screening tool; further studies are needed to support these findings.
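A minimal sketch of the classification step, assuming per-patient acoustic features measured in both positions; the feature set and data below are placeholders, not the study's actual protocol.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Toy stand-in: 35 patients x 8 features (e.g., formant and jitter measures
# from the upright and supine recordings, and their differences).
X = rng.normal(size=(35, 8))
y = rng.integers(0, 2, size=35)          # 1 = OSA, 0 = non-OSA (random here)

lda = LinearDiscriminantAnalysis()
print("accuracy:", cross_val_score(lda, X, y, cv=5, scoring="accuracy").mean())
print("sensitivity:", cross_val_score(lda, X, y, cv=5, scoring="recall").mean())
```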
Adaptation to spectrally-rotated speech.
Green, Tim; Rosen, Stuart; Faulkner, Andrew; Paterson, Ruth
2013-08-01
Much recent interest surrounds listeners' abilities to adapt to various transformations that distort speech. An extreme example is spectral rotation, in which the spectrum of low-pass filtered speech is inverted around a center frequency (2 kHz here). Spectral shape and its dynamics are completely altered, rendering speech virtually unintelligible initially. However, intonation, rhythm, and contrasts in periodicity and aperiodicity are largely unaffected. Four normal hearing adults underwent 6 h of training with spectrally-rotated speech using Continuous Discourse Tracking. They and an untrained control group completed pre- and post-training speech perception tests, for which talkers differed from the training talker. Significantly improved recognition of spectrally-rotated sentences was observed for trained, but not untrained, participants. However, there were no significant improvements in the identification of medial vowels in /bVd/ syllables or intervocalic consonants. Additional tests were performed with speech materials manipulated so as to isolate the contribution of various speech features. These showed that preserving intonational contrasts did not contribute to the comprehension of spectrally-rotated speech after training, and suggested that improvements involved adaptation to altered spectral shape and dynamics, rather than just learning to focus on speech features relatively unaffected by the transformation.
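One standard way to realize the spectral rotation described here is ring modulation of low-pass-filtered speech with a 4 kHz carrier, which maps each component at frequency f to 4000 - f, i.e. inversion around 2 kHz. The sketch below takes this reading; the paper's exact processing chain is not reproduced, and the function names are ours.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def rotate_spectrum(x, fs, band_hz=4000.0):
    """Map each component at f (0..band) to band - f: inversion around band/2."""
    sos = butter(8, band_hz, btype="low", fs=fs, output="sos")
    x_lp = sosfiltfilt(sos, x)                          # restrict to 0..4 kHz
    t = np.arange(len(x)) / fs
    modulated = x_lp * np.cos(2 * np.pi * band_hz * t)  # images at band -/+ f
    return sosfiltfilt(sos, modulated)                  # keep the inverted image

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t)   # a 500 Hz tone should emerge near 3500 Hz
y = rotate_spectrum(tone, fs)
peak_hz = np.abs(np.fft.rfft(y)).argmax() * fs / len(y)
print(f"dominant output frequency ~ {peak_hz:.0f} Hz")
```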
Hengst, Julie A; Frame, Simone R; Neuman-Stritzel, Tiffany; Gannaway, Rachel
2005-02-01
Reported speech, wherein one quotes or paraphrases the speech of another, has been studied extensively as a set of linguistic and discourse practices. Researchers agree that reported speech is pervasive, found across languages, and used in diverse contexts. However, to date, there have been no studies of the use of reported speech among individuals with aphasia. Grounded in an interactional sociolinguistic perspective, the study presented here documents and analyzes the use of reported speech by 7 adults with mild to moderately severe aphasia and their routine communication partners. Each of the 7 pairs was videotaped in 4 everyday activities at home or around the community, yielding over 27 hr of conversational interaction for analysis. A coding scheme was developed that identified 5 types of explicitly marked reported speech: direct, indirect, projected, indexed, and undecided. Analysis of the data documented reported speech as a common discourse practice used successfully by the individuals with aphasia and their communication partners. All participants produced reported speech at least once, and across all observations the target pairs produced 400 reported speech episodes (RSEs), 149 by individuals with aphasia and 251 by their communication partners. For all participants, direct and indirect forms were the most prevalent (70% of RSEs). Situated discourse analysis of specific episodes of reported speech used by 3 of the pairs provides detailed portraits of the diverse interactional, referential, social, and discourse functions of reported speech and explores ways that the pairs used reported speech to successfully frame talk despite their ongoing management of aphasia.
McAllister, Anita; Brandt, Signe Kofoed
2012-09-01
A well-controlled recording in a studio is fundamental to most voice rehabilitation. However, this laboratory-like recording method has been questioned because voice use in a natural environment may be quite different. In children's natural environments, high background noise levels are common and are an important factor contributing to voice problems. The primary noise source in day-care centers is the children themselves. The aim of the present study was to compare perceptual evaluations of voice quality and acoustic measures from a controlled recording with recordings of spontaneous speech in children's natural environment in a day-care setting. Eleven 5-year-old children were recorded three times during a day at the day care. The controlled speech material consisted of repeated sentences. Matching sentences were selected from the spontaneous speech. All sentences were repeated three times. Recordings were randomized and analyzed acoustically and perceptually. Statistical analyses showed that fundamental frequency was significantly higher in spontaneous speech (P<0.01), as was hyperfunction (P<0.001). The only characteristic the controlled sentences shared with spontaneous speech was degree of hoarseness (Spearman's rho=0.564). When data for boys and girls were analyzed separately, a correlation was found for the parameter breathiness (rho=0.551) for boys, and for girls the correlation for hoarseness remained (rho=0.752). Regarding acoustic data, none of the measures correlated across recording conditions for the whole group. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
ERIC Educational Resources Information Center
Hornberger, Nancy H.
1989-01-01
Analyzes ethnographic data regarding one prolonged speech event, the negotiation of a driver's license at the Ministry of Transportation in Puno, Peru, from the perspective of Hymes' redefinition of linguistic competence. Implications for the acquisition of second language communicative competence are also discussed. (Author/CB)
Will Microfilm and Computers Replace Clippings?
ERIC Educational Resources Information Center
Oppendahl, Alison; And Others
Four speeches are presented, each of which deals with the use of computers to organize and retrieve news stories. The first speech relates in detail the step-by-step process devised by the "Free Press" in Detroit to analyze, categorize, code, film, process, and retrieve news stories through the use of the electronic film retrieval…
Questioning Mechanisms During Tutoring, Conversation, and Human-Computer Interaction
1992-10-14
project on the grant, we are analyzing sequences of speech act categories in dialogues between children. The 90 dialogues occur in the context of free play, a puzzle task, versus a 20-questions game. Our goal is to assess the extent to which various computational models can predict speech act category…
The Rhetoric of Betty Friedan: Rhetoric of Redefinition.
ERIC Educational Resources Information Center
Foss, Karen A.
Two speeches by Betty Friedan, author of "The Feminine Mystique" and first president of the National Organization for Women (NOW), are examined in this paper. The first speech analyzed, "Tokenism and the Pseudo-Radical Cop-Out," was delivered at Cornell University in January, 1969, and the second, a "Call to Women's Strike…
Style and Content in the Rhetoric of Early Afro-American Feminists.
ERIC Educational Resources Information Center
Campbell, Karlyn Kohrs
1986-01-01
Analyzes selected speeches by feminists active in the early Afro-American protest, revealing differences in their rhetoric and that of White feminists of the period. Argues that a simultaneous analysis and synthesis is necessary to understand these differences. Illustrates speeches by Sojourner Truth, Ida B. Wells, and Mary Church Terrell. (JD)
A recursive linear predictive vocoder
NASA Astrophysics Data System (ADS)
Janssen, W. A.
1983-12-01
A non-real-time, 10-pole, recursive-autocorrelation linear predictive coding vocoder was created for use in studying the effects of recursive autocorrelation on speech. The vocoder is composed of two interchangeable pitch detectors, a speech analyzer, and a speech synthesizer. The time between filter coefficient updates is allowed to vary from 0.125 msec to 20 msec. The best quality was found using 0.125 msec between updates. The greatest change in quality was noted when changing from 20 msec/update to 10 msec/update. Pitch period plots for the center-clipping autocorrelation pitch detector and the simplified inverse filtering technique are provided. Plots of speech into and out of the vocoder are given, along with three-dimensional formant-versus-time plots. The effects of noise on pitch detection and formants are shown. Noise affects the voiced/unvoiced decision process, causing voiced speech to be reconstructed as unvoiced.
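The core of such an analyzer is the conversion of frame autocorrelations into 10 predictor coefficients. The sketch below shows a plain (non-recursive) autocorrelation LPC analysis via the Levinson-Durbin recursion; the vocoder's recursive autocorrelation update and its two pitch detectors are omitted, and the test frame is our own.

```python
import numpy as np

def lpc(frame, order=10):
    """Return LPC polynomial a[0..order] (a[0]=1) and residual energy."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][: order + 1]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient from the current prediction error
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        # Update a[1..i] with the time-reversed coefficients (Levinson step)
        a[1:i + 1] += k * np.concatenate((a[i - 1:0:-1], [1.0]))
        err *= 1.0 - k * k
    return a, err

fs = 8000
t = np.arange(160) / fs                      # one 20 ms frame
frame = np.sin(2 * np.pi * 700 * t) + 0.3 * np.random.randn(len(t))
a, err = lpc(frame)
print("a[0..3] =", np.round(a[:4], 3), " residual energy =", round(err, 3))
```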
Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant.
Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa
2016-07-01
The objective of the evaluation of auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. To investigate the differences in the process of auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. This is a prospective, cross-sectional, descriptive study. We selected ten cochlear implant users, who were characterized by hearing threshold, the application of speech perception tests, and the Hearing Handicap Inventory for Adults. There was no significant difference when comparing the variables subject age, age at acquisition of hearing loss, etiology, duration of hearing deprivation, duration of cochlear implant use, and mean hearing threshold with the cochlear implant across the two speech coding strategies. There was no relationship between lack of handicap perception and improvement in speech perception for either of the speech coding strategies used. There was no significant difference between the strategies evaluated, and no relation was observed between them and the variables studied.
Human phoneme recognition depending on speech-intrinsic variability.
Meyer, Bernd T; Jürgens, Tim; Wesker, Thorsten; Brand, Thomas; Kollmeier, Birger
2010-11-01
The influence of different sources of speech-intrinsic variation (speaking rate, effort, style and dialect or accent) on human speech perception was investigated. In listening experiments with 16 listeners, confusions of consonant-vowel-consonant (CVC) and vowel-consonant-vowel (VCV) sounds in speech-weighted noise were analyzed. Experiments were based on the OLLO logatome speech database, which was designed for a man-machine comparison. It contains utterances spoken by 50 speakers from five dialect/accent regions and covers several intrinsic variations. By comparing results depending on intrinsic and extrinsic variations (i.e., different levels of masking noise), the degradation induced by variabilities can be expressed in terms of the SNR. The spectral level distance between the respective speech segment and the long-term spectrum of the masking noise was found to be a good predictor for recognition rates, while phoneme confusions were influenced by the distance to spectrally close phonemes. An analysis based on transmitted information of articulatory features showed that voicing and manner of articulation are comparatively robust cues in the presence of intrinsic variations, whereas the coding of place is more degraded. The database and detailed results have been made available for comparisons between human speech recognition (HSR) and automatic speech recognizers (ASR).
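A plausible (though not necessarily the authors') reading of the spectral level distance is an RMS difference in dB between a segment's spectrum and the noise long-term spectrum, as sketched below; the signals are placeholders and the exact definition should be taken from the paper.

```python
import numpy as np
from scipy.signal import welch

def level_distance_db(speech, noise, fs, nperseg=512):
    """RMS dB difference between the two Welch spectra (one possible reading)."""
    _, p_s = welch(speech, fs=fs, nperseg=nperseg)
    _, p_n = welch(noise, fs=fs, nperseg=nperseg)
    s_db = 10 * np.log10(p_s + 1e-12)
    n_db = 10 * np.log10(p_n + 1e-12)
    return np.sqrt(np.mean((s_db - n_db) ** 2))

fs = 16000
rng = np.random.default_rng(1)
speech = rng.normal(size=fs)           # placeholder for a CVC/VCV segment
noise = 0.5 * rng.normal(size=fs)      # placeholder speech-weighted noise
print(f"spectral level distance ~ {level_distance_db(speech, noise, fs):.1f} dB")
```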
Everyday listeners' impressions of speech produced by individuals with adductor spasmodic dysphonia.
Nagle, Kathleen F; Eadie, Tanya L; Yorkston, Kathryn M
2015-01-01
Individuals with adductor spasmodic dysphonia (ADSD) have reported that unfamiliar communication partners appear to judge them as sneaky, nervous or not intelligent, apparently based on the quality of their speech; however, there is minimal research into the actual everyday perspective of listening to ADSD speech. The purpose of this study was to investigate the impressions of listeners hearing ADSD speech for the first time using a mixed-methods design. Everyday listeners were interviewed following sessions in which they made ratings of ADSD speech. A semi-structured interview approach was used and data were analyzed using thematic content analysis. Three major themes emerged: (1) everyday listeners make judgments about speakers with ADSD; (2) ADSD speech does not sound normal to everyday listeners; and (3) rating overall severity is difficult for everyday listeners. Participants described ADSD speech similarly to existing literature; however, some listeners inaccurately extrapolated speaker attributes based solely on speech samples. Listeners may draw erroneous conclusions about individuals with ADSD and these biases may affect the communicative success of these individuals. Results have implications for counseling individuals with ADSD, as well as the need for education and awareness about ADSD. Copyright © 2015 Elsevier Inc. All rights reserved.
Bedore, Lisa M.; Ramos, Daniel
2015-01-01
Purpose The primary purpose of this study was to describe the frequency and types of speech disfluencies that are produced by bilingual Spanish–English (SE) speaking children who do not stutter. The secondary purpose was to determine whether their disfluent speech is mediated by language dominance and/or language produced. Method Spanish and English narratives (a retell and a tell in each language) were elicited and analyzed relative to the frequency and types of speech disfluencies produced. These data were compared with the monolingual English-speaking guidelines for differential diagnosis of stuttering. Results The mean frequency of stuttering-like speech behaviors in the bilingual SE participants ranged from 3% to 22%, exceeding the monolingual English standard of 3 per 100 words. There was no significant frequency difference in stuttering-like or non-stuttering-like speech disfluency produced relative to the child's language dominance. There was a significant difference relative to the language the child was speaking; all children produced significantly more stuttering-like speech disfluencies in Spanish than in English. Conclusion Results demonstrate that the disfluent speech of bilingual SE children should be carefully considered relative to the complex nature of bilingualism. PMID:25215876
Tran, Phuong K; Letowski, Tomasz R; McBride, Maranda E
2013-06-01
Speech signals can be converted into electrical audio signals using either a conventional air conduction (AC) microphone or a contact bone conduction (BC) microphone. The goal of this study was to investigate the effects of the location of a BC microphone on the intensity and frequency spectrum of the recorded speech. Twelve locations, 11 on the talker's head and 1 on the collar bone, were investigated. The speech sounds were three vowels (/u/, /a/, /i/) and two consonants (/m/, /ʃ/). The sounds were produced by 12 talkers. Each sound was recorded simultaneously with two BC microphones and an AC microphone. The analyzed spectral data showed that the BC recordings made at the forehead of the talker were the most similar to the AC recordings, whereas the collar bone recordings were the most different. Comparison of the spectral data with speech intelligibility data collected in another study revealed a strong negative relationship between BC speech intelligibility and the degree of deviation of the BC speech spectrum from the AC spectrum. In addition, the head locations that resulted in the highest speech intelligibility were associated with the lowest output signals among all tested locations. Implications of these findings for BC communication are discussed.
Analysis of error type and frequency in apraxia of speech among Portuguese speakers.
Cera, Maysa Luchesi; Minett, Thaís Soares Cianciarullo; Ortiz, Karin Zazo
2010-01-01
Most studies characterizing errors in the speech of patients with apraxia involve the English language. To analyze the types and frequency of errors produced by patients with apraxia of speech whose mother tongue was Brazilian Portuguese. Twenty adults with apraxia of speech caused by stroke were assessed. The types of error committed by patients were analyzed both quantitatively and qualitatively, and frequencies were compared. We observed the presence of substitution, omission, trial-and-error, repetition, self-correction, anticipation, addition, reiteration, and metathesis, in descending order of frequency. Omission errors were among the most commonly occurring, whereas addition errors were infrequent. These findings differ from those reported for English-speaking patients, probably owing to differences in the methodologies used for classifying error types, the inclusion of speakers with apraxia secondary to aphasia, and differences between Portuguese and English in syllable onset complexity and its effect on motor control. The frequency of the omission and addition errors observed differed from the frequencies reported for speakers of English.
Liu, Xiaoluan; Xu, Yi
2015-01-01
This study compares affective piano performance with speech production from the perspective of dynamics: unlike previous research, this study uses finger force and articulatory effort as indexes reflecting the dynamics of affective piano performance and speech production, respectively. Moreover, for the first time, physical constraints such as piano fingerings and speech articulatory constraints are included because of their potential contribution to different patterns of dynamics. A piano performance experiment and a speech production experiment were conducted in four emotions: anger, fear, happiness, and sadness. The results show that in both piano performance and speech production, anger and happiness generally have high dynamics, while sadness has the lowest dynamics. Fingerings interact with fear in the piano experiment, and articulatory constraints interact with anger in the speech experiment; that is, large physical constraints produce significantly higher dynamics than small physical constraints in piano performance under fear and in speech production under anger. Using production experiments, this study provides support for previous perception studies on the relations between affective music and speech. Moreover, it is the first study to show quantitative evidence for the importance of considering motor aspects such as dynamics in comparing music performance and speech production, in which motor mechanisms play a crucial role. PMID:26217252
The prevalence of childhood dysphonia: a cross-sectional study.
Carding, Paul N; Roulstone, Sue; Northstone, Kate
2006-12-01
There is only very limited information on the prevalence of voice disorders, particularly for the pediatric population. This study examined the prevalence of dysphonia in a large cohort of children (n = 7389) at 8 years of age. Data were collected within a large prospective epidemiological study and included a formal assessment by one of five research speech and language therapists as well as a parental report of their child's voice. Common risk factors that were also analyzed included sex, sibling numbers, asthma, regular conductive hearing loss, and frequent upper respiratory infection. The research clinicians identified a dysphonia prevalence of 6% compared with a parental report of 11%. Both measures suggested a significant risk of dysphonia for children with older siblings. Other measures were not in agreement between clinician and parental reports. The clinician judgments also suggested significant risk factors for sex (male) but not for any common respiratory or otolaryngological conditions that were analyzed. Parental report suggested significant risk factors with respect to asthma and tonsillectomy. These results are discussed in detail.
Speech rate and fluency in children with phonological disorder.
Novaes, Priscila Maronezi; Nicolielo-Carrilho, Ana Paola; Lopes-Herrera, Simone Aparecida
2015-01-01
To identify and describe the speech rate and fluency of children with phonological disorder (PD) with and without speech-language therapy. Thirty children aged 5-8 years, of both genders, were divided into three groups: experimental group 1 (G1), 10 children with PD in intervention; experimental group 2 (G2), 10 children with PD without intervention; and a control group (CG), 10 children with typical development. Speech samples were collected and analyzed according to the parameters of a specific protocol. The children in the CG produced more words per minute than those in G1, who in turn performed better on this measure than the children in G2. Regarding the number of syllables per minute, the CG showed the best result, and the children in G1 again outperformed those in G2. Comparing the groups' performance on the tests, the children with PD in intervention produced longer speech samples with adequate speech rates, which may indicate greater auditory monitoring of their own speech as a result of the intervention.
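The rate measures reported here reduce to simple arithmetic over a timed sample, as in the minimal sketch below; the counting conventions (what counts as a word or syllable) follow the cited protocol and are not reproduced.

```python
def words_per_minute(n_words: int, sample_seconds: float) -> float:
    return 60.0 * n_words / sample_seconds

def syllables_per_minute(n_syllables: int, sample_seconds: float) -> float:
    return 60.0 * n_syllables / sample_seconds

# e.g. a child producing 95 words / 130 syllables in a 60 s sample:
print(words_per_minute(95, 60.0), syllables_per_minute(130, 60.0))
```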
Pollonini, Luca; Olds, Cristen; Abaya, Homer; Bortfeld, Heather; Beauchamp, Michael S; Oghalai, John S
2014-03-01
The primary goal of most cochlear implant procedures is to improve a patient's ability to discriminate speech. To accomplish this, cochlear implants are programmed so as to maximize speech understanding. However, programming a cochlear implant can be an iterative, labor-intensive process that takes place over months. In this study, we sought to determine whether functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging method which is safe to use repeatedly and for extended periods of time, can provide an objective measure of whether a subject is hearing normal speech or distorted speech. We used a 140-channel fNIRS system to measure activation within the auditory cortex in 19 normal-hearing subjects while they listened to speech with different levels of intelligibility. Custom software was developed to analyze the data and compute topographic maps from the measured changes in oxyhemoglobin and deoxyhemoglobin concentration. Normal speech reliably evoked the strongest responses within the auditory cortex. Distorted speech produced less region-specific cortical activation. Environmental sounds were used as a control, and they produced the least cortical activation. These data collected using fNIRS are consistent with the fMRI literature and thus demonstrate the feasibility of using this technique to objectively detect differences in cortical responses to speech of different intelligibility. Copyright © 2013 Elsevier B.V. All rights reserved.
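A standard ingredient of such fNIRS processing is the modified Beer-Lambert step that converts optical density changes at two wavelengths into oxy- and deoxyhemoglobin concentration changes. The sketch below shows that step under our own assumptions; the extinction coefficients, pathlength factor, and wavelengths are illustrative, not the study's actual values.

```python
import numpy as np

# Illustrative extinction coefficients (columns: HbO, HbR) in 1/(mM*cm),
# roughly in the range tabulated for ~690 nm and ~830 nm light.
E = np.array([[0.35, 2.10],   # ~690 nm
              [1.06, 0.78]])  # ~830 nm
d_cm, dpf = 3.0, 6.0          # source-detector separation and pathlength factor

def hb_changes(d_od_690, d_od_830):
    """Solve (E * d * DPF) @ [dHbO, dHbR] = dOD for the two chromophores."""
    return np.linalg.solve(E * d_cm * dpf, np.array([d_od_690, d_od_830]))

d_hbo, d_hbr = hb_changes(0.01, 0.02)
print(f"dHbO ~ {d_hbo:.4f} mM, dHbR ~ {d_hbr:.4f} mM")
```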
Cortical activation patterns correlate with speech understanding after cochlear implantation
Olds, Cristen; Pollonini, Luca; Abaya, Homer; Larky, Jannine; Loy, Megan; Bortfeld, Heather; Beauchamp, Michael S.; Oghalai, John S.
2015-01-01
Objectives Cochlear implants are a standard therapy for deafness, yet the ability of implanted patients to understand speech varies widely. To better understand this variability in outcomes, we used functional near-infrared spectroscopy (fNIRS) to image activity within regions of the auditory cortex and compare the results to behavioral measures of speech perception. Design We studied 32 deaf adults hearing through cochlear implants and 35 normal-hearing controls. We used fNIRS to measure responses within the lateral temporal lobe and the superior temporal gyrus to speech stimuli of varying intelligibility. The speech stimuli included normal speech, channelized speech (vocoded into 20 frequency bands), and scrambled speech (the 20 frequency bands were shuffled in random order). We also used environmental sounds as a control stimulus. Behavioral measures consisted of the Speech Reception Threshold, CNC words, and AzBio Sentence tests measured in quiet. Results Both control and implanted participants with good speech perception exhibited greater cortical activations to natural speech than to unintelligible speech. In contrast, implanted participants with poor speech perception had large, indistinguishable cortical activations to all stimuli. The ratio of the cortical activation evoked by normal speech to that evoked by scrambled speech correlated directly with the CNC Words and AzBio Sentences scores. This pattern of cortical activation was not correlated with auditory threshold, age, side of implantation, or time after implantation. Turning off the implant reduced cortical activations in all implanted participants. Conclusions Together, these data indicate that the responses we measured within the lateral temporal lobe and the superior temporal gyrus correlate with behavioral measures of speech perception, demonstrating a neural basis for the variability in speech understanding outcomes after cochlear implantation. PMID:26709749
Grossman, Murray; Powers, John; Ash, Sherry; McMillan, Corey; Burkholder, Lisa; Irwin, David; Trojanowski, John Q.
2012-01-01
Non-fluent/agrammatic primary progressive aphasia (naPPA) is a progressive neurodegenerative condition most prominently associated with slowed, effortful speech. A clinical imaging marker of naPPA is disease centered in the left inferior frontal lobe. We used multimodal imaging to assess large-scale neural networks underlying effortful expression in 15 patients with sporadic naPPA due to frontotemporal lobar degeneration (FTLD) spectrum pathology. Effortful speech in these patients is related in part to impaired grammatical processing, and to phonologic speech errors. Gray matter (GM) imaging shows frontal and anterior-superior temporal atrophy, most prominently in the left hemisphere. Diffusion tensor imaging reveals reduced fractional anisotropy in several white matter (WM) tracts mediating projections between left frontal and other GM regions. Regression analyses suggest disruption of three large-scale GM-WM neural networks in naPPA that support fluent, grammatical expression. These findings emphasize the role of large-scale neural networks in language, and demonstrate associated language deficits in naPPA. PMID:23218686
GRIN2A: an aptly named gene for speech dysfunction.
Turner, Samantha J; Mayes, Angela K; Verhoeven, Andrea; Mandelstam, Simone A; Morgan, Angela T; Scheffer, Ingrid E
2015-02-10
To delineate the specific speech deficits in individuals with epilepsy-aphasia syndromes associated with mutations in the glutamate receptor subunit gene GRIN2A. We analyzed the speech phenotype associated with GRIN2A mutations in 11 individuals, aged 16 to 64 years, from 3 families. Standardized clinical speech assessments and perceptual analyses of conversational samples were conducted. Individuals showed a characteristic phenotype of dysarthria and dyspraxia with lifelong impact on speech intelligibility in some. Speech was typified by imprecise articulation (11/11, 100%), impaired pitch (monopitch 10/11, 91%) and prosody (stress errors 7/11, 64%), and hypernasality (7/11, 64%). Oral motor impairments and poor performance on maximum vowel duration (8/11, 73%) and repetition of monosyllables (10/11, 91%) and trisyllables (7/11, 64%) supported conversational speech findings. The speech phenotype was present in one individual who did not have seizures. Distinctive features of dysarthria and dyspraxia are found in individuals with GRIN2A mutations, often in the setting of epilepsy-aphasia syndromes; dysarthria has not been previously recognized in these disorders. Of note, the speech phenotype may occur in the absence of a seizure disorder, reinforcing an important role for GRIN2A in motor speech function. Our findings highlight the need for precise clinical speech assessment and intervention in this group. By understanding the mechanisms involved in GRIN2A disorders, targeted therapy may be designed to improve chronic lifelong deficits in intelligibility. © 2015 American Academy of Neurology.
Five Lectures on Artificial Intelligence
1974-09-01
large systems. The current projects on speech understanding (which I will describe later) are an exception to this, dealing explicitly with the problem... learns that "Fred lives in Sydney", we must find some new fact to resolve the tension; perhaps he lives in a zoo. It is... possible Speech Understanding Systems. Most of the problems described above might be characterized as relating to the chunking of knowledge. Such ideas are...
Pimmer, Christoph; Mateescu, Magdalena; Zahn, Carmen; Genewein, Urs
2013-11-27
Despite the widespread use and advancements of mobile technology that facilitate rich communication modes, there is little evidence demonstrating the value of smartphones for effective interclinician communication and knowledge processes. The objective of this study was to determine the effects of different synchronous smartphone-based modes of communication, such as (1) speech only, (2) speech and images, and (3) speech, images, and image annotation (guided noticing) on the recall and transfer of visually and verbally represented medical knowledge. The experiment was conducted from November 2011 to May 2012 at the University Hospital Basel (Switzerland) with 42 medical students in a master's program. All participants analyzed a standardized case (a patient with a subcapital fracture of the fifth metacarpal bone) based on a radiological image, photographs of the hand, and textual descriptions, and were asked to consult a remote surgical specialist via a smartphone. Participants were randomly assigned to 3 experimental conditions/groups. In group 1, the specialist provided verbal explanations (speech only). In group 2, the specialist provided verbal explanations and displayed the radiological image and the photographs to the participants (speech and images). In group 3, the specialist provided verbal explanations, displayed the radiological image and the photographs, and annotated the radiological image by drawing structures/angle elements (speech, images, and image annotation). To assess knowledge recall, participants were asked to write brief summaries of the case (verbally represented knowledge) after the consultation and to re-analyze the diagnostic images (visually represented knowledge). To assess knowledge transfer, participants analyzed a similar case without specialist support. Data analysis by ANOVA found that participants in groups 2 and 3 (images used) evaluated the support provided by the specialist as significantly more positive than group 1, the speech-only group (group 1: mean 4.08, SD 0.90; group 2: mean 4.73, SD 0.59; group 3: mean 4.93, SD 0.25; F(2,39)=6.76, P=.003; partial η²=0.26, 1−β=.90). However, significant positive effects on the recall and transfer of visually represented medical knowledge were only observed when the smartphone-based communication involved the combination of speech, images, and image annotation (group 3). There were no significant positive effects on the recall and transfer of visually represented knowledge between group 1 (speech only) and group 2 (speech and images). No significant differences were observed between the groups regarding verbally represented medical knowledge. The results show (1) the value of annotation functions for digital and mobile technology for interclinician communication and medical informatics, and (2) the use of guided noticing (the integration of speech, images, and image annotation) leads to significantly improved knowledge gains for visually represented knowledge. This is particularly valuable in situations involving complex visual subject matters, typical in clinical practice.
An experiment with spectral analysis of emotional speech affected by orthodontic appliances
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela
2012-11-01
The contribution describes the effect of fixed and removable orthodontic appliances on the spectral properties of emotional speech. Spectral changes were analyzed and evaluated using spectrograms and mean Welch periodograms. This alternative to the standard listening test makes it possible to obtain an objective comparison based on statistical analysis with ANOVA and hypothesis tests. The results of the analysis, performed on short sentences produced by a female speaker in four emotional states (joyous, sad, angry, and neutral), show that it is above all the removable orthodontic appliance that affects the spectrograms of the produced speech.
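Under our own assumptions about window length and band edges, the comparison route described here (mean Welch periodogram levels plus a one-way ANOVA) can be sketched as follows; the recordings below are placeholders, not the study's data.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import f_oneway

def band_level_db(x, fs, lo=2000, hi=6000, nperseg=1024):
    """Mean Welch spectral level (dB) in one frequency band."""
    f, p = welch(x, fs=fs, nperseg=nperseg)
    band = (f >= lo) & (f <= hi)
    return 10 * np.log10(p[band].mean() + 1e-12)

rng = np.random.default_rng(2)
fs = 16000
without = [rng.normal(size=fs) for _ in range(8)]         # placeholder takes
with_app = [0.8 * rng.normal(size=fs) for _ in range(8)]  # placeholder takes
stat, p = f_oneway([band_level_db(u, fs) for u in without],
                   [band_level_db(u, fs) for u in with_app])
print(f"F={stat:.2f}, p={p:.4f}")
```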
Perceptual Learning of Speech under Optimal and Adverse Conditions
Zhang, Xujin; Samuel, Arthur G.
2014-01-01
Humans have a remarkable ability to understand spoken language despite the large amount of variability in speech. Previous research has shown that listeners can use lexical information to guide their interpretation of atypical sounds in speech (Norris, McQueen, & Cutler, 2003). This kind of lexically induced perceptual learning enables people to adjust to the variations in utterances due to talker-specific characteristics, such as individual identity and dialect. The current study investigated perceptual learning in two optimal conditions: conversational speech (Experiment 1) vs. clear speech (Experiment 2), and three adverse conditions: noise (Experiment 3a) vs. two cognitive loads (Experiments 4a & 4b). Perceptual learning occurred in the two optimal conditions and in the two cognitive load conditions, but not in the noise condition. Furthermore, perceptual learning occurred only in the first of two sessions for each participant, and only for atypical /s/ sounds and not for atypical /f/ sounds. This pattern of learning and non-learning reflects a balance between flexibility and stability that the speech system must have to deal with speech variability in the diverse conditions that speech is encountered. PMID:23815478
NASA Astrophysics Data System (ADS)
Kim, Yunjung; Weismer, Gary; Kent, Ray D.
2005-09-01
In previous work [J. Acoust. Soc. Am. 117, 2605 (2005)], we reported on formant trajectory characteristics of a relatively large number of speakers with dysarthria and near-normal speech intelligibility. The purpose of that analysis was to begin a documentation of the variability, within relatively homogeneous speech-severity groups, of acoustic measures commonly used to predict across-speaker variation in speech intelligibility. In that study we found that even with near-normal speech intelligibility (90%-100%), many speakers had reduced formant slopes for some words and distributional characteristics of acoustic measures that were different than values obtained from normal speakers. In the current report we extend those findings to a group of speakers with dysarthria with somewhat poorer speech intelligibility than the original group. Results are discussed in terms of the utility of certain acoustic measures as indices of speech intelligibility, and as explanatory data for theories of dysarthria. [Work supported by NIH Award R01 DC00319.]
NASA Astrophysics Data System (ADS)
Wang, Longbiao; Odani, Kyohei; Kai, Atsuhiko
2012-12-01
A blind dereverberation method based on power spectral subtraction (SS) using a multi-channel least mean squares algorithm was previously proposed to suppress reverberant speech without additive noise. The results of isolated word speech recognition experiments showed that this method achieved significant improvements over conventional cepstral mean normalization (CMN) in a reverberant environment. In this paper, we propose a blind dereverberation method based on generalized spectral subtraction (GSS), which has been shown to be effective for noise reduction, instead of power SS. Furthermore, we extend the missing feature theory (MFT), which was initially proposed to enhance robustness against additive noise, to dereverberation. A one-stage dereverberation and denoising method based on GSS is presented to simultaneously suppress both the additive noise and nonstationary multiplicative noise (reverberation). The proposed dereverberation method based on GSS with MFT is evaluated on a large-vocabulary continuous speech recognition task. When additive noise is absent, the dereverberation method based on GSS with MFT using only 2 microphones achieves relative word error reduction rates of 11.4% and 32.6% compared to the dereverberation method based on power SS and conventional CMN, respectively. For reverberant and noisy speech, the dereverberation and denoising method based on GSS achieves a relative word error reduction rate of 12.8% compared to conventional CMN combined with a GSS-based additive noise reduction method. We also analyze the factors affecting compensation parameter estimation for the SS-based dereverberation method, such as the number of channels (microphones), the length of the reverberation to be suppressed, and the length of the utterance used for parameter estimation. The experimental results showed that the SS-based method is robust in a variety of reverberant environments for both isolated and continuous speech recognition and under various parameter estimation conditions.
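In STFT terms, generalized spectral subtraction replaces the power-domain subtraction with an arbitrary exponent: |S|^γ = max(|X|^γ − α|N|^γ, β|N|^γ), with γ=2 recovering power SS. The sketch below illustrates this single-channel core; the paper's multi-channel estimation of the late-reverberation spectrum and the MFT masking step are not reproduced, and all parameter values are our own.

```python
import numpy as np
from scipy.signal import stft, istft

def gss(x, noise_mag, fs, gamma=1.0, alpha=2.0, beta=0.01, nperseg=512):
    """Generalized spectral subtraction of a per-bin noise magnitude estimate."""
    _, _, X = stft(x, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(X), np.angle(X)
    sub = mag ** gamma - alpha * noise_mag[:, None] ** gamma
    floor = beta * noise_mag[:, None] ** gamma     # spectral floor
    clean_mag = np.maximum(sub, floor) ** (1.0 / gamma)
    _, y = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return y

fs = 16000
rng = np.random.default_rng(3)
x = rng.normal(size=fs)
# Placeholder per-bin noise magnitude (in the paper this would come from the
# multi-channel LMS estimate of late reverberation, not a constant).
noise_mag = np.full(512 // 2 + 1, 0.1)
print(gss(x, noise_mag, fs).shape)
```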
LaCroix, Arianna N.; Diaz, Alvaro F.; Rogalsky, Corianne
2015-01-01
The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music. PMID:26321976
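For readers unfamiliar with activation likelihood estimation, the toy sketch below captures the basic idea: each reported peak becomes a 3D Gaussian "modeled activation" map, a per-study map is taken as the maximum over its peaks, and studies are pooled as a probabilistic union. Grid size, FWHM, and coordinates are invented; this is not the meta-analysis pipeline actually used.

```python
import numpy as np

def gaussian_map(shape, peak_vox, fwhm_vox):
    """Per-voxel activation probability from one reported peak."""
    sigma = fwhm_vox / 2.3548                       # FWHM -> standard deviation
    grids = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    d2 = sum((g - c) ** 2 for g, c in zip(grids, peak_vox))
    return np.exp(-d2 / (2 * sigma ** 2))

def ale(study_peak_lists, shape=(20, 20, 20), fwhm_vox=3.0):
    ale_map = np.zeros(shape)
    for peaks in study_peak_lists:
        # One modeled-activation map per study (max over its peaks)
        ma = np.max([gaussian_map(shape, p, fwhm_vox) for p in peaks], axis=0)
        ale_map = 1 - (1 - ale_map) * (1 - ma)      # probabilistic union
    return ale_map

music_studies = [[(5, 10, 10)], [(6, 9, 10), (14, 10, 8)]]   # invented foci
speech_studies = [[(13, 10, 9)], [(12, 11, 10)]]             # invented foci
diff = ale(music_studies) - ale(speech_studies)
print("largest music>speech difference at voxel",
      np.unravel_index(diff.argmax(), diff.shape))
```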
Rhetoric Reconsidered--Here and Abroad (Global Issues).
ERIC Educational Resources Information Center
Naff, Bea; Schnaufer, Thiela
1997-01-01
Describes a project in a third-year Latin class in a high school in the Appalachian foothills in which students (1) identified and analyzed classical rhetorical devices in speeches by Winston Churchill and Martin Luther King Jr.; (2) used rhetorical frames; and (3) wrote their own speeches as part of a year-long writing exchange with senior-level…
Who Receives Speech/Language Services by 5 Years of Age in the United States?
ERIC Educational Resources Information Center
Morgan, Paul L.; Hammer, Carol Scheffner; Farkas, Geroge; Hillemeier, Marianne M.; Maczuga, Steve; Cook, Michael; Morano, Stephanie
2016-01-01
Purpose: We sought to identify factors predictive of or associated with receipt of speech/language services during early childhood. We did so by analyzing data from the Early Childhood Longitudinal Study-Birth Cohort (ECLS-B; Andreassen & Fletcher, 2005), a nationally representative dataset maintained by the U.S. Department of Education. We…
ERIC Educational Resources Information Center
Eckes, Suzanne E.
2017-01-01
This article examines an education policy matter that involves homophobic speech in public schools. Using legal research methods, two federal circuit court opinions that have examined the tension surrounding anti-LGBTQ student expression are analyzed. This legal analysis provides non-lawyers some insight into the current realities of student…
Study on Gender-Related Speech Communication in Classical Chinese Poetry
ERIC Educational Resources Information Center
Tian, Xinhe; Qin, Dandan
2016-01-01
Gender, formed in men and women's growth which is constrained by social context, is tightly tied to the distinction which is presented in the process of men and women's language use. Hence, it's a new breakthrough for studies on gender and difference by analyzing gender-related speech communication on the background of ancient Chinese culture.
Working Papers in Experimental Speech-Language Pathology and Audiology. Volume VII, 1979.
ERIC Educational Resources Information Center
City Univ. of New York, Flushing, NY. Queens Coll.
Seven papers review research in speech-language pathology and audiology. K. Polzer et al. describe an investigation of sign language therapy for the severely language impaired. S. Dworetsky and L. Clark analyze the phonemic and nonphonemic error patterns in five nonverbal and five verbal oral apraxic adults. The performance of three language…
ERIC Educational Resources Information Center
Leroy-Malherbe, V.; Chevrie-Muller, C.; Rigoard, M. T.; Arabia, C.
1998-01-01
This case report describes a 52-year-old man with bilateral central lingual paralysis following a myocardial infarction. Speech recordings made 15 days and 18 months after the attack were acoustically analyzed. The case demonstrates the usefulness of acoustic analysis in detecting slight acoustic differences. (DB)
ERIC Educational Resources Information Center
Bohn, Mariko Tajima
2015-01-01
This study examines student perspectives on gender differences in Japanese speech. Expanding on a small-scale survey by Siegal & Okamoto (2003) that investigated the views of eleven Japanese-language college teachers, this study analyzes 238 questionnaire responses from 220 Japanese-language students at four universities and a US government…
How Salient Are Onomatopoeia in the Early Input? A Prosodic Analysis of Infant-Directed Speech
ERIC Educational Resources Information Center
Laing, Catherine E.; Vihman, Marilyn; Keren-Portnoy, Tamar
2017-01-01
Onomatopoeia are frequently identified amongst infants' earliest words (Menn & Vihman, 2011), yet few authors have considered why this might be, and even fewer have explored this phenomenon empirically. Here we analyze mothers' production of onomatopoeia in infant-directed speech (IDS) to provide an input-based perspective on these forms.…
Honing EAP Learners' Public Speaking Skills by Analyzing TED Talks
ERIC Educational Resources Information Center
Leopold, Lisa
2016-01-01
Despite the importance of public speaking skills for English for Academic Purposes (EAP) students' academic and professional success, few EAP textbooks incorporate authentic, professional speech models. Thus, many EAP instructors have turned to TED talks for dynamic speech models. Yet a single TED talk may be too long for viewing in class and may…
Intonational Division of a Speech Flow in the Kazakh Language
ERIC Educational Resources Information Center
Bazarbayeva, Zeynep M.; Zhalalova, Akshay M.; Ormakhanova, Yenlik N.; Ospangaziyeva, Nazgul B.; Karbozova, Bulbul D.
2016-01-01
The purpose of this research is to analyze the speech intonation of the French, Kazakh, English and Russian languages. The study considers the functions of the intonation components (melodics, duration, and intensity) in poetry and spoken language. It is found that a set of prosodic means is used to convey the intonational specifics of sounding…
Baby Talk as a Simplified Register. Papers and Reports on Child Language Development, No. 9.
ERIC Educational Resources Information Center
Ferguson, Charles A.
Every speech community has a baby talk register (BT) of phonological, grammatical, and lexical features regarded as primarily appropriate for addressing young children and also for other displaced or extended uses. Much BT is analyzable as derived from normal adult speech (AS) by such simplifying processes as reduction, substitution, assimilation,…
Separating the Problem and the Person: Insights from Narrative Therapy with People Who Stutter
ERIC Educational Resources Information Center
Ryan, Fiona; O'Dwyer, Mary; Leahy, Margaret M.
2015-01-01
Stuttering is a complex disorder of speech that encompasses motor speech and emotional and cognitive factors. The use of narrative therapy is described here, focusing on the stories that clients tell about the problems associated with stuttering that they have encountered in their lives. Narrative therapy uses these stories to understand, analyze,…
Dynamic action units slip in speech production errors
Goldstein, Louis; Pouplier, Marianne; Chen, Larissa; Saltzman, Elliot; Byrd, Dani
2008-01-01
In the past, the nature of the compositional units proposed for spoken language has largely diverged from the types of control units pursued in the domains of other skilled motor tasks. A classic source of evidence as to the units structuring speech has been patterns observed in speech errors – “slips of the tongue”. The present study reports, for the first time, on kinematic data from tongue and lip movements during speech errors elicited in the laboratory using a repetition task. Our data are consistent with the hypothesis that speech production results from the assembly of dynamically defined action units – gestures – in a linguistically structured environment. The experimental results support both the presence of gestural units and the dynamical properties of these units and their coordination. This study of speech articulation shows that it is possible to develop a principled account of spoken language within a more general theory of action. PMID:16822494
[Improving the speech with a prosthetic construction].
Stalpers, M J; Engelen, M; van der Stappen, J A A M; Weijs, W L J; Takes, R P; van Heumen, C C M
2016-03-01
A 12-year-old boy had problems with his speech due to a defect in the soft palate. This defect was caused by the surgical removal of a synovial sarcoma. Testing with a nasometer revealed hypernasality above normal values. Given the size and severity of the defect in the soft palate, the possibility of improving the speech with speech therapy was limited. At a centre for special dentistry an attempt was made with a prosthetic construction to improve the performance of the palate and, in that way, the speech. This construction consisted of a denture with an obturator attached to it. With it, an effective closure of the palate could be achieved. New measurements with acoustic nasometry showed scores within the normal values. The nasality in the speech largely disappeared. The obturator is an effective and relatively easy solution for palatal insufficiency resulting from surgical resection. Intrusive reconstructive surgery can be avoided in this way.
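The nasometer scores reported here are nasalance values, i.e., the proportion of acoustic energy picked up by the nasal microphone of the headset relative to the total. A minimal sketch of the computation, assuming two time-aligned channel signals from the device's nasal and oral microphones:

```python
import numpy as np

def nasalance_percent(nasal, oral):
    """Nasalance score: nasal acoustic energy as a percentage of total
    (nasal + oral) energy, computed from the time-aligned signals of the
    nasometer headset's two microphones (separated by a baffle plate)."""
    def rms(x):
        x = np.asarray(x, dtype=float)
        return np.sqrt(np.mean(x ** 2))
    return 100.0 * rms(nasal) / (rms(nasal) + rms(oral))
```

Hypernasal speech yields elevated nasalance on oral (non-nasal) passages; scores within normal limits, as reported after the prosthetic treatment, indicate effective velopharyngeal closure.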
How should a speech recognizer work?
Scharenborg, Odette; Norris, Dennis; ten Bosch, Louis; McQueen, James M
2005-11-12
Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR, that, in contrast to existing models of HSR, recognizes words from real speech input. 2005 Lawrence Erlbaum Associates, Inc.
Altvater-Mackensen, Nicole; Grossmann, Tobias
2015-01-01
Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential looking paradigm, 44 German 6-month olds' ability to detect mismatches between concurrently presented auditory and visual native vowels was tested. Outcomes were related to mothers' speech style and interactive behavior assessed during free play with their infant, and to infant-specific factors assessed through a questionnaire. Results show that mothers' and infants' social behavior modulated infants' preference for matching audiovisual speech. Moreover, infants' audiovisual speech perception correlated with later vocabulary size, suggesting a lasting effect on language development. © 2014 The Authors. Child Development © 2014 Society for Research in Child Development, Inc.
A characterization of verb use in Turkish agrammatic narrative speech.
Arslan, Seçkin; Bamyacı, Elif; Bastiaanse, Roelien
2016-01-01
This study investigates the characteristics of narrative-speech production and the use of verbs in Turkish agrammatic speakers (n = 10) compared to non-brain-damaged controls (n = 10). To elicit narrative-speech samples, personal interviews and storytelling tasks were conducted. Turkish has a large and regular verb inflection paradigm where verbs are inflected for evidentiality (i.e. direct versus indirect evidence available to the speaker). Particularly, we explored the general characteristics of the speech samples (e.g. utterance length) and the uses of lexical, finite and non-finite verbs and direct and indirect evidentials. The results show that speech rate is slow, verbs per utterance are lower than normal and the verb diversity is reduced in the agrammatic speakers. Verb inflection is relatively intact; however, a trade-off pattern between inflection for direct evidentials and verb diversity is found. The implications of the data are discussed in connection with narrative-speech production studies on other languages.
Electrocorticographic representations of segmental features in continuous speech
Lotte, Fabien; Brumberg, Jonathan S.; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L.; Guan, Cuntai; Schalk, Gerwin
2015-01-01
Acoustic speech output results from coordinated articulation of dozens of muscles, bones and cartilages of the vocal mechanism. While we commonly take the fluency and speed of our speech productions for granted, the neural mechanisms facilitating the requisite muscular control are not completely understood. Previous neuroimaging and electrophysiology studies of speech sensorimotor control have typically concentrated on speech sounds (i.e., phonemes, syllables and words) in isolation; sentence-length investigations have largely been used to inform coincident linguistic processing. In this study, we examined the neural representations of segmental features (place and manner of articulation, and voicing status) in the context of fluent, continuous speech production. We used recordings from the cortical surface [electrocorticography (ECoG)] to simultaneously evaluate the spatial topography and temporal dynamics of the neural correlates of speech articulation that may mediate the generation of hypothesized gestural or articulatory scores. We found that the representation of place of articulation involved broad networks of brain regions during all phases of speech production: preparation, execution and monitoring. In contrast, manner of articulation and voicing status were dominated by auditory cortical responses after speech had been initiated. These results provide new insight into the articulatory and auditory processes underlying speech production in terms of their motor requirements and acoustic correlates. PMID:25759647
Yang, Ying; Liu, Yue-Hui; Fu, Ming-Fu; Li, Chun-Lin; Wang, Li-Yan; Wang, Qi; Sun, Xi-Bin
2015-08-20
Data on early auditory and speech development during home-based early intervention for infants and toddlers with hearing loss younger than 2 years are still sparse in China. This study aimed to observe the development of auditory and speech skills in deaf infants and toddlers who were fitted with hearing aids and/or received cochlear implantation between the chronological ages of 7-24 months, and to analyze the effect of chronological age and recovery time on auditory and speech development in the course of home-based early intervention. This longitudinal study included 55 hearing-impaired children with severe and profound binaural deafness, who were divided into Group A (7-12 months), Group B (13-18 months) and Group C (19-24 months) based on chronological age. The Categories of Auditory Performance (CAP) and the Speech Intelligibility Rating (SIR) scale were used to evaluate auditory and speech development at baseline and at 3, 6, 9, 12, 18, and 24 months of habilitation. Demographic features were summarized with descriptive statistics, and outcomes were analyzed by repeated-measures analysis of variance. With 24 months of hearing intervention, 78% of the patients were able to understand common phrases and conversation without lip-reading, and 96% of the patients were intelligible to a listener. Children in all three groups showed rapid growth in each period of habilitation. CAP and SIR scores developed rapidly within 24 months after fitting of the auxiliary device in Group A, which showed much better auditory and speech abilities than Group B (P < 0.05) and Group C (P < 0.05). Group B achieved better results than Group C, although the differences between Group B and Group C were not significant (P > 0.05). The data suggest that early hearing intervention and home-based habilitation benefit auditory and speech development. Chronological age and recovery time may be major factors in aural-verbal outcomes in hearing-impaired children. The development of auditory and speech skills in hearing-impaired children may be particularly crucial in the first year of habilitation after fitting of the auxiliary device.
Li, Junfeng; Yang, Lin; Zhang, Jianping; Yan, Yonghong; Hu, Yi; Akagi, Masato; Loizou, Philipos C
2011-05-01
A large number of single-channel noise-reduction algorithms have been proposed, based largely on mathematical principles. Most of these algorithms, however, have been evaluated with English speech. Given the different perceptual cues used by native listeners of different languages, including tonal languages, it is of interest to examine whether there are any language effects when the same noise-reduction algorithm is used to process noisy speech in different languages. This study undertakes a comparative evaluation of various single-channel noise-reduction algorithms applied to noisy speech in three languages: Chinese, Japanese, and English. Clean speech signals (Chinese words and Japanese words) were first corrupted by three types of noise at two signal-to-noise ratios and then processed by five single-channel noise-reduction algorithms. The processed signals were finally presented to normal-hearing listeners for recognition. Intelligibility evaluation showed that the majority of noise-reduction algorithms did not improve speech intelligibility. Consistent with a previous study with the English language, the Wiener filtering algorithm produced small, but statistically significant, improvements in intelligibility for car and white noise conditions. Significant differences between the performances of noise-reduction algorithms across the three languages were observed.
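As an illustration of the algorithm family in which the Wiener filter falls, the following is a minimal spectral-domain sketch; the frame length and the assumption that the opening frames are speech-free are illustrative choices, not the implementation evaluated in the study.

```python
import numpy as np

def wiener_denoise(x, fs, frame_ms=32, noise_frames=10):
    """Minimal spectral Wiener filter. The noise power spectrum is
    estimated from the first `noise_frames` frames (assumed speech-free),
    and each short-time spectrum is scaled by the Wiener gain
    H = snr / (1 + snr). Output is resynthesized by overlap-add
    (up to a constant window-overlap gain)."""
    n = int(fs * frame_ms / 1000)
    hop = n // 2
    win = np.hanning(n)
    starts = range(0, len(x) - n, hop)
    spectra = [np.fft.rfft(x[i:i + n] * win) for i in starts]
    noise_psd = np.mean([np.abs(s) ** 2 for s in spectra[:noise_frames]], axis=0)
    out = np.zeros(len(x))
    for i, s in zip(starts, spectra):
        # Maximum-likelihood estimate of the a-priori SNR, floored for stability.
        snr = np.maximum(np.abs(s) ** 2 / (noise_psd + 1e-12) - 1.0, 1e-3)
        out[i:i + n] += np.fft.irfft(s * snr / (1.0 + snr), n) * win
    return out
```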
Pring, Tim
2016-07-01
The inverse-care law suggests that fewer healthcare resources are available in deprived areas where health needs are greatest. To examine the provision of paediatric speech and language services across London boroughs and to relate provision to the level of deprivation of the boroughs. Information on the employment of paediatric speech and language therapists was obtained from London boroughs by freedom-of-information requests. The relationship between the number of therapists and the index of multiple deprivation for the borough was examined. Twenty-nine of 32 boroughs responded. A positive relationship between provision and need was obtained, suggesting that the inverse-care law does not apply. However, large inequalities of provision were found particularly among the more socially deprived boroughs. In some instances boroughs had five times as many therapists per child as other boroughs. The data reveal that large differences in speech and language therapy provision exist across boroughs. The reasons for these inequalities are unclear, but the lack of comparative information across boroughs is likely to be unhelpful in planning equitable services. The use of freedom of information in assessing health inequalities is stressed and its future availability is desirable. © 2016 Royal College of Speech and Language Therapists.
Dmitrieva, E S; Gel'man, V Ia; Zaĭtseva, K A; Orlov, A M
2009-01-01
Comparative study of acoustic correlates of emotional intonation was conducted on two types of speech material: sensible speech utterances and short meaningless words. The corpus of speech signals of different emotional intonations (happy, angry, frightened, sad and neutral) was created using the actor's method of simulation of emotions. Native Russian 20-70-year-old speakers (both professional actors and non-actors) participated in the study. In the corpus, the following characteristics were analyzed: mean values and standard deviations of the power, fundamental frequency, frequencies of the first and second formants, and utterance duration. Comparison of each emotional intonation with "neutral" utterances showed the greatest deviations of the fundamental frequency and frequencies of the first formant. The direction of these deviations was independent of the semantic content of speech utterance and its duration, age, gender, and being actor or non-actor, though the personal features of the speakers affected the absolute values of these frequencies.
NASA Technical Reports Server (NTRS)
Walklet, T.
1981-01-01
The feasibility of a miniature versatile portable speech prosthesis (VPSP) was analyzed, and information on its potential users and on other similar devices was collected. The VPSP is a device that incorporates speech synthesis technology. The objective is to provide sufficient information to decide whether there is valuable technology to contribute to the miniaturization of the VPSP. The needs of potential users are identified, and the development status of technologies similar or related to those used in the VPSP is evaluated. The VPSP, a computer-based speech synthesis system, fits on a wheelchair. The purpose was to produce a device that provides communication assistance in educational, vocational, and social situations to speech-impaired individuals. It is expected that the VPSP can be a valuable aid for persons who are also motor impaired, which explains the placement of the system on a wheelchair.
Rhythmic patterning in Malaysian and Singapore English.
Tan, Rachel Siew Kuang; Low, Ee-Ling
2014-06-01
Previous work on the rhythm of Malaysian English has been based on impressionistic observations. This paper utilizes acoustic analysis to measure the rhythmic patterns of Malaysian English. Recordings of the read speech and spontaneous speech of 10 Malaysian English speakers were analyzed and compared with recordings of an equivalent sample of Singaporean English speakers. Analysis was done using two rhythmic indexes, the PVI and VarcoV. It was found that although the rhythm of the read speech of the Singaporean speakers was syllable-based, as described in previous studies, the rhythm of the Malaysian speakers was even more syllable-based. Analysis of the syllables in specific utterances showed that Malaysian speakers did not reduce vowels as much as Singaporean speakers did. Results for the spontaneous speech confirmed the findings for the read speech; that is, the same rhythmic patterning, which normally triggers vowel reduction, was found.
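Both indexes are simple functions of the durations of successive vocalic intervals; a sketch under their standard definitions (the normalized PVI, and VarcoV as a scaled coefficient of variation):

```python
import numpy as np

def npvi(durations):
    """Normalized Pairwise Variability Index over successive interval
    durations d1..dn: the mean of |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2),
    scaled by 100. Higher values indicate more stress-timed rhythm."""
    d = np.asarray(durations, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2.0))

def varco_v(durations):
    """VarcoV: the coefficient of variation of vocalic interval durations
    (standard deviation over mean), scaled by 100."""
    d = np.asarray(durations, dtype=float)
    return 100.0 * d.std() / d.mean()
```

Lower values on both indexes indicate more uniform vowel durations, i.e., more syllable-based rhythm, which is the direction of the Malaysian-Singaporean difference reported here.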
Telephone-quality pathological speech classification using empirical mode decomposition.
Kaleem, M F; Ghoraani, B; Guergachi, A; Krishnan, S
2011-01-01
This paper presents a computationally simple and effective methodology based on empirical mode decomposition (EMD) for classification of telephone-quality normal and pathological speech signals. EMD is used to decompose continuous normal and pathological speech signals into intrinsic mode functions, which are analyzed to extract physically meaningful and unique temporal and spectral features. Using continuous speech samples from a database of 51 normal and 161 pathological speakers, modified to simulate telephone-quality speech under different levels of noise, a linear classifier applied to the resulting feature vectors achieves high classification accuracy, demonstrating the effectiveness of the methodology. The classification accuracy reported in this paper (89.7% for a signal-to-noise ratio of 30 dB) is a significant improvement over previously reported results for the same task and demonstrates the utility of the methodology for cost-effective remote voice pathology assessment over telephone channels.
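A sketch of the decomposition step, assuming the third-party PyEMD package (distributed on PyPI as EMD-signal); the per-IMF energy and peak-frequency features below are illustrative stand-ins for the paper's feature set:

```python
import numpy as np
from PyEMD import EMD  # third-party package; pip install EMD-signal

def imf_features(signal, fs):
    """Decompose a signal into intrinsic mode functions (IMFs) via EMD
    and extract one temporal and one spectral descriptor per IMF."""
    signal = np.asarray(signal, dtype=float)
    imfs = EMD().emd(signal)
    feats = []
    for imf in imfs:
        energy = float(np.sum(imf ** 2))            # temporal: IMF energy
        spectrum = np.abs(np.fft.rfft(imf))
        freqs = np.fft.rfftfreq(imf.size, d=1.0 / fs)
        peak_freq = freqs[np.argmax(spectrum)]      # spectral: dominant frequency
        feats.extend([energy, peak_freq])
    return np.array(feats)
```

Feature vectors of this kind, computed per speech sample, would then be fed to a linear classifier such as linear discriminant analysis.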
Statistical Analysis of Spectral Properties and Prosodic Parameters of Emotional Speech
NASA Astrophysics Data System (ADS)
Přibil, J.; Přibilová, A.
2009-01-01
The paper addresses the reflection of microintonation and spectral properties in male and female acted emotional speech. The microintonation component of speech melody is analyzed with regard to its spectral and statistical parameters. According to psychological research on emotional speech, different emotions are accompanied by different spectral noise. We control its amount by spectral flatness, according to which high-frequency noise is mixed into voiced frames during cepstral speech synthesis. Our experiments are aimed at statistical analysis of cepstral coefficient values and ranges of spectral flatness in three emotions (joy, sadness, anger) and a neutral state for comparison. Calculated histograms of spectral flatness distribution are visually compared and modelled by the Gamma probability distribution. Histograms of cepstral coefficient distribution are evaluated and compared using skewness and kurtosis. The statistical results show good correlation between male and female voices for all emotional states portrayed by several Czech and Slovak professional actors.
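Spectral flatness, the quantity used here to control the amount of noise, is the ratio of the geometric mean to the arithmetic mean of the power spectrum. A minimal sketch for a single analysis frame:

```python
import numpy as np

def spectral_flatness(frame):
    """Spectral flatness: ratio of the geometric mean to the arithmetic
    mean of the power spectrum of a Hann-windowed frame. Values near 1
    indicate noise-like spectra; values near 0 indicate tonal spectra."""
    psd = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2 + 1e-12
    return np.exp(np.mean(np.log(psd))) / np.mean(psd)
```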
Dysarthria and broader motor speech deficits in Dravet syndrome.
Turner, Samantha J; Brown, Amy; Arpone, Marta; Anderson, Vicki; Morgan, Angela T; Scheffer, Ingrid E
2017-02-21
To analyze the oral motor, speech, and language phenotype in 20 children and adults with Dravet syndrome (DS) associated with mutations in SCN1A. Fifteen verbal and 5 minimally verbal DS patients with SCN1A mutations (aged 15 months-28 years) underwent a tailored assessment battery. Speech was characterized by imprecise articulation; abnormal nasal resonance, voice, and pitch; and prosody errors. Half of verbal patients had moderate to severely impaired conversational speech intelligibility. Oral motor impairment, motor planning/programming difficulties, and poor postural control were typical. Nonverbal individuals had intentional communication. Cognitive skills varied markedly, with intellectual functioning ranging from the low average range to severe intellectual disability. Language impairment was congruent with cognition. We describe a distinctive speech, language, and oral motor phenotype in children and adults with DS associated with mutations in SCN1A. Recognizing this phenotype will guide therapeutic intervention in patients with DS. © 2017 American Academy of Neurology.
Dong, Li; Kong, Jiangping; Sundberg, Johan
2014-07-01
Long-term-average spectrum (LTAS) characteristics were analyzed for ten Kunqu Opera singers, two in each of five roles. Each singer performed singing, stage speech, and conversational speech. Differences between the roles and between their performances of these three conditions are examined. After compensating for the Leq difference, LTAS characteristics still differ between the roles but are similar for the three conditions, especially for the Colorful face (CF) and Old man roles, and especially between reading and singing. The curves show no evidence of a singer's formant cluster peak, but the CF role demonstrates a speaker's formant peak near 3 kHz. The LTAS characteristics deviate markedly from non-singers' standard conversational speech as well as from those of Western opera singing.
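An LTAS of the kind compared here is the average of short-time power spectra taken over a whole recording. A minimal sketch, with an illustrative frame length:

```python
import numpy as np

def ltas_db(x, fs, frame_ms=40):
    """Long-term average spectrum: average the power spectra of all
    Hann-windowed, half-overlapping frames and convert to dB.
    Returns (frequencies in Hz, level in dB)."""
    x = np.asarray(x, dtype=float)
    n = int(fs * frame_ms / 1000)
    win = np.hanning(n)
    psds = [np.abs(np.fft.rfft(x[i:i + n] * win)) ** 2
            for i in range(0, len(x) - n, n // 2)]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, 10.0 * np.log10(np.mean(psds, axis=0) + 1e-12)
```

A singer's formant cluster would appear as a broad peak around 2.5-3.5 kHz in such a curve; the speaker's formant reported for the CF role is a peak of the same kind near 3 kHz.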
Microphone directionality, pre-emphasis filter, and wind noise in cochlear implants.
Chung, King; McKibben, Nicholas
2011-10-01
Wind noise can be a nuisance or a debilitating masker for cochlear implant users in outdoor environments. Previous studies indicated that wind noise at the microphone/hearing aid output had high levels of low-frequency energy and that the amount of noise generated is related to the microphone directionality. Currently, cochlear implants offer only directional microphones or omnidirectional microphones for users at large. As all cochlear implants utilize pre-emphasis filters to reduce low-frequency energy before the signal is encoded, effective wind noise reduction algorithms for hearing aids might not be applicable to cochlear implants. The purposes of this study were to investigate the effect of microphone directionality on speech recognition and perceived sound quality of cochlear implant users in wind noise and to derive effective wind noise reduction strategies for cochlear implants. A repeated-measures design was used to examine the effects of spectral and temporal masking created by wind noise recorded through directional and omnidirectional microphones and the effects of pre-emphasis filters on cochlear implant performance. A digital hearing aid was programmed to have linear amplification and relatively flat in-situ frequency responses for the directional and omnidirectional modes. The hearing aid output was then recorded from 0 to 360° at flow velocities of 4.5 and 13.5 m/sec in a quiet wind tunnel. Sixteen postlingually deafened adult cochlear implant listeners who reported being able to communicate on the phone with friends and family without text messages participated in the study. Cochlear implant users listened to speech in wind noise recorded at the locations at which the directional and omnidirectional microphones yielded the lowest noise levels. Cochlear implant listeners repeated the sentences and rated the sound quality of the testing materials. Spectral and temporal characteristics of flow noise, as well as speech and/or noise characteristics before and after the pre-emphasis filter, were analyzed. Correlation coefficients between speech recognition scores and crest factors of wind noise before and after pre-emphasis filtering were also calculated. Listeners obtained higher scores using the omnidirectional than the directional microphone mode at 13.5 m/sec, but they obtained similar speech recognition scores for the two microphone modes at 4.5 m/sec. Higher correlation coefficients were obtained between speech recognition scores and crest factors of wind noise after pre-emphasis filtering than before filtering. Cochlear implant users would benefit from both directional and omnidirectional microphones to reduce far-field background noise and near-field wind noise. Automatic microphone switching algorithms could be more effective if the incoming signal were analyzed after the pre-emphasis filter when making switching decisions. American Academy of Audiology.
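Two of the quantities the study relates, the crest factor of the wind noise and the pre-emphasis filtering applied before encoding, are simple to state in code. A sketch, using a conventional first-order pre-emphasis coefficient rather than any implant-specific value:

```python
import numpy as np

def pre_emphasis(x, coeff=0.97):
    """First-order high-pass pre-emphasis, y[n] = x[n] - a*x[n-1], as used
    in speech front-ends to attenuate low-frequency energy. The 0.97
    coefficient is a conventional value, not an implant's setting."""
    x = np.asarray(x, dtype=float)
    return np.append(x[0], x[1:] - coeff * x[:-1])

def crest_factor_db(x):
    """Crest factor: peak amplitude over RMS level, in dB. Impulsive
    signals such as gusty wind noise have high crest factors."""
    x = np.asarray(x, dtype=float)
    return 20.0 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

# The study's correlation question, in miniature: compare the crest factor
# of a recorded wind-noise segment before and after pre-emphasis, e.g.
#   cf_before = crest_factor_db(wind)
#   cf_after = crest_factor_db(pre_emphasis(wind))
```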
2013-03-20
Wakefield of the University of Michigan as Co-PI. This extended activity produced a large number of products and accomplishments; however, this report... Speech communication will be expanded to provide a robust modeling and prediction capability for tasks involving speech production and speech and non-... Preparations were made to move to the newer Cocoa API instead of the previous Carbon API. In the following sections, an extended treatment will be...
Non-right handed primary progressive apraxia of speech.
Botha, Hugo; Duffy, Joseph R; Whitwell, Jennifer L; Strand, Edythe A; Machulda, Mary M; Spychalla, Anthony J; Tosakulwong, Nirubol; Senjem, Matthew L; Knopman, David S; Petersen, Ronald C; Jack, Clifford R; Lowe, Val J; Josephs, Keith A
2018-07-15
In recent years a large and growing body of research has greatly advanced our understanding of primary progressive apraxia of speech. Handedness has emerged as one potential marker of selective vulnerability in degenerative diseases. This study evaluated the clinical and imaging findings in non-right handed compared to right handed participants in a prospective cohort diagnosed with primary progressive apraxia of speech. A total of 30 participants were included. Compared to the expected rate in the population, there was a higher prevalence of non-right handedness among those with primary progressive apraxia of speech (6/30, 20%). Small group numbers meant that these results did not reach statistical significance, although the effect sizes were moderate-to-large. There were no clinical differences between right handed and non-right handed participants. Bilateral hypometabolism was seen in primary progressive apraxia of speech compared to controls, with non-right handed participants showing more right hemispheric involvement. This is the first report of a higher rate of non-right handedness in participants with isolated apraxia of speech, which may point to an increased vulnerability for developing this disorder among non-right handed participants. This challenges prior hypotheses about a relative protective effect of non-right handedness for tau-related neurodegeneration. We discuss potential avenues for future research to investigate the relationship between handedness and motor disorders more generally. Copyright © 2018 Elsevier B.V. All rights reserved.
Approaching the Linguistic Complexity
NASA Astrophysics Data System (ADS)
Drożdż, Stanisław; Kwapień, Jarosław; Orczyk, Adam
We analyze the rank-frequency distributions of words in selected English and Polish texts. We compare scaling properties of these distributions in both languages. We also study a few small corpora of Polish literary texts and find that for a corpus consisting of texts written by different authors the basic scaling regime is broken more strongly than in the case of comparable corpus consisting of texts written by the same author. Similarly, for a corpus consisting of texts translated into Polish from other languages the scaling regime is broken more strongly than for a comparable corpus of native Polish texts. Moreover, based on the British National Corpus, we consider the rank-frequency distributions of the grammatically basic forms of words (lemmas) tagged with their proper part of speech. We find that these distributions do not scale if each part of speech is analyzed separately. The only part of speech that independently develops a trace of scaling is verbs.
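The scaling analysis described here amounts to fitting a power law to the word rank-frequency distribution; a minimal sketch that estimates the Zipf exponent by least squares on the log-log ranks:

```python
import re
from collections import Counter
import numpy as np

def zipf_exponent(text, max_rank=1000):
    """Rank-frequency analysis: count word frequencies, sort them in
    descending order, and fit log(freq) = c - s*log(rank) over the top
    ranks. Zipf's law corresponds to an exponent s close to 1."""
    words = re.findall(r"[a-z]+", text.lower())
    freqs = np.sort(np.array(list(Counter(words).values()), dtype=float))[::-1]
    freqs = freqs[:max_rank]
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope
```

Restricting the count to lemmas of a single part of speech, as in the analysis described above, is then just a matter of filtering the token list by POS tag before counting.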
Large Vocabulary Audio-Visual Speech Recognition
2002-06-12
www.is.cs.cmu.edu; Email: waibel@cs.cmu.edu. Interactive Systems Labs. Meeting Browser -- Interpreting Human Communication: "Why did..." Focus of Attention Tracking. Conclusion: a complete model of human communication is needed -- include all...
Differentiating primary progressive aphasias in a brief sample of connected speech
Evans, Emily; O'Shea, Jessica; Powers, John; Boller, Ashley; Weinberg, Danielle; Haley, Jenna; McMillan, Corey; Irwin, David J.; Rascovsky, Katya; Grossman, Murray
2013-01-01
Objective: A brief speech expression protocol that can be administered and scored without special training would aid in the differential diagnosis of the 3 principal forms of primary progressive aphasia (PPA): nonfluent/agrammatic PPA, logopenic variant PPA, and semantic variant PPA. Methods: We used a picture-description task to elicit a short speech sample, and we evaluated impairments in speech-sound production, speech rate, lexical retrieval, and grammaticality. We compared the results with those obtained by a longer, previously validated protocol and further validated performance with multimodal imaging to assess the neuroanatomical basis of the deficits. Results: We found different patterns of impaired grammar in each PPA variant, and additional language production features were impaired in each: nonfluent/agrammatic PPA was characterized by speech-sound errors; logopenic variant PPA by dysfluencies (false starts and hesitations); and semantic variant PPA by poor retrieval of nouns. Strong correlations were found between this brief speech sample and a lengthier narrative speech sample. A composite measure of grammaticality and other measures of speech production were correlated with distinct regions of gray matter atrophy and reduced white matter fractional anisotropy in each PPA variant. Conclusions: These findings provide evidence that large-scale networks are required for fluent, grammatical expression; that these networks can be selectively disrupted in PPA syndromes; and that quantitative analysis of a brief speech sample can reveal the corresponding distinct speech characteristics. PMID:23794681
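To illustrate how a brief sample might be scored without special training, the toy sketch below computes a speech rate and a dysfluency count from a timed transcript; the transcription conventions it assumes (a trailing '-' for false starts, 'um'/'uh' for filled pauses) are hypothetical, not the protocol's.

```python
import re

def speech_sample_measures(transcript, duration_s):
    """Toy quantification of a picture-description sample: speech rate in
    words per minute plus a count of marked dysfluencies. Assumes false
    starts carry a trailing '-' (e.g., 'th- the') and filled pauses are
    written 'um'/'uh'; these conventions are illustrative only."""
    tokens = transcript.lower().split()
    dysfluencies = sum(t.rstrip(".,?!").endswith("-")
                       or t.strip(".,?!") in ("um", "uh")
                       for t in tokens)
    n_words = len(re.findall(r"[a-z']+", transcript.lower()))
    return {"words_per_minute": 60.0 * n_words / duration_s,
            "dysfluencies": dysfluencies}
```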
Role of Visual Speech in Phonological Processing by Children With Hearing Loss
Jerger, Susan; Tye-Murray, Nancy; Abdi, Hervé
2011-01-01
Purpose This research assessed the influence of visual speech on phonological processing by children with hearing loss (HL). Method Children with HL and children with normal hearing (NH) named pictures while attempting to ignore auditory or audiovisual speech distractors whose onsets relative to the pictures were either congruent, conflicting in place of articulation, or conflicting in voicing—for example, the picture “pizza” coupled with the distractors “peach,” “teacher,” or “beast,” respectively. Speed of picture naming was measured. Results The conflicting conditions slowed naming, and phonological processing by children with HL displayed the age-related shift in sensitivity to visual speech seen in children with NH, although with developmental delay. Younger children with HL exhibited a disproportionately large influence of visual speech and a negligible influence of auditory speech, whereas older children with HL showed a robust influence of auditory speech with no benefit to performance from adding visual speech. The congruent conditions did not speed naming in children with HL, nor did the addition of visual speech influence performance. Unexpectedly, the /ʌ/-vowel congruent distractors slowed naming in children with HL and decreased articulatory proficiency. Conclusions Results for the conflicting conditions are consistent with the hypothesis that speech representations in children with HL (a) are initially disproportionally structured in terms of visual speech and (b) become better specified with age in terms of auditorily encoded information. PMID:19339701
Ultrasound analysis of tongue contour for the sound [j] in adults and children.
Barberena, Luciana da Silva; Simoni, Simone Nicolini de; Souza, Rosalina Correa Sobrinho de; Moraes, Denis Altieri de Oliveira; Berti, Larissa Cristina; Keske-Soares, Márcia
2017-12-11
To analyze and compare the mean tongue contours and articulatory gestures in the production of the sound [j] in adults and children with typical and atypical speech development. The children with atypical development presented speech sound disorders; the diagnosis was determined by speech assessments. The study sample was composed of 90 individuals divided into three groups: 30 adults with typical speech development aged 19-44 years (AT), 30 children with typical speech development (CT), and 30 children with speech sound disorders, referred to as atypical in this study, aged four years to eight years and eleven months (CA). Ultrasonographic assessment of tongue movements was performed for all groups. Mean tongue contours were compared among the three groups in different vocalic contexts following the sound [j]. The maximum elevation of the tongue tip was used to delimit gestures in the Articulate Assistant Advanced (AAA) software, with images in the sagittal plane/B-mode. The points intercepting the tongue curves were analyzed with the statistical software R. The graphs of tongue contours were obtained using a 95% confidence interval. The regions with statistically significant differences (p<0.05) between the CT and CA groups were then identified. The mean tongue contours demonstrated the gesture for the sound [j] in the comparison between typical and atypical children. For the semivowel [j], there is an articulatory gesture of the tongue and dorsum towards the center of the hard palate, with significant differences observed between the children. The results showed differences between the groups of children regarding the ability to refine articulatory gestures.
Narayanan, Shrikanth; Georgiou, Panayiotis G
2013-02-07
The expression and experience of human behavior are complex and multimodal and characterized by individual and contextual heterogeneity and variability. Speech and spoken language communication cues offer an important means for measuring and modeling human behavior. Observational research and practice across a variety of domains from commerce to healthcare rely on speech- and language-based informatics for crucial assessment and diagnostic information and for planning and tracking response to an intervention. In this paper, we describe some of the opportunities as well as emerging methodologies and applications of human behavioral signal processing (BSP) technology and algorithms for quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on speech- and language-based communicative, affective, and social behavior. We describe the three important BSP components of acquiring behavioral data in an ecologically valid manner across laboratory to real-world settings, extracting and analyzing behavioral cues from measured data, and developing models offering predictive and decision-making support. We highlight both the foundational speech and language processing building blocks as well as the novel processing and modeling opportunities. Using examples drawn from specific real-world applications ranging from literacy assessment and autism diagnostics to psychotherapy for addiction and marital well being, we illustrate behavioral informatics applications of these signal processing techniques that contribute to quantifying higher level, often subjectively described, human behavior in a domain-sensitive fashion.
Vandewalle, Ellen; Boets, Bart; Ghesquière, Pol; Zink, Inge
2012-01-01
This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of children aged 6 years 3 months to 6 years 8 months attending grade 1: (1) children with specific language impairment (SLI) and literacy delay (n = 8), (2) children with SLI and normal literacy (n = 10) and (3) typically developing children (n = 14). Moreover, the relations between these auditory processing and speech perception skills and oral language and literacy skills in grade 1 and grade 3 were analyzed. The SLI group with literacy delay scored significantly lower than both other groups on speech perception, but not on temporal auditory processing. The two normal reading groups did not differ in terms of speech perception or auditory processing. Speech perception was significantly related to reading and spelling in grades 1 and 3 and made a unique predictive contribution to reading growth in grade 3, even after controlling for reading level, phonological ability, auditory processing and oral language skills in grade 1. These findings indicated that speech perception had a unique direct impact upon reading development, not only an indirect one through its relation with phonological awareness. Moreover, speech perception seemed to be more associated with the development of literacy skills and less with oral language ability. Copyright © 2011 Elsevier Ltd. All rights reserved.
Speech in 10-Year-Olds Born With Cleft Lip and Palate: What Do Peers Say?
Nyberg, Jill; Havstam, Christina
2016-09-01
The aim of this study was to explore how 10-year-olds describe, in their own words, speech and communicative participation in children born with unilateral cleft lip and palate, whether they perceive signs of velopharyngeal insufficiency (VPI) and articulation errors of different degrees, and, if so, which terminology they use. Methods/Participants: Nineteen 10-year-olds participated in three focus group interviews where they listened to 10 to 12 speech samples with different types of cleft speech characteristics assessed by speech and language pathologists (SLPs) and described what they heard. The interviews were transcribed and analyzed with qualitative content analysis. The analysis resulted in three interlinked categories encompassing different aspects of speech, personality, and social implications: descriptions of speech, thoughts on causes and consequences, and emotional reactions and associations. Each category contains four subcategories exemplified with quotes from the children's statements. More pronounced signs of VPI were perceived but referred to in terms relevant to 10-year-olds. Articulatory difficulties, even minor ones, were noted. Peers reflected on the risk of teasing and bullying and on how children with impaired speech might experience their situation. The SLPs and peers did not agree on minor signs of VPI, but they were unanimous in their analysis of clinically normal and more severely impaired speech. Articulatory impairments may be more important to treat than minor signs of VPI, based on what peers say.
Calvarial periosteal graft for second-stage cleft palate surgery: a preliminary report.
Neiva, Cecilia; Dakpe, Stephanie; Gbaguidi, Cica; Testelin, Sylvie; Devauchelle, Bernard
2014-07-01
The objectives of cleft palate surgery are to achieve optimal outcomes regarding speech development, hearing, maxillary arch development and facial skull growth. Early two-stage cleft palate repair has been the most recent protocol of choice to achieve good maxillary arch growth without compromising speech development. Hard palate closure occurs within one year of soft palate surgery. However, in some cases the residual hard palate cleft width is larger than 15 mm at the age of two. As previously reported, integrated speech development starts around that age, and this is a challenge since we know that early mobilization of the mucoperiosteum interferes with normal facial growth in the long term. In children with large residual hard palate clefts at the age of 2, we report the use of calvarial periosteal grafts to close the cleft. In a retrospective 6-year study (2006-2012), we first analyzed the outcomes regarding impermeability of hard palate closure in 45 patients who at the age of two presented with a residual cleft of the hard palate larger than 15 mm and benefited from a periosteal graft. We then studied the maxillary growth in these children. In order to compare long-term results, we included 14 patients (age range: 8-20) treated between 1994 and 2006. Two analyses were conducted, the first on dental casts from birth to the age of 6 and the other based on lateral cephalograms following Delaire's principles, using TRIDIM software. After the systematic cephalometric analysis of 14 patients, we found no evidence of retrognathia or Class 3 dental malocclusion. In the population of 45 children who benefited from calvarial periosteal grafts, the rate of palate fistula was 17% vs. 10% in the overall series. Despite major advances in understanding cleft defects, the issues of timing and choice of the surgical procedure remain widely debated. In second-stage surgery for hard palate closure, using a calvarial periosteal graft could be the solution for large residual clefts, encouraging proper maxillary arch growth without compromising adequate speech development. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Ultrasound applicability in Speech Language Pathology and Audiology.
Barberena, Luciana da Silva; Brasil, Brunah de Castro; Melo, Roberta Michelon; Mezzomo, Carolina Lisbôa; Mota, Helena Bolli; Keske-Soares, Márcia
2014-01-01
To present recent studies that used ultrasound in the fields of Speech Language Pathology and Audiology, which evidence possibilities of the applicability of this technique in different subareas. A bibliographic search was carried out in the PubMed database, using the keywords "ultrasonic," "speech," "phonetics," "Speech, Language and Hearing Sciences," "voice," "deglutition," and "myofunctional therapy," comprising some areas of Speech Language Pathology and Audiology Sciences. The keywords "ultrasound," "ultrasonography," "swallow," "orofacial myofunctional therapy," and "orofacial myology" were also used in the search. Studies in humans from the past 5 years were selected. In the preselection, duplicated studies, articles not fully available, and those that did not present a direct relation between ultrasound and Speech Language Pathology and Audiology Sciences were discarded. The data were analyzed descriptively and classified into subareas of Speech Language Pathology and Audiology Sciences. The following items were considered: purposes, participants, procedures, and results. We selected 12 articles for the ultrasound versus speech/phonetics subarea, 5 for ultrasound versus voice, 1 for ultrasound versus muscles of mastication, and 10 for ultrasound versus swallow. Studies relating "ultrasound" and "Speech Language Pathology and Audiology Sciences" in the past 5 years were not found. Different studies on the use of ultrasound in Speech Language Pathology and Audiology Sciences were found. Each of them, according to its purpose, confirms new possibilities for the use of this instrument in the several subareas, aiming at a more accurate diagnosis and new evaluative and therapeutic possibilities.
Kraaijenga, S A C; Oskam, I M; van Son, R J J H; Hamming-Vrieze, O; Hilgers, F J M; van den Brekel, M W M; van der Molen, L
2016-04-01
Assessment of long-term objective and subjective voice, speech, articulation, and quality of life in patients with head and neck cancer (HNC) treated with concurrent chemoradiotherapy (CRT) for advanced, stage IV disease. Twenty-two disease-free survivors, treated with cisplatin-based CRT for inoperable HNC (1999-2004), were evaluated at 10 years post-treatment. A standard Dutch text was recorded. Perceptual analysis of voice, speech, and articulation was conducted by two expert listeners (SLPs). An experimental expert system based on automatic speech recognition was also used. Patients' perception of voice and speech and related quality of life was assessed with the Voice Handicap Index (VHI) and Speech Handicap Index (SHI) questionnaires. At a median follow-up of 11 years, perceptual evaluation showed abnormal scores in up to 64% of cases, depending on the outcome parameter analyzed. Automatic assessment of voice and speech parameters correlated moderately to strongly with perceptual outcome scores. Patient-reported problems with voice (VHI>15) and speech (SHI>6) in daily life were present in 68% and 77% of patients, respectively. Patients treated with IMRT showed significantly less impairment compared to those treated with conventional radiotherapy. More than 10 years after organ-preservation treatment, voice and speech problems are common in this patient cohort, as assessed with perceptual evaluation, automatic speech recognition, and validated structured questionnaires. There were fewer complaints in patients treated with IMRT than with conventional radiotherapy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Turner, Samantha J.; Mayes, Angela K.; Verhoeven, Andrea; Mandelstam, Simone A.; Morgan, Angela T.
2015-01-01
Objective: To delineate the specific speech deficits in individuals with epilepsy-aphasia syndromes associated with mutations in the glutamate receptor subunit gene GRIN2A. Methods: We analyzed the speech phenotype associated with GRIN2A mutations in 11 individuals, aged 16 to 64 years, from 3 families. Standardized clinical speech assessments and perceptual analyses of conversational samples were conducted. Results: Individuals showed a characteristic phenotype of dysarthria and dyspraxia with lifelong impact on speech intelligibility in some. Speech was typified by imprecise articulation (11/11, 100%), impaired pitch (monopitch 10/11, 91%) and prosody (stress errors 7/11, 64%), and hypernasality (7/11, 64%). Oral motor impairments and poor performance on maximum vowel duration (8/11, 73%) and repetition of monosyllables (10/11, 91%) and trisyllables (7/11, 64%) supported conversational speech findings. The speech phenotype was present in one individual who did not have seizures. Conclusions: Distinctive features of dysarthria and dyspraxia are found in individuals with GRIN2A mutations, often in the setting of epilepsy-aphasia syndromes; dysarthria has not been previously recognized in these disorders. Of note, the speech phenotype may occur in the absence of a seizure disorder, reinforcing an important role for GRIN2A in motor speech function. Our findings highlight the need for precise clinical speech assessment and intervention in this group. By understanding the mechanisms involved in GRIN2A disorders, targeted therapy may be designed to improve chronic lifelong deficits in intelligibility. PMID:25596506
Brumberg, Jonathan S.; Krusienski, Dean J.; Chakrabarti, Shreya; Gunduz, Aysegul; Brunner, Peter; Ritaccio, Anthony L.; Schalk, Gerwin
2016-01-01
How the human brain plans, executes, and monitors continuous and fluent speech has remained largely elusive. For example, previous research has defined the cortical locations most important for different aspects of speech function, but has not yet yielded a definition of the temporal progression of involvement of those locations as speech progresses either overtly or covertly. In this paper, we uncovered the spatio-temporal evolution of neuronal population-level activity related to continuous overt speech, and identified those locations that shared activity characteristics across overt and covert speech. Specifically, we asked subjects to repeat continuous sentences aloud or silently while we recorded electrical signals directly from the surface of the brain (electrocorticography (ECoG)). We then determined the relationship between cortical activity and speech output across different areas of cortex and at sub-second timescales. The results highlight a spatio-temporal progression of cortical involvement in the continuous speech process that initiates utterances in frontal-motor areas and ends with the monitoring of auditory feedback in superior temporal gyrus. Direct comparison of cortical activity related to overt versus covert conditions revealed a common network of brain regions involved in speech that may implement orthographic and phonological processing. Our results provide one of the first characterizations of the spatiotemporal electrophysiological representations of the continuous speech process, and also highlight the common neural substrate of overt and covert speech. These results thereby contribute to a refined understanding of speech functions in the human brain. PMID:27875590
Vogel, Adam P; Shirbin, Christopher; Churchyard, Andrew J; Stout, Julie C
2012-12-01
Speech disturbances (e.g., altered prosody) have been described in symptomatic Huntington's disease (HD) individuals; however, the extent to which speech changes in gene-positive pre-manifest (PreHD) individuals is largely unknown. The speech of individuals carrying the mutant HTT gene is a behavioural/motor/cognitive marker with some potential as an objective indicator of early HD onset and disease progression. Speech samples were acquired from 30 individuals carrying the mutant HTT gene (13 PreHD, 17 early stage HD) and 15 matched controls. Participants read a passage, produced a monologue and said the days of the week. Data were analysed acoustically for measures of timing, frequency and intensity. There was a clear effect of group across most acoustic measures, such that speech performance differed in line with disease progression. Comparisons across groups revealed significant differences between the control and the early stage HD group on measures of timing (e.g., speech rate). Participants carrying the mutant HTT gene presented with slower rates of speech, took longer to say words and produced greater silences between and within words compared to healthy controls. Importantly, speech rate showed a significant correlation with burden of disease scores. The speech of early stage HD differed significantly from controls. The speech of PreHD, although not reaching significance, tended to lie between the performance of controls and early stage HD. This suggests that changes in speech production appear to develop prior to diagnosis. Copyright © 2012 Elsevier Ltd. All rights reserved.
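Timing measures of the kind analyzed here (speech rate, silences between and within words) can be derived from an energy-based silence detector; the threshold and frame size below are illustrative, not the study's settings.

```python
import numpy as np

def timing_measures(x, fs, frame_ms=20, silence_db=-40.0):
    """Energy-based timing analysis: label frames as silent when their
    RMS falls below a threshold relative to the signal peak, then report
    the proportion of silence and the mean pause duration."""
    x = np.asarray(x, dtype=float)
    n = int(fs * frame_ms / 1000)
    frames = x[:len(x) // n * n].reshape(-1, n)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    level_db = 20.0 * np.log10(rms / (np.max(np.abs(x)) + 1e-12) + 1e-12)
    silent = level_db < silence_db
    # Run-length encode the silence mask to measure individual pauses.
    edges = np.diff(np.concatenate(([0], silent.astype(int), [0])))
    pauses_ms = (np.flatnonzero(edges == -1) - np.flatnonzero(edges == 1)) * frame_ms
    return {"percent_silence": 100.0 * silent.mean(),
            "mean_pause_ms": float(pauses_ms.mean()) if pauses_ms.size else 0.0}
```

Speech rate in syllables per second would additionally require counting syllable nuclei (e.g., intensity peaks in the non-silent regions), which this sketch omits.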
Motor speech signature of behavioral variant frontotemporal dementia: Refining the phenotype.
Vogel, Adam P; Poole, Matthew L; Pemberton, Hugh; Caverlé, Marja W J; Boonstra, Frederique M C; Low, Essie; Darby, David; Brodtmann, Amy
2017-08-22
To provide a comprehensive description of motor speech function in behavioral variant frontotemporal dementia (bvFTD). Forty-eight individuals (24 bvFTD and 24 age- and sex-matched healthy controls) provided speech samples. These varied in complexity and thus cognitive demand. Their language was assessed using the Progressive Aphasia Language Scale and verbal fluency tasks. Speech was analyzed perceptually to describe the nature of deficits and acoustically to quantify differences between patients with bvFTD and healthy controls. Cortical thickness and subcortical volume derived from MRI scans were correlated with speech outcomes in patients with bvFTD. Speech of affected individuals was significantly different from that of healthy controls. The speech signature of patients with bvFTD is characterized by a reduced rate (75%) and accuracy (65%) on alternating syllable production tasks, and prosodic deficits including reduced speech rate (45%), prolonged intervals (54%), and use of short phrases (41%). Groups differed on acoustic measures derived from the reading, unprepared monologue, and diadochokinetic tasks but not the days of the week or sustained vowel tasks. Variability of silence length was associated with cortical thickness of the inferior frontal gyrus and insula and speech rate with the precentral gyrus. One in 8 patients presented with moderate speech timing deficits with a further two-thirds rated as mild or subclinical. Subtle but measurable deficits in prosody are common in bvFTD and should be considered during disease management. Language function correlated with speech timing measures derived from the unprepared monologue only. © 2017 American Academy of Neurology.
A Window into the Intoxicated Mind? Speech as an Index of Psychoactive Drug Effects
Bedi, Gillinder; Cecchi, Guillermo A; Slezak, Diego F; Carrillo, Facundo; Sigman, Mariano; de Wit, Harriet
2014-01-01
Abused drugs can profoundly alter mental states in ways that may motivate drug use. These effects are usually assessed with self-report, an approach that is vulnerable to biases. Analyzing speech during intoxication may present a more direct, objective measure, offering a unique 'window' into the mind. Here, we employed computational analyses of speech semantic and topological structure after ±3,4-methylenedioxymethamphetamine (MDMA; 'ecstasy') and methamphetamine in 13 ecstasy users. In 4 sessions, participants completed a 10-min speech task after MDMA (0.75 and 1.5 mg/kg), methamphetamine (20 mg), or placebo. Latent Semantic Analyses identified the semantic proximity between speech content and concepts relevant to drug effects. Graph-based analyses identified topological speech characteristics. Group-level drug effects on semantic distances and topology were assessed. Machine-learning analyses (with leave-one-out cross-validation) assessed whether speech characteristics could predict drug condition in the individual subject. Speech after MDMA (1.5 mg/kg) had greater semantic proximity than placebo to the concepts friend, support, intimacy, and rapport. Speech on MDMA (0.75 mg/kg) had greater proximity to empathy than placebo. Conversely, speech on methamphetamine was further from compassion than placebo. Classifiers discriminated between MDMA (1.5 mg/kg) and placebo with 88% accuracy, and MDMA (1.5 mg/kg) and methamphetamine with 84% accuracy. For the two MDMA doses, the classifier performed at chance. These data suggest that automated semantic speech analyses can capture subtle alterations in mental state, accurately discriminating between drugs. The findings also illustrate the potential for automated speech-based approaches to characterize clinically relevant alterations to mental state, including those occurring in psychiatric illness. PMID:24694926
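The semantic-proximity measure described above is, at its core, a cosine similarity between a transcript and a probe concept in a reduced latent space. A toy sketch of that idea with scikit-learn; the five-document corpus, the dimensionality, and the probe words are placeholders, not the authors' materials:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Tiny background corpus standing in for the large corpus an LSA space needs
corpus = [
    "a friend offers support and intimacy",
    "rapport and empathy between people",
    "the weather was cold and windy",
    "compassion for others in need",
    "stock prices fell sharply today",
]
transcript = "i felt close to my friend who gave me support"

vec = TfidfVectorizer().fit(corpus)
svd = TruncatedSVD(n_components=3, random_state=0).fit(vec.transform(corpus))

def embed(text):
    return svd.transform(vec.transform([text]))

for concept in ["friend", "support", "empathy"]:
    sim = cosine_similarity(embed(transcript), embed(concept))[0, 0]
    print(f"proximity to '{concept}': {sim:.3f}")
```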
Schaadt, Gesa; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Männel, Claudia
2018-01-17
During information processing, individuals benefit from bimodally presented input, as has been demonstrated for speech perception (i.e., printed letters and speech sounds) or the perception of emotional expressions (i.e., facial expression and voice tuning). While typically developing individuals show this bimodal benefit, school children with dyslexia do not. Currently, it is unknown whether the bimodal processing deficit in dyslexia also occurs for visual-auditory speech processing that is independent of reading and spelling acquisition (i.e., no letter-sound knowledge is required). Here, we tested school children with and without spelling problems on their bimodal perception of video-recorded mouth movements pronouncing syllables. We analyzed the event-related potential Mismatch Response (MMR) to visual-auditory speech information and compared this response to the MMR to monomodal speech information (i.e., auditory-only, visual-only). We found a reduced MMR with later onset to visual-auditory speech information in children with spelling problems compared to children without spelling problems. Moreover, when comparing bimodal and monomodal speech perception, we found that children without spelling problems showed significantly larger responses in the visual-auditory experiment compared to the visual-only response, whereas children with spelling problems did not. Our results suggest that children with dyslexia exhibit general difficulties in bimodal speech perception independently of letter-speech sound knowledge, as apparent in altered bimodal speech perception and a lack of benefit from bimodal information. This general deficit in children with dyslexia may underlie the previously reported reduced bimodal benefit for letter-speech sound combinations and similar findings in emotion perception. Copyright © 2018 Elsevier Ltd. All rights reserved.
Reilly, Kevin J.; Spencer, Kristie A.
2013-01-01
The current study investigated the processes responsible for selection of sounds and syllables during production of speech sequences in 10 adults with hypokinetic dysarthria from Parkinson’s disease, five adults with ataxic dysarthria, and 14 healthy control speakers. Speech production data from a choice reaction time task were analyzed to evaluate the effects of sequence length and practice on speech sound sequencing. Speakers produced sequences that were between one and five syllables in length over five experimental runs of 60 trials each. In contrast to the healthy speakers, speakers with hypokinetic dysarthria demonstrated exaggerated sequence length effects for both inter-syllable intervals (ISIs) and speech error rates. Conversely, speakers with ataxic dysarthria failed to demonstrate a sequence length effect on ISIs and were also the only group that did not exhibit practice-related changes in ISIs and speech error rates over the five experimental runs. The exaggerated sequence length effects in the hypokinetic speakers with Parkinson’s disease are consistent with an impairment of action selection during speech sequence production. The absence of length effects in the speakers with ataxic dysarthria is consistent with previous findings that indicate a limited capacity to buffer speech sequences in advance of their execution. In addition, the lack of practice effects in these speakers suggests that learning-related improvements in the production rate and accuracy of speech sequences involve processing by structures of the cerebellum. Together, the current findings inform models of serial control for speech in healthy speakers and support the notion that sequencing deficits contribute to speech symptoms in speakers with hypokinetic or ataxic dysarthria. In addition, these findings indicate that speech sequencing is differentially impaired in hypokinetic and ataxic dysarthria. PMID:24137121
Speech and Voice Response to a Levodopa Challenge in Late-Stage Parkinson's Disease.
Fabbri, Margherita; Guimarães, Isabel; Cardoso, Rita; Coelho, Miguel; Guedes, Leonor Correia; Rosa, Mario M; Godinho, Catarina; Abreu, Daisy; Gonçalves, Nilza; Antonini, Angelo; Ferreira, Joaquim J
2017-01-01
Parkinson's disease (PD) patients are affected by hypokinetic dysarthria, characterized by hypophonia and dysprosody, which worsens with disease progression. Levodopa's (l-dopa) effect on quality of speech is inconclusive; no data are currently available for late-stage PD (LSPD). To assess the modifications of speech and voice in LSPD following an acute l-dopa challenge. LSPD patients [Schwab and England score <50/Hoehn and Yahr stage >3 (MED ON)] performed several vocal tasks before and after an acute l-dopa challenge. The following was assessed: respiratory support for speech, voice quality, stability and variability, speech rate, and motor performance (MDS-UPDRS-III). All voice samples were recorded and analyzed by a speech and language therapist blinded to patients' therapeutic condition using Praat 5.1 software. Twenty-four of 27 LSPD patients (14 men) succeeded in performing the voice tasks. Median age and disease duration of patients were 79 [IQR: 71.5-81.7] and 14.5 [IQR: 11-15.7] years, respectively. In MED OFF, respiratory support for speech and pitch break time of LSPD patients were worse than normative values for non-parkinsonian speakers. A correlation was found between disease duration and voice quality (R = 0.51; p = 0.013) and speech rate (R = -0.55; p = 0.008). l-Dopa significantly improved the MDS-UPDRS-III score (20%), with no effect on speech as assessed by clinical rating scales and automated analysis. Speech is severely affected in LSPD. Although l-dopa had some effect on motor performance, including axial signs, speech and voice did not improve. The applicability and efficacy of non-pharmacological treatment for speech impairment should be considered for speech disorder management in PD.
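Praat analyses of the kind used here can be scripted from Python through the parselmouth library, which exposes Praat's pitch and intensity objects. A sketch on a synthetic 120-Hz tone standing in for a patient recording; the library choice and settings are illustrative, not the study's protocol:

```python
import numpy as np
import parselmouth

fs = 44100
t = np.linspace(0, 1, fs, endpoint=False)
# Synthetic "vowel": a 120-Hz fundamental plus one harmonic
wave = 0.6 * np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)

snd = parselmouth.Sound(wave, sampling_frequency=fs)
f0 = snd.to_pitch().selected_array['frequency']
f0 = f0[f0 > 0]                                # keep voiced frames only
intensity = snd.to_intensity()

print(f"median F0: {np.median(f0):.1f} Hz")
print(f"F0 variability (SD): {np.std(f0):.2f} Hz")
print(f"mean intensity: {intensity.values.mean():.1f} dB")
```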
Acoustic properties of naturally produced clear speech at normal speaking rates
NASA Astrophysics Data System (ADS)
Krause, Jean C.; Braida, Louis D.
2004-01-01
Sentences spoken "clearly" are significantly more intelligible than those spoken "conversationally" for hearing-impaired listeners in a variety of backgrounds [Picheny et al., J. Speech Hear. Res. 28, 96-103 (1985); Uchanski et al., ibid. 39, 494-509 (1996); Payton et al., J. Acoust. Soc. Am. 95, 1581-1592 (1994)]. While producing clear speech, however, talkers often reduce their speaking rate significantly [Picheny et al., J. Speech Hear. Res. 29, 434-446 (1986); Uchanski et al., ibid. 39, 494-509 (1996)]. Yet speaking slowly is not solely responsible for the intelligibility benefit of clear speech (over conversational speech), since a recent study [Krause and Braida, J. Acoust. Soc. Am. 112, 2165-2172 (2002)] showed that talkers can produce clear speech at normal rates with training. This finding suggests that clear speech has inherent acoustic properties, independent of rate, that contribute to improved intelligibility. Identifying these acoustic properties could lead to improved signal processing schemes for hearing aids. To gain insight into these acoustical properties, conversational and clear speech produced at normal speaking rates were analyzed at three levels of detail (global, phonological, and phonetic). Although results suggest that talkers may have employed different strategies to achieve clear speech at normal rates, two global-level properties were identified that appear likely to be linked to the improvements in intelligibility provided by clear/normal speech: increased energy in the 1000-3000-Hz range of long-term spectra and increased modulation depth of low frequency modulations of the intensity envelope. Other phonological and phonetic differences associated with clear/normal speech include changes in (1) frequency of stop burst releases, (2) VOT of word-initial voiceless stop consonants, and (3) short-term vowel spectra.
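Both global-level properties lend themselves to direct computation: a band-energy ratio from the long-term spectrum and a modulation depth from the low-pass-filtered intensity envelope. A generic sketch with noise in place of a real recording; the rates, cutoffs, and depth metric are assumptions:

```python
import numpy as np
from scipy.signal import welch, hilbert, butter, filtfilt

fs = 16000
speech = np.random.randn(fs * 10)          # placeholder for a real recording

# 1. Fraction of long-term spectral energy in the 1000-3000 Hz band
f, psd = welch(speech, fs=fs, nperseg=1024)
band = (f >= 1000) & (f <= 3000)
ratio = psd[band].sum() / psd.sum()

# 2. Depth of slow intensity-envelope modulations (below ~10 Hz)
env = np.abs(hilbert(speech))
b, a = butter(2, 10 / (fs / 2), btype="low")
env_lp = filtfilt(b, a, env)
mod_depth = env_lp.std() / env_lp.mean()

print(f"1-3 kHz energy fraction: {ratio:.3f}")
print(f"low-frequency modulation depth: {mod_depth:.3f}")
```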
Suppression of competing speech through entrainment of cortical oscillations
D'Zmura, Michael; Srinivasan, Ramesh
2013-01-01
People are highly skilled at attending to one speaker in the presence of competitors, but the neural mechanisms supporting this remain unclear. Recent studies have argued that the auditory system enhances the gain of a speech stream relative to competitors by entraining (or “phase-locking”) to the rhythmic structure in its acoustic envelope, thus ensuring that syllables arrive during periods of high neuronal excitability. We hypothesized that such a mechanism could also suppress a competing speech stream by ensuring that syllables arrive during periods of low neuronal excitability. To test this, we analyzed high-density EEG recorded from human adults while they attended to one of two competing, naturalistic speech streams. By calculating the cross-correlation between the EEG channels and the speech envelopes, we found evidence of entrainment to the attended speech's acoustic envelope as well as weaker yet significant entrainment to the unattended speech's envelope. An independent component analysis (ICA) decomposition of the data revealed sources in the posterior temporal cortices that displayed robust correlations to both the attended and unattended envelopes. Critically, in these components the signs of the correlations when attended were opposite those when unattended, consistent with the hypothesized entrainment-based suppressive mechanism. PMID:23515789
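The central computation, a normalized cross-correlation between an EEG channel and a speech envelope over a range of lags, is compact enough to sketch directly. Noise stands in for the recordings; the sampling rate and ±500 ms lag window are assumptions:

```python
import numpy as np

fs = 128                                  # Hz, assumed EEG sampling rate
eeg = np.random.randn(fs * 60)            # one EEG channel (placeholder)
envelope = np.random.randn(fs * 60)       # speech envelope (placeholder)

def xcorr_at_lags(a, b, max_lag):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    r = [np.mean(a[max(0, k):len(a) + min(0, k)] *
                 b[max(0, -k):len(b) + min(0, -k)]) for k in lags]
    return lags, np.array(r)

lags, r = xcorr_at_lags(eeg, envelope, max_lag=fs // 2)
best = r.argmax()
print(f"peak correlation {r[best]:.3f} at lag {1000 * lags[best] / fs:.0f} ms")
```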
Speech enhancement using the modified phase-opponency model.
Deshmukh, Om D; Espy-Wilson, Carol Y; Carney, Laurel H
2007-06-01
In this paper we present a model called the Modified Phase-Opponency (MPO) model for single-channel speech enhancement when the speech is corrupted by additive noise. The MPO model is based on the auditory PO model, proposed for detection of tones in noise. The PO model includes a physiologically realistic mechanism for processing the information in neural discharge times and exploits the frequency-dependent phase properties of the tuned filters in the auditory periphery by using a cross-auditory-nerve-fiber coincidence detection for extracting temporal cues. The MPO model alters the components of the PO model such that the basic functionality of the PO model is maintained but the properties of the model can be analyzed and modified independently. The MPO-based speech enhancement scheme does not need to estimate the noise characteristics nor does it assume that the noise satisfies any statistical model. The MPO technique leads to the lowest value of the LPC-based objective measures and the highest value of the perceptual evaluation of speech quality measure compared to other methods when the speech signals are corrupted by fluctuating noise. Combining the MPO speech enhancement technique with our aperiodicity, periodicity, and pitch detector further improves its performance.
Lip Movement Exaggerations During Infant-Directed Speech
Green, Jordan R.; Nip, Ignatius S. B.; Wilson, Erin M.; Mefferd, Antje S.; Yunusova, Yana
2011-01-01
Purpose Although a growing body of literature has indentified the positive effects of visual speech on speech and language learning, oral movements of infant-directed speech (IDS) have rarely been studied. This investigation used 3-dimensional motion capture technology to describe how mothers modify their lip movements when talking to their infants. Method Lip movements were recorded from 25 mothers as they spoke to their infants and other adults. Lip shapes were analyzed for differences across speaking conditions. The maximum fundamental frequency, duration, acoustic intensity, and first and second formant frequency of each vowel also were measured. Results Lip movements were significantly larger during IDS than during adult-directed speech, although the exaggerations were vowel specific. All of the vowels produced during IDS were characterized by an elevated vocal pitch and a slowed speaking rate when compared with vowels produced during adult-directed speech. Conclusion The pattern of lip-shape exaggerations did not provide support for the hypothesis that mothers produce exemplar visual models of vowels during IDS. Future work is required to determine whether the observed increases in vertical lip aperture engender visual and acoustic enhancements that facilitate the early learning of speech. PMID:20699342
Jürgens, Tim; Brand, Thomas
2009-11-01
This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with the performance of normal-hearing listeners. "Microscopic" is defined twofold for this model. First, the speech recognition rate is predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary parts of human auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of a psychoacoustically and physiologically motivated preprocessing and a simple dynamic-time-warp speech recognizer. The model is evaluated while presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training of the recognizer and testing. The best model performance is yielded by distance measures which focus mainly on small perceptual distances and neglect outliers.
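The recognizer half of the model is a standard dynamic-time-warp match between feature sequences. A generic DTW distance is short enough to show in full; the sequences here are placeholders for the model's internal representations:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """DTW alignment cost with Euclidean local distance between frames."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
template = rng.normal(size=(40, 12))   # stored phoneme template (placeholder)
test = rng.normal(size=(46, 12))       # incoming representation (placeholder)
print(f"DTW distance: {dtw_distance(template, test):.2f}")
```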
Free Speech Advocates at Berkeley.
ERIC Educational Resources Information Center
Watts, William A.; Whittaker, David
1966-01-01
This study compares highly committed members of the Free Speech Movement (FSM) at Berkeley with the student population at large on 3 sociopsychological foci: general biographical data, religious orientation, and rigidity-flexibility. Questionnaires were administered to 172 FSM members selected by chance from the 10 to 1200 who entered and…
Speech and Speech-Related Quality of Life After Late Palate Repair: A Patient's Perspective.
Schönmeyr, Björn; Wendby, Lisa; Sharma, Mitali; Jacobson, Lia; Restrepo, Carolina; Campbell, Alex
2015-07-01
Many patients with cleft palate deformities worldwide receive treatment at a later age than is recommended for normal speech to develop. The outcomes after late palate repairs in terms of speech and quality of life (QOL) still remain largely unstudied. In the current study, questionnaires were used to assess the patients' perception of speech and QOL before and after primary palate repair. All of the patients were operated on at a cleft center in northeast India and had a cleft palate with a normal lip or with a cleft lip that had been previously repaired. A total of 134 patients (7-35 years) were interviewed preoperatively and 46 patients (7-32 years) were assessed in the postoperative survey. The survey showed that scores based on the speech handicap index, concerning speech and speech-related QOL, did not improve postoperatively. In fact, the questionnaires indicated that the speech became more unpredictable (P < 0.01) and that nasal regurgitation became worse (P < 0.01) for some patients after surgery. A total of 78% of the patients were still satisfied with the surgery and all of the patients reported that their self-confidence had improved after the operation. Thus, the majority of interviewed patients who underwent late primary palate repair were satisfied with the surgery. At the same time, speech and speech-related QOL did not improve according to the speech handicap index-based survey. Speech predictability may even become worse and nasal regurgitation may increase after late palate repair, according to these results.
Erb, Julia; Ludwig, Alexandra Annemarie; Kunke, Dunja; Fuchs, Michael; Obleser, Jonas
2018-04-24
Psychoacoustic tests assessed shortly after cochlear implantation are useful predictors of the rehabilitative speech outcome. While largely independent, both spectral and temporal resolution tests are important to provide an accurate prediction of speech recognition. However, rapid tests of temporal sensitivity are currently lacking. Here, we propose a simple amplitude modulation rate discrimination (AMRD) paradigm that is validated by predicting future speech recognition in adult cochlear implant (CI) patients. In 34 newly implanted patients, we used an adaptive AMRD paradigm, where broadband noise was modulated at the speech-relevant rate of ~4 Hz. In a longitudinal study, speech recognition in quiet was assessed using the closed-set Freiburger number test shortly after cochlear implantation (t0) as well as the open-set Freiburger monosyllabic word test 6 months later (t6). Both AMRD thresholds at t0 (r = -0.51) and speech recognition scores at t0 (r = 0.56) predicted speech recognition scores at t6. However, AMRD and speech recognition at t0 were uncorrelated, suggesting that those measures capture partially distinct perceptual abilities. A multiple regression model predicting 6-month speech recognition outcome with deafness duration and speech recognition at t0 improved from adjusted R² = 0.30 to adjusted R² = 0.44 when AMRD threshold was added as a predictor. These findings identify AMRD thresholds as a reliable, nonredundant predictor above and beyond established speech tests for CI outcome. This AMRD test could potentially be developed into a rapid clinical temporal-resolution test to be integrated into the postoperative test battery to improve the reliability of speech outcome prognosis.
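The reported model comparison (does the AMRD threshold raise adjusted R² beyond deafness duration and baseline speech scores?) has a direct ordinary-least-squares analogue. A sketch with synthetic stand-ins for the patient data, using statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 34
deaf_dur = rng.normal(10, 5, n)            # years of deafness (synthetic)
speech_t0 = rng.normal(50, 15, n)          # baseline speech score (synthetic)
amrd = rng.normal(2, 0.5, n)               # AMRD threshold (synthetic)
speech_t6 = 0.5 * speech_t0 - 0.3 * deaf_dur - 8 * amrd + rng.normal(0, 10, n)

base = sm.OLS(speech_t6,
              sm.add_constant(np.column_stack([deaf_dur, speech_t0]))).fit()
full = sm.OLS(speech_t6,
              sm.add_constant(np.column_stack([deaf_dur, speech_t0, amrd]))).fit()
print(f"adjusted R^2: {base.rsquared_adj:.2f} -> {full.rsquared_adj:.2f}")
```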
Real time speech formant analyzer and display
Holland, George E.; Struve, Walter S.; Homer, John F.
1987-01-01
A speech analyzer for interpretation of sound includes a sound input which converts the sound into a signal representing the sound. The signal is passed through a plurality of frequency pass filters to derive a plurality of frequency formants. These formants are converted to voltage signals by frequency-to-voltage converters and then are prepared for visual display in continuous real time. Parameters from the inputted sound are also derived and displayed. The display may then be interpreted by the user. The preferred embodiment includes a microprocessor which is interfaced with a television set for displaying of the sound formants. The microprocessor software enables the sound analyzer to present a variety of display modes for interpretive and therapeutic uses by the user.
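The patent realizes formant extraction with analog bandpass filters and frequency-to-voltage converters; in software, the same formant frequencies are more often estimated from linear-prediction (LPC) coefficients. A rough generic sketch, not the patented circuit, using a synthetic two-resonance signal:

```python
import numpy as np
from scipy.signal import lfilter
import librosa

fs = 8000
rng = np.random.default_rng(0)

def resonator_poly(freq, bw):
    """Denominator polynomial of a two-pole resonator at freq Hz."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    return np.array([1.0, -2 * r * np.cos(theta), r ** 2])

# Excite an all-pole "vocal tract" with resonances near 700 and 1200 Hz
a = np.polymul(resonator_poly(700, 80), resonator_poly(1200, 90))
y = lfilter([1.0], a, rng.normal(0, 1, fs))

# Recover the resonances from the angles of the LPC roots
coeffs = librosa.lpc(y, order=4)
roots = [r for r in np.roots(coeffs) if np.imag(r) > 0]
freqs = sorted(np.angle(r) * fs / (2 * np.pi) for r in roots)
print("formant estimates (Hz):", [round(f) for f in freqs])
```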
ERIC Educational Resources Information Center
Giangreco, Michael F.; Prelock, Patricia A.; Turnbull, H. Rutherford, III
2010-01-01
Purpose: Under the Individuals With Disabilities Education Act (IDEA; as amended, 2004), speech-language pathology services may be either special education or a related service. Given the absence of guidance documents or research on this issue, the purposes of this clinical exchange are to (a) present and analyze the IDEA definitions related to…
An Analysis of the Use and Structure of Logic in Japanese Argument.
ERIC Educational Resources Information Center
Hazen, Michael David
A study was conducted to determine if the Japanese use logic and argument in different ways than do Westerners. The study analyzed sample rebuttal speeches (in English) of 14 Japanese debaters using the Toulmin model of argument. In addition, it made comparisons with a sample of speeches made by 5 American high school debaters. Audiotapes of the…
Review of Speech-to-Text Recognition Technology for Enhancing Learning
ERIC Educational Resources Information Center
Shadiev, Rustam; Hwang, Wu-Yuin; Chen, Nian-Shing; Huang, Yueh-Min
2014-01-01
This paper reviewed literature from 1999 to 2014 inclusively on how Speech-to-Text Recognition (STR) technology has been applied to enhance learning. The first aim of this review is to understand how STR technology has been used to support learning over the past fifteen years, and the second is to analyze all research evidence to understand how…
ERIC Educational Resources Information Center
Araux, Jose Luis
2013-01-01
Purpose: The purpose of this study was to describe and analyze the conduct implications of qualified immunity in allegations of deprivation of civil rights by public school administrators regarding the First Amendment-student speech. Methodology: Data were collected using the LexisNexis and JuriSearch online legal research systems, which…
Noise and Speech Interference: Proceedings of Minisymposium
NASA Technical Reports Server (NTRS)
Shepherd, W. T. (Editor)
1975-01-01
Several papers are presented which deal with the psychophysical effects of interference with speech and listening activities by different forms of noise masking and filtering. Special attention was given to the annoyance such interruptions cause, particularly that due to aircraft flyover noises. Activities such as telephone listening and television watching were studied. A number of experimental investigations are described and the results are analyzed.
Filling the Information Void: Adapting the Information Operation (IO) Message in Post-Hostility Iraq
2005-05-26
more elaborate, with Pizza Huts and Burger Kings, and so large that one, called Anaconda, has nine bus routes to move the troops around inside the... circulation of 50,000 weekly. Based on American principles of freedom of speech and the value of debate, one may conclude that the proliferation of news... democratic values. According to the World Press Freedom Committee, “as always, the proper answer to bad speech is more speech, not the silencing of…
Frontal lobe epileptic seizures are accompanied by elevated pitch during verbal communication.
Speck, Iva; Echternach, Matthias; Sammler, Daniela; Schulze-Bonhage, Andreas
2018-03-01
The objective of our study was to assess alterations in speech as a possible localizing sign in frontal lobe epilepsy. Ictal speech was analyzed in 18 patients with frontal lobe epilepsy (FLE) during seizures and in the interictal period. Matched identical words were analyzed regarding alterations in fundamental frequency (ƒo) as an approximation of pitch. In patients with FLE, ƒo of ictal utterances was significantly higher than ƒo in interictal recordings (p = 0.016). Ictal ƒo increases occurred in FLE of both right and left seizure origin. In contrast, a matched temporal lobe epilepsy (TLE) group showed less pronounced increases in ƒo, and only in patients with right-sided seizure foci. This study shows, for the first time, significant voice alterations in ictal speech in a cohort of patients with FLE. This may contribute to the localization of the epileptic focus. Interestingly, increases in ƒo were found in frontal lobe seizures originating in either hemisphere, suggesting bilateral involvement in the planning of speech production, in contrast to a more right-sided lateralization of pitch perception in prosodic processing. Wiley Periodicals, Inc. © 2018 International League Against Epilepsy.
Cai, Shanqing; Tourville, Jason A.; Beal, Deryk S.; Perkell, Joseph S.; Guenther, Frank H.; Ghosh, Satrajit S.
2013-01-01
Deficits in brain white matter have been a main focus of recent neuroimaging studies on stuttering. However, no prior study has examined brain connectivity on the global level of the cerebral cortex in persons who stutter (PWS). In the current study, we analyzed the results from probabilistic tractography between regions comprising the cortical speech network. An anatomical parcellation scheme was used to define 28 speech production-related ROIs in each hemisphere. We used network-based statistic (NBS) and graph theory to analyze the connectivity patterns obtained from tractography. At the network level, the probabilistic corticocortical connectivity from the PWS group was significantly weaker than that from persons with fluent speech (PFS). NBS analysis revealed significant components in the bilateral speech networks with negative correlations with stuttering severity. To facilitate comparison with previous studies, we also performed tract-based spatial statistics (TBSS) and regional fractional anisotropy (FA) averaging. Results from tractography, TBSS and regional FA averaging jointly highlight the importance of several regions in the left peri-Rolandic sensorimotor and premotor areas, most notably the left ventral premotor cortex (vPMC) and middle primary motor cortex, in the neuroanatomical basis of stuttering. PMID:24611042
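Graph-theoretic summaries of a connectivity matrix, such as node strength (the weighted analogue of degree), can be computed with networkx. The 28×28 matrix below is random, standing in for the ROI-by-ROI tractography probabilities; this illustrates the style of analysis, not the study's NBS procedure:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
W = rng.random((28, 28))            # placeholder connectivity matrix
W = (W + W.T) / 2                   # symmetrize
np.fill_diagonal(W, 0)

G = nx.from_numpy_array(W)
strength = dict(G.degree(weight="weight"))
hub = max(strength, key=strength.get)
print(f"strongest node: ROI {hub} (strength {strength[hub]:.2f})")
```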
Discourse norms as default rules: structuring corporate speech to multiple stakeholders.
Yosifon, David G
2011-01-01
This Article analyzes corporate speech problems through the framework of corporate law. The focus here is on the "discourse norms" that regulate corporate speech to various corporate stakeholders, including shareholders, workers, and consumers. I argue that these "discourse norms" should be understood as default terms in the "nexus-of-contracts" that comprises the corporation. Having reviewed the failure of corporate law as it bears on the interests of non-shareholding stakeholders such as workers and consumers, I urge the adoption of prescriptive discourse norms as an approach to reforming corporate governance in a socially useful manner.
Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim
2015-06-15
Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
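Transfer entropy, the directed coupling measure used here, has a simple discrete plug-in estimator. The sketch below is generic (histogram discretization, one-sample history) rather than the authors' MEG pipeline; the bin count and test signals are assumptions:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=8):
    """Plug-in estimate of TE(x -> y) in bits:
    sum over states of p(y1, y0, x0) * log2[p(y1|y0, x0) / p(y1|y0)]."""
    edges = lambda s: np.histogram_bin_edges(s, bins)[1:-1]
    xd, yd = np.digitize(x, edges(x)), np.digitize(y, edges(y))
    y1, y0, x0 = yd[1:], yd[:-1], xd[:-1]
    n = len(y1)
    c_yyx, c_yx = Counter(zip(y1, y0, x0)), Counter(zip(y0, x0))
    c_yy, c_y = Counter(zip(y1, y0)), Counter(y0)
    te = 0.0
    for (a, b, c), k in c_yyx.items():
        te += (k / n) * np.log2(k * c_y[b] / (c_yy[a, b] * c_yx[b, c]))
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)   # y is driven by past x
print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits")
print(f"TE(y->x) = {transfer_entropy(y, x):.3f} bits")
```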
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, J.
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Asad, Areej Nimer; Purdy, Suzanne C; Ballard, Elaine; Fairgray, Liz; Bowen, Caroline
2018-04-27
In this descriptive study, phonological processes were examined in the speech of children aged 5;0-7;6 (years; months) with mild to profound hearing loss using hearing aids (HAs) and cochlear implants (CIs), in comparison to their peers. A second aim was to compare phonological processes of HA and CI users. Children with hearing loss (CWHL, N = 25) were compared to children with normal hearing (CWNH, N = 30) with similar age, gender, linguistic, and socioeconomic backgrounds. Speech samples obtained from a list of 88 words, derived from three standardized speech tests, were analyzed using the CASALA (Computer Aided Speech and Language Analysis) program to evaluate participants' phonological systems, based on lax (a process appeared at least twice in the speech of at least two children) and strict (a process appeared at least five times in the speech of at least two children) counting criteria. Developmental phonological processes were eliminated in the speech of younger and older CWNH while eleven developmental phonological processes persisted in the speech of both age groups of CWHL. CWHL showed a similar trend of age of elimination to CWNH, but at a slower rate. Children with HAs and CIs produced similar phonological processes. Final consonant deletion, weak syllable deletion, backing, and glottal replacement were present in the speech of HA users, affecting their overall speech intelligibility. Developmental and non-developmental phonological processes persist in the speech of children with mild to profound hearing loss compared to their peers with typical hearing. The findings indicate that it is important for clinicians to consider phonological assessment in pre-school CWHL and the use of evidence-based speech therapy in order to reduce non-developmental and non-age-appropriate developmental processes, thereby enhancing their speech intelligibility. Copyright © 2018 Elsevier Inc. All rights reserved.
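The lax and strict counting criteria translate directly into code. Under one reading of the criteria (each of at least two children shows the process the required number of times), the check is a short function; the tallies below are invented for illustration, not the study's data:

```python
# occurrences[process][child] = times the process was observed for that child
occurrences = {
    "final consonant deletion": {"c1": 6, "c2": 5, "c3": 1},
    "weak syllable deletion":   {"c1": 2, "c4": 3},
    "backing":                  {"c2": 1},
}

def meets_criterion(per_child, min_count):
    """At least two children each show the process min_count+ times."""
    return sum(n >= min_count for n in per_child.values()) >= 2

for process, per_child in occurrences.items():
    print(f"{process}: lax={meets_criterion(per_child, 2)}, "
          f"strict={meets_criterion(per_child, 5)}")
```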
Jiang, Chenghui; Whitehill, Tara L
2014-04-01
Speech errors associated with cleft palate are well established for English and several other Indo-European languages. Few articles describing the speech of Putonghua (standard Mandarin Chinese) speakers with cleft palate have been published in English-language journals. Although methodological guidelines have been published for the perceptual speech evaluation of individuals with cleft palate, there has been no critical review of methodological issues in studies of Putonghua speakers with cleft palate. A literature search was conducted to identify relevant studies published over the past 30 years in Chinese-language journals. Only studies incorporating perceptual analysis of speech were included. Thirty-seven articles which met inclusion criteria were analyzed and coded on a number of methodological variables. Reliability was established by having all variables recoded for all studies. This critical review identified many methodological issues. These design flaws make it difficult to draw reliable conclusions about characteristic speech errors in this group of speakers. Specific recommendations are made to improve the reliability and validity of future studies, as well as to facilitate cross-center comparisons.
Speech Spectrum's Correlation with Speakers' Eysenck Personality Traits
Hu, Chao; Wang, Qiandong; Short, Lindsey A.; Fu, Genyue
2012-01-01
The current study explored the correlation between speakers' Eysenck personality traits and speech spectrum parameters. Forty-six subjects completed the Eysenck Personality Questionnaire. They were instructed to verbally answer the questions shown on a computer screen and their responses were recorded by the computer. Spectrum parameters of /sh/ and /i/ were analyzed by Praat voice software. Formant frequencies of the consonant /sh/ in lying responses were significantly lower than those in truthful responses, whereas no difference existed on the vowel /i/ speech spectrum. The second formant bandwidth of the consonant /sh/ speech spectrum was significantly correlated with the personality traits of Psychoticism, Extraversion, and Neuroticism, and the correlation differed between truthful and lying responses, whereas the first formant frequency of the vowel /i/ speech spectrum was negatively correlated with Neuroticism in both response types. The results suggest that personality characteristics may be conveyed through the human voice, although the extent to which these effects are due to physiological differences in the organs associated with speech or to a general Pygmalion effect remains unknown. PMID:22439014
Schulz, Geralyn M; Hosey, Lara A; Bradberry, Trent J; Stager, Sheila V; Lee, Li-Ching; Pawha, Rajesh; Lyons, Kelly E; Metman, Leo Verhagen; Braun, Allen R
2012-01-01
Deep brain stimulation (DBS) of the subthalamic nucleus improves the motor symptoms of Parkinson's disease, but may produce a worsening of speech and language performance at rates and amplitudes typically selected in clinical practice. The possibility that these dissociated effects might be modulated by selective stimulation of left and right STN has never been systematically investigated. To address this issue, we analyzed motor, speech and language functions of 12 patients implanted with bilateral stimulators configured for optimal motor responses. Behavioral responses were quantified under four stimulator conditions: bilateral DBS, right-only DBS, left-only DBS and no DBS. Under bilateral and left-only DBS conditions, motor symptoms improved significantly while speech and language worsened. These findings contribute to the growing body of literature demonstrating that bilateral STN DBS compromises speech and language function and suggest that these negative effects may be principally due to left-sided stimulation. These findings may have practical clinical consequences, suggesting that clinicians might optimize motor, speech and language functions by carefully adjusting left- and right-sided stimulation parameters.
Intelligent acoustic data fusion technique for information security analysis
NASA Astrophysics Data System (ADS)
Jiang, Ying; Tang, Yize; Lu, Wenda; Wang, Zhongfeng; Wang, Zepeng; Zhang, Luming
2017-08-01
Tone is an essential component of word formation in all tonal languages, and it plays an important role in the transmission of information in speech communication. Therefore, the study of tone characteristics can be applied to the security analysis of acoustic signals, for example through language identification. In speech processing, fundamental frequency (F0) is often viewed by speech synthesis researchers as representing tone. However, regular F0 values may lead to low naturalness in synthesized speech. Moreover, F0 and tone are not linguistically equivalent; F0 is just one representation of a tone. Therefore, the electroglottography (EGG) signal was collected for a deeper study of tone characteristics. In this paper, focusing on the Northern Kam language, which has nine tonal contours and five level tone types, we first collected EGG and speech signals from six native male speakers of Northern Kam, and then obtained clustering distributions of the tone curves. After summarizing the main characteristics of Northern Kam tones, we analyzed the relationship between EGG and speech signal parameters, laying the foundation for further security analysis of acoustic signals.
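One way to obtain clustering distributions of tone curves, sketched below, is to resample every F0 contour to a common length, normalize it, and run k-means with one cluster per expected tone category. The contours here are synthetic; the method and parameters are illustrative, not necessarily those of the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def resample(contour, n=20):
    x = np.linspace(0, 1, len(contour))
    return np.interp(np.linspace(0, 1, n), x, contour)

# Synthetic F0 contours of varying length, standing in for measured curves
contours = [rng.normal(200, 20, rng.integers(15, 40)) for _ in range(60)]
X = np.array([resample(c) for c in contours])
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Five clusters for the five level tone types of Northern Kam
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```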
Hierarchical organization in the temporal structure of infant-directed speech and song.
Falk, Simone; Kello, Christopher T
2017-06-01
Caregivers alter the temporal structure of their utterances when talking and singing to infants compared with adult communication. The present study tested whether temporal variability in infant-directed registers serves to emphasize the hierarchical temporal structure of speech. Fifteen German-speaking mothers sang a play song and told a story to their 6-month-old infants or to an adult. Recordings were analyzed using a recently developed method that determines the degree of nested clustering of temporal events in speech. Events were defined as peaks in the amplitude envelope, and clusters of various sizes related to periods of acoustic speech energy at varying timescales. Infant-directed speech and song clearly showed greater event clustering compared with adult-directed registers, at multiple timescales of hundreds of milliseconds to tens of seconds. We discuss the relation of this newly discovered acoustic property to temporal variability in linguistic units and its potential implications for parent-infant communication and infants learning the hierarchical structures of speech and language. Copyright © 2017 Elsevier B.V. All rights reserved.
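The event definition used in the study (peaks in the amplitude envelope) is straightforward to compute: take the Hilbert envelope, smooth it, and pick local maxima. A generic sketch with noise in place of a recording; the cutoff, height, and spacing parameters are assumptions:

```python
import numpy as np
from scipy.signal import hilbert, find_peaks, butter, filtfilt

fs = 16000
signal = np.random.randn(fs * 5)              # placeholder for speech/song

env = np.abs(hilbert(signal))
b, a = butter(2, 30 / (fs / 2), btype="low")  # smooth envelope to ~30 Hz
env = filtfilt(b, a, env)

peaks, _ = find_peaks(env, height=env.mean(), distance=int(0.05 * fs))
event_times = peaks / fs    # the event series fed to the clustering analysis
print(f"{len(peaks)} amplitude-envelope events detected")
```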
Co-occurrence statistics as a language-dependent cue for speech segmentation.
Saksida, Amanda; Langus, Alan; Nespor, Marina
2017-05-01
To what extent can language acquisition be explained in terms of different associative learning mechanisms? It has been hypothesized that distributional regularities in spoken languages are strong enough to elicit statistical learning about dependencies among speech units. Distributional regularities could be a useful cue for word learning even without rich language-specific knowledge. However, it is not clear how strong and reliable the distributional cues are that humans might use to segment speech. We investigate cross-linguistic viability of different statistical learning strategies by analyzing child-directed speech corpora from nine languages and by modeling possible statistics-based speech segmentations. We show that languages vary as to which statistical segmentation strategies are most successful. The variability of the results can be partially explained by systematic differences between languages, such as rhythmical differences. The results confirm previous findings that different statistical learning strategies are successful in different languages and suggest that infants may have to primarily rely on non-statistical cues when they begin their process of speech segmentation. © 2016 John Wiley & Sons Ltd.
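The statistical segmentation strategy such models implement is often based on forward transitional probabilities (TPs): posit a word boundary wherever the TP between adjacent syllables reaches a local minimum. A self-contained toy version on an artificial stream:

```python
from collections import Counter

stream = "tupiro golabu bidaku tupiro bidaku golabu tupiro".replace(" ", "")
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])
tps = [pair_counts[a, b] / first_counts[a]
       for a, b in zip(syllables, syllables[1:])]

words, current = [], [syllables[0]]
for i in range(1, len(syllables)):
    t = tps[i - 1]
    left_ok = i - 2 < 0 or t < tps[i - 2]
    right_ok = i >= len(tps) or t < tps[i]
    if left_ok and right_ok:            # local TP minimum -> boundary
        words.append("".join(current))
        current = []
    current.append(syllables[i])
words.append("".join(current))
print(words)  # ['tupiro', 'golabu', 'bidaku', 'tupiro', 'bidaku', 'golabu', 'tupiro']
```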
Vieira, Manuel; Fonseca, Paulo J; Amorim, M Clara P; Teixeira, Carlos J C
2015-12-01
The study of acoustic communication in animals often requires not only the recognition of species specific acoustic signals but also the identification of individual subjects, all in a complex acoustic background. Moreover, when very long recordings are to be analyzed, automatic recognition and identification processes are invaluable tools to extract the relevant biological information. A pattern recognition methodology based on hidden Markov models is presented inspired by successful results obtained in the most widely known and complex acoustical communication signal: human speech. This methodology was applied here for the first time to the detection and recognition of fish acoustic signals, specifically in a stream of round-the-clock recordings of Lusitanian toadfish (Halobatrachus didactylus) in their natural estuarine habitat. The results show that this methodology is able not only to detect the mating sounds (boatwhistles) but also to identify individual male toadfish, reaching an identification rate of ca. 95%. Moreover this method also proved to be a powerful tool to assess signal durations in large data sets. However, the system failed in recognizing other sound types.
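The HMM approach carried over from speech can be prototyped with the hmmlearn library: fit one Gaussian HMM per individual on feature sequences (MFCCs in a real system) and identify a new call by which model scores it highest. The features below are synthetic; the library and settings are assumptions, not the authors' implementation:

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# One training feature set per individual (stand-ins for MFCCs of calls)
train = {"fish_A": rng.normal(0.0, 1, (200, 12)),
         "fish_B": rng.normal(1.5, 1, (200, 12))}

models = {}
for name, X in train.items():
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
    m.fit(X)
    models[name] = m

test_call = rng.normal(1.5, 1, (80, 12))          # an unlabeled call
scores = {name: m.score(test_call) for name, m in models.items()}
print("identified as:", max(scores, key=scores.get))
```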
Moerman, Mieke; Martens, Jean-Pierre; Dejonckere, Philippe
2015-04-01
This article is a compilation of our own research performed during the European COoperation in Science and Technology (COST) action 2103: 'Advance Voice Function Assessment', an initiative of voice and speech processing teams consisting of physicists, engineers, and clinicians. This manuscript concerns the analysis of highly irregular voicing types, namely substitution voicing (SV) and adductor spasmodic dysphonia (AdSD). A specific perceptual rating scale (IINFVo) was developed, and the Auditory Model Based Pitch Extractor (AMPEX), a piece of software that automatically analyses running speech and generates pitch values in background noise, was applied. The IINFVo perceptual rating scale has been shown to be useful in evaluating SV. The analysis of strongly irregular voices stimulated a modification of the European Laryngological Society's assessment protocol, which was originally designed for the common types of (less severe) dysphonia. Acoustic analysis with AMPEX demonstrates that the most informative features are, for SV, the voicing-related acoustic features and, for AdSD, the perturbation measures. Poor correlations between self-assessment and acoustic and perceptual dimensions in the assessment of highly irregular voices argue for a multidimensional approach.
The Small College Administrative Environment.
ERIC Educational Resources Information Center
Buzza, Bonnie Wilson
Environmental differences for speech departments at large and small colleges are not simply of scale; there are qualitative as well as quantitative differences. At small colleges, faculty are hired as teachers, rather than as researchers. Because speech teachers at small colleges must be generalists, and because it is often difficult to replace…
Recognizing Speech under a Processing Load: Dissociating Energetic from Informational Factors
ERIC Educational Resources Information Center
Mattys, Sven L.; Brooks, Joanna; Cooke, Martin
2009-01-01
Effects of perceptual and cognitive loads on spoken-word recognition have so far largely escaped investigation. This study lays the foundations of a psycholinguistic approach to speech recognition in adverse conditions that draws upon the distinction between energetic masking, i.e., listening environments leading to signal degradation, and…
ERIC Educational Resources Information Center
Hamblin, DeAnna; Bartlett, Marilyn J.
2013-01-01
The authors note that when it comes to balancing free speech and schools' responsibilities, the online world is largely uncharted waters. Questions remain about the rights of both students and teachers in the world of social media. Although the lower courts have ruled that students' freedom of speech rights offer them some protection for…
Fifty years of progress in speech and speaker recognition
NASA Astrophysics Data System (ADS)
Furui, Sadaoki
2004-10-01
Speech and speaker recognition technology has made very significant progress in the past 50 years. The progress can be summarized by the following changes: (1) from template matching to corpus-based statistical modeling, e.g., HMM and n-grams, (2) from filter bank/spectral resonance to cepstral features (cepstrum + Δcepstrum + ΔΔcepstrum), (3) from heuristic time-normalization to DTW/DP matching, (4) from "distance"-based to likelihood-based methods, (5) from maximum likelihood to discriminative approaches, e.g., MCE/GPD and MMI, (6) from isolated word to continuous speech recognition, (7) from small vocabulary to large vocabulary recognition, (8) from context-independent units to context-dependent units for recognition, (9) from clean speech to noisy/telephone speech recognition, (10) from single speaker to speaker-independent/adaptive recognition, (11) from monologue to dialogue/conversation recognition, (12) from read speech to spontaneous speech recognition, (13) from recognition to understanding, (14) from single-modality (audio signal only) to multi-modal (audio/visual) speech recognition, (15) from hardware recognizer to software recognizer, and (16) from no commercial application to many practical commercial applications. Most of these advances have taken place in both the fields of speech recognition and speaker recognition. The majority of technological changes have been directed toward the purpose of increasing the robustness of recognition, including many other additional important techniques not noted above.
Havstam, Christina; Sandberg, Annika Dahlgren; Lohmander, Anette
2011-04-01
Many children born with cleft palate have impaired speech during their pre-school years, but usually the speech difficulties are transient and resolved by later childhood. This study investigated communication attitude with the Swedish version of the Communication Attitude Test (CAT-S) in 54 10-year-olds with cleft (lip and) palate. In addition, environmental factors were assessed via parent questionnaire. These data were compared to speech assessments by experienced listeners, who rated the children's velopharyngeal function, articulation, intelligibility, and general impression of speech at ages 5, 7, and 10 years. The children with clefts scored significantly higher on the CAT-S compared to reference data, indicating a more negative communication attitude at the group level but with large individual variation. All speech variables, except velopharyngeal function at earlier ages, as well as the parent questionnaire scores, correlated significantly with the CAT-S scores. Although there was a relationship between speech and communication attitude, not all children with impaired speech developed negative communication attitudes. The assessment of communication attitude can make an important contribution to our understanding of the communicative situation for children with cleft (lip and) palate and give important indications for intervention.
Civier, Oren; Tasko, Stephen M.; Guenther, Frank H.
2010-01-01
This paper investigates the hypothesis that stuttering may result in part from impaired readout of feedforward control of speech, which forces persons who stutter (PWS) to produce speech with a motor strategy that is weighted too much toward auditory feedback control. Over-reliance on feedback control leads to production errors which, if they grow large enough, can cause the motor system to “reset” and repeat the current syllable. This hypothesis is investigated using computer simulations of a “neurally impaired” version of the DIVA model, a neural network model of speech acquisition and production. The model’s outputs are compared to published acoustic data from PWS’ fluent speech, and to combined acoustic and articulatory movement data collected from the dysfluent speech of one PWS. The simulations mimic the errors observed in the PWS subject’s speech, as well as the repairs of these errors. Additional simulations were able to account for enhancements of fluency gained by slowed/prolonged speech and masking noise. Together these results support the hypothesis that many dysfluencies in stuttering are due to a bias away from feedforward control and toward feedback control. PMID:20831971
Perceptual learning of speech under optimal and adverse conditions.
Zhang, Xujin; Samuel, Arthur G
2014-02-01
Humans have a remarkable ability to understand spoken language despite the large amount of variability in speech. Previous research has shown that listeners can use lexical information to guide their interpretation of atypical sounds in speech (Norris, McQueen, & Cutler, 2003). This kind of lexically induced perceptual learning enables people to adjust to the variations in utterances due to talker-specific characteristics, such as individual identity and dialect. The current study investigated perceptual learning in two optimal conditions: conversational speech (Experiment 1) versus clear speech (Experiment 2), and three adverse conditions: noise (Experiment 3a) versus two cognitive loads (Experiments 4a and 4b). Perceptual learning occurred in the two optimal conditions and in the two cognitive load conditions, but not in the noise condition. Furthermore, perceptual learning occurred only in the first of two sessions for each participant, and only for atypical /s/ sounds and not for atypical /f/ sounds. This pattern of learning and nonlearning reflects a balance between flexibility and stability that the speech system must have to deal with speech variability in the diverse conditions that speech is encountered. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Yoon, Yang-soo; Li, Yongxin; Kang, Hou-Yong; Fu, Qian-Jie
2011-01-01
Objective The full benefit of bilateral cochlear implants may depend on the unilateral performance with each device, the speech materials, processing ability of the user, and/or the listening environment. In this study, bilateral and unilateral speech performances were evaluated in terms of recognition of phonemes and sentences presented in quiet or in noise. Design Speech recognition was measured for unilateral left, unilateral right, and bilateral listening conditions; speech and noise were presented at 0° azimuth. The “binaural benefit” was defined as the difference between bilateral performance and unilateral performance with the better ear. Study Sample 9 adults with bilateral cochlear implants participated. Results On average, results showed a greater binaural benefit in noise than in quiet for all speech tests. More importantly, the binaural benefit was greater when unilateral performance was similar across ears. As the difference in unilateral performance between ears increased, the binaural advantage decreased; this functional relationship was observed across the different speech materials and noise levels even though there was substantial intra- and inter-subject variability. Conclusions The results indicate that subjects who show symmetry in speech recognition performance between implanted ears in general show a large binaural benefit. PMID:21696329
Neural systems for speech and song in autism
Pantazatos, Spiro P.; Schneider, Harry
2012-01-01
Despite language disabilities in autism, music abilities are frequently preserved. Paradoxically, brain regions associated with these functions typically overlap, enabling investigation of neural organization supporting speech and song in autism. Neural systems sensitive to speech and song were compared in low-functioning autistic and age-matched control children using passive auditory stimulation during functional magnetic resonance and diffusion tensor imaging. Activation in left inferior frontal gyrus was reduced in autistic children relative to controls during speech stimulation, but was greater than controls during song stimulation. Functional connectivity for song relative to speech was also increased between left inferior frontal gyrus and superior temporal gyrus in autism, and large-scale connectivity showed increased frontal–posterior connections. Although fractional anisotropy of the left arcuate fasciculus was decreased in autistic children relative to controls, structural terminations of the arcuate fasciculus in inferior frontal gyrus were indistinguishable between autistic and control groups. Fractional anisotropy correlated with activity in left inferior frontal gyrus for both speech and song conditions. Together, these findings indicate that in autism, functional systems that process speech and song were more effectively engaged for song than for speech and projections of structural pathways associated with these functions were not distinguishable from controls. PMID:22298195
Development of a Low-Cost, Noninvasive, Portable Visual Speech Recognition Program.
Kohlberg, Gavriel D; Gal, Ya'akov Kobi; Lalwani, Anil K
2016-09-01
Loss of speech following tracheostomy and laryngectomy severely limits communication to simple gestures and facial expressions that are largely ineffective. To facilitate communication in these patients, we seek to develop a low-cost, noninvasive, portable, and simple visual speech recognition program (VSRP) to convert articulatory facial movements into speech. A Microsoft Kinect-based VSRP was developed to capture spatial coordinates of lip movements and translate them into speech. The articulatory speech movements associated with 12 sentences were used to train an artificial neural network classifier. The accuracy of the classifier was then evaluated on a separate, previously unseen set of articulatory speech movements. The VSRP was successfully implemented and tested in 5 subjects. It achieved an accuracy rate of 77.2% (65.0%-87.6% for the 5 speakers) on a 12-sentence data set. The mean time to classify an individual sentence was 2.03 milliseconds (1.91-2.16). We have demonstrated the feasibility of a low-cost, noninvasive, portable VSRP based on Kinect to accurately predict speech from articulation movements in clinically trivial time. This VSRP could be used as a novel communication device for aphonic patients. © The Author(s) 2016.
Effect of technological advances on cochlear implant performance in adults.
Lenarz, Minoo; Joseph, Gert; Sönmez, Hasibe; Büchner, Andreas; Lenarz, Thomas
2011-12-01
To evaluate the effect of technological advances in the past 20 years on the hearing performance of a large cohort of adult cochlear implant (CI) patients. Individual, retrospective, cohort study. According to technological developments in electrode design and speech-processing strategies, we defined five virtual intervals on the time scale between 1984 and 2008. A cohort of 1,005 postlingually deafened adults was selected for this study, and their hearing performance with a CI was evaluated retrospectively according to these five technological intervals. The test battery was composed of four standard German speech tests: Freiburger monosyllabic test, speech tracking test, Hochmair-Schulz-Moser (HSM) sentence test in quiet, and HSM sentence test in 10 dB noise. The direct comparison of the speech perception in postlingually deafened adults, who were implanted during different technological periods, reveals an obvious improvement in the speech perception in patients who benefited from the recent electrode designs and speech-processing strategies. The major influence of technological advances on CI performance seems to be on speech perception in noise. Better speech perception in noisy surroundings is strong proof for demonstrating the success rate of new electrode designs and speech-processing strategies. Standard (internationally comparable) speech tests in noise should become an obligatory part of the postoperative test battery for adult CI patients. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.
Varley, Rosemary; Cowell, Patricia E; Dyson, Lucy; Inglis, Lesley; Roper, Abigail; Whiteside, Sandra P
2016-03-01
There is currently little evidence on effective interventions for poststroke apraxia of speech. We report outcomes of a trial of self-administered computer therapy for apraxia of speech. Effects of speech intervention on naming and repetition of treated and untreated words were compared with those of a visuospatial sham program. The study used a parallel-group, 2-period, crossover design, with participants receiving 2 interventions. Fifty participants with chronic and stable apraxia of speech were randomly allocated to 1 of 2 order conditions: speech-first condition versus sham-first condition. Period 1 design was equivalent to a randomized controlled trial. We report results for this period and profile the effect of the period 2 crossover. Period 1 results revealed significant improvement in naming and repetition only in the speech-first group. The sham-first group displayed improvement in speech production after speech intervention in period 2. Significant improvement of treated words was found in both naming and repetition, with little generalization to structurally similar and dissimilar untreated words. Speech gains were largely maintained after withdrawal of intervention. There was a significant relationship between treatment dose and response. However, average self-administered dose was modest for both groups. Future software design would benefit from incorporation of social and gaming components to boost motivation. Single-word production can be improved in chronic apraxia of speech with behavioral intervention. Self-administered computerized therapy is a promising method for delivering high-intensity speech/language rehabilitation. URL: http://orcid.org/0000-0002-1278-0601. Unique identifier: ISRCTN88245643. © 2016 American Heart Association, Inc.
Ding, Nai; Pan, Xunyi; Luo, Cheng; Su, Naifei; Zhang, Wen; Zhang, Jianfeng
2018-01-31
How the brain groups sequential sensory events into chunks is a fundamental question in cognitive neuroscience. This study investigates whether top-down attention or specific tasks are required for the brain to apply lexical knowledge to group syllables into words. Neural responses tracking the syllabic and word rhythms of a rhythmic speech sequence were concurrently monitored using electroencephalography (EEG). The participants performed different tasks, attending to either the rhythmic speech sequence or a distractor, which was another speech stream or a nonlinguistic auditory/visual stimulus. Attention to speech, but not a lexical-meaning-related task, was required for reliable neural tracking of words, even when the distractor was a nonlinguistic stimulus presented cross-modally. Neural tracking of syllables, however, was reliably observed in all tested conditions. These results strongly suggest that neural encoding of individual auditory events (i.e., syllables) is automatic, while knowledge-based construction of temporal chunks (i.e., words) crucially relies on top-down attention. SIGNIFICANCE STATEMENT Why we cannot understand speech when not paying attention is an old question in psychology and cognitive neuroscience. Speech processing is a complex process that involves multiple stages, e.g., hearing and analyzing the speech sound, recognizing words, and combining words into phrases and sentences. The current study investigates which speech-processing stage is blocked when we do not listen carefully. We show that the brain can reliably encode syllables, basic units of speech sounds, even when we do not pay attention. Nevertheless, when distracted, the brain cannot group syllables into multisyllabic words, which are basic units for speech meaning. Therefore, the process of converting speech sound into meaning crucially relies on attention. Copyright © 2018 the authors.
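The frequency-tagging logic behind this design is easy to sketch. Below is a minimal synthetic illustration, assuming a 4 Hz syllable rate and two-syllable (2 Hz) words; neural tracking of each level appears as a spectral peak at the corresponding rate. The "EEG" here is fake and the rates are assumptions.

```python
import numpy as np

# Synthetic frequency-tagging demo: syllables assumed at 4 Hz, words at 2 Hz.
fs = 250.0                              # sampling rate (Hz)
t = np.arange(0, 40.0, 1 / fs)          # one 40 s trial
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 4.0 * t)          # stimulus-driven syllable tracking
       + 0.5 * np.sin(2 * np.pi * 2.0 * t)  # knowledge-based word tracking
       + rng.normal(size=t.size))           # background activity

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f_target, label in [(4.0, "syllable rate"), (2.0, "word rate")]:
    idx = np.argmin(np.abs(freqs - f_target))
    # compare the tagged bin with its spectral neighbors (a simple SNR)
    neighbors = np.r_[spectrum[idx - 5:idx - 1], spectrum[idx + 2:idx + 6]]
    print(f"{label}: peak-to-neighbor SNR = {spectrum[idx] / neighbors.mean():.1f}")
```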
Lau, Johnny King L; Humphreys, Glyn W; Douis, Hassan; Balani, Alex; Bickerton, Wai-Ling; Rotshtein, Pia
2015-01-01
We report a lesion-symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions linked to both poor object naming and impairment in general visual-speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with the models proposing that object naming relies on a left-lateralised language dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and the more general difficulties in language processing.
Havas, David A; Chapp, Christopher B
2016-01-01
How does language influence the emotions and actions of large audiences? Functionally, emotions help address environmental uncertainty by constraining the body to support adaptive responses and social coordination. We propose emotions provide a similar function in language processing by constraining the mental simulation of language content to facilitate comprehension, and to foster alignment of mental states in message recipients. Consequently, we predicted that emotion-inducing language should be found in speeches specifically designed to create audience alignment - stump speeches of United States presidential candidates. We focused on phrases in the past imperfective verb aspect ("a bad economy was burdening us") that leave a mental simulation of the language content open-ended, and thus unconstrained, relative to past perfective sentences ("we were burdened by a bad economy"). As predicted, imperfective phrases appeared more frequently in stump versus comparison speeches, relative to perfective phrases. In a subsequent experiment, participants rated phrases from presidential speeches as more emotionally intense when written in the imperfective aspect compared to the same phrases written in the perfective aspect, particularly for sentences perceived as negative in valence. These findings are consistent with the notion that emotions have a role in constraining the comprehension of language, a role that may be used in communication with large audiences.
Delphi, Maryam; Lotfi, M-Yones; Moossavi, Abdollah; Bakhshi, Enayatollah; Banimostafa, Maryam
2017-09-01
Previous studies have shown that interaural-time-difference (ITD) training can improve localization ability. Surprisingly little is, however, known about localization training vis-à-vis speech perception in noise based on interaural time difference in the envelope (ITD ENV). We sought to investigate the reliability of an ITD ENV-based training program in speech-in-noise perception among elderly individuals with normal hearing and speech-in-noise disorder. The present interventional study was performed during 2016. Sixteen elderly men between 55 and 65 years of age with the clinical diagnosis of normal hearing up to 2000 Hz and speech-in-noise perception disorder participated in this study. The localization training program was based on changes in ITD ENV. In order to evaluate the reliability of the training program, we performed speech-in-noise tests before the training program, immediately afterward, and then at 2 months' follow-up. The reliability of the training program was analyzed using the Friedman test and the SPSS software. Significant statistical differences were shown in the mean scores of speech-in-noise perception between the 3 time points (P=0.001). The results also indicated no difference in the mean scores of speech-in-noise perception between the 2 time points of immediately after the training program and 2 months' follow-up (P=0.212). The present study showed the reliability of an ITD ENV-based localization training in elderly individuals with speech-in-noise perception disorder.
Mantokoudis, Georgios; Dähler, Claudia; Dubach, Patrick; Kompis, Martin; Caversaccio, Marco D; Senn, Pascal
2013-01-01
To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. Higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI-users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). Webcameras have the potential to improve telecommunication of hearing-impaired individuals.
Neural Oscillations Carry Speech Rhythm through to Comprehension
Peelle, Jonathan E.; Davis, Matthew H.
2012-01-01
A key feature of speech is the quasi-regular rhythmic information contained in its slow amplitude modulations. In this article we review the information conveyed by speech rhythm, and the role of ongoing brain oscillations in listeners’ processing of this content. Our starting point is the fact that speech is inherently temporal, and that rhythmic information conveyed by the amplitude envelope contains important markers for place and manner of articulation, segmental information, and speech rate. Behavioral studies demonstrate that amplitude envelope information is relied upon by listeners and plays a key role in speech intelligibility. Extending behavioral findings, data from neuroimaging – particularly electroencephalography (EEG) and magnetoencephalography (MEG) – point to phase locking by ongoing cortical oscillations to low-frequency information (~4–8 Hz) in the speech envelope. This phase modulation effectively encodes a prediction of when important events (such as stressed syllables) are likely to occur, and acts to increase sensitivity to these relevant acoustic cues. We suggest a framework through which such neural entrainment to speech rhythm can explain effects of speech rate on word and segment perception (i.e., that the perception of phonemes and words in connected speech is influenced by preceding speech rate). Neuroanatomically, acoustic amplitude modulations are processed largely bilaterally in auditory cortex, with intelligible speech resulting in differential recruitment of left-hemisphere regions. Notable among these is lateral anterior temporal cortex, which we propose functions in a domain-general fashion to support ongoing memory and integration of meaningful input. Together, the reviewed evidence suggests that low-frequency oscillations in the acoustic speech signal form the foundation of a rhythmic hierarchy supporting spoken language, mirrored by phase-locked oscillations in the human brain. PMID:22973251
Selective spatial attention modulates bottom-up informational masking of speech
Carlile, Simon; Corkhill, Caitlin
2015-01-01
To hear out a conversation against other talkers, listeners overcome energetic and informational masking. Largely attributed to top-down processes, informational masking has also been demonstrated using unintelligible speech and amplitude-modulated maskers, suggesting bottom-up processes. We examined the role of speech-like amplitude modulations in informational masking using a spatial masking release paradigm. Separating a target talker from two masker talkers produced a 20 dB improvement in speech reception threshold, 40% of which was attributed to a release from informational masking. When the across-frequency temporal modulations in the masker talkers are decorrelated the speech is unintelligible, although the within-frequency modulation characteristics remain identical. Used as a masker as above, informational masking accounted for 37% of the spatial unmasking seen with this masker. This unintelligible and highly differentiable masker is unlikely to involve top-down processes. These data provide strong evidence of bottom-up masking involving speech-like, within-frequency modulations and that this presumably low-level process can be modulated by selective spatial attention. PMID:25727100
APEX/SPIN: a free test platform to measure speech intelligibility.
Francart, Tom; Hofmann, Michael; Vanthornhout, Jonas; Van Deun, Lieselot; van Wieringen, Astrid; Wouters, Jan
2017-02-01
Measuring speech intelligibility in quiet and noise is important in clinical practice and research. An easy-to-use free software platform for conducting speech tests is presented, called APEX/SPIN. The APEX/SPIN platform allows the use of any speech material in combination with any noise. A graphical user interface provides control over a large range of parameters, such as number of loudspeakers, signal-to-noise ratio and parameters of the procedure. An easy-to-use graphical interface is provided for calibration and storage of calibration values. To validate the platform, perception of words in quiet and sentences in noise were measured both with APEX/SPIN and with an audiometer and CD player, which is a conventional setup in current clinical practice. Five normal-hearing listeners participated in the experimental evaluation. Speech perception results were similar for the APEX/SPIN platform and conventional procedures. APEX/SPIN is a freely available and open source platform that allows the administration of all kinds of custom speech perception tests and procedures.
The Nationwide Speech Project: A multi-talker multi-dialect speech corpus
NASA Astrophysics Data System (ADS)
Clopper, Cynthia G.; Pisoni, David B.
2004-05-01
Most research on regional phonological variation relies on field recordings of interview speech. Recent research on the perception of dialect variation by naive listeners, however, has relied on read sentence materials in order to control for phonological and lexical content and syntax. The Nationwide Speech Project corpus was designed to obtain a large amount of speech from a number of talkers representing different regional varieties of American English. Five male and five female talkers from each of six different dialect regions in the United States were recorded reading isolated words, sentences, and passages, and in conversations with the experimenter. The talkers ranged in age from 18 to 25 years old and they were all monolingual native speakers of American English. They had lived their entire lives in one dialect region and both of their parents were raised in the same region. Results of an acoustic analysis of the vowel spaces of the talkers included in the Nationwide Speech Project will be presented. [Work supported by NIH.]
Determining the importance of fundamental hearing aid attributes.
Meister, Hartmut; Lausberg, Isabel; Kiessling, Juergen; Walger, Martin; von Wedel, Hasso
2002-07-01
To determine the importance of fundamental hearing aid attributes and to elicit measures of satisfaction and dissatisfaction. A prospective study based on a survey using a decompositional approach of preference measurement (conjoint analysis). Ear, nose, and throat university hospitals in Cologne and Giessen; various branches of hearing aid dispensers. A random sample of 175 experienced hearing aid users aged 20 to 91 years (mean age, 61 yr) recruited at two different sites. Relative importance of different hearing aid attributes, satisfaction and dissatisfaction with hearing aid attributes. Of the six fundamental hearing aid attributes assessed by the hearing aid users, the two features concerning speech perception attained the highest relative importance (25% speech in quiet, 27% speech in noise). The remaining four attributes (sound quality, handling, feedback, localization) had significantly lower values in a narrow range of 10 to 12%. Comparison of different subgroups of hearing aid wearers based on sociodemographic and user-specific data revealed a large interindividual scatter of the preferences for the attributes. A similar examination with 25 clinicians revealed overestimation of the importance of the attributes commonly associated with problems. Moreover, examination of satisfaction showed that speech in noise was the most frequent source of dissatisfaction (30% of all statements), whereas the subjects were satisfied with speech in quiet. The results emphasize the high importance of attributes related to speech perception. Speech discrimination in noise was the most important but also the most frequent source of negative statements. This attribute will be the outstanding parameter of future developments. Appropriate handling becomes an important factor for elderly subjects. However, because of the large interindividual scatter of data, the preferences of different hearing aid users were hardly predictable, giving evidence of multifactorial influences.
Speech intelligibility after glossectomy and speech rehabilitation.
Furia, C L; Kowalski, L P; Latorre, M R; Angelis, E C; Martins, N M; Barros, A P; Ribeiro, K C
2001-07-01
Oral tumor resections cause articulation deficiencies, depending on the site, extent of resection, type of reconstruction, and tongue stump mobility. To evaluate the speech intelligibility of patients undergoing total, subtotal, or partial glossectomy, before and after speech therapy. Twenty-seven patients (24 men and 3 women), aged 34 to 77 years (mean age, 56.5 years), underwent glossectomy. Tumor stages were T1 in 3 patients, T2 in 4, T3 in 8, T4 in 11, and TX in 1; node stages, N0 in 15 patients, N1 in 5, N2a-c in 6, and N3 in 1. No patient had metastases (M0). Patients were divided into 3 groups by extent of tongue resection, ie, total (group 1; n = 6), subtotal (group 2; n = 9), and partial (group 3; n = 12). Different phonological tasks were recorded and analyzed by 3 experienced judges, including 7 sustained oral vowels, a vowel in a syllable, and the sequence vowel-consonant-vowel (VCV). The intelligibility of spontaneous speech (sequence story) was scored from 1 to 4 in consensus. All patients underwent a therapeutic program to activate articulatory adaptations, compensations, and maximization of the remaining structures for 3 to 6 months. The tasks were recorded after speech therapy. To compare mean changes, analyses of variance and Wilcoxon tests were used. Patients of groups 1 and 2 significantly improved their speech intelligibility (P<.05). Group 1 improved vowels, VCV, and spontaneous speech; group 2, syllable, VCV, and spontaneous speech. Group 3 demonstrated better intelligibility in the pretherapy phase, but the improvement after therapy was not significant. Speech therapy was effective in improving speech intelligibility of patients undergoing glossectomy, even after major resection. Different pretherapy ability between groups was seen, with improvement of speech intelligibility in groups 1 and 2. The improvement of speech intelligibility in group 3 was not statistically significant, possibly because of the small and heterogeneous sample.
Improving Understanding of Emotional Speech Acoustic Content
NASA Astrophysics Data System (ADS)
Tinnemore, Anna
Children with cochlear implants show deficits in identifying emotional intent of utterances without facial or body language cues. A known limitation of cochlear implants is the inability to accurately portray the fundamental frequency contour of speech, which carries the majority of information needed to identify emotional intent. Without reliable access to the fundamental frequency, other cues to vocal emotion, if identifiable, could be used to guide therapies for training children with cochlear implants to better identify vocal emotion. The current study analyzed recordings of adults speaking neutral sentences with a set array of emotions in a child-directed and adult-directed manner. The goal was to identify acoustic cues that contribute to emotion identification that may be enhanced in child-directed speech, but are also present in adult-directed speech. Results of this study showed that there were significant differences in the variation of the fundamental frequency, the variation of intensity, and the rate of speech among emotions and between intended audiences.
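A rough sketch of how the three reported cue families might be measured is given below. It leans on librosa, uses a placeholder file name, and treats onset counts as a crude proxy for speech rate; it is not the study's analysis code.

```python
import numpy as np
import librosa

# Hedged sketch of the three cue families: F0 variation, intensity
# variation, and rate of speech. "utterance.wav" is a placeholder.
y, sr = librosa.load("utterance.wav", sr=None)

f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=75, fmax=500, sr=sr)
f0_variation = np.nanstd(f0)            # variability of the pitch contour (Hz)

rms = librosa.feature.rms(y=y)[0]
intensity_variation = np.std(20 * np.log10(rms + 1e-10))  # dB-scale variability

onsets = librosa.onset.onset_detect(y=y, sr=sr)
speech_rate = len(onsets) / (len(y) / sr)  # onsets per second, a rate proxy

print(f0_variation, intensity_variation, speech_rate)
```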
Longitudinal decline in speech production in Parkinson's disease spectrum disorders.
Ash, Sharon; Jester, Charles; York, Collin; Kofman, Olga L; Langey, Rachel; Halpin, Amy; Firn, Kim; Dominguez Perez, Sophia; Chahine, Lama; Spindler, Meredith; Dahodwala, Nabila; Irwin, David J; McMillan, Corey; Weintraub, Daniel; Grossman, Murray
2017-08-01
We examined narrative speech production longitudinally in non-demented (n=15) and mildly demented (n=8) patients with Parkinson's disease spectrum disorder (PDSD), and we related increasing impairment to structural brain changes in specific language and motor regions. Patients provided semi-structured speech samples, describing a standardized picture at two time points (mean±SD interval = 38±24 months). The recorded speech samples were analyzed for fluency, grammar, and informativeness. PDSD patients with dementia exhibited significant decline in their speech, unrelated to changes in overall cognitive or motor functioning. Regression analysis in a subset of patients with MRI scans (n=11) revealed that impaired language performance at Time 2 was associated with reduced gray matter (GM) volume at Time 1 in regions of interest important for language functioning but not with reduced GM volume in motor brain areas. These results dissociate language and motor systems and highlight the importance of non-motor brain regions for declining language in PDSD. Copyright © 2017 Elsevier Inc. All rights reserved.
Phonologically-based biomarkers for major depressive disorder
NASA Astrophysics Data System (ADS)
Trevino, Andrea Carolina; Quatieri, Thomas Francis; Malyska, Nicolas
2011-12-01
Of increasing importance in the civilian and military population is the recognition of major depressive disorder at its earliest stages and intervention before the onset of severe symptoms. Toward the goal of more effective monitoring of depression severity, we introduce vocal biomarkers that are derived automatically from phonologically-based measures of speech rate. To assess our measures, we use a 35-speaker free-response speech database of subjects treated for depression over a 6-week duration. We find that dissecting average measures of speech rate into phone-specific characteristics and, in particular, combined phone-duration measures uncovers stronger relationships between speech rate and depression severity than global measures previously reported for a speech-rate biomarker. Results of this study are supported by correlation of our measures with depression severity and classification of depression state with these vocal measures. Our approach provides a general framework for analyzing individual symptom categories through phonological units, and supports the premise that speaking rate can be an indicator of psychomotor retardation severity.
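The move from a global speech-rate measure to phone-specific duration measures can be sketched as follows, assuming a time-aligned phone transcription in a simple (label, start, end) format; the alignment format and all values are illustrative, not the study's data.

```python
from collections import defaultdict

# Hedged sketch: dissect a global speech-rate measure into phone-specific
# duration measures. The (phone, start_s, end_s) alignment is an assumption.
alignment = [("dh", 0.00, 0.05), ("ax", 0.05, 0.12), ("k", 0.12, 0.20),
             ("ae", 0.20, 0.33), ("t", 0.33, 0.41), ("sil", 0.41, 0.80)]

durations = defaultdict(list)
for phone, start, end in alignment:
    durations[phone].append(end - start)

total_time = alignment[-1][2] - alignment[0][1]
n_phones = sum(1 for p, _, _ in alignment if p != "sil")
global_rate = n_phones / total_time                 # phones per second

# phone-specific mean durations: candidate markers of psychomotor slowing
phone_means = {p: sum(d) / len(d) for p, d in durations.items() if p != "sil"}
pause_fraction = sum(durations.get("sil", [])) / total_time

print(f"global rate: {global_rate:.2f} phones/s")
print("mean durations:", phone_means, "pause fraction:", round(pause_fraction, 2))
```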
Monson, Brian B; Lotto, Andrew J; Story, Brad H
2012-09-01
The human singing and speech spectrum includes energy above 5 kHz. To begin an in-depth exploration of this high-frequency energy (HFE), a database of anechoic high-fidelity recordings of singers and talkers was created and analyzed. Third-octave band analysis from the long-term average spectra showed that production level (soft vs normal vs loud), production mode (singing vs speech), and phoneme (for voiceless fricatives) all significantly affected HFE characteristics. Specifically, increased production level caused an increase in absolute HFE level, but a decrease in relative HFE level. Singing exhibited higher levels of HFE than speech in the soft and normal conditions, but not in the loud condition. Third-octave band levels distinguished phoneme class of voiceless fricatives. Female HFE levels were significantly greater than male levels only above 11 kHz. This information is pertinent to various areas of acoustics, including vocal tract modeling, voice synthesis, augmentative hearing technology (hearing aids and cochlear implants), and training/therapy for singing and speech.
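A long-term average spectrum with third-octave band levels of the kind described can be approximated as below; the file name is a placeholder and the band centers are nominal ANSI-style values, not necessarily the authors' exact analysis.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

# Hedged sketch: third-octave band levels from a long-term average spectrum,
# with high-frequency energy (HFE) summed above 5 kHz. "singer.wav" is a
# placeholder recording.
sr, y = wavfile.read("singer.wav")
y = y.astype(float)
if y.ndim > 1:
    y = y.mean(axis=1)                       # fold stereo to mono
freqs, psd = welch(y, fs=sr, nperseg=4096)   # long-term average spectrum

def third_octave_level(f_center):
    lo, hi = f_center / 2 ** (1 / 6), f_center * 2 ** (1 / 6)
    band = (freqs >= lo) & (freqs < hi)
    return 10 * np.log10(psd[band].sum() + 1e-20)

centers = [1000 * 2 ** (n / 3) for n in range(7, 13)]   # ~5 kHz .. 16 kHz
levels = {round(fc): third_octave_level(fc) for fc in centers if fc < sr / 2}
hfe_total = 10 * np.log10(sum(10 ** (lv / 10) for lv in levels.values()))
print(levels, "total HFE level (dB):", round(hfe_total, 1))
```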
ERIC Educational Resources Information Center
Westerhausen, René; Bless, Josef J.; Passow, Susanne; Kompus, Kristiina; Hugdahl, Kenneth
2015-01-01
The ability to use cognitive-control functions to regulate speech perception is thought to be crucial in mastering developmental challenges, such as language acquisition during childhood or compensation for sensory decline in older age, enabling interpersonal communication and meaningful social interactions throughout the entire life span.…
Impact of Aberrant Acoustic Properties on the Perception of Sound Quality in Electrolarynx Speech
ERIC Educational Resources Information Center
Meltzner, Geoffrey S.; Hillman, Robert E.
2005-01-01
A large percentage of patients who have undergone laryngectomy to treat advanced laryngeal cancer rely on an electrolarynx (EL) to communicate verbally. Although serviceable, EL speech is plagued by shortcomings in both sound quality and intelligibility. This study sought to better quantify the relative contributions of previously identified…
Humes, Larry E.; Kidd, Gary R.; Lentz, Jennifer J.
2013-01-01
This study was designed to address individual differences in aided speech understanding among a relatively large group of older adults. The group of older adults consisted of 98 adults (50 female and 48 male) ranging in age from 60 to 86 (mean = 69.2). Hearing loss was typical for this age group and about 90% had not worn hearing aids. All subjects completed a battery of tests, including cognitive (6 measures), psychophysical (17 measures), and speech-understanding (9 measures), as well as the Speech, Spatial, and Qualities of Hearing (SSQ) self-report scale. Most of the speech-understanding measures made use of competing speech and the non-speech psychophysical measures were designed to tap phenomena thought to be relevant for the perception of speech in competing speech (e.g., stream segregation, modulation-detection interference). All measures of speech understanding were administered with spectral shaping applied to the speech stimuli to fully restore audibility through at least 4000 Hz. The measures used were demonstrated to be reliable in older adults and, when compared to a reference group of 28 young normal-hearing adults, age-group differences were observed on many of the measures. Principal-components factor analysis was applied successfully to reduce the number of independent and dependent (speech understanding) measures for a multiple-regression analysis. Doing so yielded one global cognitive-processing factor and five non-speech psychoacoustic factors (hearing loss, dichotic signal detection, multi-burst masking, stream segregation, and modulation detection) as potential predictors. To this set of six potential predictor variables were added subject age, Environmental Sound Identification (ESI), and performance on the text-recognition-threshold (TRT) task (a visual analog of interrupted speech recognition). These variables were used to successfully predict one global aided speech-understanding factor, accounting for about 60% of the variance. PMID:24098273
Reichenbach, Chagit S.; Braiman, Chananel; Schiff, Nicholas D.; Hudspeth, A. J.; Reichenbach, Tobias
2016-01-01
The auditory-brainstem response (ABR) to short and simple acoustical signals is an important clinical tool used to diagnose the integrity of the brainstem. The ABR is also employed to investigate the auditory brainstem in a multitude of tasks related to hearing, such as processing speech or selectively focusing on one speaker in a noisy environment. Such research measures the response of the brainstem to short speech signals such as vowels or words. Because the voltage signal of the ABR has a tiny amplitude, several hundred to a thousand repetitions of the acoustic signal are needed to obtain a reliable response. The large number of repetitions poses a challenge to assessing cognitive functions due to neural adaptation. Here we show that continuous, non-repetitive speech, lasting several minutes, may be employed to measure the ABR. Because the speech is not repeated during the experiment, the precise temporal form of the ABR cannot be determined. We show, however, that important structural features of the ABR can nevertheless be inferred. In particular, the brainstem responds at the fundamental frequency of the speech signal, and this response is modulated by the envelope of the voiced parts of speech. We accordingly introduce a novel measure that assesses the ABR as modulated by the speech envelope, at the fundamental frequency of speech and at the characteristic latency of the response. This measure has a high signal-to-noise ratio and can hence be employed effectively to measure the ABR to continuous speech. We use this novel measure to show that the ABR is weaker to intelligible speech than to unintelligible, time-reversed speech. The methods presented here can be employed for further research on speech processing in the auditory brainstem and can lead to the development of future clinical diagnosis of brainstem function. PMID:27303286
Flaherty, Mary; Dent, Micheal L.; Sawusch, James R.
2017-01-01
The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with “d” or “t” and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal. PMID:28562597
Melo, Roberta Michelon; Mota, Helena Bolli; Berti, Larissa Cristina
2017-06-08
This study used acoustic and articulatory analyses to characterize the contrast between alveolar and velar stops in typical speech data, comparing the parameters (acoustic and articulatory) of adults and children with typical speech development. The sample consisted of 20 adults and 15 children with typical speech development. The analyzed corpus was organized through five repetitions of each target word (/'kapə/, /'tapə/, /'galo/ and /'daɾə/). These words were inserted into a carrier phrase and the participant was asked to name them spontaneously. Simultaneous audio and video data were recorded (tongue ultrasound images). The data were submitted to acoustic analyses (voice onset time; spectral peak and burst spectral moments; vowel/consonant transition and relative duration measures) and articulatory analyses (proportion of significant axes of the anterior and posterior tongue regions and description of tongue curves). Acoustic and articulatory parameters were effective in indicating the contrast between alveolar and velar stops, mainly in the adult group. Both speech analyses showed statistically significant differences between the two groups. The acoustic and articulatory parameters provided signals to characterize the phonic contrast of speech. One of the main findings in the comparison between adult and child speech was evidence of articulatory refinement/maturation even after the period of segment acquisition.
Crew Communication as a Factor in Aviation Accidents
NASA Technical Reports Server (NTRS)
Goguen, J.; Linde, C.; Murphy, M.
1986-01-01
The crew communication process is analyzed. Planning and explanation are shown to be well-structured discourse types, described by formal rules. These formal rules are integrated with those describing the other most important discourse type within the cockpit: the command-and-control speech act chain. The latter is described as a sequence of speech acts for making requests (including orders and suggestions), for making reports, for supporting or challenging statements, and for acknowledging previous speech acts. Mitigation level, a linguistic indication of indirectness and tentativeness in speech, was an important variable in several hypotheses, i.e., the speech of subordinates is more mitigated than the speech of superiors, the speech of all crewmembers is less mitigated when they know that they are in either a problem or emergency situation, and mitigation is a factor in failures of crewmembers to initiate discussion of new topics or have suggestions ratified by the captain. Test results also show that planning and explanation are more frequently performed by captains than by other crewmembers, are done more during crew-recognized problems, and are done less during crew-recognized emergencies.
NASA Astrophysics Data System (ADS)
Jiang, Hongyan; Qiu, Hongbing; He, Ning; Liao, Xin
2018-06-01
For optoacoustic communication from in-air platforms to submerged apparatus, a method based on speech recognition and variable laser-pulse repetition rates is proposed, which realizes character encoding and transmission for speech. First, the theory and spectrum characteristics of laser-generated underwater sound are analyzed; next, character conversion and encoding for speech, as well as the pattern of codes for laser modulation, are studied; finally, experiments are carried out to verify the system design. Results show that the optoacoustic system, in which laser modulation is controlled by speech-to-character baseband codes, improves flexibility in the receiving location of underwater targets as well as the real-time performance of information transmission. In the overwater transmitter, a pulse laser is triggered by speech signals at several repetition rates randomly selected in the range of one to fifty Hz; in the underwater receiver, the laser pulse repetition rate and the data can be recovered from the preamble and information codes of the corresponding laser-generated sound. When the energy of the laser pulse is appropriate, real-time transmission of speaker-independent speech can be realized in this way, which addresses the problem of limited underwater bandwidth and provides a technical approach for air-sea communication.
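The character-encoding idea can be illustrated with a hypothetical sketch: the 1-50 Hz repetition-rate range and the preamble/information-code structure follow the abstract, while the bit widths and framing below are invented for illustration.

```python
import random

# Hypothetical sketch of the encoding scheme: speech is converted to
# characters, each character becomes a short code, and a preamble announces
# the randomly chosen laser pulse repetition rate. Framing details invented.
def encode_message(text):
    rate_hz = random.randint(1, 50)            # per-message repetition rate
    preamble = f"{rate_hz:06b}"                # announce the rate (6 bits)
    payload = "".join(f"{ord(c):08b}" for c in text)  # 8-bit character codes
    return rate_hz, preamble + payload

def pulse_times(bits, rate_hz):
    # one laser pulse slot per bit; a '1' fires the laser, a '0' stays silent
    period = 1.0 / rate_hz
    return [i * period for i, b in enumerate(bits) if b == "1"]

rate, frame = encode_message("SOS")
print(f"rate={rate} Hz, frame={frame}")
print("pulse times (s):", [round(t, 2) for t in pulse_times(frame, rate)][:8])
```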
Breath Group Analysis for Reading and Spontaneous Speech in Healthy Adults
Wang, Yu-Tsai; Green, Jordan R.; Nip, Ignatius S.B.; Kent, Ray D.; Kent, Jane Finley
2010-01-01
Aims The breath group can serve as a functional unit to define temporal and fundamental frequency (f0) features in continuous speech. These features of the breath group are determined by the physiologic, linguistic, and cognitive demands of communication. Reading and spontaneous speech are two speaking tasks that vary in these demands and are commonly used to evaluate speech performance for research and clinical applications. The purpose of this study is to examine differences between reading and spontaneous speech in the temporal and f0 aspects of their breath groups. Methods Sixteen participants read two passages and answered six questions while wearing a circumferentially vented mask connected to a pneumotach. The aerodynamic signal was used to identify inspiratory locations. The audio signal was used to analyze task differences in breath group structure, including temporal and f0 components. Results The main findings were that the spontaneous speech task exhibited significantly more grammatically inappropriate breath group locations and longer breath group durations than did the passage reading task. Conclusion The task differences in the percentage of grammatically inadequate breath group locations and in breath group duration for healthy adult speakers partly explain the differences in cognitive-linguistic load between passage reading and spontaneous speech. PMID:20588052
NASA Astrophysics Data System (ADS)
Yildirim, Serdar; Montanari, Simona; Andersen, Elaine; Narayanan, Shrikanth S.
2003-10-01
Understanding the fine details of children's speech and gestural characteristics helps, among other things, in creating natural computer interfaces. We analyze the acoustic, lexical/non-lexical and spoken/gestural discourse characteristics of young children's speech using audio-video data gathered with a Wizard of Oz technique from 4 to 6 year old children engaged in resolving a series of age-appropriate cognitive challenges. Fundamental and formant frequencies exhibited greater variations between subjects, consistent with previous results on read speech [Lee et al., J. Acoust. Soc. Am. 105, 1455-1468 (1999)]. Also, our analysis showed that, in a given bandwidth, the phonemic information contained in the speech of young children is significantly less than that of older children and adults. To enable an integrated analysis, a multi-track annotation board was constructed using the ANVIL tool kit [M. Kipp, Eurospeech 1367-1370 (2001)]. Along with speech transcriptions and acoustic analysis, non-lexical and discourse characteristics, and children's gestures (facial expressions, body movements, hand/head movements) were annotated in a synchronized multilayer system. Initial results showed that younger children rely more on gestures to emphasize their verbal assertions. Younger children use non-lexical speech (e.g., um, huh) associated with frustration and pondering/reflecting more frequently than older children. Younger children also repair more with humans than with computers.
Effect of deep brain stimulation on different speech subsystems in patients with multiple sclerosis.
Pützer, Manfred; Barry, William John; Moringlane, Jean Richard
2007-11-01
The effect of deep brain stimulation on articulation and phonation subsystems in seven patients with multiple sclerosis (MS) was examined. Production parameters in fast syllable-repetitions were defined and measured, and the phonation quality during vowel productions was analyzed. Speech material was recorded for patients (with and without stimulation) and for a group of healthy control speakers. With stimulation, the precision of glottal and supraglottal articulatory gestures is reduced, whereas phonation has a greater tendency to be hyperfunctional in comparison with the healthy control data. Different effects on the two speech subsystems are induced by electrical stimulation of the thalamus in patients with MS.
Data-Driven Subclassification of Speech Sound Disorders in Preschool Children
Vick, Jennell C.; Campbell, Thomas F.; Shriberg, Lawrence D.; Green, Jordan R.; Truemper, Klaus; Rusiewicz, Heather Leavy; Moore, Christopher A.
2015-01-01
Purpose The purpose of the study was to determine whether distinct subgroups of preschool children with speech sound disorders (SSD) could be identified using a subgroup discovery algorithm (SUBgroup discovery via Alternate Random Processes, or SUBARP). Of specific interest was finding evidence of a subgroup of SSD exhibiting performance consistent with atypical speech motor control. Method Ninety-seven preschool children with SSD completed speech and nonspeech tasks. Fifty-three kinematic, acoustic, and behavioral measures from these tasks were input to SUBARP. Results Two distinct subgroups were identified from the larger sample. The 1st subgroup (76%; population prevalence estimate = 67.8%–84.8%) did not have characteristics that would suggest atypical speech motor control. The 2nd subgroup (10.3%; population prevalence estimate = 4.3%–16.5%) exhibited significantly higher variability in measures of articulatory kinematics and poor ability to imitate iambic lexical stress, suggesting atypical speech motor control. Both subgroups were consistent with classes of SSD in the Speech Disorders Classification System (SDCS; Shriberg et al., 2010a). Conclusion Characteristics of children in the larger subgroup were consistent with the proportionally large SDCS class termed speech delay; characteristics of children in the smaller subgroup were consistent with the SDCS subtype termed motor speech disorder—not otherwise specified. The authors identified candidate measures to identify children in each of these groups. PMID:25076005
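SUBARP's internals are not given here, so the sketch below scores a candidate subgroup with a generic subgroup-discovery quality function (weighted relative accuracy) merely to illustrate the idea; it is not SUBARP, and the counts are invented.

```python
# Generic subgroup-discovery quality score (weighted relative accuracy),
# shown only to illustrate scoring a candidate subgroup against the full
# sample. Not the SUBARP algorithm; all counts are invented.
def wracc(subgroup_positive, subgroup_size, total_positive, total_size):
    """WRAcc = coverage * (subgroup precision - baseline rate)."""
    coverage = subgroup_size / total_size
    precision = subgroup_positive / subgroup_size
    baseline = total_positive / total_size
    return coverage * (precision - baseline)

# e.g., 10 of 12 children in a candidate subgroup show high articulatory
# variability, versus 15 of 97 in the whole SSD sample
print(f"WRAcc = {wracc(10, 12, 15, 97):.3f}")
```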
Modeling Driving Performance Using In-Vehicle Speech Data From a Naturalistic Driving Study.
Kuo, Jonny; Charlton, Judith L; Koppel, Sjaan; Rudin-Brown, Christina M; Cross, Suzanne
2016-09-01
We aimed to (a) describe the development and application of an automated approach for processing in-vehicle speech data from a naturalistic driving study (NDS), (b) examine the influence of child passenger presence on driving performance, and (c) model this relationship using in-vehicle speech data. Parent drivers frequently engage in child-related secondary behaviors, but the impact on driving performance is unknown. Applying automated speech-processing techniques to NDS audio data would facilitate the analysis of in-vehicle driver-child interactions and their influence on driving performance. Speech activity detection and speaker diarization algorithms were applied to audio data from a Melbourne-based NDS involving 42 families. Multilevel models were developed to evaluate the effect of speech activity and the presence of child passengers on driving performance. Speech activity was significantly associated with velocity and steering angle variability. Child passenger presence alone was not associated with changes in driving performance. However, speech activity in the presence of two child passengers was associated with the most variability in driving performance. The effects of in-vehicle speech on driving performance in the presence of child passengers appear to be heterogeneous, and multiple factors may need to be considered in evaluating their impact. This goal can potentially be achieved within large-scale NDS through the automated processing of observational data, including speech. Speech-processing algorithms enable new perspectives on driving performance to be gained from existing NDS data, and variables that were once labor-intensive to process can be readily utilized in future research. © 2016, Human Factors and Ergonomics Society.
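Speech activity detection of the kind applied here can be caricatured with a simple energy threshold. The sketch below is a minimal stand-in, assuming nothing about the study's actual (far more noise-robust) algorithms.

```python
import numpy as np

def detect_speech(samples, sr, frame_ms=30, threshold_db=-35.0):
    """Minimal energy-based voice activity detection sketch.

    A stand-in for the (unspecified) speech activity detection used in the
    study; real in-vehicle pipelines must be far more robust to road noise.
    Returns (start_s, end_s) pairs for frames above an energy threshold.
    """
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    active = energy_db > threshold_db

    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append((start * frame_ms / 1000, i * frame_ms / 1000))
            start = None
    if start is not None:
        segments.append((start * frame_ms / 1000, n_frames * frame_ms / 1000))
    return segments

# toy check: half a second of faint noise, then a louder "speech" burst
sr = 16000
audio = np.concatenate([np.random.randn(sr // 2) * 0.005,
                        np.random.randn(sr // 2) * 0.3])
print(detect_speech(audio, sr))
```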
Implementation of the Intelligent Voice System for Kazakh
NASA Astrophysics Data System (ADS)
Yessenbayev, Zh; Saparkhojayev, N.; Tibeyev, T.
2014-04-01
Modern speech technologies are highly advanced and widely used in day-to-day applications. However, this mostly concerns the languages of well-developed countries, such as English, German, Japanese, and Russian. For Kazakh, the situation is less advanced, and research in this field is only starting to evolve. In this research- and application-oriented project, we introduce an intelligent voice system for the fast deployment of call centers and information desks supporting Kazakh speech. The demand for such a system is obvious given the country's large size and small population: landline and cell phones are often the only means of communication for distant villages and suburbs. The system features Kazakh speech recognition and synthesis modules as well as a web GUI for efficient dialog management. For speech recognition we use the CMU Sphinx engine, and for speech synthesis, MaryTTS. The web GUI is implemented in Java, enabling operators to quickly create and manage dialogs in a user-friendly graphical environment. Call routines are handled by Asterisk PBX and JBoss Application Server. The system supports technologies and protocols such as VoIP, VoiceXML, FastAGI, Java Speech API, and J2EE. For the speech recognition experiments we compiled and used the first Kazakh speech corpus, with utterances from 169 native speakers. The speech recognizer achieves 4.1% WER on isolated word recognition and 6.9% WER on clean continuous speech recognition tasks. The speech synthesis experiments include the training of male and female voices.
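For reference, the reported WER figures follow the standard definition: the word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words. A minimal self-contained sketch, not the project's own evaluation code:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Compute WER via dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution/match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# word_error_rate("the cat sat", "the cat sat down") -> 0.333...
```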
Integrating hidden Markov model and PRAAT: a toolbox for robust automatic speech transcription
NASA Astrophysics Data System (ADS)
Kabir, A.; Barker, J.; Giurgiu, M.
2010-09-01
An automatic time-aligned phone transcription toolbox for English speech corpora has been developed. The toolbox is particularly useful for generating robust automatic transcriptions and can produce phone-level transcriptions using speaker-independent as well as speaker-dependent models without manual intervention. The system is based on the standard hidden Markov model (HMM) approach and was successfully tested on a large audiovisual speech corpus, the GRID corpus. One of the most powerful features of the toolbox is its flexibility in speech processing: the speech community can import automatic transcriptions generated by the HMM Toolkit (HTK) into a popular transcription software package, PRAAT, and vice versa. The toolbox has been evaluated through statistical analysis on GRID data, which shows that the automatic transcription deviates from manual transcription by an average of 20 ms.
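To illustrate the HTK-to-PRAAT direction of that exchange (this is the essential format mapping, not the toolbox's published code), the sketch below converts an HTK label file, whose times are integers in 100-nanosecond units, into a single-tier Praat TextGrid. The function and tier names are assumptions, and the sketch assumes the labels tile the file without gaps, as HTK output does.

```python
def htk_lab_to_textgrid(lab_path, tg_path, tier="phones"):
    """Convert an HTK label file ("start end label" per line, 100 ns units)
    into a Praat TextGrid with one interval tier."""
    intervals = []
    with open(lab_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3:
                # HTK times are in 100 ns units; convert to seconds
                intervals.append((int(parts[0]) * 1e-7,
                                  int(parts[1]) * 1e-7, parts[2]))
    xmax = intervals[-1][1] if intervals else 0.0
    with open(tg_path, "w") as f:
        f.write('File type = "ooTextFile"\nObject class = "TextGrid"\n\n')
        f.write(f"xmin = 0\nxmax = {xmax}\ntiers? <exists>\nsize = 1\n")
        f.write("item []:\n    item [1]:\n")
        f.write(f'        class = "IntervalTier"\n        name = "{tier}"\n')
        f.write(f"        xmin = 0\n        xmax = {xmax}\n")
        f.write(f"        intervals: size = {len(intervals)}\n")
        for i, (x0, x1, lab) in enumerate(intervals, 1):
            f.write(f"        intervals [{i}]:\n")
            f.write(f"            xmin = {x0}\n            xmax = {x1}\n")
            f.write(f'            text = "{lab}"\n')
```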
Practice Patterns of Speech-Language Pathologists in Pediatric Vocal Health.
Hartley, Naomi A; Braden, Maia; Thibeault, Susan L
2017-05-17
The purpose of this study was to investigate current practices of speech-language pathologists (SLPs) in the management of pediatric vocal health, with specific analysis of the influence of clinical specialty and workplace setting on management approaches. American Speech-Language-Hearing Association-certified clinicians providing services within the United States (1%-100% voice caseload) completed an anonymous online survey detailing clinician demographics; employment location and service delivery models; approaches to continuing professional development; and specifics of case management, including assessment, treatment, and discharge procedures. Current practice patterns were analyzed for 100 SLPs (0-42 years of experience; 77 self-identifying as voice specialists) providing services in 34 U.S. states across a range of metropolitan and nonmetropolitan workplace settings. In general, SLPs favored a multidisciplinary approach to management; included perceptual, instrumental, and quality of life measures during evaluation; and tailored intervention to the individual using a combination of therapy approaches. In contrast with current practice guidelines, only half reported requiring an otolaryngology evaluation prior to initiating treatment. Both clinical specialty and workplace setting were found to affect practice patterns. SLPs in school settings were significantly less likely to consider themselves voice specialists compared with all other work environments. Those SLPs who considered themselves voice specialists were significantly more likely to utilize voice-specific assessment and treatment approaches. SLP practice largely mirrors current professional practice guidelines; however, potential exists to further enhance client care. To ensure that SLPs are best able to support children in successful communication, further research, education, and advocacy are required.
Phonetic Modification of Vowel Space in Storybook Speech to Infants up to 2 Years of Age
Burnham, Evamarie B.; Wieland, Elizabeth A.; Kondaurova, Maria V.; McAuley, J. Devin; Bergeson, Tonya R.
2015-01-01
Purpose A large body of literature has indicated vowel space area expansion in infant-directed (ID) speech compared with adult-directed (AD) speech, which may promote language acquisition. The current study tested whether this expansion occurs in storybook speech read to infants at various points during their first 2 years of life. Method In 2 studies, mothers read a storybook containing target vowels in ID and AD speech conditions. Study 1 was longitudinal, with 11 mothers recorded when their infants were 3, 6, and 9 months old. Study 2 was cross-sectional, with 48 mothers recorded when their infants were 3, 9, 13, or 20 months old (n = 12 per group). The 1st and 2nd formants of vowels /i/, /ɑ/, and /u/ were measured, and vowel space area and dispersion were calculated. Results Across both studies, 1st and/or 2nd formant frequencies shifted systematically for /i/ and /u/ vowels in ID compared with AD speech. No difference in vowel space area or dispersion was found. Conclusions The results suggest that a variety of communication and situational factors may affect phonetic modifications in ID speech, but that vowel space characteristics in speech to infants stay consistent across the first 2 years of life. PMID:25659121
Strand, Edythe A; McCauley, Rebecca J; Weigand, Stephen D; Stoeckel, Ruth E; Baas, Becky S
2013-04-01
In this article, the authors report reliability and validity evidence for the Dynamic Evaluation of Motor Speech Skill (DEMSS), a new test that uses dynamic assessment to aid in the differential diagnosis of childhood apraxia of speech (CAS). Participants were 81 children between 36 and 79 months of age who were referred to the Mayo Clinic for diagnosis of speech sound disorders. Children were given the DEMSS and a standard speech and language test battery as part of routine evaluations. Subsequently, intrajudge, interjudge, and test-retest reliability were evaluated for a subset of participants. Construct validity was explored for all 81 participants through the use of agglomerative cluster analysis, sensitivity measures, and likelihood ratios. The mean percentage of agreement for 171 judgments was 89% for test-retest reliability, 89% for intrajudge reliability, and 91% for interjudge reliability. Agglomerative hierarchical cluster analysis showed that total DEMSS scores largely differentiated clusters of children with CAS vs. mild CAS vs. other speech disorders. Positive and negative likelihood ratios and measures of sensitivity and specificity suggested that the DEMSS does not overdiagnose CAS but sometimes fails to identify children with CAS. The value of the DEMSS in differential diagnosis of severe speech impairments was supported on the basis of evidence of reliability and validity.
ERIC Educational Resources Information Center
Thibault, Pierrette; Sankoff, Gillian
1999-01-01
Analyzes the reactions of francophone Montrealers (n=116) to the recorded speech of English speakers using French. Particular focus is on finding out which linguistic traits of speech triggered the judgments of the speakers' competence and to what extent the speakers met the judges' expectations with regard to their job suitability. (Author/VWL)
ERIC Educational Resources Information Center
Murphy, Paul L.
Using the Jazz Age as his frame of reference, the author analyzes the role of dissent and its suppression in American life. The crisis in civil liberties which began with a wartime restriction of freedom of speech in 1918 evoked a counteraction from concerned citizens during the 1920s. Their efforts were vindicated in 1931 when a…
ERIC Educational Resources Information Center
Riehl, Bambi
2010-01-01
PEPNet gets frequent requests for information about interpreter/speech-to-text position development. Determining or analyzing a salary is often part of this challenge and the people at PEPNet trust the information in this paper will be helpful to postsecondary programs. This is the sixth survey produced by PEPNet-Midwest and the University of…
An acoustic feature-based similarity scoring system for speech rehabilitation assistance.
Syauqy, Dahnial; Wu, Chao-Min; Setyawati, Onny
2016-08-01
The purpose of this study is to develop a tool to assist speech therapy and rehabilitation, focused on automatic scoring based on comparing a patient's speech with normal speech on several aspects, including pitch, vowels, voiced-unvoiced segments, strident fricatives, and sound intensity. Pitch estimation employed a cepstrum-based algorithm for its robustness; vowel classification used a multilayer perceptron (MLP) to classify vowels from pitch and formants; and strident fricative detection was based on the major spectral peak's intensity and location and the presence of pitch in the segment. To evaluate the performance of the system, this study analyzed eight patients' speech recordings (four males, four females; 4-58 years old), which had been recorded in a previous study in cooperation with Taipei Veterans General Hospital and Taoyuan General Hospital. The pitch experiments showed that the cepstrum method had a gross pitch error of 5.3% over a total of 2,086 frames. For vowel classification, the MLP provided 93% accuracy for men, 87% for women, and 84% for children. Overall, 156 of 192 of the tool's grading results (81%) were consistent with audio and visual observations made by four experienced respondents. Implications for Rehabilitation: Difficulties in communication may limit a person's ability to transfer and exchange information. The fact that speech is one of the primary means of communication has driven the need for speech diagnosis and rehabilitation. Advances in computer-assisted speech therapy (CAST) improve the quality and time efficiency of the diagnosis and treatment of these disorders. The present study developed a tool to assist speech therapy and rehabilitation that provides a simple interface, letting assessment be performed even by the patient without particular knowledge of speech processing, while also providing deeper analysis of the speech that can be useful to the speech therapist.
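The cepstrum-based pitch algorithm the study adopts can be sketched in a few lines: the inverse FFT of the log magnitude spectrum of a frame peaks at the quefrency (lag, in samples) of the pitch period. This is a generic illustration rather than the authors' implementation; the window and search range are assumptions.

```python
import numpy as np

def cepstral_pitch(frame, sr, fmin=50.0, fmax=400.0):
    """Estimate F0 of one speech frame from the peak of its real cepstrum.

    The frame should hold at least two pitch periods
    (e.g. 40 ms for fmin = 50 Hz).
    """
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
    cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-10))
    qmin, qmax = int(sr / fmax), int(sr / fmin)   # pitch-period search range
    peak = qmin + np.argmax(cepstrum[qmin:qmax])
    return sr / peak
```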
Studer-Eichenberger, Esther; Studer-Eichenberger, Felix; Koenig, Thomas
2016-01-01
The objectives of the present study were to investigate temporal/spectral sound-feature processing in preschool children (4 to 7 years old) with peripheral hearing loss compared with age-matched controls. The results verified the presence of statistical learning, which was diminished in children with hearing impairments (HIs), and elucidated possible perceptual mediators of speech production. Perception and production of the syllables /ba/, /da/, /ta/, and /na/ were recorded in 13 children with normal hearing and 13 children with HI. Perception was assessed physiologically through event-related potentials (ERPs) recorded by EEG in a multifeature mismatch negativity paradigm and behaviorally through a discrimination task. Temporal and spectral features of the ERPs during speech perception were analyzed, and speech production was quantitatively evaluated using speech motor maximum performance tasks. Proximal to stimulus onset, children with HI displayed a difference in map topography, indicating diminished statistical learning. In later ERP components, children with HI exhibited reduced amplitudes in the N2 and the early parts of the late discriminative negativity components specifically, which are associated with temporal and spectral control mechanisms. Abnormalities of speech perception were only subtly reflected in speech production, as the lone difference found in speech production was a mild delay in regulating speech intensity. In addition to previously reported deficits in sound-feature discrimination, the present results reflect diminished statistical learning in children with HI, which plays an early and important, but so far neglected, role in phonological processing. Furthermore, the lack of corresponding behavioral abnormalities in speech production implies that impaired perceptual capacities do not necessarily translate into productive deficits.
Halje, Pär; Seeck, Margitta; Blanke, Olaf; Ionta, Silvio
2015-12-01
The neural correspondence between the systems responsible for the execution and recognition of actions has been suggested both in humans and non-human primates. Apart from being a key region of this visuo-motor observation-execution matching (OEM) system, the human inferior frontal gyrus (IFG) is also important for speech production. The functional overlap of visuo-motor OEM and speech, together with the phylogenetic history of the IFG as a motor area, has led to the idea that speech function evolved from pre-existing motor systems and to the hypothesis that an OEM system may also exist for speech. However, visuo-motor OEM and speech OEM have never been compared directly. We used electrocorticography to analyze oscillations recorded from intracranial electrodes in human fronto-parieto-temporal cortex during visuo-motor OEM tasks (executing or visually observing an action) and speech OEM tasks (verbally describing an action using the first or third person pronoun). The results show that neural activity related to visuo-motor OEM is widespread in the frontal, parietal, and temporal regions. Speech OEM also elicited widespread responses, partly overlapping with visuo-motor OEM sites (bilaterally), including frontal, parietal, and temporal regions. Interestingly, a more focal region, the inferior frontal gyrus (bilaterally), showed both visuo-motor OEM and speech OEM properties, independent of orolingual speech-unrelated movements. Building on the methodological advantages of human invasive electrocorticography, the present findings provide highly precise spatial and temporal information supporting the existence of a modality-independent action representation system in the human brain that is shared between systems for performing, interpreting, and describing actions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Speech interference and transmission on residential balconies with road traffic noise.
Naish, Daniel A; Tan, Andy C C; Nur Demirbilek, F
2013-01-01
Balcony acoustic treatments can mitigate the effects of community road traffic noise. To further investigate, a theoretical study into the effects of balcony acoustic treatment combinations on speech interference and transmission is conducted for various street geometries. Nine different balcony types are investigated using a combined specular and diffuse reflection computer model. Diffusion in the model is calculated using the radiosity technique. The balcony types include a standard balcony with or without a ceiling and with various combinations of parapet, ceiling absorption, and ceiling shield. A total of 70 balcony and street geometrical configurations are analyzed with each balcony type, resulting in 630 scenarios. In each scenario the reverberation time, speech interference level (SIL), and speech transmission index (STI) are calculated. These indicators are compared to determine trends based on the effects of propagation path, inclusion of opposite buildings, and difference with a reference position outside the balcony. The results demonstrate trends in SIL and STI with different balcony types. It is found that an acoustically treated balcony reduces speech interference. A parapet provides the largest improvement, followed by absorption on the ceiling. The largest reductions in speech interference arise when a combination of balcony acoustic treatments is applied.
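For reference, SIL is conventionally computed as the arithmetic mean of the sound pressure levels in the four octave bands centered at 500, 1000, 2000, and 4000 Hz. A minimal sketch of that calculation follows; it uses generic Butterworth octave-band filters rather than the paper's combined specular-diffuse room model, and it assumes a calibrated signal sampled above roughly 12 kHz.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_level_db(x, sr, fc):
    """Level in the octave band centred at fc (dB re full scale;
    re 20 uPa once the signal is calibrated)."""
    sos = butter(4, [fc / np.sqrt(2), fc * np.sqrt(2)],
                 btype="bandpass", fs=sr, output="sos")
    y = sosfilt(sos, x)
    return 10 * np.log10(np.mean(y ** 2) + 1e-20)

def speech_interference_level(x, sr):
    """Arithmetic mean of the 500/1000/2000/4000 Hz octave-band levels."""
    return float(np.mean([band_level_db(x, sr, fc)
                          for fc in (500, 1000, 2000, 4000)]))
```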
Soli, Sigfrid D; Giguère, Christian; Laroche, Chantal; Vaillancourt, Véronique; Dreschler, Wouter A; Rhebergen, Koenraad S; Harkins, Kevin; Ruckstuhl, Mark; Ramulu, Pradeep; Meyers, Lawrence S
The objectives of this study were to (1) identify essential hearing-critical job tasks for public safety and law enforcement personnel; (2) determine the locations and real-world noise environments where these tasks are performed; (3) characterize each noise environment in terms of its impact on the likelihood of effective speech communication, considering the effects of different levels of vocal effort, communication distances, and repetition; and (4) use this characterization to define an objective normative reference for evaluating the ability of individuals to perform essential hearing-critical job tasks in noisy real-world environments. Data from five occupational hearing studies performed over a 17-year period for various public safety agencies were analyzed. In each study, job task analyses by job content experts identified essential hearing-critical tasks and the real-world noise environments where these tasks are performed. These environments were visited, and calibrated recordings of each noise environment were made. The extended speech intelligibility index (ESII) was calculated for each 4-sec interval in each recording. These data, together with the estimated ESII value required for effective speech communication by individuals with normal hearing, allowed the likelihood of effective speech communication in each noise environment to be determined for different levels of vocal effort and communication distances. These likelihoods provide an objective norm-referenced and standardized means of characterizing the predicted impact of real-world noise on the ability to perform essential hearing-critical tasks. A total of 16 noise environments for law enforcement personnel and eight noise environments for corrections personnel were analyzed. Effective speech communication was essential to hearing-critical tasks performed in these environments. Average noise levels ranged from approximately 70 to 87 dBA in law enforcement environments and 64 to 80 dBA in corrections environments. The likelihood of effective speech communication at communication distances of 0.5 and 1 m was often less than 0.50 for normal vocal effort. Likelihood values often increased to 0.80 or more when raised or loud vocal effort was used. Effective speech communication at and beyond 5 m was often unlikely, regardless of vocal effort. ESII modeling of nonstationary real-world noise environments may prove to be an objective means of characterizing their impact on the likelihood of effective speech communication. The normative reference provided by these measures predicts the extent to which hearing impairments that increase the ESII value required for effective speech communication also decrease the likelihood of effective speech communication. These predictions may provide an objective evidence-based link between the essential hearing-critical job task requirements of public safety and law enforcement personnel and ESII-based hearing assessment of individuals who seek to perform these jobs.
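The per-interval ESII computation itself extends the ANSI S3.5 speech intelligibility index to fluctuating noise and is too involved to reproduce here, but the windowing-and-likelihood logic of the study can be sketched as follows; `esii_fn` is a placeholder for a real ESII implementation, and the 0.45 criterion is purely illustrative.

```python
import numpy as np

def communication_likelihood(noise, sr, esii_fn, criterion=0.45):
    """Slice a calibrated noise recording into 4-s intervals, score each
    with an ESII-like index, and return the fraction of intervals meeting
    the criterion for effective speech communication."""
    n = 4 * sr  # 4-second analysis intervals, as in the study
    scores = [esii_fn(noise[i:i + n], sr)
              for i in range(0, len(noise) - n + 1, n)]
    return float(np.mean(np.asarray(scores) >= criterion))
```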
Comparing Measures of Voice Quality From Sustained Phonation and Continuous Speech.
Gerratt, Bruce R; Kreiman, Jody; Garellek, Marc
2016-10-01
The question of what type of utterance (a sustained vowel or continuous speech) is best for voice quality analysis has been extensively studied, but with equivocal results. This study examines whether previously reported differences derive from the articulatory and prosodic factors occurring in continuous speech versus sustained phonation. Speakers with voice disorders sustained vowels and read sentences. Vowel samples were excerpted from the steadiest portion of each vowel in the sentences. In addition to sustained and excerpted vowels, a third set of stimuli was created by shortening sustained vowel productions to match the duration of vowels excerpted from continuous speech. Acoustic measures were made on the stimuli, and listeners judged the severity of vocal quality deviation. Sustained vowels and those extracted from continuous speech contain essentially the same acoustic and perceptual information about vocal quality deviation. Perceived and/or measured differences between continuous speech and sustained vowels derive largely from voice source variability across segmental and prosodic contexts, and not from variations in vocal fold vibration in the quasi-steady portion of the vowels. Approaches to voice quality assessment that use continuous speech samples average across utterances and may not adequately quantify the variability they are intended to assess.
Speech and swallowing disorders in Parkinson disease.
Sapir, Shimon; Ramig, Lorraine; Fox, Cynthia
2008-06-01
To review recent research and clinical studies pertaining to the nature, diagnosis, and treatment of speech and swallowing disorders in Parkinson disease. Although some studies indicate improvement in voice and speech with dopamine therapy and deep brain stimulation of the subthalamic nucleus, others show minimal or adverse effects. Repetitive transcranial magnetic stimulation of the mouth motor cortex and injection of collagen in the vocal folds have preliminary data supporting improvement in phonation in people with Parkinson disease. Treatments focusing on vocal loudness, specifically LSVT LOUD (Lee Silverman Voice Treatment), have been effective for the treatment of speech disorders in Parkinson disease. Changes in brain activity due to LSVT LOUD provide preliminary evidence for neural plasticity. Computer-based technology makes the Lee Silverman Voice Treatment available to a large number of users. A rat model for studying neuropharmacologic effects on vocalization in Parkinson disease has been developed. New diagnostic methods of speech and swallowing are also available as the result of recent studies. Speech rehabilitation with the LSVT LOUD is highly efficacious and scientifically tested. There is a need for more studies to improve understanding, diagnosis, prevention, and treatment of speech and swallowing disorders in Parkinson disease.
Smiljanić, Rajka; Bradlow, Ann R.
2011-01-01
This study investigated how native language background interacts with speaking style adaptations in determining levels of speech intelligibility. The aim was to explore whether native and high proficiency non-native listeners benefit similarly from native and non-native clear speech adjustments. The sentence-in-noise perception results revealed that fluent non-native listeners gained a large clear speech benefit from native clear speech modifications. Furthermore, proficient non-native talkers in this study implemented conversational-to-clear speaking style modifications in their second language (L2) that resulted in significant intelligibility gain for both native and non-native listeners. The results of the accentedness ratings obtained for native and non-native conversational and clear speech sentences showed that while intelligibility was improved, the presence of foreign accent remained constant in both speaking styles. This suggests that objective intelligibility and subjective accentedness are two independent dimensions of non-native speech. Overall, these results provide strong evidence that greater experience in L2 processing leads to improved intelligibility in both production and perception domains. These results also demonstrated that speaking style adaptations along with less signal distortion can contribute significantly towards successful native and non-native interactions. PMID:22225056
Bohm, Lauren A; Nelson, Marc E; Driver, Lynn E; Green, Glenn E
2010-12-01
To determine the importance of prelinguistic babbling by studying patterns of speech and language development after cricotracheal resection in aphonic children. Retrospective review of seven previously aphonic children who underwent cricotracheal resection by our pediatric thoracic airway team. The analyzed variables include age, sex, comorbidity, grade of stenosis, length of resected trachea, and communication methods. Data regarding the children's pre- and postsurgical communication methods, along with their utilization of speech therapy services, were obtained via speech-language pathology evaluations, clinical observations, and a standardized telephone survey supplemented by parental documentation. Postsurgical voice quality was assessed using the Pediatric Voice Outcomes Survey. All seven subjects underwent tracheostomy prior to 2 months of age when corrected for prematurity. The subjects remained aphonic for the entire duration of cannulation. Following cricotracheal resection, they experienced an initial delay in speech acquisition. Vegetative functions were the first laryngeal sounds to emerge. Initially, the children were only able to produce these sounds reflexively, but they subsequently gained voluntary control over these laryngeal functions. All subjects underwent an identifiable stage of canonical babbling that often occurred concomitantly with vocalizations. This was followed by the emergence of true speech. The initial delay in speech acquisition observed following decannulation, along with the presence of a postsurgical canonical stage in all study subjects, supports the hypothesis that babbling is necessary for speech and language development. Furthermore, the presence of babbling is universally evident regardless of the age at which speech develops. Finally, there is no demonstrable correlation between preoperative sign language and rate of speech development. Copyright © 2010 The American Laryngological, Rhinological, and Otological Society, Inc.
Mantokoudis, Georgios; Dähler, Claudia; Dubach, Patrick; Kompis, Martin; Caversaccio, Marco D.; Senn, Pascal
2013-01-01
Objective To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Methods Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280×720, 640×480, 320×240, 160×120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0–500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. Results Higher frame rate (>7 fps), higher camera resolution (>640×480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI-users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). Conclusion Webcameras have the potential to improve telecommunication of hearing-impaired individuals. PMID:23359119
Carrigg, Bronwyn; Parry, Louise; Baker, Elise; Shriberg, Lawrence D; Ballard, Kirrie J
2016-10-05
This study describes the phenotype in a large family with a strong, multigenerational history of severe speech sound disorder (SSD) persisting into adolescence and adulthood in approximately half of the cases. The aims were to determine whether a core phenotype, broader than speech, separated persistent from resolved SSD cases, and to ascertain the uniqueness of the phenotype relative to published cases. Eleven members of the PM family (9-55 years) were assessed across cognitive, language, literacy, speech, phonological processing, numeracy, and motor domains. Between-group comparisons were made using the Mann-Whitney U test (p < 0.01). Participant performances were compared to normative data using standardized tests and to the limited published data on persistent SSD phenotypes. Significant group differences were evident on multiple speech, language, literacy, phonological processing, and verbal intellect measures, without any overlapping scores. Persistent cases performed within the impaired range on multiple measures. Phonological memory impairment and subtle literacy weakness were present in resolved SSD cases. A core phenotype distinguished persistent from resolved SSD cases, characterized by a multiple verbal trait disorder including Childhood Apraxia of Speech. Several phenotypic features differentiated the persistent SSD phenotype in the PM family from those in the few previously reported studies of large families with SSD, including the absence of comorbid dysarthria and of marked orofacial apraxia. This study highlights how comprehensive phenotyping can advance the behavioral study of disorders, in addition to forming a solid basis for future genetic and neural studies. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Priming of Non-Speech Vocalizations in Male Adults: The Influence of the Speaker's Gender
ERIC Educational Resources Information Center
Fecteau, Shirley; Armony, Jorge L.; Joanette, Yves; Belin, Pascal
2004-01-01
Previous research reported a priming effect for voices. However, the type of information primed is still largely unknown. In this study, we examined the influence of speaker's gender and emotional category of the stimulus on priming of non-speech vocalizations in 10 male participants, who performed a gender identification task. We found a…
The Haskins Optically Corrected Ultrasound System
ERIC Educational Resources Information Center
Whalen, D. H.; Iskarous, Khalil; Tiede, Mark K.; Ostry, David J.; Lehnert-LeHouillier, Heike; Vatikiotis-Bateson, Eric; Hailey, Donald S.
2005-01-01
The tongue is critical in the production of speech, yet its nature has made it difficult to measure. Not only does its ability to attain complex shapes make it difficult to track, it is also largely hidden from view during speech. The present article describes a new combination of optical tracking and ultrasound imaging that allows for a…
ERIC Educational Resources Information Center
Radford, Julie
2010-01-01
Repair practices used by teachers who work with children with specific speech and language difficulties (SSLDs) have hitherto remained largely unexplored. Such classrooms therefore offer a new context for researching repairs and considering how they compare with non-SSLD interactions. Repair trajectories are of interest because they are dialogic…
ERIC Educational Resources Information Center
Chasaide, Ailbhe Ni; Davis, Eugene
The data processing system used at Trinity College's Centre for Language and Communication Studies (Ireland) enables computer-automated collection and analysis of phonetic data and has many advantages for research on speech production. The system allows accurate handling of large quantities of data, eliminates many of the limitations of manual…
Bivariate Genetic Analyses of Stuttering and Nonfluency in a Large Sample of 5-Year-Old Twins
ERIC Educational Resources Information Center
van Beijsterveldt, Catharina Eugenie Maria; Felsenfeld, Susan; Boomsma, Dorret Irene
2010-01-01
Purpose: Behavioral genetic studies of speech fluency have focused on participants who present with clinical stuttering. Knowledge about genetic influences on the development and regulation of normal speech fluency is limited. The primary aims of this study were to identify the heritability of stuttering and high nonfluency and to assess the…
Remote Capture of Human Voice Acoustical Data by Telephone: A Methods Study
ERIC Educational Resources Information Center
Cannizzaro, Michael S.; Reilly, Nicole; Mundt, James C.; Snyder, Peter J.
2005-01-01
In this pilot study we sought to determine the reliability and validity of collecting speech and voice acoustical data via telephone transmission for possible future use in large clinical trials. Simultaneous recordings of each participant's speech and voice were made at the point of participation, the local recording (LR), and over a telephone…
Calandruccio, Lauren; Bradlow, Ann R; Dhar, Sumitrajit
2014-04-01
Masking release for an English sentence-recognition task in the presence of foreign-accented English speech compared with native-accented English speech was reported in Calandruccio et al. (2010a). The masking release appeared to increase as masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers were influencing the significant differences observed. The purpose of the current experiment was to minimize spectral differences between speech maskers to determine how various amounts of linguistic information within competing speech affect masking release. A mixed-model design with within-subject (four two-talker speech maskers) and between-subject (listener group) factors was used. Speech maskers included native-accented English speech and high-intelligibility, moderate-intelligibility, and low-intelligibility Mandarin-accented English. Normalizing the long-term average speech spectra of the maskers to each other minimized spectral differences between the masker conditions. Three listener groups were tested: monolingual English speakers with normal hearing, nonnative English speakers with normal hearing, and monolingual English speakers with hearing loss. The nonnative English speakers were from various native language backgrounds, not including Mandarin (or any other Chinese dialect). Listeners with hearing loss had symmetric, mild sloping to moderate sensorineural hearing loss. Listeners were asked to repeat back sentences presented in the presence of the four different two-talker speech maskers. Responses were scored based on the key words within the sentences (100 key words per masker condition). A mixed-model regression analysis was used to analyze the difference in performance scores between masker conditions and listener groups. Monolingual English speakers with normal hearing benefited when the competing speech signal was foreign accented compared with native accented, allowing for improved speech recognition. The various levels of intelligibility across the foreign-accented speech maskers did not influence results. Neither the nonnative English-speaking listeners with normal hearing nor the monolingual English speakers with hearing loss benefited from masking release when the masker was changed from native-accented to foreign-accented English. Slight differences between the target and the masker speech allowed monolingual English speakers with normal hearing to improve their recognition of native-accented English, even when the competing speech was highly intelligible. Further research is needed to determine which modifications within the competing speech signal caused the Mandarin-accented English to be less effective with respect to masking. Determining the influences within the competing speech that make it less effective as a masker, or determining why monolingual normal-hearing listeners can take advantage of these differences, could help improve speech recognition for those with hearing loss in the future. American Academy of Audiology.
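Normalizing maskers to a common long-term average speech spectrum (LTASS), as described above, can be sketched generically: estimate both long-term spectra, then filter one masker with the gain function that maps its spectrum onto the other's. This is an assumed implementation, not the authors' code; the window length and gain limit are illustrative.

```python
import numpy as np
from scipy.signal import welch, firwin2, lfilter

def match_ltass(masker, target, sr, numtaps=513):
    """Filter `masker` so its long-term average spectrum approximates
    that of `target`, removing gross spectral differences."""
    f, p_m = welch(masker, fs=sr, nperseg=2048)
    _, p_t = welch(target, fs=sr, nperseg=2048)
    gain = np.sqrt(p_t / (p_m + 1e-20))
    gain = np.clip(gain, 10 ** (-30 / 20), 10 ** (30 / 20))  # cap at +/-30 dB
    fir = firwin2(numtaps, f / (sr / 2), gain)  # freqs normalized to Nyquist
    return lfilter(fir, [1.0], masker)
```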
Speaking legibly: Qualitative perceptions of altered voice among oral tongue cancer survivors
Philiponis, Genevieve; Kagan, Sarah H.
2015-01-01
Objective: Treatment for oral tongue cancer poses unique challenges to restoring and maintaining personally acceptable, intelligible speech. Methods: We report how oral tongue cancer survivors describe their speech after treatment, in a qualitative descriptive approach using constant comparative technique to complete a focal analysis of interview data from a larger grounded theory study of oral tongue cancer survivorship. Interviews were completed with 16 tongue cancer survivors 3 months to 12 years postdiagnosis, with stage I-IV disease, treated with surgery alone, surgery and radiotherapy, or chemoradiation. All interview data from the main study were analyzed for themes describing survivors' perceptions of their speech. Results: Actual speech impairments varied among survivors. None experienced severe impairments that inhibited their daily lives. However, all expressed some level of concern about speech. Concerns about altered speech began when survivors heard their treatment plans and continued through survivorship without being fully resolved. The overarching theme, maintaining a pattern and character of speech acceptable to the survivor, was termed "speaking legibly" after one survivor's vivid in vivo statement. Speaking legibly integrates the sub-themes of "fears of sounding unusual", "learning to talk again", "problems and adjustments", and "social impact". Conclusions: Clinical and scientific efforts to further understand and address concerns about speech, personal presentation, and identity among those diagnosed with oral tongue cancer are important to improving care processes and the patient-centered experience. PMID:27981121
NASA Astrophysics Data System (ADS)
Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan
2016-05-01
With the increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and a Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. The performance and reliability of the gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of the experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on its high classification accuracy and the minimal training required to perform gesture commands.
Directive Speech Act of Imamu in Katoba Discourse of Muna Ethnic
NASA Astrophysics Data System (ADS)
Ardianto, Ardianto; Hadirman, Hardiman
2018-05-01
One of the traditions of the Muna ethnic group is the katoba ritual. The katoba ritual is a tradition grounded in local knowledge whose existence has been maintained for generations to the present day. It is a ritual for becoming a Muslim, expressing repentance, and forming the character of a child (male or female, 6-11 years old) who will enter adulthood, and it is conducted using directive speech. In the katoba ritual, the child undergoing katoba is introduced to the teachings of Islam, to customs, to manners toward parents and siblings, and to behavior toward others, all of which the child is expected to practice in daily life. This study aims to describe and explain the directive speech acts of the imamu in the katoba discourse of the Muna ethnic group. The research uses a qualitative approach. Data were collected in a natural setting, namely katoba speech discourses, and consist of two types: (a) speech data and (b) field note data. Data were analyzed using an interactive model with four stages: (1) data collection, (2) data reduction, (3) data display, and (4) conclusion and verification. The results show, first, that the directive speech acts take declarative and imperative forms; second, that their functions include teaching, explaining, suggesting, and expecting; and third, that their strategies include both direct and indirect strategies. The results of this study could be applied in the development of character-learning materials at schools and can also form part of the local content (mulok) curriculum.
Mathai, Jijo Pottackal; Appu, Sabarish
2015-01-01
Auditory neuropathy spectrum disorder (ANSD) is a form of sensorineural hearing loss that causes severe deficits in speech perception. The perceptual problems of individuals with ANSD have been attributed to their temporal processing impairment rather than to reduced audibility, which makes their rehabilitation with hearing aids difficult. Although hearing aids can restore audibility, compression circuits in a hearing aid might distort the temporal modulations of speech, causing poor aided performance. Therefore, hearing aid settings that preserve the temporal modulations of speech might be an effective way to improve speech perception in ANSD. The purpose of the study was to investigate the perception of hearing aid-processed speech in individuals with late-onset ANSD. A repeated measures design was used to study the effect of various compression time settings on speech perception and perceived quality. Seventeen individuals with late-onset ANSD within the age range of 20-35 yr participated in the study. The word recognition scores (WRSs) and quality judgments of phonemically balanced words, processed using four different compression settings of a hearing aid (slow, medium, fast, and linear), were evaluated. The modulation spectra of the hearing aid-processed stimuli were estimated to probe the effect of amplification on the temporal envelope of speech. Repeated measures analysis of variance and post hoc Bonferroni pairwise comparisons were used to analyze word recognition performance and quality judgments. The comparison between unprocessed and all four hearing aid-processed stimuli showed significantly higher perception for the unprocessed stimuli. Even though perception of words processed using the slow compression time setting was significantly higher than with the fast setting, the difference was only 4%. In addition, there were no significant differences in perception between any of the other hearing aid-processed stimuli. Analysis of the temporal envelope of the hearing aid-processed stimuli revealed minimal changes across the four hearing aid settings. In terms of quality, the highest number of individuals preferred stimuli processed using the slow compression time setting, followed by those who preferred the medium setting; none of the individuals preferred the fast compression time setting. Analysis of quality judgments showed that the slow, medium, and linear settings received significantly higher preference scores than the fast compression setting. Individuals with ANSD showed no marked difference in perception of speech processed using the four different hearing aid settings. However, significantly higher preference, in terms of quality, was found for stimuli processed using the slow, medium, and linear settings over the fast one. Therefore, whenever hearing aids are recommended for ANSD, those having slow compression time settings or linear amplification may be chosen over fast (syllabic) compression. In addition, WRSs obtained using hearing aid-processed stimuli were remarkably poorer than for unprocessed stimuli. This shows that processing of speech through hearing aids might have caused a large reduction in performance in individuals with ANSD. However, further evaluation is needed using individually programmed hearing aids rather than hearing aid-processed stimuli. American Academy of Audiology.
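Estimating the modulation spectrum of a processed stimulus, as in the study's temporal-envelope analysis, is commonly done from the Hilbert envelope. The sketch below is a generic version, not the authors' procedure; the envelope rate and 64 Hz analysis range are assumptions.

```python
import numpy as np
from scipy.signal import hilbert, welch

def modulation_spectrum(x, sr, env_sr=200, fmax=64.0):
    """Power spectrum of the temporal envelope over the low modulation
    rates where compression most affects speech envelopes."""
    env = np.abs(hilbert(x))
    step = int(sr / env_sr)
    env = env[::step]            # crude decimation; a proper version
    env = env - np.mean(env)     # would low-pass filter first
    f, p = welch(env, fs=sr / step, nperseg=min(len(env), 1024))
    keep = f <= fmax
    return f[keep], p[keep]
```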
Revealing the dual streams of speech processing.
Fridriksson, Julius; Yourganov, Grigori; Bonilha, Leonardo; Basilakos, Alexandra; Den Ouden, Dirk-Bart; Rorden, Christopher
2016-12-27
Several dual-route models of human speech processing have been proposed, suggesting a large-scale anatomical division between cortical regions that support motor-phonological aspects vs. lexical-semantic aspects of speech processing. However, to date, there is no complete agreement on which areas subserve each route or on the nature of the interactions across these routes that enable human speech processing. Relying on an extensive behavioral and neuroimaging assessment of a large sample of stroke survivors, we used a data-driven approach, applying principal component analysis to lesion-symptom mapping, to identify brain regions crucial for performance on clusters of behavioral tasks without a priori separation into task types. Distinct anatomical boundaries were revealed between a dorsal frontoparietal stream and a ventral temporal-frontal stream associated with separate components. Collapsing over the tasks primarily supported by these streams, we characterize the dorsal stream as a form-to-articulation pathway and the ventral stream as a form-to-meaning pathway. This characterization of the division in the data reflects both the overlap between tasks supported by the two streams and the observation that there is a bias for phonological production tasks to be supported by the dorsal stream and lexical-semantic comprehension tasks by the ventral stream. As such, our findings show a division between two processing routes that underlie human speech processing and provide an empirical foundation for studying potential computational differences that distinguish between the two routes.
Masking effects of speech and music: does the masker's hierarchical structure matter?
Shi, Lu-Feng; Law, Yvonne
2010-04-01
Speech and music are time-varying signals organized by parallel hierarchical rules. Through a series of four experiments, this study compared the masking effects of single-talker speech and instrumental music on speech perception while manipulating the complexity of hierarchical and temporal structures of the maskers. Listeners' word recognition was found to be similar between hierarchically intact and disrupted speech or classical music maskers (Experiment 1). When sentences served as the signal, significantly greater masking effects were observed with disrupted than intact speech or classical music maskers (Experiment 2), although not with jazz or serial music maskers, which differed from the classical music masker in their hierarchical structures (Experiment 3). Removing the classical music masker's temporal dynamics or partially restoring it affected listeners' sentence recognition; yet, differences in performance between intact and disrupted maskers remained robust (Experiment 4). Hence, the effect of structural expectancy was largely present across maskers when comparing them before and after their hierarchical structure was purposefully disrupted. This effect seemed to lend support to the auditory stream segregation theory.
Lavigne, Katie M; Woodward, Todd S
2018-04-01
Hypercoupling of activity in speech-perception-specific brain networks has been proposed to play a role in the generation of auditory-verbal hallucinations (AVHs) in schizophrenia; however, it is unclear whether this hypercoupling extends to nonverbal auditory perception. We investigated this by comparing schizophrenia patients with and without AVHs, and healthy controls, on task-based functional magnetic resonance imaging (fMRI) data combining verbal speech perception (SP), inner verbal thought generation (VTG), and nonverbal auditory oddball detection (AO). Data from two previously published fMRI studies were simultaneously analyzed using group constrained principal component analysis for fMRI (group fMRI-CPCA), which allowed for comparison of task-related functional brain networks across groups and tasks while holding the brain networks under study constant, leading to determination of the degree to which networks are common to verbal and nonverbal perception conditions, and which show coordinated hyperactivity in hallucinations. Three functional brain networks emerged: (a) auditory-motor, (b) language processing, and (c) default-mode (DMN) networks. Combining the AO and sentence tasks allowed the auditory-motor and language networks to separately emerge, whereas they were aggregated when individual tasks were analyzed. AVH patients showed greater coordinated activity (deactivity for DMN regions) than non-AVH patients during SP in all networks, but this did not extend to VTG or AO. This suggests that the hypercoupling in AVH patients in speech-perception-related brain networks is specific to perceived speech, and does not extend to perceived nonspeech or inner verbal thought generation. © 2017 Wiley Periodicals, Inc.
Comparison of upper and lower lip muscle activity between stutterers and fluent speakers.
de Felício, Cláudia Maria; Freitas, Rosana Luiza Rodrigues Gomes; Vitti, Mathias; Regalo, Simone Cecilio Hallak
2007-08-01
There is a widespread clinical view that stuttering is associated with high levels of muscle activity. The purpose of this research was to compare stutterers and fluent speakers with respect to the electromyographic activity of the upper and lower lip muscles. Ten individuals who stutter and 10 fluent speakers (control group), paired by gender and age, were studied (mean age: 13.4 years). Groups were defined by the speech sample analysis of the ABFW Language Test. A K6-I EMG system (Myo-tronics Co., Seattle, WA, USA) with double disposable silver electrodes (Duotrodes, Myo-tronics Co., Seattle, WA) was used to analyze lip muscle activity. The clinical conditions investigated were movements during speech, orofacial non-speech tasks, and rest. Electromyographic data were normalized by lip-pursing activity. The non-parametric Mann-Whitney test was used for the comparison of speech fluency profiles, and the Student t-test for independent samples for group comparisons of the electromyographic data. There was a statistically significant difference between groups regarding speech fluency profile and upper lip activity in the following conditions: lip lateralization to the right and to the left, and rest before exercises (P<0.05). There was no significant difference between groups regarding lower lip activity (P>0.05). The EMG activity of the upper lip muscle in the group with stuttering was significantly lower than in the control group in some of the clinical conditions analyzed. Thus, the subjects who stutter did not present higher levels of lip muscle activity than fluent speakers.
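The normalization step, expressing task EMG relative to lip-pursing activity, amounts to a simple RMS ratio; the sketch below is an assumption about the arithmetic, since the abstract does not give windowing or rectification settings.

```python
import numpy as np

def normalized_emg(task_emg, pursing_emg):
    """Task EMG as a percentage of the lip-pursing reference activity."""
    def rms(x):
        return np.sqrt(np.mean(np.square(x)))
    return 100.0 * rms(task_emg) / rms(pursing_emg)
```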
Perception and analysis of Spanish accents in English speech
NASA Astrophysics Data System (ADS)
Chism, Cori; Lass, Norman
2002-05-01
The purpose of the present study was to determine what relates most closely to the degree of perceived foreign accent in the English speech of native Spanish speakers: intonation, vowel length, stress, voice onset time (VOT), or segmental accuracy. Nineteen native English-speaking listeners rated speech samples from 7 native English speakers and 15 native Spanish speakers for comprehensibility and degree of foreign accent. The speech samples were analyzed spectrographically and perceptually to obtain numerical values for each variable. Correlation coefficients were computed to determine the relationship between these values and the average foreign accent scores. Results showed that the average foreign accent scores were statistically significantly correlated with three variables: the length of stressed vowels (r=-0.48, p=0.05), voice onset time (r=-0.62, p=0.01), and segmental accuracy (r=0.92, p=0.001). Implications of these findings and suggestions for future research are discussed.
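The reported values are plain Pearson correlations between per-speaker measurements and mean accent ratings; a minimal sketch with hypothetical array names:

```python
from scipy.stats import pearsonr

# Hypothetical per-speaker arrays (one value per talker):
# accent_scores - mean perceived foreign accent ratings
# vot_ms        - mean voice onset time in milliseconds
def report_correlation(accent_scores, vot_ms):
    r, p = pearsonr(accent_scores, vot_ms)
    return f"r = {r:.2f}, p = {p:.3f}"
```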
Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie
2003-05-08
We investigated the existence of a cross-modal sensory gating reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e., congruent), in comparison with the same event-related potential component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension.
"My Mind Is Doing It All": No "Brake" to Stop Speech Generation in Jargon Aphasia.
Robinson, Gail A; Butterworth, Brian; Cipolotti, Lisa
2015-12-01
To study whether pressure of speech in jargon aphasia arises from disturbances to core language processes, executive processes, or conceptual preparation at their intersection. Conceptual preparation mechanisms for speech have not been well studied. Several mechanisms have been proposed for jargon aphasia, a fluent, well-articulated, logorrheic propositional speech that is almost incomprehensible. We studied the vast quantity of jargon speech produced by patient J.A., who had suffered an infarct after the clipping of a middle cerebral artery aneurysm. We gave J.A. baseline cognitive tests and experimental word- and sentence-generation tasks that we had designed for patients with dynamic aphasia, a severely reduced but otherwise fairly normal propositional speech thought to result from deficits in conceptual preparation. J.A. had cognitive dysfunction, including executive difficulties, and a language profile characterized by poor repetition and naming in the context of relatively intact single-word comprehension. J.A.'s spontaneous speech was fluent but jargon. He had no difficulty generating sentences; in contrast to dynamic aphasia, his sentences were largely meaningless and not significantly affected by stimulus constraint level. This patient with jargon aphasia highlights that voluminous speech output can arise from disturbances of both language and executive functions. Our previous studies identified three conceptual preparation mechanisms for speech: generation of novel thoughts, their sequencing, and selection. This study raises the possibility that a "brake" to stop message generation may be a fourth conceptual preparation mechanism, behind the pressure of speech characteristic of jargon aphasia.
Phonological Processes in the Speech of Jordanian Arabic Children with Cleft Lip and/or Palate
ERIC Educational Resources Information Center
Al-Tamimi, Feda Y.; Owais, Arwa I.; Khabour, Omar F.; Khamaiseh, Zaidan A.
2011-01-01
The controlled and free speech of 15 Jordanian male and female children with cleft lip and/or palate was analyzed to account for the different phonological processes exhibited. Study participants were divided into three main age groups, 4 years 2 months to 4 years 7 months, 5 years 3 months to 5 years 6 months, and 6 years 4 months to 6 years 6…
Communication variations related to leader personality
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.; Palmer, Mark T.; Veinott, Elizabeth S.
1991-01-01
While communication and captain personality type have separately been shown to relate to overall crew performance, this study attempts to establish a link between communication and personality among 12 crews whose captains represent three pre-selected personality profiles (IE+, I-, Ec-). Results from analyzing transcribed speech from one leg of a full-mission simulation study so far indicate several discriminating patterns involving ratios of total initiating speech (captain to crew members), in particular commands, questions, and observations.
ERIC Educational Resources Information Center
Kinzel, Paul F.
The spontaneous speech of a six-year-old bilingual child was analyzed for this study. The child has lived in the United States, and English is her primary language, but her parents speak only French in the home, and she has spent several months in France during three visits there. The data used in this study were collected in the child's home by her…
NASA Technical Reports Server (NTRS)
Simpson, C. A.
1985-01-01
In the present study, pairs of pilots responded to aircraft warning classification tasks using an isolated-word, speaker-dependent speech recognition system. Induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed, and recognition errors were recorded by type for an isolated-word speaker-dependent system and by an offline technique for a connected-word speaker-dependent system. Errors increased with task loading for the isolated-word system, but there was no such effect of task loading for the connected-word system.
Sundberg, J; Elliot, N; Gramming, P; Nord, L
1993-09-01
According to previous investigations, subglottal pressure in singing is adapted not only to loudness but also to fundamental frequency. Here the significance of musical expression for subglottal pressure is analyzed in terms of alternations between stressed and unstressed bar positions. Esophageal pressure was recorded together with the audio signal in a male and a female professional singer using a paranasally introduced pressure transducer while the subjects performed vocal exercises. The subjects also gave examples of actors' speech by reading poetry aloud. The results show that subglottal pressure can be used for stressing the first beat in bars and also for increasing the sound level in voiced consonants in actors' speech.
High-frequency energy in singing and speech
NASA Astrophysics Data System (ADS)
Monson, Brian Bruce
While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production. In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments wherein listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners were also subjected to gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. The combination of these experiments has revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.
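For a concrete sense of the band-energy analysis involved, the sketch below computes the level of energy above 5 kHz relative to a recording's total energy. This is an illustration only, not the study's procedure; the file name and band edge are assumptions.

```python
# Sketch: relative high-frequency energy (HFE) level of a recording,
# taking HFE as energy above ~5 kHz. Illustrative assumptions:
# file name and 5 kHz band edge.
import numpy as np
from scipy.io import wavfile

def relative_hfe_db(path, cutoff_hz=5000.0):
    fs, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:                      # mix to mono if needed
        x = x.mean(axis=1)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    hfe = spectrum[freqs >= cutoff_hz].sum()
    total = spectrum.sum()
    # relative HFE level: HFE re total energy, in dB
    return 10.0 * np.log10(hfe / total)

print(relative_hfe_db("talker01.wav"))  # hypothetical file
```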
Control of interior surface materials for speech privacy in high-speed train cabins.
Jang, H S; Lim, H; Jeon, J Y
2017-05-01
The effect of interior materials with various absorption coefficients on speech privacy was investigated in a 1:10 scale model of one high-speed train cabin geometry. The speech transmission index (STI) and privacy distance (r_P) were measured in the train cabin to quantify speech privacy. Measurement cases were selected for the ceiling, sidewall, and front and back walls and were classified as high-, medium- and low-absorption coefficient cases. Interior materials with high absorption coefficients yielded a low r_P, and the ceiling had the largest impact on both the STI and r_P among the interior elements. Combinations of the three cases were measured, and the maximum reduction in r_P by the absorptive surfaces was 2.4 m, which exceeds the space between two rows of chairs in the high-speed train. Additionally, the contribution of the interior elements to speech privacy was analyzed using recorded impulse responses and a multiple regression model for r_P using the equivalent absorption area. The analysis confirmed that the ceiling was the most important interior element for improving speech privacy. These results can be used to find the relative decrease in r_P in the acoustic design of interior materials to improve speech privacy in train cabins. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
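For illustration, a privacy distance can be read off measured STI-versus-distance data by interpolation. The sketch below assumes the ISO 3382-3 convention that r_P is the distance at which STI falls to 0.20; the paper's exact criterion may differ, and the data here are made up.

```python
# Sketch: derive a privacy distance r_P from STI measured at several
# source-receiver distances, assuming r_P is where STI falls to 0.20
# (ISO 3382-3 convention; the study may define it differently).
import numpy as np

distances = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # m (hypothetical)
sti       = np.array([0.62, 0.48, 0.37, 0.28, 0.21, 0.15])

# STI decreases with distance, so interpolate distance as a function
# of increasing STI (reversed arrays) to find the 0.20 crossing.
r_p = np.interp(0.20, sti[::-1], distances[::-1])
print(f"r_P = {r_p:.2f} m")
```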
Chang, Yen-Liang; Hung, Chao-Ho; Chen, Po-Yueh; Chen, Wei-Chang; Hung, Shih-Han
2015-10-01
Acoustic analysis is often used in speech evaluation but seldom for the evaluation of oral prostheses designed for reconstruction of surgical defects. This study aimed to introduce the application of acoustic analysis for patients with velopharyngeal insufficiency (VPI) due to oral surgery who were rehabilitated with oral speech-aid prostheses. The acoustic features of sustained vowel sounds from two patients with VPI, recorded before and after prosthetic rehabilitation, were analyzed and compared using the acoustic analysis software Praat. There were significant differences in the octave spectrum of sustained vowel speech sounds between pre- and postprosthetic rehabilitation. Acoustic measurements of sustained vowels for patients before and after prosthetic treatment showed no significant differences for the parameters of fundamental frequency, jitter, shimmer, noise-to-harmonics ratio, formant frequency, F1 bandwidth, and band energy difference. The decrease in objective nasality perceptions correlated very well with the decrease in dips of the spectra for the male patient with a higher speech bulb height. Acoustic analysis may be a potential technique for evaluating the function of oral speech-aid prostheses, which eliminate dysfunction due to the surgical defect and contribute to a high percentage of intelligible speech. Octave spectrum analysis may also be a valuable tool for detecting changes in the nasality characteristics of the voice during prosthetic treatment of VPI. Copyright © 2014. Published by Elsevier B.V.
Loudness and pitch of Kunqu opera.
Dong, Li; Sundberg, Johan; Kong, Jiangping
2014-01-01
Equivalent sound level (Leq), sound pressure level (SPL), and fundamental frequency (F0) are analyzed in each of five Kunqu Opera roles: Young girl, Young woman, Young man, Old man, and Colorful face. Their pitch ranges are similar to those of some Western opera singers (alto, alto, tenor, baritone, and baritone, respectively). Differences among tasks, conditions (stage speech, singing, and reading lyrics), singers, and roles are examined. For all singers, the Leq of stage speech and singing was considerably higher than that of conversational speech. Inter-role differences of Leq among tasks and singers were larger than the intra-role differences. Time-domain variation of SPL differed between roles in both singing and stage speech. In singing, as compared with stage speech, the SPL distribution was more concentrated and the variation of SPL with time was smaller. With regard to gender and age, male roles had a higher mean Leq and a lower mean F0 (MF0) than female roles. Female singers showed a wider F0 distribution for singing than for stage speech, whereas the opposite was true for male singers. The Leq of stage speech was higher than in singing for young personages. Among female roles the younger personages showed higher Leq, whereas among male roles the older personages had higher Leq. The roles performed with higher Leq tended to be sung at a lower MF0. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ravishankar, C., Hughes Network Systems, Germantown, MD
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk, and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk, and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of a digital link is essentially independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech also became extremely important from a service provision point of view. Modern requirements have introduced the need for robust, flexible, and secure services that can carry a multitude of signal types (such as voice, data, and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term, often used interchangeably with speech coding, is voice coding; it is more generic in the sense that the coding techniques are equally applicable to any voice signal, whether or not it carries intelligible information, as the term speech implies. Other commonly used terms are speech compression and voice compression, since the fundamental idea behind speech coding is to reduce (compress) the transmission rate (or, equivalently, the bandwidth) and/or reduce storage requirements. In this document the terms speech and voice are used interchangeably.
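As a concrete instance of the direct waveform coding mentioned above, here is a sketch of mu-law companding (the G.711 characteristic, mu = 255) followed by uniform quantization; it is a minimal illustration, not a full codec.

```python
# Sketch: mu-law companding (G.711, mu = 255), a classic example of
# direct waveform coding: compress amplitudes so that quiet samples
# get finer quantization steps, then quantize uniformly.
import numpy as np

def mu_law_encode(x, mu=255.0):
    # x in [-1, 1]
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_decode(y, mu=255.0):
    # exact inverse of the companding curve
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

x = np.linspace(-1.0, 1.0, 5)
y = mu_law_encode(x)
q = np.round((y + 1) / 2 * 255) / 255 * 2 - 1   # 8-bit uniform quantizer
print(mu_law_decode(q))                          # reconstructed samples
```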
The Timing and Effort of Lexical Access in Natural and Degraded Speech
Wagner, Anita E.; Toffanin, Paolo; Başkent, Deniz
2016-01-01
Understanding speech is effortless in ideal situations, and although adverse conditions, such as those caused by hearing impairment, often render it an effortful task, they do not necessarily suspend speech comprehension. A prime example of this is speech perception by cochlear implant users, whose hearing prostheses transmit speech as a significantly degraded signal. It is as yet unknown how mechanisms of speech processing deal with such degraded signals, and whether they are affected by effortful processing of speech. This paper compares the automatic process of lexical competition between natural and degraded speech, and combines gaze fixations, which capture the course of lexical disambiguation, with pupillometry, which quantifies the mental effort involved in processing speech. Listeners' ocular responses were recorded during disambiguation of lexical embeddings with matching and mismatching durational cues. Durational cues were selected because of their substantial role in enabling listeners to quickly limit the number of lexical candidates for lexical access in natural speech. Results showed that lexical competition increased mental effort in processing natural stimuli, in particular in the presence of mismatching cues. Signal degradation reduced listeners' ability to quickly integrate durational cues in lexical selection, and delayed and prolonged lexical competition. The effort of processing degraded speech was increased overall, and because it had its sources at the pre-lexical level, this effect can be attributed to listening to degraded speech rather than to lexical disambiguation. In sum, the course of lexical competition was largely comparable for natural and degraded speech, but showed crucial shifts in timing and different sources of increased mental effort. We argue that well-timed progress of information from sensory to pre-lexical and lexical stages of processing, which is the result of perceptual adaptation during speech development, is the reason why in ideal situations speech perception is an undemanding task. Degradation of the signal or the receiver channel can quickly bring this well-adjusted timing out of balance and lead to an increase in mental effort. Incomplete and effortful processing at the early pre-lexical stages has consequences for lexical processing, as it adds uncertainty to the forming and revising of lexical hypotheses. PMID:27065901
Speech processing in children with functional articulation disorders.
Gósy, Mária; Horváth, Viktória
2015-03-01
This study explored auditory speech processing and comprehension abilities in 5-8-year-old monolingual Hungarian children with functional articulation disorders (FADs) and their typically developing peers. Our main hypothesis was that children with FAD would show co-existing auditory speech processing disorders, with different levels of these skills depending on the nature of the receptive processes. The tasks included (i) sentence and non-word repetitions, (ii) non-word discrimination and (iii) sentence and story comprehension. Results suggest that the auditory speech processing of children with FAD is underdeveloped compared with that of typically developing children, and largely varies across task types. In addition, there are differences between children with FAD and controls in all age groups from 5 to 8 years. Our results have several clinical implications.
[The endpoint detection of cough signal in continuous speech].
Yang, Guoqing; Mo, Hongqiang; Li, Wen; Lian, Lianfang; Zheng, Zeguang
2010-06-01
The endpoint detection of cough signals in continuous speech was researched in order to improve the efficiency and accuracy of manual or computer-based automatic recognition. First, the short-time zero-crossing rate (ZCR) is used to identify suspected coughs, and a threshold for short-time energy is derived from the acoustic characteristics of coughing. The short-time energy is then combined with the short-time ZCR to implement endpoint detection of coughs in continuous speech. To evaluate the method, the true number of coughs in each recording was first identified by two experienced doctors using a graphical user interface (GUI). Second, the recordings were analyzed by the automatic endpoint detection program in MATLAB 7.0. Comparison of the two results showed that the rate of undetected coughs was 2.18% and that 98.13% of noise, silence, and speech was removed. The method of setting the short-time energy threshold is robust. The endpoint detection program removes most speech and noise while maintaining a low error rate.
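A minimal sketch of the two-stage detection described, in which short-time ZCR flags candidate frames and a short-time energy threshold confirms them; frame sizes and threshold values here are illustrative assumptions, not the paper's settings.

```python
# Sketch: frame-level cough candidate detection combining short-time
# zero-crossing rate (ZCR) and short-time energy. Thresholds assumed.
import numpy as np

def frame_signal(x, frame_len, hop):
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

def detect_cough_frames(x, fs, zcr_thresh=0.15, energy_thresh=0.01):
    frames = frame_signal(x, int(0.025 * fs), int(0.010 * fs))
    energy = (frames ** 2).mean(axis=1)                       # short-time energy
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)
    # candidates: burst-like (high ZCR) AND loud (high energy)
    return (zcr > zcr_thresh) & (energy > energy_thresh)

rng = np.random.default_rng(0)
print(detect_cough_frames(rng.normal(size=16000), 16000).sum())
```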
Acoustic Analysis of PD Speech
Chenausky, Karen; MacAuslan, Joel; Goldhor, Richard
2011-01-01
According to the U.S. National Institutes of Health, approximately 500,000 Americans have Parkinson's disease (PD), with roughly another 50,000 receiving new diagnoses each year. 70%–90% of these people also have the hypokinetic dysarthria associated with PD. Deep brain stimulation (DBS) substantially relieves motor symptoms in advanced-stage patients for whom medication produces disabling dyskinesias. This study investigated speech changes as a result of DBS settings chosen to maximize motor performance. The speech of 10 PD patients and 12 normal controls was analyzed for syllable rate and variability, syllable length patterning, vowel fraction, voice-onset time variability, and spirantization. These were normalized by the controls' standard deviation to represent distance from normal and combined into a composite measure. Results show that DBS settings relieving motor symptoms can improve speech, making it up to three standard deviations closer to normal. However, the clinically motivated settings evaluated here show greater capacity to impair, rather than improve, speech. A feedback device developed from these findings could be useful to clinicians adjusting DBS parameters, as a means for ensuring they do not unwittingly choose DBS settings which impair patients' communication. PMID:21977333
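The composite measure described can be sketched as follows. Expressing each speech measure in control-group standard deviations and averaging the absolute z-scores is an assumed combination rule, since the abstract does not spell out the exact formula.

```python
# Sketch: distance-from-normal composite. Each measure (syllable rate,
# VOT variability, spirantization, ...) is normalized by the controls'
# mean and SD; absolute z-scores are averaged per patient (assumption).
import numpy as np

def composite_distance(patient_measures, control_measures):
    # rows = subjects, columns = speech measures
    mu = control_measures.mean(axis=0)
    sd = control_measures.std(axis=0, ddof=1)
    z = (patient_measures - mu) / sd     # distance in control SDs
    return np.abs(z).mean(axis=1)        # composite score per patient

rng = np.random.default_rng(0)
patients = rng.normal(1.0, 1.0, (10, 6))   # hypothetical 10 x 6 data
controls = rng.normal(0.0, 1.0, (12, 6))   # hypothetical 12 x 6 data
print(composite_distance(patients, controls))
```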
Speech production in experienced cochlear implant users undergoing short-term auditory deprivation
NASA Astrophysics Data System (ADS)
Greenman, Geoffrey; Tjaden, Kris; Kozak, Alexa T.
2005-09-01
This study examined the effect of short-term auditory deprivation on the speech production of five postlingually deafened women, all of whom were experienced cochlear implant users. Each cochlear implant user, as well as age- and gender-matched control speakers, produced CVC target words embedded in a reading passage. Speech samples for the deafened adults were collected on two separate occasions. First, the speakers were recorded after wearing their speech processor consistently for at least two to three hours prior to recording (implant "ON"). The second recording occurred when the speakers had their speech processors turned off for approximately ten to twelve hours prior to recording (implant "OFF"). Acoustic measures, including fundamental frequency (F0), the first (F1) and second (F2) formants of the vowels, vowel space area, vowel duration, spectral moments of the consonants, as well as utterance duration and sound pressure level (SPL) across the entire utterance, were analyzed in both speaking conditions. For each implant speaker, acoustic measures will be compared across the implant "ON" and implant "OFF" speaking conditions and will also be compared to data obtained from normal-hearing speakers.
Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas
2014-03-01
Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations, this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.
Lopes, Leonardo Wanderley; Lima, Ivonaldo Leidson Barbosa; Silva, Eveline Gonçalves; Almeida, Larissa Nadjara Alves de; Almeida, Anna Alice Figueiredo de
2013-01-01
To analyze the preferences and attitudes of listeners in relation to regional (RA) and softened accents (SA) in television journalism. Three television news presenters recorded carrier phrases and a standard text using RA and SA. The recordings were presented to 105 judges who listened to the word pairs and answered whether they perceived differences between the RA and SA, and the type of pronunciation that they preferred in the speech of television news presenters. Afterwards, they listened to the sentences and judged seven attributes in the contexts of RA and SA using a semantic differential scale. The listeners perceived the difference between the regional and softened pronunciation (p<0.0001). They preferred the SA in the presenters' speech in all variants studied (p<0.0001). There was an association between linguistic variants and the judgment of attitudes (p=0.002). The listeners regarded the presence of SA in the presenters' speech as positive in all variants studied (p<0.0001). The listeners prefer and assign positive values to the SA in the speech of television journalists in all linguistic variants studied.
Monson, Brian B.; Lotto, Andrew J.; Story, Brad H.
2012-01-01
The human singing and speech spectrum includes energy above 5 kHz. To begin an in-depth exploration of this high-frequency energy (HFE), a database of anechoic high-fidelity recordings of singers and talkers was created and analyzed. Third-octave band analysis from the long-term average spectra showed that production level (soft vs normal vs loud), production mode (singing vs speech), and phoneme (for voiceless fricatives) all significantly affected HFE characteristics. Specifically, increased production level caused an increase in absolute HFE level, but a decrease in relative HFE level. Singing exhibited higher levels of HFE than speech in the soft and normal conditions, but not in the loud condition. Third-octave band levels distinguished phoneme class of voiceless fricatives. Female HFE levels were significantly greater than male levels only above 11 kHz. This information is pertinent to various areas of acoustics, including vocal tract modeling, voice synthesis, augmentative hearing technology (hearing aids and cochlear implants), and training/therapy for singing and speech. PMID:22978902
Schwartz, Jean-Luc; Savariaux, Christophe
2014-01-01
An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would be typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call “preparatory gestures”. However, when syllables are chained in sequences, as they are typically in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call “comodulatory gestures” providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction. PMID:25079216
Quantifying the intelligibility of speech in noise for non-native listeners.
van Wijngaarden, Sander J; Steeneken, Herman J M; Houtgast, Tammo
2002-04-01
When listening to languages learned at a later age, speech intelligibility is generally lower than when listening to one's native language. The main purpose of this study is to quantify speech intelligibility in noise for specific populations of non-native listeners, only broadly addressing the underlying perceptual and linguistic processing. An easy method is sought to extend these quantitative findings to other listener populations. Dutch subjects listening to German and English speech, with proficiency in these languages ranging from reasonable to excellent, were found to require a 1-7 dB better speech-to-noise ratio to obtain 50% sentence intelligibility than native listeners. Also, the psychometric function for sentence recognition in noise was found to be shallower for non-native than for native listeners (worst-case slope around the 50% point of 7.5%/dB, compared to 12.6%/dB for native listeners). Differences between native and non-native speech intelligibility are largely predicted by linguistic entropy estimates as derived from a letter guessing task. Less effective use of context effects (especially semantic redundancy) explains the reduced speech intelligibility for non-native listeners. While measuring speech intelligibility for many different populations of listeners (languages, linguistic experience) may be prohibitively time consuming, obtaining predictions of non-native intelligibility from linguistic entropy may help to extend the results of this study to other listener populations.
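The slope figures quoted above come from fitting a psychometric function to recognition scores versus speech-to-noise ratio. A minimal sketch of such a fit, assuming a simple logistic form and made-up data (the study's exact fitting procedure is not reproduced here):

```python
# Sketch: fit a logistic psychometric function to sentence-recognition
# scores vs. SNR and report the slope at the 50% point in %/dB.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, snr50, k):
    return 1.0 / (1.0 + np.exp(-k * (snr - snr50)))

snr = np.array([-9.0, -6.0, -3.0, 0.0, 3.0])        # dB (hypothetical)
p_correct = np.array([0.08, 0.25, 0.55, 0.82, 0.95])

(snr50, k), _ = curve_fit(logistic, snr, p_correct, p0=[-3.0, 0.5])
slope_pct_per_db = 100.0 * k / 4.0   # logistic slope at its midpoint is k/4
print(f"SRT = {snr50:.1f} dB, slope = {slope_pct_per_db:.1f} %/dB")
```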
Emotion recognition from speech: tools and challenges
NASA Astrophysics Data System (ADS)
Al-Talabani, Abdulbasit; Sellahewa, Harin; Jassim, Sabah A.
2015-05-01
Human emotion recognition from speech is studied frequently for its importance in many applications, e.g., human-computer interaction. There is wide diversity and little agreement about the basic emotions or emotion-related states on the one hand, and about where the emotion-related information lies in the speech signal on the other. These diversities motivate our investigation of extracting meta-features using a PCA approach, or using a non-adaptive random projection (RP), to significantly reduce the large-dimensional speech feature vectors that may contain a wide range of emotion-related information. Subsets of meta-features are fused to increase the performance of the recognition model, which adopts a score-based LDC classifier. We demonstrate that our scheme outperforms state-of-the-art results when tested on non-prompted databases or acted databases (i.e., when subjects act specific emotions while uttering a sentence). However, the huge gap between accuracy rates achieved on the different types of speech datasets raises questions about the way emotions modulate speech. In particular, we argue that emotion recognition from speech should not be treated as a simple classification problem. We demonstrate the presence of a spectrum of different emotions in the same speech portion, especially in the non-prompted datasets, which tend to be more "natural" than the acted datasets, where the subjects attempt to suppress all but one emotion.
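A minimal sketch of the reduction-plus-classifier pipeline described, with sklearn's LinearDiscriminantAnalysis standing in for the paper's score-based LDC and all data made up:

```python
# Sketch: non-adaptive random projection compresses high-dimensional
# utterance features into "meta-features"; a linear discriminant
# classifier then labels emotions. Feature/label data are synthetic.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4000))      # hypothetical utterance features
y = rng.integers(0, 6, size=200)      # six emotion labels (made up)

rp = GaussianRandomProjection(n_components=100, random_state=1)
X_meta = rp.fit_transform(X)          # meta-features

clf = LinearDiscriminantAnalysis().fit(X_meta[:150], y[:150])
print("held-out accuracy:", clf.score(X_meta[150:], y[150:]))
```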
A highly penetrant form of childhood apraxia of speech due to deletion of 16p11.2
Fedorenko, Evelina; Morgan, Angela; Murray, Elizabeth; Cardinaux, Annie; Mei, Cristina; Tager-Flusberg, Helen; Fisher, Simon E; Kanwisher, Nancy
2016-01-01
Individuals with heterozygous 16p11.2 deletions reportedly suffer from a variety of difficulties with speech and language. Indeed, recent copy-number variant screens of children with childhood apraxia of speech (CAS), a specific and rare motor speech disorder, have identified three unrelated individuals with 16p11.2 deletions. However, the nature and prevalence of speech and language disorders in general, and CAS in particular, is unknown for individuals with 16p11.2 deletions. Here we took a genotype-first approach, conducting detailed and systematic characterization of speech abilities in a group of 11 unrelated children ascertained on the basis of 16p11.2 deletions. To obtain the most precise and replicable phenotyping, we included tasks that are highly diagnostic for CAS, and we tested children under the age of 18 years, an age group where CAS has been best characterized. Two individuals were largely nonverbal, preventing detailed speech analysis, whereas the remaining nine met the standard accepted diagnostic criteria for CAS. These results link 16p11.2 deletions to a highly penetrant form of CAS. Our findings underline the need for further precise characterization of speech and language profiles in larger groups of affected individuals, which will also enhance our understanding of how genetic pathways contribute to human communication disorders. PMID:26173965
Pitch-Based Segregation of Reverberant Speech
2005-02-01
…speaker recognition in real environments, audio information retrieval, and hearing prosthesis. Second, although binaural listening improves the intelligibility of target speech under anechoic conditions (Bronkhorst, 2000), this binaural advantage is largely eliminated by reverberation (Plomp, 1976) … (Brown and Cooke, 1994; Wang and Brown, 1999; Hu and Wang, 2004), as well as in binaural separation (e.g., Roman et al., 2003; Palomaki et al., 2004) …
ERIC Educational Resources Information Center
Gentilucci, Maurizio; Campione, Giovanna Cristina; Volta, Riccardo Dalla; Bernardis, Paolo
2009-01-01
Does the mirror system affect the control of speech? This issue was addressed in behavioral and Transcranial Magnetic Stimulation (TMS) experiments. In behavioral experiment 1, participants pronounced the syllable /da/ while observing (1) a hand grasping large and small objects with power and precision grasps, respectively, (2) a foot interacting…
ERIC Educational Resources Information Center
Blau, Vera; Reithler, Joel; van Atteveldt, Nienke; Seitz, Jochen; Gerretsen, Patty; Goebel, Rainer; Blomert, Leo
2010-01-01
Learning to associate auditory information of speech sounds with visual information of letters is a first and critical step for becoming a skilled reader in alphabetic languages. Nevertheless, it remains largely unknown which brain areas subserve the learning and automation of such associations. Here, we employ functional magnetic resonance…
Oliveira Barrichelo, V M; Heuer, R J; Dean, C M; Sataloff, R T
2001-09-01
Many studies have described and analyzed the singer's formant. A similar phenomenon produced by trained speakers has led some authors to examine the speaker's ring. If we consider these phenomena as resonance effects associated with vocal tract adjustments and training, can we hypothesize that trained singers carry over their singing-formant ability into speech, also obtaining a speaker's ring? Can we find similar differences in energy distribution in continuous speech? Forty classically trained singers and forty untrained normal speakers performed an all-voiced reading task and produced a sample of a sustained spoken vowel /a/. The singers were also requested to produce a sustained sung vowel /a/ at a comfortable pitch. The reading was analyzed by the long-term average spectrum (LTAS) method. The sustained vowels were analyzed through power spectrum analysis. The data suggest that singers show more energy concentration in the singer's formant/speaker's ring region in both sung and spoken vowels. The singers' spoken-vowel energy in the speaker's ring area was significantly larger than that of the untrained speakers. The LTAS showed similar findings, suggesting that these differences also occur in continuous speech. This finding supports the value of further research on the effect of singing training on the resonance of the speaking voice.
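A minimal sketch of the kind of LTAS-based comparison described above: compute a long-term average spectrum and express the energy in a band around 3 kHz relative to the total. The 2-4 kHz band edges are assumptions, not the study's exact analysis band.

```python
# Sketch: relative level of the "speaker's ring" region from a
# long-term average spectrum (Welch estimate). Band edges assumed.
import numpy as np
from scipy.signal import welch

def speakers_ring_level(x, fs):
    f, psd = welch(x, fs=fs, nperseg=4096)    # long-term average spectrum
    ring = psd[(f >= 2000) & (f <= 4000)].sum()
    total = psd.sum()
    return 10.0 * np.log10(ring / total)      # dB re overall energy

rng = np.random.default_rng(0)
x = rng.normal(size=48000)                    # stand-in for a reading sample
print(speakers_ring_level(x, 16000))
```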
Strom, Mark A; Silverberg, Jonathan I
2016-01-01
To determine if eczema is associated with an increased risk of a speech disorder. We analyzed data on 354,416 children and adolescents from 19 US population-based cohorts: the 2003-2004 and 2007-2008 National Survey of Children's Health and the 1997-2013 National Health Interview Survey, each a prospective, questionnaire-based cohort. In multivariate survey logistic regression models adjusting for sociodemographics and comorbid allergic disease, eczema was significantly associated with higher odds of speech disorder in 12 of 19 cohorts (P < .05). The pooled prevalence of speech disorder in children with eczema was 4.7% (95% CI 4.5%-5.0%) compared with 2.2% (95% CI 2.2%-2.3%) in children without eczema. In pooled multivariate analysis, eczema was associated with increased odds of speech disorder (aOR [95% CI] 1.81 [1.57-2.05], P < .001). In a single study assessing eczema severity, mild (1.36 [1.02-1.81], P = .03) and severe eczema (3.56 [1.70-7.48], P < .001) were associated with higher odds of speech disorder. History of eczema was associated with moderate (2.35 [1.34-4.10], P = .003) and severe (2.28 [1.11-4.72], P = .03) speech disorder. Finally, significant interactions were found, such that children with both eczema and attention deficit disorder (with or without hyperactivity) or sleep disturbance had a far greater risk of speech disorders than children with either condition alone. Pediatric eczema may be associated with increased risk of speech disorder. Further prospective studies are needed to characterize the exact nature of this association. Copyright © 2016 Elsevier Inc. All rights reserved.
Arnulf, Isabelle; Uguccioni, Ginevra; Gay, Frederick; Baldayrou, Etienne; Golmard, Jean-Louis; Gayraud, Frederique; Devevey, Alain
2017-11-01
Speech is a complex function in humans, but the linguistic characteristics of sleep talking are unknown. We analyzed sleep-associated speech in adults, mostly (92%) during parasomnias. The utterances recorded during night-time video-polysomnography were analyzed for number of words, propositions, and speech episodes; frequency; gaps and pauses (denoting turn-taking in the conversation); lemmatization; verbosity; negative/imperative/interrogative tone; first/second person; politeness; and abuse. Two hundred thirty-two subjects (aged 49.5 ± 20 years; 41% women; 129 with rapid eye movement [REM] sleep behavior disorder, 87 with sleepwalking/sleep terrors, 15 healthy subjects, and 1 patient with sleep apnea speaking in non-REM sleep) uttered 883 speech episodes, comprising 59% nonverbal utterances (mumbles, shouts, whispers, and laughs) and 3349 understandable words. The most frequent word was "No": negations represented 21.4% of clauses (more in non-REM sleep). Interrogations were found in 26% of speech episodes (more in non-REM sleep), and subordinate clauses in 12.9% of speech episodes. As many as 9.7% of clauses contained profanities (more in non-REM sleep). Verbal abuse lasted longer in REM sleep and was mostly directed toward insulting or condemning someone, whereas swearing predominated in non-REM sleep. Men sleep-talked more than women and used a higher proportion of profanities. Apparent turn-taking in the conversation respected the usual language gaps. Sleep talking parallels awake talking for syntax, semantics, and turn-taking in conversation, suggesting that the sleeping brain can function at a high level. Language during sleep is mostly a familiar, tense conversation with inaudible others, suggestive of conflicts. © Sleep Research Society 2017. Published by Oxford University Press [on behalf of the Sleep Research Society]. All rights reserved. For permissions, please email: journals.permissions@oup.com
Acoustic characteristics of modern Greek Orthodox Church music.
Delviniotis, Dimitrios S
2013-09-01
Some acoustic characteristics of the two types of vocal music in the Greek Orthodox Church, the Byzantine chant (BC) and ecclesiastical speech (ES), are studied in relation to common Greek speech and Western opera. Vocal samples were obtained, and their sound pressure level (SPL), fundamental frequency (F0), and long-time average spectrum (LTAS) characteristics were analyzed. Twenty chanters, including two who were also opera singers, sang (BC) and read (ES) the same hymn of Byzantine music (BM); the two opera singers additionally sang the same opera aria; common speech samples were also obtained; and all audio was analyzed. The distribution of SPL values showed that BC and ES have higher SPL, by 9 and 12 dB respectively, than common speech. The average F0 in ES tends to be lower than in common speech, and the smallest standard deviation (SD) of F0 values characterizes its monotonicity. The tone-scale intervals of BC are close to the currently accepted theory, with an SD of 0.24 semitones. The rate and extent of vibrato, which is rare in BC, equal 4.1 Hz and 0.6 semitones, respectively. The average LTAS slope is greatest in BC (+4.5 dB), though smaller than in opera (+5.7 dB). In both BC and ES, instead of the singer's formant seen in operatic voices, a speaker's formant (SPF) was observed around 3300 Hz, with relative levels of +6.3 and +4.6 dB, respectively. The two vocal types of BM, BC and ES, differ both from each other and from common Greek speech and operatic style with regard to SPL, the mean and SD of F0, the LTAS slope, and the relative level of the SPF. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Polur, Prasad D; Miller, Gerald E
2006-10-01
Computer speech recognition for individuals with dysarthria, such as cerebral palsy patients, requires a robust technique that can handle conditions of very high variability and limited training data. In this study, the application of a 10-state ergodic hidden Markov model (HMM)/artificial neural network (ANN) hybrid structure to a dysarthric speech (isolated word) recognition system, intended to act as an assistive tool, was investigated. A small vocabulary spoken by three cerebral palsy subjects was chosen. The effect of this structure on the recognition rate of the system was investigated by comparing it with an ergodic hidden Markov model as a control. This was done in order to determine whether the modified technique contributed to enhanced recognition of dysarthric speech. The speech was sampled at 11 kHz. Mel-frequency cepstral coefficients were extracted using 15 ms frames and served as training input to the hybrid model. The results demonstrated that the hybrid model structure was quite robust in its ability to handle the large variability and non-conformity of dysarthric speech. The level of variability in input dysarthric speech patterns sometimes limits the reliability of the system; however, its application as a rehabilitation/control tool to assist dysarthric motor-impaired individuals holds sufficient promise.
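The front end described (MFCCs from 11 kHz speech with 15 ms frames) can be sketched with librosa; the hop length and coefficient count below are assumptions, since the abstract does not specify them.

```python
# Sketch: MFCC front end for the HMM/ANN hybrid. 11 kHz sampling and
# 15 ms frames follow the abstract; 13 coefficients and a 5 ms hop
# are assumed values.
import librosa

y, sr = librosa.load("utterance.wav", sr=11025)   # hypothetical file
frame = int(0.015 * sr)                           # 15 ms analysis frames
hop = int(0.005 * sr)                             # assumed frame shift
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            n_fft=frame, win_length=frame, hop_length=hop)
print(mfcc.shape)   # (13, n_frames): training input for the hybrid model
```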
Speech intelligibility in complex acoustic environments in young children
NASA Astrophysics Data System (ADS)
Litovsky, Ruth
2003-04-01
While the auditory system undergoes tremendous maturation during the first few years of life, it has become clear that in complex scenarios when multiple sounds occur and when echoes are present, children's performance is significantly worse than that of their adult counterparts. The ability of children (3-7 years of age) to understand speech in a simulated multi-talker environment and to benefit from spatial separation of the target and competing sounds was investigated. In these studies, competing sources vary in number, location, and content (speech, modulated or unmodulated speech-shaped noise, and time-reversed speech). The acoustic spaces were also varied in size and amount of reverberation. Finally, children with chronic otitis media who received binaural training were tested pre- and post-training on a subset of conditions. Results indicated the following. (1) Children experienced significantly more masking than adults, even in the simplest conditions tested. (2) When the target and competing sounds were spatially separated, speech intelligibility improved, but the amount varied with age, type of competing sound, and number of competitors. (3) In a large reverberant classroom there was no benefit of spatial separation. (4) Binaural training improved speech intelligibility performance in children with otitis media. Future work includes similar studies in children with unilateral and bilateral cochlear implants. [Work supported by NIDCD, DRF, and NOHR.]
Popova, Svetlana; Lange, Shannon; Burd, Larry; Shield, Kevin; Rehm, Jürgen
2014-12-01
This study, which is part of a large economic project on the overall burden and cost associated with Foetal Alcohol Spectrum Disorder (FASD) in Canada, estimated the cost of 1:1 speech-language interventions among children and youth with FASD for Canada in 2011. The number of children and youth with FASD and speech-language disorder(s) (SLD), the distribution of the level of severity, and the number of hours needed to treat were estimated using data from the available literature. The cost of 1:1 speech-language interventions was computed using the average cost per hour for speech-language pathologists. It was estimated that approximately 37,928 children and youth with FASD had SLD in Canada in 2011. Using the most conservative approach, the annual cost of 1:1 speech-language interventions among children and youth with FASD is substantial, ranging from $72.5 million to $144.1 million Canadian dollars. Speech-language pathologists should be aware of the disproportionate number of children and youth with FASD who have SLD and of the need for early identification to improve access to early intervention. Early identification and access to high-quality services may have a role in decreasing the risk of developing secondary disabilities and in reducing the economic burden of FASD on society.
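The reported totals support a quick back-of-envelope check of what they imply per child; the hourly rate below is a hypothetical figure for illustration, not one taken from the study.

```python
# Back-of-envelope check of the figures quoted above: totals imply a
# per-child annual cost; dividing by a hypothetical pathologist rate
# gives implied treatment hours. The rate is an assumption.
n_children = 37_928
low, high = 72.5e6, 144.1e6              # reported annual totals (CAD)

per_child_low = low / n_children
per_child_high = high / n_children
print(f"per child: ${per_child_low:,.0f} - ${per_child_high:,.0f}")

assumed_rate = 100.0                     # CAD/hour (hypothetical)
print(f"implied hours: {per_child_low / assumed_rate:.0f}"
      f" - {per_child_high / assumed_rate:.0f} per child per year")
```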
Common cues to emotion in the dynamic facial expressions of speech and song.
Livingstone, Steven R; Thompson, William F; Wanderley, Marcelo M; Palmer, Caroline
2015-01-01
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech-song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech-song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions were identified poorly in voice-only singing but accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song yet were equivalent in speech. Collectively, these findings reveal broad commonalities in the facial cues to emotion in speech and song, together with differences in perception and acoustic-motor production.
Brammer, Anthony J; Yu, Gongqiang; Bernstein, Eric R; Cherniack, Martin G; Peterson, Donald R; Tufts, Jennifer B
2014-08-01
An adaptive, delayless, subband feed-forward control structure is employed to improve the speech signal-to-noise ratio (SNR) in the communication channel of a circumaural headset/hearing protector (HPD) from 90 Hz to 11.3 kHz, and to provide active noise control (ANC) from 50 to 800 Hz to complement the passive attenuation of the HPD. The task involves optimizing the speech SNR for each communication channel subband, subject to limiting the maximum sound level at the ear, maintaining a speech SNR preferred by users, and reducing large inter-band gain differences to improve speech quality. The performance of a proof-of-concept device has been evaluated in a pseudo-diffuse sound field when worn by human subjects under conditions of environmental noise and speech that do not pose a risk to hearing, and by simulation for other conditions. For the environmental noises employed in this study, subband speech SNR control combined with subband ANC produced greater improvement in word scores than subband ANC alone, and improved the consistency of word scores across subjects. The simulation employed a subject-specific linear model, and predicted that word scores are maintained in excess of 90% for sound levels outside the HPD of up to ∼115 dBA.
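The subband SNR optimization described above can be illustrated with a simple gain rule. The sketch below is not the authors' algorithm: it merely boosts each communication-channel subband toward a target speech SNR while capping the at-ear level, with all numeric values assumed.

```python
# Sketch: per-subband gain rule. Boost bands whose speech SNR falls
# below a target, but never push the band level past an at-ear cap.
# Target SNR, cap, and band data are illustrative assumptions.
import numpy as np

def subband_gains(speech_db, noise_db, target_snr_db=15.0, max_level_db=85.0):
    snr = speech_db - noise_db
    gain = np.clip(target_snr_db - snr, 0.0, None)   # boost only where needed
    gain = np.minimum(gain, max_level_db - speech_db)  # enforce level cap
    return np.clip(gain, 0.0, None)

speech = np.array([62.0, 60.0, 55.0, 50.0])          # dB per subband
noise  = np.array([55.0, 52.0, 45.0, 30.0])
print(subband_gains(speech, noise))                  # dB gain per subband
```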
A survey of acoustic conditions in semi-open plan classrooms in the United Kingdom.
Greenland, Emma E; Shield, Bridget M
2011-09-01
This paper reports the results of a large-scale, detailed acoustic survey of 42 open plan classrooms of varying design in the UK, each of which contained between 2 and 14 teaching areas or classbases. The objective survey procedure, designed specifically for use in open plan classrooms, is described. The acoustic measurements relating to speech intelligibility within a classbase, including ambient noise level, intrusive noise level, speech-to-noise ratio, speech transmission index, and reverberation time, are presented. The effects on speech intelligibility of critical physical design variables, such as the number of classbases within an open plan unit and the selection of acoustic finishes for control of reverberation, are examined. This analysis enables the limitations of open plan classrooms to be discussed and acoustic design guidelines to be developed to ensure good listening conditions. The types of teaching activity for which adequate acoustic conditions can be provided, plus the speech intelligibility requirements of younger children, are also discussed. © 2011 Acoustical Society of America
Design and performance of an analysis-by-synthesis class of predictive speech coders
NASA Technical Reports Server (NTRS)
Rose, Richard C.; Barnwell, Thomas P., III
1990-01-01
The performance of a broad class of analysis-by-synthesis linear predictive speech coders is quantified experimentally. The class of coders includes a number of well-known techniques as well as a very large number of speech coders which have not been named or studied. A general formulation for deriving the parametric representation used in all of the coders in the class is presented. A new coder, named the self-excited vocoder, is discussed because of its good performance with low complexity, and because of the insight this coder gives to analysis-by-synthesis coders in general. The results of a study comparing the performances of different members of this class are presented. The study takes the form of a series of formal subjective and objective speech quality tests performed on selected coders. The results of this study lead to some interesting and important observations concerning the controlling parameters for analysis-by-synthesis speech coders.
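As an illustration of the analysis-by-synthesis principle this coder class shares, here is a toy closed-loop excitation search: each candidate excitation is synthesized through the LPC filter and the candidate minimizing the error is kept. Real coders add a perceptual weighting filter and gain optimization, omitted here for brevity.

```python
# Sketch: closed-loop (analysis-by-synthesis) excitation selection.
# Each codebook entry is passed through the LPC synthesis filter
# 1/A(z); the entry minimizing the squared error to the target wins.
import numpy as np
from scipy.signal import lfilter

def best_excitation(codebook, lpc_a, target):
    # lpc_a = [1, a1, ..., ap] defines the synthesis filter 1 / A(z)
    errors = [np.sum((target - lfilter([1.0], lpc_a, c)) ** 2)
              for c in codebook]
    return int(np.argmin(errors))

rng = np.random.default_rng(2)
codebook = rng.normal(size=(64, 40))         # 64 candidate excitations
a = np.array([1.0, -0.9])                    # toy 1st-order predictor
target = lfilter([1.0], a, codebook[17])     # synthetic target frame
print(best_excitation(codebook, a, target))  # -> 17
```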
The role of the supplementary motor area for speech and language processing.
Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann
2016-09-01
Apart from its function in speech motor control, the supplementary motor area (SMA) has largely been neglected in models of speech and language processing in the brain. The aim of this review paper is to summarize more recent work, suggesting that the SMA has various superordinate control functions during speech communication and language reception, which is particularly relevant in case of increased task demands. The SMA is subdivided into a posterior region serving predominantly motor-related functions (SMA proper) whereas the anterior part (pre-SMA) is involved in higher-order cognitive control mechanisms. In analogy to motor triggering functions of the SMA proper, the pre-SMA seems to manage procedural aspects of cognitive processing. These latter functions, among others, comprise attentional switching, ambiguity resolution, context integration, and coordination between procedural and declarative memory structures. Regarding language processing, this refers, for example, to the use of inner speech mechanisms during language encoding, but also to lexical disambiguation, syntax and prosody integration, and context-tracking. Copyright © 2016 Elsevier Ltd. All rights reserved.
Calandruccio, Lauren; Bradlow, Ann R.; Dhar, Sumitrajit
2013-01-01
Background: Masking release for an English sentence-recognition task in the presence of foreign-accented English speech, compared to native-accented English speech, was reported in Calandruccio, Dhar, and Bradlow (2010). The masking release appeared to increase as the masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers were influencing the significant differences observed. Purpose: The purpose of the current experiment was to minimize spectral differences between speech maskers to determine how various amounts of linguistic information within competing speech affect masking release. Research Design: A mixed-model design with within-subject (four two-talker speech maskers) and between-subject (listener group) factors was conducted. Speech maskers included native-accented English speech and high-intelligibility, moderate-intelligibility, and low-intelligibility Mandarin-accented English. Spectral differences between the masker conditions were minimized by normalizing the long-term average speech spectra of the maskers to each other. Study Sample: Three listener groups were tested: monolingual English speakers with normal hearing, non-native speakers of English with normal hearing, and monolingual speakers of English with hearing loss. The non-native speakers of English were from various native-language backgrounds, not including Mandarin (or any other Chinese dialect). Listeners with hearing loss had symmetrical, mild sloping to moderate sensorineural hearing loss. Data Collection and Analysis: Listeners were asked to repeat back sentences that were presented in the presence of four different two-talker speech maskers. Responses were scored based on the keywords within the sentences (100 keywords/masker condition). A mixed-model regression analysis was used to analyze the difference in performance scores between the masker conditions and the listener groups. Results: Monolingual speakers of English with normal hearing benefited when the competing speech signal was foreign-accented rather than native-accented, allowing for improved speech recognition. Various levels of intelligibility across the foreign-accented speech maskers did not influence results. Neither the non-native English listeners with normal hearing nor the monolingual English speakers with hearing loss benefited from masking release when the masker was changed from native-accented to foreign-accented English. Conclusions: Slight modifications between the target and the masker speech allowed monolingual speakers of English with normal hearing to improve their recognition of native-accented English even when the competing speech was highly intelligible. Further research is needed to determine which modifications within the competing speech signal caused the Mandarin-accented English to be less effective with respect to masking. Determining the influences within the competing speech that make it less effective as a masker, or determining why monolingual normal-hearing listeners can take advantage of these differences, could help improve speech recognition for those with hearing loss in the future. PMID:25126683
Speech recognition: Acoustic-phonetic knowledge acquisition and representation
NASA Astrophysics Data System (ADS)
Zue, Victor W.
1988-09-01
The long-term research goal is to develop and implement speaker-independent continuous speech recognition systems. It is believed that the proper utilization of speech-specific knowledge is essential for such advanced systems. This research is thus directed toward the acquisition, quantification, and representation of acoustic-phonetic and lexical knowledge, and the application of this knowledge to speech recognition algorithms. In addition, we are exploring new speech recognition alternatives based on artificial intelligence and connectionist techniques. We developed a statistical model for predicting the acoustic realization of stop consonants in various positions in the syllable template. A unification-based grammatical formalism was developed for incorporating this model into the lexical access algorithm. We provided an information-theoretic justification for the hierarchical structure of the syllable template. We analyzed segmental durations for vowels and fricatives in continuous speech. Based on contextual information, we developed durational models for vowels and fricatives that account for over 70 percent of the variance, using data from multiple, unknown speakers. We rigorously evaluated the ability of human spectrogram readers to identify stop consonants spoken by many talkers and in a variety of phonetic contexts. Incorporating the declarative knowledge used by the readers, we developed a knowledge-based system for stop identification. We achieved system performance comparable to that of the readers.
Kates, James M; Arehart, Kathryn H
2015-10-01
This paper uses mutual information to quantify the relationship between envelope modulation fidelity and perceptual responses. Data from several previous experiments that measured speech intelligibility, speech quality, and music quality are evaluated for normal-hearing and hearing-impaired listeners. A model of the auditory periphery is used to generate envelope signals, and envelope modulation fidelity is calculated using the normalized cross-covariance of the degraded signal envelope with that of a reference signal. Two procedures are used to describe the envelope modulation: (1) modulation within each auditory frequency band and (2) spectro-temporal processing that analyzes the modulation of spectral ripple components fit to successive short-time spectra. The results indicate that low modulation rates provide the highest information for intelligibility, while high modulation rates provide the highest information for speech and music quality. The low-to-mid auditory frequencies are most important for intelligibility, while mid frequencies are most important for speech quality and high frequencies are most important for music quality. Differences between the spectral ripple components used for the spectro-temporal analysis were not significant in five of the six experimental conditions evaluated. The results indicate that different modulation-rate and auditory-frequency weights may be appropriate for indices designed to predict different types of perceptual relationships.
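The envelope-fidelity computation described (normalized cross-covariance between the degraded and reference envelopes) can be sketched as follows; Hilbert envelopes stand in here for the auditory-model envelopes used in the paper.

```python
# Sketch: normalized cross-covariance between the envelope of a
# degraded signal and that of the reference. Hilbert envelopes are a
# stand-in for the paper's auditory-model envelopes.
import numpy as np
from scipy.signal import hilbert

def envelope_fidelity(reference, degraded):
    env_r = np.abs(hilbert(reference)) - np.abs(hilbert(reference)).mean()
    env_d = np.abs(hilbert(degraded)) - np.abs(hilbert(degraded)).mean()
    return np.dot(env_r, env_d) / np.sqrt(np.dot(env_r, env_r) *
                                          np.dot(env_d, env_d))

fs = 16000
t = np.arange(fs) / fs
ref = np.sin(2 * np.pi * 4 * t) * np.sin(2 * np.pi * 1000 * t)  # toy signal
deg = ref + 0.5 * np.random.default_rng(0).normal(size=fs)      # degraded
print(envelope_fidelity(ref, deg))   # 1.0 = perfect envelope fidelity
```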
Hazan, Valerie; Tuomainen, Outi; Pettinato, Michèle
2016-12-01
This study investigated the acoustic characteristics of spontaneous speech by talkers aged 9-14 years and their ability to adapt these characteristics to maintain effective communication when intelligibility was artificially degraded for their interlocutor. Recordings were made for 96 children (50 female participants, 46 male participants) engaged in a problem-solving task with a same-sex friend; recordings for 20 adults were used as reference. The task was carried out in good listening conditions (normal transmission) and in degraded transmission conditions. Articulation rate, median fundamental frequency (f0), f0 range, and relative energy in the 1- to 3-kHz range were analyzed. With increasing age, children significantly reduced their median f0 and f0 range, became faster talkers, and reduced their mid-frequency energy in spontaneous speech. Children produced similar clear speech adaptations (in degraded transmission conditions) as adults, but only children aged 11-14 years increased their f0 range, an unhelpful strategy not transmitted via the vocoder. Changes made by children were consistent with a general increase in vocal effort. Further developments in speech production take place during later childhood. Children use clear speech strategies to benefit an interlocutor facing intelligibility problems but may not be able to attune these strategies to the same degree as adults.
Kates, James M.; Arehart, Kathryn H.
2015-01-01
This paper uses mutual information to quantify the relationship between envelope modulation fidelity and perceptual responses. Data from several previous experiments that measured speech intelligibility, speech quality, and music quality are evaluated for normal-hearing and hearing-impaired listeners. A model of the auditory periphery is used to generate envelope signals, and envelope modulation fidelity is calculated using the normalized cross-covariance of the degraded signal envelope with that of a reference signal. Two procedures are used to describe the envelope modulation: (1) modulation within each auditory frequency band and (2) spectro-temporal processing that analyzes the modulation of spectral ripple components fit to successive short-time spectra. The results indicate that low modulation rates provide the highest information for intelligibility, while high modulation rates provide the highest information for speech and music quality. The low-to-mid auditory frequencies are most important for intelligibility, while mid frequencies are most important for speech quality and high frequencies are most important for music quality. Differences between the spectral ripple components used for the spectro-temporal analysis were not significant in five of the six experimental conditions evaluated. The results indicate that different modulation-rate and auditory-frequency weights may be appropriate for indices designed to predict different types of perceptual relationships. PMID:26520329
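A minimal sketch of the fidelity measure at the heart of this work: the normalized cross-covariance between the envelope of a degraded signal and that of a clean reference. The paper computes this per auditory band behind a model of the auditory periphery; for brevity, this version uses a single broadband Hilbert envelope, and the signals, sampling rate, and 32 Hz cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope(x, fs, cutoff=32.0):
    """Hilbert envelope, low-pass filtered to keep slow modulations."""
    env = np.abs(hilbert(x))
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)

def modulation_fidelity(reference, degraded, fs):
    """Normalized cross-covariance of the two envelopes
    (1.0 = identical modulation patterns, 0 = unrelated)."""
    e_ref = envelope(reference, fs)
    e_deg = envelope(degraded, fs)
    e_ref -= e_ref.mean()
    e_deg -= e_deg.mean()
    return np.sum(e_ref * e_deg) / np.sqrt(np.sum(e_ref**2) * np.sum(e_deg**2))

# Toy example: a 4 Hz amplitude-modulated tone and a noisy copy
fs = 16000
t = np.arange(fs) / fs
clean = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 220 * t)
noisy = clean + 0.5 * np.random.default_rng(0).standard_normal(fs)
print(modulation_fidelity(clean, noisy, fs))
```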
JND measurements of the speech formants parameters and its implication in the LPC pole quantization
NASA Astrophysics Data System (ADS)
Orgad, Yaakov
1988-08-01
The inherent sensitivity of auditory perception is explicitly used with the objective of designing an efficient speech encoder. Speech can be modelled by a filter representing the vocal tract shape that is driven by an excitation signal representing glottal air flow. This work concentrates on the filter encoding problem, assuming that excitation signal encoding is optimal. Linear predictive coding (LPC) techniques were used to model a short speech segment by an all-pole filter; each pole was directly related to the speech formants. Measurements were made of the auditory just noticeable difference (JND) corresponding to the natural speech formants, with the LPC filter poles as the best candidates to represent the speech spectral envelope. The JND is the maximum precision required in speech quantization; it was defined on the basis of the shift of one pole parameter of a single frame of a speech segment necessary to induce subjective perception of the distortion with 0.75 probability. The average JND in LPC filter poles in natural speech was found to increase with increasing pole bandwidth and, to a lesser extent, frequency. The JND measurements showed a large spread of the residuals around the average values, indicating that inter-formant coupling and, perhaps, other not yet fully understood factors were not taken into account at this stage of the research. A future treatment should consider these factors. The average JNDs obtained in this work were used to design pole quantization tables for speech coding and provided a better bit rate than the standard quantization of reflection coefficients; a 30-bits-per-frame pole quantizer yielded speech quality similar to that obtained with a standard 41-bits-per-frame reflection coefficient quantizer. Owing to the complexity of the numerical root extraction involved, the practical viability of the pole quantization approach remains to be demonstrated.
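The pole-to-formant mapping this coder relies on is compact: a pole of the LPC polynomial at radius r and angle theta corresponds to a formant frequency of theta*fs/(2*pi) Hz and a bandwidth of approximately -ln(r)*fs/pi Hz. The sketch below implements autocorrelation-method LPC and extracts those parameters; the frame length, order, and synthetic test frame are illustrative choices, not the dissertation's.

```python
import numpy as np

def lpc_coefficients(frame, order):
    """Autocorrelation-method LPC: solve the normal equations R a = r."""
    frame = frame * np.hamming(len(frame))
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[ac[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, ac[1:order + 1])
    return np.concatenate(([1.0], -a))        # A(z) = 1 - sum_k a_k z^-k

def poles_to_formants(a, fs):
    """Map the roots of A(z) to formant frequencies and bandwidths (Hz)."""
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]         # keep one of each conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)
    bws = -np.log(np.abs(roots)) * fs / np.pi
    idx = np.argsort(freqs)
    return freqs[idx], bws[idx]

# Synthetic frame with spectral peaks near 700 and 1100 Hz
fs = 8000
t = np.arange(240) / fs
frame = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1100 * t)
freqs, bws = poles_to_formants(lpc_coefficients(frame, 10), fs)
print(freqs.round(0), bws.round(0))
```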
Watts, Christopher R
2016-01-01
Reduced vocal intensity is a core impairment of hypokinetic dysarthria in Parkinson's disease (PD). Speech treatments have been developed to rehabilitate the vocal subsystems underlying this impairment. Intensive treatment programs requiring high-intensity voice and speech exercises with clinician-guided prompting and feedback have been established as effective for improving vocal function. Less is known, however, regarding long-term outcomes of clinical benefit in speakers with PD who receive these treatments. A retrospective cohort design was utilized. Data from 78 patient files across a three-year period were analyzed. All patients received a structured, intensive program of voice therapy focusing on speaking intent and loudness. The dependent variable for all analyses was vocal intensity in decibels (dB SPL). Vocal intensity during sustained vowel production, reading, and novel conversational speech was compared at pre-treatment, post-treatment, six-month follow-up, and twelve-month follow-up periods. Statistically significant increases in vocal intensity were found at the post-treatment, 6-month, and 12-month follow-up periods, with intensity gains ranging from 5 to 17 dB depending on speaking condition and measurement period. Significant treatment effects were found in all three speaking conditions. Effect sizes for all outcome measures were large, suggesting a strong degree of practical significance. Significant increases in vocal intensity measured at the 6- and 12-month follow-up periods suggested that the sample of patients maintained treatment benefit for up to a year. These findings are supported by outcome studies reporting treatment outcomes within a few months post-treatment, in addition to prior studies that have reported long-term outcome results. The positive treatment outcomes experienced by the PD cohort in this study are consistent with treatment responses subsequent to other treatment approaches which focus on high-intensity, clinician-guided motor learning for voice and speech production in PD. Theories regarding the underlying neurophysiological response to treatment are discussed.
Rajchanovska, Domnika; Ivanovska, Beti Zaifirova
2015-01-01
Speech development in preschool children should be consistent with a child's overall development. However, disorders of speech in childhood are not uncommon. The purpose of the study was to determine the impact of demographic and socio-economic conditions on the prevalence of speech disorders in preschool children in Bitola. The study is observational and prospective, with a two-year duration. During the period from May 2009 to June 2011, 1,607 children aged 3 and 5 years who came for regular examinations were observed. The following research methods were applied: pediatric examination, psychological testing (Test of Chuturik), interviews with parents, and a questionnaire on the behavior of children (Child Behavior Checklist - CBCL). Of the 1,607 children analyzed, 772 were aged three years and 835 aged five years; 51.65% were male and 49.35% female. The prevalence of speech disorders was 37.65%. Statistical analysis showed that these disorders were more frequent in three-year-old children and in males living in rural areas and in larger families. These children did not have their own rooms at home, were using mobile phones, and were spending many hours per day watching television (p < 0.01). Also, children whose parents had lower levels of education and were engaged in agriculture more often had significant speech disorders (p < 0.01). Speech disorders in preschool children in Bitola have a high prevalence. Because of their influence on the later cognitive development of children, addressing them requires cooperation among parents, children, and speech therapists and audiologists, who play a significant role in prevention, early detection, and treatment.
Systematic review of compound action potentials as predictors for cochlear implant performance.
van Eijl, Ruben H M; Buitenhuis, Patrick J; Stegeman, Inge; Klis, Sjaak F L; Grolman, Wilko
2017-02-01
The variability in speech perception between cochlear implant users is thought to result from the degeneration of the auditory nerve. Degeneration of the auditory nerve, histologically assessed, correlates with electrophysiologically acquired measures, such as electrically evoked compound action potentials (eCAPs), in experimental animals. To predict degeneration of the auditory nerve in humans, where histology is impossible, this paper reviews the correlation between speech perception and eCAP recordings in cochlear implant patients. Data sources were PubMed and Embase. We performed a systematic search for articles containing the following major themes: cochlear implants, evoked potentials, and speech perception. Two investigators independently conducted title-abstract screening, full-text screening, and critical appraisal. Data were extracted from the remaining articles. Twenty-five of 1,429 identified articles described a correlation between speech perception and eCAP attributes. Due to study heterogeneity, a meta-analysis was not feasible, and studies were descriptively analyzed. Several studies investigating presence of the eCAP, recovery time constant, slope of the amplitude growth function, and spatial selectivity showed significant correlations with speech perception. In contrast, neural adaptation, eCAP threshold, and change with varying interphase gap did not significantly correlate with speech perception in any of the identified studies. Significant correlations between speech perception and parameters obtained through eCAP recordings have been documented in the literature; however, reporting was ambiguous. There is insufficient evidence for eCAPs as a predictive factor for speech perception. More research is needed to further investigate this relation. Laryngoscope, 127:476-487, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Cochlear implanted children present vocal parameters within normal standards.
de Souza, Lourdes Bernadete Rocha; Bevilacqua, Maria Cecília; Brasolotto, Alcione Ghedini; Coelho, Ana Cristina
2012-08-01
The aim was to compare acoustic and perceptual parameters of the voices of cochlear-implanted children with those of normal-hearing children. This is a cross-sectional, quantitative and qualitative study. Thirty-six cochlear-implanted children aged between 3 years 3 months and 5 years 9 months, and 25 children with normal hearing aged between 3 years 11 months and 6 years 6 months, participated in this study. The recordings and the acoustic analysis of the sustained vowel /a/ and spontaneous speech were performed using the PRAAT program. The parameters analyzed for the sustained vowel were the mean fundamental frequency, jitter, shimmer, and harmonics-to-noise ratio (HNR). For spontaneous speech, the minimum and maximum frequencies and the number of semitones were extracted. The speech material was perceptually evaluated using visual-analogue scales of 100 points, covering aspects related to the overall severity of the vocal deviation, roughness, breathiness, strain, pitch, loudness and resonance deviation, and instability. This last parameter was analyzed only for the sustained vowel. The results demonstrated that the majority of the vocal parameters analyzed in the samples of the implanted children disclosed values similar to those obtained by the group of children with normal hearing. Implanted children who participate in a (re)habilitation and follow-up program can present vocal characteristics similar to those of children with normal hearing. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
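For readers wishing to reproduce this style of acoustic analysis, the Praat measures named above (mean F0, jitter, shimmer, HNR) can be scripted from Python via the praat-parselmouth package. A minimal sketch follows; the file name and the pitch-range and period/amplitude-factor settings are common defaults assumed for illustration, not necessarily the study's exact settings.

```python
# pip install praat-parselmouth
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("sustained_a.wav")       # hypothetical recording

# Mean fundamental frequency over the vowel
pitch = snd.to_pitch(pitch_floor=75, pitch_ceiling=600)
f0_mean = call(pitch, "Get mean", 0, 0, "Hertz")

# Jitter and shimmer from a point process of glottal pulses
pulses = call(snd, "To PointProcess (periodic, cc)", 75, 600)
jitter = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer = call([snd, pulses], "Get shimmer (local)",
               0, 0, 0.0001, 0.02, 1.3, 1.6)

# Harmonics-to-noise ratio
hnr = call(snd.to_harmonicity_cc(), "Get mean", 0, 0)

print(f"F0 = {f0_mean:.1f} Hz, jitter = {jitter:.4f}, "
      f"shimmer = {shimmer:.4f}, HNR = {hnr:.1f} dB")
```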
Cleft audit protocol for speech (CAPS-A): a comprehensive training package for speech analysis.
Sell, D; John, A; Harding-Bell, A; Sweeney, T; Hegarty, F; Freeman, J
2009-01-01
The previous literature has largely focused on speech analysis systems and ignored process issues, such as the nature of adequate speech samples, data acquisition, recording and playback. Although there has been recognition of the need for training on tools used in speech analysis associated with cleft palate, little attention has been paid to this issue. To design, execute, and evaluate a training programme for speech and language therapists on the systematic and reliable use of the Cleft Audit Protocol for Speech-Augmented (CAPS-A), addressing issues of standardized speech samples, data acquisition, recording, playback, and listening guidelines. Thirty-six specialist speech and language therapists undertook the training programme over four days. This consisted of two days' training on the CAPS-A tool followed by a third day, making independent ratings and transcriptions on ten new cases which had been previously recorded during routine audit data collection. This task was repeated on day 4, a minimum of one month later. Ratings were made using the CAPS-A record form with the CAPS-A definition table. An analysis was made of the speech and language therapists' CAPS-A ratings at occasion 1 and occasion 2 and the intra- and inter-rater reliability calculated. Trained therapists showed consistency in individual judgements on specific sections of the tool. Intraclass correlation coefficients were calculated for each section with good agreement on eight of 13 sections. There were only fair levels of agreement on anterior oral cleft speech characteristics, non-cleft errors/immaturities and voice. This was explained, at least in part, by their low prevalence which affects the calculation of the intraclass correlation coefficient statistic. Speech and language therapists benefited from training on the CAPS-A, focusing on specific aspects of speech using definitions of parameters and scalar points, in order to apply the tool systematically and reliably. Ratings are enhanced by ensuring a high degree of attention to the nature of the data, standardizing the speech sample, data acquisition, the listening process together with the use of high-quality recording and playback equipment. In addition, a method is proposed for maintaining listening skills following training as part of an individual's continuing education.
The Hierarchical Cortical Organization of Human Speech Processing
de Heer, Wendy A.; Huth, Alexander G.; Griffiths, Thomas L.
2017-01-01
Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to natural speech. Both cerebral hemispheres were actively involved in speech processing in large and equal amounts. Also, the transformation from spectral features to semantic elements occurs early in the cortical speech-processing stream. Our experimental and analytical approaches are important alternatives and complements to standard approaches that use segmented speech and block designs, which report more laterality in speech processing and associated semantic processing to higher levels of cortex than reported here. PMID:28588065
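The variance partitioning logic described above reduces to simple arithmetic on model fits: the variance uniquely explained by a feature space is the R-squared of the full model minus the R-squared of a model omitting that space. The study used regularized voxelwise regression evaluated on held-out data; the in-sample least-squares sketch below, with hypothetical design matrices, illustrates only the partitioning step.

```python
import numpy as np

def r_squared(X, y):
    """In-sample R^2 of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - np.var(y - X @ beta) / np.var(y)

def unique_variance(feature_spaces, y):
    """R^2 of the full model, and the variance uniquely explained by each
    feature space (full R^2 minus R^2 of the model omitting that space)."""
    full = r_squared(np.hstack(feature_spaces), y)
    uniques = [full - r_squared(np.hstack(feature_spaces[:i] + feature_spaces[i + 1:]), y)
               for i in range(len(feature_spaces))]
    return full, uniques

# Hypothetical spectral/articulatory/semantic design matrices for one voxel
rng = np.random.default_rng(0)
spectral = rng.standard_normal((500, 10))
articulatory = rng.standard_normal((500, 8))
semantic = rng.standard_normal((500, 12))
y = spectral @ rng.standard_normal(10) + semantic @ rng.standard_normal(12) \
    + rng.standard_normal(500)
print(unique_variance([spectral, articulatory, semantic], y))
```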
Changes in Preference for Infant-Directed Speech in Low and Moderate Noise by 4.5- to 13-Month-Olds
ERIC Educational Resources Information Center
Newman, Rochelle S.; Hussain, Isma
2006-01-01
Although a large literature discusses infants' preference for infant-directed speech (IDS), few studies have examined how this preference might change over time or across listening situations. The work reported here compares infants' preference for IDS while listening in a quiet versus a noisy environment, and across 3 points in development: 4.5…
ERIC Educational Resources Information Center
Larsson, AnnaKarin; Schölin, Johnna; Mark, Hans; Jönsson, Radi; Persson, Christina
2017-01-01
Background: In the last decade, a large number of children with cleft lip and palate have been adopted to Sweden. A majority of the children were born in China and they usually arrive in Sweden with an unoperated palate. There is currently a lack of knowledge regarding speech and articulation development in this group of children, who also have to…
ERIC Educational Resources Information Center
Kurth, Ruth Justine; Kurth, Lila Mae
A study compared mothers' and fathers' speech patterns when speaking to preschool children, particularly utterance length, sentence types, and word frequencies. All of the children attended a nursery school with a student population of 136 in a large urban area in the Southwest. Volunteer subjects, 28 mothers and 28 fathers of 28 children who…
Therapy Talk: Analyzing Therapeutic Discourse
ERIC Educational Resources Information Center
Leahy, Margaret M.
2004-01-01
Therapeutic discourse is the talk-in-interaction that represents the social practice between clinician and client. This article invites speech-language pathologists to apply their knowledge of language to analyzing therapy talk and to learn how talking practices shape clinical roles and identities. A range of qualitative research approaches,…
ERIC Educational Resources Information Center
Iivari, Juhani; Hirschheim, Rudy
1996-01-01
Analyzes and compares eight information systems (IS) development approaches: Information Modelling, Decision Support Systems, the Socio-Technical approach, the Infological approach, the Interactionist approach, the Speech Act-based approach, Soft Systems Methodology, and the Scandinavian Trade Unionist approach. Discusses the organizational roles…
Magnotti, John F; Basu Mallick, Debshila; Feng, Guo; Zhou, Bin; Zhou, Wen; Beauchamp, Michael S
2015-09-01
Humans combine visual information from mouth movements with auditory information from the voice to recognize speech. A common method for assessing multisensory speech perception is the McGurk effect: When presented with particular pairings of incongruent auditory and visual speech syllables (e.g., the auditory speech sounds for "ba" dubbed onto the visual mouth movements for "ga"), individuals perceive a third syllable, distinct from the auditory and visual components. Chinese and American cultures differ in the prevalence of direct facial gaze and in the auditory structure of their languages, raising the possibility of cultural- and language-related group differences in the McGurk effect. There is no consensus in the literature about the existence of these group differences, with some studies reporting less McGurk effect in native Mandarin Chinese speakers than in English speakers and others reporting no difference. However, these studies sampled small numbers of participants tested with a small number of stimuli. Therefore, we collected data on the McGurk effect from large samples of Mandarin-speaking individuals from China and English-speaking individuals from the USA (total n = 307) viewing nine different stimuli. Averaged across participants and stimuli, we found similar frequencies of the McGurk effect between Chinese and American participants (48 vs. 44%). In both groups, we observed a large range of frequencies both across participants (0 to 100%) and stimuli (15 to 83%), with the main effect of culture and language accounting for only 0.3% of the variance in the data. High individual variability in perception of the McGurk effect necessitates the use of large sample sizes to accurately estimate group differences.
Small intragenic deletion in FOXP2 associated with childhood apraxia of speech and dysarthria.
Turner, Samantha J; Hildebrand, Michael S; Block, Susan; Damiano, John; Fahey, Michael; Reilly, Sheena; Bahlo, Melanie; Scheffer, Ingrid E; Morgan, Angela T
2013-09-01
Relatively little is known about the neurobiological basis of speech disorders, although genetic determinants are increasingly recognized. The first gene implicated in primary speech disorder was FOXP2, identified in a large, informative family with verbal and oral dyspraxia. Subsequently, many de novo and familial cases of severe speech disorder associated with FOXP2 mutations have been reported. These mutations include sequence alterations, translocations, uniparental disomy, and genomic copy number variants. We studied eight probands with speech disorder and their families. Family members were phenotyped using a comprehensive assessment of speech, oral motor function, language, literacy skills, and cognition. Coding regions of FOXP2 were screened to identify novel variants. Segregation of each variant was determined in the probands' families. Variants were identified in two probands. One child with severe motor speech disorder had a small de novo intragenic FOXP2 deletion. His phenotype included features of childhood apraxia of speech and dysarthria, oral motor dyspraxia, receptive and expressive language disorder, and literacy difficulties. The other variant was found in two of three family members with stuttering, and also in their mother, who had oral motor impairment. This variant was considered a benign polymorphism, as it was predicted to be non-pathogenic with in silico tools and was found in database controls. This is the first report of a small intragenic deletion of FOXP2 that is likely to be the cause of severe motor speech disorder associated with language and literacy problems. Copyright © 2013 Wiley Periodicals, Inc.
Auditory Speech Perception Development in Relation to Patient's Age with Cochlear Implant
Ciscare, Grace Kelly Seixas; Mantello, Erika Barioni; Fortunato-Queiroz, Carla Aparecida Urzedo; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa dos
2017-01-01
Introduction A cochlear implant in adolescent patients with pre-lingual deafness is still a debatable issue. Objective The objective of this study is to analyze and compare the development of auditory speech perception in children with pre-lingual auditory impairment submitted to cochlear implantation, in different age groups in the first year after implantation. Method This is a retrospective, documentary study, in which we analyzed 78 reports of children with severe bilateral sensorineural hearing loss, unilateral cochlear implant users of both sexes. They were divided into three groups: G1, 22 children younger than 42 months; G2, 28 children aged between 43 and 83 months; and G3, 28 children older than 84 months. We collected medical record data to characterize the patients, auditory thresholds with cochlear implants, assessment of speech perception, and auditory skills. Results There was no statistical difference in the association of the results among groups G1, G2, and G3 with sex, caregiver education level, city of residence, and speech perception level. There was a moderate correlation between age and hearing aid use time, and between age and cochlear implant use time. There was a strong correlation between age and the age at which cochlear implantation was performed, and between hearing aid use time and the age at which implantation was performed. Conclusion There was no statistical difference in speech perception in relation to the patient's age at cochlear implantation. There were statistically significant differences for the variables of auditory deprivation time between G3 - G1 and G2 - G1, and hearing aid use time between G3 - G2 and G3 - G1. PMID:28680487
a Study of Multiplexing Schemes for Voice and Data.
NASA Astrophysics Data System (ADS)
Sriram, Kotikalapudi
Voice traffic variations are characterized by on/off transitions of voice calls and talkspurt/silence transitions of speakers in conversations. A speaker is known to be in silence for more than half the time during a telephone conversation. In this dissertation, we study schemes that exploit speaker silences for efficient utilization of the transmission capacity in integrated voice/data multiplexing and in digital speech interpolation. We study two voice/data multiplexing schemes. In each scheme, any time slots momentarily unutilized by the voice traffic are made available to data. In the first scheme, the multiplexer does not use speech activity detectors (SAD), and hence the voice traffic variations are due to call on/off only. In the second scheme, the multiplexer detects speaker silences using SAD and transmits voice only during talkspurts. The multiplexer with SAD performs digital speech interpolation (DSI) as well as dynamic channel allocation to voice and data. The performance of the two schemes is evaluated using discrete-time modeling and analysis. The data delay performance for the case of English speech is compared with that for Japanese speech. A closed-form expression for the mean data message delay is derived for the single-channel, single-talker case. In a DSI system, occasional speech losses occur whenever the number of speakers in simultaneous talkspurt exceeds the number of TDM voice channels. In a buffered DSI system, speech loss is further reduced at the cost of delay. We propose a novel fixed-delay buffered DSI scheme. In this scheme, speech fill-in/hangover is not required because there are no variable delays. Hence, all silences that naturally occur in speech are fully utilized. Consequently, a substantial improvement in the DSI performance is made possible. The scheme is modeled and analyzed in discrete time. Its performance is evaluated in terms of the probability of speech clipping, packet rejection ratio, DSI advantage, and delay.
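The probability of speech clipping in such a DSI system has a standard first-order form: if each of M speakers is independently in talkspurt with probability p, clipping occurs whenever more than the N available channels are demanded. The sketch below evaluates this stationary binomial model; the talkspurt probability of 0.4 and the 36-speakers/24-channels example are illustrative, and the model ignores the buffering and fixed-delay refinements the dissertation actually proposes.

```python
from math import comb

def clipping_stats(num_speakers, num_channels, p_talk=0.4):
    """Stationary binomial model of DSI: each of M speakers is in talkspurt
    independently with probability p_talk; speech is clipped whenever more
    than N speakers talk at once."""
    pmf = [comb(num_speakers, k) * p_talk**k * (1 - p_talk)**(num_speakers - k)
           for k in range(num_speakers + 1)]
    p_overload = sum(pmf[num_channels + 1:])
    lost = sum((k - num_channels) * pmf[k]
               for k in range(num_channels + 1, num_speakers + 1))
    frac_clipped = lost / (num_speakers * p_talk)   # fraction of speech lost
    return p_overload, frac_clipped

# 36 conversations interpolated onto 24 channels (DSI advantage of 1.5)
print(clipping_stats(36, 24))
```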
Whitfield, Jason A; Goberman, Alexander M
2014-01-01
Individuals with Parkinson disease (PD) often exhibit decreased range of movement secondary to the disease process, which has been shown to affect articulatory movements. A number of investigations have failed to find statistically significant differences between control and disordered groups, and between speaking conditions, using traditional vowel space area measures. The purpose of the current investigation was to evaluate both between-group (PD versus control) and within-group (habitual versus clear) differences in articulatory function using a novel vowel space measure, the articulatory-acoustic vowel space (AAVS). The novel AAVS is calculated from continuously sampled formant trajectories of connected speech. In the current study, habitual and clear speech samples from twelve individuals with PD along with habitual control speech samples from ten neurologically healthy adults were collected and acoustically analyzed. In addition, a group of listeners completed perceptual rating of speech clarity for all samples. Individuals with PD were perceived to exhibit decreased speech clarity compared to controls. Similarly, the novel AAVS measure was significantly lower in individuals with PD. In addition, the AAVS measure significantly tracked changes between the habitual and clear conditions that were confirmed by perceptual ratings. In the current study, the novel AAVS measure is shown to be sensitive to disease-related group differences and within-person changes in articulatory function of individuals with PD. Additionally, these data confirm that individuals with PD can modulate the speech motor system to increase articulatory range of motion and speech clarity when given a simple prompt. The reader will be able to (i) describe articulatory behavior observed in the speech of individuals with Parkinson disease; (ii) describe traditional measures of vowel space area and how they relate to articulation; (iii) describe a novel measure of vowel space, the articulatory-acoustic vowel space and its relationship to articulation and the perception of speech clarity. Copyright © 2014 Elsevier Inc. All rights reserved.
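As a rough illustration of a vowel space measure computed from continuously sampled formant trajectories rather than from corner-vowel targets, the sketch below takes the area of the one-standard-deviation ellipse of the F1-F2 distribution. This operationalization and the simulated formant tracks are assumptions for illustration; the AAVS as defined by the authors may differ in detail.

```python
import numpy as np

def ellipse_vowel_space(f1_track, f2_track):
    """Area (Hz^2) of the one-standard-deviation ellipse of the F1-F2
    distribution sampled continuously over connected speech."""
    cov = np.cov(np.vstack([f1_track, f2_track]))
    return np.pi * np.sqrt(np.linalg.det(cov))

# Toy comparison: 'clear' speech simulated with wider formant excursions
rng = np.random.default_rng(0)
habitual = ellipse_vowel_space(rng.normal(500, 80, 2000),
                               rng.normal(1500, 200, 2000))
clear = ellipse_vowel_space(rng.normal(500, 110, 2000),
                            rng.normal(1500, 280, 2000))
print(round(habitual), round(clear))    # larger area in the 'clear' condition
```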
Improving language models for radiology speech recognition.
Paulett, John M; Langlotz, Curtis P
2009-02-01
Speech recognition systems have become increasingly popular as a means to produce radiology reports, for reasons both of efficiency and of cost. However, the suboptimal recognition accuracy of these systems can affect the productivity of the radiologists creating the text reports. We analyzed a database of over two million de-identified radiology reports to determine the strongest determinants of word frequency. Our results showed that body site and imaging modality had a similar influence on the frequency of words and of three-word phrases as did the identity of the speaker. These findings suggest that the accuracy of speech recognition systems could be significantly enhanced by further tailoring their language models to body site and imaging modality, which are readily available at the time of report creation.
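One way to act on this finding is to keep separate language model counts per clinical context. The toy sketch below maintains add-alpha-smoothed trigram counts keyed by a hypothetical (modality, body site) pair; the schema, tokenization, and smoothing are illustrative assumptions, not the commercial systems analyzed in the paper.

```python
from collections import Counter, defaultdict

counts = defaultdict(Counter)     # trigram counts keyed by clinical context

def add_report(modality, body_site, tokens):
    """Accumulate trigram counts for one report within its context."""
    padded = ["<s>", "<s>"] + tokens + ["</s>"]
    for i in range(len(padded) - 2):
        counts[(modality, body_site)][tuple(padded[i:i + 3])] += 1

def trigram_prob(modality, body_site, w1, w2, w3, vocab_size, alpha=0.1):
    """Add-alpha smoothed P(w3 | w1, w2) in a context-specific model."""
    ctx = counts[(modality, body_site)]
    tri = ctx[(w1, w2, w3)]
    bi = sum(c for k, c in ctx.items() if k[:2] == (w1, w2))
    return (tri + alpha) / (bi + alpha * vocab_size)

add_report("CT", "chest", "no acute cardiopulmonary abnormality".split())
print(trigram_prob("CT", "chest", "acute", "cardiopulmonary", "abnormality", 50000))
```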
The Role of Corticostriatal Systems in Speech Category Learning
Yi, Han-Gyol; Maddox, W. Todd; Mumford, Jeanette A.; Chandrasekaran, Bharath
2016-01-01
One of the most difficult category learning problems for humans is learning nonnative speech categories. While feedback-based category training can enhance speech learning, the mechanisms underlying these benefits are unclear. In this functional magnetic resonance imaging study, we investigated neural and computational mechanisms underlying feedback-dependent speech category learning in adults. Positive feedback activated a large corticostriatal network including the dorsolateral prefrontal cortex, inferior parietal lobule, middle temporal gyrus, caudate, putamen, and the ventral striatum. Successful learning was contingent upon the activity of domain-general category learning systems: the fast-learning reflective system, involving the dorsolateral prefrontal cortex that develops and tests explicit rules based on the feedback content, and the slow-learning reflexive system, involving the putamen in which the stimuli are implicitly associated with category responses based on the reward value in feedback. Computational modeling of response strategies revealed significant use of reflective strategies early in training and greater use of reflexive strategies later in training. Reflexive strategy use was associated with increased activation in the putamen. Our results demonstrate a critical role for the reflexive corticostriatal learning system as a function of response strategy and proficiency during speech category learning. Keywords: category learning, fMRI, corticostriatal systems, speech, putamen PMID:25331600
A variable rate speech compressor for mobile applications
NASA Technical Reports Server (NTRS)
Yeldener, S.; Kondoz, A. M.; Evans, B. G.
1990-01-01
One of the most promising speech coders at bit rates of 9.6 to 4.8 kbit/s is CELP. Code Excited Linear Prediction (CELP) has dominated the 9.6 to 4.8 kbit/s region during the past 3 to 4 years. Its setback, however, is its expensive implementation. As an alternative to CELP, the Base-Band CELP (CELP-BB) was developed, which produced good quality speech comparable to CELP with a complexity implementable on a single chip, as reported previously. Its robustness was also improved to tolerate error rates up to 1.0 percent and maintain intelligibility at rates of 5.0 percent and more. Although CELP-BB produces good quality speech at around 4.8 kbit/s, it has a fundamental problem when updating the pitch filter memory. A sub-optimal solution is proposed for this problem. Below 4.8 kbit/s, however, CELP-BB suffers from noticeable quantization noise as a result of the large vector dimensions used. Efficient representation of speech below 4.8 kbit/s is reported by introducing Sinusoidal Transform Coding (STC) to represent the LPC excitation, which is called Sine Wave Excited LPC (SWELP). In this case, natural-sounding, good quality synthetic speech is obtained at around 2.4 kbit/s.
NASA Astrophysics Data System (ADS)
Moses, David A.; Mesgarani, Nima; Leonard, Matthew K.; Chang, Edward F.
2016-10-01
Objective. The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography. Approach. The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model. Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder. Main results. The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system. Significance. These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.
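The decoding step described here is standard Viterbi search: per-frame phoneme log-likelihoods (from the LDA model over neural features) are combined with language model transition log-probabilities to find the most probable phoneme sequence. Below is a minimal sketch assuming a bigram phonemic language model and dummy inputs; the paper's system uses a richer n-gram model and task-specific features.

```python
import numpy as np

def viterbi(log_lik, log_trans, log_init):
    """Most probable phoneme path.
    log_lik:   (T, P) per-frame phoneme log-likelihoods
    log_trans: (P, P) bigram log-probabilities, entry (i, j) = log P(j | i)
    log_init:  (P,)   initial phoneme log-probabilities"""
    T, P = log_lik.shape
    delta = log_init + log_lik[0]
    backptr = np.zeros((T, P), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans        # (prev, next)
        backptr[t] = np.argmax(scores, axis=0)
        delta = scores[backptr[t], np.arange(P)] + log_lik[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

# Dummy inputs: 50 frames, 40 phoneme classes
rng = np.random.default_rng(0)
log_lik = np.log(rng.dirichlet(np.ones(40), size=50))
log_trans = np.log(rng.dirichlet(np.ones(40), size=40))
log_init = np.full(40, -np.log(40))
print(viterbi(log_lik, log_trans, log_init)[:10])
```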
Kim, Soo Ji; Jo, Uiri
2013-01-01
Based on the anatomical and functional commonality between singing and speech, various types of musical elements have been employed in music therapy research for speech rehabilitation. This study aimed to develop an accent-based music speech protocol to address voice problems of stroke patients with mixed dysarthria. Subjects were 6 stroke patients with mixed dysarthria who received individual music therapy sessions. Each session lasted 30 minutes, and 12 sessions, including pre- and post-tests, were administered to each patient. To examine the protocol's efficacy, the measures of maximum phonation time (MPT), fundamental frequency (F0), average intensity (dB), jitter, shimmer, noise-to-harmonics ratio (NHR), and diadochokinesis (DDK) were compared between pre- and post-test and analyzed with a paired-sample t-test. The results showed that the measures of MPT, F0, dB, and sequential motion rates (SMR) were significantly increased after administering the protocol. Also, there were statistically significant differences in the measures of shimmer and alternating motion rates (AMR) of the syllable /kʌ/ between pre- and post-test. The results indicated that the accent-based music speech protocol may improve speech motor coordination, including respiration, phonation, articulation, resonance, and prosody, in patients with dysarthria. This suggests the possibility of utilizing the music speech protocol to maximize immediate treatment effects in the course of long-term treatment for patients with dysarthria.
Vigliecca, Nora Silvana
2017-11-09
The aim was to study the relationship between the caregiver's perception of the patient's impairment in spontaneous speech, according to an item of four questions administered by semi-structured interview, and the patient's performance on the Brief Aphasia Evaluation (BAE). 102 right-handed patients with focal brain lesions of different types and locations were examined. The BAE is a valid and reliable instrument for assessing aphasia. The caregiver's perception was correlated with the item of spontaneous speech, the total score, and the three main factors of the BAE: Expression, Comprehension, and Complementary factors. The precision (sensitivity/specificity) of the caregiver's perception of the patient's spontaneous speech was analyzed with reference to the presence or absence of disorder, according to the professional, on the BAE item of spontaneous speech. The studied correlation was satisfactory, being greater (higher than 80%) for the following indicators: the item of spontaneous speech, the Expression factor, and the total score of the scale; the correlation was somewhat smaller (higher than 70%) for the Comprehension and Complementary factors. Comparing two cut-off points that evaluated the precision of the caregiver's perception, satisfactory results were observed in terms of sensitivity and specificity (>70%), with likelihood ratios higher than three. By using the median as the cut-off point, more satisfactory diagnostic discriminations were obtained. Interviewing the caregiver specifically on the patient's spontaneous speech, in an abbreviated form, provides relevant information for the aphasia diagnosis.
Accuracy of cochlear implant recipients in speech reception in the presence of background music.
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia
2012-12-01
This study examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of 3 contrasting types of background music, and compared performance based upon listener groups: CI recipients using conventional long-electrode devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing adults. We tested 154 long-electrode CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 normal-hearing adults on closed-set recognition of spondees presented in 3 contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Signal-to-noise ratio thresholds for speech in music were examined in relation to measures of speech recognition in background noise and multitalker babble, pitch perception, and music experience. The signal-to-noise ratio thresholds for speech in music varied as a function of category of background music, group membership (long-electrode, Hybrid, normal-hearing), and age. The thresholds for speech in background music were significantly correlated with measures of pitch perception and thresholds for speech in background noise; auditory status was an important predictor. Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music.
Neural Tuning to Low-Level Features of Speech throughout the Perisylvian Cortex.
Berezutskaya, Julia; Freudenburg, Zachary V; Güçlü, Umut; van Gerven, Marcel A J; Ramsey, Nick F
2017-08-16
Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus toward anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study, we investigate what happens to these neural representations past the superior temporal gyrus and how they engage higher-level language processing areas such as inferior frontal gyrus. We used low-level sound features to model neural responses to speech outside of the primary auditory cortex. Two complementary imaging techniques were used with human participants (both males and females): electrocorticography (ECoG) and fMRI. Both imaging techniques showed tuning of the perisylvian cortex to low-level speech features. With ECoG, we found evidence of propagation of the temporal features of speech sounds along the ventral pathway of language processing in the brain toward inferior frontal gyrus. Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex toward inferior frontal gyrus were associated with linguistic features such as voice onset time, duration of the formant transitions, and phoneme, syllable, and word boundaries. The present findings provide the groundwork for a comprehensive bottom-up account of speech comprehension in the human brain. SIGNIFICANCE STATEMENT We know that, during natural speech comprehension, a broad network of perisylvian cortical regions is involved in sound and language processing. Here, we investigated the tuning to low-level sound features within these regions using neural responses to a short feature film. We also looked at whether the tuning organization along these brain regions showed any parallel to the hierarchy of language structures in continuous speech. Our results show that low-level speech features propagate throughout the perisylvian cortex and potentially contribute to the emergence of "coarse" speech representations in inferior frontal gyrus typically associated with high-level language processing. These findings add to the previous work on auditory processing and underline a distinctive role of inferior frontal gyrus in natural speech comprehension. Copyright © 2017 the authors.
Semantic Bias in the Acquisition of Relative Clauses in Japanese
ERIC Educational Resources Information Center
Ozeki, Hiromi; Shirai, Yasuhiro
2010-01-01
This study analyzes the acquisition of relative clauses in Japanese to determine the semantic and functional characteristics of children's relative clauses in spontaneous speech. Longitudinal data from five Japanese children are analyzed and compared with English data (Diessel & Tomasello, 2000). The results show that the relative clauses produced…
Stam, Mariska; Smits, Cas; Twisk, Jos W R; Lemke, Ulrike; Festen, Joost M; Kramer, Sophia E
2015-01-01
The first aim of the present study was to determine the change in speech recognition in noise over a period of 5 years in participants aged 18 to 70 years at baseline. The second aim was to investigate whether age, gender, educational level, the initial level of speech recognition in noise, and reported chronic conditions were associated with a change in speech recognition in noise. The baseline and 5-year follow-up data of 427 participants with and without hearing impairment participating in the Netherlands Longitudinal Study on Hearing (NL-SH) were analyzed. The ability to recognize speech in noise was measured twice with the online National Hearing Test, a digit-triplet speech-in-noise test. Speech-reception-threshold-in-noise (SRTn) scores were calculated, corresponding to 50% speech intelligibility. Unaided SRTn scores obtained with the same transducer (headphones or loudspeakers) at both test moments were included. Changes in SRTn were calculated as a raw shift (T1 - T0) and as a shift adjusted for regression toward the mean. Paired t tests and multivariable linear regression analyses were applied. The mean increase (i.e., deterioration) in SRTn was 0.38 dB signal-to-noise ratio (SNR) over 5 years (p < 0.001). Results of the multivariable regression analyses showed that the age group of 50 to 59 years had a significantly larger deterioration in SRTn compared with the age group of 18 to 39 years (raw shift: beta = 0.64 dB SNR; 95% confidence interval: 0.07-1.22; p = 0.028; adjusted shift, controlling for initial speech recognition level: beta = 0.82 dB SNR; 95% confidence interval: 0.27-1.34; p = 0.004). Gender, educational level, and the number of chronic conditions were not associated with a change in SRTn over time. No significant differences in the increase of SRTn were found between the initial levels of speech recognition (i.e., good, insufficient, or poor) when taking into account the phenomenon of regression toward the mean. The study results indicate that deterioration of speech recognition in noise over 5 years can be detected even in adults aged 18 to 70 years. This rather small numeric change might nevertheless represent a relevant impact on an individual's ability to understand speech in everyday life.
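The two change scores mentioned above can be sketched concretely. The raw shift is simply T1 - T0; one common way to adjust a change score for regression toward the mean is to residualize the follow-up on the baseline, which is what the sketch below does with simulated SRTn data. This particular adjustment and the simulated values are illustrative assumptions; the NL-SH analysis may differ in detail.

```python
import numpy as np

def shifts(srt_t0, srt_t1):
    """Raw 5-year shift and a baseline-residualized ('adjusted') shift,
    one common correction for regression toward the mean."""
    raw = srt_t1 - srt_t0
    slope, intercept = np.polyfit(srt_t0, srt_t1, 1)
    adjusted = srt_t1 - (intercept + slope * srt_t0)
    return raw, adjusted

# Simulated SRTn data (dB SNR): mean deterioration of 0.38 dB over 5 years
rng = np.random.default_rng(1)
t0 = rng.normal(-5.5, 2.0, 427)
t1 = t0 + 0.38 + rng.normal(0, 1.0, 427)
raw, adjusted = shifts(t0, t1)
print(round(raw.mean(), 2))   # ~0.38; adjusted shifts average 0 by construction
```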
Acoustic and Perceptual Effects of Dysarthria in Greek with a Focus on Lexical Stress
NASA Astrophysics Data System (ADS)
Papakyritsis, Ioannis
The field of motor speech disorders in Greek is substantially under-researched. Additionally, acoustic studies of lexical stress in dysarthria are generally very rare (Kim et al., 2010). This dissertation examined the acoustic and perceptual effects of Greek dysarthria, focusing on lexical stress. Additional possibly deviant speech characteristics were acoustically analyzed. Data from three dysarthric participants and matched controls were analyzed using a case study design. The analysis of lexical stress was based on data drawn from a single-word repetition task that included pairs of disyllabic words differentiated by stress location. These data were acoustically analyzed in terms of the use of the acoustic cues for Greek stress. The ability of the dysarthric participants to signal stress in single words was further assessed in a stress identification task carried out by 14 naive Greek listeners. Overall, the acoustic and perceptual data indicated that, although all three dysarthric speakers presented with some difficulty in the patterning of stressed and unstressed syllables, each had different underlying problems that gave rise to quite distinct patterns of deviant speech characteristics. The atypical use of lexical stress cues in Anna's data obscured the prominence relations of stressed and unstressed syllables to the extent that the position of lexical stress was usually not perceptually transparent. Chris and Maria, on the other hand, did not have marked difficulties signaling lexical stress location, although listeners were not 100% successful in the stress identification task. For the most part, Chris' atypical phonation patterns and Maria's very slow rate of speech did not interfere with lexical stress signaling. The acoustic analysis of the lexical stress cues was generally in agreement with the participants' performance in the stress identification task. Interestingly, in all three dysarthric participants, but more so in Anna, targets stressed on the first syllable were more impervious to error judgments of lexical stress location than targets stressed on the second syllable, although the acoustic metrics did not always suggest a more appropriate use of lexical stress cues in first-syllable position. The findings contribute to our limited knowledge of the speech characteristics of dysarthria across different languages.
A laboratory study for assessing speech privacy in a simulated open-plan office.
Lee, P J; Jeon, J Y
2014-06-01
The aim of this study is to assess speech privacy in open-plan offices using two recently introduced single-number quantities: the spatial decay rate of speech, DL2,S [dB], and the A-weighted sound pressure level of speech at a distance of 4 m, Lp,A,S,4m [dB]. Open-plan offices were modeled using a DL2,S of 4, 8, and 12 dB, and Lp,A,S,4m was changed in three steps, from 43 to 57 dB. Auditory experiments were conducted at three locations with source-receiver distances of 8, 16, and 24 m, while the background noise level was fixed at 30 dBA. A total of 20 subjects were asked to rate the speech intelligibility and listening difficulty of 240 Korean sentences in such surroundings. The speech intelligibility scores were not affected by DL2,S or Lp,A,S,4m at a source-receiver distance of 8 m; however, listening difficulty ratings changed significantly with increasing DL2,S and Lp,A,S,4m values. At the other locations, the influences of DL2,S and Lp,A,S,4m on speech intelligibility and listening difficulty ratings were significant. It was also found that the speech intelligibility scores and listening difficulty ratings changed considerably with increasing distraction distance (rD). Furthermore, listening difficulty is more sensitive to variations in DL2,S and Lp,A,S,4m than intelligibility scores for sound fields with high speech transmission performance. The recently introduced single-number quantities in the ISO standard, based on the spatial distribution of sound pressure level, were associated with speech privacy in an open-plan office. The results support the single-number quantities being suitable for assessing speech privacy, mainly at large distances. This new information can be considered when designing open-plan offices and drawing up acoustic guidelines for open-plan offices.
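The two quantities combine to give the expected speech level at any distance: because DL2,S is the decay per doubling of distance referenced to the level at 4 m, Lp,A,S(r) = Lp,A,S,4m - DL2,S * log2(r/4). A short sketch, plugging in parameter values similar to the experiment's for illustration:

```python
import math

def speech_level(r, l_pas4m, dl2s):
    """A-weighted speech level (dB) at distance r (m), given the level at
    4 m and the decay per doubling of distance (ISO 3382-3 quantities)."""
    return l_pas4m - dl2s * math.log2(r / 4.0)

# Receiver distances and decay rates similar to the experiment's conditions
for dl2s in (4, 8, 12):
    levels = [round(speech_level(r, 57, dl2s), 1) for r in (8, 16, 24)]
    print(f"DL2,S = {dl2s} dB -> levels at 8/16/24 m: {levels}")
```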
Peters, B Robert; Litovsky, Ruth; Parkinson, Aaron; Lake, Jennifer
2007-08-01
Clinical trials in which children received bilateral cochlear implants in sequential operations were conducted to analyze the extent to which bilateral implantation offers benefits on a number of measures. The present investigation was particularly focused on measuring the effects of age at implantation and experience after activation of the second implant on speech perception performance. Thirty children aged 3 to 13 years were recipients of 2 cochlear implants, received in sequential operations a minimum of 6 months apart. All children received their first implant before 5 years of age and had acquired speech perception capabilities with the first device. They were divided into 3 age groups on the basis of age at the time of second-ear implantation: Group I, 3 to 5 years; Group II, 5.1 to 8 years; and Group III, 8.1 to 13 years. Speech perception measures in quiet included the Multisyllabic Lexical Neighborhood Test (MLNT) for Group I, the Lexical Neighborhood Test (LNT) for Groups II and III, and the Hearing In Noise Test for Children (HINT-C) sentences in quiet for Group III. Speech perception in noise was assessed using the Children's Realistic Intelligibility and Speech Perception (CRISP) test. Testing was performed preoperatively and again post-activation of the second implant at 3, 6, and 12 months (CRISP at 3 and 9 months) in both the unilateral and bilateral conditions in a repeated-measures study design. Two-way repeated-measures analysis of variance was used to analyze statistical significance among device configurations and performance over time. The study was conducted at multiple US centers. Results for speech perception in quiet show that children implanted sequentially acquire open-set speech perception in the second ear relatively quickly (within 6 months). However, children younger than 8 years do so more rapidly, and to a higher level of speech perception ability at 12 months, than older children (mean second-ear MLNT/LNT scores at 12 months: Group I, 83.9%, range 71-96%; Group II, 59.5%, range 40-88%; Group III, 32%, range 12-56%). The second-ear mean HINT-C score for Group III children remained far below that of the first ear even after 12 months of device use (44 versus 89%; t = 6.48; p < 0.001; critical value = 0.025). Speech intelligibility for spondees in noise was significantly better under bilateral conditions than with either ear alone when all children were analyzed as a single group, and for Group III children. At the 9-month test interval, performance in the bilateral configuration was significantly better for all noise conditions (13.2% better for noise at the first cochlear implant, 6.8% better for the noise-front and noise-at-second-implant conditions; t = 2.32, p = 0.024 for noise front; t = 3.75, p < 0.0001 for noise at the first implant; t = 2.73, p = 0.008 for noise at the second implant; critical level = 0.05 throughout). The bilateral benefit in noise increased with time from 3 to 9 months after activation of the second implant. This bilateral advantage is greatest when noise is directed toward the first implanted ear, indicating that the head shadow effect is the most effective binaural mechanism. The bilateral condition produced small improvements in speech perception in quiet and for individual Group I and Group II patient results in noise that, in view of the relatively small number of subjects tested, do not reach statistical significance.
Sequential bilateral cochlear implantation in children of diverse ages has the potential to improve speech perception abilities in the second implanted ear and to provide access to the use of binaural mechanisms such as the head shadow effect. The improvement unfolds over time and continues to grow during the 6 to 12 months after activation of the second implant. Younger children in this study achieved higher open-set speech perception scores in the second ear, but older children still demonstrated bilateral benefit in noise. Determining the long-term impact and cost-effectiveness that result from such potential capabilities in bilaterally implanted children requires additional study with larger groups of subjects and more prolonged monitoring.
Respiratory Constraints in Verbal and Non-verbal Communication.
Włodarczak, Marcin; Heldner, Mattias
2017-01-01
In the present paper we address the old question of respiratory planning in speech production. We recast the problem in terms of speakers' communicative goals and propose that speakers try to minimize respiratory effort, in line with the H&H theory. We analyze respiratory cycles coinciding with no speech (i.e., silence), short verbal feedback expressions (SFEs), and longer vocalizations in terms of parameters of the respiratory cycle, and find little evidence for respiratory planning in feedback production. We also investigate the timing of speech and SFEs in the exhalation and contrast it with nods. We find that while speech is strongly tied to the exhalation onset, SFEs are distributed much more uniformly throughout the exhalation and are often produced on residual air. Given that nods, which do not have any respiratory constraints, tend to be more frequent toward the end of an exhalation, we propose a mechanism whereby respiratory patterns are determined by the trade-off between speakers' communicative goals and respiratory constraints.
Acoustic analysis of trill sounds.
Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri
2012-04-01
In this paper, the acoustic-phonetic characteristics of steady apical trills (trill sounds produced by the periodic vibration of the apex of the tongue) are studied. Signal processing methods, namely zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect an effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are the glottal epochs, the strength of impulses at the glottal epochs, and the instantaneous fundamental frequency of the glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of the vocal tract system during the production of trill sounds. A qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help in spotting trills in continuous speech, are discussed.
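Zero-frequency filtering itself is a short algorithm: difference the signal, pass it twice through a resonator with both poles at 0 Hz (a double integrator), then remove the slowly growing trend with a local-mean window on the order of the average pitch period; positive-going zero crossings of the result mark glottal epochs, and the local slope reflects impulse strength. A sketch under those assumptions (the 10 ms window and the synthetic test signal are illustrative):

```python
import numpy as np
from scipy.signal import lfilter

def zero_frequency_filter(x, fs, window_ms=10.0):
    """Glottal epoch extraction by zero-frequency filtering: two cascaded
    0 Hz resonators followed by iterated local-mean (trend) removal."""
    d = np.diff(x, prepend=x[0])                  # bias/trend pre-removal
    y = lfilter([1.0], [1.0, -2.0, 1.0], d)       # first 0 Hz resonator
    y = lfilter([1.0], [1.0, -2.0, 1.0], y)       # second 0 Hz resonator
    n = int(fs * window_ms / 1000) | 1            # odd-length mean window
    for _ in range(3):
        y = y - np.convolve(y, np.ones(n) / n, mode="same")
    epochs = np.where((y[:-1] < 0) & (y[1:] >= 0))[0]   # +ve zero crossings
    strengths = y[epochs + 1] - y[epochs]         # slope = impulse strength
    return y, epochs, strengths

# Synthetic 120 Hz impulse train as a stand-in for glottal excitation
fs = 16000
x = np.zeros(fs // 2)
x[::fs // 120] = 1.0
_, epochs, _ = zero_frequency_filter(x, fs)
print(np.diff(epochs)[:5])    # intervals close to fs/120 = 133 samples
```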
Sound and speech detection and classification in a Health Smart Home.
Fleury, A; Noury, N; Vacher, M; Glasson, H; Seri, J F
2008-01-01
Improvements in medicine increase life expectancy worldwide and create a new bottleneck at the entrance of specialized and equipped institutions. To allow elderly people to stay at home, researchers are working on ways to monitor them in their own environment with non-invasive sensors. To meet this goal, smart homes, equipped with many sensors, deliver information on the activities of the person and can help detect distress situations. In this paper, we present a global speech and sound recognition system that can be set up in a flat. We placed eight microphones in the Health Smart Home of Grenoble (a real living flat of 47 m²) and we automatically analyze and sort out the different sounds recorded in the flat and the speech uttered (to detect normal or distress French sentences). We introduce the methods for sound and speech recognition, the post-processing of the data, and finally the experimental results obtained in real conditions in the flat.
Simeon, Katherine M.; Bicknell, Klinton; Grieco-Calub, Tina M.
2018-01-01
Individuals use semantic expectancy – applying conceptual and linguistic knowledge to speech input – to improve the accuracy and speed of language comprehension. This study tested how adults use semantic expectancy in quiet and in the presence of speech-shaped broadband noise at -7 and -12 dB signal-to-noise ratio. Twenty-four adults (22.1 ± 3.6 years, mean ±SD) were tested on a four-alternative-forced-choice task whereby they listened to sentences and were instructed to select an image matching the sentence-final word. The semantic expectancy of the sentences was unrelated to (neutral), congruent with, or conflicting with the acoustic target. Congruent expectancy improved accuracy and conflicting expectancy decreased accuracy relative to neutral, consistent with a theory where expectancy shifts beliefs toward likely words and away from unlikely words. Additionally, there were no significant interactions of expectancy and noise level when analyzed in log-odds, supporting the predictions of ideal observer models of speech perception. PMID:29472883
Rejnö-Habte Selassie, Gunilla; Hedström, Anders; Viggedal, Gerd; Jennische, Margareta; Kyllerman, Mårten
2010-07-01
We reviewed the medical history, EEG recordings, and developmental milestones of 19 children with speech and language dysfunction and focal epileptiform activity. Speech, language, and neuropsychological assessments and EEG recordings were performed at follow-up, and prognostic indicators were analyzed. Three patterns of language development were observed: late start and slow development, late start and deterioration/regression, and normal start and later regression/deterioration. No differences in test results among these groups were seen, indicating a spectrum of related conditions including Landau-Kleffner syndrome and epileptic language disorder. More than half of the participants had speech and language dysfunction at follow-up. IQ levels, working memory, and processing speed were also affected. Dysfunction of auditory perception in noise was found in more than half of the participants, and dysfunction of auditory attention in all. Dysfunction of communication, oral motor ability, and stuttering were noted in a few. Family history of seizures and abundant epileptiform activity indicated a worse prognosis. Copyright 2010 Elsevier Inc. All rights reserved.
Carvalho Lima, Vania L C; Collange Grecco, Luanda A; Marques, Valéria C; Fregni, Felipe; Brandão de Ávila, Clara R
2016-04-01
The aim of this study was to describe the results of the first case combining integrative speech therapy with anodal transcranial direct current stimulation (tDCS) over Broca's area in a child with cerebral palsy. The ABFW phonology test was used to analyze speech based on the Percentage of Correct Consonants (PCC) and Percentage of Correct Consonants - Revised (PCC-R). After treatment, increases were found in both PCC (Imitation: 53.63%-78.10%; Nomination: 53.19%-70.21%) and PCC-R (Imitation: 64.54%-83.63%; Nomination: 61.70%-77.65%). Moreover, distortions and substitutions were reduced, and improvement was found in oral performance, especially tongue mobility (AMIOFE mobility: before = 4, after = 7). The child demonstrated a clinically important improvement in speech fluency, as shown by the imitation results for the number of correct consonants and phonemes acquired. Based on these promising findings, continuing research in this field should be conducted with controlled clinical trials. Copyright © 2015 Elsevier Ltd. All rights reserved.
Budde, Kristin S.; Barron, Daniel S.; Fox, Peter T.
2015-01-01
Developmental stuttering is a speech disorder most likely due to a heritable form of developmental dysmyelination impairing the function of the speech-motor system. Speech-induced brain-activation patterns in persons who stutter (PWS) are anomalous in various ways; the consistency of these aberrant patterns is a matter of ongoing debate. Here, we present a hierarchical series of coordinate-based meta-analyses addressing this issue. Two tiers of meta-analyses were performed on a 17-paper dataset (202 PWS; 167 fluent controls). Four large-scale (top-tier) meta-analyses were performed, two for each subject group (PWS and controls). These analyses robustly confirmed the regional effects previously postulated as “neural signatures of stuttering” (Brown 2005) and extended this designation to additional regions. Two smaller-scale (lower-tier) meta-analyses refined the interpretation of the large-scale analyses: 1) a between-group contrast targeting differences between PWS and controls (stuttering trait); and 2) a within-group contrast (PWS only) of stuttering with induced fluency (stuttering state). PMID:25463820
Cuppini, Cristiano; Ursino, Mauro; Magosso, Elisa; Ross, Lars A.; Foxe, John J.; Molholm, Sophie
2017-01-01
Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late childhood and have identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of the acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation and its delayed development in ASD. The model posits that strengthening of feedforward and cross-sensory connections, responsible for the alignment of auditory and visual speech sound representations in posterior superior temporal gyrus/sulcus, can explain behavioral data on the acquisition of MSI. This was simulated by a training phase during which the network was exposed to unisensory and multisensory stimuli, and projections were shaped by Hebbian rules of potentiation and depression. In its mature architecture, the network also reproduced the well-known multisensory McGurk speech effect. Deficits in audiovisual speech perception in ASD were well accounted for by fewer multisensory exposures, compatible with a lack of attention, but not by reduced synaptic connectivity or synaptic plasticity. PMID:29163099
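To make the training phase concrete, below is a minimal sketch of Hebbian potentiation and depression aligning a visual-to-auditory cross-sensory weight matrix under repeated audiovisual exposures. The layer sizes, learning rate, threshold, and Gaussian activations are illustrative assumptions, not the published model's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_aud, n_vis = 40, 40  # assumed sizes of the auditory and visual layers
W_cross = rng.uniform(0.0, 0.05, size=(n_aud, n_vis))  # weak initial cross-sensory weights

def hebbian_step(W, pre, post, lr=0.01, theta=0.2, w_max=1.0):
    """One Hebbian update: potentiate when presynaptic activity exceeds the
    threshold theta, depress sub-threshold co-activity, clip to bounds."""
    dW = lr * np.outer(post, pre - theta)
    return np.clip(W + dW, 0.0, w_max)

# Training phase: repeated exposure to spatially aligned audiovisual stimuli.
for _ in range(1000):
    k = rng.integers(n_aud)  # shared stimulus identity for both modalities
    vis = np.exp(-0.5 * ((np.arange(n_vis) - k) / 2.0) ** 2)  # visual activation
    aud = np.exp(-0.5 * ((np.arange(n_aud) - k) / 2.0) ** 2)  # aligned auditory activation
    W_cross = hebbian_step(W_cross, pre=vis, post=aud)
```

After enough exposures the cross-sensory weights concentrate along the diagonal, aligning the two representations; cutting the number of multisensory exposure trials leaves the weights weak and diffuse, which is the manipulation the abstract links to delayed MSI in ASD.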
Variability and Intelligibility of Clarified Speech to Different Listener Groups
NASA Astrophysics Data System (ADS)
Silber, Ronnie F.
Two studies examined the modifications that adult speakers make in speech to disadvantaged listeners. Previous research focusing on speech to deaf individuals and to young children has shown that adults clarify speech when addressing these two populations. Acoustic measurements suggest that the signal undergoes similar changes for both populations. Perceptual tests corroborate these results for the deaf population, but are nonsystematic in developmental studies. The differences in the findings for these populations and the nonsystematic results in the developmental literature may be due to methodological factors. The present experiments addressed these methodological questions. Studies of speech to hearing-impaired listeners have used read nonsense sentences, for which speakers received explicit clarification instructions and feedback, while in the child literature, excerpts of real-time conversations were used. Therefore, linguistic samples were not precisely matched. In this study, experiments used various linguistic materials: experiment 1 used a children's story; experiment 2, nonsense sentences. Four mothers read both types of material in four ways: (1) in "normal" adult speech, (2) in "babytalk," (3) under the clarification instructions used in the hearing-impaired studies (instructed clear speech), and (4) in (spontaneous) clear speech without instruction. No extra practice or feedback was given. Sentences were presented to 40 normal-hearing college students with and without simultaneous masking noise. Results were separately tabulated for content and function words and analyzed using standard statistical tests. The major finding in the study was individual variation in speaker intelligibility: "real world" speakers vary in their baseline intelligibility. The four speakers also showed unique patterns of intelligibility as a function of each independent variable. Results were as follows. Nonsense sentences were less intelligible than story sentences. Function words were equal to, or more intelligible than, content words. Babytalk functioned as a clear speech style in story sentences but not nonsense sentences. One of the two clear speech styles was clearer than normal speech in adult-directed clarification; however, which style was clearer depended on interactions among the variables. The individual patterns seemed to result from interactions among demand characteristics, baseline intelligibility, materials, and differences in articulatory flexibility.
A speech and psychological profile of treatment-seeking adolescents who stutter.
Iverach, Lisa; Lowe, Robyn; Jones, Mark; O'Brian, Susan; Menzies, Ross G; Packman, Ann; Onslow, Mark
2017-03-01
The purpose of this study was to evaluate the relationship between stuttering severity, psychological functioning, and overall impact of stuttering in a large sample of adolescents who stutter. Participants were 102 adolescents (11-17 years) seeking speech treatment for stuttering, including 86 boys and 16 girls, classified into younger (11-14 years, n=57) and older (15-17 years, n=45) adolescents. Linear regression models were used to evaluate the relationship between speech and psychological variables and the overall impact of stuttering. The impact of stuttering during adolescence is influenced by a complex interplay of speech and psychological variables. Anxiety and depression scores fell within normal limits. However, higher self-reported stuttering severity predicted higher anxiety and internalizing problems. Boys reported externalizing problems (aggression, rule-breaking) in the clinical range, and girls reported total problems in the borderline-clinical range. Overall, higher scores on measures of anxiety, stuttering severity, and speech dissatisfaction predicted a more negative overall impact of stuttering. To our knowledge, this is the largest cohort study of adolescents who stutter. Higher stuttering severity, speech dissatisfaction, and anxiety predicted a more negative overall impact of stuttering, indicating the importance of carefully managing the speech and psychological needs of adolescents who stutter. Further research is needed to understand the relationship between stuttering and externalizing problems for adolescent boys who stutter. Copyright © 2016. Published by Elsevier Inc.
Speech pattern improvement following gingivectomy of excess palatal tissue.
Holtzclaw, Dan; Toscano, Nicholas
2008-10-01
Speech disruption secondary to excessive gingival tissue has received scant attention in periodontal literature. Although a few articles have addressed the causes of this condition, documentation and scientific explanation of treatment outcomes are virtually non-existent. This case report describes speech pattern improvements secondary to periodontal surgery and provides a concise review of linguistic and phonetic literature pertinent to the case. A 21-year-old white female with a history of gingival abscesses secondary to excessive palatal tissue presented for treatment. Bilateral gingivectomies of palatal tissues were performed with inverse bevel incisions extending distally from teeth #5 and #12 to the maxillary tuberosities, and large wedges of epithelium/connective tissue were excised. Within the first month of the surgery, the patient noted "changes in the manner in which her tongue contacted the roof of her mouth" and "changes in her speech." Further anecdotal investigation revealed the patient's enunciation of sounds such as "s," "sh," and "k" was greatly improved following the gingivectomy procedure. Palatometric research clearly demonstrates that the tongue has intimate contact with the lateral aspects of the posterior palate during speech. Gingival excess in this and other palatal locations has the potential to alter linguopalatal contact patterns and disrupt normal speech patterns. Surgical correction of this condition via excisional procedures may improve linguopalatal contact patterns which, in turn, may lead to improved patient speech.
Neger, Thordis M.; Rietveld, Toni; Janse, Esther
2014-01-01
Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly. PMID:25225475
Eslami Jahromi, Maryam; Ahmadian, Leila
2018-07-01
Investigating the required infrastructure for the implementation of telemedicine and the satisfaction of target groups improves the acceptance of this technology and facilitates the delivery of healthcare services. The aim of this study was to assess the satisfaction of patients who stutter concerning the therapeutic method and the infrastructure used to receive tele-speech therapy services. This descriptive-analytical study was conducted on all patients who stutter aged between 14 and 39 years at Jahrom Social Welfare Bureau (n = 30). The patients underwent speech therapy sessions through video conferencing with Skype. Data were collected by a researcher-made questionnaire whose content validity was confirmed by three medical informatics specialists. Data were analyzed using SPSS version 19. The mean and standard deviation of patient satisfaction scores concerning the infrastructure and the tele-speech therapy method were 3.15 ± 0.52 and 3.49 ± 0.52, respectively. No significant relationship was found between the patients' satisfaction and their gender, education level, and age (p > 0.05). The results of this study showed that the number of speech therapy sessions did not affect the overall satisfaction of the patients (p > 0.05), but the number of therapeutic sessions had a direct relationship with their satisfaction with the infrastructure used for tele-speech therapy (p < 0.05). The present study showed that patients were satisfied with tele-speech therapy. According to most patients, the low speed of the Internet connection in the country was a major challenge to receiving tele-speech therapy. The results suggest that healthcare planners and policy makers invest in increasing bandwidth to improve the success rate of telemedicine programs. Copyright © 2018 Elsevier B.V. All rights reserved.
Toyoda, Aru; Maruhashi, Tamaki; Malaivijitnond, Suchinda; Koda, Hiroki
2017-10-01
Speech is unique to humans and characterized by facial actions of ∼5 Hz oscillations of lip, mouth, or jaw movements. Lip-smacking, a facial display of primates characterized by oscillatory actions involving the vertical opening and closing of the jaw and lips, exhibits stable 5-Hz oscillation patterns matching those of speech, suggesting that lip-smacking is a precursor of speech. We tested whether facial or vocal actions exhibiting the same oscillation rate are found across a wide range of facial and vocal displays in various social contexts, and whether they show diversity among species. We observed facial and vocal actions of wild stump-tailed macaques (Macaca arctoides) and selected video clips including facial displays (teeth chattering; TC), panting calls, and feeding. Ten open-to-open mouth durations during TC and feeding and five amplitude peak-to-peak durations in panting were analyzed. The facial display (TC) and vocalization (panting) oscillated at 5.74 ± 1.19 and 6.71 ± 2.91 Hz, respectively, similar to the reported lip-smacking of long-tailed macaques and the speech of humans. These results indicate a common mechanism for the central pattern generator underlying orofacial movements, which would later evolve into speech. Similar oscillations in panting, which evolved from different muscular control than the orofacial actions, suggest a sensory foundation for the perceptual saliency particular to 5-Hz rhythms in macaques. This supports the pre-adaptation hypothesis of speech evolution, which states that a central pattern generator for 5-Hz facial oscillation and a perceptual background tuned to 5-Hz actions existed in the common ancestors of macaques and humans before the emergence of speech. © 2017 Wiley Periodicals, Inc.
Pries, Lotta-Katrin; Guloksuz, Sinan; Menne-Lothmann, Claudia; Decoster, Jeroen; van Winkel, Ruud; Collip, Dina; Delespaul, Philippe; De Hert, Marc; Derom, Catherine; Thiery, Evert; Jacobs, Nele; Wichers, Marieke; Simons, Claudia J P; Rutten, Bart P F; van Os, Jim
2017-01-01
An association between white noise speech illusion and psychotic symptoms has been reported in patients and their relatives. This supports the theory that bottom-up and top-down perceptual processes are involved in the mechanisms underlying perceptual abnormalities. However, findings in nonclinical populations have been conflicting. The aim of this study was to examine the association between white noise speech illusion and subclinical expression of psychotic symptoms in a nonclinical sample. Findings were compared to previous results to investigate potential methodology-dependent differences. In a general population adolescent and young adult twin sample (n = 704), the association between white noise speech illusion and subclinical psychotic experiences, using the Structured Interview for Schizotypy-Revised (SIS-R) and the Community Assessment of Psychic Experiences (CAPE), was analyzed using multilevel logistic regression analyses. Perception of any white noise speech illusion was not associated with either positive or negative schizotypy in the general population twin sample, using the method by Galdos et al. (2011) (positive: adjusted OR: 0.82, 95% CI: 0.6-1.12, p = 0.217; negative: adjusted OR: 0.75, 95% CI: 0.56-1.02, p = 0.065) and the method by Catalan et al. (2014) (positive: adjusted OR: 1.11, 95% CI: 0.79-1.57, p = 0.557). No association was found between CAPE scores and speech illusion (adjusted OR: 1.25, 95% CI: 0.88-1.79, p = 0.220). For the Catalan et al. (2014) but not the Galdos et al. (2011) method, a negative association was apparent between positive schizotypy and speech illusion with positive or negative affective valence (adjusted OR: 0.44, 95% CI: 0.24-0.81, p = 0.008). Contrary to findings in clinical populations, white noise speech illusion may not be associated with psychosis proneness in nonclinical populations.
Development and evaluation of the LittlEARS® Early Speech Production Questionnaire - LEESPQ.
Wachtlin, Bianka; Brachmaier, Joanna; Amann, Edda; Hoffmann, Vanessa; Keilmann, Annerose
2017-03-01
Universal Newborn Hearing Screening programs, now instituted throughout the German-speaking countries, allow hearing loss to be detected and treated much earlier than ever before. With this earlier detection arises the need for tools suited to assessing the very early speech and language production development of today's younger (0-18 month old) children. We created the LittlEARS® Early Speech Production Questionnaire with the aim of meeting this need. 600 questionnaires of the pilot version of the LittlEARS® Early Speech Production Questionnaire were distributed to parents via pediatricians' practices, day care centers, and personal contact. The completed questionnaires were statistically analyzed to determine their reliability, predictive accuracy, and internal consistency, and to what extent gender or unilingualism influenced a child's score. Further, a norm curve was generated to plot children's expected increase in speech production ability with age. Analysis of the data from the 352/600 returned questionnaires revealed that scores on the LittlEARS® Early Speech Production Questionnaire correlate positively with a child's age, with older children scoring higher than younger children. Further, the questionnaire has high measuring reliability, high predictability, high unidimensionality of scale, and is not significantly biased by gender or uni-/multilingualism. A norm curve for expected development with age was created. The LittlEARS® Early Speech Production Questionnaire (LEESPQ) is a valid tool for assessing the most important milestones in the very early development of speech and language production of German-language children with normal hearing aged 0-18 months. The questionnaire is a potentially useful tool for long-term infant screening and follow-up testing, both for children with normal hearing and for those who use or would benefit from hearing devices. Copyright © 2017 Elsevier B.V. All rights reserved.
Speech evaluation in children with temporomandibular disorders.
Pizolato, Raquel Aparecida; Fernandes, Frederico Silva de Freitas; Gavião, Maria Beatriz Duarte
2011-10-01
The aims of this study were to evaluate the influence of temporomandibular disorders (TMD) on speech in children, and to verify the influence of occlusal characteristics. Speech and dental occlusal characteristics were assessed in 152 Brazilian children (78 boys and 74 girls), aged 8 to 12 years (mean age 10.05 ± 1.39 years), with or without TMD signs and symptoms. The clinical signs were evaluated using the Research Diagnostic Criteria for TMD (RDC/TMD) (axis I) and the symptoms were evaluated using a questionnaire. The following groups were formed: Group TMD (n=40), TMD signs and symptoms (Group S and S, n=68), TMD signs or symptoms (Group S or S, n=33), and without signs and symptoms (Group N, n=11). Articulatory speech disorders were diagnosed during spontaneous speech and repetition of words using the "Phonological Assessment of Child Speech" for the Portuguese language. A list of 40 phonologically balanced words, read by the speech pathologist and repeated by the children, was also applied. Data were analyzed by descriptive statistics and Fisher's exact or Chi-square tests (α=0.05). A slight prevalence of articulatory disturbances, such as substitutions, omissions, and distortions of the sibilants /s/ and /z/, and no deviations in jaw lateral movements were observed. Reduction of vertical amplitude was found in 10 children, the prevalence being greater in children with TMD signs and symptoms than in normal children. Tongue protrusion in the phonemes /t/, /d/, /n/, /l/ and frontal lip position in the phonemes /s/ and /z/ were the most prevalent visible alterations. There was a high percentage of dental occlusal alterations. There was no association between TMD and speech disorders. Occlusal alterations may be influencing factors, allowing distortions and frontal lisp in the phonemes /s/ and /z/ and inadequate tongue position in the phonemes /t/, /d/, /n/, /l/.
Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model
Rallapalli, Varsha H.
2016-01-01
Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the envelope-domain signal-to-noise ratio (SNRenv) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRenv has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRenv. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating the feasibility of neural SNRenv computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence of individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of neural SNRenv in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
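The acoustic counterpart of the key quantity can be sketched compactly. Below is a minimal Python illustration of an envelope-domain SNR computed in a few modulation bands from a speech-plus-noise signal and the noise alone. The modulation bands, filter orders, envelope downsampling rate, and flooring constant are assumptions, and the full sEPSM additionally analyzes each audio-frequency (e.g., gammatone) channel separately.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def envelope_power(x, fs, band, env_fs=1000):
    """AC power of the Hilbert envelope of x within one modulation band (Hz)."""
    env = np.abs(hilbert(x))
    step = max(1, int(fs // env_fs))        # crude envelope downsampling keeps
    env = env[::step] - env[::step].mean()  # the low-frequency filters stable
    sos = butter(2, band, btype="bandpass", fs=fs / step, output="sos")
    return np.mean(sosfiltfilt(sos, env) ** 2)

def snr_env(speech_plus_noise, noise, fs,
            mod_bands=((1, 2), (2, 4), (4, 8), (8, 16), (16, 32))):
    """Per-modulation-band envelope SNR: (P_S+N - P_N) / P_N, floored."""
    out = []
    for band in mod_bands:
        p_sn = envelope_power(speech_plus_noise, fs, band)
        p_n = envelope_power(noise, fs, band)
        out.append(max(p_sn - p_n, 1e-3 * p_n) / p_n)
    return np.array(out)
```

The neural version described in the abstract replaces the Hilbert envelope with modulation-domain analyses of shuffled correlograms computed from auditory-nerve spike trains.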
Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis
NASA Astrophysics Data System (ADS)
Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert
2005-12-01
A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
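To give a flavor of the simpler end of the classifier comparison, here is a minimal sketch of a minimum-distance (nearest-class-mean) classifier over two toy features, an amplitude-modulation measure and a spectral centroid. The feature definitions are simplified stand-ins for the auditory-scene-analysis features the system actually uses.

```python
import numpy as np

def features(x, fs):
    """Tiny illustrative feature vector for one audio clip."""
    env = np.abs(x)                                # crude amplitude envelope
    mod_depth = env.std() / (env.mean() + 1e-12)   # amplitude-modulation feature
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    centroid = (freqs * spec).sum() / (spec.sum() + 1e-12)  # spectral profile
    return np.array([mod_depth, centroid / (fs / 2)])

class MinimumDistanceClassifier:
    """Assigns each feature vector to the class with the nearest mean."""
    def fit(self, X, y):
        self.means_ = {c: X[np.asarray(y) == c].mean(axis=0) for c in set(y)}
        return self
    def predict(self, X):
        return [min(self.means_, key=lambda c: np.linalg.norm(x - self.means_[c]))
                for x in X]

# Hypothetical usage with labeled training clips:
# X = np.array([features(clip, fs) for clip in clips])
# clf = MinimumDistanceClassifier().fit(X, labels)  # e.g., "clean speech", "noise"
```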
Intentional Voice Command Detection for Trigger-Free Speech Interface
NASA Astrophysics Data System (ADS)
Obuchi, Yasunari; Sumiyoshi, Takashi
In this paper we introduce a new framework of audio processing, which is essential to achieve a trigger-free speech interface for home appliances. If the speech interface works continually in real environments, it must extract occasional voice commands and reject everything else. It is extremely important to reduce the number of false alarms because the number of irrelevant inputs is much larger than the number of voice commands even for heavy users of appliances. The framework, called Intentional Voice Command Detection, is based on voice activity detection, but enhanced by various speech/audio processing techniques such as emotion recognition. The effectiveness of the proposed framework is evaluated using a newly-collected large-scale corpus. The advantages of combining various features were tested and confirmed, and the simple LDA-based classifier demonstrated acceptable performance. The effectiveness of various methods of user adaptation is also discussed.
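The paper describes the final classifier only as LDA-based; the following is a minimal, hedged sketch of such a detection stage with synthetic stand-in features. The feature dimensionality, class balance, and decision threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Synthetic frame-level features: 1 = intentional voice command, 0 = other audio.
X_cmd = rng.normal(loc=1.0, size=(200, 8))
X_bg = rng.normal(loc=0.0, size=(800, 8))   # irrelevant inputs dominate
X = np.vstack([X_cmd, X_bg])
y = np.array([1] * 200 + [0] * 800)

clf = LinearDiscriminantAnalysis().fit(X, y)
scores = clf.decision_function(X)
# Raising the threshold above 0 trades missed commands for fewer false alarms,
# the key operating-point choice when irrelevant inputs vastly outnumber commands.
threshold = 1.5
accepted = scores > threshold
```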
NASA Astrophysics Data System (ADS)
Maskeliunas, Rytis; Rudzionis, Vytautas
2011-06-01
In recent years various commercial speech recognizers have become available. These recognizers make it possible to develop applications incorporating various speech recognition techniques easily and quickly. All of these commercial recognizers are typically targeted at widely spoken languages with large market potential; however, it may be possible to adapt available commercial recognizers for use in environments where less widely spoken languages are used. Since most commercial recognition engines are closed systems, the single avenue for adaptation is to devise ways of selecting proper phonetic transcriptions that map between the two languages. This paper deals with methods to find phonetic transcriptions for Lithuanian voice commands so that they can be recognized using English speech engines. The experimental evaluation showed that it is possible to find phonetic transcriptions that enable the recognition of Lithuanian voice commands with an accuracy of over 90%.
Functional outcome after total and subtotal glossectomy with free flap reconstruction.
Yanai, Chie; Kikutani, Takesi; Adachi, Masatosi; Thoren, Hanna; Suzuki, Munekazu; Iizuka, Tateyuki
2008-07-01
The aim of this study was to evaluate postoperative oral functions of patients who had undergone total or subtotal (75%) glossectomy with preservation of the larynx for oral squamous cell carcinomas. Speech intelligibility and swallowing capacity of 17 patients who had been treated between 1992 and 2002 were scored and classified using standard protocols 6 to 36 months postoperatively. The outcomes were finally rated as good, acceptable, or poor. The 4-year disease-specific survival rate was 64%. Speech intelligibility and swallowing capacity were satisfactory (acceptable or good) in 82.3%. Only 3 patients were still dependent on tube feeding. Good speech perceptibility did not always go together with normal diet tolerance, however. Our satisfactory results are attributable to the use of large, voluminous soft tissue flaps for reconstruction, and to the instigation of postoperative swallowing and speech therapy on a routine basis and at an early juncture.
Conceptual clusters in figurative language production.
Corts, Daniel P; Meyers, Kristina
2002-07-01
Although most prior research on figurative language examines comprehension, several recent studies on the production of such language have proved to be informative. One of the most noticeable traits of figurative language production is that it is produced at a somewhat random rate with occasional bursts of highly figurative speech (e.g., Corts & Pollio, 1999). The present article seeks to extend these findings by observing production during speech that involves a very high base rate of figurative language, making statistically defined bursts difficult to detect. In an analysis of three Baptist sermons, burst-like clusters of figurative language were identified. Further study indicated that these clusters largely involve a central root metaphor that represents the topic under consideration. An interaction of the coherence, along with a conceptual understanding of a topic and the relative importance of the topic to the purpose of the speech, is offered as the most likely explanation for the clustering of figurative language in natural speech.
Speech recognition in individuals with sensorineural hearing loss.
de Andrade, Adriana Neves; Iorio, Maria Cecilia Martinelli; Gil, Daniela
2016-01-01
Hearing loss can negatively influence the communication performance of individuals, who should be evaluated with suitable material and in listening situations close to those found in everyday life. The objective was to analyze and compare the performance of patients with mild-to-moderate sensorineural hearing loss in speech recognition tests carried out in silence and in noise, according to the variables ear (right and left) and type of stimulus presentation. The study included 19 right-handed individuals with mild-to-moderate symmetrical bilateral sensorineural hearing loss, submitted to the speech recognition test with words in different modalities and a speech test with white noise and pictures. There was no significant difference between the right and left ears in any of the tests. The mean number of correct responses in the speech recognition test with pictures, live voice, and recorded monosyllables was 97.1%, 85.9%, and 76.1%, respectively, whereas after the introduction of noise, performance decreased to 72.6% accuracy. The best performances on the Speech Recognition Percentage Index were obtained using monosyllabic stimuli represented by pictures presented in silence, with no significant differences between the right and left ears. After the introduction of competitive noise, individuals' performance decreased. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
[Verbal and gestural communication in interpersonal interaction with Alzheimer's disease patients].
Schiaratura, Loris Tamara; Di Pastena, Angela; Askevis-Leherpeux, Françoise; Clément, Sylvain
2015-03-01
Communication can be defined as a verbal and non-verbal exchange of thoughts and emotions. While the verbal communication deficit in Alzheimer's disease is well documented, very little is known about gestural communication, especially in interpersonal situations. This study examines the production of gestures and its relations with verbal aspects of communication. Three patients suffering from moderately severe Alzheimer's disease were compared to three healthy adults. Each was given a series of pictures and asked to explain which one she preferred and why. The interpersonal interaction was video recorded. Analyses concerned verbal production (quantity and quality) and gestures. Gestures were either non-representational (i.e., gestures of small amplitude punctuating speech or accentuating some parts of the utterance) or representational (i.e., referring to the object of the speech). Representational gestures were coded as iconic (depicting concrete aspects), metaphoric (depicting abstract meaning), or deictic (pointing toward an object). In comparison with healthy participants, patients showed a decrease in the quantity and quality of speech. Nevertheless, their production of gestures was always present. This pattern is in line with the conception that gestures and speech depend on different communicational systems and seems inconsistent with the assumption of a parallel dissolution of gesture and speech. Moreover, analyzing the articulation between the verbal and gestural dimensions suggests that representational gestures may compensate for speech deficits. This underlines the important role of gestures in maintaining interpersonal communication.
Visual activity predicts auditory recovery from deafness after adult cochlear implantation.
Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal
2013-12-01
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analyzing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area that positively correlated with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the visual modality's functional level is related to the proficiency level of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Potts, Lisa G; Skinner, Margaret W; Litovsky, Ruth A; Strube, Michael J; Kuk, Francis
2009-06-01
The use of bilateral amplification is now common clinical practice for hearing aid users but not for cochlear implant recipients. In the past, most cochlear implant recipients were implanted in one ear and wore only a monaural cochlear implant processor. There has been recent interest in the benefits arising from bilateral stimulation that may be present for cochlear implant recipients. One option for bilateral stimulation is the use of a cochlear implant in one ear and a hearing aid in the opposite, nonimplanted ear (bimodal hearing). This study evaluated the effect of wearing a cochlear implant in one ear and a digital hearing aid in the opposite ear on speech recognition and localization. A repeated-measures correlational study was completed. Nineteen adult Cochlear Nucleus 24 implant recipients participated in the study. The participants were fit with a Widex Senso Vita 38 hearing aid to achieve maximum audibility and comfort within their dynamic range. Soundfield thresholds, loudness growth, speech recognition, localization, and subjective questionnaires were obtained six to eight weeks after the hearing aid fitting. Testing was completed in three conditions: hearing aid only, cochlear implant only, and cochlear implant and hearing aid (bimodal). All tests were repeated four weeks after the first test session. Repeated-measures analysis of variance was used to analyze the data. Significant effects were further examined using pairwise comparison of means or, in the case of continuous moderators, regression analyses. The speech-recognition and localization tasks were unique in that a speech stimulus presented from a variety of roaming azimuths (140-degree loudspeaker array) was used. Performance in the bimodal condition was significantly better for speech recognition and localization compared to the cochlear implant-only and hearing aid-only conditions. Performance also differed between these conditions when the location (i.e., the side of the loudspeaker array that presented the word) was analyzed. In the bimodal condition, performance on the speech-recognition and localization tasks was equal regardless of which side of the loudspeaker array presented the word, while performance was significantly poorer in the monaural conditions (hearing aid only and cochlear implant only) when the words were presented on the side with no stimulation. Binaural loudness summation of 1-3 dB was seen in soundfield thresholds and loudness growth in the bimodal condition. Measures of the audibility of sound with the hearing aid, including unaided thresholds, soundfield thresholds, and the Speech Intelligibility Index, were significant moderators of speech recognition and localization. Based on the questionnaire responses, participants showed a strong preference for bimodal stimulation. These findings suggest that a well-fit digital hearing aid worn in conjunction with a cochlear implant is beneficial to speech recognition and localization. The dynamic test procedures used in this study illustrate the importance of bilateral hearing for locating, identifying, and switching attention between multiple speakers. It is recommended that unilateral cochlear implant recipients with measurable unaided hearing thresholds be fit with a hearing aid.
Turnover and intent to leave among speech pathologists.
McLaughlin, Emma G H; Adamson, Barbara J; Lincoln, Michelle A; Pallant, Julie F; Cooper, Cary L
2010-05-01
Sound, large-scale, and systematic research into why health professionals want to leave their jobs is needed. This study used psychometrically sound tools and logistic regression analyses to determine why Australian speech pathologists were intending to leave their jobs or the profession. Based on data from 620 questionnaires, several variables were found to be significantly related to intent to leave. The speech pathologists intending to look for a new job were more likely to be under 34 years of age and to perceive low levels of job security and benefits of the profession. Those intending to leave the profession were more likely to spend greater than half their time at work on administrative duties, have a higher negative affect score, not have children under 18 years of age, and perceive that speech pathology did not offer benefits that met their professional needs. The findings of this study provide the first evidence regarding the reasons for turnover and attrition in the Australian speech pathology workforce and can inform the development of strategies to retain a skilled and experienced allied health workforce.
Luo, Jie; Wu, Jieli; Lv, Kexing; Li, Kaichun; Wu, Jianhui; Wen, Yihui; Li, Xiaoling; Tang, Haocheng; Jiang, Aiyun; Wang, Zhangfeng; Wen, Weiping; Lei, Wenbin
2016-01-01
This study aims to analyze the postsurgical health-related quality of life (HRQOL) and quality of voice (QOV) of patients with laryngeal carcinoma, with an expectation of improving the treatment and HRQOL of these patients. Based on the collection of information from patients with laryngeal carcinoma regarding clinical characteristics (age, TNM stage, with or without laryngeal preservation and/or neck dissection, with or without postoperative irradiation and/or chemotherapy, etc.), QOV using the Voice Handicap Index (VHI) scale, and HRQOL using the EORTC QLQ-C30 and EORTC QLQ-H&N35 scales, the differences in postsurgical HRQOL related to clinical characteristics were analyzed using univariate nonparametric tests, the main factors impacting postsurgical HRQOL were analyzed using regression analyses (generalized linear models), and the correlation between QOV and HRQOL was analyzed using Spearman correlation analysis. A total of 92 patients were enrolled in this study, in whom the use of the EORTC QLQ-C30, EORTC QLQ-H&N35, and VHI scales revealed that: the differences in HRQOL were significant among patients with different ages, TNM stages, and treatment modalities; the main factors impacting postsurgical HRQOL were pain, speech disorder, and dry mouth; and QOV was significantly correlated with HRQOL. For the patients with laryngeal carcinoma included in our study, quality of life after open surgery was impacted by many factors, predominantly pain, speech disorder, and dry mouth. It is suggested that doctors in China make greater efforts in managing patients' postoperative pain and xerostomia and in speech rehabilitation, with the hope of improving patients' quality of life.
Immediate effects of the semi-occluded vocal tract exercise with LaxVox® tube in singers.
Fadel, Congeta Bruniere Xavier; Dassie-Leite, Ana Paula; Santos, Rosane Sampaio; Santos, Celso Gonçalves Dos; Dias, Cláudio Antônio Sorondo; Sartori, Denise Jussara
The purpose of this study was to analyze the immediate effects of the semi-occluded vocal tract exercise (SOVTE) using the LaxVox® tube in singers. Participants were 23 singers, classical singing students, aged 18 to 47 years (mean age = 27.2 years). First, data were collected through the application of a demographic questionnaire and the recording of a sustained emission (vowel /ε/), counting from 1 to 10, and a music section from the participants' current repertoire. After that, the participants were instructed in and performed the SOVTE using the LaxVox® tube for three minutes. Finally, the same vocal samples were collected immediately after SOVTE performance, and the singers responded to a questionnaire on their perception of vocal changes after the exercise. The vocal samples were analyzed by referees (speech-language pathologists and singing teachers) and by means of acoustic analysis. Most of the singers reported improved voice post-exercise in both tasks, speech and singing. Regarding the perceptual assessment (sustained vowel, speech, and singing), the referees found no difference between pre- and post-exercise emissions. The acoustic analysis of the sustained vowel showed increased fundamental frequency (F0) and a reduced Glottal to Noise Excitation (GNE) ratio post-exercise. The semi-occluded vocal tract exercise with the LaxVox® tube promotes immediate positive effects on the self-assessment and acoustic analysis of voice in professional singers without vocal complaints. No immediate significant changes were observed with respect to the auditory-perceptual evaluation of speech and singing.
Helium Speech: An Application of Standing Waves
NASA Astrophysics Data System (ADS)
Wentworth, Christopher D.
2011-04-01
Taking a breath of helium gas and then speaking or singing to the class is a favorite demonstration for an introductory physics course, as it usually elicits appreciative laughter, which serves to energize the class session. Students will usually report that the helium speech "raises the frequency" of the voice. A more accurate description of the phenomenon requires that we distinguish between the frequencies of sound produced by the larynx and the filtering of those frequencies by the vocal tract. We will describe here an experiment done by introductory physics students that uses helium speech as a context for learning about the human vocal system and as an application of the standing sound-wave concept. Modern acoustic analysis software easily obtained by instructors for student use allows data to be obtained and analyzed quickly.
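To make the standing-wave argument concrete, here is a minimal sketch of the closed-open tube estimate that underlies the demonstration. The tract length and sound speeds are textbook round numbers rather than measurements from the article, and a real breath is a helium-air mixture, so the actual shift is smaller than for pure helium.

```python
L = 0.17        # typical adult vocal tract length in meters (assumed)
v_air = 343.0   # speed of sound in air at ~20 °C, m/s
v_he = 1007.0   # speed of sound in pure helium, m/s

def resonances(v, length, n_modes=3):
    """Standing waves of a tube closed at one end: f_n = (2n - 1) * v / (4 * length)."""
    return [(2 * n - 1) * v / (4 * length) for n in range(1, n_modes + 1)]

print("tract resonances in air (Hz):   ", [round(f) for f in resonances(v_air, L)])
print("tract resonances in helium (Hz):", [round(f) for f in resonances(v_he, L)])
# The filter resonances scale by v_he / v_air (about 2.9), while the larynx's
# vibration rate is nearly unchanged: the vocal-tract filter shifts, not the source.
```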
Lorino, Philippe
2014-12-01
The analysis of conversational turn-taking and its implications for time (the speaker cannot completely anticipate the future effects of her/his speech) and sociality (the speech is co-produced by the various speakers rather than by the speaking individual alone) can provide a useful basis for analyzing complex organizing processes and collective action: the actor cannot completely anticipate the future effects of her/his acts, and the act is co-produced by multiple actors. This translation from verbal interaction to broader classes of interaction stresses the performativity of speech, the importance of the situation, the role of semiotic mediations in making temporally and spatially distant "ghosts" present in the dialog, and the dissymmetrical relationship between successive conversational turns due to temporal irreversibility.
Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Xiaojia; Mao Qirong; Zhan Yongzhao
There are many emotion features. If all these features are employed to recognize emotions, redundant features may exist; furthermore, the recognition result may be unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on a contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected using the contribution analysis algorithm of the NN from the 95 extracted features. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.
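The abstract does not spell out the contribution formula. One widely used contribution-analysis scheme for a one-hidden-layer network is Garson's algorithm, sketched below as an assumed stand-in, with random weights in place of a trained network.

```python
import numpy as np

def garson_contributions(W_in, W_out):
    """Garson-style relative contribution of each input feature.

    W_in:  (n_hidden, n_inputs) input-to-hidden weights
    W_out: (n_hidden,) hidden-to-output weights (single output unit)
    """
    abs_in = np.abs(W_in)
    share = abs_in / abs_in.sum(axis=1, keepdims=True)  # input share per hidden unit
    weighted = share * np.abs(W_out)[:, None]           # scaled by |hidden-to-output|
    contrib = weighted.sum(axis=0)
    return contrib / contrib.sum()                      # normalized to sum to 1

# Rank the 95 extracted features and keep the 24 with the largest contributions.
rng = np.random.default_rng(0)
W_in, W_out = rng.normal(size=(32, 95)), rng.normal(size=32)  # stand-in weights
selected = np.argsort(garson_contributions(W_in, W_out))[::-1][:24]
```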
LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey
2015-05-28
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
Strom, Mark A; Silverberg, Jonathan I
2016-09-01
Children with asthma, hay fever, and food allergy may have several factors that increase their risk of speech disorder, including allergic inflammation, ADD/ADHD, and sleep disturbance. However, few studies have examined a relationship between asthma, allergic disease, and speech disorder. We sought to determine whether asthma, hay fever, and food allergy are associated with speech disorder in children and whether disease severity, sleep disturbance, or ADD/ADHD modified such associations. We analyzed cross-sectional data on 337,285 children aged 2-17 years from 19 US population-based studies, including the 1997-2013 National Health Interview Survey and the 2003/4 and 2007/8 National Survey of Children's Health. In multivariate models, controlling for age, demographic factors, healthcare utilization, and history of eczema, lifetime history of asthma (odds ratio [95% confidence interval]: 1.18 [1.04-1.34], p = 0.01), and one-year history of hay fever (1.44 [1.28-1.62], p < 0.0001) and food allergy (1.35 [1.13-1.62], p = 0.001) were associated with increased odds of speech disorder. Children with current (1.37 [1.15-1.59] p = 0.0003) but not past (p = 0.06) asthma had increased risk of speech disorder. In one study that assessed caregiver-reported asthma severity, mild (1.58 [1.20-2.08], p = 0.001) and moderate (2.99 [1.54-3.41], p < 0.0001) asthma were associated with increased odds of speech disorder; however, severe asthma was associated with the highest odds of speech disorder (5.70 [2.36-13.78], p = 0.0001). Childhood asthma, hay fever, and food allergy are associated with increased risk of speech disorder. Future prospective studies are needed to characterize the associations. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
The discrepancy in the perception of the public-political speech in Croatia.
Tanta, Ivan; Lesinger, Gordana
2014-03-01
This paper centers on the study of political speech in the Republic of Croatia and its impact on voters, specifically on which keywords appear in the political speeches and public appearances of Croatian politicians because their electorate wants to hear them. Accordingly, the research topic is framed as a question: is there a discrepancy in the perception of public political speech in Croatia, and which keywords are specific to the country's two main regions and resonate with their inhabitants? Marcus Tullius Cicero, the most important Roman orator, used a specific associative mnemonic known as the "room technique" (the method of loci): he would attach the keywords and concepts he needed for a given topic, in the desired order and in a highly creative and personal way, to the rooms of a house or palace he knew well; then, while delivering the speech, he would mentally walk through those rooms so that the keywords and concepts came to mind in the intended order. Research of this kind on political speech is relatively recent in Croatia, and this form of political communication remains insufficiently explored, particularly the impact and use of keywords specific to the Republic of Croatia in everyday public and political communication. The paper analyzes the political campaign speeches and promises of several winning candidates, now Croatian MEPs, with respect to keywords related to economics, culture, science, education, and health. The analysis is based on a comparison of survey results on the representation of keywords in politicians' speeches with a qualitative analysis of those speeches during the election campaign.
Speech perception in noise with a harmonic complex excited vocoder.
Churchill, Tyler H; Kan, Alan; Goupell, Matthew J; Ihlefeld, Antje; Litovsky, Ruth Y
2014-04-01
A cochlear implant (CI) presents band-pass-filtered acoustic envelope information by modulating current pulse train levels. Similarly, a vocoder presents envelope information by modulating an acoustic carrier. By studying how normal hearing (NH) listeners are able to understand degraded speech signals with a vocoder, the parameters that best simulate electric hearing and factors that might contribute to the NH-CI performance difference may be better understood. A vocoder with harmonic complex carriers (fundamental frequency, f0 = 100 Hz) was used to study the effect of carrier phase dispersion on speech envelopes and intelligibility. The starting phases of the harmonic components were randomly dispersed to varying degrees prior to carrier filtering and modulation. NH listeners were tested on recognition of a closed set of vocoded words in background noise. Two sets of synthesis filters simulated different amounts of current spread in CIs. Results showed that the speech vocoded with carriers whose starting phases were maximally dispersed was the most intelligible. Superior speech understanding may have been a result of the flattening of the dispersed-phase carrier's intrinsic temporal envelopes produced by the large number of interacting components in the high-frequency channels. Cross-correlogram analyses of auditory nerve model simulations confirmed that randomly dispersing the carrier's component starting phases resulted in better neural envelope representation. However, neural metrics extracted from these analyses were not found to accurately predict speech recognition scores for all vocoded speech conditions. It is possible that central speech understanding mechanisms are insensitive to the envelope-fine structure dichotomy exploited by vocoders.
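For readers who want to experiment with the idea, the sketch below implements one channel of a harmonic-complex-excited vocoder with random starting-phase dispersion. The band edges, filter order, f0, and the noise stand-in for a speech signal are illustrative assumptions, not the authors' parameters.

```python
# One vocoder channel: band envelope of "speech" modulates a band-filtered
# harmonic complex carrier whose component starting phases are dispersed.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
f0 = 100.0                                 # carrier fundamental (Hz)

rng = np.random.default_rng(0)
speech = rng.standard_normal(t.size)       # stand-in for a real speech signal

def band(sig, lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, sig)

lo, hi = 1000.0, 2000.0                    # one analysis/synthesis band
envelope = np.abs(hilbert(band(speech, lo, hi)))   # band envelope

# Harmonic complex carrier: harmonics of f0 with starting phases drawn
# uniformly over the full cycle ("maximally dispersed"; phase_spread = 0
# would give the highly peaked cosine-phase carrier instead).
phase_spread = 2 * np.pi
harmonics = np.arange(f0, fs / 2, f0)
phases = rng.uniform(0, phase_spread, harmonics.size)
carrier = np.sum([np.cos(2 * np.pi * h * t + p)
                  for h, p in zip(harmonics, phases)], axis=0)

# Modulate, then refilter with a synthesis filter for this band.
channel_out = band(envelope * band(carrier, lo, hi), lo, hi)
```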
Toward a dual-learning systems model of speech category learning
Chandrasekaran, Bharath; Koslov, Seth R.; Maddox, W. T.
2014-01-01
More than two decades of work in vision posits the existence of dual-learning systems of category learning. The reflective system uses working memory to develop and test rules for classifying in an explicit fashion, while the reflexive system operates by implicitly associating perception with actions that lead to reinforcement. Dual-learning systems models hypothesize that in learning natural categories, learners initially use the reflective system and, with practice, transfer control to the reflexive system. The role of reflective and reflexive systems in auditory category learning and more specifically in speech category learning has not been systematically examined. In this article, we describe a neurobiologically constrained dual-learning systems theoretical framework that is currently being developed in speech category learning and review recent applications of this framework. Using behavioral and computational modeling approaches, we provide evidence that speech category learning is predominantly mediated by the reflexive learning system. In one application, we explore the effects of normal aging on non-speech and speech category learning. Prominently, we find a large age-related deficit in speech learning. The computational modeling suggests that older adults are less likely to transition from simple, reflective, unidimensional rules to more complex, reflexive, multi-dimensional rules. In a second application, we summarize a recent study examining auditory category learning in individuals with elevated depressive symptoms. We find a deficit in reflective-optimal and an enhancement in reflexive-optimal auditory category learning. Interestingly, individuals with elevated depressive symptoms also show an advantage in learning speech categories. We end with a brief summary and description of a number of future directions. PMID:25132827
Auditory Evoked Potentials with Different Speech Stimuli: A Comparison and Standardization of Values
Didoné, Dayane Domeneghini; Oppitz, Sheila Jacques; Folgearini, Jordana; Biaggio, Eliara Pinto Vieira; Garcia, Michele Vargas
2016-01-01
Introduction Long-latency auditory evoked potentials (LLAEP) elicited by speech sounds have been the subject of research, as these stimuli are well suited to assessing an individual's detection and discrimination of speech. Objective The objective of this study is to compare and describe the latency and amplitude values of cortical potentials for speech stimuli in adults with normal hearing. Methods The sample comprised 30 normal-hearing individuals aged 18 to 32 years without otological disease or auditory processing disorder. All participants underwent LLAEP testing using pairs of speech stimuli (/ba/ x /ga/, /ba/ x /da/, and /ba/ x /di/). The authors recorded the LLAEP using binaural stimulation at an intensity of 75 dB SPL. In total, 300 stimuli were used (~60 rare and 240 frequent) to obtain the LLAEP. Individuals were instructed to count the rare stimuli. The authors analyzed the latencies of P1, N1, P2, N2, and P300, as well as the amplitude of P300. Results The mean age of the group was approximately 23 years. The mean cortical potentials varied according to the different speech stimuli. N2 latency was greatest for /ba/ x /di/ and P300 latency was greatest for /ba/ x /ga/. The overall mean amplitude ranged from 5.35 to 7.35 μV across the different speech stimuli. Conclusion It was possible to obtain latency and amplitude values for different speech stimuli. Furthermore, the N2 component showed the longest latency with the /ba/ x /di/ stimulus and P300 with /ba/ x /ga/. PMID:27096012
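As a minimal illustration of the stimulus bookkeeping in such an oddball protocol (~300 trials with roughly 20% rare stimuli, as described above), one might generate a presentation sequence like the sketch below; the no-adjacent-rare constraint is a common oddball convention assumed here, not a detail reported in the abstract.

```python
# Generate a simple oddball stimulus sequence: ~20% rare, 80% frequent.
import numpy as np

rng = np.random.default_rng(1)
n_trials, p_rare = 300, 0.2
seq = np.where(rng.random(n_trials) < p_rare, "rare", "frequent")

# Enforce that no two rare stimuli are adjacent (assumed constraint).
for i in range(1, n_trials):
    if seq[i] == "rare" and seq[i - 1] == "rare":
        seq[i] = "frequent"

print(int((seq == "rare").sum()), "rare of", n_trials, "trials")
```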
Sensorimotor speech disorders in Parkinson's disease: Programming and execution deficits.
Ortiz, Karin Zazo; Brabo, Natalia Casagrande; Minett, Thais Soares C
2016-01-01
Dysfunction in the basal ganglia circuits is a determining factor in the physiopathology of the classic signs of Parkinson's disease (PD), and hypokinetic dysarthria is commonly related to PD. Regarding speech disorders associated with PD, the latest four-level framework of speech complicates the traditional view of dysarthria as a motor execution disorder. Based on findings that dysfunctions in the basal ganglia can cause speech disorders, and on the premise that the speech deficits seen in PD are not related to an execution motor disorder alone but also to a disorder at the motor programming level, the main objective of this study was to investigate the presence of sensorimotor programming disorders (besides the execution disorders previously described) in PD patients. A cross-sectional study was conducted in a sample of 60 adults matched for gender, age, and education: 30 adult patients diagnosed with idiopathic PD (PDG) and 30 healthy adults (CG). All types of articulation errors were reanalyzed to investigate the nature of these errors. Interjections, hesitations, and repetitions of words or sentences (during discourse) were considered typical disfluencies; blocking and episodes of palilalia (words or syllables) were analyzed as atypical disfluencies. We analyzed features including successive self-initiated trials, phoneme distortions, self-corrections, repetitions of sounds and syllables, prolonged movement transitions, and additions or omissions of sounds and syllables, in order to identify programming and/or execution failures. Orofacial agility was also investigated. The PDG had worse performance on all sensorimotor speech tasks. All PD patients had hypokinetic dysarthria. The clinical characteristics found suggest both execution and programming sensorimotor speech disorders in PD patients.
Oyama, Kentaro; Nishihara, Kazuhide; Matsunaga, Kazuhide; Miura, Naoko; Kibe, Toshiro; Nakamura, Norifumi
2016-07-01
Although the goal of cleft palate (CP) repair is to achieve normal speech, no standard procedure ensures that patients' speech will be at the same level as speech in children without CP. In this study, postoperative speech outcomes following primary CP repair combined with or without a mucosal graft was analyzed in comparison with that of control subjects without CP. Eighty-two patients who underwent modified V-Y palatoplasty with a mucosal graft on the nasal side for symmetrical muscular reconstruction during 2006-2012 (MG group) and 109 patients who previously underwent modified V-Y palatoplasty without a mucosal graft (non-MG group) were enrolled in this study. Speech data on 37 Japanese subjects without CP were used as a control. Perceptual rating of resonance and nasal emission and nasometry were carried out for all participants. Furthermore, cephalometric analyses were performed to assess postoperative velopharyngeal morphology and velar movement. Normal resonance was achieved at a significantly higher rate (90.3% of patients) in the MG group than in the non-MG group (68.8%) (P < .01). The mean nasalance scores in the MG group were significantly lower (P < .01) and were almost at the same level as in controls. Cephalometric analyses revealed a greater velar length and velar elevation angle during phonation in the MG group (P < .01 and P < .05, respectively). Modified V-Y palatoplasty combined with a mucosal graft on the nasal side of the velum for symmetrical muscular reconstruction facilitates speech outcomes for children with cleft palate that are comparable with those for peers without CP.
The Impact of Tympanostomy Tubes on Speech and Language Development in Children with Cleft Palate.
Shaffer, Amber D; Ford, Matthew D; Choi, Sukgi S; Jabbour, Noel
2017-09-01
Objective Describe the impact of hearing loss, tympanostomy tube placement before palatoplasty, and number of tubes received on speech outcomes in children with cleft palate. Study Design Case series with chart review. Setting Tertiary care children's hospital. Subjects and Methods Records from 737 children born between April 2005 and April 2015 who underwent palatoplasty at a tertiary children's hospital were reviewed. Exclusion criteria were cleft repair at an outside hospital, intact secondary palate, absence of postpalatoplasty speech evaluation, sensorineural or mixed hearing loss, no tubes, first tubes after palatoplasty, or first clinic visit after 12 months of age. Data from 152 patients with isolated cleft palate and 166 patients with cleft lip and palate were analyzed using Wilcoxon rank-sum, χ², and Fisher exact tests and logistic regression. Results Most patients (242, 76.1%) received tubes before palatoplasty. Hearing loss after tubes, but not before, was associated with speech/language delays at 24 months (P = .005) and with language delays (P = .048) and speech sound production disorders (SSPDs, P = .040) at 5 years. Receiving tubes before palatoplasty was associated with failed newborn hearing screen (P = .001) and younger age at first posttubes type B tympanogram with normal canal volume (P = .015). Hearing loss after tubes (P = .021), language delays (P = .025), SSPDs (P = .003), and velopharyngeal insufficiency (P = .032) at 5 years and speech surgery (P = .022) were associated with more tubes. Conclusion Continued middle ear disease, reflected by hearing loss and multiple tubes, may impair speech and language development. Inserting tubes before palatoplasty did not mitigate these impairments better than later tube placement.
Ethics and Ethos in Financial Reporting: Analyzing Persuasive Language in Earnings Calls
ERIC Educational Resources Information Center
Crawford Camiciottoli, Belinda
2011-01-01
In response to ongoing concerns about financial ethics, this study analyzes the speech of company executives in quarterly earnings conference calls to understand strategic usage of ethics-related language. Against the backdrop of the recent global financial crisis, the Aristotelian concept of ethos provides a framework to investigate linguistic…
ERIC Educational Resources Information Center
Kabadayi, Abdulkadir
2006-01-01
Language, as is known, is acquired under certain conditions: rapid and sequential brain maturation and cognitive development, the need to exchange information and to control others' actions, and an exposure to appropriate speech input. This research aims at analyzing preschoolers' overgeneralizations of the object labeling process in different…
Spectral-temporal EEG dynamics of speech discrimination processing in infants during sleep.
Gilley, Phillip M; Uhler, Kristin; Watson, Kaylee; Yoshinaga-Itano, Christine
2017-03-22
Oddball paradigms are frequently used to study auditory discrimination by comparing event-related potential (ERP) responses to a standard, high-probability sound and to a deviant, low-probability sound. Previous research has established that such paradigms, for example the mismatch response or mismatch negativity, are useful for examining auditory processes in young children and infants across various sleep and attention states. The extent to which oddball ERP responses may reflect subtle discrimination effects, such as speech discrimination, is largely unknown, especially in infants who have not yet acquired speech and language. Mismatch responses for three contrasts (non-speech, vowel, and consonant) were computed as a spectral-temporal probability function in 24 infants, and analyzed at the group level by a modified multidimensional scaling. Immediately following an onset gamma response (30-50 Hz), the emergence of a beta oscillation (12-30 Hz) was temporally coupled with a lower frequency theta oscillation (2-8 Hz). The spectral-temporal probability of this coupling effect relative to a subsequent theta modulation corresponds with discrimination difficulty for non-speech, vowel, and consonant contrast features. The theta modulation effect suggests that unexpected sounds are encoded as a probabilistic measure of surprise. These results support the notion that auditory discrimination is driven by the development of brain networks for predictive processing, and can be measured in infants during sleep. The results presented here have implications for the interpretation of discrimination as a probabilistic process, and may provide a basis for the development of single-subject and single-trial classification in a clinically useful context. An infant's brain is processing information about the environment and performing computations, even during sleep. These computations reflect subtle differences in acoustic feature processing that are necessary for language learning. Results from this study suggest that brain responses to deviant sounds in an oddball paradigm follow a cascade of oscillatory modulations. This cascade begins with a gamma response that later emerges as a beta synchronization, which is temporally coupled with a theta modulation, and followed by a second, subsequent theta modulation. The difference in frequency and timing of the theta modulations appears to reflect a measure of surprise. These insights into the neurophysiological mechanisms of auditory discrimination provide a basis for exploring the clinical utility of the time-frequency mismatch response (MMR TF) and other auditory oddball responses.
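A hedged sketch of the general approach: compute a spectral-temporal map from the deviant-minus-standard difference wave and read off the theta-band modulation. The sampling rate, toy ERP shapes, and STFT settings below are assumptions; the authors' probability-based measure is more elaborate.

```python
# Spectral-temporal view of a simulated mismatch response via a short-time
# Fourier transform of the deviant-minus-standard difference wave.
import numpy as np
from scipy.signal import stft

fs = 250                                    # EEG sampling rate (assumed)
t = np.arange(-0.1, 0.8, 1 / fs)            # one epoch, -100 to 800 ms
rng = np.random.default_rng(2)

def fake_erp(theta_amp):
    """Toy ERP: an onset gamma burst plus a later theta modulation."""
    gamma = 0.5 * np.exp(-((t - 0.05) / 0.02) ** 2) * np.sin(2 * np.pi * 40 * t)
    theta = theta_amp * np.exp(-((t - 0.4) / 0.1) ** 2) * np.sin(2 * np.pi * 5 * t)
    return gamma + theta + 0.1 * rng.standard_normal(t.size)

standard = fake_erp(theta_amp=0.3)
deviant = fake_erp(theta_amp=0.8)           # stronger theta for the deviant

f, tt, Z = stft(deviant - standard, fs=fs, nperseg=64, noverlap=56)
power = np.abs(Z) ** 2                       # spectral-temporal map
theta_band = power[(f >= 2) & (f <= 8)].mean(axis=0)
print("peak theta modulation at ~%.0f ms"
      % (1000 * (tt[np.argmax(theta_band)] - 0.1)))
```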
Pitch perception and production in congenital amusia: Evidence from Cantonese speakers.
Liu, Fang; Chan, Alice H D; Ciocca, Valter; Roquet, Catherine; Peretz, Isabelle; Wong, Patrick C M
2016-07-01
This study investigated pitch perception and production in speech and music in individuals with congenital amusia (a disorder of musical pitch processing) who are native speakers of Cantonese, a tone language with a highly complex tonal system. Sixteen Cantonese-speaking congenital amusics and 16 controls performed a set of lexical tone perception, production, singing, and psychophysical pitch threshold tasks. Their tone production accuracy and singing proficiency were subsequently judged by independent listeners, and subjected to acoustic analyses. Relative to controls, amusics showed impaired discrimination of lexical tones in both speech and non-speech conditions. They also received lower ratings for singing proficiency, producing larger pitch interval deviations and making more pitch interval errors compared to controls. Demonstrating higher pitch direction identification thresholds than controls for both speech syllables and piano tones, amusics nevertheless produced native lexical tones with comparable pitch trajectories and intelligibility as controls. Significant correlations were found between pitch threshold and lexical tone perception, music perception and production, but not between lexical tone perception and production for amusics. These findings provide further evidence that congenital amusia is a domain-general language-independent pitch-processing deficit that is associated with severely impaired music perception and production, mildly impaired speech perception, and largely intact speech production.
Restoring speech perception with cochlear implants by spanning defective electrode contacts.
Frijns, Johan H M; Snel-Bongers, Jorien; Vellinga, Dirk; Schrage, Erik; Vanpoucke, Filiep J; Briaire, Jeroen J
2013-04-01
Even with six defective contacts, spanning can largely restore speech perception with the HiRes 120 speech processing strategy to the level supported by an intact electrode array. Moreover, the sound quality is not degraded. Previous studies have demonstrated reduced speech perception scores (SPS) with defective contacts in HiRes 120. This study investigated whether replacing defective contacts by spanning, i.e. current steering on non-adjacent contacts, is able to restore speech recognition to the level supported by an intact electrode array. Ten adult cochlear implant recipients (HiRes90K, HiFocus1J) with experience with HiRes 120 participated in this study. Three different defective electrode arrays were simulated (six separate defective contacts, three pairs or two triplets). The participants received three take-home strategies and were asked to evaluate the sound quality in five predefined listening conditions. After 3 weeks, SPS were evaluated with monosyllabic words in quiet and in speech-shaped background noise. The participants rated the sound quality equal for all take-home strategies. SPS with background noise were equal for all conditions tested. However, SPS in quiet (85% phonemes correct on average with the full array) decreased significantly with increasing spanning distance, with a 3% decrease for each spanned contact.
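The spanning idea can be sketched as a simple remapping rule: when a contact in a nominal current-steering pair is defective, steer between the nearest working neighbors instead. The 16-contact indexing and the outward search below are assumptions for illustration, not the HiRes 120 implementation.

```python
# Remap a nominal current-steering pair around defective contacts.
def steering_pair(apical, defective):
    """Return the (apical, basal) contacts to steer between for a nominal
    pair (apical, apical + 1), skipping defective contacts outward."""
    a, b = apical, apical + 1
    while a in defective and a > 1:
        a -= 1                 # move apically to the next working contact
    while b in defective and b < 16:
        b += 1                 # move basally to the next working contact
    return a, b

defective = {5, 6, 11}         # simulated broken contacts on a 16-contact array
for pair in [(4, 5), (5, 6), (10, 11)]:
    print(pair, "->", steering_pair(pair[0], defective))
```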
Schmidt-Naylor, Anna C; Saunders, Kathryn J; Brady, Nancy C
2017-05-17
We explored alphabet supplementation as an augmentative and alternative communication strategy for adults with minimal literacy. Study 1's goal was to teach onset-letter selection with spoken words and assess generalization to untaught words, demonstrating the alphabetic principle. Study 2 incorporated alphabet supplementation within a naming task and then assessed effects on speech intelligibility. Three men with intellectual disabilities (ID) and low speech intelligibility participated. Study 1 used a multiple-probe design, across three 20-word sets, to show that our computer-based training improved onset-letter selection. We also probed generalization to untrained words. Study 2 taught onset-letter selection for 30 new words chosen for functionality. Five listeners transcribed speech samples of the 30 words in 2 conditions: speech only and speech with alphabet supplementation. Across studies 1 and 2, participants demonstrated onset-letter selection for at least 90 words. Study 1 showed evidence of the alphabetic principle for some but not all word sets. In study 2, participants readily used alphabet supplementation, enabling listeners to understand twice as many words. This is the first demonstration of alphabet supplementation in individuals with ID and minimal literacy. The large number of words learned holds promise both for improving communication and providing a foundation for improved literacy.
Echoes of the spoken past: how auditory cortex hears context during speech perception
Skipper, Jeremy I.
2014-01-01
What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. By this model, more AC activity for less meaningful sounds occurs because predictions are less successful from context, requiring further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we ‘hear’ during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds. PMID:25092665
Longitudinal changes in speech recognition in older persons.
Dubno, Judy R; Lee, Fu-Shing; Matthews, Lois J; Ahlstrom, Jayne B; Horwitz, Amy R; Mills, John H
2008-01-01
Recognition of isolated monosyllabic words in quiet and recognition of key words in low- and high-context sentences in babble were measured in a large sample of older persons enrolled in a longitudinal study of age-related hearing loss. Repeated measures were obtained yearly or every 2 to 3 years. To control for concurrent changes in pure-tone thresholds and speech levels, speech-recognition scores were adjusted using an importance-weighted speech-audibility metric (AI). Linear-regression slope estimated the rate of change in adjusted speech-recognition scores. Recognition of words in quiet declined significantly faster with age than predicted by declines in speech audibility. As subjects aged, observed scores deviated increasingly from AI-predicted scores, but this effect did not accelerate with age. Rate of decline in word recognition was significantly faster for females than males and for females with high serum progesterone levels, whereas noise history had no effect. Rate of decline did not accelerate with age but increased with degree of hearing loss, suggesting that with more severe injury to the auditory system, impairments to auditory function other than reduced audibility resulted in faster declines in word recognition as subjects aged. Recognition of key words in low- and high-context sentences in babble did not decline significantly with age.
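The core of the longitudinal analysis can be sketched in a few lines: subtract an audibility-predicted score from each observed score, then estimate a per-subject regression slope on the residual. The numbers and the AI prediction below are placeholders, not the study's importance-weighted metric.

```python
# Estimate the rate of audibility-adjusted decline for one subject.
import numpy as np

age = np.array([65, 66, 68, 70, 73, 75], dtype=float)            # test ages (yr)
observed = np.array([92, 90, 87, 83, 78, 74], dtype=float)       # % words correct
ai_predicted = np.array([93, 92, 90, 88, 86, 84], dtype=float)   # AI-predicted %

adjusted = observed - ai_predicted   # deviation not explained by audibility
slope, intercept = np.polyfit(age, adjusted, 1)
print(f"adjusted score changes {slope:.2f} points/year beyond audibility")
```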
Quantitative application of the primary progressive aphasia consensus criteria.
Wicklund, Meredith R; Duffy, Joseph R; Strand, Edythe A; Machulda, Mary M; Whitwell, Jennifer L; Josephs, Keith A
2014-04-01
To determine how well the consensus criteria could classify subjects with primary progressive aphasia (PPA) using a quantitative speech and language battery that matches the test descriptions provided by the consensus criteria. A total of 105 participants with a neurodegenerative speech and language disorder were prospectively recruited and underwent neurologic, neuropsychological, and speech and language testing and MRI in this case-control study. Twenty-one participants with apraxia of speech without aphasia served as controls. Select tests from the speech and language battery were chosen for application of the consensus criteria, and cutoffs were employed to determine syndromic classification. Hierarchical cluster analysis was used to examine participants who could not be classified. Of the remaining 84 participants with PPA, 58 (69%) could be classified as agrammatic (27%), semantic (7%), or logopenic (35%) variants of PPA. The remaining 31% of participants could not be classified. Of the unclassifiable participants, 2 clusters were identified. The speech and language profile of the first cluster resembled mild logopenic PPA and the second cluster semantic PPA. Gray matter patterns of loss in these 2 clusters of unclassified participants also resembled the mild logopenic and semantic variants. Quantitative application of the consensus PPA criteria yields the 3 syndromic variants but leaves a large proportion unclassified. Therefore, the current consensus criteria need to be modified in order to improve sensitivity.
Morphological Variation in the Adult Hard Palate and Posterior Pharyngeal Wall
Lammert, Adam; Proctor, Michael; Narayanan, Shrikanth
2013-01-01
Purpose Adult human vocal tracts display considerable morphological variation across individuals, but the nature and extent of this variation has not been extensively studied for many vocal tract structures. There exists a need to analyze morphological variation and, even more basically, to develop a methodology for morphological analysis of the vocal tract. Such analysis will facilitate fundamental characterization of the speech production system, with broad implications from modeling to explaining inter-speaker variability. Method A data-driven methodology to automatically analyze the extent and variety of morphological variation is proposed and applied to a diverse subject pool of 36 adults. Analysis is focused on two key aspects of vocal tract structure: the midsagittal shape of the hard palate and the posterior pharyngeal wall. Result Palatal morphology varies widely in its degree of concavity, but also in anteriority and sharpness. Pharyngeal wall morphology, by contrast, varies mostly in terms of concavity alone. The distribution of morphological characteristics is complex, and analysis suggests that certain variations may be categorical in nature. Conclusion Major modes of morphological variation are identified, including their relative magnitude, distribution and categorical nature. Implications of these findings for speech articulation strategies and speech acoustics are discussed. PMID:23690566
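A minimal sketch of such a data-driven shape decomposition, assuming sampled midsagittal contours as input: principal components (via SVD) recover the dominant modes of variation. The simulated contours and the PCA choice are assumptions; the paper's exact method may differ.

```python
# PCA over simulated midsagittal palate contours: dominant shape modes.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)                  # normalized front-to-back axis
# 36 simulated contours varying in concavity and anteriority
concavity = rng.uniform(0.5, 1.5, 36)
anteriority = rng.uniform(0.3, 0.6, 36)
contours = np.array([c * np.exp(-((x - a) / 0.25) ** 2)
                     for c, a in zip(concavity, anteriority)])

centered = contours - contours.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
var_explained = s ** 2 / np.sum(s ** 2)
print("first two modes explain %.0f%% of variation"
      % (100 * var_explained[:2].sum()))
# vt[0] and vt[1] are the dominant spatial modes (e.g., concavity, anteriority)
```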
Typical versus delayed speech onset influences verbal reporting of autistic interests.
Chiodo, Liliane; Majerus, Steve; Mottron, Laurent
2017-01-01
The distinction between autism and Asperger syndrome has been abandoned in the DSM-5. However, this clinical categorization largely overlaps with the presence or absence of a speech onset delay, which is associated with clinical, cognitive, and neural differences. It is unknown whether these different speech development pathways and associated cognitive differences are involved in the heterogeneity of the restricted interests that characterize autistic adults. This study tested the hypothesis that speech onset delay, or conversely, early mastery of speech, orients the nature and verbal reporting of adult autistic interests. The occurrence of a priori defined descriptors for perceptual and thematic dimensions, as well as the perceived function and benefits, was determined in the responses of autistic people to a semi-structured interview on their intense interests. The number of words, grammatical categories, and proportion of perceptual/thematic descriptors were computed and compared between groups by analyses of variance. The participants comprised 40 autistic adults grouped according to the presence (N = 20) or absence (N = 20) of speech onset delay, as well as 20 non-autistic adults, also with intense interests, matched for non-verbal intelligence using Raven's Progressive Matrices. The overall nature, function, and benefit of intense interests were similar across autistic subgroups, and between autistic and non-autistic groups. However, autistic participants with a history of speech onset delay used more perceptual than thematic descriptors when talking about their interests, whereas the opposite was true for autistic individuals without speech onset delay. This finding remained significant after controlling for linguistic differences observed between the two groups. Verbal reporting, but not the nature or positive function, of intense interests differed between adult autistic individuals depending on their speech acquisition history: oral reporting of intense interests was characterized by perceptual dominance for autistic individuals with delayed speech onset and thematic dominance for those without. This may contribute to the heterogeneous presentation observed among autistic adults of normal intelligence.
Tao, Duoduo; Deng, Rui; Jiang, Ye; Galvin, John J; Fu, Qian-Jie; Chen, Bing
2014-01-01
To investigate how auditory working memory relates to speech perception performance in Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception were measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, and (d) Chinese lexical tone recognition in quiet. Self-reported school rank regarding performance in schoolwork was also collected. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding. The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical importance of voice pitch cues (albeit poorly coded by the CI) did not influence the relationship between working memory and speech perception.
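The partial-correlation step can be illustrated as follows, with simulated values standing in for the digit-span, articulation-rate, and sentence-score data; the study additionally controlled for demographic variables.

```python
# Partial correlation: r(efficiency, sentence score) with capacity removed.
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after removing the variance explained by z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)   # residualize x on z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)   # residualize y on z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(4)
capacity = rng.normal(6, 1.5, 40)                           # digit span
efficiency = 2.0 + 0.3 * capacity + rng.normal(0, 0.4, 40)  # articulation rate
sentence = 40 + 5 * efficiency + 2 * capacity + rng.normal(0, 5, 40)

print("r(efficiency, sentence | capacity) = %.2f"
      % partial_corr(efficiency, sentence, capacity))
```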
A Comparative Analysis of Fluent and Cerebral Palsied Speech.
NASA Astrophysics Data System (ADS)
van Doorn, Janis Lee
Several features of the acoustic waveforms of fluent and cerebral palsied speech were compared, using six fluent and seven cerebral palsied subjects, with a major emphasis being placed on an investigation of the trajectories of the first three formants (vocal tract resonances). To provide an overall picture which included other acoustic features, fundamental frequency, intensity, speech timing (speech rate and syllable duration), and prevocalization (vocalization prior to initial stop consonants found in cerebral palsied speech) were also investigated. Measurements were made using repetitions of a test sentence which was chosen because it required large excursions of the speech articulators (lips, tongue and jaw), so that differences in the formant trajectories for the fluent and cerebral palsied speakers would be emphasized. The acoustic features were all extracted from the digitized speech waveform (10 kHz sampling rate): the fundamental frequency contours were derived manually, the intensity contours were measured using the signal covariance, speech rate and syllable durations were measured manually, as were the prevocalization durations, while the formant trajectories were derived from short time spectra which were calculated for each 10 ms of speech using linear prediction analysis. Differences which were found in the acoustic features can be summarized as follows. For cerebral palsied speakers, the fundamental frequency contours generally showed inappropriate exaggerated fluctuations, as did some of the intensity contours; the mean fundamental frequencies were either higher or the same as for the fluent subjects; speech rates were reduced, and syllable durations were longer; prevocalization was consistently present at the beginning of the test sentence; formant trajectories were found to have overall reduced frequency ranges, and to contain anomalous transitional features, but it is noteworthy that for any one cerebral palsied subject, the inappropriate trajectory pattern was generally reproducible. The anomalous transitional features took the form of (a) inappropriate transition patterns, (b) reduced frequency excursions, (c) increased transition durations, and (d) decreased maximum rates of frequency change.
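The formant-tracking step described here, linear prediction over 10 ms frames of 10 kHz speech, can be sketched as below. The synthetic vowel, LPC order, and root-picking rule are illustrative assumptions.

```python
# LPC formant candidates from 10 ms frames of a crude synthetic vowel.
import numpy as np

fs = 10000
t = np.arange(0, 0.3, 1 / fs)
# synthetic /a/-like signal with components near typical F1/F2/F3
speech = sum(np.sin(2 * np.pi * f * t) for f in (700, 1200, 2600))
speech += 0.05 * np.random.default_rng(5).standard_normal(t.size)

def lpc_formants(frame, order=10):
    frame = frame * np.hamming(frame.size)
    # autocorrelation method: solve the normal equations R a = r
    r = np.correlate(frame, frame, "full")[frame.size - 1:frame.size + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    # roots of A(z) = 1 - sum(a_k z^-k); upper-half-plane angles -> frequencies
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[np.imag(roots) > 0]
    freqs = np.sort(np.angle(roots) * fs / (2 * np.pi))
    return freqs[:3]                 # first three formant candidates (Hz)

frame_len = int(0.010 * fs)          # 10 ms analysis frames, as in the study
for i in range(3):                   # first few frames of the trajectory
    print(lpc_formants(speech[i * frame_len:(i + 1) * frame_len]))
```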
Central Presbycusis: A Review and Evaluation of the Evidence
Humes, Larry E.; Dubno, Judy R.; Gordon-Salant, Sandra; Lister, Jennifer J.; Cacace, Anthony T.; Cruickshanks, Karen J.; Gates, George A.; Wilson, Richard H.; Wingfield, Arthur
2018-01-01
Background The authors reviewed the evidence regarding the existence of age-related declines in central auditory processes and the consequences of any such declines for everyday communication. Purpose This report summarizes the review process and presents its findings. Data Collection and Analysis The authors reviewed 165 articles germane to central presbycusis. Of the 165 articles, 132 articles with a focus on human behavioral measures for either speech or nonspeech stimuli were selected for further analysis. Results For 76 smaller-scale studies of speech understanding in older adults reviewed, the following findings emerged: (1) the three most commonly studied behavioral measures were speech in competition, temporally distorted speech, and binaural speech perception (especially dichotic listening); (2) for speech in competition and temporally degraded speech, hearing loss proved to have a significant negative effect on performance in most of the laboratory studies; (3) significant negative effects of age, unconfounded by hearing loss, were observed in most of the studies of speech in competing speech, time-compressed speech, and binaural speech perception; and (4) the influence of cognitive processing on speech understanding has been examined much less frequently, but when included, significant positive associations with speech understanding were observed. For 36 smaller-scale studies of the perception of nonspeech stimuli by older adults reviewed, the following findings emerged: (1) the three most frequently studied behavioral measures were gap detection, temporal discrimination, and temporal-order discrimination or identification; (2) hearing loss was seldom a significant factor; and (3) negative effects of age were almost always observed. For 18 studies reviewed that made use of test batteries and medium-to-large sample sizes, the following findings emerged: (1) all studies included speech-based measures of auditory processing; (2) 4 of the 18 studies included nonspeech stimuli; (3) for the speech-based measures, monaural speech in a competing-speech background, dichotic speech, and monaural time-compressed speech were investigated most frequently; (4) the most frequently used tests were the Synthetic Sentence Identification (SSI) test with Ipsilateral Competing Message (ICM), the Dichotic Sentence Identification (DSI) test, and time-compressed speech; (5) many of these studies using speech-based measures reported significant effects of age, but most of these studies were confounded by declines in hearing, cognition, or both; (6) for nonspeech auditory-processing measures, the focus was on measures of temporal processing in all four studies; (7) effects of cognition on nonspeech measures of auditory processing have been studied less frequently, with mixed results, whereas the effects of hearing loss on performance were minimal due to judicious selection of stimuli; and (8) there is a paucity of observational studies using test batteries and longitudinal designs. Conclusions Based on this review of the scientific literature, there is insufficient evidence to confirm the existence of central presbycusis as an isolated entity. On the other hand, recent evidence has been accumulating in support of the existence of central presbycusis as a multifactorial condition that involves age- and/or disease-related changes in the auditory system and in the brain. Moreover, there is a clear need for additional research in this area. PMID:22967738
Tóth, László; Hoffmann, Ildikó; Gosztolya, Gábor; Vincze, Veronika; Szatlóczki, Gréta; Bánréti, Zoltán; Pákáski, Magdolna; Kálmán, János
2018-01-01
Background: Even today the reliable diagnosis of the prodromal stages of Alzheimer's disease (AD) remains a great challenge. Our research focuses on the earliest detectable indicators of cognitive decline in mild cognitive impairment (MCI). Since the presence of language impairment has been reported even in the mild stage of AD, the aim of this study is to develop a sensitive neuropsychological screening method based on the analysis of spontaneous speech production during the performance of a memory task. In the future, this could form the basis of an Internet-based interactive screening software for the recognition of MCI. Methods: Participants were 38 healthy controls and 48 clinically diagnosed MCI patients. Spontaneous speech was provoked by asking the patients to recall the content of 2 short black-and-white films (one immediately, one after a delay), and by answering one question. Acoustic parameters (hesitation ratio, speech tempo, length and number of silent and filled pauses, length of utterance) were extracted from the recorded speech signals, first manually (using the Praat software), and then automatically, with an automatic speech recognition (ASR) based tool. First, the extracted parameters were statistically analyzed. Then we applied machine learning algorithms to see whether the MCI and the control group can be discriminated automatically based on the acoustic features. Results: The statistical analysis showed significant differences for most of the acoustic parameters (speech tempo, articulation rate, silent pause, hesitation ratio, length of utterance, pause-per-utterance ratio). The most significant differences between the two groups were found in the speech tempo in the delayed recall task, and in the number of pauses for the question-answering task. The fully automated version of the analysis process - that is, using the ASR-based features in combination with machine learning - was able to separate the two classes with an F1-score of 78.8%. Conclusion: The temporal analysis of spontaneous speech can be exploited in implementing a new, automatic detection-based tool for screening MCI for the community. PMID:29165085
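A minimal sketch of the described pipeline under simplifying assumptions: derive pause-based temporal features from frame energies (an energy-threshold detector standing in for the Praat/ASR tooling), then discriminate the groups with a learned classifier and report an F1-score. All signals and feature values below are simulated.

```python
# Pause/tempo features from frame energies, then cross-validated classification.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_predict

def temporal_features(signal, fs, frame=0.025, thresh=0.02, min_pause=0.2):
    """Number of silent pauses, pause-time ratio, and fraction of speech frames."""
    hop = int(frame * fs)
    energy = np.array([np.mean(signal[i:i + hop] ** 2)
                       for i in range(0, len(signal) - hop, hop)])
    silent = energy < thresh * energy.max()
    runs, run = [], 0
    for s in silent:                     # collect runs of silent frames
        if s:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    pauses = [r for r in runs if r * frame >= min_pause]
    return [len(pauses), sum(pauses) * frame * fs / len(signal), (~silent).mean()]

fs = 16000
toy = np.concatenate([np.sin(2 * np.pi * 150 * np.arange(0, 0.8, 1 / fs)),
                      np.zeros(int(0.3 * fs)),         # one 300 ms pause
                      np.sin(2 * np.pi * 150 * np.arange(0, 0.5, 1 / fs))])
print("toy features:", temporal_features(toy, fs))

# Simulated feature table for 86 speakers (38 controls, 48 MCI), mirroring
# the abstract's F1-based evaluation with a cross-validated classifier.
rng = np.random.default_rng(6)
X = np.vstack([rng.normal([3, 0.15, 0.8], 0.5, (38, 3)),
               rng.normal([6, 0.30, 0.6], 0.5, (48, 3))])
y = np.array([0] * 38 + [1] * 48)
pred = cross_val_predict(LogisticRegression(), X, y, cv=5)
print("cross-validated F1 = %.2f" % f1_score(y, pred))
```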
A novel speech-processing strategy incorporating tonal information for cochlear implants.
Lan, N; Nie, K B; Gao, S K; Zeng, F G
2004-05-01
Good performance in cochlear implant users depends in large part on the ability of a speech processor to effectively decompose speech signals into multiple channels of narrow-band electrical pulses for stimulation of the auditory nerve. Speech processors that extract only envelopes of the narrow-band signals (e.g., the continuous interleaved sampling (CIS) processor) may not provide sufficient information to encode the tonal cues in languages such as Chinese. To improve performance in cochlear implant users who speak tonal languages, we proposed and developed a novel speech-processing strategy, which extracted both the envelopes of the narrow-band signals and the fundamental frequency (F0) of the speech signal, and used them to modulate both the amplitude and the frequency of the electrical pulses delivered to stimulation electrodes. We developed an algorithm to extract the fundamental frequency and identified the general patterns of pitch variation of the four typical tones in Chinese speech. The effectiveness of the extraction algorithm was verified with an artificial neural network that recognized the tonal patterns from the extracted F0 information. We then compared the novel strategy with the envelope-extraction CIS strategy in human subjects with normal hearing. The novel strategy produced significant improvement in perception of Chinese tones, phrases, and sentences. This novel processor with dynamic modulation of both frequency and amplitude is encouraging for the design of a cochlear implant device for sensorineurally deaf patients who speak tonal languages.
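The two ingredients the strategy combines can be sketched as follows: a per-frame autocorrelation F0 estimate used to frequency-modulate the channel pulse rate, with the band envelope amplitude-modulating pulse levels. The pulse-rate mapping and parameters below are assumptions, not the authors' implementation.

```python
# Autocorrelation F0 estimate, then F0-tracked pulse rate and envelope levels.
import numpy as np

fs = 16000
t = np.arange(0, 0.04, 1 / fs)
frame = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)

def estimate_f0(frame, fs, fmin=80, fmax=400):
    """Pick the autocorrelation peak inside the plausible pitch-lag range."""
    ac = np.correlate(frame, frame, "full")[frame.size - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

f0 = estimate_f0(frame, fs)
print("estimated F0: %.1f Hz" % f0)        # ~220 Hz for this frame

# Pulse train for one electrode: rate tracks a multiple of F0 (frequency
# modulation), level tracks the band envelope (amplitude modulation).
rate = 4 * f0                               # pulses per second (assumed mapping)
pulse_times = np.arange(0, 0.04, 1 / rate)
envelope = 0.5 + 0.5 * np.sin(2 * np.pi * 10 * pulse_times)  # toy envelope
pulse_levels = envelope                     # current level per pulse
```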