Speech Recognition as a Transcription Aid: A Randomized Comparison With Standard Transcription
Mohr, David N.; Turner, David W.; Pond, Gregory R.; Kamath, Joseph S.; De Vos, Cathy B.; Carpenter, Paul C.
2003-01-01
Objective. Speech recognition promises to reduce information entry costs for clinical information systems. It is most likely to be accepted across an organization if physicians can dictate without concerning themselves with real-time recognition and editing; assistants can then edit and process the computer-generated document. Our objective was to evaluate the use of speech-recognition technology in a randomized controlled trial using our institutional infrastructure. Design. Clinical note dictations from physicians in two specialty divisions were randomized to either a standard transcription process or a speech-recognition process. Secretaries and transcriptionists also were assigned randomly to each of these processes. Measurements. The duration of each dictation was measured. The amount of time spent processing a dictation to yield a finished document also was measured. Secretarial and transcriptionist productivity, defined as hours of secretary work per minute of dictation processed, was determined for speech recognition and standard transcription. Results. Secretaries in the endocrinology division were 87.3% (confidence interval, 83.3%, 92.3%) as productive with the speech-recognition technology as implemented in this study as they were using standard transcription. Psychiatry transcriptionists and secretaries were similarly less productive. Author, secretary, and type of clinical note were significant (p < 0.05) predictors of productivity. Conclusion. When implemented in an organization with an existing document-processing infrastructure (which included training and interfaces of the speech-recognition editor with the existing document entry application), speech recognition did not improve the productivity of secretaries or transcriptionists. PMID:12509359
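The productivity measure in this record reduces to simple arithmetic. A minimal sketch in Python, with hypothetical workloads (the abstract reports only the ratio), showing how a relative-productivity figure like 87.3% is formed:

```python
def productivity(work_hours: float, dictation_minutes: float) -> float:
    """Hours of secretary work per minute of dictation processed
    (the study's definition: lower values mean higher productivity)."""
    return work_hours / dictation_minutes

# Hypothetical workloads for one week of each workflow.
standard_cost = productivity(work_hours=10.0, dictation_minutes=600.0)
speech_rec_cost = productivity(work_hours=11.45, dictation_minutes=600.0)

# Relative productivity of speech recognition vs. standard transcription;
# a value near 0.873 would match the endocrinology result reported above.
print(f"relative productivity: {standard_cost / speech_rec_cost:.1%}")
```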
The right hemisphere is highlighted in connected natural speech production and perception.
Alexandrou, Anna Maria; Saarinen, Timo; Mäkelä, Sasu; Kujala, Jan; Salmelin, Riitta
2017-05-15
Current understanding of the cortical mechanisms of speech perception and production stems mostly from studies that focus on single words or sentences. However, it has been suggested that processing of real-life connected speech may rely on additional cortical mechanisms. In the present study, we examined the neural substrates of natural speech production and perception with magnetoencephalography by modulating three central features related to speech: amount of linguistic content, speaking rate and social relevance. The amount of linguistic content was modulated by contrasting natural speech production and perception to speech-like non-linguistic tasks. Meaningful speech was produced and perceived at three speaking rates: normal, slow and fast. Social relevance was probed by having participants attend to speech produced by themselves and an unknown person. These speech-related features were each associated with distinct spatiospectral modulation patterns that involved cortical regions in both hemispheres. Natural speech processing markedly engaged the right hemisphere in addition to the left. In particular, the right temporo-parietal junction, previously linked to attentional processes and social cognition, was highlighted in the task modulations. The present findings suggest that its functional role extends to active generation and perception of meaningful, socially relevant speech. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Theoretical Aspects of Speech Production.
ERIC Educational Resources Information Center
Stevens, Kenneth N.
1992-01-01
This paper on speech production in children and youth with hearing impairments summarizes theoretical aspects, including the speech production process, sound sources in the vocal tract, vowel production, and consonant production. Examples of spectra for several classes of vowel and consonant sounds in simple syllables are given. (DB)
Relationship between individual differences in speech processing and cognitive functions.
Ou, Jinghua; Law, Sam-Po; Fung, Roxana
2015-12-01
A growing body of research has suggested that cognitive abilities may play a role in individual differences in speech processing. The present study took advantage of a widespread linguistic phenomenon of sound change to systematically assess the relationships between speech processing and various components of attention and working memory in the auditory and visual modalities among typically developed Cantonese-speaking individuals. The individual variations in speech processing are captured in an ongoing sound change-tone merging in Hong Kong Cantonese, in which typically developed native speakers are reported to lose the distinctions between some tonal contrasts in perception and/or production. Three groups of participants were recruited, with a first group of good perception and production, a second group of good perception but poor production, and a third group of good production but poor perception. Our findings revealed that modality-independent abilities of attentional switching/control and working memory might contribute to individual differences in patterns of speech perception and production as well as discrimination latencies among typically developed speakers. The findings not only have the potential to generalize to speech processing in other languages, but also broaden our understanding of the omnipresent phenomenon of language change in all languages.
Zheng, Zane Z; Munhall, Kevin G; Johnsrude, Ingrid S
2010-08-01
The fluency and the reliability of speech production suggest a mechanism that links motor commands and sensory feedback. Here, we examined the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not and by examining the overlap with the network recruited during passive listening to speech sounds. We used real-time signal processing to compare brain activity when participants whispered a consonant-vowel-consonant word ("Ted") and either heard this clearly or heard voice-gated masking noise. We compared this to when they listened to yoked stimuli (identical recordings of "Ted" or noise) without speaking. Activity along the superior temporal sulcus (STS) and superior temporal gyrus bilaterally was significantly greater if the auditory stimulus was (a) processed as the auditory concomitant of speaking and (b) did not match the predicted outcome (noise). The network exhibiting this Feedback Type x Production/Perception interaction includes a superior temporal gyrus/middle temporal gyrus region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts and that processes an error signal in speech-sensitive regions when this and the sensory data do not match.
ERIC Educational Resources Information Center
Turney, Michael T.; And Others
This report on speech research contains papers describing experiments involving both information processing and speech production. The papers concerned with information processing cover such topics as peripheral and central processes in vision, separate speech and nonspeech processing in dichotic listening, and dichotic fusion along an acoustic…
Left Lateralized Enhancement of Orofacial Somatosensory Processing Due to Speech Sounds
ERIC Educational Resources Information Center
Ito, Takayuki; Johns, Alexis R.; Ostry, David J.
2013-01-01
Purpose: Somatosensory information associated with speech articulatory movements affects the perception of speech sounds and vice versa, suggesting an intimate linkage between speech production and perception systems. However, it is unclear which cortical processes are involved in the interaction between speech sounds and orofacial somatosensory…
Studer-Eichenberger, Esther; Studer-Eichenberger, Felix; Koenig, Thomas
2016-01-01
The objectives of the present study were to investigate temporal/spectral sound-feature processing in preschool children (4 to 7 years old) with peripheral hearing loss compared with age-matched controls. The results verified the presence of statistical learning, which was diminished in children with hearing impairments (HIs), and elucidated possible perceptual mediators of speech production. Perception and production of the syllables /ba/, /da/, /ta/, and /na/ were recorded in 13 children with normal hearing and 13 children with HI. Perception was assessed physiologically through event-related potentials (ERPs) recorded by EEG in a multifeature mismatch negativity paradigm and behaviorally through a discrimination task. Temporal and spectral features of the ERPs during speech perception were analyzed, and speech production was quantitatively evaluated using speech motor maximum performance tasks. Proximal to stimulus onset, children with HI displayed a difference in map topography, indicating diminished statistical learning. In later ERP components, children with HI exhibited reduced amplitudes in the N2 and early parts of the late discriminative negativity components specifically, which are associated with temporal and spectral control mechanisms. Abnormalities of speech perception were only subtly reflected in speech production, as the lone difference found in speech production was a mild delay in regulating speech intensity. In addition to previously reported deficits of sound-feature discriminations, the present study results reflect diminished statistical learning in children with HI, which plays an early and important, but so far neglected, role in phonological processing. Furthermore, the lack of corresponding behavioral abnormalities in speech production implies that impaired perceptual capacities do not necessarily translate into productive deficits.
Normal Aspects of Speech, Hearing, and Language.
ERIC Educational Resources Information Center
Minifie, Fred. D., Ed.; And Others
This book is written as a guide to the understanding of the processes involved in human speech communication. Ten authorities contributed material to provide an introduction to the physiological aspects of speech production and reception, the acoustical aspects of speech production and transmission, the psychophysics of sound reception, the nature…
Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E.; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z.
2015-01-01
In the last decade, the debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. However, the exact role of the motor system in auditory speech processing remains elusive. Here we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. The patient’s spontaneous speech was marked by frequent phonological/articulatory errors, and those errors were caused, at least in part, by motor-level impairments with speech production. We found that the patient showed a normal phonemic categorical boundary when discriminating two nonwords that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the nonword stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labeling impairment. These data suggest that the identification (i.e., labeling) of nonword speech sounds may involve the speech motor system, but that the perception of speech sounds (i.e., discrimination) does not require the motor system. This means that motor processes are not causally involved in perception of the speech signal, and suggests that the motor system may be used when other cues (e.g., meaning, context) are not available. PMID:25951749
The Use of Electroencephalography in Language Production Research: A Review
Ganushchak, Lesya Y.; Christoffels, Ingrid K.; Schiller, Niels O.
2011-01-01
Research on speech production long avoided electrophysiological experiments because of the suspicion that artifacts caused by the muscle activity of overt speech may lead to a poor signal-to-noise ratio in the measurements. Researchers have therefore sought to assess speech production using indirect tasks, such as tacit or implicit naming, delayed naming, or meta-linguistic tasks such as phoneme monitoring. Covert speech may, however, involve different processes than overt speech production. Recently, overt speech has been investigated using electroencephalography (EEG), and the steadily rising number of published papers indicates increasing interest in, and demand for, overt speech research within the cognitive neuroscience of language. Our main goal here is to review all currently available results of overt speech production involving EEG measurements, such as picture naming, Stroop naming, and reading aloud. We conclude that overt speech production can be successfully studied using electrophysiological measures, for instance, event-related brain potentials (ERPs). We discuss the potentially relevant components in the ERP waveform of speech production, address how to interpret the results of ERP research using overt speech, and consider whether the ERP components found in language production are comparable to results from other fields. PMID:21909333
Orthography and Modality Influence Speech Production in Adults and Children.
Saletta, Meredith; Goffman, Lisa; Hogan, Tiffany P
2016-12-01
The acquisition of literacy skills influences the perception and production of spoken language. We examined if orthography influences implicit processing in speech production in child readers and in adult readers with low and high reading proficiency. Children (n = 17), adults with typical reading skills (n = 17), and adults demonstrating low reading proficiency (n = 18) repeated or read aloud nonwords varying in orthographic transparency. Analyses of implicit linguistic processing (segmental accuracy and speech movement stability) were conducted. The accuracy and articulatory stability of productions of the nonwords were assessed before and after repetition or reading. Segmental accuracy results indicate that all 3 groups demonstrated greater learning when they were able to read, rather than just hear, the nonwords. Speech movement results indicate that, for adults with poor reading skills, exposure to the nonwords in a transparent spelling reduces the articulatory variability of speech production. Reading skill was correlated with speech movement stability in the groups of adults. In children and adults, orthography interacts with speech production; all participants integrate orthography into their lexical representations. Adults with poor reading skills do not use the same reading or speaking strategies as children with typical reading skills.
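The abstract does not name its stability measure; in this literature, speech movement stability is commonly quantified with a spatiotemporal-index-style statistic, so the sketch below is offered under that assumption rather than as the authors' method:

```python
import numpy as np

def spatiotemporal_index(trials, n_points=50):
    """STI-like stability measure: lower values = more stable movement.

    trials: list of 1-D arrays, one articulator displacement record per
    repetition of the same nonword (arbitrary lengths)."""
    normalized = []
    for x in trials:
        # Linear time-normalization to a common number of samples.
        t_old = np.linspace(0.0, 1.0, len(x))
        t_new = np.linspace(0.0, 1.0, n_points)
        y = np.interp(t_new, t_old, x)
        # Amplitude-normalization (z-score) removes scale differences.
        normalized.append((y - y.mean()) / y.std())
    stacked = np.vstack(normalized)
    # Sum of across-repetition standard deviations at each time point.
    return stacked.std(axis=0).sum()
```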
Speech processing and production in two-year-old children acquiring isiXhosa: A tale of two children
Rossouw, Kate; Fish, Laura; Jansen, Charne; Manley, Natalie; Powell, Michelle; Rosen, Loren
2016-01-01
We investigated the speech processing and production of 2-year-old children acquiring isiXhosa in South Africa. Two children (2 years, 5 months; 2 years, 8 months) are presented as single cases. Speech input processing, stored phonological knowledge and speech output are described, based on data from auditory discrimination, naming, and repetition tasks. Both children were approximating adult levels of accuracy in their speech output, although naming was constrained by vocabulary. Performance across tasks was variable: One child showed a relative strength with repetition, and experienced most difficulties with auditory discrimination. The other performed equally well in naming and repetition, and obtained 100% on her auditory task. There are limited data regarding the typical development of isiXhosa, and the focus has mainly been on speech production. This exploratory study describes typical development of isiXhosa using a variety of tasks understood within a psycholinguistic framework. We describe some ways in which speech and language therapists can devise and carry out assessment with children in situations where few formal assessments exist, and also detail the challenges of such work. PMID:27245131
Electrocorticographic representations of segmental features in continuous speech
Lotte, Fabien; Brumberg, Jonathan S.; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L.; Guan, Cuntai; Schalk, Gerwin
2015-01-01
Acoustic speech output results from coordinated articulation of dozens of muscles, bones and cartilages of the vocal mechanism. While we commonly take the fluency and speed of our speech productions for granted, the neural mechanisms facilitating the requisite muscular control are not completely understood. Previous neuroimaging and electrophysiology studies of speech sensorimotor control have typically concentrated on speech sounds (i.e., phonemes, syllables and words) in isolation; sentence-length investigations have largely been used to inform coincident linguistic processing. In this study, we examined the neural representations of segmental features (place and manner of articulation, and voicing status) in the context of fluent, continuous speech production. We used recordings from the cortical surface [electrocorticography (ECoG)] to simultaneously evaluate the spatial topography and temporal dynamics of the neural correlates of speech articulation that may mediate the generation of hypothesized gestural or articulatory scores. We found that the representation of place of articulation involved broad networks of brain regions during all phases of speech production: preparation, execution and monitoring. In contrast, manner of articulation and voicing status were dominated by auditory cortical responses after speech had been initiated. These results provide new insight into the articulatory and auditory processes underlying speech production in terms of their motor requirements and acoustic correlates. PMID:25759647
Effects of Real-Time Cochlear Implant Simulation on Speech Perception and Production
ERIC Educational Resources Information Center
Casserly, Elizabeth D.
2013-01-01
Real-time use of spoken language is a fundamentally interactive process involving speech perception, speech production, linguistic competence, motor control, neurocognitive abilities such as working memory, attention, and executive function, environmental noise, conversational context, and--critically--the communicative interaction between…
Spatiotemporal imaging of cortical activation during verb generation and picture naming.
Edwards, Erik; Nagarajan, Srikantan S; Dalal, Sarang S; Canolty, Ryan T; Kirsch, Heidi E; Barbaro, Nicholas M; Knight, Robert T
2010-03-01
One hundred and fifty years of neurolinguistic research has identified the key structures in the human brain that support language. However, neither the classic neuropsychological approaches introduced by Broca (1861) and Wernicke (1874), nor modern neuroimaging employing PET and fMRI has been able to delineate the temporal flow of language processing in the human brain. We recorded the electrocorticogram (ECoG) from indwelling electrodes over left hemisphere language cortices during two common language tasks, verb generation and picture naming. We observed that the very high frequencies of the ECoG (high-gamma, 70-160 Hz) track language processing with spatial and temporal precision. Serial progression of activations is seen at a larger timescale, showing distinct stages of perception, semantic association/selection, and speech production. Within the areas supporting each of these larger processing stages, parallel (or "incremental") processing is observed. In addition to the traditional posterior vs. anterior localization for speech perception vs. production, we provide novel evidence for the role of premotor cortex in speech perception and of Wernicke's and surrounding cortex in speech production. The data are discussed with regard to current leading models of speech perception and production, and a "dual ventral stream" hybrid of leading speech perception models is given. Copyright (c) 2009 Elsevier Inc. All rights reserved.
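A hedged sketch of the high-gamma (70-160 Hz) envelope extraction that ECoG analyses of this kind typically rely on; the filter order and zero-phase filtering here are illustrative choices, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(ecog, fs, band=(70.0, 160.0)):
    """Band-pass one electrode's voltage trace and return its
    analytic-amplitude envelope; fs (Hz) must be well above 320 Hz."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, ecog)   # zero-phase band-pass
    return np.abs(hilbert(filtered))  # instantaneous amplitude
```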
ERIC Educational Resources Information Center
Haskins Labs., New Haven, CT.
This report on speech research contains 21 papers describing research conducted on a variety of topics concerning speech perception, processing, and production. The initial two reports deal with brain function in speech; several others concern ear function, both in terms of perception and information processing. A number of reports describe…
Computational neuroanatomy of speech production.
Hickok, Gregory
2012-01-05
Speech production has been studied predominantly from within two traditions, psycholinguistics and motor control. These traditions have rarely interacted, and the resulting chasm between these approaches seems to reflect a level of analysis difference: whereas motor control is concerned with lower-level articulatory control, psycholinguistics focuses on higher-level linguistic processing. However, closer examination of both approaches reveals a substantial convergence of ideas. The goal of this article is to integrate psycholinguistic and motor control approaches to speech production. The result of this synthesis is a neuroanatomically grounded, hierarchical state feedback control model of speech production.
Cohesive and coherent connected speech deficits in mild stroke.
Barker, Megan S; Young, Breanne; Robinson, Gail A
2017-05-01
Spoken language production theories and lesion studies highlight several important prelinguistic conceptual preparation processes involved in the production of cohesive and coherent connected speech. Cohesion and coherence broadly connect sentences with preceding ideas and the overall topic. Broader cognitive mechanisms may mediate these processes. This study aims to investigate (1) whether stroke patients without aphasia exhibit impairments in cohesion and coherence in connected speech, and (2) the role of attention and executive functions in the production of connected speech. Eighteen stroke patients (8 right hemisphere stroke [RHS]; 6 left [LHS]) and 21 healthy controls completed two self-generated narrative tasks to elicit connected speech. A multi-level analysis of within and between-sentence processing ability was conducted. Cohesion and coherence impairments were found in the stroke group, particularly RHS patients, relative to controls. In the whole stroke group, better performance on the Hayling Test of executive function, which taps verbal initiation/suppression, was related to fewer propositional repetitions and global coherence errors. Better performance on attention tasks was related to fewer propositional repetitions, and decreased global coherence errors. In the RHS group, aspects of cohesive and coherent speech were associated with better performance on attention tasks. Better Hayling Test scores were related to more cohesive and coherent speech in RHS patients, and more coherent speech in LHS patients. Thus, we documented connected speech deficits in a heterogeneous stroke group without prominent aphasia. Our results suggest that broader cognitive processes may play a role in producing connected speech at the early conceptual preparation stage. Copyright © 2017 Elsevier Inc. All rights reserved.
Pitch perception and production in congenital amusia: Evidence from Cantonese speakers.
Liu, Fang; Chan, Alice H D; Ciocca, Valter; Roquet, Catherine; Peretz, Isabelle; Wong, Patrick C M
2016-07-01
This study investigated pitch perception and production in speech and music in individuals with congenital amusia (a disorder of musical pitch processing) who are native speakers of Cantonese, a tone language with a highly complex tonal system. Sixteen Cantonese-speaking congenital amusics and 16 controls performed a set of lexical tone perception, production, singing, and psychophysical pitch threshold tasks. Their tone production accuracy and singing proficiency were subsequently judged by independent listeners, and subjected to acoustic analyses. Relative to controls, amusics showed impaired discrimination of lexical tones in both speech and non-speech conditions. They also received lower ratings for singing proficiency, producing larger pitch interval deviations and making more pitch interval errors compared to controls. Demonstrating higher pitch direction identification thresholds than controls for both speech syllables and piano tones, amusics nevertheless produced native lexical tones with comparable pitch trajectories and intelligibility as controls. Significant correlations were found between pitch threshold and lexical tone perception, music perception and production, but not between lexical tone perception and production for amusics. These findings provide further evidence that congenital amusia is a domain-general language-independent pitch-processing deficit that is associated with severely impaired music perception and production, mildly impaired speech perception, and largely intact speech production.
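The acoustic analyses are not spelled out in the abstract, but pitch interval deviation has a standard formulation: intervals are measured in semitones, 12·log2(f2/f1). A small sketch under the assumption of note-matched f0 estimates for production and target:

```python
import numpy as np

def interval_semitones(f0_hz):
    """Successive pitch intervals in semitones from an f0 contour in Hz."""
    f0 = np.asarray(f0_hz, dtype=float)
    return 12.0 * np.log2(f0[1:] / f0[:-1])

def interval_deviations(produced_hz, target_hz):
    """Absolute deviation of each produced interval from the target interval
    (inputs assumed note-matched, one f0 value per note)."""
    return np.abs(interval_semitones(produced_hz) - interval_semitones(target_hz))

# e.g., a sung note pair of 220 -> 246 Hz against a target of 220 -> 247 Hz
# deviates by about 0.07 semitones.
```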
Gauvin, Hanna S; De Baene, Wouter; Brass, Marcel; Hartsuiker, Robert J
2016-02-01
To minimize the number of errors in speech, and thereby facilitate communication, speech is monitored before articulation. It is, however, unclear at which level during speech production monitoring takes place, and what mechanisms are used to detect and correct errors. The present study investigated whether internal verbal monitoring takes place through the speech perception system, as proposed by perception-based theories of speech monitoring, or whether mechanisms independent of perception are applied, as proposed by production-based theories of speech monitoring. With the use of fMRI during a tongue twister task we observed that error detection in internal speech during noise-masked overt speech production and error detection in speech perception both recruit the same neural network, which includes pre-supplementary motor area (pre-SMA), dorsal anterior cingulate cortex (dACC), anterior insula (AI), and inferior frontal gyrus (IFG). Although production and perception recruit similar areas, as proposed by perception-based accounts, we did not find activation in superior temporal areas (which are typically associated with speech perception) during internal speech monitoring in speech production as hypothesized by these accounts. On the contrary, results are highly compatible with a domain general approach to speech monitoring, by which internal speech monitoring takes place through detection of conflict between response options, which is subsequently resolved by a domain general executive center (e.g., the ACC). Copyright © 2015 Elsevier Inc. All rights reserved.
A common functional neural network for overt production of speech and gesture.
Marstaller, L; Burianová, H
2015-01-22
The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
On the Nature of Speech Science.
ERIC Educational Resources Information Center
Peterson, Gordon E.
In this article the nature of the discipline of speech science is considered, and the various basic and applied areas of the discipline are discussed. The basic areas encompass the various processes of the physiology of speech production, the acoustical characteristics of speech, including the speech wave types and the information-bearing acoustic…
Scaling and universality in the human voice.
Luque, Jordi; Luque, Bartolo; Lacasa, Lucas
2015-04-06
Speech is a distinctive complex feature of human capabilities. In order to understand the physics underlying speech production, in this work, we empirically analyse the statistics of large human speech datasets spanning several languages. We first show that during speech, the energy is unevenly released and power-law distributed, reporting a robust, universal Gutenberg-Richter-like law in speech. We further show that such 'earthquakes in speech' show temporal correlations, as the interevent statistics are again power-law distributed. As this feature takes place in the intraphoneme range, we conjecture that the process responsible for this complex phenomenon is not cognitive, but it resides in the physiological (mechanical) mechanisms of speech production. Moreover, we show that these waiting time distributions are scale invariant under a renormalization group transformation, suggesting that the process of speech generation is indeed operating close to a critical point. These results are put in contrast with current paradigms in speech processing, which point towards low dimensional deterministic chaos as the origin of nonlinear traits in speech fluctuations. As these latter fluctuations are indeed the aspects that humanize synthetic speech, these findings may have an impact on future speech synthesis technologies. Results are robust and independent of the communication language or the number of speakers, pointing towards a universal pattern and yet another hint of complexity in human speech. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
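For the power-law claims, a minimal sketch of the standard continuous maximum-likelihood exponent estimate (Clauset-style; the authors' exact fitting procedure is not given in the abstract, and the variable names below are placeholders):

```python
import numpy as np

def powerlaw_alpha(x, x_min):
    """Continuous maximum-likelihood estimate of alpha for P(x) ~ x**(-alpha),
    x >= x_min (the Clauset-Shalizi-Newman estimator)."""
    x = np.asarray(x, dtype=float)
    tail = x[x >= x_min]
    return 1.0 + tail.size / np.log(tail / x_min).sum()

# 'energies' would hold per-event acoustic energy releases and 'waits' the
# inter-event times; the abstract reports both as power-law distributed.
# alpha_energy = powerlaw_alpha(energies, x_min=chosen_cutoff)
```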
Temporal factors affecting somatosensory–auditory interactions in speech processing
Ito, Takayuki; Gracco, Vincent L.; Ostry, David J.
2014-01-01
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study, we addressed further the relationship between somatosensory information and speech perceptual processing by addressing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory–auditory interaction in speech perception. We examined the changes in event-related potentials (ERPs) in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the ERP was reliably different from the two unisensory potentials. More importantly, the magnitude of the ERP difference varied as a function of the relative timing of the somatosensory–auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory–auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production. PMID:25452733
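The abstract's key comparison, the multisensory ERP versus the two unisensory ERPs, is often tested against an additive model; a minimal sketch of that comparison and of the 160-220 ms window average (the additive-model framing is an assumption here, not the authors' stated analysis):

```python
import numpy as np

def additivity_deviation(erp_multi, erp_somato, erp_auditory):
    """Deviation of the multisensory ERP from the sum of the unisensory ERPs
    (arrays shaped time x channels, trial-averaged)."""
    return erp_multi - (erp_somato + erp_auditory)

def window_mean(erp, times_s, t0=0.160, t1=0.220):
    """Mean amplitude in the 160-220 ms window highlighted in the abstract."""
    mask = (times_s >= t0) & (times_s <= t1)
    return erp[mask].mean(axis=0)
```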
Speech and Language Skills of Parents of Children with Speech Sound Disorders
ERIC Educational Resources Information Center
Lewis, Barbara A.; Freebairn, Lisa A.; Hansen, Amy J.; Miscimarra, Lara; Iyengar, Sudha K.; Taylor, H. Gerry
2007-01-01
Purpose: This study compared parents with histories of speech sound disorders (SSD) to parents without known histories on measures of speech sound production, phonological processing, language, reading, and spelling. Familial aggregation for speech and language disorders was also examined. Method: The participants were 147 parents of children with…
Articulatory speech synthesis and speech production modelling
NASA Astrophysics Data System (ADS)
Huang, Jun
This dissertation addresses the problem of speech synthesis and speech production modelling based on the fundamental principles of human speech production. Unlike the conventional source-filter model, which assumes the independence of the excitation and the acoustic filter, we treat the entire vocal apparatus as one system consisting of a fluid dynamic aspect and a mechanical part. We model the vocal tract by a three-dimensional moving geometry. We also model the sound propagation inside the vocal apparatus as a three-dimensional nonplane-wave propagation inside a viscous fluid described by Navier-Stokes equations. In our work, we first propose a combined minimum energy and minimum jerk criterion to estimate the dynamic vocal tract movements during speech production. Both theoretical error bound analysis and experimental results show that this method can achieve very close match at the target points and avoid the abrupt change in articulatory trajectory at the same time. Second, a mechanical vocal fold model is used to compute the excitation signal of the vocal tract. The advantage of this model is that it is closely coupled with the vocal tract system based on fundamental aerodynamics. As a result, we can obtain an excitation signal with much more detail than the conventional parametric vocal fold excitation model. Furthermore, strong evidence of source-tract interaction is observed. Finally, we propose a computational model of the fricative and stop types of sounds based on the physical principles of speech production. The advantage of this model is that it uses an exogenous process to model the additional nonsteady and nonlinear effects due to the flow mode, which are ignored by the conventional source- filter speech production model. A recursive algorithm is used to estimate the model parameters. Experimental results show that this model is able to synthesize good quality fricative and stop types of sounds. Based on our dissertation work, we carefully argue that the articulatory speech production model has the potential to flexibly synthesize natural-quality speech sounds and to provide a compact computational model for speech production that can be beneficial to a wide range of areas in speech signal processing.
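A hedged sketch of what a combined minimum-energy/minimum-jerk objective can look like for a single sampled articulatory trajectory; the discretization and weighting below are illustrative, not the dissertation's exact functional:

```python
import numpy as np

def combined_cost(traj, dt, w_energy=1.0, w_jerk=1.0):
    """Weighted minimum-energy plus minimum-jerk objective; minimizing this
    subject to passing through the articulatory target points yields a
    smooth movement estimate."""
    vel = np.gradient(traj, dt)                   # first derivative
    jerk = np.gradient(np.gradient(vel, dt), dt)  # third derivative
    energy_term = np.trapz(vel ** 2, dx=dt)       # energy-like penalty
    jerk_term = np.trapz(jerk ** 2, dx=dt)        # smoothness penalty
    return w_energy * energy_term + w_jerk * jerk_term
```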
Hashizume, Hiroshi; Taki, Yasuyuki; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Asano, Kohei; Takeuchi, Hikaru; Nouchi, Rui; Kotozaki, Yuka; Jeong, Hyeonjeong; Sugiura, Motoaki; Kawashima, Ryuta
2014-08-01
Older children are more successful at producing unfamiliar, non-native speech sounds than younger children during the initial stages of learning. To reveal the neuronal underpinning of the age-related increase in the accuracy of non-native speech production, we examined the developmental changes in activation involved in the production of novel speech sounds using functional magnetic resonance imaging. Healthy right-handed children (aged 6-18 years) were scanned while performing an overt repetition task and a perceptual task involving aurally presented non-native and native syllables. Productions of non-native speech sounds were recorded and evaluated by native speakers. The mouth regions in the bilateral primary sensorimotor areas were activated more significantly during the repetition task relative to the perceptual task. The hemodynamic response in the left inferior frontal gyrus pars opercularis (IFG pOp) specific to non-native speech sound production (defined by prior hypothesis) increased with age. Additionally, the accuracy of non-native speech sound production increased with age. These results provide the first evidence of developmental changes in the neural processes underlying the production of novel speech sounds. Our data further suggest that the recruitment of the left IFG pOp during the production of novel speech sounds was possibly enhanced due to the maturation of the neuronal circuits needed for speech motor planning. This, in turn, would lead to improvement in the ability to immediately imitate non-native speech. Copyright © 2014 Wiley Periodicals, Inc.
Song and speech: brain regions involved with perception and covert production.
Callan, Daniel E; Tsytsarev, Vassiliy; Hanakawa, Takashi; Callan, Akiko M; Katsuhara, Maya; Fukuyama, Hidenao; Turner, Robert
2006-07-01
This 3-T fMRI study investigates brain regions similarly and differentially involved with listening and covert production of singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved with these processes are predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs. A block design was employed in which the tasks for the subject were to listen passively to singing of the song lyrics, passively listen to speaking of the song lyrics, covertly sing the song lyrics visually presented, covertly speak the song lyrics visually presented, and to rest. The conjunction of passive listening and covert production tasks used in this study allow for general neural processes underlying both perception and production to be discerned that are not exclusively a result of stimulus induced auditory processing nor to low level articulatory motor control. Brain regions involved with both perception and production for singing as well as speech were found to include the left planum temporale (PT)/superior temporal parietal region, as well as left and right premotor cortex, lateral aspect of the VI lobule of posterior cerebellum, anterior superior temporal gyrus, and planum polare. Greater activity for the singing over the speech condition for both the listening and covert production tasks was found in the right planum temporale. Greater activity for singing over speech was also present in brain regions involved with consonance: the orbitofrontal cortex (listening task) and the subcallosal cingulate (covert production task). The results are consistent with the PT mediating representational transformation across auditory and motor domains in response to consonance for singing over that of speech. Hemispheric laterality was assessed by paired t tests between active voxels in the contrast of interest relative to the left-right flipped contrast of interest calculated from images normalized to the left-right reflected template. Consistent with some hypotheses regarding hemispheric specialization, a pattern of differential laterality for speech over singing (both covert production and listening tasks) occurs in the left temporal lobe, whereas singing over speech (listening task only) occurs in the right temporal lobe.
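Laterality in this study was assessed with paired t tests against left-right flipped contrasts; as a simpler illustration of the underlying idea, the conventional laterality index is sketched below (an assumption for illustration, not the authors' statistic):

```python
import numpy as np

def laterality_index(left_vals, right_vals):
    """Conventional laterality index over homologous left/right activations:
    +1 = fully left-lateralized, -1 = fully right-lateralized."""
    L, R = float(np.sum(left_vals)), float(np.sum(right_vals))
    return (L - R) / (L + R)
```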
Communicating by Language: The Speech Process.
ERIC Educational Resources Information Center
House, Arthur S., Ed.
This document reports on a conference focused on speech problems. The main objective of these discussions was to facilitate a deeper understanding of human communication through interaction of conference participants with colleagues in other disciplines. Topics discussed included speech production, feedback, speech perception, and development of…
Yamamoto, Kosuke; Kawabata, Hideaki
2014-12-01
We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.
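Delayed auditory feedback itself is implementationally simple: a fixed-length delay line between microphone and headphones. A minimal per-sample sketch, assuming 44.1 kHz audio and the 200 ms lag this study found most disruptive:

```python
from collections import deque

def make_daf(delay_samples):
    """Fixed delay line: each call takes one microphone sample and returns
    the sample from delay_samples ago (silence until the line fills)."""
    line = deque([0.0] * delay_samples)
    def step(sample):
        line.append(sample)
        return line.popleft()
    return step

# 200 ms at 44.1 kHz:
daf = make_daf(int(0.200 * 44100))   # 8820 samples
# Feeding the per-sample microphone stream through `daf` yields the
# speaker's own voice delayed by 200 ms.
```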
Goldrick, Matthew; Keshet, Joseph; Gustafson, Erin; Heller, Jordana; Needle, Jeremy
2016-04-01
Traces of the cognitive mechanisms underlying speaking can be found within subtle variations in how we pronounce sounds. While speech errors have traditionally been seen as categorical substitutions of one sound for another, acoustic/articulatory analyses show they partially reflect the intended sound. When "pig" is mispronounced as "big," the resulting /b/ sound differs from correct productions of "big," moving towards intended "pig"-revealing the role of graded sound representations in speech production. Investigating the origins of such phenomena requires detailed estimation of speech sound distributions; this has been hampered by reliance on subjective, labor-intensive manual annotation. Computational methods can address these issues by providing for objective, automatic measurements. We develop a novel high-precision computational approach, based on a set of machine learning algorithms, for measurement of elicited speech. The algorithms are trained on existing manually labeled data to detect and locate linguistically relevant acoustic properties with high accuracy. Our approach is robust, is designed to handle mis-productions, and overall matches the performance of expert coders. It allows us to analyze a very large dataset of speech errors (containing far more errors than the total in the existing literature), illuminating properties of speech sound distributions previously impossible to reliably observe. We argue that this provides novel evidence that two sources both contribute to deviations in speech errors: planning processes specifying the targets of articulation and articulatory processes specifying the motor movements that execute this plan. These findings illustrate how a much richer picture of speech provides an opportunity to gain novel insights into language processing. Copyright © 2016 Elsevier B.V. All rights reserved.
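The paper's detectors are trained machine-learning models; as a deliberately simplified stand-in for one such measurement, the sketch below locates a stop burst from short-time energy alone (the frame size and threshold are illustrative assumptions):

```python
import numpy as np

def detect_burst(signal, fs, frame_ms=5.0, k=4.0):
    """Crude stop-burst locator: time (s) of the first frame whose short-time
    energy exceeds k times the median frame energy; returns None if no frame
    qualifies. Real systems, like the paper's, use trained classifiers."""
    n = max(1, int(fs * frame_ms / 1000.0))
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    energy = (frames ** 2).sum(axis=1)
    above = np.nonzero(energy > k * np.median(energy))[0]
    return above[0] * n / fs if above.size else None
```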
Reilly, Kevin J.; Spencer, Kristie A.
2013-01-01
The current study investigated the processes responsible for selection of sounds and syllables during production of speech sequences in 10 adults with hypokinetic dysarthria from Parkinson’s disease, five adults with ataxic dysarthria, and 14 healthy control speakers. Speech production data from a choice reaction time task were analyzed to evaluate the effects of sequence length and practice on speech sound sequencing. Speakers produced sequences that were between one and five syllables in length over five experimental runs of 60 trials each. In contrast to the healthy speakers, speakers with hypokinetic dysarthria demonstrated exaggerated sequence length effects for both inter-syllable intervals (ISIs) and speech error rates. Conversely, speakers with ataxic dysarthria failed to demonstrate a sequence length effect on ISIs and were also the only group that did not exhibit practice-related changes in ISIs and speech error rates over the five experimental runs. The exaggerated sequence length effects in the hypokinetic speakers with Parkinson’s disease are consistent with an impairment of action selection during speech sequence production. The absence of length effects in the speakers with ataxic dysarthria is consistent with previous findings that indicate a limited capacity to buffer speech sequences in advance of their execution. In addition, the lack of practice effects in these speakers suggests that learning-related improvements in the production rate and accuracy of speech sequences involves processing by structures of the cerebellum. Together, the current findings inform models of serial control for speech in healthy speakers and support the notion that sequencing deficits contribute to speech symptoms in speakers with hypokinetic or ataxic dysarthria. In addition, these findings indicate that speech sequencing is differentially impaired in hypokinetic and ataxic dysarthria. PMID:24137121
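The core dependent measure here, the inter-syllable interval (ISI), follows directly from syllable onset times; a minimal sketch of the sequence-length analysis (onset times assumed already labeled):

```python
import numpy as np

def mean_isi_by_length(onsets_by_length):
    """Mean inter-syllable interval (s) per sequence length (1-5 syllables).

    onsets_by_length maps a length to a list of trials; each trial is the
    labeled syllable-onset times for one utterance. Growth of mean ISI with
    length is the 'sequence length effect' discussed above (single-syllable
    trials contribute no intervals)."""
    return {length: float(np.mean([np.mean(np.diff(t)) for t in trials
                                   if len(t) > 1]))
            for length, trials in onsets_by_length.items()}
```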
Lessons Learned in Part-of-Speech Tagging of Conversational Speech
2010-10-01
Mechanisms of Interaction in Speech Production
ERIC Educational Resources Information Center
Baese-Berk, Melissa; Goldrick, Matthew
2009-01-01
Many theories predict the presence of interactive effects involving information represented by distinct cognitive processes in speech production. There is considerably less agreement regarding the precise cognitive mechanisms that underlie these interactive effects. For example, are they driven by purely production-internal mechanisms (e.g., Dell,…
[Modeling developmental aspects of sensorimotor control of speech production].
Kröger, B J; Birkholz, P; Neuschaefer-Rube, C
2007-05-01
Detailed knowledge of the neurophysiology of speech acquisition is important for understanding the developmental aspects of speech perception and production and for understanding developmental disorders of speech perception and production. A computer-implemented neural model of sensorimotor control of speech production was developed. The model is capable of demonstrating the neural functions of different cortical areas during speech production in detail. (i) Two sensory and two motor maps or neural representations and the appertaining neural mappings or projections establish the sensorimotor feedback control system. These maps and mappings are already formed and trained during the prelinguistic phase of speech acquisition. (ii) The feedforward sensorimotor control system comprises the lexical map (representations of sounds, syllables, and words of the first language) and the mappings from lexical to sensory and to motor maps. The training of the appertaining mappings forms the linguistic phase of speech acquisition. (iii) Three prelinguistic learning phases--i.e., silent mouthing, quasi stationary vocalic articulation, and realisation of articulatory protogestures--can be defined on the basis of our simulation studies using the computational neural model. These learning phases can be associated with temporal phases of prelinguistic speech acquisition obtained from natural data. The neural model illuminates the detailed function of specific cortical areas during speech production. In particular it can be shown that developmental disorders of speech production may result from a delayed or incorrect process within one of the prelinguistic learning phases defined by the neural model.
ERIC Educational Resources Information Center
Chasaide, Ailbhe Ni; Davis, Eugene
The data processing system used at Trinity College's Centre for Language and Communication Studies (Ireland) enables computer-automated collection and analysis of phonetic data and has many advantages for research on speech production. The system allows accurate handling of large quantities of data, eliminates many of the limitations of manual…
Clustered functional MRI of overt speech production.
Sörös, Peter; Sokoloff, Lisa Guttman; Bose, Arpita; McIntosh, Anthony R; Graham, Simon J; Stuss, Donald T
2006-08-01
To investigate the neural network of overt speech production, event-related fMRI was performed in 9 young healthy adult volunteers. A clustered image acquisition technique was chosen to minimize speech-related movement artifacts. Functional images were acquired during the production of oral movements and of speech of increasing complexity (isolated vowel as well as monosyllabic and trisyllabic utterances). This imaging technique and behavioral task enabled depiction of the articulo-phonologic network of speech production from the supplementary motor area at the cranial end to the red nucleus at the caudal end. Speaking a single vowel and performing simple oral movements involved very similar activation of the cortical and subcortical motor systems. More complex, polysyllabic utterances were associated with additional activation in the bilateral cerebellum, reflecting increased demand on speech motor control, and additional activation in the bilateral temporal cortex, reflecting the stronger involvement of phonologic processing.
Cooke, Martin; Lu, Youyi
2010-10-01
Talkers change the way they speak in noisy conditions. For energetic maskers, speech production changes are relatively well-understood, but less is known about how informational maskers such as competing speech affect speech production. The current study examines the effect of energetic and informational maskers on speech production by talkers speaking alone or in pairs. Talkers produced speech in quiet and in backgrounds of speech-shaped noise, speech-modulated noise, and competing speech. Relative to quiet, speech output level and fundamental frequency increased and spectral tilt flattened in proportion to the energetic masking capacity of the background. In response to modulated backgrounds, talkers were able to reduce substantially the degree of temporal overlap with the noise, with greater reduction for the competing speech background. Reduction in foreground-background overlap can be expected to lead to a release from both energetic and informational masking for listeners. Passive changes in speech rate, mean pause length or pause distribution cannot explain the overlap reduction, which appears instead to result from a purposeful process of listening while speaking. Talkers appear to monitor the background and exploit upcoming pauses, a strategy which is particularly effective for backgrounds containing intelligible speech.
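A hedged sketch of one way to quantify foreground-background temporal overlap from frame energies; the frame size and the 40 dB activity threshold are illustrative, and the paper's exact metric may differ:

```python
import numpy as np

def temporal_overlap(fg, bg, fs, frame_s=0.02, floor_db=-40.0):
    """Fraction of frames in which both the talker (fg) and the background
    (bg) are energetically active, i.e., within 40 dB of their own peak."""
    n = int(frame_s * fs)
    def active(x):
        frames = x[: len(x) // n * n].reshape(-1, n)
        db = 10.0 * np.log10((frames ** 2).mean(axis=1) + 1e-12)
        return db > db.max() + floor_db
    a, b = active(np.asarray(fg, float)), active(np.asarray(bg, float))
    m = min(a.size, b.size)
    return float(np.logical_and(a[:m], b[:m]).mean())
```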
Overlapping networks engaged during spoken language production and its cognitive control.
Geranmayeh, Fatemeh; Wise, Richard J S; Mehta, Amrish; Leech, Robert
2014-06-25
Spoken language production is a complex brain function that relies on large-scale networks. These include domain-specific networks that mediate language-specific processes, as well as domain-general networks mediating top-down and bottom-up attentional control. Language control is thought to involve a left-lateralized fronto-temporal-parietal (FTP) system. However, these regions do not always activate for language tasks and similar regions have been implicated in nonlinguistic cognitive processes. These inconsistent findings suggest that either the left FTP is involved in multidomain cognitive control or that there are multiple spatially overlapping FTP systems. We present evidence from an fMRI study using multivariate analysis to identify spatiotemporal networks involved in spoken language production in humans. We compared spoken language production (Speech) with multiple baselines, counting (Count), nonverbal decision (Decision), and "rest," to pull apart the multiple partially overlapping networks that are involved in speech production. A left-lateralized FTP network was activated during Speech and deactivated during Count and nonverbal Decision trials, implicating it in cognitive control specific to sentential spoken language production. A mirror right-lateralized FTP network was activated in the Count and Decision trials, but not Speech. Importantly, a second overlapping left FTP network showed relative deactivation in Speech. These three networks, with distinct time courses, overlapped in the left parietal lobe. Contrary to the standard model of the left FTP as being dominant for speech, we revealed a more complex pattern within the left FTP, including at least two left FTP networks with competing functional roles, only one of which was activated in speech production. Copyright © 2014 Geranmayeh et al.
Voxel-based morphometry of auditory and speech-related cortex in stutterers.
Beal, Deryk S; Gracco, Vincent L; Lafaille, Sophie J; De Nil, Luc F
2007-08-06
Stutterers demonstrate unique functional neural activation patterns during speech production, including reduced auditory activation, relative to nonstutterers. The extent to which these functional differences are accompanied by abnormal morphology of the brain in stutterers is unclear. This study examined the neuroanatomical differences in speech-related cortex between stutterers and nonstutterers using voxel-based morphometry. Results revealed significant differences in localized grey matter and white matter densities of left and right hemisphere regions involved in auditory processing and speech production.
The Role of Native-Language Knowledge in the Perception of Casual Speech in a Second Language
Mitterer, Holger; Tuinman, Annelie
2012-01-01
Casual speech processes, such as /t/-reduction, make word recognition harder. Additionally, word recognition is also harder in a second language (L2). Combining these challenges, we investigated whether L2 learners have recourse to knowledge from their native language (L1) when dealing with casual speech processes in their L2. In three experiments, production and perception of /t/-reduction was investigated. An initial production experiment showed that /t/-reduction occurred in both languages and patterned similarly in proper nouns but differed when /t/ was a verbal inflection. Two perception experiments compared the performance of German learners of Dutch with that of native speakers for nouns and verbs. Mirroring the production patterns, German learners’ performance strongly resembled that of native Dutch listeners when the reduced /t/ was part of a word stem, but deviated where /t/ was a verbal inflection. These results suggest that a casual speech process in a second language is problematic for learners when the process is not known from the learner’s native language, similar to what has been observed for phoneme contrasts. PMID:22811675
Orthography and Modality Influence Speech Production in Adults and Children
ERIC Educational Resources Information Center
Saletta, Meredith; Goffman, Lisa; Hogan, Tiffany P.
2016-01-01
Purpose: The acquisition of literacy skills influences the perception and production of spoken language. We examined if orthography influences implicit processing in speech production in child readers and in adult readers with low and high reading proficiency. Method: Children (n = 17), adults with typical reading skills (n = 17), and adults…
Musical melody and speech intonation: singing a different tune.
Zatorre, Robert J; Baum, Shari R
2012-01-01
Music and speech are often cited as characteristically human forms of communication. Both share the features of hierarchical structure, complex sound systems, and sensorimotor sequencing demands, and both are used to convey and influence emotions, among other functions [1]. Both music and speech also prominently use acoustical frequency modulations, perceived as variations in pitch, as part of their communicative repertoire. Given these similarities, and the fact that pitch perception and production involve the same peripheral transduction system (cochlea) and the same production mechanism (vocal tract), it might be natural to assume that pitch processing in speech and music would also depend on the same underlying cognitive and neural mechanisms. In this essay we argue that the processing of pitch information differs significantly for speech and music; specifically, we suggest that there are two pitch-related processing systems, one for more coarse-grained, approximate analysis and one for more fine-grained accurate representation, and that the latter is unique to music. More broadly, this dissociation offers clues about the interface between sensory and motor systems, and highlights the idea that multiple processing streams are a ubiquitous feature of neuro-cognitive architectures.
Ylinen, Sari; Nora, Anni; Leminen, Alina; Hakala, Tero; Huotilainen, Minna; Shtyrov, Yury; Mäkelä, Jyrki P; Service, Elisabet
2015-06-01
Speech production, both overt and covert, down-regulates the activation of auditory cortex. This is thought to be due to forward prediction of the sensory consequences of speech, contributing to a feedback control mechanism for speech production. Critically, however, these regulatory effects should be specific to speech content to enable accurate speech monitoring. To determine the extent to which such forward prediction is content-specific, we recorded the brain's neuromagnetic responses to heard multisyllabic pseudowords during covert rehearsal in working memory, contrasted with a control task. The cortical auditory processing of target syllables was significantly suppressed during rehearsal compared with control, but only when they matched the rehearsed items. This critical specificity to speech content enables accurate speech monitoring by forward prediction, as proposed by current models of speech production. The one-to-one phonological motor-to-auditory mappings also appear to serve the maintenance of information in phonological working memory. Further findings of right-hemispheric suppression in the case of whole-item matches and left-hemispheric enhancement for last-syllable mismatches suggest that speech production is monitored by 2 auditory-motor circuits operating on different timescales: Finer grain in the left versus coarser grain in the right hemisphere. Taken together, our findings provide hemisphere-specific evidence of the interface between inner and heard speech. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z
2015-01-01
The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.
Visual Feedback of Tongue Movement for Novel Speech Sound Learning
Katz, William F.; Mehta, Sonya
2015-01-01
Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing. PMID:26635571
Interaction and Representational Integration: Evidence from Speech Errors
ERIC Educational Resources Information Center
Goldrick, Matthew; Baker, H. Ross; Murphy, Amanda; Baese-Berk, Melissa
2011-01-01
We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated…
Speech Entrainment Compensates for Broca's Area Damage
Fridriksson, Julius; Basilakos, Alexandra; Hickok, Gregory; Bonilha, Leonardo; Rorden, Chris
2015-01-01
Speech entrainment (SE), the online mimicking of an audiovisual speech model, has been shown to increase speech fluency in patients with Broca's aphasia. However, not all individuals with aphasia benefit from SE. The purpose of this study was to identify patterns of cortical damage that predict a positive response to SE's fluency-inducing effects. Forty-four chronic patients with left hemisphere stroke (15 female) were included in this study. Participants completed two tasks: 1) spontaneous speech production, and 2) audiovisual SE. Number of different words per minute was calculated as a speech output measure for each task, with the difference between SE and spontaneous speech conditions yielding a measure of fluency improvement. Voxel-wise lesion-symptom mapping (VLSM) was used to relate the number of different words per minute for spontaneous speech, SE, and SE-related improvement to patterns of brain damage in order to predict lesion locations associated with the fluency-inducing response to speech entrainment. Individuals with Broca's aphasia demonstrated a significant increase in different words per minute during speech entrainment versus spontaneous speech. A similar pattern of improvement was not seen in patients with other types of aphasia. VLSM analysis revealed that damage to the inferior frontal gyrus predicted this response. Results suggest that SE exerts its fluency-inducing effects by providing a surrogate target for speech production via internal monitoring processes. Clinically, these results add further support for the use of speech entrainment to improve speech production and may help select patients for speech entrainment treatment. PMID:25989443
Speech perception as an active cognitive process
Heald, Shannon L. M.; Nusbaum, Howard C.
2014-01-01
One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process, which implies rigidity of processing with few demands on cognitive processing. An alternative view is that speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided. Note that this does not mean consciously guided but that information-contingent changes in early auditory encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity, and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing, such as masking noise in the environment or hearing loss. Although theories of speech perception have begun to incorporate some active processing, they seldom treat early speech encoding as plastic and attentionally guided. Recent research has suggested that speech perception is the product of both feedforward and feedback interactions between a number of brain regions that include descending projections perhaps as far downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and constraints of context dynamically determine cognitive resources recruited during perception including focused attention, learning, and working memory. Theories of speech perception need to go beyond the current corticocentric approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may provide new insights into ways in which hearing disorders and loss may be treated either through augmentation or therapy. PMID:24672438
Carroll, Julia M; Mundy, Ian R; Cunningham, Anna J
2014-09-01
It is well established that speech, language and phonological skills are closely associated with literacy, and that children with a family risk of dyslexia (FRD) tend to show deficits in each of these areas in the preschool years. This paper examines what the relationships are between FRD and these skills, and whether deficits in speech, language and phonological processing fully account for the increased risk of dyslexia in children with FRD. One hundred and fifty-three 4-6-year-old children, 44 of whom had FRD, completed a battery of speech, language, phonology and literacy tasks. Word reading and spelling were retested 6 months later, and text reading accuracy and reading comprehension were tested 3 years later. The children with FRD were at increased risk of developing difficulties in reading accuracy, but not reading comprehension. Four groups were compared: good and poor readers with and without FRD. In most cases good readers outperformed poor readers regardless of family history, but there was an effect of family history on naming and nonword repetition regardless of literacy outcome, suggesting a role for speech production skills as an endophenotype of dyslexia. Phonological processing predicted spelling, while language predicted text reading accuracy and comprehension. FRD was a significant additional predictor of reading and spelling after controlling for speech production, language and phonological processing, suggesting that children with FRD show additional difficulties in literacy that cannot be fully explained in terms of their language and phonological skills. © 2014 John Wiley & Sons Ltd.
Speech motor planning and execution deficits in early childhood stuttering.
Walsh, Bridget; Mettel, Kathleen Marie; Smith, Anne
2015-01-01
Five to eight percent of preschool children develop stuttering, a speech disorder with clearly observable, hallmark symptoms: sound repetitions, prolongations, and blocks. While the speech motor processes underlying stuttering have been widely documented in adults, few studies to date have assessed the speech motor dynamics of stuttering near its onset. We assessed fundamental characteristics of speech movements in preschool children who stutter and their fluent peers to determine if atypical speech motor characteristics described for adults are early features of the disorder or arise later in the development of chronic stuttering. Orofacial movement data were recorded from 58 children who stutter and 43 children who do not stutter aged 4;0 to 5;11 (years; months) in a sentence production task. For single speech movements and multiple speech movement sequences, we computed displacement amplitude, velocity, and duration. For the phrase level movement sequence, we computed an index of articulation coordination consistency for repeated productions of the sentence. Boys who stutter, but not girls, produced speech with reduced amplitudes and velocities of articulatory movement. All children produced speech with similar durations. Boys, particularly the boys who stuttered, had more variable patterns of articulatory coordination compared to girls. This study is the first to demonstrate sex-specific differences in speech motor control processes between preschool boys and girls who are stuttering. The sex-specific lag in speech motor development in many boys who stutter likely has significant implications for the dramatically different recovery rates between male and female preschoolers who stutter. Further, our findings document that atypical speech motor development is an early feature of stuttering.
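The coordination-consistency index described resembles the spatiotemporal index (STI) used in this literature; the sketch below assumes the common STI recipe (linear time-normalization to 50 points, z-score amplitude normalization, summed pointwise standard deviations), which may differ in detail from this study's computation:

```python
# A compact sketch of a coordination-consistency index in the spirit of the
# spatiotemporal index (STI): each repeated movement record is time- and
# amplitude-normalized, and the standard deviations across repetitions are
# summed. The normalization details are assumptions about this study.
import numpy as np

def sti(trajectories, n_points=50):
    norm = []
    for traj in trajectories:
        t_old = np.linspace(0, 1, len(traj))
        t_new = np.linspace(0, 1, n_points)
        resampled = np.interp(t_new, t_old, traj)             # time-normalize
        z = (resampled - resampled.mean()) / resampled.std()  # amplitude-normalize
        norm.append(z)
    return np.std(np.vstack(norm), axis=0).sum()  # sum of pointwise SDs

# Example: 10 noisy repetitions of the same lip-movement template.
rng = np.random.default_rng(2)
template = np.sin(np.linspace(0, 2 * np.pi, 200))
reps = [template + 0.05 * rng.standard_normal(200) for _ in range(10)]
print("STI:", sti(reps))
```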
Voice-processing technologies--their application in telecommunications.
Wilpon, J G
1995-01-01
As the telecommunications industry evolves over the next decade to provide the products and services that people will desire, several key technologies will become commonplace. Two of these, automatic speech recognition and text-to-speech synthesis, will provide users with more freedom on when, where, and how they access information. While these technologies are currently in their infancy, their capabilities are rapidly increasing and their deployment in today's telephone network is expanding. The economic impact of just one application, the automation of operator services, is well over $100 million per year. Yet there still are many technical challenges that must be resolved before these technologies can be deployed ubiquitously in products and services throughout the worldwide telephone network. These challenges include: (i) High level of accuracy. The technology must be perceived by the user as highly accurate, robust, and reliable. (ii) Easy to use. Speech is only one of several possible input/output modalities for conveying information between a human and a machine, much like a computer terminal or Touch-Tone pad on a telephone. It is not the final product. Therefore, speech technologies must be hidden from the user. That is, the burden of using the technology must be on the technology itself. (iii) Quick prototyping and development of new products and services. The technology must support the creation of new products and services based on speech in an efficient and timely fashion. In this paper I present a vision of the voice-processing industry with a focus on the areas with the broadest base of user penetration: speech recognition, text-to-speech synthesis, natural language processing, and speaker recognition technologies. The current and future applications of these technologies in the telecommunications industry will be examined in terms of their strengths, limitations, and the degree to which user needs have been or have yet to be met. Although noteworthy gains have been made in areas with potentially small user bases and in the more mature speech-coding technologies, these subjects are outside the scope of this paper. PMID:7479815
Sensorimotor Integration in Speech Processing: Computational Basis and Neural Organization
Hickok, Gregory; Houde, John; Rong, Feng
2011-01-01
Sensorimotor integration is an active domain of speech research and is characterized by two main ideas, that the auditory system is critically involved in speech production, and that the motor system is critically involved in speech perception. Despite the complementarity of these ideas, there is little crosstalk between these literatures. We propose an integrative model of the speech-related “dorsal stream” in which sensorimotor interaction primarily supports speech production, in the form of a state feedback control architecture. A critical component of this control system is forward sensory prediction, which affords a natural mechanism for limited motor influence on perception, as recent perceptual research has suggested. Evidence shows that this influence is modulatory but not necessary for speech perception. The neuroanatomy of the proposed circuit is discussed as well as some probable clinical correlates including conduction aphasia, stuttering, and aspects of schizophrenia. PMID:21315253
Acquisition by Processing Theory: A Theory of Everything?
ERIC Educational Resources Information Center
Carroll, Susanne E.
2004-01-01
Truscott and Sharwood Smith (henceforth T&SS) propose a novel theory of language acquisition, "Acquisition by Processing Theory" (APT), designed to account for both first and second language acquisition, monolingual and bilingual speech perception and parsing, and speech production. This is a tall order. Like any theoretically ambitious…
Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing
Rauschecker, Josef P; Scott, Sophie K
2010-01-01
Speech and language are considered uniquely human abilities: animals have communication systems, but they do not match human linguistic skills in terms of recursive structure and combinatorial power. Yet, in evolution, spoken language must have emerged from neural mechanisms at least partially available in animals. In this paper, we will demonstrate how our understanding of speech perception, one important facet of language, has profited from findings and theory in nonhuman primate studies. Chief among these are physiological and anatomical studies showing that primate auditory cortex, across species, shows patterns of hierarchical structure, topographic mapping and streams of functional processing. We will identify roles for different cortical areas in the perceptual processing of speech and review functional imaging work in humans that bears on our understanding of how the brain decodes and monitors speech. A new model connects structures in the temporal, frontal and parietal lobes linking speech perception and production. PMID:19471271
Toward dynamic magnetic resonance imaging of the vocal tract during speech production.
Ventura, Sandra M Rua; Freitas, Diamantino Rui S; Tavares, João Manuel R S
2011-07-01
The most recent and significant magnetic resonance imaging (MRI) improvements allow for the visualization of the vocal tract during speech production, which has been revealed to be a powerful tool in dynamic speech research. However, a synchronization technique with enhanced temporal resolution is still required. The study design was transversal in nature. Throughout this work, a technique for the dynamic study of the vocal tract with MRI by using the heart's signal to synchronize and trigger the imaging-acquisition process is presented and described. The technique in question is then used in the measurement of four speech articulatory parameters to assess three different syllables (articulatory gestures) of European Portuguese Language. The acquired MR images are automatically reconstructed so as to result in a variable sequence of images (slices) of different vocal tract shapes in articulatory positions associated with Portuguese speech sounds. The knowledge obtained as a result of the proposed technique represents a direct contribution to the improvement of speech synthesis algorithms, thereby allowing for novel perceptions in coarticulation studies, in addition to providing further efficient clinical guidelines in the pursuit of more proficient speech rehabilitation processes. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Short-Term Exposure to One Dialect Affects Processing of Another
ERIC Educational Resources Information Center
Hay, Jen; Drager, Katie; Warren, Paul
2010-01-01
It is well established that speakers accommodate in speech production. Recent work has shown a similar effect in perception--speech perception is affected by a listener's beliefs about the speaker. In this paper, we explore the consequences of such perceptual accommodation for experiments in speech perception and lexical access. Our interest is…
Altered Gesture and Speech Production in ASD Detract from In-Person Communicative Quality
ERIC Educational Resources Information Center
Morett, Laura M.; O'Hearn, Kirsten; Luna, Beatriz; Ghuman, Avniel Singh
2016-01-01
This study disentangled the influences of language and social processing on communication in autism spectrum disorder (ASD) by examining whether gesture and speech production differs as a function of social context. The results indicate that, unlike other adolescents, adolescents with ASD did not increase their coherency and engagement in the…
An Experimental Study of Interference between Receptive and Productive Processes Involving Speech
ERIC Educational Resources Information Center
Goldman-Eisler, Frieda; Cohen, Michele
1975-01-01
Reports an experiment designed to throw light on the interference between the reception and production of speech by controlling the level of interference between decoding and encoding, using hesitancy as an indicator of interference. This proved effective in spotting the levels at which interference takes place. (Author/RM)
Tremblay, Pascale; Small, Steven L.
2011-01-01
What is the nature of the interface between speech perception and production, where auditory and motor representations converge? One set of explanations suggests that during perception, the motor circuits involved in producing a perceived action are in some way enacting the action without actually causing movement (covert simulation) or sending along the motor information to be used to predict its sensory consequences (i.e., efference copy). Other accounts either reject entirely the involvement of motor representations in perception, or explain their role as being more supportive than integral, and not employing the identical circuits used in production. Using fMRI, we investigated whether there are brain regions that are conjointly active for both speech perception and production, and whether these regions are sensitive to articulatory (syllabic) complexity during both processes, which is predicted by a covert simulation account. A group of healthy young adults (1) observed a female speaker produce a set of familiar words (perception), and (2) observed and then repeated the words (production). There were two types of words, varying in articulatory complexity, as measured by the presence or absence of consonant clusters. The simple words contained no consonant cluster (e.g. “palace”), while the complex words contained one to three consonant clusters (e.g. “planet”). Results indicate that the left ventral premotor cortex (PMv) was significantly active during speech perception and speech production but that activation in this region was scaled to articulatory complexity only during speech production, revealing an incompletely specified efferent motor signal during speech perception. The right planum temporale (PT) was also active during speech perception and speech production, and activation in this region was scaled to articulatory complexity during both production and perception. These findings are discussed in the context of current theories of speech perception, with particular attention to accounts that include an explanatory role for mirror neurons. PMID:21664275
Linguistic Processing of Accented Speech Across the Lifespan
Cristia, Alejandrina; Seidl, Amanda; Vaughn, Charlotte; Schmale, Rachel; Bradlow, Ann; Floccia, Caroline
2012-01-01
In most of the world, people have regular exposure to multiple accents. Therefore, learning to quickly process accented speech is a prerequisite to successful communication. In this paper, we examine work on the perception of accented speech across the lifespan, from early infancy to late adulthood. Unfamiliar accents initially impair linguistic processing by infants, children, younger adults, and older adults, but listeners of all ages come to adapt to accented speech. Emergent research also goes beyond these perceptual abilities, by assessing links with production and the relative contributions of linguistic knowledge and general cognitive skills. We conclude by underlining points of convergence across ages, and the gaps that remain for future work. PMID:23162513
Speech motor development: Integrating muscles, movements, and linguistic units.
Smith, Anne
2006-01-01
A fundamental problem for those interested in human communication is to determine how ideas and the various units of language structure are communicated through speaking. The physiological concepts involved in the control of muscle contraction and movement are theoretically distant from the processing levels and units postulated to exist in language production models. A review of the literature on adult speakers suggests that they engage complex, parallel processes involving many units, including sentence, phrase, syllable, and phoneme levels. Infants must develop multilayered interactions among language and motor systems. This discussion describes recent studies of speech motor performance relative to varying linguistic goals during the childhood, teenage, and young adult years. Studies of the developing interactions between speech motor and language systems reveal both qualitative and quantitative differences between the developing and the mature systems. These studies provide an experimental basis for a more comprehensive theoretical account of how mappings between units of language and units of action are formed and how they function. Readers will be able to: (1) understand the theoretical differences between models of speech motor control and models of language processing, as well as the nature of the concepts used in the two different kinds of models, (2) explain the concept of coarticulation and state why this phenomenon has confounded attempts to determine the role of linguistic units, such as syllables and phonemes, in speech production, (3) describe the development of speech motor performance skills and specify quantitative and qualitative differences between speech motor performance in children and adults, and (4) describe experimental methods that allow scientists to study speech and limb motor control, as well as compare units of action used to study non-speech and speech movements.
Grammatical constraints on phonological encoding in speech production.
Heller, Jordana R; Goldrick, Matthew
2014-12-01
To better understand the influence of grammatical encoding on the retrieval and encoding of phonological word-form information during speech production, we examine how grammatical class constraints influence the activation of phonological neighbors (words phonologically related to the target--e.g., MOON, TWO for target TUNE). Specifically, we compare how neighbors that share a target's grammatical category (here, nouns) influence its planning and retrieval, assessed by picture naming latencies, and phonetic encoding, assessed by word productions in picture names, when grammatical constraints are strong (in sentence contexts) versus weak (bare naming). Within-category (noun) neighbors influenced planning time and phonetic encoding more strongly in sentence contexts. This suggests that grammatical encoding constrains phonological processing; the influence of phonological neighbors is grammatically dependent. Moreover, effects on planning times could not fully account for phonetic effects, suggesting that phonological interaction affects articulation after speech onset. These results support production theories integrating grammatical, phonological, and phonetic processes.
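The neighbor definition at work here (e.g., MOON, TWO for TUNE) can be made explicit with a small sketch; the toy lexicon and phoneme coding are illustrative assumptions:

```python
# A small sketch of "phonological neighbors": words whose phoneme strings
# are one edit (substitution, insertion, or deletion) from the target,
# optionally restricted to the target's grammatical category.
def edit_distance_one(a, b):
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if len(a) < len(b) else (b, a)
    # Distance 1 by insertion/deletion: removing one symbol must match.
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

# Toy lexicon: word -> (space-separated phonemes, grammatical category).
lexicon = {
    "tune": ("t u n", "noun"), "moon": ("m u n", "noun"),
    "two":  ("t u",   "noun"), "attune": ("a t u n", "verb"),
}

def neighbors(target, category=None):
    t_ph = lexicon[target][0].split()
    return [w for w, (ph, cat) in lexicon.items()
            if w != target and edit_distance_one(t_ph, ph.split())
            and (category is None or cat == category)]

print(neighbors("tune", category="noun"))  # ['moon', 'two']
```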
Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu
2016-10-01
The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese. It also contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.
Philosophy of Research in Motor Speech Disorders
ERIC Educational Resources Information Center
Weismer, Gary
2006-01-01
The primary objective of this position paper is to assess the theoretical and empirical support that exists for the Mayo Clinic view of motor speech disorders in general, and for oromotor, nonverbal tasks as a window to speech production processes in particular. Literature both in support of and against the Mayo clinic view and the associated use…
Neurophysiology of speech differences in childhood apraxia of speech.
Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole
2014-01-01
Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.
Terband, H; Maassen, B; Guenther, F H; Brumberg, J
2014-01-01
Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. The reader will be able to: (1) identify the difficulties in studying disordered speech motor development; (2) describe the differences in speech motor characteristics between SSD and subtype CAS; (3) describe the different types of learning that occur in the sensory-motor system during babbling and early speech acquisition; (4) identify the neural control subsystems involved in speech production; (5) describe the potential role of auditory self-monitoring in developmental speech disorders. Copyright © 2014 Elsevier Inc. All rights reserved.
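The reported interaction can be caricatured outside the DIVA model with a toy feedforward/feedback simulation; all gains and noise levels below are invented for illustration:

```python
# A schematic toy simulation (not the DIVA model itself) of the interaction
# described: a noisy feedforward motor command (motor processing deficit,
# MPD) can be partially corrected by auditory self-monitoring, unless that
# monitoring is also impaired (APD). All parameter values are invented.
import numpy as np

rng = np.random.default_rng(7)

def speak(target, motor_noise, feedback_gain, trials=1000):
    cmd = target + motor_noise * rng.standard_normal(trials)  # feedforward
    heard_error = target - cmd                                # self-monitoring
    produced = cmd + feedback_gain * heard_error              # feedback repair
    return np.abs(produced - target).mean()

target = 1.0
print("intact:   ", round(speak(target, 0.05, 0.8), 4))
print("MPD only: ", round(speak(target, 0.30, 0.8), 4))
print("MPD + APD:", round(speak(target, 0.30, 0.1), 4))
```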
Action planning and predictive coding when speaking
Wang, Jun; Mathalon, Daniel H.; Roach, Brian J.; Reilly, James; Keedy, Sarah; Sweeney, John A.; Ford, Judith M.
2014-01-01
Across the animal kingdom, sensations resulting from an animal's own actions are processed differently from sensations resulting from external sources, with self-generated sensations being suppressed. A forward model has been proposed to explain this process across sensorimotor domains. During vocalization, reduced processing of one's own speech is believed to result from a comparison of speech sounds to corollary discharges of intended speech production generated from efference copies of commands to speak. Until now, anatomical and functional evidence validating this model in humans has been indirect. Using EEG with anatomical MRI to facilitate source localization, we demonstrate that inferior frontal gyrus activity during the 300ms before speaking was associated with suppressed processing of speech sounds in auditory cortex around 100ms after speech onset (N1). These findings indicate that an efference copy from speech areas in prefrontal cortex is transmitted to auditory cortex, where it is used to suppress processing of anticipated speech sounds. About 100ms after N1, a subsequent auditory cortical component (P2) was not suppressed during talking. The combined N1 and P2 effects suggest that although sensory processing is suppressed as reflected in N1, perceptual gaps are filled as reflected in the lack of P2 suppression, explaining the discrepancy between sensory suppression and preserved sensory experiences. These findings, coupled with the coherence between relevant brain regions before and during speech, provide new mechanistic understanding of the complex interactions between action planning and sensory processing that provide for differentiated tagging and monitoring of one's own speech, processes disrupted in neuropsychiatric disorders. PMID:24423729
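The N1 comparison underlying this result can be sketched in a few lines, assuming synthetic epochs in place of recorded EEG and a conventional 80-120 ms window:

```python
# A bare-bones sketch of the N1-suppression comparison described: average
# epochs time-locked to speech onset for a Talk and a Listen condition and
# compare mean amplitude in an N1 window. Synthetic data stand in for real
# recordings; the window and effect sizes are assumptions.
import numpy as np

fs = 1000                      # sampling rate (Hz)
t = np.arange(-100, 300) / fs  # epoch: -100 to +300 ms around speech onset
rng = np.random.default_rng(3)

def make_epochs(n1_gain, n=100):
    # Negative-going N1 deflection centered near 100 ms, plus noise.
    n1 = -n1_gain * np.exp(-((t - 0.1) ** 2) / (2 * 0.015 ** 2))
    return n1 + 0.5 * rng.standard_normal((n, t.size))

talk, listen = make_epochs(n1_gain=1.0), make_epochs(n1_gain=2.0)
win = (t >= 0.08) & (t <= 0.12)
n1_talk = talk.mean(axis=0)[win].mean()
n1_listen = listen.mean(axis=0)[win].mean()
print(f"N1 talk: {n1_talk:.2f}, N1 listen: {n1_listen:.2f}, "
      f"suppression (reduced negativity): {n1_talk - n1_listen:.2f}")
```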
Smith, Anne; Goffman, Lisa; Sasisekaran, Jayanthi; Weber-Fox, Christine
2012-01-01
Stuttering is a disorder of speech production that typically arises in the preschool years, and many accounts of its onset and development implicate language and motor processes as critical underlying factors. There have, however, been very few studies of speech motor control processes in preschool children who stutter. Hearing novel nonwords and reproducing them engages multiple neural networks, including those involved in phonological analysis and storage and speech motor programming and execution. We used this task to explore speech motor and language abilities of 31 children aged 4–5 years who were diagnosed as stuttering. We also used sensitive and specific standardized tests of speech and language abilities to determine which of the children who stutter had concomitant language and/or phonological disorders. Approximately half of our sample of stuttering children had language and/or phonological disorders. As previous investigations would suggest, the stuttering children with concomitant language or speech sound disorders produced significantly more errors on the nonword repetition task compared to typically developing children. In contrast, the children who were diagnosed as stuttering, but who had normal speech sound and language abilities, performed the nonword repetition task with equal accuracy compared to their normally fluent peers. Analyses of interarticulator motions during accurate and fluent productions of the nonwords revealed that the children who stutter (without concomitant disorders) showed higher variability in oral motor coordination indices. These results provide new evidence that preschool children diagnosed as stuttering lag their typically developing peers in maturation of speech motor control processes. Educational objectives: The reader will be able to: (a) discuss why performance on nonword repetition tasks has been investigated in children who stutter; (b) discuss why children who stutter in the current study had a higher incidence of concomitant language deficits compared to several other studies; (c) describe how performance differed on a nonword repetition test between children who stutter who do and do not have concomitant speech or language deficits; (d) make a general statement about speech motor control for nonword production in children who stutter compared to controls. PMID:23218217
Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram
2009-03-01
Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands audiovisual processing both in speech and language treatment and in the diagnosis of oral-facial apraxia. The purpose of this study was to investigate differences in audiovisual perception of speech as compared to non-speech oral gestures. Bimodal and unimodal speech and non-speech items were used and additionally discordant stimuli constructed, which were presented for imitation. This study examined a group of healthy volunteers and a group of patients with lesions of the left hemisphere. Patients made substantially more errors than controls, but the factors influencing imitation accuracy were more or less the same in both groups. Error analyses in both groups suggested different types of representations for speech as compared to the non-speech domain, with speech having a stronger weight on the auditory modality and non-speech processing on the visual modality. Additionally, this study was able to show that the McGurk effect is not limited to speech.
Peter, Beate; Button, Le; Stoel-Gammon, Carol; Chapman, Kathy; Raskind, Wendy H.
2013-01-01
The purpose of this study was to evaluate a global deficit in sequential processing as a candidate endophenotype in a family with familial childhood apraxia of speech (CAS). Of 10 adults and 13 children in a three-generational family with speech sound disorder (SSD) consistent with CAS, 3 adults and 6 children had past or present SSD diagnoses. Two preschoolers with unremediated CAS showed a high number of sequencing errors during single-word production. Performance on tasks with high sequential processing loads differentiated between the affected and unaffected family members, whereas there were no group differences in tasks with low processing loads. Adults with a history of SSD produced more sequencing errors during nonword and multisyllabic real word imitation, compared to those without such a history. Results are consistent with a global deficit in sequential processing that influences speech development as well as cognitive and linguistic processing. PMID:23339324
Advances in natural language processing.
Hirschberg, Julia; Manning, Christopher D
2015-07-17
Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.
Advancements in robust algorithm formulation for speaker identification of whispered speech
NASA Astrophysics Data System (ADS)
Fan, Xing
Whispered speech is an alternative speech production mode from neutral speech, which is used by talkers intentionally in natural conversational scenarios to protect privacy and to avoid certain content from being overheard/made public. Due to the profound differences between whispered and neutral speech in production mechanism and the absence of whispered adaptation data, the performance of speaker identification systems trained with neutral speech degrades significantly. This dissertation therefore focuses on developing a robust closed-set speaker recognition system for whispered speech by using no or limited whispered adaptation data from non-target speakers. This dissertation proposes the concept of "High"/"Low" performance whispered data for the purpose of speaker identification. A variety of acoustic properties are identified that contribute to the quality of whispered data. An acoustic analysis is also conducted to compare the phoneme/speaker dependency of the differences between whispered and neutral data in the feature domain. The observations from those acoustic analyses are new in this area and also serve as guidance for developing robust speaker identification systems for whispered speech. This dissertation further proposes two systems for speaker identification of whispered speech. One system focuses on front-end processing. A two-dimensional feature space is proposed to search for "Low"-quality performance based whispered utterances and separate feature mapping functions are applied to vowels and consonants respectively in order to retain the speaker's information shared between whispered and neutral speech. The other system focuses on speech-mode-independent model training. The proposed method generates pseudo whispered features from neutral features by using the statistical information contained in a whispered Universal Background Model (UBM) trained from extra collected whispered data from non-target speakers. Four modeling methods are proposed for the transformation estimation in order to generate the pseudo whispered features. Both of the above two systems demonstrate a significant improvement over the baseline system on the evaluation data. This dissertation has therefore contributed to providing a scientific understanding of the differences between whispered and neutral speech as well as improved front-end processing and modeling methods for speaker identification of whispered speech. Such advancements will ultimately contribute to improving the robustness of speech processing systems.
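As a point of reference for the closed-set task, here is a minimal GMM-based speaker-identification sketch; it does not reproduce the dissertation's feature mapping or pseudo-whisper generation, and the feature dimensions and mixture sizes are assumptions:

```python
# A minimal closed-set GMM speaker-identification baseline sketch. In the
# whispered case, test features would first pass through a feature-mapping
# or pseudo-whisper step, which is not reproduced here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
n_speakers, dim = 3, 13  # e.g., 13 MFCCs per frame (assumed)

# Stand-in "neutral speech" training features per speaker.
train = [rng.standard_normal((500, dim)) + rng.uniform(-2, 2, dim)
         for _ in range(n_speakers)]
models = [GaussianMixture(n_components=4, random_state=0).fit(X) for X in train]

# A test utterance from speaker 1 (matched-mode features for simplicity).
test = train[1][:100] + 0.1 * rng.standard_normal((100, dim))
scores = [m.score(test) for m in models]  # average log-likelihood per frame
print("identified speaker:", int(np.argmax(scores)))
```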
ERIC Educational Resources Information Center
Hodgson, Catherine; Lambon Ralph, Matthew A.
2008-01-01
Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…
Perceptions of Refusals to Invitations: Exploring the Minds of Foreign Language Learners
ERIC Educational Resources Information Center
Felix-Brasdefer, J. Cesar
2008-01-01
Descriptions of speech act realisations of native and non-native speakers abound in the cross-cultural and interlanguage pragmatics literature. Yet, what is lacking is an analysis of the cognitive processes involved in the production of speech acts. This study examines the cognitive processes and perceptions of learners of Spanish when refusing…
Can mergers-in-progress be unmerged in speech accommodation?
Babel, Molly; McAuliffe, Michael; Haber, Graham
2013-01-01
This study examines spontaneous phonetic accommodation of a dialect with distinct categories by speakers who are in the process of merging those categories. We focus on the merger of the NEAR and SQUARE lexical sets in New Zealand English, presenting New Zealand participants with an unmerged speaker of Australian English. Mergers-in-progress are a uniquely interesting sound change as they showcase the asymmetry between speech perception and production. Yet, we examine mergers using spontaneous phonetic imitation, which is a phenomenon that is necessarily a behavior where perceptual input influences speech production. Phonetic imitation is quantified by a perceptual measure and an acoustic calculation of mergedness using a Pillai-Bartlett trace. The results from both analyses indicate spontaneous phonetic imitation is moderated by extra-linguistic factors such as the valence of assigned conditions and social bias. We also find evidence for a decrease in the degree of mergedness in post-exposure productions. Taken together, our results suggest that under the appropriate conditions New Zealanders phonetically accommodate to Australian English and that in the process of speech imitation, mergers-in-progress can, but do not consistently, become less merged. PMID:24069011
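The Pillai-Bartlett trace used as the mergedness measure can be computed directly from the between- and within-class SSCP matrices of the formant data; the sketch below uses synthetic (F1, F2) values:

```python
# A small numpy sketch of the Pillai-Bartlett trace as a mergedness measure:
# for two vowel classes measured on (F1, F2), Pillai = trace(H (H + E)^-1),
# where H and E are the between- and within-class SSCP matrices. A score
# near 1 indicates well-separated (unmerged) categories; near 0, merged.
import numpy as np

def pillai(groups):
    grand = np.vstack(groups).mean(axis=0)
    H = sum(len(g) * np.outer(g.mean(0) - grand, g.mean(0) - grand)
            for g in groups)
    E = sum((g - g.mean(0)).T @ (g - g.mean(0)) for g in groups)
    return np.trace(H @ np.linalg.inv(H + E))

# Synthetic formant tokens stand in for measured vowels.
rng = np.random.default_rng(5)
near = rng.standard_normal((50, 2)) * 30 + [450, 2100]    # NEAR (F1, F2)
square = rng.standard_normal((50, 2)) * 30 + [550, 1900]  # SQUARE (F1, F2)
print("Pillai:", round(pillai([near, square]), 3))        # high -> unmerged
```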
Shin, Young Hoon; Seo, Jiwon
2016-01-01
People with hearing or speaking disabilities are deprived of the benefits of conventional speech recognition technology because it is based on acoustic signals. Recent research has focused on silent speech recognition systems that are based on the motions of a speaker’s vocal tract and articulators. Because most silent speech recognition systems use contact sensors that are very inconvenient to users or optical systems that are susceptible to environmental interference, a contactless and robust solution is hence required. Toward this objective, this paper presents a series of signal processing algorithms for a contactless silent speech recognition system using an impulse radio ultra-wide band (IR-UWB) radar. The IR-UWB radar is used to remotely and wirelessly detect motions of the lips and jaw. In order to extract the necessary features of lip and jaw motions from the received radar signals, we propose a feature extraction algorithm. The proposed algorithm noticeably improved speech recognition performance compared to the existing algorithm during our word recognition test with five speakers. We also propose a speech activity detection algorithm to automatically select speech segments from continuous input signals. Thus, speech recognition processing is performed only when speech segments are detected. Our testbed consists of commercial off-the-shelf radar products, and the proposed algorithms are readily applicable without designing specialized radar hardware for silent speech processing. PMID:27801867
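The speech activity detection step can be illustrated with a generic energy-threshold detector over radar frames; this is a stand-in for, not a reproduction of, the authors' algorithm:

```python
# An illustrative energy-threshold sketch of speech activity detection:
# frames of the received radar signal whose short-time variance exceeds a
# noise-floor estimate are flagged as speech segments.
import numpy as np

def detect_segments(frames, k=3.0, noise_frames=20):
    # frames: (n_frames, frame_len) array of radar samples.
    energy = frames.var(axis=1)
    floor = energy[:noise_frames].mean()  # assume the initial frames are idle
    active = energy > k * floor
    # Collapse the boolean mask into (start, end) frame-index segments.
    segs, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segs.append((start, i))
            start = None
    if start is not None:
        segs.append((start, len(active)))
    return segs

rng = np.random.default_rng(6)
idle = rng.standard_normal((40, 64)) * 0.1
moving = rng.standard_normal((30, 64))  # lip/jaw motion: higher variance
frames = np.vstack([idle, moving, idle])
print("speech segments (frame indices):", detect_segments(frames))
```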
Asynchronous sampling of speech with some vocoder experimental results
NASA Technical Reports Server (NTRS)
Babcock, M. L.
1972-01-01
The method of asynchronously sampling speech is based upon the derivatives of the acoustical speech signal. The following results are apparent from experiments to date: (1) It is possible to represent speech by a string of pulses of uniform amplitude, where the only information contained in the string is the spacing of the pulses in time; (2) the string of pulses may be produced in a simple analog manner; (3) the first derivative of the original speech waveform is the most important for the encoding process; (4) the resulting pulse train can be utilized to control an acoustical signal production system to regenerate the intelligence of the original speech.
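One plausible reading of result (3) -- emit a uniform-amplitude pulse at each local extremum of the waveform, located as a sign change of the first derivative, so that only the inter-pulse spacing carries information -- can be sketched as follows. This is an illustrative interpretation, not Babcock's analog circuit.

    import numpy as np

    def extrema_pulse_times(x, fs):
        """Encode a waveform as pulse times at local extrema.

        Pulses are placed where the first derivative changes sign, so
        the encoding retains only the spacing between pulses.
        """
        dx = np.diff(x)
        sign_change = np.where(np.sign(dx[:-1]) * np.sign(dx[1:]) < 0)[0] + 1
        return sign_change / fs  # pulse times in seconds

    fs = 8000
    t = np.arange(fs) / fs
    speech_like = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 700 * t)
    pulses = extrema_pulse_times(speech_like, fs)
    print(f"{len(pulses)} pulses; mean spacing {np.mean(np.diff(pulses)) * 1e3:.2f} ms")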
Neurophysiology of Speech Differences in Childhood Apraxia of Speech
Preston, Jonathan L.; Molfese, Peter J.; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia; Landi, Nicole
2014-01-01
Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes. PMID:25090016
A Neural Model for Language and Speech.
ERIC Educational Resources Information Center
Buckingham, Hugh W., Jr.; Hollien, Harry
1978-01-01
A neural model in the form of a servo-mechanism is developed to account for certain aspects of language and speech in the human nervous system. Emphasis is placed on encoding processes as well as on-going feedback during production. (SW)
NASA Astrophysics Data System (ADS)
Golfinopoulos, Elisa
Acoustic variability in fluent speech can arise at many stages in speech production planning and execution. For example, at the phonological encoding stage, the grouping of phonemes into syllables determines which segments are coarticulated and, by consequence, segment-level acoustic variation. Likewise phonetic encoding, which determines the spatiotemporal extent of articulatory gestures, will affect the acoustic detail of segments. Functional magnetic resonance imaging (fMRI) was used to measure brain activity of fluent adult speakers in four speaking conditions: fast, normal, clear, and emphatic (or stressed) speech. These speech manner changes typically result in acoustic variations that do not change the lexical or semantic identity of productions but do affect the acoustic saliency of phonemes, syllables and/or words. Acoustic responses recorded inside the scanner were assessed quantitatively using eight acoustic measures and sentence duration was used as a covariate of non-interest in the neuroimaging analysis. Compared to normal speech, emphatic speech was characterized acoustically by a greater difference between stressed and unstressed vowels in intensity, duration, and fundamental frequency, and neurally by increased activity in right middle premotor cortex and supplementary motor area, and bilateral primary sensorimotor cortex. These findings are consistent with right-lateralized motor planning of prosodic variation in emphatic speech. Clear speech involved an increase in average vowel and sentence durations and average vowel spacing, along with increased activity in left middle premotor cortex and bilateral primary sensorimotor cortex. These findings are consistent with an increased reliance on feedforward control, resulting in hyper-articulation, under clear as compared to normal speech. Fast speech was characterized acoustically by reduced sentence duration and average vowel spacing, and neurally by increased activity in left anterior frontal operculum and posterior dorsal inferior frontal gyrus pars opercularis -- regions thought to be involved in sequencing and phrase-level structural processing. Taken together these findings identify the acoustic and neural correlates of adjusting speech manner and underscore the different processing stages that can contribute to acoustic variability in fluent sentence production.
Musical Expertise and Second Language Learning
Chobert, Julie; Besson, Mireille
2013-01-01
Increasing evidence suggests that musical expertise influences brain organization and brain functions. Moreover, results at the behavioral and neurophysiological levels reveal that musical expertise positively influences several aspects of speech processing, from auditory perception to speech production. In this review, we focus on the main results of the literature that led to the idea that musical expertise may benefit second language acquisition. We discuss several interpretations that may account for the influence of musical expertise on speech processing in native and foreign languages, and we propose new directions for future research. PMID:24961431
ERIC Educational Resources Information Center
Carroll, Julia M.; Mundy, Ian R.; Cunningham, Anna J.
2014-01-01
It is well established that speech, language and phonological skills are closely associated with literacy, and that children with a family risk of dyslexia (FRD) tend to show deficits in each of these areas in the preschool years. This paper examines what the relationships are between FRD and these skills, and whether deficits in speech, language…
Cohesion and Joint Speech: Right Hemisphere Contributions to Synchronized Vocal Production
Jasmin, Kyle M.; McGettigan, Carolyn; Agnew, Zarinah K.; Lavan, Nadine; Josephs, Oliver; Cummins, Fred
2016-01-01
Synchronized behavior (chanting, singing, praying, dancing) is found in all human cultures and is central to religious, military, and political activities, which require people to act collaboratively and cohesively; however, we know little about the neural underpinnings of many kinds of synchronous behavior (e.g., vocal behavior) or its role in establishing and maintaining group cohesion. In the present study, we measured neural activity using fMRI while participants spoke simultaneously with another person. We manipulated whether the couple spoke the same sentence (allowing synchrony) or different sentences (preventing synchrony), and also whether the voice the participant heard was “live” (allowing rich reciprocal interaction) or prerecorded (with no such mutual influence). Synchronous speech was associated with increased activity in posterior and anterior auditory fields. When, and only when, participants spoke with a partner who was both synchronous and “live,” we observed a lack of the suppression of auditory cortex, which is commonly seen as a neural correlate of speech production. Instead, auditory cortex responded as though it were processing another talker's speech. Our results suggest that detecting synchrony leads to a change in the perceptual consequences of one's own actions: they are processed as though they were other-, rather than self-produced. This may contribute to our understanding of synchronized behavior as a group-bonding tool. SIGNIFICANCE STATEMENT Synchronized human behavior, such as chanting, dancing, and singing, are cultural universals with functional significance: these activities increase group cohesion and cause participants to like each other and behave more prosocially toward each other. Here we use fMRI brain imaging to investigate the neural basis of one common form of cohesive synchronized behavior: joint speaking (e.g., the synchronous speech seen in chants, prayers, pledges). Results showed that joint speech recruits additional right hemisphere regions outside the classic speech production network. Additionally, we found that a neural marker of self-produced speech, suppression of sensory cortices, did not occur during joint synchronized speech, suggesting that joint synchronized behavior may alter self-other distinctions in sensory processing. PMID:27122026
Cohesion and Joint Speech: Right Hemisphere Contributions to Synchronized Vocal Production.
Jasmin, Kyle M; McGettigan, Carolyn; Agnew, Zarinah K; Lavan, Nadine; Josephs, Oliver; Cummins, Fred; Scott, Sophie K
2016-04-27
Synchronized behavior (chanting, singing, praying, dancing) is found in all human cultures and is central to religious, military, and political activities, which require people to act collaboratively and cohesively; however, we know little about the neural underpinnings of many kinds of synchronous behavior (e.g., vocal behavior) or its role in establishing and maintaining group cohesion. In the present study, we measured neural activity using fMRI while participants spoke simultaneously with another person. We manipulated whether the couple spoke the same sentence (allowing synchrony) or different sentences (preventing synchrony), and also whether the voice the participant heard was "live" (allowing rich reciprocal interaction) or prerecorded (with no such mutual influence). Synchronous speech was associated with increased activity in posterior and anterior auditory fields. When, and only when, participants spoke with a partner who was both synchronous and "live," we observed a lack of the suppression of auditory cortex, which is commonly seen as a neural correlate of speech production. Instead, auditory cortex responded as though it were processing another talker's speech. Our results suggest that detecting synchrony leads to a change in the perceptual consequences of one's own actions: they are processed as though they were other-, rather than self-produced. This may contribute to our understanding of synchronized behavior as a group-bonding tool. Synchronized human behavior, such as chanting, dancing, and singing, are cultural universals with functional significance: these activities increase group cohesion and cause participants to like each other and behave more prosocially toward each other. Here we use fMRI brain imaging to investigate the neural basis of one common form of cohesive synchronized behavior: joint speaking (e.g., the synchronous speech seen in chants, prayers, pledges). Results showed that joint speech recruits additional right hemisphere regions outside the classic speech production network. Additionally, we found that a neural marker of self-produced speech, suppression of sensory cortices, did not occur during joint synchronized speech, suggesting that joint synchronized behavior may alter self-other distinctions in sensory processing. Copyright © 2016 Jasmin et al.
De Looze, Céline; Moreau, Noémie; Renié, Laurent; Kelly, Finnian; Ghio, Alain; Rico, Audrey; Audoin, Bertrand; Viallet, François; Pelletier, Jean; Petrone, Caterina
2017-05-24
Cognitive impairment (CI) affects 40-65% of patients with multiple sclerosis (MS). CI can have a negative impact on a patient's everyday activities, such as engaging in conversations. Speech production planning ability is crucial for successful verbal interactions and thus for preserving social and occupational skills. This study investigates the effect of cognitive-linguistic demand and CI on speech production planning in MS, as reflected in speech prosody. A secondary aim is to explore the clinical potential of prosodic features for the prediction of an individual's cognitive status in MS. A total of 45 subjects, that is, 22 healthy controls (HC) and 23 patients in early stages of relapsing-remitting MS, underwent neuropsychological tests probing specific cognitive processes involved in speech production planning. All subjects also performed a read speech task, in which they had to read isolated sentences manipulated for phonological length. Results show that the speech of MS patients with CI is mainly affected at the temporal level (articulation and speech rate, pause duration). Regression analyses further indicate that rate measures are correlated with working memory scores. In addition, linear discriminant analysis shows the ROC AUC of identifying MS patients with CI is 0.70 (95% confidence interval: 0.68-0.73). Our findings indicate that prosodic planning is deficient in patients with MS-CI and that the scope of planning depends on patients' cognitive abilities. We discuss how speech-based approaches could be used as an ecological method for the assessment and monitoring of CI in MS. © 2017 The British Psychological Society.
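The classification step reported here (linear discriminant analysis scored by ROC AUC) can be outlined as below. The prosodic feature values are invented placeholders that only mirror the study's design (22 HC, 23 MS patients; temporal measures such as rate and pause duration), not its data.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score

    # Hypothetical per-speaker features: [articulation rate (syll/s),
    # speech rate (syll/s), mean pause duration (s)]
    rng = np.random.default_rng(1)
    X_hc = rng.normal([5.2, 4.1, 0.35], [0.4, 0.4, 0.08], size=(22, 3))
    X_ms = rng.normal([4.6, 3.5, 0.55], [0.5, 0.5, 0.15], size=(23, 3))
    X = np.vstack([X_hc, X_ms])
    y = np.array([0] * 22 + [1] * 23)  # 1 = MS with cognitive impairment

    lda = LinearDiscriminantAnalysis()
    scores = cross_val_predict(lda, X, y, cv=5, method="predict_proba")[:, 1]
    print(f"cross-validated ROC AUC: {roc_auc_score(y, scores):.2f}")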
Brumberg, Jonathan S.; Krusienski, Dean J.; Chakrabarti, Shreya; Gunduz, Aysegul; Brunner, Peter; Ritaccio, Anthony L.; Schalk, Gerwin
2016-01-01
How the human brain plans, executes, and monitors continuous and fluent speech has remained largely elusive. For example, previous research has defined the cortical locations most important for different aspects of speech function, but has not yet yielded a definition of the temporal progression of involvement of those locations as speech progresses either overtly or covertly. In this paper, we uncovered the spatio-temporal evolution of neuronal population-level activity related to continuous overt speech, and identified those locations that shared activity characteristics across overt and covert speech. Specifically, we asked subjects to repeat continuous sentences aloud or silently while we recorded electrical signals directly from the surface of the brain (electrocorticography (ECoG)). We then determined the relationship between cortical activity and speech output across different areas of cortex and at sub-second timescales. The results highlight a spatio-temporal progression of cortical involvement in the continuous speech process that initiates utterances in frontal-motor areas and ends with the monitoring of auditory feedback in superior temporal gyrus. Direct comparison of cortical activity related to overt versus covert conditions revealed a common network of brain regions involved in speech that may implement orthographic and phonological processing. Our results provide one of the first characterizations of the spatiotemporal electrophysiological representations of the continuous speech process, and also highlight the common neural substrate of overt and covert speech. These results thereby contribute to a refined understanding of speech functions in the human brain. PMID:27875590
ERIC Educational Resources Information Center
Hantsch, Ansgar; Jescheniak, Jorg D.; Schriefers, Herbert
2009-01-01
A number of recent studies have questioned the idea that lexical selection during speech production is a competitive process. One type of evidence against selection by competition is the observation that in the picture-word interference task semantically related distractors may facilitate the naming of a picture, whereas the selection by…
ERIC Educational Resources Information Center
Smith, Anne; Goffman, Lisa; Sasisekaran, Jayanthi; Weber-Fox, Christine
2012-01-01
Stuttering is a disorder of speech production that typically arises in the preschool years, and many accounts of its onset and development implicate language and motor processes as critical underlying factors. There have, however, been very few studies of speech motor control processes in preschool children who stutter. Hearing novel nonwords and…
Network dysfunction predicts speech production after left hemisphere stroke.
Geranmayeh, Fatemeh; Leech, Robert; Wise, Richard J S
2016-03-09
To investigate the role of multiple distributed brain networks, including the default mode, fronto-temporo-parietal, and cingulo-opercular networks, which mediate domain-general and task-specific processes during speech production after aphasic stroke. We conducted an observational functional MRI study to investigate the effects of a previous left hemisphere stroke on functional connectivity within and between distributed networks as patients described pictures. Study design included various baseline tasks, and we compared results to those of age-matched healthy participants performing the same tasks. We used independent component and psychophysiological interaction analyses. Although activity within individual networks was not predictive of speech production, relative activity between networks was a predictor of both within-scanner and out-of-scanner language performance, over and above that predicted from lesion volume, age, sex, and years of education. Specifically, robust functional imaging predictors were the differential activity between the default mode network and both the left and right fronto-temporo-parietal networks, respectively activated and deactivated during speech. We also observed altered between-network functional connectivity of these networks in patients during speech production. Speech production is dependent on complex interactions among widely distributed brain networks, indicating that residual speech production after stroke depends on more than the restoration of local domain-specific functions. Our understanding of the recovery of function following focal lesions is not adequately captured by consideration of ipsilesional or contralesional brain regions taking over lost domain-specific functions, but is perhaps best considered as the interaction between what remains of domain-specific networks and domain-general systems that regulate behavior. © 2016 American Academy of Neurology.
Network dysfunction predicts speech production after left hemisphere stroke
Leech, Robert; Wise, Richard J.S.
2016-01-01
Objective: To investigate the role of multiple distributed brain networks, including the default mode, fronto-temporo-parietal, and cingulo-opercular networks, which mediate domain-general and task-specific processes during speech production after aphasic stroke. Methods: We conducted an observational functional MRI study to investigate the effects of a previous left hemisphere stroke on functional connectivity within and between distributed networks as patients described pictures. Study design included various baseline tasks, and we compared results to those of age-matched healthy participants performing the same tasks. We used independent component and psychophysiological interaction analyses. Results: Although activity within individual networks was not predictive of speech production, relative activity between networks was a predictor of both within-scanner and out-of-scanner language performance, over and above that predicted from lesion volume, age, sex, and years of education. Specifically, robust functional imaging predictors were the differential activity between the default mode network and both the left and right fronto-temporo-parietal networks, respectively activated and deactivated during speech. We also observed altered between-network functional connectivity of these networks in patients during speech production. Conclusions: Speech production is dependent on complex interactions among widely distributed brain networks, indicating that residual speech production after stroke depends on more than the restoration of local domain-specific functions. Our understanding of the recovery of function following focal lesions is not adequately captured by consideration of ipsilesional or contralesional brain regions taking over lost domain-specific functions, but is perhaps best considered as the interaction between what remains of domain-specific networks and domain-general systems that regulate behavior. PMID:26962070
Structure and Processing in Tunisian Arabic: Speech Error Data
ERIC Educational Resources Information Center
Hamrouni, Nadia
2010-01-01
This dissertation presents experimental research on speech errors in Tunisian Arabic. The nonconcatenative morphology of Arabic shows interesting interactions of phrasal and lexical constraints with morphological structure during language production. The central empirical questions revolve around properties of "exchange errors". These…
Trébuchon-Da Fonseca, Agnès; Bénar, Christian-G; Bartoloméi, Fabrice; Régis, Jean; Démonet, Jean-François; Chauvel, Patrick; Liégeois-Chauvel, Catherine
2009-03-01
Regions involved in language processing have been observed in the inferior part of the left temporal lobe. Although collectively labelled 'the Basal Temporal Language Area' (BTLA), these territories are functionally heterogeneous and are involved in language perception (i.e. reading or semantic task) or language production (speech arrest after stimulation). The objective of this study was to clarify the role of BTLA in the language network in an epileptic patient who displayed jargonaphasia. Intracerebral event-related potentials to verbal and non-verbal stimuli in auditory and visual modalities were recorded from BTLA. Time-frequency analysis was performed during ictal events. Evoked potentials and induced gamma-band activity provided direct evidence that BTLA is sensitive to language stimuli in both modalities, 350 ms after stimulation. In addition, spontaneous gamma-band discharges were recorded from this region during which we observed phonological jargon. The findings emphasize the multimodal nature of this region in speech perception. In the context of transient dysfunction, the patient's lexical semantic processing network is disrupted, reducing spoken output to meaningless phoneme combinations. This rare opportunity to study the BTLA "in vivo" demonstrates its pivotal role in lexico-semantic processing for speech production and its multimodal nature in speech perception.
D'Ausilio, A.; Maffongelli, L.; Bartoli, E.; Campanella, M.; Ferrari, E.; Berry, J.; Fadiga, L.
2014-01-01
The activation of listener's motor system during speech processing was first demonstrated by the enhancement of electromyographic tongue potentials as evoked by single-pulse transcranial magnetic stimulation (TMS) over tongue motor cortex. This technique is, however, technically challenging and enables only a rather coarse measurement of this motor mirroring. Here, we applied TMS to listeners’ tongue motor area in association with ultrasound tissue Doppler imaging to describe fine-grained tongue kinematic synergies evoked by passive listening to speech. Subjects listened to syllables requiring different patterns of dorso-ventral and antero-posterior movements (/ki/, /ko/, /ti/, /to/). Results show that passive listening to speech sounds evokes a pattern of motor synergies mirroring those occurring during speech production. Moreover, mirror motor synergies were more evident in those subjects showing good performances in discriminating speech in noise demonstrating a role of the speech-related mirror system in feed-forward processing the speaker's ongoing motor plan. PMID:24778384
Inhibitory control and the speech patterns of second language users.
Korko, Malgorzata; Williams, Simon A
2017-02-01
Inhibitory control (IC), an ability to suppress irrelevant and/or conflicting information, has been found to underlie performance on a variety of cognitive tasks, including bilingual language processing. This study examines the relationship between IC and the speech patterns of second language (L2) users from the perspective of individual differences. While the majority of studies have supported the role of IC in bilingual language processing using single-word production paradigms, this work looks at inhibitory processes in the context of extended speech, with a particular emphasis on disfluencies. We hypothesized that the speech of individuals with poorer IC would be characterized by reduced fluency. A series of regression analyses, in which we controlled for age and L2 proficiency, revealed that IC (in terms of accuracy on the Stroop task) could reliably predict the occurrence of reformulations and the frequency and duration of silent pauses in L2 speech. No statistically significant relationship was found between IC and other L2 spoken output measures, such as repetitions, filled pauses, and performance errors. Conclusions focus on IC as one out of a number of cognitive functions in the service of spoken language production. A more qualitative approach towards the question of whether L2 speakers rely on IC is advocated. © 2016 The British Psychological Society.
Lau, Johnny King L; Humphreys, Glyn W; Douis, Hassan; Balani, Alex; Bickerton, Wai-Ling; Rotshtein, Pia
2015-01-01
We report a lesion-symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions linked to both poor object naming and impairment in general visual-speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with the models proposing that object naming relies on a left-lateralised language dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and the more general difficulties in language processing.
A bilateral cortical network responds to pitch perturbations in speech feedback
Kort, Naomi S.; Nagarajan, Srikantan S.; Houde, John F.
2014-01-01
Auditory feedback is used to monitor and correct for errors in speech production, and one of the clearest demonstrations of this is the pitch perturbation reflex. During ongoing phonation, speakers respond rapidly to shifts of the pitch of their auditory feedback, altering their pitch production to oppose the direction of the applied pitch shift. In this study, we examine the timing of activity within a network of brain regions thought to be involved in mediating this behavior. To isolate auditory feedback processing relevant for motor control of speech, we used magnetoencephalography (MEG) to compare neural responses to speech onset and to transient (400ms) pitch feedback perturbations during speaking with responses to identical acoustic stimuli during passive listening. We found overlapping, but distinct bilateral cortical networks involved in monitoring speech onset and feedback alterations in ongoing speech. Responses to speech onset during speaking were suppressed in bilateral auditory and left ventral supramarginal gyrus/posterior superior temporal sulcus (vSMG/pSTS). In contrast, during pitch perturbations, activity was enhanced in bilateral vSMG/pSTS, bilateral premotor cortex, right primary auditory cortex, and left higher order auditory cortex. We also found speaking-induced delays in responses to both unaltered and altered speech in bilateral primary and secondary auditory regions, the left vSMG/pSTS and right premotor cortex. The network dynamics reveal the cortical processing involved in both detecting the speech error and updating the motor plan to create the new pitch output. These results implicate vSMG/pSTS as critical in both monitoring auditory feedback and initiating rapid compensation to feedback errors. PMID:24076223
An articulatorily constrained, maximum entropy approach to speech recognition and speech coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, J.
Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples--presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.
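For reference, the core computation an HMM-based recognizer performs -- the forward-algorithm likelihood of an observation sequence under a word model -- can be sketched for a discrete-observation HMM. This is the generic formulation the author contrasts with his articulatorily constrained model, not the proposed technique itself.

    import numpy as np

    def forward_log_likelihood(obs, log_A, log_B, log_pi):
        """Log-likelihood of a discrete observation sequence under an HMM.

        log_A:  (S, S) log transition matrix
        log_B:  (S, V) log emission matrix over V discrete symbols
        log_pi: (S,)   log initial state distribution
        """
        alpha = log_pi + log_B[:, obs[0]]
        for o in obs[1:]:
            # logsumexp over predecessor states, then emit symbol o
            alpha = log_B[:, o] + np.logaddexp.reduce(
                alpha[:, None] + log_A, axis=0)
        return np.logaddexp.reduce(alpha)

    # Toy 2-state, 3-symbol model
    log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
    log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
    log_pi = np.log([0.6, 0.4])
    print(forward_log_likelihood([0, 1, 2, 2], log_A, log_B, log_pi))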
Speech Recognition for A Digital Video Library.
ERIC Educational Resources Information Center
Witbrock, Michael J.; Hauptmann, Alexander G.
1998-01-01
Production of the meta-data supporting the Informedia Digital Video Library interface is automated using techniques derived from artificial intelligence research. Speech recognition and natural-language processing, information retrieval, and image analysis are applied to produce an interface that helps users locate information and navigate more…
Zheng, Yingjun; Wu, Chao; Li, Juanhua; Li, Ruikeng; Peng, Hongjun; She, Shenglin; Ning, Yuping; Li, Liang
2018-04-04
Speech recognition under noisy "cocktail-party" environments involves multiple perceptual/cognitive processes, including target detection, selective attention, irrelevant signal inhibition, sensory/working memory, and speech production. Compared to healthy listeners, people with schizophrenia are more vulnerable to masking stimuli and perform worse in speech recognition under speech-on-speech masking conditions. Although the schizophrenia-related speech-recognition impairment under "cocktail-party" conditions is associated with deficits of various perceptual/cognitive processes, it is crucial to know whether the brain substrates critically underlying speech detection against informational speech masking are impaired in people with schizophrenia. Using functional magnetic resonance imaging (fMRI), this study investigated differences between people with schizophrenia (n = 19, mean age = 33 ± 10 years) and their matched healthy controls (n = 15, mean age = 30 ± 9 years) in intra-network functional connectivity (FC) specifically associated with target-speech detection under speech-on-speech-masking conditions. The target-speech detection performance under the speech-on-speech-masking condition in participants with schizophrenia was significantly worse than that in matched healthy participants (healthy controls). Moreover, in healthy controls, but not participants with schizophrenia, the strength of intra-network FC within the bilateral caudate was positively correlated with the speech-detection performance under the speech-masking conditions. Compared to controls, patients showed altered spatial activity pattern and decreased intra-network FC in the caudate. In people with schizophrenia, the declined speech-detection performance under speech-on-speech masking conditions is associated with reduced intra-caudate functional connectivity, which normally contributes to detecting target speech against speech masking via its functions of suppressing masking-speech signals.
Speech recognition systems on the Cell Broadband Engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Jones, H; Vaidya, S
In this paper we describe our design, implementation, and first results of a prototype connected-phoneme-based speech recognition system on the Cell Broadband Engine (Cell/B.E.). Automatic speech recognition decodes speech samples into plain text (other representations are possible) and must process samples at real-time rates. Fortunately, the computational tasks involved in this pipeline are highly data-parallel and can receive significant hardware acceleration from vector-streaming architectures such as the Cell/B.E. Identifying and exploiting these parallelism opportunities is challenging, but also critical to improving system performance. We observed, from our initial performance timings, that a single Cell/B.E. processor can recognize speech from thousands of simultaneous voice channels in real time--a channel density that is orders-of-magnitude greater than the capacity of existing software speech recognizers based on CPUs (central processing units). This result emphasizes the potential for Cell/B.E.-based speech recognition and will likely lead to the future development of production speech systems using Cell/B.E. clusters.
Research in speech communication.
Flanagan, J
1995-01-01
Advances in digital speech processing are now supporting application and deployment of a variety of speech technologies for human/machine communication. In fact, new businesses are rapidly forming about these technologies. But these capabilities are of little use unless society can afford them. Happily, explosive advances in microelectronics over the past two decades have assured affordable access to this sophistication as well as to the underlying computing technology. The research challenges in speech processing remain in the traditionally identified areas of recognition, synthesis, and coding. These three areas have typically been addressed individually, often with significant isolation among the efforts. But they are all facets of the same fundamental issue--how to represent and quantify the information in the speech signal. This implies deeper understanding of the physics of speech production, the constraints that the conventions of language impose, and the mechanism for information processing in the auditory system. In ongoing research, therefore, we seek more accurate models of speech generation, better computational formulations of language, and realistic perceptual guides for speech processing--along with ways to coalesce the fundamental issues of recognition, synthesis, and coding. Successful solution will yield the long-sought dictation machine, high-quality synthesis from text, and the ultimate in low bit-rate transmission of speech. It will also open the door to language-translating telephony, where the synthetic foreign translation can be in the voice of the originating talker. PMID:7479806
Zourmand, Alireza; Mirhassani, Seyed Mostafa; Ting, Hua-Nong; Bux, Shaik Ismail; Ng, Kwan Hoong; Bilgen, Mehmet; Jalaludin, Mohd Amin
2014-07-25
The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to the tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of MRI images. Consequently, the articulatory parameters that are effectively measured as tongue movement is observed, and the specific shape of the tongue and its position for all six uttered Malay vowels are determined. Speech rehabilitation procedure demands some kind of visual perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters based on acoustic theory of speech production, an acoustic analysis based on the uttered vowels by subjects has been performed. As the acoustic speech and articulatory parameters of uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments reported a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production. PMID:25060583
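The reported formant-articulator relationship is a simple bivariate correlation; the sketch below computes it for hypothetical per-vowel values (the constriction locations and F1 values are invented for illustration, not the measured Malay data).

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical per-vowel measurements for six vowels:
    # tongue-body constriction location (cm from glottis) and F1 (Hz)
    constriction_loc = np.array([7.5, 8.2, 9.1, 10.3, 11.0, 12.4])
    f1 = np.array([320, 380, 450, 520, 600, 680])

    r, p = pearsonr(constriction_loc, f1)
    print(f"r = {r:.2f}, p = {p:.3f}")  # positive r, as the study reports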
Loss of regional accent after damage to the speech production network
Berthier, Marcelo L.; Dávila, Guadalupe; Moreno-Torres, Ignacio; Beltrán-Corbellini, Álvaro; Santana-Moreno, Daniel; Roé-Vellvé, Núria; Thurnhofer-Hemsi, Karl; Torres-Prioris, María José; Massone, María Ignacia; Ruiz-Cruces, Rafael
2015-01-01
Lesion-symptom mapping studies reveal that selective damage to one or more components of the speech production network can be associated with foreign accent syndrome, changes in regional accent (e.g., from Parisian accent to Alsatian accent), stronger regional accent, or re-emergence of a previously learned and dormant regional accent. Here, we report loss of regional accent after rapidly regressive Broca’s aphasia in three Argentinean patients who had suffered unilateral or bilateral focal lesions in components of the speech production network. All patients were monolingual speakers with three different native Spanish accents (Cordobés or central, Guaranítico or northeast, and Bonaerense). Samples of speech production from the patient with native Córdoba accent were compared with previous recordings of his voice, whereas data from the patient with native Guaranítico accent were compared with speech samples from one healthy control matched for age, gender, and native accent. Speech samples from the patient with native Buenos Aires’s accent were compared with data obtained from four healthy control subjects with the same accent. Analysis of speech production revealed discrete slowing in speech rate, inappropriate long pauses, and monotonous intonation. Phonemic production remained similar to those of healthy Spanish speakers, but phonetic variants peculiar to each accent (e.g., intervocalic aspiration of /s/ in Córdoba accent) were absent. While basic normal prosodic features of Spanish prosody were preserved, features intrinsic to melody of certain geographical areas (e.g., rising end F0 excursion in declarative sentences intoned with Córdoba accent) were absent. All patients were also unable to produce sentences with different emotional prosody. Brain imaging disclosed focal left hemisphere lesions involving the middle part of the motor cortex, the post-central cortex, the posterior inferior and/or middle frontal cortices, insula, anterior putamen and supplementary motor area. Our findings suggest that lesions affecting the middle part of the left motor cortex and other components of the speech production network disrupt neural processes involved in the production of regional accent features. PMID:26594161
The Relationship Between Speech Production and Speech Perception Deficits in Parkinson's Disease.
De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet
2016-10-01
This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectified through a standardized speech intelligibility assessment, acoustic analysis, and speech intensity measurements. Second, an overall estimation task and an intensity estimation task were administered to evaluate overall speech perception and speech intensity perception, respectively. Finally, correlation analysis was performed between the speech characteristics of the overall estimation task and the corresponding acoustic analysis. The interaction between speech production and speech intensity perception was investigated by an intensity imitation task. Acoustic analysis and speech intensity measurements demonstrated significant differences in speech production between patients with PD and the HCs. A different pattern in the auditory perception of speech and speech intensity was found in the PD group. Auditory perceptual deficits may influence speech production in patients with PD. The present results suggest a disturbed auditory perception related to an automatic monitoring deficit in PD.
Age differences in the motor control of speech: An fMRI study of healthy aging.
Tremblay, Pascale; Sato, Marc; Deschamps, Isabelle
2017-05-01
Healthy aging is associated with a decline in cognitive, executive, and motor processes that are concomitant with changes in brain activation patterns, particularly at high complexity levels. While speech production relies on all these processes, and is known to decline with age, the mechanisms that underlie these changes remain poorly understood, despite the importance of communication on everyday life. In this cross-sectional group study, we investigated age differences in the neuromotor control of speech production by combining behavioral and functional magnetic resonance imaging (fMRI) data. Twenty-seven healthy adults underwent fMRI while performing a speech production task consisting in the articulation of nonwords of different sequential and motor complexity. Results demonstrate strong age differences in movement time (MT), with longer and more variable MT in older adults. The fMRI results revealed extensive age differences in the relationship between BOLD signal and MT, within and outside the sensorimotor system. Moreover, age differences were also found in relation to sequential complexity within the motor and attentional systems, reflecting both compensatory and de-differentiation mechanisms. At very high complexity level (high motor complexity and high sequence complexity), age differences were found in both MT data and BOLD response, which increased in several sensorimotor and executive control areas. Together, these results suggest that aging of motor and executive control mechanisms may contribute to age differences in speech production. These findings highlight the importance of studying functionally relevant behavior such as speech to understand the mechanisms of human brain aging. Hum Brain Mapp 38:2751-2771, 2017. © 2017 Wiley Periodicals, Inc.
How reading differs from object naming at the neuronal level.
Price, C J; McCrory, E; Noppeney, U; Mechelli, A; Moore, C J; Biggio, N; Devlin, J T
2006-01-15
This paper uses whole brain functional neuroimaging in neurologically normal participants to explore how reading aloud differs from object naming in terms of neuronal implementation. In the first experiment, we directly compared brain activation during reading aloud and object naming. This revealed greater activation for reading in bilateral premotor, left posterior superior temporal and precuneus regions. In a second experiment, we segregated the object-naming system into object recognition and speech production areas by factorially manipulating the presence or absence of objects (pictures of objects or their meaningless scrambled counterparts) with the presence or absence of speech production (vocal vs. finger press responses). This demonstrated that the areas associated with speech production (object naming and repetitively saying "OK" to meaningless scrambled pictures) corresponded exactly to the areas where responses were higher for reading aloud than object naming in Experiment 1. Collectively the results suggest that, relative to object naming, reading increases the demands on shared speech production processes. At a cognitive level, enhanced activation for reading in speech production areas may reflect the multiple and competing phonological codes that are generated from the sublexical parts of written words. At a neuronal level, it may reflect differences in the speed with which different areas are activated and integrate with one another.
Some Problems in Psycholinguistics.
ERIC Educational Resources Information Center
Hadding-Koch, Kerstin
1968-01-01
Among the most important questions in psycholinguistics today are the following: By which processes does man organize and understand speech? Which are the smallest linguistic units and rules stored in the memory and used in the production and perception of speech? Are the same mechanisms at work in both cases? Discussed in this paper are…
Opposing and following responses in sensorimotor speech control: Why responses go both ways.
Franken, Matthias K; Acheson, Daniel J; McQueen, James M; Hagoort, Peter; Eisner, Frank
2018-06-04
When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some responses follow the perturbation. In the present study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is made. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: The system initially responds by doing the opposite of what it was doing. This effect and the nontrivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production system's state at the time of perturbation.
Direct speech quotations promote low relative-clause attachment in silent reading of English.
Yao, Bo; Scheepers, Christoph
2018-07-01
The implicit prosody hypothesis (Fodor, 1998, 2002) proposes that silent reading coincides with a default, implicit form of prosody to facilitate sentence processing. Recent research demonstrated that a more vivid form of implicit prosody is mentally simulated during silent reading of direct speech quotations (e.g., Mary said, "This dress is beautiful"), with neural and behavioural consequences (e.g., Yao, Belin, & Scheepers, 2011; Yao & Scheepers, 2011). Here, we explored the relation between 'default' and 'simulated' implicit prosody in the context of relative-clause (RC) attachment in English. Apart from confirming a general low RC-attachment preference in both production (Experiment 1) and comprehension (Experiments 2 and 3), we found that during written sentence completion (Experiment 1) or when reading silently (Experiment 2), the low RC-attachment preference was reliably enhanced when the critical sentences were embedded in direct speech quotations as compared to indirect speech or narrative sentences. However, when reading aloud (Experiment 3), direct speech did not enhance the general low RC-attachment preference. The results from Experiments 1 and 2 suggest a quantitative boost to implicit prosody (via auditory perceptual simulation) during silent production/comprehension of direct speech. By contrast, when reading aloud (Experiment 3), prosody becomes equally salient across conditions due to its explicit nature; indirect speech and narrative sentences thus become as susceptible to prosody-induced syntactic biases as direct speech. The present findings suggest a shared cognitive basis between default implicit prosody and simulated implicit prosody, providing a new platform for studying the effects of implicit prosody on sentence processing. Copyright © 2018 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Adank, Patti
2012-01-01
The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…
The functional neuroanatomy of language
NASA Astrophysics Data System (ADS)
Hickok, Gregory
2009-09-01
There has been substantial progress over the last several years in understanding aspects of the functional neuroanatomy of language. Some of these advances are summarized in this review. It will be argued that recognizing speech sounds is carried out in the superior temporal lobe bilaterally, that the superior temporal sulcus bilaterally is involved in phonological-level aspects of this process, that the frontal/motor system is not central to speech recognition although it may modulate auditory perception of speech, that conceptual access mechanisms are likely located in the lateral posterior temporal lobe (middle and inferior temporal gyri), that speech production involves sensory-related systems in the posterior superior temporal lobe in the left hemisphere, that the interface between perceptual and motor systems is supported by a sensory-motor circuit for vocal tract actions (not dedicated to speech) that is very similar to sensory-motor circuits found in primate parietal lobe, and that verbal short-term memory can be understood as an emergent property of this sensory-motor circuit. These observations are considered within the context of a dual stream model of speech processing in which one pathway supports speech comprehension and the other supports sensory-motor integration. Additional topics of discussion include the functional organization of the planum temporale for spatial hearing and speech-related sensory-motor processes, the anatomical and functional basis of a form of acquired language disorder, conduction aphasia, the neural basis of vocabulary development, and sentence-level/grammatical processing.
Coupled neural systems underlie the production and comprehension of naturalistic narrative speech
Silbert, Lauren J.; Honey, Christopher J.; Simony, Erez; Poeppel, David; Hasson, Uri
2014-01-01
Neuroimaging studies of language have typically focused on either production or comprehension of single speech utterances such as syllables, words, or sentences. In this study we used a new approach to functional MRI acquisition and analysis to characterize the neural responses during production and comprehension of complex real-life speech. First, using a time-warp based intrasubject correlation method, we identified all areas that are reliably activated in the brains of speakers telling a 15-min-long narrative. Next, we identified areas that are reliably activated in the brains of listeners as they comprehended that same narrative. This allowed us to identify networks of brain regions specific to production and comprehension, as well as those that are shared between the two processes. The results indicate that production of a real-life narrative is not localized to the left hemisphere but recruits an extensive bilateral network, which overlaps extensively with the comprehension system. Moreover, by directly comparing the neural activity time courses during production and comprehension of the same narrative we were able to identify not only the spatial overlap of activity but also areas in which the neural activity is coupled across the speaker’s and listener’s brains during production and comprehension of the same narrative. We demonstrate widespread bilateral coupling between production- and comprehension-related processing within both linguistic and nonlinguistic areas, exposing the surprising extent of shared processes across the two systems. PMID:25267658
Speech outcomes in Cantonese patients after glossectomy.
Wong, Ripley Kit; Poon, Esther Sok-Man; Woo, Cynthia Yuen-Man; Chan, Sabina Ching-Shun; Wong, Elsa Siu-Ping; Chu, Ada Wai-Sze
2007-08-01
We sought to determine the major factors affecting speech production of Cantonese-speaking glossectomized patients. Error patterns were analyzed. Forty-one Cantonese-speaking subjects who had undergone glossectomy ≥6 months previously were recruited. Speech production evaluation included (1) phonetic error analysis in nonsense syllables; (2) speech intelligibility in sentences evaluated by naive listeners; and (3) overall speech intelligibility in conversation evaluated by experienced speech therapists. Patients receiving adjuvant radiotherapy had significantly poorer segmental and connected speech production. Total or subtotal glossectomy also resulted in poor speech outcomes. Patients having free flap reconstruction showed the best speech outcomes. Patients without lymph node metastasis had significantly better speech scores when compared with patients with lymph node metastasis. Initial consonant production had the worst scores, while vowel production was the least affected. Speech outcomes of Cantonese-speaking glossectomized patients depended on the severity of the disease. Initial consonants had the greatest effect on speech intelligibility.
The role of linguistic experience in the processing of probabilistic information in production.
Gustafson, Erin; Goldrick, Matthew
2018-01-01
Speakers track the probability that a word will occur in a particular context and utilize this information during phonetic processing. For example, content words that have high probability within a discourse tend to be realized with reduced acoustic/articulatory properties. Such probabilistic information may influence L1 and L2 speech processing in distinct ways (reflecting differences in linguistic experience across groups and the overall difficulty of L2 speech processing). To examine this issue, L1 and L2 speakers performed a referential communication task, describing sequences of simple actions. The two groups of speakers showed similar effects of discourse-dependent probabilistic information on production, suggesting that L2 speakers can successfully track discourse-dependent probabilities and use such information to modulate phonetic processing.
Dick, Fred; Deutsch, Diana; Sereno, Marty
2013-01-01
It is normally obvious to listeners whether a human vocalization is intended to be heard as speech or song. However, the 2 signals are remarkably similar acoustically. A naturally occurring boundary case between speech and song has been discovered where a spoken phrase sounds as if it were sung when isolated and repeated. In the present study, an extensive search of audiobooks uncovered additional similar examples, which were contrasted with samples from the same corpus that do not sound like song, despite containing clear prosodic pitch contours. Using functional magnetic resonance imaging, we show that hearing these 2 closely matched stimuli is not associated with differences in response of early auditory areas. Rather, we find that a network of 8 regions, including the anterior superior temporal gyrus (STG) just anterior to Heschl's gyrus and the right midposterior STG, respond more strongly to speech perceived as song than to mere speech. This network overlaps a number of areas previously associated with pitch extraction and song production, confirming that phrases originally intended to be heard as speech can, under certain circumstances, be heard as song. Our results suggest that song processing compared with speech processing makes increased demands on pitch processing and auditory–motor integration. PMID:22314043
Rhythm as a Coordinating Device: Entrainment With Disordered Speech
Borrie, Stephanie A.; Liss, Julie M.
2014-01-01
Purpose The rhythmic entrainment (coordination) of behavior during human interaction is a powerful phenomenon, considered essential for successful communication, supporting social and emotional connection, and facilitating sense-making and information exchange. Disruption in entrainment likely occurs in conversations involving those with speech and language impairment, but its contribution to communication disorders has not been defined. As a first step to exploring this phenomenon in clinical populations, the present investigation examined the influence of disordered speech on the speech production properties of healthy interactants. Method Twenty-nine neurologically healthy interactants participated in a quasi-conversational paradigm, in which they read sentences (response) in response to hearing prerecorded sentences (exposure) from speakers with dysarthria (n = 4) and healthy controls (n = 4). Recordings of read sentences prior to the task were also collected (habitual). Results Findings revealed that interactants modified their speaking rate and pitch variation to align more closely with the disordered speech. Production shifts in these rhythmic properties, however, remained significantly different from corresponding properties in dysarthric speech. Conclusion Entrainment offers a new avenue for exploring speech and language impairment, addressing a communication process not currently explained by existing frameworks. This article offers direction for advancing this line of inquiry. PMID:24686410
Gutierrez-Sigut, Eva; Payne, Heather; MacSweeney, Mairéad
2015-01-01
Although there is consensus that the left hemisphere plays a critical role in language processing, some questions remain. Here we examine the influence of overt versus covert speech production on lateralization, the relationship between lateralization and behavioural measures of language performance and the strength of lateralization across the subcomponents of language. The present study used functional transcranial Doppler sonography (fTCD) to investigate lateralization of phonological and semantic fluency during both overt and covert word generation in right-handed adults. The laterality index (LI) was left lateralized in all conditions, and there was no difference in the strength of LI between overt and covert speech. This supports the validity of using overt speech in fTCD studies, another benefit of which is a reliable measure of speech production. PMID:24875468
Milovanov, Riia; Huotilainen, Minna; Esquef, Paulo A A; Alku, Paavo; Välimäki, Vesa; Tervaniemi, Mari
2009-08-28
We examined 10- to 12-year-old elementary school children's ability to preattentively process sound durations in music and speech stimuli. In total, 40 children had either advanced foreign-language production skills and higher musical aptitude, or less advanced results in both musicality and linguistic tests. Event-related potential (ERP) recordings of the mismatch negativity (MMN) show that duration changes in musical sounds are more prominently and accurately processed than changes in speech sounds. Moreover, children with advanced pronunciation and musicality skills displayed enhanced MMNs to duration changes in both speech and musical sounds. Thus, our study provides further evidence for the claim that musical aptitude and linguistic skills are interconnected and that the musical features of the stimuli may play a preponderant role in preattentive duration processing.
Concurrent Processing of Words and Their Replacements during Speech
ERIC Educational Resources Information Center
Hartsuiker, Robert J.; Catchpole, Ciara M.; de Jong, Nivja H.; Pickering, Martin J.
2008-01-01
Two picture naming experiments, in which an initial picture was occasionally replaced with another (target) picture, were conducted to study the temporal coordination of abandoning one word and resuming with another word in speech production. In Experiment 1, participants abandoned saying the initial name, and resumed with the name of the target…
Lexicality Drives Audio-Motor Transformations in Broca's Area
ERIC Educational Resources Information Center
Kotz, S. A.; D'Ausilio, A.; Raettig, T.; Begliomini, C.; Craighero, L.; Fabbri-Destro, M.; Zingales, C.; Haggard, P.; Fadiga, L.
2010-01-01
Broca's area is classically associated with speech production. Recently, Broca's area has also been implicated in speech perception and non-linguistic information processing. With respect to the latter function, Broca's area is considered to be a central area in a network constituting the human mirror system, which maps observed or heard actions…
Network Analysis with Stochastic Grammars
2015-09-17
Language Processing (NLP) domain SCFG … sentence into starting symbol. Figure 2 is an NLP part-of-speech example, modified from [38], of an SCFG production rule set that reads a limited set of … English sentences for the purpose of determining grammatical validity and meaning through part-of-speech assignment. In the NLP domain, each word is in…
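As a concrete illustration of the SCFG idea sketched in this snippet, the following minimal Python example samples English-like sentences from a small set of weighted production rules. The rules and probabilities are invented for illustration and are not taken from [38].

```python
# Toy stochastic context-free grammar (SCFG): each nonterminal expands to a
# right-hand side chosen according to the rule probabilities.
import random

rules = {
    "S":   [(("NP", "VP"), 1.0)],
    "NP":  [(("Det", "N"), 0.7), (("N",), 0.3)],
    "VP":  [(("V", "NP"), 0.6), (("V",), 0.4)],
    "Det": [(("the",), 0.6), (("a",), 0.4)],
    "N":   [(("analyst",), 0.5), (("report",), 0.5)],
    "V":   [(("reads",), 0.7), (("writes",), 0.3)],
}

def expand(symbol):
    """Recursively expand a symbol by sampling one production rule."""
    if symbol not in rules:                    # terminal: an actual word
        return [symbol]
    rhs_options, weights = zip(*rules[symbol])
    rhs = random.choices(rhs_options, weights=weights)[0]
    return [word for s in rhs for word in expand(s)]

print(" ".join(expand("S")))   # e.g. "the analyst reads a report"
```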
Reynolds, Michael G.; Schlöffel, Sophie; Peressotti, Francesca
2016-01-01
One approach used to gain insight into the processes underlying bilingual language comprehension and production examines the costs that arise from switching languages. For unbalanced bilinguals, asymmetric switch costs are reported in speech production, where the switch cost for L1 is larger than the switch cost for L2, whereas, symmetric switch costs are reported in language comprehension tasks, where the cost of switching is the same for L1 and L2. Presently, it is unclear why asymmetric switch costs are observed in speech production, but not in language comprehension. Three experiments are reported that simultaneously examine methodological explanations of task related differences in the switch cost asymmetry and the predictions of three accounts of the switch cost asymmetry in speech production. The results of these experiments suggest that (1) the type of language task (comprehension vs. production) determines whether an asymmetric switch cost is observed and (2) at least some of the switch cost asymmetry arises within the language system. PMID:26834659
Nakayama, Masataka; Saito, Satoru
2015-08-01
The present study investigated principles of phonological planning, a common serial-ordering mechanism for speech production and phonological short-term memory. Nakayama and Saito (2014) investigated these principles using a speech-error induction technique, in which participants were exposed to an auditory distractor word immediately before uttering a target word. They demonstrated within-word adjacent mora exchanges and serial position effects on error rates. These findings support, respectively, the temporal distance and the edge principles at a within-word level. As this previous study induced errors using word distractors created by exchanging adjacent morae in the target words, it is possible that the speech errors are expressions of lexical intrusions reflecting interactive activation of phonological and lexical/semantic representations. To eliminate this possibility, the present study used nonword distractors that had no lexical or semantic representations. This approach successfully replicated the error patterns identified in the abovementioned study, further confirming that the temporal distance and edge principles are organizing precepts in phonological planning.
Effects of sentence-structure complexity on speech initiation time and disfluency.
Tsiamtsiouris, Jim; Cairns, Helen Smith
2013-03-01
There is general agreement that stuttering is caused by a variety of factors; language formulation and speech motor control are two important factors implicated in previous research, yet the exact nature of their effects is still not well understood. Our goal was to test the hypothesis that sentences of high structural complexity would incur greater processing costs than sentences of low structural complexity, and that these costs would be higher for adults who stutter than for adults who do not stutter. Fluent adults and adults who stutter participated in an experiment that required memorization of a sentence classified as low or high structural complexity, followed by production of that sentence upon a visual cue. Both groups of speakers initiated most sentences significantly faster in the low structural complexity condition than in the high structural complexity condition. Adults who stutter were overall slower in speech initiation than were fluent speakers, but there were no significant interactions between complexity and group. However, adults who stutter produced significantly more disfluencies in sentences of high structural complexity than in those of low complexity. After reading this article, the learner will be able to: (a) identify integral parts of all well-known models of adult sentence production; (b) summarize the way that sentence structure might negatively influence speech production processes; (c) discuss whether sentence structure influences speech initiation time and disfluencies. Copyright © 2012 Elsevier Inc. All rights reserved.
Auditory Cortex Processes Variation in Our Own Speech
Sitek, Kevin R.; Mathalon, Daniel H.; Roach, Brian J.; Houde, John F.; Niziolek, Caroline A.; Ford, Judith M.
2013-01-01
As we talk, we unconsciously adjust our speech to ensure it sounds the way we intend it to sound. However, because speech production involves complex motor planning and execution, no two utterances of the same sound will be exactly the same. Here, we show that auditory cortex is sensitive to natural variations in self-produced speech from utterance to utterance. We recorded event-related potentials (ERPs) from ninety-nine subjects while they uttered “ah” and while they listened to those speech sounds played back. Subjects' utterances were sorted based on their formant deviations from the previous utterance. Typically, the N1 ERP component is suppressed during talking compared to listening. By comparing ERPs to the least and most variable utterances, we found that N1 was less suppressed to utterances that differed greatly from their preceding neighbors. In contrast, an utterance's difference from the median formant values did not affect N1. Trial-to-trial pitch (f0) deviation and pitch difference from the median similarly did not affect N1. We discuss mechanisms that may underlie the change in N1 suppression resulting from trial-to-trial formant change. Deviant utterances require additional auditory cortical processing, suggesting that speaking-induced suppression mechanisms are optimally tuned for a specific production. PMID:24349399
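The trial-sorting logic described here is simple to express in code. The sketch below, with fabricated formant and ERP values, ranks utterances by their deviation from the preceding utterance and compares a hypothetical N1 amplitude between the least and most deviant trials.

```python
# Hedged sketch of the trial-sorting step (all data fabricated): rank "ah"
# utterances by formant deviation from the previous utterance, then compare
# N1 amplitude between the least and most deviant trials.
import numpy as np

rng = np.random.default_rng(1)
f1 = rng.normal(700, 40, 300)             # F1 of successive utterances (Hz)
n1 = rng.normal(-4.0, 1.0, 300)           # hypothetical N1 amplitudes (µV)

dev_from_prev = np.abs(np.diff(f1))       # trial-to-trial formant change
order = np.argsort(dev_from_prev)
least, most = order[:50], order[-50:]     # least vs. most deviant utterances

print(f"N1 (least deviant): {n1[1:][least].mean():.2f} µV")
print(f"N1 (most deviant):  {n1[1:][most].mean():.2f} µV")
```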
Neural network connectivity differences in children who stutter
Zhu, David C.
2013-01-01
Affecting 1% of the general population, stuttering impairs the normally effortless process of speech production, which requires precise coordination of sequential movement occurring among the articulatory, respiratory, and resonance systems, all within millisecond time scales. Those afflicted experience frequent disfluencies during ongoing speech, often leading to negative psychosocial consequences. The aetiology of stuttering remains unclear; compared to other neurodevelopmental disorders, few studies to date have examined the neural bases of childhood stuttering. Here we report, for the first time, results from functional (resting state functional magnetic resonance imaging) and structural connectivity analyses (probabilistic tractography) of multimodal neuroimaging data examining neural networks in children who stutter. We examined how synchronized brain activity occurring among brain areas associated with speech production, and white matter tracts that interconnect them, differ in young children who stutter (aged 3–9 years) compared with age-matched peers. Results showed that children who stutter have attenuated connectivity in neural networks that support timing of self-paced movement control. The results suggest that auditory-motor and basal ganglia-thalamocortical networks develop differently in stuttering children, which may in turn affect speech planning and execution processes needed to achieve fluent speech motor control. These results provide important initial evidence of neurological differences in the early phases of symptom onset in children who stutter. PMID:24131593
Effects of utterance length and vocal loudness on speech breathing in older adults.
Huber, Jessica E
2008-12-31
Age-related reductions in pulmonary elastic recoil and respiratory muscle strength can affect how older adults generate subglottal pressure required for speech production. The present study examined age-related changes in speech breathing by manipulating utterance length and loudness during a connected speech task (monologue). Twenty-three older adults and twenty-eight young adults produced a monologue at comfortable loudness and pitch and with multi-talker babble noise playing in the room to elicit louder speech. Dependent variables included sound pressure level, speech rate, and lung volume initiation, termination, and excursion. Older adults produced shorter utterances than young adults overall. Age-related effects were larger for longer utterances. Older adults demonstrated very different lung volume adjustments for loud speech than young adults. These results suggest that older adults have a more difficult time when the speech system is being taxed by both utterance length and loudness. The data were consistent with the hypothesis that both young and older adults use utterance length in premotor speech planning processes.
Aging and the Vulnerability of Speech to Dual Task Demands
Kemper, Susan; Schmalzried, RaLynn; Hoffman, Lesa; Herman, Ruth
2010-01-01
Tracking a digital pursuit rotor task was used to measure dual task costs of language production by young and older adults. Tracking performance by both groups was affected by dual task demands: time on target declined and tracking error increased as dual task demands increased from the baseline condition to a moderately demanding dual task condition to a more demanding dual task condition. When dual task demands were moderate, older adults’ speech rate declined but their fluency, grammatical complexity, and content were unaffected. When the dual task was more demanding, older adults’ speech, like young adults’ speech, became highly fragmented, ungrammatical, and incoherent. Vocabulary, working memory, processing speed, and inhibition affected vulnerability to dual task costs: vocabulary provided some protection for sentence length and grammaticality, working memory conferred some protection for grammatical complexity, and processing speed provided some protection for speech rate, propositional density, coherence, and lexical diversity. Further, vocabulary and working memory capacity provided more protection for older adults than for young adults although the protective effect of processing speed was somewhat reduced for older adults as compared to the young adults. PMID:21186917
Visual stimuli in intervention approaches for pre-schoolers diagnosed with phonological delay.
Pedro, Cassandra Ferreira; Lousada, Marisa; Hall, Andreia; Jesus, Luis M T
2018-04-01
The aim of this study was to develop and content-validate specific speech and language intervention picture cards: the Letter-Sound (L&S) cards. The study also assessed the influence of these cards on letter-sound correspondences and speech sound production. An expert panel of six speech and language therapists analysed and discussed the L&S cards based on several previously established criteria. A speech and language therapist carried out a 6-week therapeutic intervention with a group of seven Portuguese phonologically delayed pre-schoolers aged 5;3 to 6;5. The modified Bland-Altman method revealed good agreement among evaluators; that is, most values fell within the limits of agreement. Additional outcome measures were collected before and after the therapeutic intervention. Results indicate that the L&S cards facilitate the acquisition of letter-sound correspondences. Regarding speech sound production, some improvements were also observed at word level. The L&S cards are therefore likely to give phonetic cues, which are crucial for the correct production of therapeutic targets. These visual cues seemed to have helped children with phonological delay develop the above-mentioned skills.
Hodgson, Jessica C; Hudson, John M
2017-03-01
Research using clinical populations to explore the relationship between hemispheric speech lateralization and handedness has focused on individuals with speech and language disorders, such as dyslexia or specific language impairment (SLI). Such work reveals atypical patterns of cerebral lateralization and handedness in these groups compared to controls. There are few studies that examine this relationship in people with motor coordination impairments but without speech or reading deficits, which is a surprising omission given the prevalence of theories suggesting a common neural network underlying both functions. We use an emerging imaging technique in cognitive neuroscience, functional transcranial Doppler (fTCD) ultrasound, to assess whether individuals with developmental coordination disorder (DCD) display reduced left-hemisphere lateralization for speech production compared to control participants. Twelve adult control participants and 12 adults with DCD, but no other developmental/cognitive impairments, performed a word-generation task whilst undergoing fTCD imaging to establish a hemispheric lateralization index for speech production. All participants also completed an electronic peg-moving task to determine hand skill. As predicted, the DCD group showed a significantly reduced left-lateralization pattern for the speech production task compared to controls. Performance on the motor skill task showed a clear preference for the dominant hand across both groups; however, the DCD group's mean movement times were significantly higher for the non-dominant hand. This is the first study of its kind to assess hand skill and speech lateralization in DCD. The results reveal a reduced leftwards asymmetry for speech and a slower motor performance. This fits alongside previous work showing atypical cerebral lateralization in DCD for other cognitive processes (e.g., executive function and short-term memory) and thus speaks to debates on theories of the links between motor control and language production. © 2016 The Authors. Journal of Neuropsychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
Retrieving Tract Variables From Acoustics: A Comparison of Different Machine Learning Strategies.
Mitra, Vikramjit; Nam, Hosung; Espy-Wilson, Carol Y; Saltzman, Elliot; Goldstein, Louis
2010-09-13
Many different studies have claimed that articulatory information can be used to improve the performance of automatic speech recognition systems. Unfortunately, such articulatory information is not readily available in typical speaker-listener situations. Consequently, such information has to be estimated from the acoustic signal in a process which is usually termed "speech-inversion." This study aims to propose and compare various machine learning strategies for speech inversion: Trajectory mixture density networks (TMDNs), feedforward artificial neural networks (FF-ANN), support vector regression (SVR), autoregressive artificial neural network (AR-ANN), and distal supervised learning (DSL). Further, using a database generated by the Haskins Laboratories speech production model, we test the claim that information regarding constrictions produced by the distinct organs of the vocal tract (vocal tract variables) is superior to flesh-point information (articulatory pellet trajectories) for the inversion process.
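Of the strategies listed, support vector regression is the most compact to sketch. The example below maps acoustic features to tract variables with scikit-learn; the synthetic feature arrays stand in for real acoustic/articulatory pairs, so this is an illustration of the approach, not the study's implementation.

```python
# Illustrative speech-inversion regressor in the spirit of the SVR strategy
# named above: acoustic features in, vocal-tract variables out. Data are fake.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 13))          # e.g. 13 MFCC-like acoustic features
W = rng.normal(size=(13, 8))
Y = np.tanh(X @ W) + 0.05 * rng.normal(size=(500, 8))   # 8 tract variables

model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0)).fit(X[:400], Y[:400])
pred = model.predict(X[400:])
rmse = np.sqrt(((pred - Y[400:]) ** 2).mean())
print(f"held-out RMSE: {rmse:.3f}")
```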
Synchronized and noise-robust audio recordings during realtime magnetic resonance imaging scans (L)
Bresch, Erik; Nielsen, Jon; Nayak, Krishna; Narayanan, Shrikanth
2007-01-01
This letter describes a data acquisition setup for recording, and processing, running speech from a person in a magnetic resonance imaging (MRI) scanner. The main focus is on ensuring synchronicity between image and audio acquisition, and in obtaining good signal to noise ratio to facilitate further speech analysis and modeling. A field-programmable gate array based hardware design for synchronizing the scanner image acquisition to other external data such as audio is described. The audio setup itself features two fiber optical microphones and a noise-canceling filter. Two noise cancellation methods are described including a novel approach using a pulse sequence specific model of the gradient noise of the MRI scanner. The setup is useful for scientific speech production studies. Sample results of speech and singing data acquired and processed using the proposed method are given. PMID:17069275
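The letter's second cancellation method builds a pulse-sequence-specific model of the gradient noise; as a generic stand-in, the sketch below applies a standard normalized LMS adaptive canceller that subtracts scanner noise from the speech channel given a noise reference. All signals are synthetic.

```python
# Generic normalized-LMS noise canceller (an illustrative stand-in, not the
# letter's pulse-sequence-specific model). A reference microphone picks up
# gradient noise; an adaptive filter removes it from the speech channel.
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
noise_ref = rng.normal(size=n)                   # reference mic: gradient noise
h = np.array([0.8, -0.3, 0.1])                   # unknown acoustic path to speech mic
noise_in_speech = np.convolve(noise_ref, h)[:n]
speech = 0.3 * np.sin(2 * np.pi * 0.01 * np.arange(n))
primary = speech + noise_in_speech               # speech mic: speech + filtered noise

taps, mu, eps = 8, 0.05, 1e-8
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps - 1, n):
    x = noise_ref[i - taps + 1:i + 1][::-1]      # newest reference sample first
    e = primary[i] - w @ x                       # canceller output = error signal
    w += mu * e * x / (x @ x + eps)              # normalized LMS weight update
    out[i] = e

print(f"noise power before: {np.var(primary - speech):.3f}, "
      f"after: {np.var(out[taps:] - speech[taps:]):.3f}")
```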
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients' error-detection ability and the model's characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error detection in linguistic as well as non-linguistic domains points to a domain-general monitoring system. PMID:21652015
The Role of Production in Infant Word Learning
ERIC Educational Resources Information Center
Vihman, Marilyn May; DePaolis, Rory A.; Keren-Portnoy, Tamar
2014-01-01
Studies of phonological development that combine speech-processing experiments with observation and analysis of production remain rare, although production experience is necessarily relevant to developmental advance. Here we focus on three proposals regarding the relationship of production to word learning: (1) "Articulatory filter": The…
Liu, Xiaoluan; Xu, Yi
2015-01-01
This study compares affective piano performance with speech production from the perspective of dynamics: unlike previous research, this study uses finger force and articulatory effort as indexes reflecting the dynamics of affective piano performance and speech production respectively. Moreover, for the first time physical constraints such as piano fingerings and speech articulatory constraints are included due to their potential contribution to different patterns of dynamics. A piano performance experiment and speech production experiment were conducted in four emotions: anger, fear, happiness and sadness. The results show that in both piano performance and speech production, anger and happiness generally have high dynamics while sadness has the lowest dynamics. Fingerings interact with fear in the piano experiment and articulatory constraints interact with anger in the speech experiment, i.e., large physical constraints produce significantly higher dynamics than small physical constraints in piano performance under the condition of fear and in speech production under the condition of anger. Using production experiments, this study firstly supports previous perception studies on relations between affective music and speech. Moreover, this is the first study to show quantitative evidence for the importance of considering motor aspects such as dynamics in comparing music performance and speech production in which motor mechanisms play a crucial role. PMID:26217252
The use of ultrasound in the study of articulatory properties of vowels in clear speech.
Song, Jae Yung
2017-01-01
Although the acoustic properties of clear speech have been extensively studied, its underlying articulatory details have not been well understood. The purpose of the present study is twofold: To examine the specific articulatory processes of clear speech using ultrasound and to investigate whether and how the type of listener (hard of hearing, normal hearing) and the lexical property of words (frequency) interact in the production of clear speech. To this end, we examined productions of /ɑ/, /æ/ and /u/ from 16 speakers of US English. Overall, our ultrasound results suggested that the tongue's highest point moved in a direction that exaggerated the three vowels' phonological features, resulting in an expanded articulatory vowel space for the hard-of-hearing listener and low-frequency words. No interaction was found between the listener and word frequency, suggesting that the effects of word frequency hold constant across the two types of listeners.
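The "expanded articulatory vowel space" can be quantified, for example, as the area of the triangle spanned by the tongue's highest point for the three vowels. A minimal sketch with made-up coordinates:

```python
# Articulatory vowel space as the triangle area (shoelace formula) over the
# tongue's highest point for /ɑ/, /æ/ and /u/. Coordinates are hypothetical.
def triangle_area(points):
    (x1, y1), (x2, y2), (x3, y3) = points
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

casual = [(55, 30), (50, 22), (20, 38)]   # (x, y) in mm: /ɑ/, /æ/, /u/
clear  = [(58, 27), (50, 18), (16, 42)]   # hypothetical clear-speech targets

print(f"casual area: {triangle_area(casual):.1f} mm², "
      f"clear area: {triangle_area(clear):.1f} mm²")
```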
Jordan, Timothy R; Abedipour, Lily
2010-01-01
Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.
Speed-Accuracy Tradeoffs in Speech Production
2017-06-01
…imaging data of speech production. A theoretical framework for considering Fitts' law in the domain of speech production is elucidated. … articulatory kinematics conform to Fitts' law. A second, associated goal is to address the methodological challenges inherent in performing Fitts-style analysis on rtMRI data of speech production. Methodological challenges include segmenting continuous speech into specific motor tasks, defining key…
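For reference, Fitts' law models movement time as MT = a + b·log2(2D/W), where D is movement distance and W is target width. The sketch below fits a and b to hypothetical articulatory data; it is background illustration, not the report's analysis.

```python
# Fitts' law fit on simulated articulatory data: recover intercept a and
# slope b of MT = a + b * log2(2D / W) by least squares.
import numpy as np

rng = np.random.default_rng(4)
D = rng.uniform(5, 40, 60)                       # articulator travel distance (mm)
W = rng.uniform(1, 8, 60)                        # target region width (mm)
ID = np.log2(2 * D / W)                          # index of difficulty (bits)
MT = 0.08 + 0.05 * ID + rng.normal(0, 0.01, 60)  # simulated movement times (s)

b, a = np.polyfit(ID, MT, 1)                     # slope first, then intercept
print(f"fit: MT = {a:.3f} + {b:.3f} * ID  (throughput ≈ {1 / b:.1f} bits/s)")
```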
The Relationship between Speech Production and Speech Perception Deficits in Parkinson's Disease
ERIC Educational Resources Information Center
De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet
2016-01-01
Purpose: This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Method: Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectified…
Fluid Dynamics of Human Phonation and Speech
NASA Astrophysics Data System (ADS)
Mittal, Rajat; Erath, Byron D.; Plesniak, Michael W.
2013-01-01
This article presents a review of the fluid dynamics, flow-structure interactions, and acoustics associated with human phonation and speech. Our voice is produced through the process of phonation in the larynx, and an improved understanding of the underlying physics of this process is essential to advancing the treatment of voice disorders. Insights into the physics of phonation and speech can also contribute to improved vocal training and the development of new speech compression and synthesis schemes. This article introduces the key biomechanical features of the laryngeal physiology, reviews the basic principles of voice production, and summarizes the progress made over the past half-century in understanding the flow physics of phonation and speech. Laryngeal pathologies, which significantly enhance the complexity of phonatory dynamics, are discussed. After a thorough examination of the state of the art in computational modeling and experimental investigations of phonatory biomechanics, we present a synopsis of the pacing issues in this arena and an outlook for research in this fascinating subject.
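The flow-structure interaction reviewed here can be caricatured numerically. The toy below integrates a one-mass vocal-fold model, a damped mass-spring driven by intraglottal pressure, with a crude opening/closing pressure asymmetry standing in for the phase lag that sustains oscillation in real folds. Every parameter is illustrative, not calibrated to the literature.

```python
# Toy one-mass vocal-fold model: forward-Euler integration of a damped
# oscillator driven by (asymmetric) intraglottal pressure. Illustrative only.
import numpy as np

m, k, b = 0.1, 8.0e4, 5.0      # mass, stiffness, damping (cgs-like units)
x0, Ps = 0.02, 8.0e3           # rest half-gap (cm), subglottal pressure
dt, steps = 1e-5, 50_000       # 0.5 s of simulated time

x, v = x0, 0.0
trace = np.empty(steps)
for i in range(steps):
    if x > 0.0:                # glottis open: pressure pushes folds apart,
        drive = Ps * (1.2 if v > 0 else 0.4)   # more strongly while opening
    else:                      # glottis closed: no aerodynamic drive
        drive = 0.0
    acc = (drive - b * v - k * (x - x0)) / m
    v += acc * dt
    x = max(x + v * dt, 0.0)   # folds cannot interpenetrate
    trace[i] = x

f_natural = np.sqrt(k / m) / (2 * np.pi)
print(f"natural frequency ≈ {f_natural:.0f} Hz; steady-state gap range: "
      f"{trace[-5000:].min():.3f} to {trace[-5000:].max():.3f} cm")
```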
Cognitive Load in Voice Therapy Carry-Over Exercises.
Iwarsson, Jenny; Morris, David Jackson; Balling, Laura Winther
2017-01-01
The cognitive load generated by online speech production may vary with the nature of the speech task. This article examines 3 speech tasks used in voice therapy carry-over exercises, in which a patient is required to adopt and automatize new voice behaviors, ultimately in daily spontaneous communication. Twelve subjects produced speech in 3 conditions: rote speech (weekdays), sentences in a set form, and semispontaneous speech. Subjects simultaneously performed a secondary visual discrimination task for which response times were measured. On completion of each speech task, subjects rated their experience on a questionnaire. Response times from the secondary, visual task were found to be shortest for the rote speech, longer for the semispontaneous speech, and longest for the sentences within the set framework. Principal components derived from the subjective ratings were found to be linked to response times on the secondary visual task. Acoustic measures reflecting fundamental frequency distribution and vocal fold compression varied across the speech tasks. The results indicate that consideration should be given to the selection of speech tasks during the process leading to automation of revised speech behavior and that self-reports may be a reliable index of cognitive load.
Measures to Evaluate the Effects of DBS on Speech Production
Weismer, Gary; Yunusova, Yana; Bunton, Kate
2011-01-01
The purpose of this paper is to review and evaluate measures of speech production that could be used to document effects of Deep Brain Stimulation (DBS) on speech performance, especially in persons with Parkinson disease (PD). A small set of evaluative criteria for these measures is presented first, followed by consideration of several speech physiology and speech acoustic measures that have been studied frequently and reported on in the literature on normal speech production, and speech production affected by neuromotor disorders (dysarthria). Each measure is reviewed and evaluated against the evaluative criteria. Embedded within this review and evaluation is a presentation of new data relating speech motions to speech intelligibility measures in speakers with PD, amyotrophic lateral sclerosis (ALS), and control speakers (CS). These data are used to support the conclusion that at the present time the slope of second formant transitions (F2 slope), an acoustic measure, is well suited to make inferences to speech motion and to predict speech intelligibility. The use of other measures should not be ruled out, however, and we encourage further development of evaluative criteria for speech measures designed to probe the effects of DBS or any treatment with potential effects on speech production and communication skills. PMID:24932066
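The F2-slope measure favored here reduces to a least-squares fit over a short formant transition, as in the following sketch with a fabricated formant track:

```python
# F2 slope as the least-squares slope of a short formant transition.
# The formant track is fabricated for illustration.
import numpy as np

t = np.linspace(0.0, 0.06, 25)          # 60 ms transition window (s)
f2 = 1400 + 6000 * t + np.random.default_rng(5).normal(0, 20, t.size)  # Hz

slope = np.polyfit(t, f2, 1)[0]         # Hz per second
print(f"F2 slope: {slope / 1000:.2f} kHz/s")   # shallower slopes typify dysarthria
```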
Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc
2017-07-01
Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example comes from phonetic convergence, when speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in postlingually deaf cochlear-implanted participants and normal-hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without instruction to imitate an acoustic vowel, were proposed to three groups: young adults with normal hearing, elderly adults with normal hearing, and post-lingually deaf cochlear-implanted patients. The deviation of each participant's f0 from their own mean f0 was measured to evaluate the ability to converge to each acoustic target. Results showed that cochlear-implanted participants have the ability to converge to an acoustic target, both intentionally and unintentionally, albeit to a lesser degree than young and elderly participants with normal hearing. By providing evidence for phonetic convergence and speech imitation, these results suggest that, as in young adults, perceptuo-motor relationships are efficient in elderly adults with normal hearing, and that cochlear-implanted adults recovered significant perceptuo-motor abilities following cochlear implantation. Copyright © 2017 Elsevier Ltd. All rights reserved.
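The convergence measure described can be illustrated with simple arithmetic: normalize each production's deviation from the talker's own mean f0 by the distance to the acoustic target. Values below are hypothetical.

```python
# Convergence toward an acoustic target, expressed as the fraction of the
# distance from the talker's habitual mean f0 to the target f0 that each
# production covers. All numbers are hypothetical.
import numpy as np

own_mean_f0 = 120.0                  # talker's habitual mean f0 (Hz)
target_f0 = 150.0                    # f0 of the heard vowel (Hz)
produced_f0 = np.array([124.0, 131.0, 128.0, 135.0])   # repetitions (Hz)

deviation = produced_f0 - own_mean_f0
convergence = deviation / (target_f0 - own_mean_f0)    # 0 = none, 1 = full match
print(f"mean convergence: {convergence.mean():.0%}")
```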
Huh, Young Eun; Park, Jongkyu; Suh, Mee Kyung; Lee, Sang Eun; Kim, Jumin; Jeong, Yuri; Kim, Hee-Tae; Cho, Jin Whan
2015-08-01
In Parkinson variant of multiple system atrophy (MSA-P), patterns of early speech impairment and their distinguishing features from Parkinson's disease (PD) require further exploration. Here, we compared speech data among patients with early-stage MSA-P, PD, and healthy subjects using quantitative acoustic and perceptual analyses. Variables were analyzed for men and women in view of gender-specific features of speech. Acoustic analysis revealed that male patients with MSA-P exhibited more profound speech abnormalities than those with PD, regarding increased voice pitch, prolonged pause time, and reduced speech rate. This might be due to widespread pathology of MSA-P in nigrostriatal or extra-striatal structures related to speech production. Although several perceptual measures were mildly impaired in MSA-P and PD patients, none of these parameters showed a significant difference between patient groups. Detailed speech analysis using acoustic measures may help distinguish between MSA-P and PD early in the disease process. Copyright © 2015 Elsevier Inc. All rights reserved.
Speech perception and production in severe environments
NASA Astrophysics Data System (ADS)
Pisoni, David B.
1990-09-01
The goal was to acquire new knowledge about speech perception and production in severe environments such as high masking noise, increased cognitive load or sustained attentional demands. Changes were examined in speech production under these adverse conditions through acoustic analysis techniques. One set of studies focused on the effects of noise on speech production. The experiments in this group were designed to generate a database of speech obtained in noise and in quiet. A second set of experiments was designed to examine the effects of cognitive load on the acoustic-phonetic properties of speech. Talkers were required to carry out a demanding perceptual motor task while they read lists of test words. A final set of experiments explored the effects of vocal fatigue on the acoustic-phonetic properties of speech. Both cognitive load and vocal fatigue are present in many applications where speech recognition technology is used, yet their influence on speech production is poorly understood.
Revealing the dual streams of speech processing.
Fridriksson, Julius; Yourganov, Grigori; Bonilha, Leonardo; Basilakos, Alexandra; Den Ouden, Dirk-Bart; Rorden, Christopher
2016-12-27
Several dual route models of human speech processing have been proposed suggesting a large-scale anatomical division between cortical regions that support motor-phonological aspects vs. lexical-semantic aspects of speech processing. However, to date, there is no complete agreement on what areas subserve each route or the nature of interactions across these routes that enables human speech processing. Relying on an extensive behavioral and neuroimaging assessment of a large sample of stroke survivors, we used a data-driven approach using principal components analysis of lesion-symptom mapping to identify brain regions crucial for performance on clusters of behavioral tasks without a priori separation into task types. Distinct anatomical boundaries were revealed between a dorsal frontoparietal stream and a ventral temporal-frontal stream associated with separate components. Collapsing over the tasks primarily supported by these streams, we characterize the dorsal stream as a form-to-articulation pathway and the ventral stream as a form-to-meaning pathway. This characterization of the division in the data reflects both the overlap between tasks supported by the two streams as well as the observation that there is a bias for phonological production tasks supported by the dorsal stream and lexical-semantic comprehension tasks supported by the ventral stream. As such, our findings show a division between two processing routes that underlie human speech processing and provide an empirical foundation for studying potential computational differences that distinguish between the two routes.
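The data-driven step can be pictured as PCA over a subjects-by-tasks score matrix, without a priori task grouping; component scores would then be related to lesion maps. A schematic with random data:

```python
# Schematic of the data-driven analysis: PCA over a subjects-by-tasks
# behavioral matrix, with no a priori separation into task types. Scores on
# each component could then be regressed against lesion maps. Data are random.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
scores = rng.normal(size=(100, 12))        # 100 stroke survivors x 12 tasks

pca = PCA(n_components=2).fit(scores)
print("variance explained:", np.round(pca.explained_variance_ratio_, 2))
print("task loadings on PC1:", np.round(pca.components_[0], 2))
```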
ERIC Educational Resources Information Center
McGarr, Nancy S.; Whitehead, Robert
1992-01-01
This paper on physiologic correlates of speech production in children and youth with hearing impairments focuses specifically on the production of phonemes and includes data on respiration for speech production, phonation, speech aerodynamics, articulation, and acoustic analyses of speech by hearing-impaired persons. (Author/DB)
Developing Socio-Cultural Scaffolding Model to Elicit Learners's Speech Production
ERIC Educational Resources Information Center
Englishtina, Inti
2015-01-01
This study is concerned with developing scaffolding model to elicit bilingual kindergarten children's English speech production. It is aimed at describing what the teachers need in eliciting their students' speech production; how a scaffolding model should be developed to elicit the children's speech production; and how effective is the…
Non-fluent speech following stroke is caused by impaired efference copy.
Feenaughty, Lynda; Basilakos, Alexandra; Bonilha, Leonardo; den Ouden, Dirk-Bart; Rorden, Chris; Stark, Brielle; Fridriksson, Julius
2017-09-01
Efference copy is a cognitive mechanism argued to be critical for initiating and monitoring speech; however, the extent to which breakdown of efference copy mechanisms impacts speech production is unclear. This study examined the best mechanistic predictors of non-fluent speech among 88 stroke survivors. Objective speech fluency measures were subjected to a principal component analysis (PCA). The primary PCA factor was then entered into a multiple stepwise linear regression analysis as the dependent variable, with a set of independent mechanistic variables. Participants' ability to mimic audio-visual speech ("speech entrainment response") was the best independent predictor of non-fluent speech. We suggest that this "speech entrainment" factor reflects integrity of internal monitoring (i.e., efference copy) of speech production, which affects speech initiation and maintenance. Results support models of normal speech production and suggest that therapy focused on speech initiation and maintenance may improve speech fluency for individuals with chronic non-fluent aphasia post stroke.
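A hedged sketch of this pipeline: reduce several fluency measures to one PCA factor, then select the mechanistic predictor that best explains it. The variable names and simulated data are placeholders, not the study's dataset.

```python
# Sketch of the pipeline: PCA factor over fluency measures, then greedy
# forward selection of the best single mechanistic predictor by R².
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 88
entrainment = rng.normal(size=n)                   # audio-visual mimicry score
other = rng.normal(size=(n, 3))                    # other candidate predictors
fluency = np.outer(0.8 * entrainment + 0.2 * rng.normal(size=n),
                   [1.0, 0.9, 1.1]) + 0.3 * rng.normal(size=(n, 3))

factor = PCA(n_components=1).fit_transform(fluency).ravel()

predictors = {"entrainment": entrainment,
              **{f"x{i}": other[:, i] for i in range(3)}}
best = max(predictors, key=lambda name: LinearRegression()
           .fit(predictors[name].reshape(-1, 1), factor)
           .score(predictors[name].reshape(-1, 1), factor))
print("best predictor:", best)
```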
Masapollo, Matthew; Polka, Linda; Ménard, Lucie
2016-03-01
To learn to produce speech, infants must effectively monitor and assess their own speech output. Yet very little is known about how infants perceive speech produced by an infant, which has higher voice pitch and formant frequencies compared to adult or child speech. Here, we tested whether pre-babbling infants (at 4-6 months) prefer listening to vowel sounds with infant vocal properties over vowel sounds with adult vocal properties. A listening preference favoring infant vowels may derive from their higher voice pitch, which has been shown to attract infant attention in infant-directed speech (IDS). In addition, infants' nascent articulatory abilities may induce a bias favoring infant speech given that 4- to 6-month-olds are beginning to produce vowel sounds. We created infant and adult /i/ ('ee') vowels using a production-based synthesizer that simulates the act of speaking in talkers at different ages and then tested infants across four experiments using a sequential preferential listening task. The findings provide the first evidence that infants preferentially attend to vowel sounds with infant voice pitch and/or formants over vowel sounds with no infant-like vocal properties, supporting the view that infants' production abilities influence how they process infant speech. The findings with respect to voice pitch also reveal parallels between IDS and infant speech, raising new questions about the role of this speech register in infant development. Research exploring the underpinnings and impact of this perceptual bias can expand our understanding of infant language development. © 2015 John Wiley & Sons Ltd.
Spencer, Linda J.; Oleson, Jacob J.
2011-01-01
Objectives Previous studies have reported that children who use cochlear implants (CIs) tend to achieve higher reading levels than their peers with profound hearing loss who use hearing aids. The purpose of this study was to investigate the influences of auditory information provided by the CI on the later reading skills of children born with profound deafness. The hypothesis was that there would be a positive and predictive relationship between earlier speech perception, production, and subsequent reading comprehension. Design The speech perception and production skills at the vowel, consonant, phoneme, and word level of 72 children with prelingual, profound hearing loss were assessed after 48 mos of CI use. The children's reading skills were subsequently assessed using word and passage comprehension measures after an average of 89.5 mos of CI use. A regression analysis determined the amount of variance in reading that could be explained by the variables of perception, production, and socioeconomic status. Results Regression analysis revealed that it was possible to explain 59% of the variance of later reading skills by assessing the early speech perception and production performance. The results indicated that early speech perception and production skills of children with profound hearing loss who receive CIs predict future reading achievement skills. Furthermore, the study implies that better early speech perception and production skills result in higher reading achievement. It is speculated that the early access to sound helps to build better phonological processing skills, which is one of the likely contributors to eventual reading success. PMID:18595191
Masson-Carro, Ingrid; Goudbeek, Martijn; Krahmer, Emiel
2016-10-01
Past research has sought to elucidate how speakers and addressees establish common ground in conversation, yet few studies have focused on how visual cues such as co-speech gestures contribute to this process. Likewise, the effect of cognitive constraints on multimodal grounding remains to be established. This study addresses the relationship between the verbal and gestural modalities during grounding in referential communication. We report data from a collaborative task where repeated references were elicited, and a time constraint was imposed to increase cognitive load. Our results reveal no differential effects of repetition or cognitive load on the semantic-based gesture rate, suggesting that representational gestures and speech are closely coordinated during grounding. However, gestures and speech differed in their execution, especially under time pressure. We argue that speech and gesture are two complementary streams that might be planned in conjunction but that unfold independently in later stages of language production, with speakers emphasizing the form of their gestures, but not of their words, to better meet the goals of the collaborative task. Copyright © 2016 Cognitive Science Society, Inc.
Haldin, Célise; Acher, Audrey; Kauffmann, Louise; Hueber, Thomas; Cousin, Emilie; Badin, Pierre; Perrier, Pascal; Fabre, Diandra; Perennou, Dominic; Detante, Olivier; Jaillard, Assia; Lœvenbruck, Hélène; Baciu, Monica
2017-11-17
The rehabilitation of speech disorders benefits from providing visual information which may improve speech motor plans in patients. We tested the proof of concept of a rehabilitation method (Sensori-Motor Fusion, SMF; Ultraspeech player) in one post-stroke patient presenting chronic non-fluent aphasia. SMF allows visualisation by the patient of target tongue and lips movements using high-speed ultrasound and video imaging. This can improve the patient's awareness of his/her own lingual and labial movements, which can, in turn, improve the representation of articulatory movements and increase the ability to coordinate and combine articulatory gestures. The auditory and oro-sensory feedback received by the patient as a result of his/her own pronunciation can be integrated with the target articulatory movements they watch. Thus, this method is founded on sensorimotor integration during speech. The SMF effect on this patient was assessed through qualitative comparison of language scores and quantitative analysis of acoustic parameters measured in a speech production task, before and after rehabilitation. We also investigated cerebral patterns of language reorganisation for rhyme detection and syllable repetition, to evaluate the influence of SMF on phonological-phonetic processes. Our results showed that SMF had a beneficial effect on this patient who qualitatively improved in naming, reading, word repetition and rhyme judgment tasks. Quantitative measurements of acoustic parameters indicate that the patient's production of vowels and syllables also improved. Compared with pre-SMF, the fMRI data in the post-SMF session revealed the activation of cerebral regions related to articulatory, auditory and somatosensory processes, which were expected to be recruited by SMF. We discuss neurocognitive and linguistic mechanisms which may explain speech improvement after SMF, as well as the advantages of using this speech rehabilitation method.
The predictive roles of neural oscillations in speech motor adaptability.
Sengupta, Ranit; Nasir, Sazzad M
2016-06-01
The human speech system exhibits remarkable flexibility, adapting to alterations in speaking environments. While it is believed that speech motor adaptation under altered sensory feedback involves rapid reorganization of speech motor networks, the mechanisms by which different brain regions communicate and coordinate their activity to mediate adaptation remain unknown, and explanations of outcome differences in adaptation remain largely elusive. In this study, under the paradigm of altered auditory feedback with continuous EEG recordings, the differential roles of oscillatory neural processes in motor speech adaptability were investigated. The predictive capacities of different EEG frequency bands were assessed, and it was found that theta-, beta-, and gamma-band activities during speech planning and production contained significant and reliable information about motor speech adaptability. It was further observed that these bands do not work independently but interact with each other, suggesting an underlying brain network operating across hierarchically organized frequency bands to support motor speech adaptation. These results provide novel insights into both learning and disorders of speech using time-frequency analysis of neural oscillations. Copyright © 2016 the American Physiological Society.
AuBuchon, Angela M.; Pisoni, David B.; Kronenberger, William G.
2015-01-01
OBJECTIVES Determine if early-implanted, long-term cochlear implant (CI) users display delays in verbal short-term and working memory capacity when processes related to audibility and speech production are eliminated. DESIGN Twenty-three long-term CI users and 23 normal-hearing controls each completed forward and backward digit span tasks under testing conditions which differed in presentation modality (auditory or visual) and response output (spoken recall or manual pointing). RESULTS Normal-hearing controls reproduced more lists of digits than the CI users, even when the test items were presented visually and the responses were made manually via touchscreen response. CONCLUSIONS Short-term and working memory delays observed in CI users are not due to greater demands from peripheral sensory processes such as audibility or from overt speech-motor planning and response output organization. Instead, CI users are less efficient at encoding and maintaining phonological representations in verbal short-term memory utilizing phonological and linguistic strategies during memory tasks. PMID:26496666
ERIC Educational Resources Information Center
Lu, Shuang
2013-01-01
The relationship between speech perception and production has been debated for a long time. The Motor Theory of speech perception (Liberman et al., 1989) claims that perceiving speech is identifying the intended articulatory gestures rather than perceiving the sound patterns. It seems to suggest that speech production precedes speech perception,…
ERIC Educational Resources Information Center
Zaccagnini, Cindy M.; Antia, Shirin D.
1993-01-01
This study of the effects of intensive multisensory speech training on the speech production of a profoundly hearing-impaired child (age nine) found that the addition of Visual Phonics hand cues did not result in speech production gains. All six target phonemes were generalized to new words and maintained after the intervention was discontinued.…
Phonological working memory and FOXP2.
Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer
2018-01-08
The discovery and description of the affected members of the KE family (aKE) initiated research on how genes enable the unique human trait of speech and language. Many aspects of this genetic influence on speech-related cognitive mechanisms are still elusive, e.g., whether and how cognitive processes not directly involved in speech production are affected. In the current study we investigated the effect of the FOXP2 mutation on Working Memory (WM). Half the members of the multigenerational KE family have an inherited speech-language disorder, characterised as a verbal and orofacial dyspraxia caused by a mutation of the FOXP2 gene. The core phenotype of the affected KE members (aKE) is a deficiency in repeating words, especially complex non-words, and in coordinating oromotor sequences generally. Execution of oromotor sequences and repetition of phonological sequences both require WM, but to date the aKE's memory ability in this domain has not been examined in detail. To do so, we used a test series based on the Baddeley and Hitch WM model, which posits that the central executive (CE), important for planning and manipulating information, works in conjunction with two modality-specific components: the phonological loop (PL), specialized for processing speech-based information; and the visuospatial sketchpad (VSSP), dedicated to processing visual and spatial information. We compared WM performance related to CE, PL, and VSSP function in five aKE and 15 healthy controls (including three unaffected members of the KE family who do not have the FOXP2 mutation). The aKE scored significantly below this control group on the PL component, but not on the VSSP or CE components. Further, the aKE were impaired relative to the controls not only in motor (i.e., articulatory) output but also on the recognition-based PL subtest (word-list matching), which does not require speech production. These results suggest that the aKE's impaired phonological WM may be due to a defect in subvocal rehearsal of speech-based material, and that this defect may be due in turn to compromised speech-based representations. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Franken, Matthias K; Eisner, Frank; Acheson, Daniel J; McQueen, James M; Hagoort, Peter; Schoffelen, Jan-Mathijs
2018-06-21
Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found that auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Johari, Karim; Behroozmand, Roozbeh
2017-08-01
Skilled movement is mediated by motor commands executed with extremely fine temporal precision. The question of how the brain incorporates temporal information to perform motor actions has remained unanswered. This study investigated the effect of stimulus temporal predictability on response timing of speech and hand movement. Subjects performed a randomized vowel vocalization or button press task in two counterbalanced blocks in response to temporally-predictable and unpredictable visual cues. Results indicated that speech and hand reaction time was decreased for predictable compared with unpredictable stimuli. This finding suggests that a temporal predictive code is established to capture temporal dynamics of sensory cues in order to produce faster movements in response to predictable stimuli. In addition, results revealed a main effect of modality, indicating faster hand movement compared with speech. We suggest that this effect is accounted for by the inherent complexity of speech production compared with hand movement. Lastly, we found that movement inhibition was faster than initiation for both hand and speech, suggesting that movement initiation requires a longer processing time to coordinate activities across multiple regions in the brain. These findings provide new insights into the mechanisms of temporal information processing during initiation and inhibition of speech and hand movement. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Liberman, A. M.
1982-03-01
This report is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation and practical applications. Manuscripts cover the following topics: Speech perception and memory coding in relation to reading ability; The use of orthographic structure by deaf adults: Recognition of finger-spelled letters; Exploring the information support for speech; The stream of speech; Using the acoustic signal to make inferences about place and duration of tongue-palate contact; Patterns of human interlimb coordination emerge from the properties of nonlinear limit cycle oscillatory processes: Theory and data; Motor control: Which themes do we orchestrate?; Exploring the nature of motor control in Down's syndrome; Periodicity and auditory memory: A pilot study; Reading skill and language skill: On the role of sign order and morphological structure in memory for American Sign Language sentences; Perception of nasal consonants with special reference to Catalan; and Speech production characteristics of the hearing impaired.
ERIC Educational Resources Information Center
Walsh, Bridget; Smith, Anne
2011-01-01
Purpose: To investigate the effects of increased syntactic complexity and utterance length demands on speech production and comprehension in individuals with Parkinson's disease (PD) using behavioral and physiological measures. Method: Speech response latency, interarticulatory coordinative consistency, accuracy of speech production, and response…
ERIC Educational Resources Information Center
Jescheniak, Jorg D.; Hahne, Anja; Hoffmann, Stefanie; Wagner, Valentin
2006-01-01
There is a long-standing debate in the area of speech production on the question of whether only words selected for articulation are phonologically activated (as maintained by serial-discrete models) or whether this is also true for their semantic competitors (as maintained by forward-cascading and interactive models). Past research has addressed…
Damage to the anterior arcuate fasciculus predicts non-fluent speech production in aphasia.
Fridriksson, Julius; Guo, Dazhou; Fillmore, Paul; Holland, Audrey; Rorden, Chris
2013-11-01
Non-fluent aphasia implies a relatively straightforward neurological condition characterized by limited speech output. However, it is an umbrella term for different underlying impairments affecting speech production. Several studies have sought the critical lesion location that gives rise to non-fluent aphasia. The results have been mixed but typically implicate anterior cortical regions such as Broca's area, the left anterior insula, and deep white matter regions. To provide a clearer picture of cortical damage in non-fluent aphasia, the current study examined brain damage that negatively influences speech fluency in patients with aphasia. It controlled for some basic speech and language comprehension factors in order to better isolate the contribution of different mechanisms to fluency, or its lack. Cortical damage was related to overall speech fluency, as estimated by clinical judgements using the Western Aphasia Battery speech fluency scale, diadochokinetic rate, rudimentary auditory language comprehension, and executive functioning (scores on a matrix reasoning test) in 64 patients with chronic left hemisphere stroke. A region of interest analysis that included brain regions typically implicated in speech and language processing revealed that non-fluency in aphasia is primarily predicted by damage to the anterior segment of the left arcuate fasciculus. An improved prediction model also included the left uncinate fasciculus, a white matter tract connecting the middle and anterior temporal lobe with frontal lobe regions, including the pars triangularis. Models that controlled for diadochokinetic rate, picture-word recognition, or executive functioning also revealed a strong relationship between anterior segment involvement and speech fluency. Whole brain analyses corroborated the findings from the region of interest analyses. An additional exploratory analysis revealed that involvement of the uncinate fasciculus adjudicated between Broca's and global aphasia, the two most common kinds of non-fluent aphasia. In summary, the current results suggest that the anterior segment of the left arcuate fasciculus, a white matter tract that lies deep to posterior portions of Broca's area and the sensory-motor cortex, is a robust predictor of impaired speech fluency in aphasic patients, even when motor speech, lexical processing, and executive functioning are included as co-factors. Simply put, damage to those regions results in non-fluent aphasic speech; when they are undamaged, fluent aphasias result.
Lexical Effects on Second Language Acquisition
ERIC Educational Resources Information Center
Kemp, Renee Lorraine
2017-01-01
Speech production and perception are inextricably linked systems. Speakers modify their speech in response to listener characteristics, such as age, hearing ability, and language background. Listener-oriented modifications in speech production, commonly referred to as clear speech, have also been found to affect speech perception by enhancing…
Speech production gains following constraint-induced movement therapy in children with hemiparesis.
Allison, Kristen M; Reidy, Teressa Garcia; Boyle, Mary; Naber, Erin; Carney, Joan; Pidcock, Frank S
2017-01-01
The purpose of this study was to investigate changes in speech skills of children who have hemiparesis and speech impairment after participation in a constraint-induced movement therapy (CIMT) program. While case studies have reported collateral speech gains following CIMT, the effect of CIMT on speech production has not previously been directly investigated to the knowledge of these investigators. Eighteen children with hemiparesis and co-occurring speech impairment participated in a 21-day clinical CIMT program. The Goldman-Fristoe Test of Articulation-2 (GFTA-2) was used to assess children's articulation of speech sounds before and after the intervention. Changes in percent of consonants correct (PCC) on the GFTA-2 were used as a measure of change in speech production. Children made significant gains in PCC following CIMT. Gains were similar in children with left- and right-sided hemiparesis, and across age groups. This study reports significant collateral gains in speech production following CIMT and suggests benefits of CIMT may also spread to speech motor domains.
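As a rough illustration of the outcome measure, here is a minimal Python sketch of a percent-consonants-correct (PCC) computation under simplified, position-by-position scoring; the consonant inventory and scoring rules are assumptions for illustration, not the GFTA-2 protocol.

    # Hedged sketch: PCC = correct consonants / target consonants * 100.
    # Simplified aligned-phone comparison; clinical scoring has extra rules.
    CONSONANTS = set("pbtdkgmnfvszrljwh") | {"tS", "dZ", "S", "Z", "T", "D", "N"}

    def pcc(target_phones, produced_phones):
        correct = total = 0
        for t, p in zip(target_phones, produced_phones):
            if t in CONSONANTS:
                total += 1
                correct += (t == p)
        return 100.0 * correct / total if total else 0.0

    # e.g., target /k ae t/ produced as /t ae t/ -> 1 of 2 consonants correct
    print(pcc(["k", "ae", "t"], ["t", "ae", "t"]))  # 50.0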
Visual Grouping in Accordance With Utterance Planning Facilitates Speech Production.
Zhao, Liming; Paterson, Kevin B; Bai, Xuejun
2018-01-01
Research on language production has focused on the process of utterance planning and involved studying the synchronization between visual gaze and the production of sentences that refer to objects in the immediate visual environment. However, it remains unclear how the visual grouping of these objects might influence this process. To shed light on this issue, the present research examined the effects of the visual grouping of objects in a visual display on utterance planning in two experiments. Participants produced utterances of the form "The snail and the necklace are above/below/on the left/right side of the toothbrush" for objects containing these referents (e.g., a snail, a necklace and a toothbrush). These objects were grouped using classic Gestalt principles of color similarity (Experiment 1) and common region (Experiment 2) so that the induced perceptual grouping was congruent or incongruent with the required phrasal organization. The results showed that speech onset latencies were shorter in congruent than incongruent conditions. The findings therefore reveal that the congruency between the visual grouping of referents and the required phrasal organization can influence speech production. Such findings suggest that, when language is produced in a visual context, speakers make use of both visual and linguistic cues to plan utterances.
Wu, Jinsong; Lu, Junfeng; Zhang, Han; Zhang, Jie; Yao, Chengjun; Zhuang, Dongxiao; Qiu, Tianming; Guo, Qihao; Hu, Xiaobing; Mao, Ying; Zhou, Liangfu
2015-12-01
Chinese language processing has been suggested to involve brain areas distinct from those engaged by English. However, current functional localization studies of Chinese speech processing mostly use "indirect" techniques such as functional magnetic resonance imaging and electroencephalography, and direct evidence from electrocortical recording is lacking. In this study, awake craniotomies in 66 Chinese-speaking glioma patients provided a unique opportunity to directly map eloquent language areas. Intraoperative electrocortical stimulation was conducted, and the positive sites for speech arrest, anomia, and alexia were identified separately. With the help of a stereotaxic neuronavigation system and computational modeling, all positive sites elicited by stimulation were integrated, and a series of two- and three-dimensional Chinese language probability maps were built. We performed statistical comparisons between the Chinese maps and previously derived English maps. While most Chinese speech arrest areas were located at typical language production sites (i.e., 50% of positive sites in the ventral precentral gyrus, 28% in the pars opercularis and pars triangularis), which also serve English production, an additional brain area, the left middle frontal gyrus (Brodmann's areas 6/9), was found to be unique to Chinese production (P < 0.05). Moreover, Chinese speakers' inferior ventral precentral gyrus (Brodmann's area 6) was used more than that of English speakers. Our findings suggest that Chinese involves more of the perisylvian region (extending to the left middle frontal gyrus) than English. This is the first time that direct evidence has supported cross-cultural neurolinguistic differences in human beings. The Chinese language atlas will also be helpful in brain surgery planning for Chinese speakers. Copyright © 2015 Wiley Periodicals, Inc.
Ackermann, Hermann; Riecker, Axel
2010-06-01
Skilled spoken language production requires fast and accurate coordination of up to 100 muscles. A long-standing concept--tracing ultimately back to Paul Broca--assumes posterior parts of the inferior frontal gyrus to support the orchestration of the respective movement sequences prior to innervation of the vocal tract. At variance with this tradition, the insula has more recently been declared the relevant "region for coordinating speech articulation", based upon clinico-neuroradiological correlation studies. However, these findings have been criticized on methodological grounds. A survey of the clinical literature (cerebrovascular disorders, brain tumours, stimulation mapping) yields a still inconclusive picture. By contrast, functional imaging studies report more consistently hemodynamic insular responses in association with motor aspects of spoken language. Most noteworthy, a relatively small area at the junction of insular and opercular cortex was found sensitive to the phonetic-linguistic structure of verbal utterances, a strong argument for its engagement in articulatory control processes. Nevertheless, intrasylvian hemodynamic activation does not appear restricted to articulatory processes and might also be engaged in the adjustment of the autonomic system to ventilatory needs during speech production: Whereas the posterior insula could be involved in the cortical representation of respiration-related metabolic (interoceptive) states, the more rostral components, acting upon autonomic functions, might serve as a corollary pathway to "voluntary control of breathing" bound to corticospinal and -bulbar fiber tracts. For example, the insula could participate in the implementation of task-specific autonomic settings such as the maintenance of a state of relative hyperventilation during speech production.
Aging-related gains and losses associated with word production in connected speech.
Dennis, Paul A; Hess, Thomas M
2016-11-01
Older adults have been observed to use more nonnormative, or atypical, words than younger adults in connected speech. We examined whether aging-related losses in word-finding abilities or gains in language expertise underlie these age differences. Sixty younger and 60 older adults described two neutral photographs. These descriptions were processed into word types, and textual analysis was used to identify interrupted speech (e.g., pauses), reflecting word-finding difficulty. Word types were assessed for normativeness, with nonnormative word types defined as those used by six (5%) or fewer participants to describe a particular picture. Accuracy and precision ratings were provided by another sample of 48 high-vocabulary younger and older adults. Older adults produced more interrupted speech and, as predicted, more nonnormative words than younger adults. Older adults were more likely than younger adults to use nonnormative language via interrupted speech, suggesting a compensatory process. However, older adults' nonnormative words were more precise and trended toward higher accuracy, reflecting expertise. In tasks offering response flexibility, like connected speech, older adults may be able to offset instances of aging-related deficits by maximizing their expertise in other instances.
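The normativeness criterion lends itself to a short sketch. The following hypothetical Python snippet flags word types used by no more than a threshold number of participants; the data structures and example words are invented for illustration.

    # Hedged sketch of the normativeness criterion: a word type is
    # "nonnormative" for a picture if it was used by <= threshold participants
    # (six, i.e., 5% of 120, in the study described above).
    from collections import Counter

    def nonnormative_types(descriptions, threshold=6):
        """descriptions: list of sets of word types, one set per participant."""
        counts = Counter(w for desc in descriptions for w in desc)
        return {w for w, c in counts.items() if c <= threshold}

    # toy example with three participants describing the same picture
    descs = [{"dog", "park"}, {"dog", "frisbee"}, {"dog", "canine"}]
    print(nonnormative_types(descs, threshold=1))  # {'park', 'frisbee', 'canine'}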
Lim, Hayoung A
2010-01-01
The study compared the effect of music training, speech training and no training on the verbal production of children with Autism Spectrum Disorders (ASD). Participants were 50 children with ASD, age range 3 to 5 years, who had previously been evaluated on standard tests of language and level of functioning. They were randomly assigned to one of three 3-day conditions. Participants in music training (n = 18) watched a music video containing 6 songs and pictures of the 36 target words; those in speech training (n = 18) watched a speech video containing 6 stories and pictures, and those in the control condition (n = 14) received no treatment. Participants' verbal production including semantics, phonology, pragmatics, and prosody was measured by an experimenter-designed verbal production evaluation scale. Results showed that participants in both music and speech training significantly increased their pre- to posttest verbal production. Results also indicated that both high- and low-functioning participants improved their speech production after receiving either music or speech training; however, low-functioning participants showed a greater improvement after the music training than the speech training. Children with ASD perceive important linguistic information embedded in music stimuli organized by principles of pattern perception, and produce functional speech.
Imitation and speech: commonalities within Broca's area.
Kühn, Simone; Brass, Marcel; Gallinat, Jürgen
2013-11-01
The so-called embodiment of communication has attracted considerable interest. Recently a growing number of studies have proposed a link between Broca's area's involvement in action processing and its involvement in speech. The present quantitative meta-analysis set out to test whether neuroimaging studies on imitation and overt speech show overlap within the inferior frontal gyrus. By means of activation likelihood estimation (ALE), we investigated concurrence of brain regions activated by object-free hand imitation studies as well as overt speech studies including simple syllable and more complex word production. We found direct overlap between imitation and speech in bilateral pars opercularis (BA 44) within Broca's area. Subtraction analyses revealed no unique localization for either speech or imitation. To verify the potential of ALE subtraction analysis to detect unique involvement within Broca's area, we contrasted the results of a meta-analysis on motor inhibition and imitation and found separable regions involved for imitation. This is the first meta-analysis to compare the neural correlates of imitation and overt speech. The results are in line with the proposed evolutionary roots of speech in imitation.
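To make the ALE idea concrete, here is a hedged toy sketch in Python: each reported focus becomes a 3-D Gaussian "modelled activation" map, and the ALE score at a voxel is the probability that at least one study activates it. Real ALE additionally uses sample-size-dependent kernels and permutation-based thresholding; this shows only the core union computation, with invented coordinates.

    # Hedged toy sketch of activation likelihood estimation (ALE).
    import numpy as np

    def gaussian_map(shape, focus, sigma=2.0):
        zz, yy, xx = np.indices(shape)
        d2 = (zz - focus[0])**2 + (yy - focus[1])**2 + (xx - focus[2])**2
        p = np.exp(-d2 / (2 * sigma**2))
        return p / p.max()                  # modelled activation probability

    def ale(shape, foci_per_study):
        maps = [np.clip(sum(gaussian_map(shape, f) for f in foci), 0, 1)
                for foci in foci_per_study]
        no_activation = np.prod([1 - m for m in maps], axis=0)
        return 1 - no_activation            # union across studies

    scores = ale((20, 20, 20), [[(10, 10, 10)], [(10, 11, 10), (5, 5, 5)]])
    print(scores.max())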
Yang, Wu-xia; Feng, Jie; Huang, Wan-ting; Zhang, Cheng-xiang; Nan, Yun
2014-01-01
Congenital amusia is a musical disorder that mainly affects pitch perception. Among Mandarin speakers, some amusics also have difficulties in processing lexical tones (tone agnosics). To examine to what extent these perceptual deficits may be related to pitch production impairments in music and Mandarin speech, eight amusics, eight tone agnosics, and 12 age- and IQ-matched normal native Mandarin speakers were asked to imitate music note sequences and Mandarin words of comparable lengths. The results indicated that both the amusics and tone agnosics underperformed the controls on musical pitch production. However, tone agnosics performed no worse than the amusics, suggesting that lexical tone perception deficits may not aggravate musical pitch production difficulties. Moreover, these three groups were all able to imitate lexical tones with perfect intelligibility. Taken together, the current study shows that perceptual musical pitch and lexical tone deficits might coexist with musical pitch production difficulties. But at the same time these perceptual pitch deficits might not affect lexical tone production or the intelligibility of the speech words that were produced. The perception-production relationship for pitch among individuals with perceptual pitch deficits may be, therefore, domain-dependent. PMID:24474944
Cat Got Your Tongue? Using the Tip-of-the-Tongue State to Investigate Fixed Expressions
ERIC Educational Resources Information Center
Nordmann, Emily; Cleland, Alexandra A.; Bull, Rebecca
2013-01-01
Despite the fact that they play a prominent role in everyday speech, the representation and processing of fixed expressions during language production is poorly understood. Here, we report a study investigating the processes underlying fixed expression production. "Tip-of-the-tongue" (TOT) states were elicited for well-known idioms…
Paatsch, Louise E; Blamey, Peter J; Sarant, Julia Z; Bow, Catherine P
2006-01-01
A group of 21 hard-of-hearing and deaf children attending primary school were trained by their teachers on the production of selected consonants and on the meanings of selected words. Speech production, vocabulary knowledge, reading aloud, and speech perception measures were obtained before and after each type of training. The speech production training produced a small but significant improvement in the percentage of consonants correctly produced in words. The vocabulary training improved knowledge of word meanings substantially. Performance on speech perception and reading aloud was significantly improved by both types of training. These results were in accord with the predictions of a mathematical model put forward to describe the relationships between speech perception, speech production, and language measures in children (Paatsch, Blamey, Sarant, Martin, & Bow, 2004). These training data demonstrate that the relationships between the measures are causal. In other words, improvements in speech production and vocabulary performance produced by training will carry over into predictable improvements in speech perception and reading scores. Furthermore, the model will help educators identify the most effective methods of improving receptive and expressive spoken language for individual children who are deaf or hard of hearing.
Speech and oromotor outcome in adolescents born preterm: relationship to motor tract integrity.
Northam, Gemma B; Liégeois, Frédérique; Chong, Wui K; Baker, Kate; Tournier, Jacques-Donald; Wyatt, John S; Baldeweg, Torsten; Morgan, Angela
2012-03-01
To assess speech abilities in adolescents born preterm and investigate whether there is an association between specific speech deficits and brain abnormalities. Fifty adolescents born prematurely (<33 weeks' gestation) with a spectrum of brain injuries were recruited (mean age, 16 years). Speech examination included tests of speech-sound processing and production and speech and oromotor control. Conventional magnetic resonance imaging and diffusion-weighted imaging were acquired in all adolescents born preterm and 30 term-born control subjects. Radiological ratings of brain injury were recorded and the integrity of the primary motor projections was measured (corticospinal tract and speech-motor corticobulbar tract [CST/CBT]). There were no clinical diagnoses of developmental dysarthria, dyspraxia, or a speech-sound disorder, but difficulties in speech and oromotor control were common. A regression analysis revealed that the presence of a neurologic impairment and diffusion-weighted imaging abnormalities in the left CST/CBT were significant independent predictors of poor speech and oromotor outcome. These left-lateralized abnormalities were most evident at the level of the posterior limb of the internal capsule. Difficulties in speech and oromotor control are common in adolescents born preterm, and adolescents with injury to the CST/CBT pathways in the left hemisphere may be most at risk. Copyright © 2012 Mosby, Inc. All rights reserved.
Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius
2015-01-01
Background and Purpose Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions regarding whether AOS emerges from a unique pattern of brain damage or as a sub-element of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Methods Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The Apraxia of Speech Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with AOS and/or aphasia. Localized brain damage was identified using structural MRI, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS and/or aphasia, and brain damage. Results The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS and/or aphasia were associated with damage to the temporal lobe and the inferior pre-central frontal regions. Conclusion AOS likely occurs in conjunction with aphasia due to the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. PMID:25908457
Davidow, Jason H; Bothe, Anne K; Richardson, Jessica D; Andreatta, Richard D
2010-12-01
This study introduces a series of systematic investigations intended to clarify the parameters of the fluency-inducing conditions (FICs) in stuttering. Participants included 11 adults, aged 20-63 years, with typical speech-production skills. A repeated measures design was used to examine the relationships between several speech production variables (vowel duration, voice onset time, fundamental frequency, intraoral pressure, pressure rise time, transglottal airflow, and phonated intervals) and speech rate and instatement style during metronome-entrained rhythmic speech. Measures of duration (vowel duration, voice onset time, and pressure rise time) differed across different metronome conditions. When speech rates were matched between the control condition and metronome condition, voice onset time was the only variable that changed. Results confirm that speech rate and instatement style can influence speech production variables during the production of fluency-inducing conditions. Future studies of normally fluent speech and of stuttered speech must control both features and should further explore the importance of voice onset time, which may be influenced by rate during metronome stimulation in a way that the other variables are not.
Relationship between Speech Production and Perception in People Who Stutter.
Lu, Chunming; Long, Yuhang; Zheng, Lifen; Shi, Guang; Liu, Li; Ding, Guosheng; Howell, Peter
2016-01-01
Speech production difficulties are apparent in people who stutter (PWS). PWS also have difficulties in speech perception compared to controls. It is unclear whether the speech perception difficulties in PWS are independent of, or related to, their speech production difficulties. To investigate this issue, functional MRI data were collected on 13 PWS and 13 controls whilst the participants performed a speech production task and a speech perception task. PWS performed poorer than controls in the perception task and the poorer performance was associated with a functional activity difference in the left anterior insula (part of the speech motor area) compared to controls. PWS also showed a functional activity difference in this and the surrounding area [left inferior frontal cortex (IFC)/anterior insula] in the production task compared to controls. Conjunction analysis showed that the functional activity differences between PWS and controls in the left IFC/anterior insula coincided across the perception and production tasks. Furthermore, Granger Causality Analysis on the resting-state fMRI data of the participants showed that the causal connection from the left IFC/anterior insula to an area in the left primary auditory cortex (Heschl's gyrus) differed significantly between PWS and controls. The strength of this connection correlated significantly with performance in the perception task. These results suggest that speech perception difficulties in PWS are associated with anomalous functional activity in the speech motor area, and the altered functional connectivity from this area to the auditory area plays a role in the speech perception difficulties of PWS.
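A minimal sketch of the kind of directed-influence test mentioned above: a bivariate Granger F-test asking whether past values of one region's signal improve prediction of another's, implemented from scratch on synthetic series. This is a generic textbook formulation under stated assumptions, not the authors' analysis pipeline.

    # Hedged sketch: Granger F-test comparing a restricted AR model of y
    # (own lags only) against a full model adding lags of x.
    import numpy as np
    from scipy import stats

    def granger_f(y, x, lags=2):
        n = len(y)
        Y = y[lags:]
        X_r = np.column_stack([np.ones(n - lags)] +
                              [y[lags - k:n - k] for k in range(1, lags + 1)])
        X_f = np.column_stack([X_r] +
                              [x[lags - k:n - k] for k in range(1, lags + 1)])
        rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]) ** 2)
        rss_r, rss_f = rss(X_r), rss(X_f)
        df1, df2 = lags, (n - lags) - X_f.shape[1]
        F = ((rss_r - rss_f) / df1) / (rss_f / df2)
        return F, 1 - stats.f.cdf(F, df1, df2)

    rng = np.random.default_rng(1)
    x = rng.normal(size=300)
    y = 0.6 * np.roll(x, 1) + rng.normal(scale=0.5, size=300)  # x leads y
    print(granger_f(y, x, lags=2))   # large F, small p: x Granger-causes y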
Cerebral specialization for speech production in persons with Down syndrome.
Heath, M; Elliott, D
1999-09-01
The study of cerebral specialization in persons with Down syndrome (DS) has revealed an anomalous pattern of organization. Specifically, dichotic listening studies (e.g., Elliott & Weeks, 1993) have suggested a left ear/right hemisphere dominance for speech perception for persons with DS. In the current investigation, the cerebral dominance for speech production was examined using the mouth asymmetry technique. In right-handed, nonhandicapped subjects, mouth asymmetry methodology has shown that during speech, the right side of the mouth opens sooner and to a larger degree than the left side (Graves, Goodglass, & Landis, 1982). The phenomenon of right mouth asymmetry (RMA) is believed to reflect the direct access that the musculature on the right side of the face has to the left hemisphere's speech production systems. This direct access may facilitate the transfer of innervatory patterns to the muscles on the right side of the face. In the present study, the lateralization for speech production was investigated in 10 right-handed participants with DS and 10 nonhandicapped subjects. A RMA at the initiation and end of speech production occurred for subjects in both groups. Surprisingly, the degree of asymmetry between groups did not differ, suggesting that the lateralization of speech production is similar for persons with and persons without DS. These results support the biological dissociation model (Elliott, Weeks, & Elliott, 1987), which holds that persons with DS display a unique dissociation between speech perception (right hemisphere) and speech production (left hemisphere). Copyright 1999 Academic Press.
Speech and nonspeech: What are we talking about?
Maas, Edwin
2017-08-01
Understanding of the behavioural, cognitive and neural underpinnings of speech production is of interest theoretically, and is important for understanding disorders of speech production and how to assess and treat such disorders in the clinic. This paper addresses two claims about the neuromotor control of speech production: (1) speech is subserved by a distinct, specialised motor control system and (2) speech is holistic and cannot be decomposed into smaller primitives. Both claims have gained traction in recent literature, and are central to a task-dependent model of speech motor control. The purpose of this paper is to stimulate thinking about speech production, its disorders and the clinical implications of these claims. The paper poses several conceptual and empirical challenges for these claims - including the critical importance of defining speech. The emerging conclusion is that a task-dependent model is called into question as its two central claims are founded on ill-defined and inconsistently applied concepts. The paper concludes with discussion of methodological and clinical implications, including the potential utility of diadochokinetic (DDK) tasks in assessment of motor speech disorders and the contraindication of nonspeech oral motor exercises to improve speech function.
Klintö, Kristina; Svensson, Henry; Elander, Anna; Lohmander, Anette
2014-05-01
Objective: To describe and compare speech and phonology at age 3 years in children born with unilateral complete cleft lip and palate treated with three different methods for primary palatal surgery. Design: Prospective study. Setting: Primary care university hospitals. Participants: Twenty-eight Swedish-speaking children born with nonsyndromic unilateral complete cleft lip and palate. Interventions: Three methods for primary palatal surgery: two-stage closure with soft palate closure between 3.4 and 6.4 months and hard palate closure at mean age 12.3 months (n = 9) or 36.2 months (n = 9) or one-stage closure at mean age 13.6 months (n = 10). Main Outcome Measures: Based on independent judgments performed by two speech-language pathologists from standardized video recordings: percent correct consonants adjusted for age, percent active cleft speech characteristics, total number of phonological processes, number of different phonological processes, hypernasality, and audible nasal air leakage. The hard palate was unrepaired in nine of the children treated with two-stage closure. Results: The group treated with one-stage closure showed significantly better results than the group with an unoperated hard palate regarding percent active cleft speech characteristics and total number of phonological processes. Conclusions: Early primary palatal surgery in one or two stages did not result in any significant differences in speech production at age 3 years. However, children with an unoperated hard palate had significantly poorer speech and phonology than peers who had been treated with one-stage palatal closure at about 13 months of age.
Tomblin, J. Bruce; Peng, Shu-Chen; Spencer, Linda J.; Lu, Nelson
2011-01-01
Purpose This study characterized the development of speech sound production in prelingually deaf children with a minimum of 8 years of cochlear implant (CI) experience. Method Twenty-seven pediatric CI recipients' spontaneous speech samples from annual evaluation sessions were phonemically transcribed. Accuracy for these speech samples was evaluated in piecewise regression models. Results As a group, pediatric CI recipients showed steady improvement in speech sound production following implantation, but the improvement rate declined after 6 years of device experience. Piecewise regression models indicated that the slope estimating the participants' improvement rate was statistically greater than 0 during the first 6 years postimplantation, but not after 6 years. The group of pediatric CI recipients' accuracy of speech sound production after 4 years of device experience reasonably predicts their speech sound production after 5–10 years of device experience. Conclusions The development of speech sound production in prelingually deaf children stabilizes after 6 years of device experience, and typically approaches a plateau by 8 years of device use. Early growth in speech before 4 years of device experience did not predict later rates of growth or levels of achievement. However, good predictions could be made after 4 years of device use. PMID:18695018
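The piecewise growth model can be illustrated briefly. Below is a hypothetical Python sketch of a segmented linear regression with a knot at 6 years of device experience, estimating a pre-knot slope and a post-knot change in slope on simulated data; values are illustrative, not the study's.

    # Hedged sketch: piecewise (segmented) regression with a knot at 6 years.
    # Design matrix: intercept, overall slope, and change in slope after knot.
    import numpy as np

    rng = np.random.default_rng(2)
    years = rng.uniform(1, 10, 200)                       # years of CI experience
    accuracy = np.minimum(years, 6) * 8 + 20 + rng.normal(0, 4, 200)

    knot = 6.0
    X = np.column_stack([np.ones_like(years),
                         years,                            # pre-knot slope
                         np.maximum(years - knot, 0)])     # slope change after knot
    b, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
    print(f"slope before 6 yrs: {b[1]:.2f}, slope after: {b[1] + b[2]:.2f}")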
Walking the talk--speech activates the leg motor cortex.
Liuzzi, Gianpiero; Ellger, Tanja; Flöel, Agnes; Breitenstein, Caterina; Jansen, Andreas; Knecht, Stefan
2008-09-01
Speech may have evolved from earlier modes of communication based on gestures. Consistent with such a motor theory of speech, cortical orofacial and hand motor areas are activated by both speech production and speech perception. However, the extent of speech-related activation of the motor cortex remains unclear. Therefore, we examined if reading and listening to continuous prose also activates non-brachiofacial motor representations like the leg motor cortex. We found corticospinal excitability of bilateral leg muscle representations to be enhanced by speech production and silent reading. Control experiments showed that speech production yielded stronger facilitation of the leg motor system than non-verbal tongue-mouth mobilization, and silent reading more than a visuo-attentional task, thus indicating the speech-specificity of the effect. In the frame of the motor theory of speech this finding suggests that the system of gestural communication, from which speech may have evolved, is not confined to the hand but includes gestural movements of other body parts as well.
ERIC Educational Resources Information Center
Trebits, Anna
2016-01-01
The aim of this study was to investigate the effects of cognitive task complexity and individual differences in input, processing, and output anxiety (IPOA) on L2 narrative production. The participants were enrolled in a bilingual secondary educational program. They performed two narrative tasks in speech and writing. The participants' level of…
Cumming, Ruth; Wilson, Angela; Goswami, Usha
2015-01-01
Children with specific language impairments (SLIs) show impaired perception and production of spoken language, and can also present with motor, auditory, and phonological difficulties. Recent auditory studies have shown impaired sensitivity to amplitude rise time (ART) in children with SLIs, along with non-speech rhythmic timing difficulties. Linguistically, these perceptual impairments should affect sensitivity to speech prosody and syllable stress. Here we used two tasks requiring sensitivity to prosodic structure, the DeeDee task and a stress misperception task, to investigate this hypothesis. We also measured auditory processing of ART, rising pitch and sound duration, in both speech (“ba”) and non-speech (tone) stimuli. Participants were 45 children with SLI aged on average 9 years and 50 age-matched controls. We report data for all the SLI children (N = 45, IQ varying), as well as for two independent SLI subgroupings with intact IQ. One subgroup, “Pure SLI,” had intact phonology and reading (N = 16), the other, “SLI PPR” (N = 15), had impaired phonology and reading. Problems with syllable stress and prosodic structure were found for all the group comparisons. Both sub-groups with intact IQ showed reduced sensitivity to ART in speech stimuli, but the PPR subgroup also showed reduced sensitivity to sound duration in speech stimuli. Individual differences in processing syllable stress were associated with auditory processing. These data support a new hypothesis, the “prosodic phrasing” hypothesis, which proposes that grammatical difficulties in SLI may reflect perceptual difficulties with global prosodic structure related to auditory impairments in processing amplitude rise time and duration. PMID:26217286
Spectral analysis method and sample generation for real time visualization of speech
NASA Astrophysics Data System (ADS)
Hobohm, Klaus
A method for translating speech signals into optical patterns, characterized by high sound discriminability and learnability and designed to give deaf persons feedback for controlling their own speech, is presented. Important properties of the speech production and perception processes, and of the organs involved in these mechanisms, are reviewed in order to define requirements for speech visualization. It is established that the spectral representation must match the time, frequency, and amplitude resolution of hearing, and that continuous variations in the acoustic parameters of the speech signal must be depicted by continuous variations in the images. A color table was developed for dynamic display, and sonograms were generated with five spectral analysis methods, including Fourier transformation and linear predictive coding. To evaluate sonogram quality, test subjects had to recognize consonant/vowel/consonant words; an optimized analysis method was achieved with a fast Fourier transformation and a postprocessor. A hardware concept for a real-time speech visualization system, based on multiprocessor technology in a personal computer, is presented.
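One plausible core of such a system, sketched under assumptions: a short-time Fourier transform producing a dB-scaled sonogram matrix that a colour-mapped display could render in real time. Window and hop sizes below are illustrative, not the values from this work.

    # Hedged sketch: short-time FFT sonogram (time x frequency, in dB).
    import numpy as np

    def sonogram(signal, fs, win_ms=20, hop_ms=5):
        win = int(fs * win_ms / 1000)
        hop = int(fs * hop_ms / 1000)
        window = np.hanning(win)
        frames = [signal[i:i + win] * window
                  for i in range(0, len(signal) - win, hop)]
        spec = np.abs(np.fft.rfft(frames, axis=1))        # magnitude spectrum
        return 20 * np.log10(spec + 1e-10)                # dB scale for display

    fs = 16000
    t = np.arange(fs) / fs
    chirp = np.sin(2 * np.pi * (200 + 300 * t) * t)       # toy test signal
    print(sonogram(chirp, fs).shape)                      # (frames, freq bins)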
Age-related changes to the production of linguistic prosody
NASA Astrophysics Data System (ADS)
Barnes, Daniel R.
The production of speech prosody (the rhythm, pausing, and intonation associated with natural speech) is critical to effective communication. The current study investigated the impact of age-related changes to physiology and cognition in relation to the production of two types of linguistic prosody: lexical stress and the disambiguation of syntactically ambiguous utterances. Analyses of the acoustic correlates of stress (speech intensity, or sound-pressure level [SPL]; fundamental frequency [F0]; key word/phrase duration; and pause duration) revealed that both young and older adults effectively use these acoustic features to signal linguistic prosody, although the relative weighting of cues differed by group. Differences in F0 were attributed to age-related physiological changes in the laryngeal subsystem, while group differences in duration measures were attributed to relative task complexity and the cognitive-linguistic load of these respective tasks. The current study provides normative acoustic data for older adults which informs interpretation of clinical findings as well as research pertaining to dysprosody as the result of disease processes.
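As an illustration of two of these acoustic measures, here is a minimal Python sketch extracting frame-level RMS intensity (a stand-in for SPL) and F0 via an autocorrelation peak pick. Production studies typically use dedicated tools such as Praat, so this is only a toy under stated assumptions.

    # Hedged sketch: RMS intensity in dB and F0 from a simple autocorrelation.
    import numpy as np

    def rms_db(frame):
        return 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-10)

    def f0_autocorr(frame, fs, fmin=75, fmax=400):
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(fs / fmax), int(fs / fmin)
        lag = lo + np.argmax(ac[lo:hi])
        return fs / lag

    fs = 16000
    t = np.arange(int(0.04 * fs)) / fs
    frame = 0.5 * np.sin(2 * np.pi * 120 * t)              # 120 Hz toy "voice"
    print(rms_db(frame), f0_autocorr(frame, fs))           # ~-9 dB, ~120 Hz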
Speed Accuracy Tradeoffs in Human Speech Production
2017-05-01
…for considering Fitts' law in the domain of speech production is elucidated. Methodological challenges in applying Fitts-style analysis are addressed… …in order to assess whether articulatory kinematics conform to Fitts' law. A second, associated goal is to address the methodological challenges inherent in performing Fitts-style analysis on rtMRI data of speech production. Methodological challenges include segmenting continuous speech into specific motor…
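For context, the Fitts-style analysis referred to in these fragments regresses movement time on the index of difficulty, ID = log2(2D / W), where D is movement distance and W is target width; articulatory measurements (e.g., from rtMRI) would supply D, W, and measured durations. A hedged sketch with invented numbers:

    # Hedged sketch: fit MT = a + b * ID (Fitts' law) by least squares.
    import numpy as np

    D = np.array([4.0, 8.0, 8.0, 16.0])      # hypothetical distances (mm)
    W = np.array([2.0, 2.0, 4.0, 2.0])       # hypothetical target widths (mm)
    MT = np.array([0.12, 0.16, 0.13, 0.20])  # hypothetical movement times (s)

    ID = np.log2(2 * D / W)                  # index of difficulty (bits)
    A = np.column_stack([np.ones_like(ID), ID])
    (a, b), *_ = np.linalg.lstsq(A, MT, rcond=None)
    print(f"MT = {a:.3f} + {b:.3f} * ID")    # Fitts' law fit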
ERIC Educational Resources Information Center
Zheng, Zane Z.; Munhall, Kevin G.; Johnsrude, Ingrid S.
2010-01-01
The fluency and the reliability of speech production suggest a mechanism that links motor commands and sensory feedback. Here, we examined the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or…
The role of accent imitation in sensorimotor integration during processing of intelligible speech
Adank, Patti; Rueschemeyer, Shirley-Ann; Bekkering, Harold
2013-01-01
Recent theories on how listeners maintain perceptual invariance despite variation in the speech signal allocate a prominent role to imitation mechanisms. Notably, these simulation accounts propose that motor mechanisms support perception of ambiguous or noisy signals. Indeed, imitation of ambiguous signals, e.g., accented speech, has been found to aid effective speech comprehension. Here, we explored the possibility that imitation in speech benefits perception by increasing activation in speech perception and production areas. Participants rated the intelligibility of sentences spoken in an unfamiliar accent of Dutch in a functional Magnetic Resonance Imaging experiment. Next, participants in one group repeated the sentences in their own accent, while a second group vocally imitated the accent. Finally, both groups rated the intelligibility of accented sentences in a post-test. The neuroimaging results showed an interaction between type of training and pre- and post-test sessions in left Inferior Frontal Gyrus, Supplementary Motor Area, and left Superior Temporal Sulcus. Although alternative explanations such as task engagement and fatigue need to be considered as well, the results suggest that imitation may aid effective speech comprehension by supporting sensorimotor integration. PMID:24109447
ERIC Educational Resources Information Center
Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.
2016-01-01
Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…
Speech processing using maximum likelihood continuity mapping
Hogden, John E.
2000-01-01
A speech processing method is described that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning such a probabilistic mapping is described; it uses a set of training data composed only of speech sounds. This speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
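As a rough illustration of the continuity-mapping idea above, the following sketch recovers a smooth, high-likelihood sequence of pseudo-articulator positions by dynamic programming over a discretized position grid. The grid size, quadratic jump penalty, and random log-likelihoods are illustrative assumptions, not details of Hogden's method.

    import numpy as np

    def smooth_path(loglik, smooth_weight=1.0):
        """loglik[t, k] = log P(sound at frame t | pseudo-articulator position k).
        Returns one position index per frame, maximizing likelihood minus a
        quadratic penalty on jumps between consecutive positions."""
        T, K = loglik.shape
        jump = smooth_weight * (np.arange(K)[:, None] - np.arange(K)[None, :]) ** 2
        score, back = loglik[0].copy(), np.zeros((T, K), dtype=int)
        for t in range(1, T):
            total = score[:, None] - jump          # score of every (prev, next) pair
            back[t] = np.argmax(total, axis=0)     # best predecessor per position
            score = total[back[t], np.arange(K)] + loglik[t]
        path = [int(np.argmax(score))]
        for t in range(T - 1, 0, -1):              # trace the best path backwards
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    rng = np.random.default_rng(0)
    print(smooth_path(rng.normal(size=(5, 4))))    # toy run: 5 frames, 4 positions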
Leonard, Matthew K; Desai, Maansi; Hungate, Dylan; Cai, Ruofan; Singhal, Nilika S; Knowlton, Robert C; Chang, Edward F
2018-05-22
Music and speech are human-specific behaviours that share numerous properties, including the fine motor skills required to produce them. Given these similarities, previous work has suggested that music and speech may at least partially share neural substrates. To date, much of this work has focused on perception, and has not investigated the neural basis of production, particularly in trained musicians. Here, we report two rare cases of musicians undergoing neurosurgical procedures, where it was possible to directly stimulate the left hemisphere cortex during speech and piano/guitar music production tasks. We found that stimulation to left inferior frontal cortex, including pars opercularis and ventral pre-central gyrus, caused slowing and arrest for both speech and music, and note sequence errors for music. Stimulation to posterior superior temporal cortex only caused production errors during speech. These results demonstrate partially dissociable networks underlying speech and music production, with a shared substrate in frontal regions.
Word production inconsistency of Singaporean-English-speaking adolescents with Down Syndrome.
Wong, Betty; Brebner, Chris; McCormack, Paul; Butcher, Andy
2015-01-01
The nature of speech disorders in individuals with Down Syndrome (DS) remains controversial despite various explanations put forth in the literature to account for the observed speech profiles. A high level of word production inconsistency in children with DS has led researchers to query whether the inconsistency continues into adolescence, and whether it stems from inconsistent phonological disorder (IPD) or childhood apraxia of speech (CAS). Of the studies that have been published, most suggest that the speech profile of individuals with DS is delayed, while a few recent studies suggest a combination of delayed and disordered patterns. However, no studies have explored the nature of word production inconsistency in this population, or the relationship between word production inconsistency, receptive vocabulary and severity of speech disorder. The aims of this pilot study were to investigate the extent of word production inconsistency in adolescents with DS and to examine the correlations between word production inconsistency, measures of receptive vocabulary, severity of speech disorder and oromotor skills. The participants were 32 adolescent native speakers of Singaporean English, comprising 16 participants with DS and 16 typically developing (TD) participants. The participants completed a battery of standardized speech and language assessments, including the Diagnostic Evaluation of Articulation and Phonology (DEAP) assessment. Results from each test were correlated to determine relationships. Qualitative analyses were also carried out on all the data collected. In this study, seven out of 16 participants with DS scored above 40% on word production inconsistency, a diagnostic criterion for IPD. In addition, all participants with DS performed poorly on the oromotor assessment of the DEAP. The overall speech profile observed did not exactly correspond with the cluster of symptoms observed in children with IPD or CAS. Word production inconsistency is a noticeable feature in the speech of individuals with DS. In addition, the speech profiles of individuals with DS consist of atypical and unusual errors alongside developmental errors. Significant correlations were found between the measures investigated, suggesting that speech disorder in DS is multifactorial. The results from this study will help to improve differential diagnosis of speech disorders and individualized treatment plans in the population with DS. © 2015 Royal College of Speech and Language Therapists.
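For concreteness, a word production inconsistency score of the kind used above (the percentage of words produced differently across repeated attempts, with more than 40% as the diagnostic cut-off) can be computed as in this minimal sketch; the words and ASCII-coded transcriptions are invented examples, not study data.

    def inconsistency_score(productions):
        """productions: dict mapping each word to a list of transcribed attempts."""
        variable = sum(1 for attempts in productions.values()
                       if len(set(attempts)) > 1)
        return 100.0 * variable / len(productions)

    sample = {"fish": ["fIS", "fIs", "fIS"],          # hypothetical transcriptions
              "elephant": ["Ef@nt", "El@f@nt", "EfEnt"],
              "ball": ["bOl", "bOl", "bOl"]}
    score = inconsistency_score(sample)
    print("%.1f%% inconsistent; criterion %s" %
          (score, "met" if score > 40 else "not met"))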
Rendall, Drew; Vasey, Paul L; McKenzie, Jared
2008-02-01
Popular stereotypes concerning the speech of homosexuals typically attribute speech patterns characteristic of the opposite sex, i.e., broadly feminized speech in gay men and broadly masculinized speech in lesbian women. A small body of recent empirical research has begun to address the subject more systematically and to consider specific mechanistic hypotheses to account for the potentially distinctive features of homosexual speech. Results do not yet fully endorse the stereotypes but they do not entirely discount them either; nor do they cleanly favor any single mechanistic hypothesis. To contribute to this growing body of research, we report acoustic analyses of 2,875 vowel sounds from a balanced set of 125 speakers representing heterosexual and homosexual individuals of each sex from southern Alberta, Canada. Analyses focused on voice pitch and formant frequencies, which together determine the principal perceptual features of vowels. There was no significant difference in mean voice pitch between heterosexual and homosexual men or between heterosexual and homosexual women, but there were significant differences in the formant frequencies of vowels produced by both homosexual groups compared to their heterosexual counterparts. Formant frequency differences were specific to only certain vowel sounds and some could be attributed to basic differences in body size between heterosexual and homosexual speakers. The remaining formant frequency differences were not obviously due to differences in vocal tract anatomy between heterosexual and homosexual speakers, nor did they reflect global feminization or masculinization of vowel production patterns in homosexual men and women, respectively. The vowel-specific differences observed could reflect social modeling processes in which only certain speech patterns of the opposite sex, or of same-sex homosexuals, are selectively adopted. However, we introduce an alternative biosocial hypothesis, specifically that the distinctive, vowel-specific features of homosexual speakers relative to heterosexual speakers arise incidentally as a product of broader psychobehavioral differences between the two groups that are, in turn, continuous with and flow from the physiological processes that affect sexual orientation to begin with.
Price, Cathy J.
2012-01-01
The anatomy of language has been investigated with PET or fMRI for more than 20 years. Here I attempt to provide an overview of the brain areas associated with heard speech, speech production and reading. The conclusions of many hundreds of studies were considered, grouped according to the type of processing, and reported in the order that they were published. Many findings have been replicated time and time again, leading to some consistent and indisputable conclusions. These are summarised in an anatomical model that indicates the location of the language areas and the most consistent functions that have been assigned to them. The implications for cognitive models of language processing are also considered. In particular, a distinction can be made between processes that are localized to specific structures (e.g. sensory and motor processing) and processes where specialisation arises in the distributed pattern of activation over many different areas that each participate in multiple functions. For example, phonological processing of heard speech is supported by the functional integration of auditory processing and articulation; and orthographic processing is supported by the functional integration of visual processing, articulation and semantics. Future studies will undoubtedly be able to improve the spatial precision with which functional regions can be dissociated but the greatest challenge will be to understand how different brain regions interact with one another in their attempts to comprehend and produce language. PMID:22584224
Phrase-level speech simulation with an airway modulation model of speech production
Story, Brad H.
2012-01-01
Artificial talkers and speech synthesis systems have long been used as a means of understanding both speech production and speech perception. The development of an airway modulation model is described that simulates the time-varying changes of the glottis and vocal tract, as well as acoustic wave propagation, during speech production. The result is a type of artificial talker that can be used to study various aspects of how sound is generated by humans and how that sound is perceived by a listener. The primary components of the model are introduced and simulations of words and phrases are demonstrated. PMID:23503742
Changes in Speech Production Associated with Alphabet Supplementation
ERIC Educational Resources Information Center
Hustad, Katherine C.; Lee, Jimin
2008-01-01
Purpose: This study examined the effect of alphabet supplementation (AS) on temporal and spectral features of speech production in individuals with cerebral palsy and dysarthria. Method: Twelve speakers with dysarthria contributed speech samples using habitual speech and while using AS. One hundred twenty listeners orthographically transcribed…
Gauvin, Hanna S.; Hartsuiker, Robert J.; Huettig, Falk
2013-01-01
The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants is lacking so far. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception, however, lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception. PMID:24339809
Hemodynamics of speech production: An fNIRS investigation of children who stutter.
Walsh, B; Tian, F; Tourville, J A; Yücel, M A; Kuczek, T; Bostian, A J
2017-06-22
Stuttering affects nearly 1% of the population worldwide and often has life-altering negative consequences, including poorer mental health and emotional well-being, and reduced educational and employment achievements. Over two decades of neuroimaging research reveals clear anatomical and physiological differences in the speech neural networks of adults who stutter. However, there have been few neurophysiological investigations of speech production in children who stutter. Using functional near-infrared spectroscopy (fNIRS), we examined hemodynamic responses over neural regions integral to fluent speech production including inferior frontal gyrus, premotor cortex, and superior temporal gyrus during a picture description task. Thirty-two children (16 stuttering and 16 controls) aged 7-11 years participated in the study. We found distinctly different speech-related hemodynamic responses in the group of children who stutter compared to the control group. Whereas controls showed significant activation over left dorsal inferior frontal gyrus and left premotor cortex, children who stutter exhibited deactivation over these left hemisphere regions. This investigation of neural activation during natural, connected speech production in children who stutter demonstrates that in childhood stuttering, atypical functional organization for speech production is present and suggests promise for the use of fNIRS during natural speech production in future research with typical and atypical child populations.
Lim, Hayoung A; Draper, Ellary
2011-01-01
This study compared a common form of Applied Behavior Analysis Verbal Behavior (ABA VB) approach and music incorporated into the ABA VB method as part of developmental speech-language training in the speech production of children with Autism Spectrum Disorders (ASD). This study explored how the perception of musical patterns incorporated in ABA VB operants impacted the production of speech in children with ASD. Participants were 22 children with ASD, age range 3 to 5 years, who were verbal or preverbal with presence of immediate echolalia. They were randomly assigned a set of target words for each of the 3 training conditions: (a) music-incorporated ABA VB, (b) speech (ABA VB) and (c) no training. Results showed both music and speech trainings were effective for production of the four ABA verbal operants; however, the difference between music and speech training was not statistically significant. Results also indicated that music-incorporated ABA VB training was most effective in echoic production, and speech training was most effective in tact production. Music can be incorporated into the ABA VB training method, and musical stimuli can be used as successfully as ABA VB speech training to enhance the functional verbal production in children with ASD.
Structural brain aging and speech production: a surface-based brain morphometry study.
Tremblay, Pascale; Deschamps, Isabelle
2016-07-01
While there has been a growing number of studies examining the neurofunctional correlates of speech production over the past decade, the neurostructural correlates of this immensely important human behaviour remain less well understood, despite the fact that previous studies have established links between brain structure and behaviour, including speech and language. In the present study, we thus examined, for the first time, the relationship between surface-based cortical thickness (CT) and three different behavioural indexes of sublexical speech production: response duration, reaction times and articulatory accuracy, in healthy young and older adults during the production of simple and complex meaningless sequences of syllables (e.g., /pa-pa-pa/ vs. /pa-ta-ka/). The results show that each behavioural speech measure was sensitive to the complexity of the sequences, as indicated by slower reaction times, longer response durations and decreased articulatory accuracy in both groups for the complex sequences. Older adults produced longer speech responses, particularly during the production of complex sequences. Unique age-independent and age-dependent relationships between brain structure and each of these behavioural measures were found in several cortical and subcortical regions known for their involvement in speech production, including the bilateral anterior insula, the left primary motor area, the rostral supramarginal gyrus, the right inferior frontal sulcus, the bilateral putamen and caudate, and in some regions less typically associated with speech production, such as the posterior cingulate cortex.
Neural Recruitment for the Production of Native and Novel Speech Sounds
Moser, Dana; Fridriksson, Julius; Bonilha, Leonardo; Healy, Eric W.; Baylis, Gordon; Baker, Julie; Rorden, Chris
2010-01-01
Two primary areas of damage have been implicated in apraxia of speech (AOS) based on the time post-stroke: (1) the left inferior frontal gyrus (IFG) in acute patients, and (2) the left anterior insula (aIns) in chronic patients. While AOS is widely characterized as a disorder in motor speech planning, little is known about the specific contributions of each of these regions in speech. The purpose of this study was to investigate cortical activation during speech production with a specific focus on the aIns and the IFG in normal adults. While undergoing sparse fMRI, 30 normal adults completed a 30-minute speech-repetition task consisting of three-syllable nonwords that contained either (a) English (native) syllables or (b) Non-English (novel) syllables. When the novel syllable productions were compared to the native syllable productions, greater neural activation was observed in the aIns and IFG, particularly during the first 10 minutes of the task when novelty was the greatest. Although activation in the aIns remained high throughout the task for novel productions, greater activation was clearly demonstrated when the initial 10 minutes were compared to the final 10 minutes of the task. These results suggest increased activity within an extensive neural network, including the aIns and IFG, when the motor speech system is taxed, such as during the production of novel speech. We speculate that the amount of left aIns recruitment during speech production may be related to the internal construction of the motor speech unit, such that the degree of novelty/automaticity would result in greater or lesser demands, respectively. The role of the IFG as a storehouse and integrative processor for previously acquired routines is also discussed. PMID:19385020
Functional speech disorders: clinical manifestations, diagnosis, and management.
Duffy, J R
2016-01-01
Acquired psychogenic or functional speech disorders are a subtype of functional neurologic disorders. They can mimic organic speech disorders and, although any aspect of speech production can be affected, they manifest most often as dysphonia, stuttering, or prosodic abnormalities. This chapter reviews the prevalence of functional speech disorders, the spectrum of their primary clinical characteristics, and the clues that help distinguish them from organic neurologic diseases affecting the sensorimotor networks involved in speech production. Diagnosis of a speech disorder as functional can be supported by sometimes rapidly achieved positive outcomes of symptomatic speech therapy. The general principles of such therapy are reviewed. © 2016 Elsevier B.V. All rights reserved.
Subthalamic nucleus neurons differentially encode early and late aspects of speech production.
Lipski, W J; Alhourani, A; Pirnia, T; Jones, P W; Dastolfo-Hromack, C; Helou, L B; Crammond, D J; Shaiman, S; Dickey, M W; Holt, L L; Turner, R S; Fiez, J A; Richardson, R M
2018-05-22
Basal ganglia-thalamocortical loops mediate all motor behavior, yet little detail is known about the role of basal ganglia nuclei in speech production. Using intracranial recording during deep brain stimulation surgery in humans with Parkinson's disease, we tested the hypothesis that the firing rate of subthalamic nucleus neurons is modulated in sync with motor execution aspects of speech. Nearly half of seventy-nine units recorded across twelve subjects (male and female) exhibited firing rate modulation during a syllable reading task. Trial-to-trial timing of changes in subthalamic neuronal activity, relative to cue onset versus production onset, revealed that locking to cue presentation was associated more with units that decreased firing rate, while locking to speech onset was associated more with units that increased firing rate. These unique data indicate that subthalamic activity is dynamic during the production of speech, reflecting temporally dependent inhibition and excitation of separate populations of subthalamic neurons. SIGNIFICANCE STATEMENT The basal ganglia are widely assumed to participate in speech production, yet no prior studies have reported detailed examination of speech-related activity in basal ganglia nuclei. Using microelectrode recordings from the subthalamic nucleus during a single syllable reading task, in awake humans undergoing deep brain stimulation implantation surgery, we show that the firing rate of subthalamic nucleus neurons is modulated in response to motor execution aspects of speech. These results are the first to establish a role for subthalamic nucleus neurons in encoding of aspects of speech production, and they lay the groundwork for launching a modern subfield to explore basal ganglia function in human speech. Copyright © 2018 the authors.
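As a rough sketch of the kind of event-locking comparison described above, the code below contrasts a unit's firing-rate change when trials are aligned to cue onset versus speech onset. The spike and event times are invented, and the 0.5-s analysis window is an assumption for illustration.

    import numpy as np

    def rate_change(spike_times, event_times, window=0.5):
        """Mean firing-rate change (spikes/s): post-event minus pre-event window."""
        s = np.asarray(spike_times)
        deltas = [((s > t) & (s <= t + window)).sum() -
                  ((s > t - window) & (s <= t)).sum()
                  for t in event_times]
        return np.mean(deltas) / window

    spikes = [0.1, 0.9, 1.1, 1.15, 1.3, 2.4, 3.0, 3.05, 3.2]   # seconds
    cue_onsets, speech_onsets = [0.8, 2.2], [1.0, 2.9]
    print(rate_change(spikes, cue_onsets), rate_change(spikes, speech_onsets))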
Phonological and Motor Errors in Individuals with Acquired Sound Production Impairment
ERIC Educational Resources Information Center
Buchwald, Adam; Miozzo, Michele
2012-01-01
Purpose: This study aimed to compare sound production errors arising due to phonological processing impairment with errors arising due to motor speech impairment. Method: Two speakers with similar clinical profiles who produced similar consonant cluster simplification errors were examined using a repetition task. We compared both overall accuracy…
ERIC Educational Resources Information Center
Greene, John O.; And Others
1993-01-01
Finds that the increased cognitive load accompanying multiple-goal messages arises from demands on time and processing capacity associated with assembling incompatible message features and that multiple-goal messages are characterized by heavier demand on processing capacity associated with maintaining more complex message-relevant specifications…
Audiovisual speech perception development at varying levels of perceptual processing
Lalonde, Kaylah; Holt, Rachael Frush
2016-01-01
This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318
Machado-Nascimento, Nárli; Melo E Kümmer, Arthur; Lemos, Stela Maris Aguiar
2016-01-01
To systematically review the scientific production on the relationship between Attention Deficit Hyperactivity Disorder (ADHD) and speech-language pathology, and to methodologically analyze the observational studies on the theme. Systematic review of the literature conducted in the databases Medical Literature Analysis and Retrieval System Online (MEDLINE, USA), Literature of Latin America and the Caribbean Health Sciences (LILACS, Brazil) and Spanish Bibliographic Index of Health Sciences (IBECS, Spain) using the descriptors "Language", "Language Development", "Attention Deficit Hyperactivity Disorder", "ADHD" and "Auditory Perception", covering articles published between 2008 and 2013. Inclusion criteria: full articles published in national and international journals from 2008 to 2013. Exclusion criteria: articles not focused on the speech-language pathology alterations present in attention deficit hyperactivity disorder. The articles were read in full and the data were extracted for characterization of methodology and content. The 23 articles found were separated according to two themes: speech-language pathology and Attention Deficit Hyperactivity Disorder. The study of the scientific production revealed that the alterations most commonly discussed were reading disorders and that there are few reports on the relationship between auditory processing and these disorders, as well as on the role of the speech-language pathologist in the evaluation and treatment of children with Attention Deficit Hyperactivity Disorder.
Instrumental and perceptual phonetic analyses: the case for two-tier transcriptions.
Howard, Sara; Heselwood, Barry
2011-11-01
In this article, we discuss the relationship between instrumental and perceptual phonetic analyses. Using data drawn from typical and atypical speech production, we argue that the use of two-tier transcriptions, which can compare and contrast perceptual and instrumental information, is valuable both for our general understanding of the mechanisms of speech production and perception and also for assessment and intervention for individuals with atypical speech production. The central tenet of our case is that instrumental and perceptual analyses are not in competition to give a single 'correct' account of speech data. Instead, they take perspectives on complementary phonetic domains, which interlock in the speech chain to encompass production, transmission and perception.
An acoustic comparison of two women's infant- and adult-directed speech
NASA Astrophysics Data System (ADS)
Andruski, Jean; Katz-Gershon, Shiri
2003-04-01
In addition to having prosodic characteristics that are attractive to infant listeners, infant-directed (ID) speech shares certain characteristics of adult-directed (AD) clear speech, such as increased acoustic distance between vowels, that might be expected to make ID speech easier for adults to perceive in noise than AD conversational speech. However, perceptual tests of two women's ID productions by Andruski and Bessega [J. Acoust. Soc. Am. 112, 2355] showed that this is not always the case. In a word identification task that compared ID speech with AD clear and conversational speech, one speaker's ID productions were less well-identified than AD clear speech, but better identified than AD conversational speech. For the second woman, ID speech was the least accurately identified of the three speech registers. For both speakers, hard words (infrequent words with many lexical neighbors) were also at an increased disadvantage relative to easy words (frequent words with few lexical neighbors) in speech registers that were less accurately perceived. This study will compare several acoustic properties of these women's productions, including pitch and formant-frequency characteristics. Results of the acoustic analyses will be examined with the original perceptual results to suggest reasons for differences in listeners' accuracy in identifying these two women's ID speech in noise.
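One common way to quantify the acoustic distance between vowels mentioned above is the area of the polygon spanned by corner vowels in F1-F2 space, computed with the shoelace formula. The formant values below are rough illustrative figures, not measurements from these talkers.

    def vowel_space_area(formants):
        """formants: (F1, F2) pairs in Hz, ordered around the vowel polygon."""
        area = 0.0
        for (x1, y1), (x2, y2) in zip(formants, formants[1:] + formants[:1]):
            area += x1 * y2 - x2 * y1              # shoelace formula term
        return abs(area) / 2.0

    conversational = [(310, 2500), (700, 1700), (550, 1000)]   # /i/, /a/, /u/-like
    clear = [(280, 2750), (780, 1650), (500, 900)]             # expanded corners
    print(vowel_space_area(conversational), vowel_space_area(clear))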
Syllable Structure in Dysfunctional Portuguese Children's Speech
ERIC Educational Resources Information Center
Candeias, Sara; Perdigao, Fernando
2010-01-01
The goal of this work is to investigate whether children with speech dysfunctions (SD) show a deficit in planning some Portuguese syllable structures (PSS) in continuous speech production. Knowledge of which aspects of speech production are affected by SD is necessary for efficient improvement in the therapy techniques. The case-study is focused…
Civier, Oren; Tasko, Stephen M; Guenther, Frank H
2010-09-01
This paper investigates the hypothesis that stuttering may result in part from impaired readout of feedforward control of speech, which forces persons who stutter (PWS) to produce speech with a motor strategy that is weighted too much toward auditory feedback control. Over-reliance on feedback control leads to production errors which, if they grow large enough, can cause the motor system to "reset" and repeat the current syllable. This hypothesis is investigated using computer simulations of a "neurally impaired" version of the DIVA model, a neural network model of speech acquisition and production. The model's outputs are compared to published acoustic data from PWS' fluent speech, and to combined acoustic and articulatory movement data collected from the dysfluent speech of one PWS. The simulations mimic the errors observed in the PWS subject's speech, as well as the repairs of these errors. Additional simulations were able to account for enhancements of fluency gained by slowed/prolonged speech and masking noise. Together these results support the hypothesis that many dysfluencies in stuttering are due to a bias away from feedforward control and toward feedback control. The reader will be able to (a) describe the contribution of auditory feedback control and feedforward control to normal and stuttered speech production, (b) summarize the neural modeling approach to speech production and its application to stuttering, and (c) explain how the DIVA model accounts for enhancements of fluency gained by slowed/prolonged speech and masking noise.
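As a toy illustration of this weighting hypothesis (emphatically not the DIVA model itself), the sketch below tracks a target trajectory with a command that mixes a feedforward copy of the target with a delayed feedback correction. The delay, gain, and reset threshold are arbitrary assumptions chosen so that the feedback-weighted regime becomes unstable.

    import numpy as np

    def count_resets(alpha, delay=3, gain=1.5, steps=60, threshold=0.6):
        """alpha weights feedforward control; 1 - alpha weights delayed feedback.
        Counts how often tracking error exceeds the 'reset' threshold."""
        target = np.sin(np.linspace(0, 2 * np.pi, steps))
        output = np.zeros(steps)
        resets = 0
        for t in range(1, steps):
            err = target[t - delay] - output[t - delay] if t >= delay else 0.0
            output[t] = alpha * target[t] + (1 - alpha) * (output[t - 1] + gain * err)
            if abs(target[t] - output[t]) > threshold:
                resets += 1
        return resets

    for alpha in (0.9, 0.3):   # feedforward-weighted vs feedback-weighted control
        print("alpha=%.1f: %d threshold crossings" % (alpha, count_resets(alpha)))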
Clear Speech Modifications in Children Aged 6-10
NASA Astrophysics Data System (ADS)
Taylor, Griffin Lijding
Modifications to speech production made by adult talkers in response to instructions to speak clearly have been well documented in the literature. Targeting adult populations has been motivated by efforts to improve speech production for the benefit of the communication partners, however, many adults also have communication partners who are children. Surprisingly, there is limited literature on whether children can change their speech production when cued to speak clearly. Pettinato, Tuomainen, Granlund, and Hazan (2016) showed that by age 12, children exhibited enlarged vowel space areas and reduced articulation rate when prompted to speak clearly, but did not produce any other adult-like clear speech modifications in connected speech. Moreover, Syrett and Kawahara (2013) suggested that preschoolers produced longer and more intense vowels when prompted to speak clearly at the word level. These findings contrasted with adult talkers who show significant temporal and spectral differences between speech produced in control and clear speech conditions. Therefore, it was the purpose of this study to analyze changes in temporal and spectral characteristics of speech production that children aged 6-10 made in these experimental conditions. It is important to elucidate the clear speech profile of this population to better understand which adult-like clear speech modifications they make spontaneously and which modifications are still developing. Understanding these baselines will advance future studies that measure the impact of more explicit instructions and children's abilities to better accommodate their interlocutors, which is a critical component of children's pragmatic and speech-motor development.
Davidow, Jason H.; Bothe, Anne K.; Ye, Jun
2011-01-01
The most common way to induce fluency using rhythm requires persons who stutter to speak one syllable or one word to each beat of a metronome, but stuttering can also be eliminated when the stimulus is of a particular duration (e.g., 1 s). The present study examined stuttering frequency, speech production changes, and speech naturalness during rhythmic speech that alternated 1 s of reading with 1 s of silence. A repeated-measures design was used to compare data obtained during a control reading condition and during rhythmic reading in 10 persons who stutter (PWS) and 10 normally fluent controls. Ratings for speech naturalness were also gathered from naïve listeners. Results showed that mean vowel duration increased significantly, and the percentage of short phonated intervals decreased significantly, for both groups from the control to the experimental condition. Mean phonated interval length increased significantly for the fluent controls. Mean speech naturalness ratings during the experimental condition were approximately 7 on a 1–9 scale (1 = highly natural; 9 = highly unnatural), and these ratings were significantly correlated with vowel duration and phonated intervals for PWS. The findings indicate that PWS may be altering vocal fold vibration duration to obtain fluency during this rhythmic speech style, and that vocal fold vibration duration may have an impact on speech naturalness during rhythmic speech. Future investigations should examine speech production changes and speech naturalness during variations of this rhythmic condition. Educational Objectives: The reader will be able to: (1) describe changes (from a control reading condition) in speech production variables when alternating between 1 s of reading and 1 s of silence, (2) describe which rhythmic conditions have been found to sound and feel the most natural, (3) describe methodological issues for studies about alterations in speech production variables during fluency-inducing conditions, and (4) describe which fluency-inducing conditions have been shown to involve a reduction in short phonated intervals. PMID:21664528
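A minimal sketch of the phonated-interval measure used above: durations of consecutive voiced frames, with the share of short intervals reported. The 10-ms frame size and the 30-150 ms 'short' range are assumptions for illustration; studies typically calibrate these choices per speaker.

    def phonated_intervals(voiced, frame_ms=10):
        """voiced: per-frame booleans; returns voiced-run durations in ms."""
        intervals, run = [], 0
        for v in voiced:
            if v:
                run += 1
            elif run:
                intervals.append(run * frame_ms)
                run = 0
        if run:
            intervals.append(run * frame_ms)
        return intervals

    track = [0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0]
    pis = phonated_intervals(track)
    short = [d for d in pis if 30 <= d <= 150]
    print(pis, "-> %.0f%% short" % (100.0 * len(short) / len(pis)))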
Age and Function Differences in Shared Task Performance: Walking and Talking
ERIC Educational Resources Information Center
Williams, Kathleen; Hinton, Virginia A.; Bories, Tamara; Kovacs, Christopher R.
2006-01-01
Less is known about the effects of normal aging on speech output than other motor actions, because studies of communication integrity have focused on voice production and linguistic parameters rather than speech production characteristics. Studies investigating speech production in older adults have reported increased syllable duration (Slawinski,…
A Generative Model of Speech Production in Broca’s and Wernicke’s Areas
Price, Cathy J.; Crinion, Jenny T.; MacSweeney, Mairéad
2011-01-01
Speech production involves the generation of an auditory signal from the articulators and vocal tract. When the intended auditory signal does not match the produced sounds, subsequent articulatory commands can be adjusted to reduce the difference between the intended and produced sounds. This requires an internal model of the intended speech output that can be compared to the produced speech. The aim of this functional imaging study was to identify brain activation related to the internal model of speech production after activation related to vocalization, auditory feedback, and movement in the articulators had been controlled. There were four conditions: silent articulation of speech, non-speech mouth movements, finger tapping, and visual fixation. In the speech conditions, participants produced the mouth movements associated with the words “one” and “three.” We eliminated auditory feedback from the spoken output by instructing participants to articulate these words without producing any sound. The non-speech mouth movement conditions involved lip pursing and tongue protrusions to control for movement in the articulators. The main difference between our speech and non-speech mouth movement conditions is that prior experience producing speech sounds leads to the automatic and covert generation of auditory and phonological associations that may play a role in predicting auditory feedback. We found that, relative to non-speech mouth movements, silent speech activated Broca’s area in the left dorsal pars opercularis and Wernicke’s area in the left posterior superior temporal sulcus. We discuss these results in the context of a generative model of speech production and propose that Broca’s and Wernicke’s areas may be involved in predicting the speech output that follows articulation. These predictions could provide a mechanism by which rapid movement of the articulators is precisely matched to the intended speech outputs during future articulations. PMID:21954392
Guo, Zhiqiang; Wu, Xiuqin; Li, Weifeng; Jones, Jeffery A; Yan, Nan; Sheft, Stanley; Liu, Peng; Liu, Hanjun
2017-10-25
Although working memory (WM) is considered as an emergent property of the speech perception and production systems, the role of WM in sensorimotor integration during speech processing is largely unknown. We conducted two event-related potential experiments with female and male young adults to investigate the contribution of WM to the neurobehavioural processing of altered auditory feedback during vocal production. A delayed match-to-sample task, which required participants to indicate whether the pitch feedback perturbations they heard during vocalizations in test and sample sequences matched, elicited significantly larger vocal compensations, larger N1 responses in the left middle and superior temporal gyrus, and smaller P2 responses in the left middle and superior temporal gyrus, inferior parietal lobule, somatosensory cortex, right inferior frontal gyrus, and insula compared with a control task that did not require memory retention of the sequence of pitch perturbations. On the other hand, participants who underwent extensive auditory WM training produced suppressed vocal compensations that were correlated with improved auditory WM capacity, and enhanced P2 responses in the left middle frontal gyrus, inferior parietal lobule, right inferior frontal gyrus, and insula that were predicted by pretraining auditory WM capacity. These findings indicate that WM can enhance the perception of voice auditory feedback errors while inhibiting compensatory vocal behavior to prevent voice control from being excessively influenced by auditory feedback. This study provides the first evidence that auditory-motor integration for voice control can be modulated by top-down influences arising from WM, rather than modulated exclusively by bottom-up and automatic processes. SIGNIFICANCE STATEMENT One outstanding question that remains unsolved in speech motor control is how the mismatch between predicted and actual voice auditory feedback is detected and corrected. The present study provides two lines of converging evidence, for the first time, that working memory can not only enhance the perception of vocal feedback errors but also exert inhibitory control over vocal motor behavior. These findings represent a major advance in our understanding of the top-down modulatory mechanisms that support the detection and correction of prediction-feedback mismatches during sensorimotor control of speech production driven by working memory. Rather than being an exclusively bottom-up and automatic process, auditory-motor integration for voice control can be modulated by top-down influences arising from working memory. Copyright © 2017 the authors.
Neural evidence for predictive coding in auditory cortex during speech production.
Okada, Kayoko; Matchin, William; Hickok, Gregory
2018-02-01
Recent models of speech production suggest that motor commands generate forward predictions of the auditory consequences of those commands, that these forward predictions can be used to monitor and correct speech output, and that this system is hierarchically organized (Hickok, Houde, & Rong, Neuron, 69(3), 407-422, 2011; Pickering & Garrod, Behavioral and Brain Sciences, 36(4), 329-347, 2013). Recent psycholinguistic research has shown that internally generated speech (i.e., imagined speech) produces different types of errors than does overt speech (Oppenheim & Dell, Cognition, 106(1), 528-537, 2008; Oppenheim & Dell, Memory & Cognition, 38(8), 1147-1160, 2010). These studies suggest that articulated speech might involve predictive coding at additional levels than imagined speech. The current fMRI experiment investigates neural evidence of predictive coding in speech production. Twenty-four participants from UC Irvine were recruited for the study. Participants were scanned while they were visually presented with a sequence of words that they reproduced in sync with a visual metronome. On each trial, they were cued to either silently articulate the sequence or to imagine the sequence without overt articulation. As expected, silent articulation and imagined speech both engaged a left hemisphere network previously implicated in speech production. A contrast of silent articulation with imagined speech revealed greater activation for articulated speech in inferior frontal cortex, premotor cortex and the insula in the left hemisphere, consistent with greater articulatory load. Although both conditions were silent, this contrast also produced significantly greater activation in auditory cortex in dorsal superior temporal gyrus in both hemispheres. We suggest that these activations reflect forward predictions arising from additional levels of the perceptual/motor hierarchy that are involved in monitoring the intended speech output.
Smiljanić, Rajka; Bradlow, Ann R.
2011-01-01
This study investigated how native language background interacts with speaking style adaptations in determining levels of speech intelligibility. The aim was to explore whether native and high proficiency non-native listeners benefit similarly from native and non-native clear speech adjustments. The sentence-in-noise perception results revealed that fluent non-native listeners gained a large clear speech benefit from native clear speech modifications. Furthermore, proficient non-native talkers in this study implemented conversational-to-clear speaking style modifications in their second language (L2) that resulted in significant intelligibility gain for both native and non-native listeners. The results of the accentedness ratings obtained for native and non-native conversational and clear speech sentences showed that while intelligibility was improved, the presence of foreign accent remained constant in both speaking styles. This suggests that objective intelligibility and subjective accentedness are two independent dimensions of non-native speech. Overall, these results provide strong evidence that greater experience in L2 processing leads to improved intelligibility in both production and perception domains. These results also demonstrated that speaking style adaptations along with less signal distortion can contribute significantly towards successful native and non-native interactions. PMID:22225056
Acoustic evidence for phonologically mismatched speech errors.
Gormley, Andrea
2015-04-01
Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated or mismatch errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. Such errors could arise during the processing of phonological rules, or they could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.
Language-Specific Developmental Differences in Speech Production: A Cross-Language Acoustic Study
ERIC Educational Resources Information Center
Li, Fangfang
2012-01-01
Speech productions of 40 English- and 40 Japanese-speaking children (aged 2-5) were examined and compared with the speech produced by 20 adult speakers (10 speakers per language). Participants were recorded while repeating words that began with "s" and "sh" sounds. Clear language-specific patterns in adults' speech were found,…
The Importance of Production Frequency in Therapy for Childhood Apraxia of Speech
ERIC Educational Resources Information Center
Edeal, Denice Michelle; Gildersleeve-Neumann, Christina Elke
2011-01-01
Purpose: This study explores the importance of production frequency during speech therapy to determine whether more practice of speech targets leads to increased performance within a treatment session, as well as to motor learning, in the form of generalization to untrained words. Method: Two children with childhood apraxia of speech were treated…
Brain-to-text: decoding spoken phrases from phone representations in the brain
Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja
2015-01-01
It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech. PMID:26124702
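The word and phone error rates reported above are standard ASR metrics: the edit (Levenshtein) distance between reference and hypothesis divided by reference length. A minimal sketch, with invented example sentences:

    def error_rate(reference, hypothesis):
        """Word error rate: edit distance over reference length."""
        ref, hyp = reference.split(), hypothesis.split()
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[-1][-1] / len(ref)

    print(error_rate("the quick brown fox", "the quick brow fox jumps"))  # 0.5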
Vuolo, Janet; Goffman, Lisa
2017-01-01
This exploratory treatment study used phonetic transcription and speech kinematics to examine changes in segmental and articulatory variability. Nine children, aged 4 to 8 years, served as participants, including two with childhood apraxia of speech (CAS), five with speech sound disorder (SSD), and two who were typically developing (TD). Children practised producing agent + action phrases in an imitation task (low linguistic load) and a retrieval task (high linguistic load) over five sessions. In the imitation task in session one, both participants with CAS showed high degrees of segmental and articulatory variability. After five sessions, imitation practice resulted in increased articulatory variability for five participants. Retrieval practice resulted in decreased articulatory variability in three participants with SSD. These results suggest that short-term speech production practice in rote imitation disrupts articulatory control in children with and without CAS. In contrast, tasks that require linguistic processing may scaffold learning for children with SSD but not CAS. PMID:27960554
Widening the temporal window: Processing support in the treatment of aphasic language production
Linebarger, Marcia; McCall, Denise; Virata, Telana; Berndt, Rita Sloan
2007-01-01
Investigations of language processing in aphasia have increasingly implicated performance factors such as slowed activation and/or rapid decay of linguistic information. This approach is supported by studies utilizing a communication system (SentenceShaper™) which functions as a “processing prosthesis.” The system may reduce the impact of processing limitations by allowing repeated refreshing of working memory and by increasing the opportunity for aphasic subjects to monitor their own speech. Some aphasic subjects are able to produce markedly more structured speech on the system than they are able to produce spontaneously, and periods of largely independent home use of SentenceShaper have been linked to treatment effects, that is, to gains in speech produced without the use of the system. The purpose of the current study was to follow up on these studies with a new group of subjects. A second goal was to determine whether repeated, unassisted elicitations of the same narratives at baseline would give rise to practice effects, which could undermine claims for the efficacy of the system. PMID:17069883
On the Evolution of Human Language.
ERIC Educational Resources Information Center
Lieberman, Philip
Human linguistic ability depends, in part, on the gradual evolution of man's supralaryngeal vocal tract. The anatomic basis of human speech production is the result of a long evolutionary process in which the Darwinian process of natural selection acted to retain mutations. For auditory perception, the listener operates in terms of the acoustic…
Reconciling phonological neighborhood effects in speech production through single trial analysis.
Sadat, Jasmin; Martin, Clara D; Costa, Albert; Alario, F-Xavier
2014-02-01
A crucial step for understanding how lexical knowledge is represented is to describe the relative similarity of lexical items, and how it influences language processing. Previous studies of the effects of form similarity on word production have reported conflicting results, notably within and across languages. The aim of the present study was to clarify this empirical issue to provide specific constraints for theoretical models of language production. We investigated the role of phonological neighborhood density in a large-scale picture naming experiment using fine-grained statistical models. The results showed that increasing phonological neighborhood density has a detrimental effect on naming latencies, and re-analyses of independently obtained data sets provide supplementary evidence for this effect. Finally, we reviewed a large body of evidence concerning phonological neighborhood density effects in word production, and discussed the occurrence of facilitatory and inhibitory effects in accuracy measures. The overall pattern shows that phonological neighborhood generates two opposite forces, one facilitatory and one inhibitory. In cases where speech production is disrupted (e.g. certain aphasic symptoms), the facilitatory component may emerge, but inhibitory processes dominate in efficient naming by healthy speakers. These findings are difficult to accommodate in terms of monitoring processes, but can be explained within interactive activation accounts combining phonological facilitation and lexical competition. Copyright © 2013 Elsevier Inc. All rights reserved.
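A hedged sketch of the kind of single-trial analysis described above: naming latencies regressed on phonological neighborhood density with a random intercept per participant. The data are synthetic and the model structure is an assumption; the study's actual predictors and covariates may differ.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 400
    subj = rng.integers(0, 20, n)                        # 20 hypothetical speakers
    data = pd.DataFrame({"subject": subj,
                         "density": rng.integers(1, 30, n)})  # neighbor counts
    # Simulate a small inhibitory effect (+2 ms per neighbor) plus subject offsets.
    data["rt"] = (700 + 2.0 * data["density"]
                  + rng.normal(0, 30, 20)[subj]
                  + rng.normal(0, 60, n))

    fit = smf.mixedlm("rt ~ density", data, groups=data["subject"]).fit()
    print(fit.params["density"])                         # slope, expected near +2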
Orthographic vs. Phonologic Syllables in Handwriting Production
ERIC Educational Resources Information Center
Kandel, Sonia; Herault, Lucie; Grosjacques, Geraldine; Lambert, Eric; Fayol, Michel
2009-01-01
French children program the words they write syllable by syllable. We examined whether the syllable the children use to segment words is determined phonologically (i.e., is derived from speech production processes) or orthographically. Third, fourth, and fifth graders wrote on a digitiser words that were mono-syllables phonologically (e.g.…
The Role of Writing Pedagogy in Vocabulary Improvement
ERIC Educational Resources Information Center
Mirhassani, Seyyed Akbar; Samar, Reza Ghafar; Fattahipoor, Majid
2006-01-01
To improve and activate the vocabulary of EFL learners, an alternative to the common advice of trying to use new words in speech can be considered. Process and product writing are two quite different methodologies in writing pedagogy, so it is of interest to find which holds more promise for vocabulary improvement. Product writing pedagogy encompasses…
Munson, Benjamin; Johnson, Julie M.; Edwards, Jan
2013-01-01
Purpose: This study examined whether experienced speech-language pathologists differ from inexperienced people in their perception of phonetic detail in children's speech. Method: Convenience samples comprising 21 experienced speech-language pathologists and 21 inexperienced listeners participated in a series of tasks in which they made visual-analog scale (VAS) ratings of children's natural productions of target /s/-/θ/, /t/-/k/, and /d/-/ɡ/ in word-initial position. Listeners rated the perceptual distance between individual productions and ideal productions. Results: The experienced listeners' ratings differed from inexperienced listeners' in four ways: they had higher intra-rater reliability, they showed less bias toward a more frequent sound, their ratings were more closely related to the acoustic characteristics of the children's speech, and their responses were related to a different set of predictor variables. Conclusions: Results suggest that experience working as a speech-language pathologist leads to better perception of phonetic detail in children's speech. Limitations and future research are discussed. PMID:22230182
The somatotopy of speech: Phonation and articulation in the human motor cortex
Brown, Steven; Laird, Angela R.; Pfordresher, Peter Q.; Thelen, Sarah M.; Turkeltaub, Peter; Liotti, Mario
2010-01-01
A sizable literature on the neuroimaging of speech production has reliably shown activations in the orofacial region of the primary motor cortex. These activations have invariably been interpreted as reflecting “mouth” functioning and thus articulation. We used functional magnetic resonance imaging to compare an overt speech task with tongue movement, lip movement, and vowel phonation. The results showed that the strongest motor activation for speech was the somatotopic larynx area of the motor cortex, thus reflecting the significant contribution of phonation to speech production. In order to analyze further the phonatory component of speech, we performed a voxel-based meta-analysis of neuroimaging studies of syllable-singing (11 studies) and compared the results with a previously-published meta-analysis of oral reading (11 studies), showing again a strong overlap in the larynx motor area. Overall, these findings highlight the under-recognized presence of phonation in imaging studies of speech production, and support the role of the larynx motor cortex in mediating the “melodicity” of speech. PMID:19162389
Hinterhuber, Hartmann
2007-01-01
In both his The Psychopathology of Everyday Life and his Lectures, Sigmund Freud derived the terms unconscious, preconscious and conscious particularly from slips in speech, slips in reading and forgetfulness. In these slips, Freud recognised parallels to dreams, and in the works mentioned he analysed them in depth as expressions of mental motivation. In those papers, Sigmund Freud paid tribute to Rudolf Meringer and Carl Mayer's study, published in 1895. Meringer and Mayer identified reversals and rearrangements of whole words, syllables or sounds, along with pre-tones (anticipations) and echoes, word contaminations and word substitutions, as the phenomena responsible for slips of the tongue. The present work demonstrates how passionately these three scientists defended their respective standpoints in this controversy. For modern psycholinguistics and the psychology of language, speech errors are always an expression of a momentary malfunction of the human speech production system: for the cognitive process of speech production, slips of the tongue offer an insight into speech processing. Pre-tones and echoes, serialization errors, as Meringer and Mayer recognised, represent the vast majority of slips of the tongue; they do not reveal any hidden meaning. With lexical-semantic slips of the tongue, however, the question of mental motivation is admissible. This short paper is a sign of appreciation and gratitude: firstly, a modest birthday gift for Sigmund Freud; secondly, homage to Carl Mayer, who influenced generations of neurologists in his 40 years of chairing the Psychiatric-Neurological Clinic in Innsbruck, so that Hans Ganner rightly spoke of a "Carl Mayer School". But lastly, this short study is also, and especially, a late recognition of Rudolf Meringer, the great Austrian linguist. The view an individual has concerning mental processes and the "topology of the psychic apparatus" is decisive as to the power of determination attached to the unconscious.
The Frame Constraint on Experimentally Elicited Speech Errors in Japanese.
Saito, Akie; Inoue, Tomoyoshi
2017-06-01
The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which operates separately from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the same position within a syllable or word. Most of the evidence for the effect comes from analyses of naturally occurring speech errors in Indo-European languages, and there are few studies examining the effect in experimentally elicited speech errors and in other languages. This study examined whether experimentally elicited sound errors in Japanese exhibit the syllable position effect. In Japanese, the sub-syllabic unit known as the "mora" is considered to be a basic sound unit in production. Results showed that the syllable position effect occurred in mora errors, suggesting that the frame constrains the ordering of sounds during speech production.
The minimal unit of phonological encoding: prosodic or lexical word.
Wheeldon, Linda R; Lahiri, Aditi
2002-09-01
Wheeldon and Lahiri (Journal of Memory and Language 37 (1997) 356) used a prepared speech production task (Sternberg, S., Monsell, S., Knoll, R. L., & Wright, C. E. (1978). The latency and duration of rapid movement sequences: comparisons of speech and typewriting. In G. E. Stelmach (Ed.), Information processing in motor control and learning (pp. 117-152). New York: Academic Press; Sternberg, S., Wright, C. E., Knoll, R. L., & Monsell, S. (1980). Motor programs in rapid speech: additional evidence. In R. A. Cole (Ed.), The perception and production of fluent speech (pp. 507-534). Hillsdale, NJ: Erlbaum) to demonstrate that the latency to articulate a sentence is a function of the number of phonological words it comprises. Latencies for the sentence [Ik zoek het] [water] 'I seek the water' were shorter than latencies for sentences like [Ik zoek] [vers] [water] 'I seek fresh water'. We extend this research by examining the prepared production of utterances containing phonological words that are less than a lexical word in length. Dutch compounds (e.g. ooglid 'eyelid') form a single morphosyntactic word and a phonological word, which in turn includes two phonological words. We compare their prepared production latencies to those of syntactic phrases consisting of an adjective and a noun (e.g. oud lid 'old member'), which comprise two morphosyntactic and two phonological words, and to morphologically simple words (e.g. orgel 'organ'), which comprise one morphosyntactic and one phonological word. Our findings demonstrate that the effect is limited to phrasal-level phonological words, suggesting that production models need to make a distinction between lexical and phrasal phonology.
Noise on, voicing off: Speech perception deficits in children with specific language impairment.
Ziegler, Johannes C; Pech-Georgel, Catherine; George, Florence; Lorenzi, Christian
2011-11-01
Speech perception of four phonetic categories (voicing, place, manner, and nasality) was investigated in children with specific language impairment (SLI) (n=20) and age-matched controls (n=19) in quiet and various noise conditions using an AXB two-alternative forced-choice paradigm. Children with SLI exhibited robust speech perception deficits in silence, stationary noise, and amplitude-modulated noise. Comparable deficits were obtained for fast, intermediate, and slow modulation rates, and this speaks against the various temporal processing accounts of SLI. Children with SLI exhibited normal "masking release" effects (i.e., better performance in fluctuating noise than in stationary noise), again suggesting relatively spared spectral and temporal auditory resolution. In terms of phonetic categories, voicing was more affected than place, manner, or nasality. The specific nature of this voicing deficit is hard to explain with general processing impairments in attention or memory. Finally, speech perception in noise correlated with an oral language component but not with either a memory or IQ component, and it accounted for unique variance beyond IQ and low-level auditory perception. In sum, poor speech perception seems to be one of the primary deficits in children with SLI that might explain poor phonological development, impaired word production, and poor word comprehension. Copyright © 2011 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Ashtiani, Farshid Tayari; Zafarghandi, Amir Mahdavi
2015-01-01
The present study was an attempt to investigate the impact of English verbal songs on connected speech aspects of adult English learners' speech production. 40 participants were selected based on the results of their performance in a piloted and validated version of NELSON test given to 60 intermediate English learners in a language institute in…
ERIC Educational Resources Information Center
Choe, Wook Kyung
2013-01-01
The current dissertation represents one of the first systematic studies of the distribution of speech errors within supralexical prosodic units. Four experiments were conducted to gain insight into the specific role of these units in speech planning and production. The first experiment focused on errors in adult English. These were found to be…
Lexical effects on speech production and intelligibility in Parkinson's disease
NASA Astrophysics Data System (ADS)
Chiu, Yi-Fang
Individuals with Parkinson's disease (PD) often have speech deficits that lead to reduced speech intelligibility. Previous research provides a rich database regarding the articulatory deficits associated with PD including restricted vowel space (Skodda, Visser, & Schlegel, 2011) and flatter formant transitions (Tjaden & Wilding, 2004; Walsh & Smith, 2012). However, few studies consider the effect of higher level structural variables of word usage frequency and the number of similar sounding words (i.e. neighborhood density) on lower level articulation or on listeners' perception of dysarthric speech. The purpose of the study is to examine the interaction of lexical properties and speech articulation as measured acoustically in speakers with PD and healthy controls (HC) and the effect of lexical properties on the perception of their speech. Individuals diagnosed with PD and age-matched healthy controls read sentences with words that varied in word frequency and neighborhood density. Acoustic analysis was performed to compare second formant transitions in diphthongs, an indicator of the dynamics of tongue movement during speech production, across different lexical characteristics. Young listeners transcribed the spoken sentences and the transcription accuracy was compared across lexical conditions. The acoustic results indicate that both PD and HC speakers adjusted their articulation based on lexical properties but the PD group had significant reductions in second formant transitions compared to HC. Both groups of speakers increased second formant transitions for words with low frequency and low density, but the lexical effect is diphthong dependent. The change in second formant slope was limited in the PD group when the required formant movement for the diphthong is small. The data from listeners' perception of the speech by PD and HC show that listeners identified high frequency words with greater accuracy suggesting the use of lexical knowledge during the recognition process. The relationship between acoustic results and perceptual accuracy is limited in this study suggesting that listeners incorporate acoustic and non-acoustic information to maximize speech intelligibility.
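The second-formant transition measure described here is, at its core, the slope of an F2 track over the course of a diphthong. A hypothetical sketch with invented formant values:

```python
# Sketch of an F2 transition slope: the linear-regression slope of a
# second-formant track (Hz) over time (s). The track below is made up;
# flatter slopes would correspond to the reduced transitions reported
# for the PD group.
import numpy as np

def f2_slope(times_s, f2_hz):
    """Least-squares slope (Hz/s) of an F2 track across a diphthong."""
    slope, _intercept = np.polyfit(times_s, f2_hz, deg=1)
    return slope

rng = np.random.default_rng(5)
t = np.linspace(0.0, 0.15, 16)                  # 150 ms diphthong, 16 frames
f2 = 1200 + 4000 * t + rng.normal(0, 30, 16)    # rising F2 track (invented)
print(f"F2 slope: {f2_slope(t, f2):.0f} Hz/s")  # ~ 4000 Hz/s
```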
Phonological Feature Repetition Suppression in the Left Inferior Frontal Gyrus.
Okada, Kayoko; Matchin, William; Hickok, Gregory
2018-06-07
Models of speech production posit a role for the motor system, predominantly the posterior inferior frontal gyrus, in encoding complex phonological representations for speech production, at the phonemic, syllable, and word levels [Roelofs, A. A dorsal-pathway account of aphasic language production: The WEAVER++/ARC model. Cortex, 59(Suppl. C), 33-48, 2014; Hickok, G. Computational neuroanatomy of speech production. Nature Reviews Neuroscience, 13, 135-145, 2012; Guenther, F. H. Cortical interactions underlying the production of speech sounds. Journal of Communication Disorders, 39, 350-365, 2006]. However, phonological theory posits subphonemic units of representation, namely phonological features [Chomsky, N., & Halle, M. The sound pattern of English, 1968; Jakobson, R., Fant, G., & Halle, M. Preliminaries to speech analysis. The distinctive features and their correlates. Cambridge, MA: MIT Press, 1951], that specify independent articulatory parameters of speech sounds, such as place and manner of articulation. Therefore, motor brain systems may also incorporate phonological features into speech production planning units. Here, we add support for such a role with an fMRI experiment of word sequence production using a phonemic similarity manipulation. We adapted and modified the experimental paradigm of Oppenheim and Dell [Oppenheim, G. M., & Dell, G. S. Inner speech slips exhibit lexical bias, but not the phonemic similarity effect. Cognition, 106, 528-537, 2008; Oppenheim, G. M., & Dell, G. S. Motor movement matters: The flexible abstractness of inner speech. Memory & Cognition, 38, 1147-1160, 2010]. Participants silently articulated words cued by sequential visual presentation that varied in degree of phonological feature overlap in consonant onset position: high overlap (two shared phonological features; e.g., /r/ and /l/) or low overlap (one shared phonological feature, e.g., /r/ and /b/). We found a significant repetition suppression effect in the left posterior inferior frontal gyrus, with increased activation for phonologically dissimilar words compared with similar words. These results suggest that phonemes, particularly phonological features, are part of the planning units of the motor speech system.
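The phonemic similarity manipulation reduces to counting shared phonological features between onset consonants. A toy illustration with a deliberately simplified three-feature table; the study's actual feature coding may differ:

```python
# Counting shared phonological features (place, manner, voicing) between
# onset consonants. The feature assignments are a simplification chosen
# so the toy counts mirror the high/low overlap conditions above.
features = {
    "r": {"place": "postalveolar", "manner": "approximant", "voice": True},
    "l": {"place": "alveolar",     "manner": "approximant", "voice": True},
    "b": {"place": "bilabial",     "manner": "stop",        "voice": True},
}

def overlap(c1, c2):
    return sum(features[c1][k] == features[c2][k] for k in features[c1])

print(overlap("r", "l"))  # -> 2 (high overlap: manner and voicing)
print(overlap("r", "b"))  # -> 1 (low overlap: voicing only)
```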
Neurophysiological Influence of Musical Training on Speech Perception
Shahin, Antoine J.
2011-01-01
Does musical training affect our perception of speech? For example, does learning to play a musical instrument modify the neural circuitry for auditory processing in a way that improves one's ability to perceive speech more clearly in noisy environments? If so, can speech perception in individuals with hearing loss (HL), who struggle in noisy situations, benefit from musical training? While music and speech exhibit some specialization in neural processing, there is evidence suggesting that skills acquired through musical training for specific acoustical processes may transfer to, and thereby improve, speech perception. The neurophysiological mechanisms underlying the influence of musical training on speech processing and the extent of this influence remains a rich area to be explored. A prerequisite for such transfer is the facilitation of greater neurophysiological overlap between speech and music processing following musical training. This review first establishes a neurophysiological link between musical training and speech perception, and subsequently provides further hypotheses on the neurophysiological implications of musical training on speech perception in adverse acoustical environments and in individuals with HL. PMID:21716639
Skipper, Jeremy I.; van Wassenhove, Virginie; Nusbaum, Howard C.; Small, Steven L.
2009-01-01
Observing a speaker’s mouth profoundly influences speech perception. For example, listeners perceive an “illusory” “ta” when the video of a face producing /ka/ is dubbed onto an audio /pa/. Here, we show how cortical areas supporting speech production mediate this illusory percept and audiovisual (AV) speech perception more generally. Specifically, cortical activity during AV speech perception occurs in many of the same areas that are active during speech production. We find that different perceptions of the same syllable and the perception of different syllables are associated with different distributions of activity in frontal motor areas involved in speech production. Activity patterns in these frontal motor areas resulting from the illusory “ta” percept are more similar to the activity patterns evoked by AV/ta/ than they are to patterns evoked by AV/pa/ or AV/ka/. In contrast to the activity in frontal motor areas, stimulus-evoked activity for the illusory “ta” in auditory and somatosensory areas and visual areas initially resembles activity evoked by AV/pa/ and AV/ka/, respectively. Ultimately, though, activity in these regions comes to resemble activity evoked by AV/ta/. Together, these results suggest that AV speech elicits in the listener a motor plan for the production of the phoneme that the speaker might have been attempting to produce, and that feedback in the form of efference copy from the motor system ultimately influences the phonetic interpretation. PMID:17218482
Burnett, Greg C.; Holzrichter, John F.; Ng, Lawrence C.
2002-01-01
Low power EM waves are used to detect motions of vocal tract tissues of the human speech system before, during, and after voiced speech. A voiced excitation function is derived. The excitation function provides speech production information to enhance speech characterization and to enable noise removal from human speech.
NASA Astrophysics Data System (ADS)
Xing, Fangxu; Ye, Chuyang; Woo, Jonghye; Stone, Maureen; Prince, Jerry
2015-03-01
The human tongue is composed of multiple internal muscles that work collaboratively during the production of speech. Assessment of muscle mechanics can help understand the creation of tongue motion, interpret clinical observations, and predict surgical outcomes. Although various methods have been proposed for computing the tongue's motion, associating motion with muscle activity in an interdigitated fiber framework has not been studied. In this work, we aim to develop a method that reveals different tongue muscles' activities in different time phases during speech. We use four-dimensional tagged magnetic resonance (MR) images and static high-resolution MR images to obtain tongue motion and muscle anatomy, respectively. Then we compute strain tensors and local tissue compression along the muscle fiber directions in order to reveal their shortening pattern. This process relies on the support of multiple image analysis methods, including super-resolution volume reconstruction from MR image slices, segmentation of internal muscles, tracking the incompressible motion of tissue points using tagged images, propagation of muscle fiber directions over time, and calculation of strain in the line of action. We evaluated the method on a control subject and two postglossectomy patients in a controlled speech task. The normal subject's tongue muscle activity shows high correspondence with the production of speech at different time instants, while both patients' muscle activities show different patterns from the control due to their resected tongues. This method shows potential for relating overall tongue motion to particular muscle activity, which may provide novel information for future clinical and scientific studies.
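Projecting strain onto fiber directions is a standard continuum-mechanics step: form the Green-Lagrange strain from the deformation gradient and take its quadratic form along a unit fiber vector. A minimal sketch with a made-up deformation:

```python
# Illustrative strain-along-fiber computation: given a deformation
# gradient F from tracked motion, form the Green-Lagrange strain
# E = 0.5 (F^T F - I) and project it onto a unit fiber vector.
# Negative values indicate shortening (compression) along the fiber.
import numpy as np

def strain_along_fiber(F, fiber):
    E = 0.5 * (F.T @ F - np.eye(3))
    f = fiber / np.linalg.norm(fiber)
    return f @ E @ f

# Toy deformation: 10% shortening along x, volume-preserving bulging in y/z
F = np.diag([0.90, 1.054, 1.054])
print(strain_along_fiber(F, np.array([1.0, 0.0, 0.0])))  # ~ -0.095
```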
Impairment of speech production predicted by lesion load of the left arcuate fasciculus.
Marchina, Sarah; Zhu, Lin L; Norton, Andrea; Zipse, Lauryn; Wan, Catherine Y; Schlaug, Gottfried
2011-08-01
Previous studies have suggested that patients' potential for poststroke language recovery is related to lesion size; however, lesion location may also be of importance, particularly when fiber tracts that are critical to the sensorimotor mapping of sounds for articulation (e.g., the arcuate fasciculus) have been damaged. In this study, we tested the hypothesis that lesion loads of the arcuate fasciculus (i.e., the volume of the arcuate fasciculus affected by a patient's lesion) and of 2 other tracts involved in language processing (the extreme capsule and the uncinate fasciculus) are inversely related to the severity of speech production impairments in patients with stroke with aphasia. Thirty patients with chronic stroke with residual impairments in speech production underwent high-resolution anatomic MRI and a battery of cognitive and language tests. Impairment was assessed using 3 functional measures of spontaneous speech (e.g., rate, informativeness, and overall efficiency) as well as naming ability. To quantitatively analyze the relationship between impairment scores and lesion load along the 3 fiber tracts, we calculated tract-lesion overlap volumes for each patient using probabilistic maps of the tracts derived from diffusion tensor images of 10 age-matched healthy subjects. Regression analyses showed that arcuate fasciculus lesion load, but not extreme capsule or uncinate fasciculus lesion load or overall lesion size, significantly predicted rate, informativeness, and overall efficiency of speech as well as naming ability. A new variable, arcuate fasciculus lesion load, complements established voxel-based lesion mapping techniques and, in the future, may potentially be used to estimate impairment and recovery potential after stroke and refine inclusion criteria for experimental rehabilitation programs.
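The lesion-load variable is essentially an overlap integral between a binary lesion mask and a probabilistic tract map. A sketch under assumed array conventions; the array names, voxel size, and data are illustrative:

```python
# Sketch of the "lesion load" idea: the expected tract volume that a
# binary lesion mask overlaps, using a probabilistic tract map.
import numpy as np

def lesion_load_ml(lesion_mask, tract_prob, voxel_vol_ml=0.001):
    """Probability-weighted tract volume inside the lesion, in ml.

    lesion_mask: boolean array (True = lesioned voxel)
    tract_prob:  same-shaped array of tract probabilities in [0, 1]
    voxel_vol_ml: volume per voxel (0.001 ml assumes 1 mm isotropic voxels)
    """
    return float((tract_prob * lesion_mask).sum() * voxel_vol_ml)

rng = np.random.default_rng(1)
lesion = rng.random((64, 64, 32)) > 0.98                # sparse toy lesion
tract = np.clip(rng.normal(0.1, 0.2, lesion.shape), 0, 1)
print(f"Tract lesion load: {lesion_load_ml(lesion, tract):.2f} ml")
```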
Speech Intelligibility Predicted from Neural Entrainment of the Speech Envelope.
Vanthornhout, Jonas; Decruy, Lien; Wouters, Jan; Simon, Jonathan Z; Francart, Tom
2018-04-01
Speech intelligibility is currently measured by scoring how well a person can identify a speech signal. The results of such behavioral measures reflect neural processing of the speech signal, but are also influenced by language processing, motivation, and memory. Electrophysiological measures of hearing often give insight into the neural processing of sound. However, in most methods, non-speech stimuli are used, making it hard to relate the results to behavioral measures of speech intelligibility. The use of natural running speech as a stimulus in electrophysiological measures of hearing is a paradigm shift that makes it possible to bridge the gap between behavioral and electrophysiological measures. Here, by decoding the speech envelope from the electroencephalogram and correlating it with the stimulus envelope, we demonstrate an electrophysiological measure of neural processing of running speech. We show that behaviorally measured speech intelligibility is strongly correlated with our electrophysiological measure. Our results pave the way towards an objective and automatic way of assessing neural processing of speech presented through auditory prostheses, reducing confounds such as attention and cognitive capabilities. We anticipate that our electrophysiological measure will allow better differential diagnosis of the auditory system, and will allow the development of closed-loop auditory prostheses that automatically adapt to individual users.
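The core computation, reconstructing the envelope from EEG with a linear backward model and correlating it with the true envelope, can be sketched on simulated data; real pipelines additionally use time lags, ridge regularization, and cross-validation, and everything below is invented for illustration:

```python
# Minimal envelope-reconstruction sketch: a linear backward model maps
# EEG channels to the speech envelope; reconstruction accuracy is the
# Pearson correlation between decoded and actual envelopes.
import numpy as np

rng = np.random.default_rng(0)
n, n_ch = 2000, 8
# Crude positive "speech envelope": smoothed rectified noise
envelope = np.convolve(np.abs(rng.standard_normal(n)),
                       np.ones(20) / 20, mode="same")
# Simulated EEG: each channel tracks the envelope plus noise
eeg = np.outer(envelope, rng.standard_normal(n_ch)) \
      + 0.5 * rng.standard_normal((n, n_ch))

# Train the decoder on the first half, evaluate on the second half
train, test = slice(0, n // 2), slice(n // 2, n)
w, *_ = np.linalg.lstsq(eeg[train], envelope[train], rcond=None)
decoded = eeg[test] @ w
r = np.corrcoef(decoded, envelope[test])[0, 1]
print(f"reconstruction accuracy r = {r:.2f}")
```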
Development and evaluation of the LittlEARS® Early Speech Production Questionnaire - LEESPQ.
Wachtlin, Bianka; Brachmaier, Joanna; Amann, Edda; Hoffmann, Vanessa; Keilmann, Annerose
2017-03-01
Universal Newborn Hearing Screening programs, now instituted throughout the German-speaking countries, allow hearing loss to be detected and treated much earlier than ever before. With this earlier detection arises the need for tools fit for assessing the very early speech and language production development of today's younger (0-18 month old) children. We have created the LittlEARS® Early Speech Production Questionnaire with the aim of meeting this need. 600 questionnaires of the pilot version of the LittlEARS® Early Speech Production Questionnaire were distributed to parents via pediatricians' practices, day care centers, and personal contact. The completed questionnaires were statistically analyzed to determine their reliability, predictive accuracy, internal consistency, and the extent to which gender or unilingualism influenced a child's score. Further, a norm curve was generated to plot the children's expected increase in speech production ability with age. Analysis of the data from the 352/600 returned questionnaires revealed that scores on the LittlEARS® Early Speech Production Questionnaire correlate positively with a child's age, with older children scoring higher than younger children. Further, the questionnaire has high measurement reliability, high predictability, high unidimensionality of scale, and is not significantly biased by gender or uni-/multilingualism. A norm curve for expected development with age was created. The LittlEARS® Early Speech Production Questionnaire (LEESPQ) is a valid tool for assessing the most important milestones in the very early development of speech and language production of German-language children with normal hearing aged 0-18 months. The questionnaire is a potentially useful tool for long-term infant screening and follow-up testing, both for children with normal hearing and for those who use or would benefit from hearing devices. Copyright © 2017 Elsevier B.V. All rights reserved.
Cummine, Jacqueline; Cribben, Ivor; Luu, Connie; Kim, Esther; Bahktiari, Reyhaneh; Georgiou, George; Boliek, Carol A
2016-05-01
The neural circuitry associated with language processing is complex and dynamic. Graphical models are useful for studying complex neural networks as this method provides information about unique connectivity between regions within the context of the entire network of interest. Here, the authors explored the neural networks during covert reading to determine the role of feedforward and feedback loops in covert speech production. Brain activity of skilled adult readers was assessed in real word and pseudoword reading tasks with functional MRI (fMRI). The authors provide evidence for activity coherence in the feedforward system (inferior frontal gyrus-supplementary motor area) during real word reading and in the feedback system (supramarginal gyrus-precentral gyrus) during pseudoword reading. Graphical models provided evidence of an extensive, highly connected, neural network when individuals read real words that relied on coordination of the feedforward system. In contrast, when individuals read pseudowords the authors found a limited/restricted network that relied on coordination of the feedback system. Together, these results underscore the importance of considering multiple pathways and articulatory loops during language tasks and provide evidence for a print-to-speech neural network. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
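A graphical model of the kind described estimates unique (conditional) connectivity between regions from the inverse covariance of ROI time series, so an edge reflects the association between two regions given all others. A toy sketch with invented ROI labels and simulated data:

```python
# Partial correlations between ROI time series via the precision
# (inverse covariance) matrix: rho_ij = -P_ij / sqrt(P_ii * P_jj).
# ROI names and couplings are invented for illustration.
import numpy as np

rois = ["IFG", "SMA", "SMG", "PCG"]           # feedforward/feedback nodes
rng = np.random.default_rng(2)
ts = rng.standard_normal((200, len(rois)))    # time x ROI (toy data)
ts[:, 1] += 0.8 * ts[:, 0]                    # IFG-SMA coupling
ts[:, 3] += 0.8 * ts[:, 2]                    # SMG-PCG coupling

prec = np.linalg.inv(np.cov(ts, rowvar=False))
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
for i in range(len(rois)):
    for j in range(i + 1, len(rois)):
        print(f"{rois[i]}-{rois[j]}: {partial_corr[i, j]:+.2f}")
```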
Do perceived context pictures automatically activate their phonological code?
Jescheniak, Jörg D; Oppermann, Frank; Hantsch, Ansgar; Wagner, Valentin; Mädebach, Andreas; Schriefers, Herbert
2009-01-01
Morsella and Miozzo (Morsella, E., & Miozzo, M. (2002). Evidence for a cascade model of lexical access in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 555-563) have reported that the to-be-ignored context pictures become phonologically activated when participants name a target picture, and took this finding as support for cascaded models of lexical retrieval in speech production. In a replication and extension of their experiment in German, we failed to obtain priming effects from context pictures phonologically related to a to-be-named target picture. By contrast, corresponding context words (i.e., the names of the respective pictures) and the same context pictures, when used in an identity condition, did reliably facilitate the naming process. This pattern calls into question the generality of the claim advanced by Morsella and Miozzo that perceptual processing of pictures in the context of a naming task automatically leads to the activation of corresponding lexical-phonological codes.
Davidow, Jason H; Bothe, Anne K; Ye, Jun
2011-06-01
The most common way to induce fluency using rhythm requires persons who stutter to speak one syllable or one word to each beat of a metronome, but stuttering can also be eliminated when the stimulus is of a particular duration (e.g., 1 second [s]). The present study examined stuttering frequency, speech production changes, and speech naturalness during rhythmic speech that alternated 1s of reading with 1s of silence. A repeated-measures design was used to compare data obtained during a control reading condition and during rhythmic reading in 10 persons who stutter (PWS) and 10 normally fluent controls. Ratings for speech naturalness were also gathered from naïve listeners. Results showed that mean vowel duration increased significantly, and the percentage of short phonated intervals decreased significantly, for both groups from the control to the experimental condition. Mean phonated interval length increased significantly for the fluent controls. Mean speech naturalness ratings during the experimental condition were approximately "7" on a 1-9 scale (1=highly natural; 9=highly unnatural), and these ratings were significantly correlated with vowel duration and phonated intervals for PWS. The findings indicate that PWS may be altering vocal fold vibration duration to obtain fluency during this rhythmic speech style, and that vocal fold vibration duration may have an impact on speech naturalness during rhythmic speech. Future investigations should examine speech production changes and speech naturalness during variations of this rhythmic condition. The reader will be able to: (1) describe changes (from a control reading condition) in speech production variables when alternating between 1s of reading and 1s of silence, (2) describe which rhythmic conditions have been found to sound and feel the most natural, (3) describe methodological issues for studies about alterations in speech production variables during fluency-inducing conditions, and (4) describe which fluency-inducing conditions have been shown to involve a reduction in short phonated intervals. Copyright © 2011 Elsevier Inc. All rights reserved.
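Phonated intervals are simply the durations of contiguous voiced stretches, and the 30-100 ms band defines the "short" intervals reported above. A sketch on a made-up frame-wise voicing track; real systems derive voicing from accelerometer or electroglottograph signals rather than the bare threshold implied here:

```python
# Durations of contiguous voiced stretches, and the percentage falling
# in the short 30-100 ms range. The voicing track is invented.
import numpy as np

def phonated_intervals_ms(voiced, frame_ms=10):
    """Durations (ms) of runs of True in a frame-wise voicing array."""
    durations, run = [], 0
    for v in voiced:
        if v:
            run += 1
        elif run:
            durations.append(run * frame_ms)
            run = 0
    if run:
        durations.append(run * frame_ms)
    return np.array(durations)

rng = np.random.default_rng(3)
voiced = rng.random(500) > 0.4                 # toy frame-wise voicing
pi = phonated_intervals_ms(voiced)
short = ((pi >= 30) & (pi <= 100)).mean() * 100
print(f"{len(pi)} intervals, {short:.0f}% short (30-100 ms)")
```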
Asad, Areej Nimer; Purdy, Suzanne C; Ballard, Elaine; Fairgray, Liz; Bowen, Caroline
2018-04-27
In this descriptive study, phonological processes were examined in the speech of children aged 5;0-7;6 (years;months) with mild to profound hearing loss using hearing aids (HAs) and cochlear implants (CIs), in comparison to their peers. A second aim was to compare the phonological processes of HA and CI users. Children with hearing loss (CWHL, N = 25) were compared to children with normal hearing (CWNH, N = 30) with similar age, gender, linguistic, and socioeconomic backgrounds. Speech samples obtained from a list of 88 words, derived from three standardized speech tests, were analyzed using the CASALA (Computer Aided Speech and Language Analysis) program to evaluate participants' phonological systems, based on lax (a process appeared at least twice in the speech of at least two children) and strict (a process appeared at least five times in the speech of at least two children) counting criteria. Developmental phonological processes were eliminated in the speech of younger and older CWNH, while eleven developmental phonological processes persisted in the speech of both age groups of CWHL. CWHL showed a trend in age of elimination similar to that of CWNH, but at a slower rate. Children with HAs and CIs produced similar phonological processes. Final consonant deletion, weak syllable deletion, backing, and glottal replacement were present in the speech of HA users, affecting their overall speech intelligibility. Developmental and non-developmental phonological processes persist in the speech of children with mild to profound hearing loss compared to their peers with typical hearing. The findings indicate that it is important for clinicians to consider phonological assessment in pre-school CWHL and the use of evidence-based speech therapy in order to reduce non-developmental and non-age-appropriate developmental processes, thereby enhancing speech intelligibility. Copyright © 2018 Elsevier Inc. All rights reserved.
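The lax and strict criteria amount to a two-level tally: a process qualifies if it occurs at least k times in each of at least two children. A sketch with invented observations:

```python
# Keep a phonological process if it occurs at least `min_count` times
# in each of at least `min_children` children. Process names and
# observation data are illustrative.
from collections import Counter

def qualifying_processes(observations, min_count, min_children=2):
    """observations: list of (child_id, process_name) occurrences."""
    per_child = Counter(observations)  # (child, process) -> count
    children_meeting = Counter(
        proc for (child, proc), n in per_child.items() if n >= min_count)
    return {p for p, k in children_meeting.items() if k >= min_children}

obs = [("c1", "final consonant deletion")] * 5 + \
      [("c2", "final consonant deletion")] * 6 + \
      [("c1", "backing")] * 2 + [("c3", "backing")] * 2
print(qualifying_processes(obs, min_count=2))  # lax: both processes
print(qualifying_processes(obs, min_count=5))  # strict: deletion only
```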
What happens to the motor theory of perception when the motor system is damaged?
Stasenko, Alena; Garcea, Frank E; Mahon, Bradford Z
2013-09-01
Motor theories of perception posit that motor information is necessary for successful recognition of actions. Perhaps the most well known of this class of proposals is the motor theory of speech perception, which argues that speech recognition is fundamentally a process of identifying the articulatory gestures (i.e. motor representations) that were used to produce the speech signal. Here we review neuropsychological evidence from patients with damage to the motor system, in the context of motor theories of perception applied to both manual actions and speech. Motor theories of perception predict that patients with motor impairments will have impairments for action recognition. Contrary to that prediction, the available neuropsychological evidence indicates that recognition can be spared despite profound impairments to production. These data falsify strong forms of the motor theory of perception, and frame new questions about the dynamical interactions that govern how information is exchanged between input and output systems.
Effects of neurological damage on production of formulaic language
Sidtis, D.; Canterucci, G.; Katsnelson, D.
2014-01-01
Early studies reported preserved formulaic language in left-hemisphere-damaged subjects and reduced incidence of formulaic expressions in the conversational speech of stroke patients with right-hemispheric damage. Clinical observations suggest a possible role for subcortical nuclei as well. This study examined formulaic language in the spontaneous speech of stroke patients with left, right, or subcortical damage. Four subjects were interviewed and their speech samples compared to those of normal speakers. Raters classified formulaic expressions as speech formulae, fillers, sentence stems, and proper nouns. Results demonstrated that brain damage affected novel and formulaic language competence differently, with a significantly smaller proportion of formulaic expressions in subjects with right or subcortical damage compared to left-hemisphere-damaged or healthy speakers. These findings converge with previous studies that support the proposal of a right hemisphere/subcortical circuit in the management of formulaic expressions, based on a dual-process model of language incorporating novel and formulaic language use. PMID:19382014
Fridriksson, Julius; den Ouden, Dirk-Bart; Hillis, Argye E; Hickok, Gregory; Rorden, Chris; Basilakos, Alexandra; Yourganov, Grigori; Bonilha, Leonardo
2018-01-17
In most cases, aphasia is caused by strokes involving the left hemisphere, with more extensive damage typically being associated with more severe aphasia. The classical model of aphasia commonly adhered to in the Western world is the Wernicke-Lichtheim model. The model has been in existence for over a century, and classification of aphasic symptomatology continues to rely on it. However, far more detailed models of speech and language localization in the brain have been formulated. In this regard, the dual stream model of cortical brain organization proposed by Hickok and Poeppel is particularly influential. Their model describes two processing routes, a dorsal stream and a ventral stream, that roughly support speech production and speech comprehension, respectively, in normal subjects. Despite the strong influence of the dual stream model in current neuropsychological research, there has been relatively limited focus on explaining aphasic symptoms in the context of this model. Given that the dual stream model represents a more nuanced picture of cortical speech and language organization, cortical damage that causes aphasic impairment should map clearly onto the dual processing streams. Here, we present a follow-up study to our previous work that used lesion data to reveal the anatomical boundaries of the dorsal and ventral streams supporting speech and language processing. Specifically, by emphasizing clinical measures, we examine the effect of cortical damage and disconnection involving the dorsal and ventral streams on aphasic impairment. The results reveal that measures of motor speech impairment mostly involve damage to the dorsal stream, whereas measures of impaired speech comprehension are more strongly associated with ventral stream involvement. Equally important, many clinical tests that target behaviours such as naming, speech repetition, or grammatical processing rely on interactions between the two streams. This latter finding explains why patients with seemingly disparate lesion locations often experience similar impairments on given subtests. Namely, these individuals' cortical damage, although dissimilar, affects a broad cortical network that plays a role in carrying out a given speech or language task. The current data suggest this is a more accurate characterization than ascribing responsibility for specific language deficits to specific lesion locations. © The Author(s) (2018). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Ward, Roslyn; Leitão, Suze; Strauss, Geoff
2014-08-01
This study evaluates perceptual changes in speech production accuracy in six children (3-11 years) with moderate-to-severe speech impairment associated with cerebral palsy before, during, and after participation in a motor-speech intervention program (Prompts for Restructuring Oral Muscular Phonetic Targets). An A1BCA2 single-subject research design was implemented. Subsequent to the baseline phase (phase A1), phase B targeted each participant's first intervention priority on the PROMPT motor-speech hierarchy. Phase C then targeted one level higher. Weekly speech probes were administered, containing trained and untrained words at the two levels of intervention, plus an additional level that served as a control goal. The speech probes were analysed for motor-speech movement parameters and perceptual accuracy. Analysis of the speech probe data showed that all participants recorded a statistically significant change. Between phases A1-B and B-C, 6/6 and 4/6 participants, respectively, recorded a statistically significant increase in performance on the motor-speech movement patterns targeted during that intervention phase. The preliminary data presented in this study contribute evidence supporting the use of a treatment approach aligned with dynamic systems theory to improve the motor-speech movement patterns and speech production accuracy in children with cerebral palsy.
Shriberg, Lawrence D; Strand, Edythe A; Fourakis, Marios; Jakielski, Kathy J; Hall, Sheryl D; Karlsson, Heather B; Mabie, Heather L; McSweeny, Jane L; Tilkens, Christie M; Wilson, David L
2017-04-14
Previous articles in this supplement described rationale for and development of the pause marker (PM), a diagnostic marker of childhood apraxia of speech (CAS), and studies supporting its validity and reliability. The present article assesses the theoretical coherence of the PM with speech processing deficits in CAS. PM and other scores were obtained for 264 participants in 6 groups: CAS in idiopathic, neurogenetic, and complex neurodevelopmental disorders; adult-onset apraxia of speech (AAS) consequent to stroke and primary progressive apraxia of speech; and idiopathic speech delay. Participants with CAS and AAS had significantly lower scores than typically speaking reference participants and speech delay controls on measures posited to assess representational and transcoding processes. Representational deficits differed between CAS and AAS groups, with support for both underspecified linguistic representations and memory/access deficits in CAS, but for only the latter in AAS. CAS-AAS similarities in the age-sex standardized percentages of occurrence of the most frequent type of inappropriate pauses (abrupt) and significant differences in the standardized occurrence of appropriate pauses were consistent with speech processing findings. Results support the hypotheses of core representational and transcoding speech processing deficits in CAS and theoretical coherence of the PM's pause-speech elements with these deficits.
On pure word deafness, temporal processing, and the left hemisphere.
Stefanatos, Gerry A; Gershkoff, Arthur; Madigan, Sean
2005-07-01
Pure word deafness (PWD) is a rare neurological syndrome characterized by severe difficulties in understanding and reproducing spoken language, with sparing of written language comprehension and speech production. The pathognomonic disturbance of auditory comprehension appears to be associated with a breakdown in processes involved in mapping auditory input to lexical representations of words, but the functional locus of this disturbance and the localization of the responsible lesion have long been disputed. We report here on a woman with PWD resulting from a circumscribed unilateral infarct involving the left superior temporal lobe who demonstrated significant problems processing transitional spectrotemporal cues in both speech and nonspeech sounds. On speech discrimination tasks, she exhibited poor differentiation of stop consonant-vowel syllables distinguished by voicing onset and brief formant frequency transitions. Isolated formant transitions could be reliably discriminated only at very long durations (> 200 ms). By contrast, click fusion threshold, which depends on millisecond-level resolution of brief auditory events, was normal. These results suggest that the problems with speech analysis in this case were not secondary to general constraints on auditory temporal resolution. Rather, they point to a disturbance of left hemisphere auditory mechanisms that preferentially analyze rapid spectrotemporal variations in frequency. The findings have important implications for our conceptualization of PWD and its subtypes.
Obstructive sleep apnea, seizures, and childhood apraxia of speech.
Caspari, Susan S; Strand, Edythe A; Kotagal, Suresh; Bergqvist, Christina
2008-06-01
Associations between obstructive sleep apnea and motor speech disorders in adults have been suggested, though little has been written about possible effects of sleep apnea on speech acquisition in children with motor speech disorders. This report details the medical and speech history of a nonverbal child with seizures and severe apraxia of speech. For 6 years, he made no functional gains in speech production, despite intensive speech therapy. After tonsillectomy for obstructive sleep apnea at age 6 years, he experienced a reduction in seizures and rapid growth in speech production. The findings support a relationship between obstructive sleep apnea and childhood apraxia of speech. The rather late diagnosis and treatment of obstructive sleep apnea, especially in light of what was such a life-altering outcome (gaining functional speech), has significant implications. Most speech sounds develop during ages 2-5 years, which is also the peak time of occurrence of adenotonsillar hypertrophy and childhood obstructive sleep apnea. Hence it is important to establish definitive diagnoses, and to consider early and more aggressive treatments for obstructive sleep apnea, in children with motor speech disorders.
Analysis of Acoustic Features in Speakers with Cognitive Disorders and Speech Impairments
NASA Astrophysics Data System (ADS)
Saz, Oscar; Simón, Javier; Rodríguez, W. Ricardo; Lleida, Eduardo; Vaquero, Carlos
2009-12-01
This work presents the results of an analysis of the acoustic features (formants and the three suprasegmental features: tone, intensity and duration) of vowel production in a group of 14 young speakers suffering from different kinds of speech impairments due to physical and cognitive disorders. A corpus of unimpaired children's speech is used to determine the reference values for these features in speakers without any kind of speech impairment within the same domain as the impaired speakers (57 isolated words). The signal processing to extract the formant and pitch values is based on a Linear Prediction Coefficients (LPC) analysis of the segments considered as vowels in a Hidden Markov Model (HMM) based Viterbi forced alignment. Intensity and duration are also derived from the outcome of the automated segmentation. As the main conclusion of the work, it is shown that intelligibility of the vowel production is lowered in impaired speakers even when the vowel is perceived as correct by human labelers. The decrease in intelligibility is due to a 30% increase in confusability in the formant map, a 50% reduction in the discriminative power in energy between stressed and unstressed vowels, and a 50% increase in the standard deviation of the length of the vowels. On the other hand, impaired speakers keep good control of tone in the production of stressed and unstressed vowels.
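The LPC step maps each vowel frame to an all-pole filter whose root angles approximate formant frequencies. A bare-bones autocorrelation-LPC sketch on a synthetic frame; the parameter choices are illustrative, not those of the study:

```python
# Autocorrelation LPC via a Toeplitz solve, then the angles of the
# prediction-polynomial roots give candidate formant frequencies.
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_formants(frame, fs, order=12):
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[np.imag(roots) > 0]           # keep one of each pair
    freqs = np.sort(np.angle(roots) * fs / (2 * np.pi))
    return freqs[freqs > 90]                    # drop near-DC roots

fs = 16000
t = np.arange(int(0.03 * fs)) / fs              # 30 ms synthetic "vowel"
frame = sum(np.exp(-t * 200) * np.sin(2 * np.pi * f * t)
            for f in (700, 1200, 2600))         # three damped resonances
frame *= np.hamming(len(frame))
print(lpc_formants(frame, fs))                  # ~ [700, 1200, 2600, ...]
```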
Donaldson, Morag L; Cooper, Lynn S M
2013-09-01
Young children's speech is typically more linguistically sophisticated than their writing. However, there are grounds for asking whether production of cohesive devices, such as verb-phrase anaphora (VPA), might represent an exception to this developmental pattern, as cohesive devices are generally more important in writing than in speech and so might be expected to be more frequent in children's writing than in their speech. The study reported herein aims to compare the frequency of children's production of VPA constructions (e.g., Mary is eating an apple and so is John) between a written and a spoken task. Forty-eight children participated from each of two age groups: 7-year-olds and 10-year-olds. All the children received both a spoken and a written sentence completion task designed to elicit production of VPA. Task order was counterbalanced. VPA production was significantly more frequent in speech than in writing and when the spoken task was presented first. Surprisingly, the 7-year-olds produced VPA constructions more frequently than the 10-year-olds. Despite the greater importance of cohesion in writing than in speech, children's production of VPA is similar to their production of most other aspects of language in that more sophisticated constructions are used more frequently in speech than in writing. Children's written production of cohesive devices could probably be enhanced by presenting spoken tasks immediately before written tasks. The lower frequency of VPA production in the older children may reflect syntactic priming effects or a belief that they should produce sentences that are as fully specified as possible. © 2012 The British Psychological Society.
Simonyan, Kristina; Fuertinger, Stefan
2015-04-01
Speech production is one of the most complex human behaviors. Although brain activation during speaking has been well investigated, our understanding of interactions between the brain regions and neural networks remains scarce. We combined seed-based interregional correlation analysis with graph theoretical analysis of functional MRI data during the resting state and sentence production in healthy subjects to investigate the interface and topology of functional networks originating from the key brain regions controlling speech, i.e., the laryngeal/orofacial motor cortex, inferior frontal and superior temporal gyri, supplementary motor area, cingulate cortex, putamen, and thalamus. During both resting and speaking, the interactions between these networks were bilaterally distributed and centered on the sensorimotor brain regions. However, speech production preferentially recruited the inferior parietal lobule (IPL) and cerebellum into the large-scale network, suggesting the importance of these regions in facilitation of the transition from the resting state to speaking. Furthermore, the cerebellum (lobule VI) was the most prominent region showing functional influences on speech-network integration and segregation. Although networks were bilaterally distributed, interregional connectivity during speaking was stronger in the left vs. right hemisphere, which may underlie a more homogeneous overlap between the examined networks in the left hemisphere. Among these, the laryngeal motor cortex (LMC) established a core network that fully overlapped with all other speech-related networks, determining the extent of network interactions. Our data demonstrate complex interactions of large-scale brain networks controlling speech production and point to the critical role of the LMC, IPL, and cerebellum in the formation of the speech production network. Copyright © 2015 the American Physiological Society.
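The graph-theoretical step can be caricatured as thresholding an interregional correlation matrix into a graph and ranking nodes by centrality to identify hub-like regions such as the LMC. A toy sketch, assuming networkx is available; all region labels and values are invented:

```python
# Threshold a (made-up) correlation matrix into a graph and rank
# regions by degree centrality to find hub-like nodes.
import numpy as np
import networkx as nx

regions = ["LMC", "IFG", "STG", "SMA", "IPL", "Cereb"]
rng = np.random.default_rng(4)
corr = rng.uniform(0.1, 0.9, (6, 6))
corr = (corr + corr.T) / 2                      # symmetric toy matrix
corr[0, 1:] = corr[1:, 0] = 0.85                # LMC strongly coupled

g = nx.Graph()
for i in range(6):
    for j in range(i + 1, 6):
        if corr[i, j] > 0.5:                    # arbitrary threshold
            g.add_edge(regions[i], regions[j], weight=corr[i, j])

centrality = nx.degree_centrality(g)
for node, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {c:.2f}")                   # LMC should rank first
```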
The Downside of Greater Lexical Influences: Selectively Poorer Speech Perception in Noise
Xie, Zilong; Tessmer, Rachel; Chandrasekaran, Bharath
2017-01-01
Purpose: Although lexical information influences phoneme perception, the extent to which reliance on lexical information enhances speech processing in challenging listening environments is unclear. We examined the extent to which individual differences in lexical influences on phonemic processing impact speech processing in maskers containing varying degrees of linguistic information (2-talker babble or pink noise). Method: Twenty-nine monolingual English speakers were instructed to ignore the lexical status of spoken syllables (e.g., gift vs. kift) and to only categorize the initial phonemes (/g/ vs. /k/). The same participants then performed speech recognition tasks in the presence of 2-talker babble or pink noise in audio-only and audiovisual conditions. Results: Individuals who demonstrated greater lexical influences on phonemic processing experienced greater speech processing difficulties in 2-talker babble than in pink noise. These selective difficulties were present across audio-only and audiovisual conditions. Conclusion: Individuals with greater reliance on lexical processes during speech perception exhibit impaired speech recognition in listening conditions in which competing talkers introduce audible linguistic interference. Future studies should examine the locus of lexical influences/interferences on phonemic processing and speech-in-speech processing. PMID:28586824
Phonology and Vocal Behavior in Toddlers with Autism Spectrum Disorders
Schoen, Elizabeth; Paul, Rhea; Chawarska, Katyrzyna
2011-01-01
The purpose of this study is to examine the phonological and other vocal productions of children, 18-36 months, with autism spectrum disorder (ASD) and to compare these productions to those of age-matched and language-matched controls. Speech samples were obtained from 30 toddlers with ASD, 11 age-matched toddlers and 23 language-matched toddlers during either parent-child or clinician-child play sessions. Samples were coded for a variety of speech-like and non-speech vocalization productions. Toddlers with ASD produced speech-like vocalizations similar to those of language-matched peers, but produced significantly more atypical non-speech vocalizations when compared to both control groups. Toddlers with ASD show speech-like sound production that is linked to their language level, in a manner similar to that seen in typical development. The main area of difference in vocal development in this population is in the production of atypical vocalizations. Findings suggest that toddlers with autism spectrum disorders might not tune into the language model of their environment. Failure to attend to the ambient language environment negatively impacts the ability to acquire spoken language. PMID:21308998
Venezia, Jonathan H; Fillmore, Paul; Matchin, William; Isenberg, A Lisette; Hickok, Gregory; Fridriksson, Julius
2016-02-01
Sensory information is critical for movement control, both for defining the targets of actions and providing feedback during planning or ongoing movements. This holds for speech motor control as well, where both auditory and somatosensory information have been shown to play a key role. Recent clinical research demonstrates that individuals with severe speech production deficits can show a dramatic improvement in fluency during online mimicking of an audiovisual speech signal suggesting the existence of a visuomotor pathway for speech motor control. Here we used fMRI in healthy individuals to identify this new visuomotor circuit for speech production. Participants were asked to perceive and covertly rehearse nonsense syllable sequences presented auditorily, visually, or audiovisually. The motor act of rehearsal, which is prima facie the same whether or not it is cued with a visible talker, produced different patterns of sensorimotor activation when cued by visual or audiovisual speech (relative to auditory speech). In particular, a network of brain regions including the left posterior middle temporal gyrus and several frontoparietal sensorimotor areas activated more strongly during rehearsal cued by a visible talker versus rehearsal cued by auditory speech alone. Some of these brain regions responded exclusively to rehearsal cued by visual or audiovisual speech. This result has significant implications for models of speech motor control, for the treatment of speech output disorders, and for models of the role of speech gesture imitation in development. Copyright © 2015 Elsevier Inc. All rights reserved.
Davidow, Jason H
2014-01-01
Metronome-paced speech results in the elimination, or substantial reduction, of stuttering moments. The cause of fluency during this fluency-inducing condition is unknown. Several investigations have reported changes in speech pattern characteristics from a control condition to a metronome-paced speech condition, but failure to control speech rate between conditions limits our ability to determine if the changes were necessary for fluency. This study examined the effect of speech rate on several speech production variables during one-syllable-per-beat metronomic speech in order to determine changes that may be important for fluency during this fluency-inducing condition. Thirteen persons who stutter (PWS), aged 18-62 years, completed a series of speaking tasks. Several speech production variables were compared between conditions produced at different metronome beat rates, and between a control condition and a metronome-paced speech condition produced at a rate equal to the control condition. Vowel duration, voice onset time, pressure rise time and phonated intervals were significantly impacted by metronome beat rate. Voice onset time and the percentage of short (30-100 ms) phonated intervals significantly decreased from the control condition to the equivalent rate metronome-paced speech condition. A reduction in the percentage of short phonated intervals may be important for fluency during syllable-based metronome-paced speech for PWS. Future studies should continue examining the necessity of this reduction. In addition, speech rate must be controlled in future fluency-inducing condition studies, including neuroimaging investigations, in order for this research to make a substantial contribution to finding the fluency-inducing mechanism of fluency-inducing conditions. © 2013 Royal College of Speech and Language Therapists.
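One of the measures named above, the percentage of short (30-100 ms) phonated intervals, reduces to simple run-length arithmetic once a per-frame voicing decision is available. The sketch below illustrates that computation under an assumed 10 ms frame rate and a toy voicing track; the original work used dedicated phonation-measurement instrumentation rather than this simplification.

    import numpy as np

    def short_phonated_interval_pct(voiced, frame_ms=10.0, lo_ms=30.0, hi_ms=100.0):
        """Percentage of phonated (voiced) runs lasting lo_ms..hi_ms, inclusive."""
        voiced = np.asarray(voiced, dtype=bool)
        padded = np.concatenate(([False], voiced, [False]))
        edges = np.flatnonzero(padded[1:] != padded[:-1])  # run boundaries
        starts, ends = edges[0::2], edges[1::2]
        durations_ms = (ends - starts) * frame_ms
        if durations_ms.size == 0:
            return 0.0
        short = (durations_ms >= lo_ms) & (durations_ms <= hi_ms)
        return 100.0 * short.mean()

    # Toy track: alternating 3-frame (30 ms) and 1-frame (10 ms) voiced runs.
    print(short_phonated_interval_pct([0, 1, 1, 1, 0, 1] * 10))  # -> 50.0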
ERIC Educational Resources Information Center
Millar, Diane C.; Light, Janice C.; Schlosser, Ralf W.
2006-01-01
Purpose: This article presents the results of a meta-analysis to determine the effect of augmentative and alternative communication (AAC) on the speech production of individuals with developmental disabilities. Method: A comprehensive search of the literature published between 1975 and 2003, which included data on speech production before, during,…
Johari, Karim; Behroozmand, Roozbeh
2017-05-01
The predictive coding model suggests that neural processing of sensory information is facilitated for temporally-predictable stimuli. This study investigated how temporal processing of visually-presented sensory cues modulates movement reaction time and neural activities in speech and hand motor systems. Event-related potentials (ERPs) were recorded in 13 subjects while they were visually-cued to prepare to produce a steady vocalization of a vowel sound or press a button in a randomized order, and to initiate the cued movement following the onset of a go signal on the screen. The experiment was conducted in two counterbalanced blocks in which the time interval between visual cue and go signal was temporally-predictable (fixed delay at 1000 ms) or unpredictable (variable between 1000 and 2000 ms). Results of the behavioral response analysis indicated that movement reaction time was significantly decreased for temporally-predictable stimuli in both speech and hand modalities. We identified premotor ERP activities with a left-lateralized parietal distribution for hand and a frontocentral distribution for speech that were significantly suppressed in response to temporally-predictable compared with unpredictable stimuli. The premotor ERPs were elicited approximately 100 ms before movement onset and were significantly correlated with speech and hand motor reaction times only in response to temporally-predictable stimuli. These findings suggest that the motor system establishes a predictive code to facilitate movement in response to temporally-predictable sensory stimuli. Our data suggest that the premotor ERP activities are robust neurophysiological biomarkers of such predictive coding mechanisms. These findings provide novel insights into the temporal processing mechanisms of speech and hand motor systems.
Neural correlates of phonetic convergence and speech imitation.
Garnier, Maëva; Lamalle, Laurent; Sato, Marc
2013-01-01
Speakers unconsciously tend to mimic their interlocutor's speech during communicative interaction. This study aims at examining the neural correlates of phonetic convergence and deliberate imitation, in order to explore whether imitation of phonetic features, deliberate or unconscious, might reflect a sensory-motor recalibration process. Sixteen participants listened to vowels with pitch varying around the average pitch of their own voice, and then produced the identified vowels, while their speech was recorded and their brain activity was imaged using fMRI. Three degrees and types of imitation were compared (unconscious, deliberate, and inhibited) using a go-nogo paradigm, which enabled the comparison of brain activations during the whole imitation process, its active perception step, and its production step. Speakers followed the pitch of voices they were exposed to, even unconsciously, without being instructed to do so. After being informed about this phenomenon, 14 participants were able to inhibit it, at least partially. The results of whole-brain and ROI analyses support the view that both deliberate and unconscious imitation are based on similar neural mechanisms and networks, involving regions of the dorsal stream during both the perception and production steps of the imitation process. While no significant difference in brain activation was found between unconscious and deliberate imitation, the degree of imitation appears to be determined by processes occurring during the perception step. Four regions of the dorsal stream (bilateral auditory cortex, bilateral supramarginal gyrus [SMG], and left Wernicke's area) indeed showed activity that correlated significantly with the degree of imitation during the perception step.
Hodgson, Catherine; Lambon Ralph, Matthew A
2008-01-01
Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study utilised a novel method: tempo picture naming. Experiment 1 showed that, compared to standard deadline naming tasks, participants made more errors on the tempo picture naming tasks. Further, RTs were longer and more errors were produced for living items than non-living items, a pattern seen in both semantic dementia and semantically impaired stroke aphasic patients. Experiment 2 showed that providing the initial phoneme as a cue enhanced performance, whereas providing an incorrect phonemic cue further reduced performance. These results support the contention that the tempo picture naming paradigm reduces the time allowed for controlled semantic processing, causing increased error rates. This experimental procedure would, therefore, appear to mimic the performance of aphasic patients with multi-modal semantic impairment that results from poor semantic control rather than the degradation of semantic representations observed in semantic dementia [Jefferies, E. A., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia vs. semantic dementia: A case-series comparison. Brain, 129, 2132-2147]. Further implications for theories of semantic cognition and models of speech processing are discussed.
Eliciting Spontaneous Speech in Bilingual Students: Methods & Techniques.
ERIC Educational Resources Information Center
Cornejo, Ricardo J.; And Others
Intended to provide practical information pertaining to methods and techniques for speech elicitation and production, the monograph offers specific methods and techniques to elicit spontaneous speech in bilingual students. Chapter 1, "Traditional Methodologies for Language Production and Recording," presents an overview of studies using…
Yoder, Paul J.; Molfese, Dennis; Murray, Micah M.; Key, Alexandra P. F.
2013-01-01
Typically developing (TD) preschoolers and age-matched preschoolers with specific language impairment (SLI) had event-related potentials (ERPs) recorded in response to four monosyllabic speech sounds prior to treatment and, in the SLI group, after 6 months of grammatical treatment. Before treatment, the TD group processed speech sounds faster than the SLI group. The SLI group increased the speed of their speech processing after treatment. Post-treatment speed of speech processing predicted later impairment in comprehending phrase elaboration in the SLI group. During the treatment phase, change in speed of speech processing predicted growth rate of grammar in the SLI group. PMID:24219693
ERIC Educational Resources Information Center
Bernales, Carolina
2016-01-01
Previous research on foreign language classroom participation has shown that oral production has a privileged status compared to less salient forms of participation, such as mental involvement and engagement in class activities. This mixed-methods study presents an alternative look at classroom participation by investigating the relationship…
Grammatical Gender in L2: A Production or a Real-Time Processing Problem?
ERIC Educational Resources Information Center
Gruter, Theres; Lew-Williams, Casey; Fernald, Anne
2012-01-01
Mastery of grammatical gender is difficult to achieve in a second language (L2). This study investigates whether persistent difficulty with grammatical gender often observed in the speech of otherwise highly proficient L2 learners is best characterized as a production-specific performance problem, or as difficulty with the retrieval of gender…
Mapping the Speech Code: Cortical Responses Linking the Perception and Production of Vowels
Schuerman, William L.; Meyer, Antje S.; McQueen, James M.
2017-01-01
The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation. PMID:28439232
Recovering With Acquired Apraxia of Speech: The First 2 Years.
Haley, Katarina L; Shafer, Jennifer N; Harmon, Tyson G; Jacks, Adam
2016-12-01
This study was intended to document speech recovery for 1 person with acquired apraxia of speech quantitatively and on the basis of her lived experience. The second author sustained a traumatic brain injury that resulted in acquired apraxia of speech. Over a 2-year period, she documented her recovery through 22 video-recorded monologues. We analyzed these monologues using a combination of auditory perceptual, acoustic, and qualitative methods. Recovery was evident for all quantitative variables examined. For speech sound production, the recovery was most prominent during the first 3 months, but slower improvement was evident for many months. Measures of speaking rate, fluency, and prosody changed more gradually throughout the entire period. A qualitative analysis of topics addressed in the monologues was consistent with the quantitative speech recovery and indicated a subjective dynamic relationship between accuracy and rate, an observation that several factors made speech sound production variable, and a persisting need for cognitive effort while speaking. Speech features improved over an extended time, but the recovery trajectories differed, indicating dynamic reorganization of the underlying speech production system. The relationship among speech dimensions should be examined in other cases and in population samples. The combination of quantitative and qualitative analysis methods offers advantages for understanding clinically relevant aspects of recovery.
ERIC Educational Resources Information Center
van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.
2007-01-01
Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…
Reaction times of normal listeners to laryngeal, alaryngeal, and synthetic speech.
Evitts, Paul M; Searl, Jeff
2006-12-01
The purpose of this study was to compare listener processing demands when decoding alaryngeal compared to laryngeal speech. Fifty-six listeners were presented with single words produced by 1 proficient speaker from 5 different modes of speech: normal, tracheoesophageal (TE), esophageal (ES), electrolaryngeal (EL), and synthetic speech (SS). Cognitive processing load was indexed by listener reaction time (RT). To account for significant durational differences among the modes of speech, an RT ratio was calculated (stimulus duration divided by RT). Results indicated that the cognitive processing load was greater for ES and EL relative to normal speech. TE and normal speech did not differ in terms of RT ratio, suggesting fairly comparable cognitive demands placed on the listener. SS required greater cognitive processing load than normal and alaryngeal speech. The results are discussed relative to alaryngeal speech intelligibility and the role of the listener. Potential clinical applications and directions for future research are also presented.
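The RT ratio described above is plain arithmetic, but a tiny worked example makes its direction explicit; all numbers here are invented, not taken from the study.

    # RT ratio = stimulus duration / reaction time; a smaller ratio means the
    # listener needed relatively more time, i.e., greater cognitive processing load.
    def rt_ratio(stimulus_duration_ms, reaction_time_ms):
        return stimulus_duration_ms / reaction_time_ms

    print(rt_ratio(600, 900))   # 0.667: faster decision relative to the stimulus
    print(rt_ratio(600, 1200))  # 0.500: slower decision, higher processing load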
General-Purpose Monitoring during Speech Production
ERIC Educational Resources Information Center
Ries, Stephanie; Janssen, Niels; Dufau, Stephane; Alario, F.-Xavier; Burle, Boris
2011-01-01
The concept of "monitoring" refers to our ability to control our actions on-line. Monitoring involved in speech production is often described in psycholinguistic models as an inherent part of the language system. We probed the specificity of speech monitoring in two psycholinguistic experiments where electroencephalographic activities were…
Pisoni, David B; Kronenberger, William G; Roman, Adrienne S; Geers, Ann E
2011-02-01
Conventional assessments of outcomes in deaf children with cochlear implants (CIs) have focused primarily on endpoint or product measures of speech and language. Little attention has been devoted to understanding the basic underlying core neurocognitive factors involved in the development and processing of speech and language. In this study, we examined the development of factors related to the quality of phonological information in immediate verbal memory, including immediate memory capacity and verbal rehearsal speed, in a sample of deaf children after >10 yrs of CI use and assessed the correlations between these two process measures and a set of speech and language outcomes. Of an initial sample of 180 prelingually deaf children with CIs assessed at ages 8 to 9 yrs after 3 to 7 yrs of CI use, 112 returned for testing again in adolescence after 10 more years of CI experience. In addition to completing a battery of conventional speech and language outcome measures, subjects were administered the Wechsler Intelligence Scale for Children-III Digit Span subtest to measure immediate verbal memory capacity. Sentence durations obtained from the McGarr speech intelligibility test were used as a measure of verbal rehearsal speed. Relative to norms for normal-hearing (NH) children, Digit Span scores were well below average for children with CIs at both elementary and high school ages. Improvement was observed over the 8-yr period in the mean longest digit span forward score but not in the mean longest digit span backward score. Longest digit span forward scores at ages 8 to 9 yrs were significantly correlated with all speech and language outcomes in adolescence, but backward digit spans correlated significantly only with measures of higher-order language functioning over that time period. While verbal rehearsal speed increased for almost all subjects between elementary grades and high school, it was still slower than the rehearsal speed obtained from a control group of NH adolescents. Verbal rehearsal speed at ages 8 to 9 yrs was also found to be strongly correlated with speech and language outcomes and Digit Span scores in adolescence. Despite improvement after 8 additional years of CI use, measures of immediate verbal memory capacity and verbal rehearsal speed, which reflect core fundamental information processing skills associated with representational efficiency and information processing capacity, continue to be delayed in children with CIs relative to NH peers. Furthermore, immediate verbal memory capacity and verbal rehearsal speed at 8 to 9 yrs of age were both found to predict speech and language outcomes in adolescence, demonstrating the important contribution of these processing measures for speech-language development in children with CIs. Understanding the relations between these core underlying processes and speech-language outcomes in children with CIs may help researchers to develop new approaches to intervention and treatment of deaf children who perform poorly with their CIs. Moreover, this knowledge could be used for early identification of deaf children who may be at high risk for poor speech and language outcomes after cochlear implantation as well as for the development of novel targeted interventions that focus selectively on these core elementary information processing variables.
Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.
2017-01-01
Purpose Previous articles in this supplement described rationale for and development of the pause marker (PM), a diagnostic marker of childhood apraxia of speech (CAS), and studies supporting its validity and reliability. The present article assesses the theoretical coherence of the PM with speech processing deficits in CAS. Method PM and other scores were obtained for 264 participants in 6 groups: CAS in idiopathic, neurogenetic, and complex neurodevelopmental disorders; adult-onset apraxia of speech (AAS) consequent to stroke and primary progressive apraxia of speech; and idiopathic speech delay. Results Participants with CAS and AAS had significantly lower scores than typically speaking reference participants and speech delay controls on measures posited to assess representational and transcoding processes. Representational deficits differed between CAS and AAS groups, with support for both underspecified linguistic representations and memory/access deficits in CAS, but for only the latter in AAS. CAS–AAS similarities in the age–sex standardized percentages of occurrence of the most frequent type of inappropriate pauses (abrupt) and significant differences in the standardized occurrence of appropriate pauses were consistent with speech processing findings. Conclusions Results support the hypotheses of core representational and transcoding speech processing deficits in CAS and theoretical coherence of the PM's pause-speech elements with these deficits. PMID:28384751
Differentiating primary progressive aphasias in a brief sample of connected speech
Evans, Emily; O'Shea, Jessica; Powers, John; Boller, Ashley; Weinberg, Danielle; Haley, Jenna; McMillan, Corey; Irwin, David J.; Rascovsky, Katya; Grossman, Murray
2013-01-01
Objective: A brief speech expression protocol that can be administered and scored without special training would aid in the differential diagnosis of the 3 principal forms of primary progressive aphasia (PPA): nonfluent/agrammatic PPA, logopenic variant PPA, and semantic variant PPA. Methods: We used a picture-description task to elicit a short speech sample, and we evaluated impairments in speech-sound production, speech rate, lexical retrieval, and grammaticality. We compared the results with those obtained by a longer, previously validated protocol and further validated performance with multimodal imaging to assess the neuroanatomical basis of the deficits. Results: We found different patterns of impaired grammar in each PPA variant, and additional language production features were impaired in each: nonfluent/agrammatic PPA was characterized by speech-sound errors; logopenic variant PPA by dysfluencies (false starts and hesitations); and semantic variant PPA by poor retrieval of nouns. Strong correlations were found between this brief speech sample and a lengthier narrative speech sample. A composite measure of grammaticality and other measures of speech production were correlated with distinct regions of gray matter atrophy and reduced white matter fractional anisotropy in each PPA variant. Conclusions: These findings provide evidence that large-scale networks are required for fluent, grammatical expression; that these networks can be selectively disrupted in PPA syndromes; and that quantitative analysis of a brief speech sample can reveal the corresponding distinct speech characteristics. PMID:23794681
Processing melodic contour and speech intonation in congenital amusics with Mandarin Chinese.
Jiang, Cunmei; Hamm, Jeff P; Lim, Vanessa K; Kirk, Ian J; Yang, Yufang
2010-07-01
Congenital amusia is a disorder in the perception and production of musical pitch. It has been suggested that early exposure to a tonal language may compensate for the pitch disorder (Peretz, 2008). If so, it is reasonable to expect that there would be different characterizations of pitch perception in music and speech in congenital amusics who speak a tonal language, such as Mandarin. In this study, a group of 11 adults with amusia whose first language was Mandarin were tested with melodic contour and speech intonation discrimination and identification tasks. The participants with amusia were impaired in discriminating and identifying melodic contour. These abnormalities were also detected in identifying both speech and non-linguistic analogue derived patterns for the Mandarin intonation tasks. In addition, there was an overall trend for the participants with amusia to show deficits with respect to controls in the intonation discrimination tasks for both speech and non-linguistic analogues. These findings suggest that the amusics' melodic pitch deficits may extend to the perception of speech, and could potentially result in some language deficits in those who speak a tonal language. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
[Parent's perspective on child rearing and corporal punishment].
Donoso, Miguir Terezinha Vieccelli; Ricas, Janete
2009-02-01
To describe parents' current perceptions of corporal punishment as a child-rearing practice. Thirty-one family members were studied at a health care unit and a local social service unit in the city of Belo Horizonte (Southeastern Brazil) in 2006; the children of 12 had been taken into care following child abuse complaints, while those of the other 19 had not. Data were collected through semi-structured interviews, and speech analysis was performed, grouped by subjects and categories. ANALYSIS OF DISCOURSE: The respondents' speech was constrained by its conditions of production. Conceptions of child rearing and its practices were diverse, and corporal punishment was reported by all parents, even those who expressed strong disapproval of the practice. Speeches were characterized by heterogeneity and polyphony, with emphasis on traditional, religious, and popular scientific discourse. Respondents did not invoke legal prohibition of corporal punishment or of its excesses. The culture of corporal punishment of children is changing: the tradition approving it has weakened, and prohibition has been slowly adopted. Reinforcing legal action against this practice can help speed the end of corporal punishment of children.
Assessment of Individuals with Primary Progressive Aphasia.
Henry, Maya L; Grasso, Stephanie M
2018-07-01
Speech-language pathologists play a crucial role in the assessment and treatment of individuals with primary progressive aphasia (PPA). The speech-language evaluation is a critical aspect of the diagnostic and rehabilitative process, informing differential diagnosis as well as intervention planning and monitoring of cognitive-linguistic status over time. The evaluation should include a thorough case history and interview and a detailed assessment of speech-language and cognitive functions, with tasks designed to detect core and associated deficits outlined in current diagnostic criteria. In this paper, we review assessments that can be utilized to examine communication and cognition in PPA, including general aphasia batteries designed for stroke and/or progressive aphasia as well as tests of specific cognitive-linguistic functions, including naming, object/person knowledge, single-word and sentence comprehension, repetition, spontaneous speech/language production, motor speech, written language, and nonlinguistic cognitive domains. The comprehensive evaluation can inform diagnostic decision making and facilitate planning of interventions that are tailored to the patient's current status and likely progression of deficits. As such, the speech-language evaluation allows the medical team to provide individuals with PPA and their families with appropriate recommendations for the present and the future.
Speech Production in 12-Month-Old Children with and without Hearing Loss
ERIC Educational Resources Information Center
McGowan, Richard S.; Nittrouer, Susan; Chenausky, Karen
2008-01-01
Purpose: The purpose of this study was to compare speech production at 12 months of age for children with hearing loss (HL) who were identified and received intervention before 6 months of age with those of children with normal hearing (NH). Method: The speech production of 10 children with NH was compared with that of 10 children with HL whose…
ERIC Educational Resources Information Center
Hickok, Gregory
2012-01-01
Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…
The Timing and Effort of Lexical Access in Natural and Degraded Speech
Wagner, Anita E.; Toffanin, Paolo; Başkent, Deniz
2016-01-01
Understanding speech is effortless in ideal situations, and although adverse conditions, such as those caused by hearing impairment, often render it an effortful task, they do not necessarily suspend speech comprehension. A prime example of this is speech perception by cochlear implant users, whose hearing prostheses transmit speech as a significantly degraded signal. It is as yet unknown how mechanisms of speech processing deal with such degraded signals, and whether they are affected by effortful processing of speech. This paper compares the automatic process of lexical competition between natural and degraded speech, and combines gaze fixations, which capture the course of lexical disambiguation, with pupillometry, which quantifies the mental effort involved in processing speech. Listeners’ ocular responses were recorded during disambiguation of lexical embeddings with matching and mismatching durational cues. Durational cues were selected due to their substantial role in listeners’ quick limitation of the number of lexical candidates for lexical access in natural speech. Results showed that lexical competition increased mental effort in processing natural stimuli, particularly in the presence of mismatching cues. Signal degradation reduced listeners’ ability to quickly integrate durational cues in lexical selection, and delayed and prolonged lexical competition. The effort of processing degraded speech was increased overall, and because it had its sources at the pre-lexical level, this effect can be attributed to listening to degraded speech rather than to lexical disambiguation. In sum, the course of lexical competition was largely comparable for natural and degraded speech, but showed crucial shifts in timing, and different sources of increased mental effort. We argue that well-timed progress of information from sensory to pre-lexical and lexical stages of processing, which is the result of perceptual adaptation during speech development, is the reason why speech is perceived as an undemanding task in ideal situations. Degradation of the signal or the receiver channel can quickly bring this well-adjusted timing out of balance and lead to an increase in mental effort. Incomplete and effortful processing at the early pre-lexical stages has its consequences on lexical processing, as it adds uncertainty to the forming and revising of lexical hypotheses. PMID:27065901
Speech systems research at Texas Instruments
NASA Technical Reports Server (NTRS)
Doddington, George R.
1977-01-01
An assessment of automatic speech processing technology is presented. Fundamental problems in the development and the deployment of automatic speech processing systems are defined and a technology forecast for speech systems is presented.
Acute stress reduces speech fluency.
Buchanan, Tony W; Laures-Gore, Jacqueline S; Duff, Melissa C
2014-03-01
People often report word-finding difficulties and other language disturbances when put in a stressful situation. There is, however, scant empirical evidence to support the claim that stress affects speech productivity. To address this issue, we measured speech and language variables during a stressful Trier Social Stress Test (TSST) as well as during a less stressful "placebo" TSST (Het et al., 2009). Compared to the placebo condition, participants showed higher word productivity during the stressful TSST. By contrast, participants paused more during the stressful TSST, an effect that was especially pronounced in participants who produced a larger cortisol and heart rate response to the stressor. Findings support anecdotal evidence of stress-impaired speech production abilities. Copyright © 2014 Elsevier B.V. All rights reserved.
Pinheiro, Ana P; Rezaii, Neguine; Nestor, Paul G; Rauber, Andréia; Spencer, Kevin M; Niznikiewicz, Margaret
2016-02-01
During speech comprehension, multiple cues need to be integrated at a millisecond speed, including semantic information, as well as voice identity and affect cues. A processing advantage has been demonstrated for self-related stimuli when compared with non-self stimuli, and for emotional relative to neutral stimuli. However, very few studies investigated self-other speech discrimination and, in particular, how emotional valence and voice identity interactively modulate speech processing. In the present study we probed how the processing of words' semantic valence is modulated by speaker's identity (self vs. non-self voice). Sixteen healthy subjects listened to 420 prerecorded adjectives differing in voice identity (self vs. non-self) and semantic valence (neutral, positive and negative), while electroencephalographic data were recorded. Participants were instructed to decide whether the speech they heard was their own (self-speech condition), someone else's (non-self speech), or if they were unsure. The ERP results demonstrated interactive effects of speaker's identity and emotional valence on both early (N1, P2) and late (Late Positive Potential - LPP) processing stages: compared with non-self speech, self-speech with neutral valence elicited more negative N1 amplitude, self-speech with positive valence elicited more positive P2 amplitude, and self-speech with both positive and negative valence elicited more positive LPP. ERP differences between self and non-self speech occurred in spite of similar accuracy in the recognition of both types of stimuli. Together, these findings suggest that emotion and speaker's identity interact during speech processing, in line with observations of partially dependent processing of speech and speaker information. Copyright © 2016. Published by Elsevier Inc.
Resource Guide for Persons with Speech or Language Impairments.
ERIC Educational Resources Information Center
IBM, Atlanta, GA. National Support Center for Persons with Disabilities.
The resource guide identifies products which assist speech or language impaired individuals in accessing IBM (International Business Machine) Personal Computers or the IBM Personal System/2 family of products. An introduction provides a general overview of ways computers can help persons with speech or language handicaps. The document then…
Cortical Interactions Underlying the Production of Speech Sounds
ERIC Educational Resources Information Center
Guenther, Frank H.
2006-01-01
Speech production involves the integration of auditory, somatosensory, and motor information in the brain. This article describes a model of speech motor control in which a feedforward control system, involving premotor and primary motor cortex and the cerebellum, works in concert with auditory and somatosensory feedback control systems that…
Nonword repetition and nonword reading abilities in adults who do and do not stutter.
Sasisekaran, Jayanthi
2013-09-01
In the present study a nonword repetition and a nonword reading task were used to investigate the behavioral (speech accuracy) and speech kinematic (movement variability measured as lip aperture variability index; speech duration) profiles of groups of young adults who do (AWS) and do not stutter (control). Participants were 9 AWS (8 males, Mean age=32.2, SD=14.7) and 9 age- and sex-matched control participants (Mean age=31.8, SD=14.6). For the nonword repetition task, participants were administered the Nonword Repetition Test (Dollaghan & Campbell, 1998). For the reading task, participants were required to read out target nonwords varying in length (6 vs. 11 syllables). Repeated measures analyses of variance were conducted to compare the groups in percent speech accuracy for both tasks; only for the nonword reading task, the groups were compared in movement variability and speech duration. The groups were comparable in percent accuracy in nonword repetition. Findings from nonword reading revealed a trend for the AWS to show a lower percent of accurate productions compared to the control group. AWS also showed significantly higher movement variability and longer speech durations compared to the control group in nonword reading. Some preliminary evidence for group differences in practice effect (seen as differences between the early vs. later 5 trials) was evident in speech duration. Findings suggest differences between AWS and control groups in phonemic encoding and/or speech motor planning and production. Findings from nonword repetition vs. reading highlight the need for careful consideration of nonword properties. At the end of this activity the reader will be able to: (a) summarize the literature on nonword repetition skills in adults who stutter, (b) describe processes underlying nonword repetition and nonword reading, (c) summarize whether or not adults who stutter differ from those who do not in the behavioral and kinematic markers of nonword reading performance, (d) discuss future directions for research. Copyright © 2013 Elsevier Inc. All rights reserved.
Koenig, Laura L.; Lucero, Jorge C.; Perlman, Elizabeth
2008-01-01
This study investigates token-to-token variability in fricative production of 5 year olds, 10 year olds, and adults. Previous studies have reported higher intrasubject variability in children than adults, in speech as well as nonspeech tasks, but authors have disagreed on the causes and implications of this finding. The current work assessed the characteristics of age-related variability across articulators (larynx and tongue) as well as in temporal versus spatial domains. Oral airflow signals, which reflect changes in both laryngeal and supralaryngeal apertures, were obtained for multiple productions of /h s z/. The data were processed using functional data analysis, which provides a means of obtaining relatively independent indices of amplitude and temporal (phasing) variability. Consistent with past work, both temporal and amplitude variabilities were higher in children than adults, but the temporal indices were generally less adultlike than the amplitude indices for both groups of children. Quantitative and qualitative analyses showed considerable speaker- and consonant-specific patterns of variability. The data indicate that variability in /s/ may represent laryngeal as well as supralaryngeal control and further that a simple random noise factor, higher in children than in adults, is insufficient to explain developmental differences in speech production variability. PMID:19045800
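The distinction drawn above between temporal (phasing) and amplitude variability can be illustrated with a much simpler stand-in for functional data analysis: jittered synthetic airflow peaks whose timing and height are summarized separately. Everything below (trace shapes, jitter magnitudes) is invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 200)

    # Ten synthetic airflow traces: one peak whose time and height both jitter.
    peak_times = 0.5 + rng.normal(0, 0.03, 10)  # phasing jitter across repetitions
    peak_amps = 1.0 + rng.normal(0, 0.10, 10)   # amplitude jitter across repetitions
    traces = [a * np.exp(-((t - m) ** 2) / 0.005) for m, a in zip(peak_times, peak_amps)]

    # Temporal index: variability of landmark (peak) timing across repetitions.
    temporal_index = np.array([t[np.argmax(tr)] for tr in traces]).std()
    # Amplitude index: variability of peak height once timing is set aside.
    amplitude_index = np.array([tr.max() for tr in traces]).std()
    print(f"temporal: {temporal_index:.3f}  amplitude: {amplitude_index:.3f}")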
Characterising receptive language processing in schizophrenia using word and sentence tasks.
Tan, Eric J; Yelland, Gregory W; Rossell, Susan L
2016-01-01
Language dysfunction is proposed to relate to the speech disturbances in schizophrenia, which are more commonly referred to as formal thought disorder (FTD). Presently, language production deficits in schizophrenia are better characterised than language comprehension difficulties. This study thus aimed to examine three aspects of language comprehension in schizophrenia: (1) the role of lexical processing, (2) meaning attribution for words and sentences, and (3) the relationship between comprehension and production. Fifty-seven schizophrenia/schizoaffective disorder patients and 48 healthy controls completed a clinical assessment and three language tasks assessing word recognition, synonym identification, and sentence comprehension. Poorer patient performance was expected on the latter two tasks. Recognition of word form was not impaired in schizophrenia, indicating intact lexical processing. Whereas single-word synonym identification was not significantly impaired, there was a tendency to attribute word meanings based on phonological similarity with increasing FTD severity. Importantly, there was a significant sentence comprehension deficit for processing deep structure, which correlated with FTD severity. These findings established a receptive language deficit in schizophrenia at the syntactic level. There was also evidence for a relationship between some aspects of language comprehension and speech production/FTD. Apart from indicating language as another mechanism in FTD aetiology, the data also suggest that remediating language comprehension problems may be an avenue to pursue in alleviating FTD symptomatology.
Language learning impairments: integrating basic science, technology, and remediation.
Tallal, P; Merzenich, M M; Miller, S; Jenkins, W
1998-11-01
One of the fundamental goals of the modern field of neuroscience is to understand how neuronal activity gives rise to higher cortical function. However, to bridge the gap between neurobiology and behavior, we must understand higher cortical functions at the behavioral level at least as well as we have come to understand neurobiological processes at the cellular and molecular levels. This is certainly the case in the study of speech processing, where critical studies of behavioral dysfunction have provided key insights into the basic neurobiological mechanisms relevant to speech perception and production. Much of this progress derives from a detailed analysis of the sensory, perceptual, cognitive, and motor abilities of children who fail to acquire speech, language, and reading skills normally within the context of otherwise normal development. Current research now shows that a dysfunction in normal phonological processing, which is critical to the development of oral and written language, may derive, at least in part, from difficulties in perceiving and producing basic sensory-motor information in rapid succession, within tens of milliseconds (see Tallal et al. 1993a for a review). There is now substantial evidence supporting the hypothesis that basic temporal integration processes play a fundamental role in establishing neural representations for the units of speech (phonemes), which must be segmented from the (continuous) speech stream and combined to form words, in order for the normal development of oral and written language to proceed. Results from magnetic resonance imaging (MRI) and positron emission tomography (PET) studies, as well as studies of behavioral performance in normal and language impaired children and adults, will be reviewed to support the view that the integration of rapidly changing successive acoustic events plays a primary role in phonological development and disorders. Finally, remediation studies based on this research, coupled with neuroplasticity research, will be presented.
A Proposed Process for Managing the First Amendment Aspects of Campus Hate Speech.
ERIC Educational Resources Information Center
Kaplan, William A.
1992-01-01
A carefully structured process for campus administrative decision making concerning hate speech is proposed and suggestions for implementation are offered. In addition, criteria for evaluating hate speech processes are outlined, and First Amendment principles circumscribing the institution's discretion to regulate hate speech are discussed.…
Situational influences on rhythmicity in speech, music, and their interaction
Hawkins, Sarah
2014-01-01
Brain processes underlying the production and perception of rhythm indicate considerable flexibility in how physical signals are interpreted. This paper explores how that flexibility might play out in rhythmicity in speech and music. There is much in common across the two domains, but there are also significant differences. Interpretations are explored that reconcile some of the differences, particularly with respect to how functional properties modify the rhythmicity of speech, within limits imposed by its structural constraints. Functional and structural differences mean that music is typically more rhythmic than speech, and that speech will be more rhythmic when the emotions are more strongly engaged, or intended to be engaged. The influence of rhythmicity on attention is acknowledged, and it is suggested that local increases in rhythmicity occur at times when attention is required to coordinate joint action, whether in talking or music-making. Evidence is presented which suggests that while these short phases of heightened rhythmical behaviour are crucial to the success of transitions in communicative interaction, their modality is immaterial: they all function to enhance precise temporal prediction and hence tightly coordinated joint action. PMID:25385776
Automatic evaluation of hypernasality based on a cleft palate speech database.
He, Ling; Zhang, Jing; Liu, Qi; Yin, Heng; Lech, Margaret; Huang, Yunzhi
2015-05-01
Hypernasality is one of the most typical characteristics of cleft palate (CP) speech. The outcome of hypernasality grading decides whether follow-up surgery is necessary. Currently, the evaluation of CP speech is carried out by experienced speech therapists. However, the result strongly depends on their clinical experience and subjective judgment. This work aims to propose an automatic evaluation system for hypernasality grading in CP speech. The database tested in this work was collected by the Hospital of Stomatology, Sichuan University, which has the largest number of CP patients in China. Based on the production process of hypernasality, features of the sound source pulse and the vocal tract filter are presented. These features include pitch, the first and second energy-amplified frequency bands, cepstrum-based features, MFCCs, and short-time energy in sub-bands. These features, combined with a KNN classifier, are applied to automatically classify four grades of hypernasality: normal, mild, moderate, and severe. The experimental results show that the proposed system achieves good performance. The classification rate for the four hypernasality grades reaches 80.4%. The sensitivity of the proposed features to gender is also discussed.
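As a concrete illustration of the classification stage described above, here is a hedged sketch: stock MFCC means stand in for the paper's fuller feature set (pitch, energy-amplified bands, sub-band energies), librosa and scikit-learn supply the feature extraction and KNN classifier, and the file paths and grade labels are hypothetical.

    import numpy as np
    import librosa
    from sklearn.neighbors import KNeighborsClassifier

    def utterance_features(wav_path, sr=16000):
        """Mean MFCC vector per utterance (a stand-in for the paper's feature set)."""
        y, sr = librosa.load(wav_path, sr=sr)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

    # Hypothetical recordings and grades: 0=normal, 1=mild, 2=moderate, 3=severe.
    train_paths = ["cp_001.wav", "cp_002.wav", "cp_003.wav", "cp_004.wav"]
    train_grades = [0, 1, 2, 3]
    test_paths = ["cp_100.wav"]

    X_train = np.stack([utterance_features(p) for p in train_paths])
    knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, train_grades)
    print(knn.predict(np.stack([utterance_features(p) for p in test_paths])))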
Using visible speech to train perception and production of speech for individuals with hearing loss.
Massaro, Dominic W; Light, Joanna
2004-04-01
The main goal of this study was to implement a computer-animated talking head, Baldi, as a language tutor for speech perception and production for individuals with hearing loss. Baldi can speak slowly; illustrate articulation by making the skin transparent to reveal the tongue, teeth, and palate; and show supplementary articulatory features, such as vibration of the neck to show voicing and turbulent airflow to show frication. Seven students with hearing loss between the ages of 8 and 13 were trained for 6 hours across 21 weeks on 8 categories of segments (4 voiced vs. voiceless distinctions, 3 consonant cluster distinctions, and 1 fricative vs. affricate distinction). Training included practice at the segment and the word level. Perception and production improved for each of the 7 children. Speech production also generalized to new words not included in the training lessons. Finally, speech production deteriorated somewhat after 6 weeks without training, indicating that the training method rather than some other experience was responsible for the improvement that was found.
Dimension-Based Statistical Learning Affects Both Speech Perception and Production.
Lehet, Matthew; Holt, Lori L
2017-04-01
Multiple acoustic dimensions signal speech categories. However, dimensions vary in their informativeness; some are more diagnostic of category membership than others. Speech categorization reflects these dimensional regularities such that diagnostic dimensions carry more "perceptual weight" and more effectively signal category membership to native listeners. Yet perceptual weights are malleable. When short-term experience deviates from long-term language norms, such as in a foreign accent, the perceptual weight of acoustic dimensions in signaling speech category membership rapidly adjusts. The present study investigated whether rapid adjustments in listeners' perceptual weights in response to speech that deviates from the norms also affects listeners' own speech productions. In a word recognition task, the correlation between two acoustic dimensions signaling consonant categories, fundamental frequency (F0) and voice onset time (VOT), matched the correlation typical of English, and then shifted to an "artificial accent" that reversed the relationship, and then shifted back. Brief, incidental exposure to the artificial accent caused participants to down-weight perceptual reliance on F0, consistent with previous research. Throughout the task, participants were intermittently prompted with pictures to produce these same words. In the block in which listeners heard the artificial accent with a reversed F0 × VOT correlation, F0 was a less robust cue to voicing in listeners' own speech productions. The statistical regularities of short-term speech input affect both speech perception and production, as evidenced via shifts in how acoustic dimensions are weighted. Copyright © 2016 Cognitive Science Society, Inc.
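The manipulation at the heart of this study, reversing the usual correlation between two acoustic dimensions, is easy to picture with simulated tokens. The sketch below is not the authors' stimulus code; the slopes, means, and noise values are invented solely to show an English-like versus reversed F0 x VOT relationship.

    import numpy as np

    rng = np.random.default_rng(2)

    def sample_tokens(n, reversed_accent):
        """Return (VOT ms, F0 Hz) pairs with a positive or reversed correlation."""
        vot = rng.uniform(0, 80, n)               # ms; short ~ voiced, long ~ voiceless
        slope = -0.5 if reversed_accent else 0.5  # Hz per ms, illustrative values only
        f0 = 180 + slope * (vot - 40) + rng.normal(0, 3, n)
        return np.column_stack([vot, f0])

    canonical = sample_tokens(100, reversed_accent=False)  # English-like correlation
    accented = sample_tokens(100, reversed_accent=True)    # "artificial accent"
    print(np.corrcoef(canonical.T)[0, 1], np.corrcoef(accented.T)[0, 1])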
Speech Processing and Recognition (SPaRe)
2011-01-01
Reports results in the areas of automatic speech recognition (ASR), speech processing, machine translation (MT), natural language processing (NLP), and information retrieval (IR). The initial operating capability (IOC) was expected to provide only document submission and search, together with automatic speech recognition for English, Spanish, and Arabic.
Deutsch, Avital
2016-02-01
In the present study we investigated to what extent the morphological facilitation effect induced by the derivational root morpheme in Hebrew is independent of semantic meaning and grammatical information of the part of speech involved. Using the picture-word interference paradigm with auditorily presented distractors, Experiment 1 compared the facilitation effect induced by semantically transparent versus semantically opaque morphologically related distractor words (i.e., a shared root) on the production latency of bare nouns. The results revealed almost the same amount of facilitation for both relatedness conditions. These findings accord with the results of the few studies that have addressed this issue in production in Indo-European languages, as well as previous studies in written word perception in Hebrew. Experiment 2 compared the root's facilitation effect, induced by morphologically related nominal versus verbal distractors, on the production latency of bare nouns. The results revealed a facilitation effect of similar size induced by the shared root, regardless of the distractor's part of speech. It is suggested that the principle that governs lexical organization at the level of morphology, at least for Hebrew roots, is form-driven and independent of semantic meaning. This principle of organization crosses the linguistic domains of production and written word perception, as well as grammatical organization according to part of speech.
Speech endpoint detection with non-language speech sounds for generic speech processing applications
NASA Astrophysics Data System (ADS)
McClain, Matthew; Romanowski, Brian
2009-05-01
Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS such as filled pauses will require future research.
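The abstract does not specify the model topology or feature set, but the likelihood-comparison idea behind per-class HMMs can be sketched as follows: train one HMM per class (LSS vs. NLSS) on feature sequences and label a new segment by whichever model scores it higher. The feature vectors below are synthetic stand-ins (hypothetical) for the language-agnostic acoustic features the authors describe; `hmmlearn` supplies the Gaussian HMMs.

```python
import numpy as np
from hmmlearn import hmm   # pip install hmmlearn

rng = np.random.default_rng(1)

# Stand-ins for acoustic feature sequences (e.g., MFCC frames). Real
# use would extract features from audio; here each class simply has a
# different feature distribution so the demo is self-contained.
def fake_segment(mean, n_frames=50, dim=6):
    return mean + rng.normal(0, 1.0, size=(n_frames, dim))

train_lss  = [fake_segment(0.0) for _ in range(20)]   # language speech sounds
train_nlss = [fake_segment(1.5) for _ in range(20)]   # coughs, clicks, breaths...

def fit_class_hmm(segments, n_states=3):
    """Fit one Gaussian HMM to all training segments of a class."""
    X = np.vstack(segments)
    lengths = [len(s) for s in segments]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

hmm_lss, hmm_nlss = fit_class_hmm(train_lss), fit_class_hmm(train_nlss)

def classify(segment):
    """Label a segment by which class HMM assigns higher log-likelihood."""
    return "LSS" if hmm_lss.score(segment) > hmm_nlss.score(segment) else "NLSS"

print(classify(fake_segment(0.0)), classify(fake_segment(1.5)))
```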
Millman, Zachary B; Goss, James; Schiffman, Jason; Mejias, Johana; Gupta, Tina; Mittal, Vijay A
2014-09-01
Gesture is integrally linked with language and cognitive systems, and recent years have seen a growing attention to these movements in patients with schizophrenia. To date, however, there have been no investigations of gesture in youth at ultra high risk (UHR) for psychosis. Examining gesture in UHR individuals may help to elucidate other widely recognized communicative and cognitive deficits in this population and yield new clues for treatment development. In this study, mismatch (indicating semantic incongruency between the content of speech and a given gesture) and retrieval (used during pauses in speech while a person appears to be searching for a word or idea) gestures were evaluated in 42 UHR individuals and 36 matched healthy controls. Cognitive functions relevant to gesture production (i.e., speed of visual information processing and verbal production) as well as positive and negative symptomatologies were assessed. Although the overall frequency of cases exhibiting these behaviors was low, UHR individuals produced substantially more mismatch and retrieval gestures than controls. The UHR group also exhibited significantly poorer verbal production performance when compared with controls. In the patient group, mismatch gestures were associated with poorer visual processing speed and elevated negative symptoms, while retrieval gestures were associated with higher speed of visual information-processing and verbal production, but not symptoms. Taken together these findings indicate that gesture abnormalities are present in individuals at high risk for psychosis. While mismatch gestures may be closely related to disease processes, retrieval gestures may be employed as a compensatory mechanism. Copyright © 2014 Elsevier B.V. All rights reserved.
Speech perception and spoken word recognition: past and present.
Jusczyk, Peter W; Luce, Paul A
2002-02-01
The scientific study of the perception of spoken language has been an exciting, prolific, and productive area of research for more than 50 yr. We have learned much about infants' and adults' remarkable capacities for perceiving and understanding the sounds of their language, as evidenced by our increasingly sophisticated theories of acquisition, process, and representation. We present a selective, but we hope representative, review of the past half century of research on speech perception, paying particular attention to the historical and theoretical contexts within which this research was conducted. Our foci in this review fall on three principal topics: early work on the discrimination and categorization of speech sounds, more recent efforts to understand the processes and representations that subserve spoken word recognition, and research on how infants acquire the capacity to perceive their native language. Our intent is to provide the reader with a sense of the progress our field has experienced over the last half century in understanding the human's extraordinary capacity for the perception of spoken language.
ERIC Educational Resources Information Center
Pouplier, Marianne; Marin, Stefania; Waltl, Susanne
2014-01-01
Purpose: Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the view that noncanonical productions may solely be due to phonetic, not phonological, processing irregularities, as previously assumed. The authors of the present study…
ERIC Educational Resources Information Center
Gass, Susan M., Ed.; Neu, Joyce, Ed.
Articles on speech acts and intercultural communication include: "Investigating the Production of Speech Act Sets" (Andrew Cohen); "Non-Native Refusals: A Methodological Perspective" (Noel Houck, Susan M. Gass); "Natural Speech Act Data versus Written Questionnaire Data: How Data Collection Method Affects Speech Act…
Sadness is unique: neural processing of emotions in speech prosody in musicians and non-musicians.
Park, Mona; Gutyrchik, Evgeny; Welker, Lorenz; Carl, Petra; Pöppel, Ernst; Zaytseva, Yuliya; Meindl, Thomas; Blautzik, Janusch; Reiser, Maximilian; Bao, Yan
2014-01-01
Musical training has been shown to have positive effects on several aspects of speech processing; however, the effects of musical training on the neural processing of speech prosody conveying distinct emotions are not yet well understood. We used functional magnetic resonance imaging (fMRI) to investigate whether the neural responses to speech prosody conveying happiness, sadness, and fear differ between musicians and non-musicians. Differences in the processing of emotional speech prosody between the two groups were observed only when sadness was expressed. Musicians showed increased activation in the middle frontal gyrus, the anterior medial prefrontal cortex, the posterior cingulate cortex, and the retrosplenial cortex. Our results suggest an increased sensitivity of emotional processing in musicians with respect to sadness expressed in speech, possibly reflecting empathic processes.
Wie, Ona Bø; Falkenberg, Eva-Signe; Tvete, Ole; Tomblin, Bruce
2007-05-01
The objectives of the study were to describe the characteristics of the first 79 prelingually deaf cochlear implant users in Norway and to investigate to what degree the variation in speech recognition, speech-recognition growth rate, and speech production could be explained by the characteristics of the child, the cochlear implant, the family, and the educational setting. Data gathered longitudinally were analysed using descriptive statistics, multiple regression, and growth-curve analysis. The results show that more than 50% of the variation could be explained by these characteristics. Daily user-time, non-verbal intelligence, mode of communication, length of CI experience, and educational placement had the highest effect on the outcome. The results also indicate that children educated with a bilingual approach have better speech perception and a faster speech-perception growth rate with an increased focus on spoken language.
The influence of speech rate and accent on access and use of semantic information.
Sajin, Stanislav M; Connine, Cynthia M
2017-04-01
Circumstances in which the speech input is presented in sub-optimal conditions generally lead to processing costs affecting spoken word recognition. The current study indicates that some processing demands imposed by listening to difficult speech can be mitigated by feedback from semantic knowledge. A set of lexical decision experiments examined how foreign accented speech and word duration impact access to semantic knowledge in spoken word recognition. Results indicate that when listeners process accented speech, the reliance on semantic information increases. Speech rate was not observed to influence semantic access, except in the setting in which unusually slow accented speech was presented. These findings support interactive activation models of spoken word recognition in which attention is modulated based on speech demands.
Comparison of formant detection methods used in speech processing applications
NASA Astrophysics Data System (ADS)
Belean, Bogdan
2013-11-01
The paper describes time-frequency representations of the speech signal together with the significance of formants in speech processing applications. Speech formants can be used in emotion recognition, speaker sex discrimination, or the diagnosis of different neurological diseases. Taking into account the various applications of formant detection in speech signals, two methods for detecting formants are presented. First, the poles resulting from a complex analysis of LPC coefficients are used for formant detection. The second approach uses the Kalman filter for formant prediction along the speech signal. Results are presented for both approaches on real-life speech spectrograms. A comparison of the features of the proposed methods is also performed, in order to establish which method is more suitable for different speech processing applications.
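A minimal sketch of the first method (formants from LPC pole angles) is given below, under assumed settings (8 kHz sampling, LPC order 8, a synthetic three-formant vowel) rather than the paper's: the LPC polynomial is rooted and each complex pole pair above the real axis is converted to a frequency. The Kalman-filter tracker, the paper's second method, would then smooth such raw per-frame estimates over time.

```python
import numpy as np
from scipy.signal import lfilter
import librosa  # pip install librosa; used for librosa.lpc

sr = 8000
true_formants = [(700, 80), (1220, 90), (2600, 120)]  # (freq Hz, bandwidth Hz)

# Synthesize a vowel-like signal: a 100 Hz glottal impulse train passed
# through second-order resonators placed at the "true" formants.
x = np.zeros(sr)
x[::sr // 100] = 1.0
for f, bw in true_formants:
    r = np.exp(-np.pi * bw / sr)
    theta = 2 * np.pi * f / sr
    x = lfilter([1.0], [1, -2 * r * np.cos(theta), r ** 2], x)

# LPC analysis: candidate formants are the angles of complex pole pairs.
# Real trackers additionally prune candidates by pole bandwidth; with
# order 8 one spurious low/high pole pair may appear alongside the three
# true formants.
a_lpc = librosa.lpc(x, order=8)
roots = np.roots(a_lpc)
roots = roots[np.imag(roots) > 0]             # keep one of each conjugate pair
freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))
print("estimated formant candidates (Hz):", np.round(freqs))
```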
ERIC Educational Resources Information Center
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…
Influence of Sound Immersion and Communicative Interaction on the Lombard Effect
ERIC Educational Resources Information Center
Garnier, Maeva; Henrich, Nathalie; Dubois, Daniele
2010-01-01
Purpose: To examine the influence of sound immersion techniques and speech production tasks on speech adaptation in noise. Method: In Experiment 1, we compared the modification of speakers' perception and speech production in noise when noise is played into headphones (with and without additional self-monitoring feedback) or over loudspeakers. We…
Warming up Improves Speech Production in Patients with Adult Onset Myotonic Dystrophy
ERIC Educational Resources Information Center
de Swart, B.J.M.; van Engelen, B.G.M.; Maassen, B.A.M.
2007-01-01
This investigation was conducted to study whether warming up decreases myotonia (muscle stiffness) during speech production or causes adverse effects due to fatigue or exhaustion caused by intensive speech activity in patients with adult onset myotonic dystrophy. Thirty patients with adult onset myotonic dystrophy (MD) and ten healthy controls…
Articulatory Control in Childhood Apraxia of Speech in a Novel Word-Learning Task
ERIC Educational Resources Information Center
Case, Julie; Grigos, Maria I.
2016-01-01
Purpose: Articulatory control and speech production accuracy were examined in children with childhood apraxia of speech (CAS) and typically developing (TD) controls within a novel word-learning task to better understand the influence of planning and programming deficits in the production of unfamiliar words. Method: Participants included 16…
ERIC Educational Resources Information Center
Howard, Sara
2004-01-01
A combination of perceptual and electropalatographic (EPG) analysis is used to investigate speech production in three adolescent speakers with a history of cleft palate. All the subjects still sound markedly atypical. Their speech output is analysed in three conditions: diadochokinetic tasks; single word production; connected speech. Comparison of…
Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.
Borrie, Stephanie A
2015-03-01
This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.
Rhythmic Priming Enhances the Phonological Processing of Speech
ERIC Educational Resources Information Center
Cason, Nia; Schon, Daniele
2012-01-01
While natural speech does not possess the same degree of temporal regularity found in music, there is recent evidence to suggest that temporal regularity enhances speech processing. The aim of this experiment was to examine whether speech processing would be enhanced by the prior presentation of a rhythmical prime. We recorded electrophysiological…
ERIC Educational Resources Information Center
Davidow, Jason H.
2014-01-01
Background: Metronome-paced speech results in the elimination, or substantial reduction, of stuttering moments. The cause of fluency during this fluency-inducing condition is unknown. Several investigations have reported changes in speech pattern characteristics from a control condition to a metronome-paced speech condition, but failure to control…
Visual and Auditory Input in Second-Language Speech Processing
ERIC Educational Resources Information Center
Hardison, Debra M.
2010-01-01
The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…
Civier, Oren; Tasko, Stephen M.; Guenther, Frank H.
2010-01-01
This paper investigates the hypothesis that stuttering may result in part from impaired readout of feedforward control of speech, which forces persons who stutter (PWS) to produce speech with a motor strategy that is weighted too much toward auditory feedback control. Over-reliance on feedback control leads to production errors which, if they grow large enough, can cause the motor system to “reset” and repeat the current syllable. This hypothesis is investigated using computer simulations of a “neurally impaired” version of the DIVA model, a neural network model of speech acquisition and production. The model’s outputs are compared to published acoustic data from PWS’ fluent speech, and to combined acoustic and articulatory movement data collected from the dysfluent speech of one PWS. The simulations mimic the errors observed in the PWS subject’s speech, as well as the repairs of these errors. Additional simulations were able to account for enhancements of fluency gained by slowed/prolonged speech and masking noise. Together these results support the hypothesis that many dysfluencies in stuttering are due to a bias away from feedforward control and toward feedback control. PMID:20831971
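The feedforward-versus-feedback trade-off can be caricatured in a few lines, with no claim to reproduce the DIVA model itself: a toy articulator blends a feedforward correction (based on the current state) with a feedback correction (based on a delayed sensory copy), and "resets" when its error grows too large. All parameter values are arbitrary illustrations. With a feedback-dominant weighting and a sensory delay, the state overshoots on stale corrections and repeatedly resets, the model's analogue of sound/syllable repetition.

```python
import numpy as np

def produce_syllable(w_ff, w_fb, delay=5, steps=200, target=1.0,
                     reset_thresh=0.6, noise=0.01, seed=0):
    """Toy articulator: each step blends a feedforward correction
    (instantaneous) with a feedback correction (delayed by `delay`
    samples). If the produced error exceeds `reset_thresh` after the
    articulator has risen past half the target, the motor system
    'resets' and restarts the syllable -- a repetition."""
    rng = np.random.default_rng(seed)
    y = np.zeros(steps)
    resets = 0
    for t in range(1, steps):
        fb_state = y[max(t - 1 - delay, 0)]           # delayed sensory copy
        u = w_ff * (target - y[t - 1]) + w_fb * (target - fb_state)
        y[t] = y[t - 1] + u + rng.normal(0, noise)
        if abs(target - y[t]) > reset_thresh and y[t - 1] > 0.5 * target:
            y[t] = 0.0                                 # restart the syllable
            resets += 1
    return resets

# Feedforward-dominant (fluent-like) vs feedback-dominant (stutter-like).
print("fluent-like resets: ", produce_syllable(w_ff=0.30, w_fb=0.05))
print("stutter-like resets:", produce_syllable(w_ff=0.05, w_fb=0.30))
```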
NASA Astrophysics Data System (ADS)
Feenaughty, Lynda
Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors shape listener impressions, for three connected speech tasks presumed to differ in cognitive-linguistic demand, in four carefully defined speaker groups: (1) MS with cognitive deficits (MSCI), (2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS), (3) MS without dysarthria or cognitive deficits (MS), and (4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored.
Methods: 48 speakers, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers, participated. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated deficits in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained: subordination index, inter-sentence cohesion adequacy, and lexical diversity. Ten listeners judged each speech sample on the perceptual construct of Speech Severity using a visual analog scale. Additional measures obtained to describe participants included the Sentence Intelligibility Test (SIT), the 10-item Communication Participation Item Bank (CPIB), and standard biopsychosocial measures of depression (Beck Depression Inventory-Fast Screen; BDI-FS), fatigue (Fatigue Severity Scale; FSS), and overall disease severity (Expanded Disability Status Scale; EDSS). Healthy controls completed all measures, with the exception of the CPIB and EDSS. All data were analyzed using standard descriptive and parametric statistics. For the MSCI group, the relationships between neuropsychological test scores and speech-language variables were explored for each speech task using Pearson correlations. The relationship between neuropsychological test scores and Speech Severity was also explored.
Results and Discussion: Topic familiarity for descriptive discourse did not strongly influence speech production or perceptual variables; however, results indicated predicted task-related differences for some spoken language measures. With the exception of the MSCI group, all speaker groups produced the same or slower global speech timing (i.e., speech and articulatory rates), more silent and filled pauses, more grammatically placed pauses, and longer silent pause durations in spontaneous discourse compared to reading aloud. Results revealed no appreciable task differences for linguistic complexity measures. Results indicated group differences for speech rate.
The MSCI group produced significantly faster speech rates compared to the MSDYS group. Both the MSDYS and the MSCI groups were judged to have significantly poorer perceived Speech Severity compared to typically aging adults. The Task x Group interaction was only significant for the number of silent pauses. The MSDYS group produced fewer silent pauses in spontaneous speech and more silent pauses in the reading task compared to other groups. Finally, correlation analysis revealed moderate relationships between neuropsychological test scores and speech hesitation measures within the MSCI group. Slower information processing and poorer memory were significantly correlated with more silent pauses, and poorer executive function was associated with fewer filled pauses in the Unfamiliar discourse task. Results have both clinical and theoretical implications. Overall, clinicians should exercise caution when interpreting global measures of speech timing and perceptual measures in the absence of information about cognitive ability. Results also have implications for a comprehensive model of spoken language incorporating cognitive, linguistic, and motor variables.
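As an illustration of the hesitation measures used above, a minimal silent-pause counter is sketched below. The frame length, energy floor, and 250 ms minimum-pause criterion are common conventions in hesitation research, assumed here rather than taken from this dissertation.

```python
import numpy as np

def silent_pauses(x, sr, frame_ms=20, db_floor=-35, min_pause_ms=250):
    """Return durations (s) of silent pauses: frames whose RMS energy
    falls `db_floor` dB below the utterance peak, merged into runs of
    at least `min_pause_ms`."""
    hop = int(sr * frame_ms / 1000)
    frames = x[:len(x) // hop * hop].reshape(-1, hop)
    rms_db = 20 * np.log10(np.sqrt((frames ** 2).mean(axis=1)) + 1e-12)
    silent = rms_db < rms_db.max() + db_floor
    pauses, run = [], 0
    for is_sil in np.append(silent, False):      # sentinel flushes the last run
        if is_sil:
            run += 1
        else:
            if run * frame_ms >= min_pause_ms:
                pauses.append(run * frame_ms / 1000.0)
            run = 0
    return pauses

# Demo: 1 s of 'speech', a 400 ms gap, then 1 s of 'speech'.
sr = 16000
rng = np.random.default_rng(0)
burst = rng.normal(0, 0.3, sr)
x = np.concatenate([burst, np.zeros(int(0.4 * sr)), burst])
print(silent_pauses(x, sr))                      # -> one pause of ~0.4 s
```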
Peng, Shu-Chen; Tomblin, J Bruce; Turner, Christopher W
2008-06-01
Current cochlear implant (CI) devices are limited in providing voice pitch information that is critical for listeners' recognition of prosodic contrasts of speech (e.g., intonation and lexical tones). As a result, mastery of the production and perception of such speech contrasts can be very challenging for prelingually deafened individuals who received a CI in their childhood (i.e., pediatric CI recipients). The purpose of this study was to investigate (a) pediatric CI recipients' mastery of the production and perception of speech intonation contrasts, in comparison with their age-matched peers with normal hearing (NH), and (b) the relationships between intonation production and perception in CI and NH individuals. Twenty-six pediatric CI recipients aged from 7.44 to 20.74 yrs and 17 age-matched individuals with NH participated. All CI users were prelingually deafened, and each of them received a CI between 1.48 and 6.34 yrs of age. Each participant performed an intonation production task and an intonation perception task. In the production task, 10 questions and 10 statements that were syntactically matched (e.g., "The girl is on the playground." versus "The girl is on the playground?") were elicited from each participant using interactive discourse involving pictures. These utterances were judged by a panel of eight adult listeners with NH in terms of utterance type accuracy (question versus statement) and contour appropriateness (on a five-point scale). In the perception task, each participant identified the speech intonation contrasts of natural utterances in a two-alternative forced-choice task. The results from the production task indicated that CI participants' scores for both utterance type accuracy and contour appropriateness were significantly lower than the scores of NH participants (both p < 0.001). The results from the perception task indicated that CI participants' identification accuracy was significantly lower than that of their NH peers (CI, 70.13% versus NH, 97.11%, p < 0.001). The Pearson correlation coefficients (r) between CI participants' performance levels in the production and perception tasks were approximately 0.65 (p = 0.001). As a group, pediatric CI recipients do not show mastery of speech intonation in their production or perception to the same extent as their NH peers. Pediatric CI recipients' performance levels in the production and perception of speech intonation contrasts are moderately correlated. Intersubject variability exists in pediatric CI recipients' mastery levels in the production and perception of speech intonation contrasts. These findings suggest the importance of addressing both aspects (production and perception) of speech intonation in the aural rehabilitation and speech intervention programs for prelingually deafened children and young adults who use a CI.
Stoppelman, Nadav; Harpaz, Tamar; Ben-Shachar, Michal
2013-05-01
Speech processing engages multiple cortical regions in the temporal, parietal, and frontal lobes. Isolating speech-sensitive cortex in individual participants is of major clinical and scientific importance. This task is complicated by the fact that responses to sensory and linguistic aspects of speech are tightly packed within the posterior superior temporal cortex. In functional magnetic resonance imaging (fMRI), various baseline conditions are typically used in order to isolate speech-specific from basic auditory responses. Using a short, continuous sampling paradigm, we show that reversed ("backward") speech, a commonly used auditory baseline for speech processing, removes much of the speech responses in frontal and temporal language regions of adult individuals. On the other hand, signal correlated noise (SCN) serves as an effective baseline for removing primary auditory responses while maintaining strong signals in the same language regions. We show that the response to reversed speech in the left inferior frontal gyrus decays significantly faster than the response to speech, suggesting that this response reflects bottom-up activation of speech analysis followed by top-down attenuation once the signal is classified as nonspeech. The results overall favor SCN as an auditory baseline for speech processing.
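For readers unfamiliar with these baselines: reversed speech is simply a time-reversed waveform, while SCN is conventionally generated by randomizing sample signs (Schroeder-type SCN), which preserves the temporal envelope and long-term spectrum while destroying intelligibility. The sketch below assumes that conventional recipe; the abstract does not detail the exact SCN synthesis used in the study.

```python
import numpy as np

def signal_correlated_noise(x, seed=0):
    """Schroeder-style SCN: flip the sign of each sample at random.
    The result preserves the waveform's amplitude envelope and
    long-term average spectrum but is unintelligible, which is what
    makes it a useful auditory baseline."""
    rng = np.random.default_rng(seed)
    return x * rng.choice([-1.0, 1.0], size=x.shape)

def reversed_speech(x):
    """The 'backward speech' baseline: simply time-reverse."""
    return x[::-1]

# Example with a synthetic stand-in for a speech waveform:
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
x = np.sin(2 * np.pi * 150 * t) * np.exp(-3 * t)   # decaying 'voiced' burst
scn = signal_correlated_noise(x)
print(np.allclose(np.abs(scn), np.abs(x)))          # envelope preserved: True
```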
Han, Zaizhu; Ma, Yujun; Gong, Gaolang; Huang, Ruiwang; Song, Luping; Bi, Yanchao
2016-01-01
In speech production, an important step before motor programming is the retrieval and encoding of the phonological elements of target words. It has been proposed that phonological encoding is supported by multiple regions in the left frontal, temporal and parietal regions and their underlying white matter, especially the left arcuate fasciculus (AF) or superior longitudinal fasciculus (SLF). It is unclear, however, whether the effects of AF/SLF are indeed related to phonological encoding for output and whether there are other white matter tracts that also contribute to this process. We comprehensively investigated the anatomical connectivity supporting phonological encoding in production by studying the relationship between the integrity of all major white matter tracts across the entire brain and phonological encoding deficits in a group of 69 patients with brain damage. The integrity of each white matter tract was measured both by the percentage of damaged voxels (structural imaging) and the mean fractional anisotropy value (diffusion tensor imaging). The phonological encoding deficits were assessed by various measures in two oral production tasks that involve phonological encoding: the percentage of nonword (phonological) errors in oral picture naming and the accuracy of word reading aloud with word comprehension ability regressed out. We found that the integrity of the left SLF in both the structural and diffusion tensor imaging measures consistently predicted the severity of phonological encoding impairment in the two phonological production tasks. Such effects of the left SLF on phonological production remained significant when a range of potential confounding factors were considered through partial correlation, including total lesion volume, demographic factors, lesions on phonological-relevant grey matter regions, or effects originating from the phonological perception or semantic processes. Our results therefore conclusively demonstrate the central role of the left SLF in phonological encoding in speech production.
Derrick, Donald; Stavness, Ian; Gick, Bryan
2015-03-01
The assumption that units of speech production bear a one-to-one relationship to speech motor actions pervades otherwise widely varying theories of speech motor behavior. This speech production and simulation study demonstrates that commonly occurring flap sequences may violate this assumption. In the word "Saturday," a sequence of three sounds may be produced using a single, cyclic motor action. Under this view, the initial upward tongue tip motion, starting with the first vowel and moving to contact the hard palate on the way to a retroflex position, is under active muscular control, while the downward movement of the tongue tip, including the second contact with the hard palate, results from gravity and elasticity during tongue muscle relaxation. This sequence is reproduced using a three-dimensional computer simulation of human vocal tract biomechanics and differs greatly from other observed sequences for the same word, which employ multiple targeted speech motor actions. This outcome suggests that a goal of a speaker is to produce an entire sequence in a biomechanically efficient way at the expense of maintaining parity within the individual parts of the sequence.
Intensive treatment of speech disorders in robin sequence: a case report.
Pinto, Maria Daniela Borro; Pegoraro-Krook, Maria Inês; Andrade, Laura Katarine Félix de; Correa, Ana Paula Carvalho; Rosa-Lugo, Linda Iris; Dutka, Jeniffer de Cássia Rillo
2017-10-23
To describe the speech of a patient with Pierre Robin Sequence (PRS) and severe speech disorders before and after participating in an Intensive Speech Therapy Program (ISTP). The ISTP consisted of two daily sessions of therapy over a 36-week period, resulting in a total of 360 therapy sessions. The sessions included the phases of establishment, generalization, and maintenance. A combination of strategies, such as modified contrast therapy and speech sound perception training, were used to elicit adequate place of articulation. The ISTP addressed correction of place of production of oral consonants and maximization of movement of the pharyngeal walls with a speech bulb reduction program. Therapy targets were addressed at the phonetic level with a gradual increase in the complexity of the productions hierarchically (e.g., syllables, words, phrases, conversation) while simultaneously addressing the velopharyngeal hypodynamism with speech bulb reductions. Re-evaluation after the ISTP revealed normal speech resonance and articulation with the speech bulb. Nasoendoscopic assessment indicated consistent velopharyngeal closure for all oral sounds with the speech bulb in place. Intensive speech therapy, combined with the use of the speech bulb, yielded positive outcomes in the rehabilitation of a clinical case with severe speech disorders associated with velopharyngeal dysfunction in Pierre Robin Sequence.
Nonlinear Frequency Compression in Hearing Aids: Impact on Speech and Language Development
Bentler, Ruth; Walker, Elizabeth; McCreery, Ryan; Arenas, Richard M.; Roush, Patricia
2015-01-01
Objectives The research questions of this study were: (1) Are children using nonlinear frequency compression (NLFC) in their hearing aids getting better access to the speech signal than children using conventional processing schemes? The authors hypothesized that children whose hearing aids provided wider input bandwidth would have more access to the speech signal, as measured by an adaptation of the Speech Intelligibility Index, and (2) are speech and language skills different for children who have been fit with the two different technologies; if so, in what areas? The authors hypothesized that if the children were getting increased access to the speech signal as a result of their NLFC hearing aids (question 1), it would be possible to see improved performance in areas of speech production, morphosyntax, and speech perception compared with the group with conventional processing. Design Participants included 66 children with hearing loss recruited as part of a larger multisite National Institutes of Health–funded study, Outcomes for Children with Hearing Loss, designed to explore the developmental outcomes of children with mild to severe hearing loss. For the larger study, data on communication, academic and psychosocial skills were gathered in an accelerated longitudinal design, with entry into the study between 6 months and 7 years of age. Subjects in this report consisted of 3-, 4-, and 5-year-old children recruited at the North Carolina test site. All had at least 6 months of current hearing aid usage with their NLFC or conventional amplification. Demographic characteristics were compared at the three age levels as well as audibility and speech/language outcomes; speech-perception scores were compared for the 5-year-old groups. Results Results indicate that the audibility provided did not differ between the technology options. As a result, there was no difference between groups on speech or language outcome measures at 4 or 5 years of age, and no impact on speech perception (measured at 5 years of age). The difference in Comprehensive Assessment of Spoken Language and mean length of utterance scores for the 3-year-old group favoring the group with conventional amplification may be a consequence of confounding factors such as increased incidence of prematurity in the group using NLFC. Conclusions Children fit with NLFC had similar audibility, as measured by a modified Speech Intelligibility Index, compared with a matched group of children using conventional technology. In turn, there were no differences in their speech and language abilities. PMID:24892229
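The "adaptation of the Speech Intelligibility Index" used as the audibility measure follows the general SII recipe: an importance-weighted sum of band audibilities, with each band's audibility derived from its SNR clipped to ±15 dB. The band centers, importance weights, and SNRs below are illustrative placeholders, not the ANSI S3.5 values or this study's adaptation.

```python
import numpy as np

# Illustrative placeholder values only.
band_centers = np.array([250, 500, 1000, 2000, 4000, 8000])   # Hz
importance   = np.array([0.10, 0.15, 0.25, 0.25, 0.15, 0.10]) # sums to 1
snr_db       = np.array([18, 15, 9, 3, -6, -12])              # per-band SNR

def simple_sii(importance, snr_db):
    """Importance-weighted audibility: each band contributes its
    importance scaled by SNR clipped to [-15, +15] dB and mapped to
    [0, 1], as in SII-style indices."""
    audibility = (np.clip(snr_db, -15, 15) + 15) / 30
    return float(np.sum(importance * audibility))

print(f"SII-like audibility index: {simple_sii(importance, snr_db):.2f}")
```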
Neural correlates of phonetic convergence and speech imitation
Garnier, Maëva; Lamalle, Laurent; Sato, Marc
2013-01-01
Speakers unconsciously tend to mimic their interlocutor's speech during communicative interaction. This study aims at examining the neural correlates of phonetic convergence and deliberate imitation, in order to explore whether imitation of phonetic features, deliberate, or unconscious, might reflect a sensory-motor recalibration process. Sixteen participants listened to vowels with pitch varying around the average pitch of their own voice, and then produced the identified vowels, while their speech was recorded and their brain activity was imaged using fMRI. Three degrees and types of imitation were compared (unconscious, deliberate, and inhibited) using a go-nogo paradigm, which enabled the comparison of brain activations during the whole imitation process, its active perception step, and its production. Speakers followed the pitch of voices they were exposed to, even unconsciously, without being instructed to do so. After being informed about this phenomenon, 14 participants were able to inhibit it, at least partially. The results of whole brain and ROI analyses support the fact that both deliberate and unconscious imitations are based on similar neural mechanisms and networks, involving regions of the dorsal stream, during both perception and production steps of the imitation process. While no significant difference in brain activation was found between unconscious and deliberate imitations, the degree of imitation, however, appears to be determined by processes occurring during the perception step. Four regions of the dorsal stream: bilateral auditory cortex, bilateral supramarginal gyrus (SMG), and left Wernicke's area, indeed showed an activity that correlated significantly with the degree of imitation during the perception step. PMID:24062704
The sensorimotor and social sides of the architecture of speech.
Pezzulo, Giovanni; Barca, Laura; D'Ausilio, Alessandro
2014-12-01
Speech is a complex skill to master. In addition to sophisticated phono-articulatory abilities, speech acquisition requires neuronal systems configured for vocal learning, with adaptable sensorimotor maps that couple heard speech sounds with motor programs for speech production; imitation and self-imitation mechanisms that can train the sensorimotor maps to reproduce heard speech sounds; and a "pedagogical" learning environment that supports tutor learning.
Integrating mechanisms of visual guidance in naturalistic language production.
Coco, Moreno I; Keller, Frank
2015-05-01
Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice span of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
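A minimal version of the "entropy of attentional landscapes" measure can be sketched as the Shannon entropy of a fixation histogram over a spatial grid; the study's actual landscape computation may be more elaborate, and the grid size and data here are assumed for illustration.

```python
import numpy as np

def attention_entropy(fix_x, fix_y, grid=(8, 8)):
    """Shannon entropy of the fixation distribution over a spatial
    grid: higher entropy = attention spread over more of the scene."""
    h, _, _ = np.histogram2d(fix_x, fix_y, bins=grid, range=[[0, 1], [0, 1]])
    p = h.ravel() / h.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
focused = rng.normal(0.5, 0.05, (2, 200)).clip(0, 1)   # fixations on one object
diffuse = rng.uniform(0, 1, (2, 200))                  # fixations all over
print("focused scan:", round(attention_entropy(*focused), 2))
print("diffuse scan:", round(attention_entropy(*diffuse), 2))
```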
Callan, Daniel E.; Jones, Jeffery A.; Callan, Akiko
2014-01-01
Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action (“Mirror System” properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual only (only saw the speaker's articulating face), and audio only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios in order to determine the regions of the PMC involved with multisensory and modality-specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli to control for task difficulty and differences in intelligibility. The results of the fMRI analysis for visual only and audio-visual conditions showed overlapping activity in inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal with articulatory speech gestures. PMID:24860526
Cortical oscillations and entrainment in speech processing during working memory load.
Hjortkjaer, Jens; Märcher-Rørsted, Jonatan; Fuglsang, Søren A; Dau, Torsten
2018-02-02
Neuronal oscillations are thought to play an important role in working memory (WM) and speech processing. Listening to speech in real-life situations is often cognitively demanding but it is unknown whether WM load influences how auditory cortical activity synchronizes to speech features. Here, we developed an auditory n-back paradigm to investigate cortical entrainment to speech envelope fluctuations under different degrees of WM load. We measured the electroencephalogram, pupil dilations and behavioural performance from 22 subjects listening to continuous speech with an embedded n-back task. The speech stimuli consisted of long spoken number sequences created to match natural speech in terms of sentence intonation, syllabic rate and phonetic content. To burden different WM functions during speech processing, listeners performed an n-back task on the speech sequences in different levels of background noise. Increasing WM load at higher n-back levels was associated with a decrease in posterior alpha power as well as increased pupil dilations. Frontal theta power increased at the start of the trial and increased additionally with higher n-back level. The observed alpha-theta power changes are consistent with visual n-back paradigms suggesting general oscillatory correlates of WM processing load. Speech entrainment was measured as a linear mapping between the envelope of the speech signal and low-frequency cortical activity (< 13 Hz). We found that increases in both types of WM load (background noise and n-back level) decreased cortical speech envelope entrainment. Although entrainment persisted under high load, our results suggest a top-down influence of WM processing on cortical speech entrainment. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
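The abstract describes entrainment as "a linear mapping between the envelope of the speech signal and low-frequency cortical activity". One standard way to realize such a mapping is a ridge-regression forward model (TRF-style) over envelope lags, scoring entrainment as the correlation between predicted and recorded activity. The sketch below uses synthetic data and assumed lag/regularization settings, not the authors' pipeline; added noise stands in for the load-related drop in entrainment.

```python
import numpy as np
from numpy.linalg import solve

def lagged(env, max_lag):
    """Design matrix of the envelope at lags 0..max_lag-1 samples."""
    X = np.zeros((len(env), max_lag))
    for k in range(max_lag):
        X[k:, k] = env[:len(env) - k]
    return X

def envelope_trf(env, eeg, max_lag=32, lam=1e2):
    """Ridge-regression forward model (TRF-style): weights mapping the
    speech envelope to low-frequency EEG; prediction accuracy serves
    as the entrainment measure."""
    X = lagged(env, max_lag)
    w = solve(X.T @ X + lam * np.eye(max_lag), X.T @ eeg)
    return np.corrcoef(X @ w, eeg)[0, 1]

rng = np.random.default_rng(0)
fs = 64                                    # envelope/EEG sample rate (Hz)
env = np.abs(rng.normal(size=60 * fs))     # stand-in speech envelope
kernel = np.exp(-np.arange(16) / 4.0)      # a made-up cortical response
eeg_clean = np.convolve(env, kernel)[:len(env)]
for noise_scale in (1.0, 4.0):             # 'higher WM load' as extra noise
    eeg = eeg_clean + rng.normal(0, noise_scale * eeg_clean.std(), len(env))
    print(f"noise x{noise_scale}: entrainment r = {envelope_trf(env, eeg):.2f}")
```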
The influence of (central) auditory processing disorder in speech sound disorders.
Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein
2016-01-01
Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and the targeting of speech therapy in children with speech sound disorders. The aim was to study phonological measures and (central) auditory processing in children with speech sound disorder. This was a clinical and experimental study of 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to the presence or absence of (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. Comparison of the tests between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective in indicating the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
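The abstract reports a diagnostic cutoff (0.54) for the process density index. One common way such cutoffs are derived is from an ROC curve, for example by maximizing Youden's J; the sketch below shows that generic procedure on hypothetical PDI values, since the study's derivation method is not stated in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical process-density-index (PDI) values for illustration only.
rng = np.random.default_rng(0)
pdi_no_apd = rng.normal(0.40, 0.10, 11)       # SSD without (C)APD
pdi_apd = rng.normal(0.65, 0.10, 10)          # SSD with (C)APD
scores = np.concatenate([pdi_no_apd, pdi_apd])
labels = np.concatenate([np.zeros(11), np.ones(10)])

# Choose the cutoff maximizing Youden's J = sensitivity + specificity - 1.
fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)
print(f"cutoff = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```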
ERIC Educational Resources Information Center
Sperbeck, Mieko
2010-01-01
The primary aim of this dissertation was to investigate the relationship between speech perception and speech production difficulties among Japanese second language (L2) learners of English in their learning of complex syllable structures. Japanese L2 learners and American English controls were tested in a categorical ABX discrimination task of…
ERIC Educational Resources Information Center
Overby, Megan S.; Masterson, Julie J.; Preston, Jonathan L.
2015-01-01
Purpose: This archival investigation examined the relationship between preliteracy speech sound production skill (SSPS) and spelling in Grade 3 using a dataset in which children's receptive vocabulary was generally within normal limits, speech therapy was not provided until Grade 2, and phonological awareness instruction was discouraged at the…
Changes in Speech Production in an Early Deafened Adult with a Cochlear Implant
ERIC Educational Resources Information Center
Wong, Patrick C. M.
2007-01-01
Background and Aims: The current study is a first investigation reporting the speech production characteristics of an early deafened adult cochlear implant user after a course of speech-language treatment. Methods and Procedures: The participant is culturally deaf and received the cochlear implant when she was 43 years old. A 24-week ABCABC…
Yoo, Sejin; Chung, Jun-Young; Jeon, Hyeon-Ae; Lee, Kyoung-Min; Kim, Young-Bo; Cho, Zang-Hee
2012-07-01
Speech production is inextricably linked to speech perception, yet the two are usually investigated in isolation. In this study, we employed a verbal-repetition task to identify the neural substrates of speech processing with both perception and production engaged simultaneously, using functional MRI. Subjects verbally repeated auditory stimuli containing an ambiguous vowel sound that could be perceived as either a word or a pseudoword depending on the interpretation of the vowel. We found that verbal repetition commonly activated the audition-articulation interface bilaterally at the Sylvian fissures and superior temporal sulci. Contrasting word-versus-pseudoword trials revealed neural activity unique to word repetition in the left posterior middle temporal areas and activity unique to pseudoword repetition in the left inferior frontal gyrus. These findings imply that the tasks are carried out using different speech codes: an articulation-based code for pseudowords and an acoustic-phonetic code for words. The results also support the dual-stream model and imitative learning of vocabulary. Copyright © 2012 Elsevier Inc. All rights reserved.
Acoustic analysis of trill sounds.
Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri
2012-04-01
In this paper, the acoustic-phonetic characteristics of steady apical trills--trill sounds produced by the periodic vibration of the apex of the tongue--are studied. Signal processing methods, namely, zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect the effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are the glottal epochs, the strength of impulses at the glottal epochs, and the instantaneous fundamental frequency of glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of the vocal tract system during the production of trill sounds. Qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help in spotting trills in continuous speech, are discussed.
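Zero-frequency filtering, the excitation-source method named above, can be sketched compactly: difference the signal, pass it through a cascade of two zero-frequency resonators (equivalent to four cumulative sums), strip the resulting polynomial trend with repeated moving-average subtraction, and read glottal epochs off the zero crossings. The window length and the number of trend-removal passes below follow common practice and are assumptions, not the paper's exact settings.

```python
import numpy as np

def remove_trend(y, win):
    """Subtract a local mean (moving average) of length `win`."""
    return y - np.convolve(y, np.ones(win) / win, mode="same")

def zff_epochs(s, sr, pitch_hz=120):
    """Zero-frequency-filtering sketch: difference, integrate four
    times (two 0-Hz resonators), remove the trend with a window of
    about 1.5 pitch periods, and mark epochs at positive-going zero
    crossings of the filtered signal."""
    x = np.diff(s, prepend=s[0])            # suppress DC / slow drift
    y = x.astype(float)
    for _ in range(4):                      # cascade of two 0-Hz resonators
        y = np.cumsum(y)
    win = int(1.5 * sr / pitch_hz) | 1      # odd-length averaging window
    for _ in range(3):                      # repeated trend removal
        y = remove_trend(y, win)
    return np.flatnonzero((y[:-1] < 0) & (y[1:] >= 0))

# Synthetic voiced signal: 120 Hz impulse train through a decaying kernel.
sr = 8000
n = sr // 2
s = np.zeros(n)
s[::sr // 120] = 1.0
s = np.convolve(s, np.exp(-np.arange(100) / 20.0))[:n]
epochs = zff_epochs(s, sr)
print("estimated F0 (Hz):", round(sr / np.median(np.diff(epochs)), 1))
```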
Francisco, Danira Tavares; Wertzner, Haydée Fiszbein
2017-01-01
This study describes the criteria that are used in ultrasound to measure the differences between the tongue contours that produce [s] and [ʃ] sounds in the speech of adults, typically developing children (TDC), and children with speech sound disorder (SSD) with the phonological process of palatal fronting. Overlapping images of the tongue contours that resulted from 35 subjects producing the [s] and [ʃ] sounds were analysed to select 11 spokes on the radial grid that were spread over the tongue contour. The difference was calculated between the mean contour of the [s] and [ʃ] sounds for each spoke. A cluster analysis produced groups with some consistency in the pattern of articulation across subjects and differentiated adults and TDC to some extent and children with SSD with a high level of success. Children with SSD were less likely to show differentiation of the tongue contours between the articulation of [s] and [ʃ].
The Physiologic Development of Speech Motor Control: Lip and Jaw Coordination
Green, Jordan R.; Moore, Christopher A.; Higashikawa, Masahiko; Steeve, Roger W.
2010-01-01
This investigation was designed to describe the development of lip and jaw coordination during speech and to evaluate the potential influence of speech motor development on phonologic development. Productions of syllables containing bilabial consonants were observed from speakers in four age groups (i.e., 1-year-olds, 2-year-olds, 6-year-olds, and young adults). A video-based movement tracking system was used to transduce movement of the upper lip, lower lip, and jaw. The coordinative organization of these articulatory gestures was shown to change dramatically during the first several years of life and to continue to undergo refinement past age 6. The present results are consistent with three primary phases in the development of lip and jaw coordination for speech: integration, differentiation, and refinement. Each of these developmental processes entails the existence of distinct coordinative constraints on early articulatory movement. It is suggested that these constraints will have predictable consequences for the sequence of phonologic development. PMID:10668666
Morin, Alain; Hamper, Breanne
2012-01-01
Inner speech involvement in self-reflection was examined by reviewing 130 studies assessing brain activation during self-referential processing in key self-domains: agency, self-recognition, emotions, personality traits, autobiographical memory, and miscellaneous (e.g., prospection, judgments). The left inferior frontal gyrus (LIFG) has been shown to be reliably recruited during inner speech production. The percentage of studies reporting LIFG activity for each self-dimension was calculated. Fifty-five percent of all studies reviewed indicated LIFG (and presumably inner speech) activity during self-reflection tasks; on average, LIFG activation is observed 16% of the time during completion of non-self tasks (e.g., attention, perception). The highest LIFG activation rate was observed during retrieval of autobiographical information. The LIFG was significantly more recruited during conceptual tasks (e.g., prospection, traits) than during perceptual tasks (agency and self-recognition). This constitutes additional evidence supporting the idea of a participation of inner speech in self-related thinking. PMID:23049653
ERIC Educational Resources Information Center
Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.
2017-01-01
Purpose: Previous articles in this supplement described rationale for and development of the pause marker (PM), a diagnostic marker of childhood apraxia of speech (CAS), and studies supporting its validity and reliability. The present article assesses the theoretical coherence of the PM with speech processing deficits in CAS. Method: PM and other…
Speech entrainment enables patients with Broca’s aphasia to produce fluent speech
Hubbard, H. Isabel; Hudspeth, Sarah Grace; Holland, Audrey L.; Bonilha, Leonardo; Fromm, Davida; Rorden, Chris
2012-01-01
A distinguishing feature of Broca’s aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect ‘speech entrainment’ and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca’s aphasia. In Experiment 1, 13 patients with Broca’s aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca’s area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production. Behavioural and functional magnetic resonance imaging data were collected before and after the treatment phase. Patients were able to produce a greater variety of words with and without speech entrainment at 1 and 6 weeks after training. Treatment-related decrease in cortical activation associated with speech entrainment was found in areas of the left posterior-inferior parietal lobe. We conclude that speech entrainment allows patients with Broca’s aphasia to double their speech output compared with spontaneous speech. Neuroimaging results suggest that speech entrainment allows patients to produce fluent speech by providing an external gating mechanism that yokes a ventral language network that encodes conceptual aspects of speech. Preliminary results suggest that training with speech entrainment improves speech production in Broca’s aphasia providing a potential therapeutic method for a disorder that has been shown to be particularly resistant to treatment. PMID:23250889
Evolution: Language Use and the Evolution of Languages
NASA Astrophysics Data System (ADS)
Croft, William
Language change can be understood as an evolutionary process. Language change occurs at two different timescales, corresponding to the two steps of the evolutionary process. The first timescale is very short, namely, the production of an utterance: this is where linguistic structures are replicated and language variation is generated. The second timescale is (or can be) very long, namely, the propagation of linguistic variants in the speech community: this is where certain variants are selected over others. At both timescales, the evolutionary process is driven by social interaction and the role language plays in it. An understanding of social interaction at the micro-level—face-to-face interactions—and at the macro-level—the structure of speech communities—gives us the basis for understanding the generation and propagation of language structures, and understanding the nature of language itself.
Auditory-Motor Processing of Speech Sounds
Möttönen, Riikka; Dutton, Rebekah; Watkins, Kate E.
2013-01-01
The motor regions that control movements of the articulators activate during listening to speech and contribute to performance in demanding speech recognition and discrimination tasks. Whether the articulatory motor cortex modulates auditory processing of speech sounds is unknown. Here, we aimed to determine whether the articulatory motor cortex affects the auditory mechanisms underlying discrimination of speech sounds in the absence of demanding speech tasks. Using electroencephalography, we recorded responses to changes in sound sequences, while participants watched a silent video. We also disrupted the lip or the hand representation in left motor cortex using transcranial magnetic stimulation. Disruption of the lip representation suppressed responses to changes in speech sounds, but not piano tones. In contrast, disruption of the hand representation had no effect on responses to changes in speech sounds. These findings show that disruptions within, but not outside, the articulatory motor cortex impair automatic auditory discrimination of speech sounds. The findings provide evidence for the importance of auditory-motor processes in efficient neural analysis of speech sounds. PMID:22581846
Jenson, David; Bowers, Andrew L.; Harkrider, Ashley W.; Thornton, David; Cuellar, Megan; Saltuklaroglu, Tim
2014-01-01
Activity in anterior sensorimotor regions is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG μ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of μ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. Seventeen of 20 participants produced left μ-components and 15 produced right μ-components, localized to precentral gyri. Discrimination conditions were characterized by significant (pFDR < 0.05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that μ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. μ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while μ-alpha ERD may index sensory feedback during speech rehearsal and production. PMID:25071633
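The ICA-plus-ERSP analysis described in this abstract can be sketched compactly. Below is a minimal Python illustration (synthetic data; the window lengths, band edges, and baseline interval are assumptions, not the authors' exact pipeline) of computing baseline-normalized alpha- and beta-band power over time for one μ component, where negative dB values correspond to ERD and positive values to ERS:

```python
# Minimal ERSP sketch, assuming `epochs` is (n_trials, n_samples) for one
# mu component at sampling rate fs; band edges follow the abstract
# (alpha ~8-13 Hz, beta ~15-25 Hz).
import numpy as np
from scipy.signal import spectrogram

def band_ersp(epochs, fs, fmin, fmax, baseline=(0.0, 0.5)):
    """Baseline-normalized band power (dB) over time, averaged across trials."""
    f, t, S = spectrogram(epochs, fs=fs, nperseg=int(fs * 0.5),
                          noverlap=int(fs * 0.4), axis=-1)  # S: (trials, freqs, times)
    band = (f >= fmin) & (f <= fmax)
    power = S[:, band, :].mean(axis=1)            # mean power over band freqs
    base = (t >= baseline[0]) & (t < baseline[1])
    base_power = power[:, base].mean(axis=1, keepdims=True)
    ersp_db = 10 * np.log10(power / base_power)   # <0 dB = ERD, >0 dB = ERS
    return t, ersp_db.mean(axis=0)                # average across trials

# Example with fake data: alpha and beta ERSP time courses.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 2 * 256))       # 40 trials, 2 s at 256 Hz
t, alpha = band_ersp(epochs, 256, 8, 13)
t, beta = band_ersp(epochs, 256, 15, 25)
```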
Galilee, Alena; Stefanidou, Chrysi; McCleery, Joseph P
2017-01-01
Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year-old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age. PMID:28738063
FDA-Required Tobacco Product Inserts & Onserts–and the First Amendment.
Lindblom, Eric N; Berman, Micah L; Thrasher, James F
In 2012, a federal court of appeals struck down an FDA rule requiring graphic health warnings on cigarettes as violating First Amendment commercial speech protections. Tobacco product inserts and onserts can more readily avoid First Amendment constraints while delivering more extensive information to tobacco users, and can work effectively to support and encourage smoking cessation. This paper examines FDA’s authority to require effective inserts and onserts and shows how FDA could design and support them to avoid First Amendment problems. Through this process, the paper offers helpful insights regarding how key Tobacco Control Act provisions can and should be interpreted and applied to follow and promote the statute’s purposes and objectives. The paper’s rigorous analysis of existing First Amendment case law relating to compelled commercial speech also provides useful guidance for any government efforts either to compel product disclosures or to require government messaging in or on commercial products or their advertising, whether done for remedial, purely informational, or behavior modification purposes.
Jungblut, Monika; Huber, Walter; Mais, Christiane
2014-01-01
Difficulties with temporal coordination or sequencing of speech movements are frequently reported in aphasia patients with concomitant apraxia of speech (AOS). Our major objective was to investigate the effects of specific rhythmic-melodic voice training on brain activation in these patients. Three patients with severe chronic nonfluent aphasia and AOS were included in this study. Before and after therapy, patients underwent the same fMRI procedure as 30 healthy control subjects in our prestudy, which investigated the neural substrates of sung vowel changes in untrained rhythm sequences. A main finding was that post- minus pretreatment imaging data yielded significant perilesional activations in all patients, for example in the left superior temporal gyrus, whereas the reverse subtraction revealed either no significant activation or right-hemisphere activation. Likewise, pre- and posttreatment assessments of patients' vocal rhythm production, language, and speech motor performance yielded significant improvements for all patients. Our results suggest that changes in brain activation due to the applied training might indicate specific processes of reorganization, for example improved temporal sequencing of sublexical speech components. In this context, a training that focuses on rhythmic singing, with complexity levels that vary in their motor and cognitive demands, seems to help pave the way for speech. PMID:24977055
Rowbotham, Samantha; Wardy, April J; Lloyd, Donna M; Wearden, Alison; Holler, Judith
2014-01-01
Effective pain communication is essential if adequate treatment and support are to be provided. Pain communication is often multimodal, with sufferers utilising speech, nonverbal behaviours (such as facial expressions), and co-speech gestures (bodily movements, primarily of the hands and arms that accompany speech and can convey semantic information) to communicate their experience. Research suggests that the production of nonverbal pain behaviours is positively associated with pain intensity, but it is not known whether this is also the case for speech and co-speech gestures. The present study explored whether increased pain intensity is associated with greater speech and gesture production during face-to-face communication about acute, experimental pain. Participants (N = 26) were exposed to experimentally elicited pressure pain to the fingernail bed at high and low intensities and took part in video-recorded semi-structured interviews. Despite rating more intense pain as more difficult to communicate (t(25) = 2.21, p = .037), participants produced significantly longer verbal pain descriptions and more co-speech gestures in the high intensity pain condition (Words: t(25) = 3.57, p = .001; Gestures: t(25) = 3.66, p = .001). This suggests that spoken and gestural communication about pain is enhanced when pain is more intense. Thus, in addition to conveying detailed semantic information about pain, speech and co-speech gestures may provide a cue to pain intensity, with implications for the treatment and support received by pain sufferers. Future work should consider whether these findings are applicable within the context of clinical interactions about pain.
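The comparisons reported above are paired t-tests across the same 26 participants. A minimal sketch of that analysis shape, with hypothetical word counts standing in for the real data, might look like this:

```python
# Paired t-test sketch for high- vs low-intensity pain within the same 26
# participants; the counts below are invented for illustration only.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
low_words = rng.poisson(60, size=26)           # hypothetical word counts, low pain
high_words = low_words + rng.poisson(15, 26)   # more words under high pain
t_stat, p_val = ttest_rel(high_words, low_words)
print(f"t(25) = {t_stat:.2f}, p = {p_val:.3f}")
```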
NASA Astrophysics Data System (ADS)
Jelinek, H. J.
1986-01-01
This is the Final Report of Electronic Design Associates on its Phase I SBIR project. The purpose of this project is to develop a method for correcting helium speech, as experienced in diver-surface communication. The goal of the Phase I study was to design, prototype, and evaluate a real-time helium speech corrector system based upon digital signal processing techniques. The general approach was to develop hardware (an IBM PC board) to digitize helium speech and software (a simulation on a LAMBDA computer) to translate the speech. As planned in the study proposal, this initial prototype may now be used to assess expected performance from a self-contained real-time system which uses an identical algorithm. The Final Report details the work carried out to produce the prototype system. The four major project tasks were: (1) a signal processing scheme for converting helium speech to normal-sounding speech was generated; (2) the scheme was simulated on a general-purpose LAMBDA computer, with actual helium speech supplied to the simulation and the converted speech generated; (3) an IBM-PC based 14-bit data input/output board was designed and built; and (4) a bibliography of references on speech processing was generated.
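The report predates modern toolkits, but the core idea of helium speech correction, undoing the upward formant shift caused by helium's higher speed of sound, can be sketched with present-day signal processing primitives. The following Python fragment is a simplified illustration of that principle, not the report's actual algorithm; the warp factor `alpha` is an assumption. It compresses the short-time magnitude spectrum along the frequency axis:

```python
# Compress the short-time spectrum along frequency to move formants down.
# Assumes a mono signal `x` at rate fs; alpha < 1 lowers formants.
import numpy as np
from scipy.signal import stft, istft

def correct_helium(x, fs, alpha=0.6):
    f, t, X = stft(x, fs=fs, nperseg=512)
    mag, phase = np.abs(X), np.angle(X)
    n_bins = mag.shape[0]
    src = np.arange(n_bins) / alpha             # output bin k reads input bin k/alpha
    warped = np.empty_like(mag)
    for i in range(mag.shape[1]):               # warp each frame's magnitude
        warped[:, i] = np.interp(src, np.arange(n_bins), mag[:, i], right=0.0)
    _, y = istft(warped * np.exp(1j * phase), fs=fs, nperseg=512)  # reuse phase (crude)
    return y
```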
Speech disorders of Parkinsonism: a review.
Critchley, E M
1981-01-01
Study of the speech disorders of Parkinsonism provides a paradigm of the integration of phonation, articulation and language in the production of speech. The initial defect in the untreated patient is a failure to control respiration for the purpose of speech and there follows a forward progression of articulatory symptoms involving larynx, pharynx, tongue and finally lips. There is evidence that the integration of speech production is organised asymmetrically at thalamic level. Experimental or therapeutic lesions in the region of the inferior medial portion of ventro-lateral thalamus may influence the initiation, respiratory control, rate and prosody of speech. Higher language functions may also be involved in thalamic integration: different forms of anomia are reported with pulvinar and ventrolateral thalamic lesions and transient aphasia may follow stereotaxis. The results of treatment with levodopa indicate that neurotransmitter substances enhance the clarity, volume and persistence of phonation and the latency and smoothness of articulation. The improvement of speech performance is not necessarily in phase with locomotor changes. The dose-related dyskinetic effects of levodopa, which appear to have a physiological basis in observations previously made in post-encephalitic Parkinsonism, not only influence the prosody of speech with near-mutism, hesitancy and dysfluency but may affect word-finding ability and in instances of excitement (erethism) even involve the association of long-term memory with speech. In future, neurologists will need to examine more closely the role of neurotransmitters in speech production and formulation. PMID:7031185
Processing changes when listening to foreign-accented speech
Romero-Rivas, Carlos; Martin, Clara D.; Costa, Albert
2015-01-01
This study investigates the mechanisms responsible for fast changes in processing foreign-accented speech. Event Related brain Potentials (ERPs) were obtained while native speakers of Spanish listened to native and foreign-accented speakers of Spanish. We observed a less positive P200 component for foreign-accented speech relative to native speech comprehension. This suggests that the extraction of spectral information and other important acoustic features was hampered during foreign-accented speech comprehension. However, the amplitude of the N400 component for foreign-accented speech comprehension decreased across the experiment, suggesting the use of a higher level, lexical mechanism. Furthermore, during native speech comprehension, semantic violations in the critical words elicited an N400 effect followed by a late positivity. During foreign-accented speech comprehension, semantic violations only elicited an N400 effect. Overall, our results suggest that, despite a lack of improvement in phonetic discrimination, native listeners experience changes at lexical-semantic levels of processing after brief exposure to foreign-accented speech. Moreover, these results suggest that lexical access, semantic integration and linguistic re-analysis processes are permeable to external factors, such as the accent of the speaker. PMID:25859209
Evidence-Based Systematic Review: Effects of Nonspeech Oral Motor Exercises on Speech
ERIC Educational Resources Information Center
McCauley, Rebecca J.; Strand, Edythe; Lof, Gregory L.; Schooling, Tracy; Frymark, Tobi
2009-01-01
Purpose: The purpose of this systematic review was to examine the current evidence for the use of oral motor exercises (OMEs) on speech (i.e., speech physiology, speech production, and functional speech outcomes) as a means of supporting further research and clinicians' use of evidence-based practice. Method: The peer-reviewed literature from 1960…
Speech Characteristics of 8-Year-Old Children: Findings from a Prospective Population Study
ERIC Educational Resources Information Center
Wren, Yvonne; McLeod, Sharynne; White, Paul; Miller, Laura L.; Roulstone, Sue
2013-01-01
Speech disorder that continues into middle childhood is rarely studied compared with speech disorder in the early years. Speech production in single words, connected speech and nonword repetition was assessed for 7390 eight-year-old children within the Avon Longitudinal Study of Parents and Children (ALSPAC). The majority (n=6399) had typical…
Perceptual statistical learning over one week in child speech production.
Richtsmeier, Peter T; Goffman, Lisa
2017-07-01
What cognitive mechanisms account for the trajectory of speech sound development, in particular, gradually increasing accuracy during childhood? An intriguing potential contributor is statistical learning, a type of learning that has been studied frequently in infant perception but less often in child speech production. To assess the relevance of statistical learning to developing speech accuracy, we carried out a statistical learning experiment with four- and five-year-olds in which statistical learning was examined over one week. Children were familiarized with and tested on word-medial consonant sequences in novel words. There was only modest evidence for statistical learning, primarily in the first few productions of the first session. This initial learning effect nevertheless aligns with previous statistical learning research. Furthermore, the overall learning effect was similar to an estimate of weekly accuracy growth based on normative studies. The results implicate other important factors in speech sound development, particularly learning via production. Copyright © 2017 Elsevier Inc. All rights reserved.
The Hierarchical Cortical Organization of Human Speech Processing
de Heer, Wendy A.; Huth, Alexander G.; Griffiths, Thomas L.
2017-01-01
Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to natural speech. Both cerebral hemispheres were actively involved in speech processing in large and equal amounts. Also, the transformation from spectral features to semantic elements occurs early in the cortical speech-processing stream. Our experimental and analytical approaches are important alternatives and complements to standard approaches that use segmented speech and block designs, which report more laterality in speech processing and associated semantic processing to higher levels of cortex than reported here. PMID:28588065
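The variance partitioning logic described here is straightforward to sketch: the variance uniquely explained by a feature space is the predictive R² of the full joint model minus the R² of the model that omits that space. Below is a minimal Python illustration assuming ridge-regression encoding models and synthetic data; the paper's exact estimator and cross-validation scheme may differ:

```python
# Voxelwise variance partitioning sketch with ridge encoding models.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def fit_r2(X_train, y_train, X_test, y_test):
    model = Ridge(alpha=10.0).fit(X_train, y_train)
    return r2_score(y_test, model.predict(X_test))

def unique_variance(spaces, y_train, y_test, train, test):
    """spaces: dict name -> (time x features) design matrix for one voxel's response y."""
    names = list(spaces)
    full = np.hstack([spaces[n] for n in names])
    r2_full = fit_r2(full[train], y_train, full[test], y_test)
    out = {}
    for n in names:
        rest = np.hstack([spaces[m] for m in names if m != n])
        out[n] = r2_full - fit_r2(rest[train], y_train, rest[test], y_test)
    return out  # variance uniquely explained by each feature space

# Synthetic usage: three feature spaces, one voxel time course.
rng = np.random.default_rng(0)
spaces = {n: rng.standard_normal((300, 20))
          for n in ("spectral", "articulatory", "semantic")}
y = rng.standard_normal(300)
train, test = np.arange(200), np.arange(200, 300)
print(unique_variance(spaces, y[train], y[test], train, test))
```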
Barista: A Framework for Concurrent Speech Processing by USC-SAIL
Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G.; Narayanan, Shrikanth S.
2016-01-01
We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista allows demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, to be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0. PMID:27610047
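Barista itself is built on libcppa, a C++ actor library; the underlying pattern of simple actors communicating by message passing can nonetheless be sketched in a few lines of Python. The two-stage pipeline and its stub "work" functions below are illustrative only, not Barista's API:

```python
# Tiny actor-pipeline sketch: independent workers connected by message queues.
import threading, queue

SENTINEL = None  # shutdown marker propagated down the pipeline

def actor(inbox, outbox, work):
    """Consume messages, apply `work`, forward results downstream."""
    while True:
        msg = inbox.get()
        if msg is SENTINEL:
            if outbox is not None:
                outbox.put(SENTINEL)
            return
        result = work(msg)
        if outbox is not None:
            outbox.put(result)

q1, q2 = queue.Queue(), queue.Queue()
# Two-stage pipeline: "feature extraction" then "decoding" (both stubs).
threading.Thread(target=actor, args=(q1, q2, lambda x: x * 2)).start()
threading.Thread(target=actor, args=(q2, None, print)).start()
for chunk in range(3):
    q1.put(chunk)   # stream audio chunks into the pipeline
q1.put(SENTINEL)
```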
Cognitive processing load as a determinant of stuttering: summary of a research programme.
Bosshardt, Hans-Georg
2006-07-01
The present paper integrates the results of experimental studies in which cognitive differences between stuttering and nonstuttering adults were investigated. In a monitoring experiment it was found that persons who stutter encode semantic information more slowly than nonstuttering persons. In dual-task experiments the two groups were compared on overt word-repetition and sentence-production tasks. The results of the two word-repetition experiments indicate that the speech of stuttering persons is sensitive to interference from concurrent attention-demanding cognitive processing, particularly when phonological coding is involved. In two sentence-generation and -production experiments it was found that under dual-task conditions stuttering persons produced sentences containing a smaller number of content units, whereas persons who do not stutter did not show a significant single- vs. dual-task contrast. These results suggest that sentence generation and production required greater sustained attentional processing in stuttering than in nonstuttering persons and that persons who stutter reduce the amount of "conceptual work" in order to keep their stuttering rates low. Data from an fMRI study indicate that in persons who stutter the neural systems activated during sentence generation and production overlap to a greater extent than those of persons who do not stutter. It is suggested that in persons who stutter the neural subsystems involved in speech planning are "modularized" to a lesser extent than in persons who do not stutter.
Neural pathways for visual speech perception
Bernstein, Lynne E.; Liebenthal, Einat
2014-01-01
This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611
Toni, Ivan; Hagoort, Peter; Kelly, Spencer D.; Özyürek, Aslı
2015-01-01
Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts. PMID:24652857
Children's perception of their synthetically corrected speech production.
Strömbergsson, Sofia; Wengelin, Asa; House, David
2014-06-01
We explore children's perception of their own speech - in its online form, in its recorded form, and in synthetically modified forms. Children with phonological disorder (PD) and children with typical speech and language development (TD) performed tasks of evaluating accuracy of the different types of speech stimuli, either immediately after having produced the utterance or after a delay. In addition, they performed a task designed to assess their ability to detect synthetic modification. Both groups showed high performance in tasks involving evaluation of other children's speech, whereas in tasks of evaluating one's own speech, the children with PD were less accurate than their TD peers. The children with PD were less sensitive to misproductions in immediate conjunction with their production of an utterance, and more accurate after a delay. Within-category modification often passed undetected, indicating a satisfactory quality of the generated speech. Potential clinical benefits of using corrective re-synthesis are discussed.
Sensorimotor Oscillations Prior to Speech Onset Reflect Altered Motor Networks in Adults Who Stutter
Mersov, Anna-Maria; Jobst, Cecilia; Cheyne, Douglas O.; De Nil, Luc
2016-01-01
Adults who stutter (AWS) have demonstrated atypical coordination of motor and sensory regions during speech production. Yet little is known of the speech-motor network in AWS in the brief time window preceding audible speech onset. The purpose of the current study was to characterize neural oscillations in the speech-motor network during preparation for and execution of overt speech production in AWS using magnetoencephalography (MEG). Twelve AWS and 12 age-matched controls were presented with 220 words, each word embedded in a carrier phrase. Controls were presented with the same word list as their matched AWS participant. Neural oscillatory activity was localized using minimum-variance beamforming during two time periods of interest: speech preparation (prior to speech onset) and speech execution (following speech onset). Compared to controls, AWS showed stronger beta (15–25 Hz) suppression in the speech preparation stage, followed by stronger beta synchronization in the bilateral mouth motor cortex. AWS also recruited the right mouth motor cortex significantly earlier in the speech preparation stage compared to controls. Exaggerated motor preparation is discussed in the context of reduced coordination in the speech-motor network of AWS. It is further proposed that exaggerated beta synchronization may reflect a more strongly inhibited motor system that requires a stronger beta suppression to disengage prior to speech initiation. These novel findings highlight critical differences in the speech-motor network of AWS that occur prior to speech onset and emphasize the need to investigate further the speech-motor assembly in the stuttering population. PMID:27642279
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.
2014-01-01
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language. PMID:24904497
Incremental Phonological Encoding during Unscripted Sentence Production
Jaeger, T. Florian; Furth, Katrina; Hilliard, Caitlin
2012-01-01
We investigate phonological encoding during unscripted sentence production, focusing on the effect of phonological overlap on phonological encoding. Previous work on this question has almost exclusively employed isolated word production or highly scripted multi-word production. These studies have led to conflicting results: some studies found that phonological overlap between two words facilitates phonological encoding, while others found inhibitory effects. One worry with many of these paradigms is that they involve processes that are not typical to everyday language use, which calls into question to what extent their findings speak to the architectures and mechanisms underlying language production. We present a paradigm to investigate the consequences of phonological overlap between words in a sentence while leaving speakers much of the lexical and structural choices typical in everyday language use. Adult native speakers of English described events in short video clips. We annotated the presence of disfluencies and the speech rate at various points throughout the sentence, as well as the constituent order. We find that phonological overlap has an inhibitory effect on phonological encoding. Specifically, if adjacent content words share their phonological onset (e.g., hand the hammer), they are preceded by production difficulty, as reflected in fluency and speech rate. We also find that this production difficulty affects speakers’ constituent order preferences during grammatical encoding. We discuss our results and previous works to isolate the properties of other paradigms that resulted in facilitatory or inhibitory results. The data from our paradigm also speak to questions about the scope of phonological planning in unscripted speech and as to whether phonological and grammatical encoding interact. PMID:23162515
Chest Wall Motion during Speech Production in Patients with Advanced Ankylosing Spondylitis
ERIC Educational Resources Information Center
Kalliakosta, Georgia; Mandros, Charalampos; Tzelepis, George E.
2007-01-01
Purpose: To test the hypothesis that ankylosing spondylitis (AS) alters the pattern of chest wall motion during speech production. Method: The pattern of chest wall motion during speech was measured with respiratory inductive plethysmography in 6 participants with advanced AS (5 men, 1 woman, age 45 plus or minus 8 years, Schober test 1.45 plus or…
Using Visible Speech to Train Perception and Production of Speech for Individuals with Hearing Loss.
ERIC Educational Resources Information Center
Massaro, Dominic W.; Light, Joanna
2004-01-01
The main goal of this study was to implement a computer-animated talking head, Baldi, as a language tutor for speech perception and production for individuals with hearing loss. Baldi can speak slowly; illustrate articulation by making the skin transparent to reveal the tongue, teeth, and palate; and show supplementary articulatory features, such…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouchard, Kristofer E.; Conant, David F.; Anumanchipalli, Gopala K.
2016-03-28
A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial, especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics. PMID:27019106
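The NMF-based analysis described above can be sketched as follows: factor a nonnegative matrix of per-frame vocal tract features into a small basis of tract shapes plus per-frame weights, then classify vowels from the weights. Here is a minimal Python illustration with synthetic data; the array sizes, component count, and classifier are assumptions, not the published pipeline:

```python
# NMF sketch: basis shapes for the vocal tract, then vowel classification.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
V = rng.random((500, 40))            # 500 frames x 40 nonnegative tract features
labels = rng.integers(0, 9, 500)     # 9 American English vowels (synthetic labels)

nmf = NMF(n_components=8, init="nndsvda", max_iter=500)
W = nmf.fit_transform(V)             # per-frame weights on 8 basis shapes
H = nmf.components_                  # basis set capturing vocal tract shapes

clf = LogisticRegression(max_iter=1000).fit(W[:400], labels[:400])
print("vowel classification accuracy:", clf.score(W[400:], labels[400:]))
```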
Effects of context and word class on lexical retrieval in Chinese speakers with anomic aphasia
Law, Sam-Po; Kong, Anthony Pak-Hin; Lai, Loretta Wing-Shan; Lai, Christy
2014-01-01
Background Differences in processing nouns and verbs have been investigated intensely in psycholinguistics and neuropsychology in past decades. However, the majority of studies examining retrieval of these word classes have involved tasks of single word stimuli or responses. While the results have provided rich information for addressing issues about grammatical class distinctions, it is unclear whether they have adequate ecological validity for understanding lexical retrieval in connected speech which characterizes daily verbal communication. Previous investigations comparing retrieval of nouns and verbs in single word production and connected speech have reported either discrepant performance between the two contexts with presence of word class dissociation in picture naming but absence in connected speech, or null effects of word class. In addition, word finding difficulties have been found to be less severe in connected speech than picture naming. However, these studies have failed to match target stimuli of the two word classes and between tasks on psycholinguistic variables known to affect performance in response latency and/or accuracy. Aims The present study compared lexical retrieval of nouns and verbs in picture naming and connected speech from picture description, procedural description, and story-telling among 19 Chinese speakers with anomic aphasia and their age, gender, and education matched healthy controls, to understand the influence of grammatical class on word production across speech contexts when target items were balanced for confounding variables between word classes and tasks. Methods & Procedures Elicitation of responses followed the protocol of the AphasiaBank consortium (http://talkbank.org/AphasiaBank/). Target words for confrontation naming were based on well-established naming tests, while those for narrative were drawn from a large database of normal speakers. Selected nouns and verbs in the two contexts were matched for age-of-acquisition (AoA) and familiarity. Influence of imageability was removed through statistical control. Outcomes & Results When AoA and familiarity were balanced, nouns were retrieved better than verbs, and performance was higher in picture naming than connected speech. When imageability was further controlled for, only the effect of task remained significant. Conclusions The absence of word class effects when confounding variables are controlled for is similar to many previous reports; however, the pattern of better word retrieval in naming is rare but compatible with the account that processing demands are higher in narrative than naming. The overall findings have strongly suggested the importance of including connected speech tasks in any language assessment and evaluation of language rehabilitation of individuals with aphasia. PMID:25505810
Albustanji, Yusuf M; Albustanji, Mahmoud M; Hegazi, Mohamed M; Amayreh, Mousa M
2014-10-01
The purpose of this study was to assess the prevalence and types of consonant production errors and phonological processes in Saudi Arabic-speaking children with repaired cleft lip and palate, and to determine the relationship between the frequency of errors and the type of cleft. Possible relationships between age, gender, and frequency of errors were also investigated. Eighty Saudi children with repaired cleft lip and palate aged 6-15 years (mean 6.7 years) underwent speech, language, and hearing evaluation. The diagnosis of articulation deficits was based on the results of an Arabic articulation test. Phonological processes were reported based on a productivity criterion of at least 20% occurrence. Diagnosis of nasality was based on a 5-point scale reflecting severity from 0 through 4. All participants underwent intraoral examination, informal language assessment, and hearing evaluation to assess their speech and language abilities. The chi-square test for independence was used to analyze the results of consonant production as a function of cleft type and age. Of the 80 participants with CLP, 21 had normal articulation and resonance, while 59 (74%) showed speech abnormalities: 21 of these 59 showed only articulation errors, 17 showed only hypernasality, and 21 showed both articulation and resonance deficits. Compensatory articulations (CAs) were observed in 20 participants. The productive phonological processes were consonant backing, final consonant deletion, gliding, and stopping. At age 6 and older, 37% of participants had persisting hearing loss. Despite the early age at time of surgery (mean 6.7 months), a substantial number of the CLP participants in this study demonstrated articulation errors and hypernasality. The results parallel findings reported for diverse languages. It is especially interesting to consider the prevalence of glottal stops and pharyngeal fricatives in a population for whom these sounds are phonemic. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
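The chi-square test for independence used above can be illustrated with a hypothetical contingency table of speech outcome by cleft type; the counts below are invented for illustration and do not reproduce the study's data:

```python
# Chi-square test of independence on a hypothetical outcome-by-cleft-type table.
import numpy as np
from scipy.stats import chi2_contingency

#                 articulation  resonance  both  none
table = np.array([[ 8,  5,  9,  6],    # cleft lip and palate (hypothetical)
                  [13, 12, 12, 15]])   # cleft palate only (hypothetical)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2(dof={dof}) = {chi2:.2f}, p = {p:.3f}")
```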
New Developments in Understanding the Complexity of Human Speech Production.
Simonyan, Kristina; Ackermann, Hermann; Chang, Edward F; Greenlee, Jeremy D
2016-11-09
Speech is one of the most unique features of human communication. Our ability to articulate our thoughts by means of speech production depends critically on the integrity of the motor cortex. Long thought to be a low-order brain region, exciting work in the past years is overturning this notion. Here, we highlight some of the major experimental advances in speech motor control research and discuss the emerging findings about the complexity of speech motor cortical organization and its large-scale networks. This review summarizes the talks presented at a symposium at the Annual Meeting of the Society for Neuroscience; it does not represent a comprehensive review of contemporary literature in the broader field of speech motor control. Copyright © 2016 the authors.
Primary Progressive Speech Abulia.
Milano, Nicholas J; Heilman, Kenneth M
2015-01-01
Primary progressive aphasia (PPA) is a neurodegenerative disorder characterized by progressive language impairment. The three variants of PPA include the nonfluent/agrammatic, semantic, and logopenic types. The goal of this report is to describe two patients with a loss of speech initiation that was associated with bilateral medial frontal atrophy. Two patients with progressive speech deficits were evaluated, and their examinations revealed a paucity of spontaneous speech; however, their naming, repetition, reading, and writing were all normal. The patients had no evidence of agrammatism or apraxia of speech but did have impaired speech fluency. In addition to impaired production of propositional spontaneous speech, these patients had impaired production of automatic speech (e.g., reciting the Lord's Prayer) and singing. Structural brain imaging revealed bilateral medial frontal atrophy in both patients. These patients' language deficits are consistent with a PPA, but they follow the pattern of a dynamic aphasia. Whereas the signs and symptoms of dynamic aphasia have been previously described, to our knowledge these are the first cases associated with predominantly bilateral medial frontal atrophy that impaired both propositional and automatic speech. Thus, this profile may represent a new variant of PPA.
NASA Astrophysics Data System (ADS)
Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.
2013-12-01
The ability to detect and organize 'hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure as well as audio/speech properties. Processing begins where the video is partitioned into small segments and several multi-modal features are extracted from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank-order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos based on signal processing advancements for excitement assessment in the commentators' speech, audio energy, slow motion replay, scene cut density, and motion activity as features. Detailed analysis on correlation between user excitability and various speech production parameters is conducted and an effective scheme is designed to estimate the excitement level of commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques indicating the effectiveness of the overall approach.
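The excitability measure described above, favoring segments that are both exciting and rare under the joint feature density, can be sketched as follows. The feature set, density model, and scoring rule here are simplified assumptions, not the paper's exact formulation:

```python
# Rank video segments by a score that rewards rarity (low joint density)
# in a high-arousal direction (high energy, pitch, motion).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
feats = rng.standard_normal((200, 3))        # segments x (energy, pitch, motion)

mu, cov = feats.mean(0), np.cov(feats.T)
density = multivariate_normal(mu, cov).pdf(feats)
arousal = ((feats - mu) / feats.std(0)).sum(axis=1)  # jointly high feature values
excitability = arousal - np.log(density)     # exciting AND rare scores higher
highlights = np.argsort(excitability)[::-1][:10]     # top-10 highlight segments
```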
Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.
2014-01-01
There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269
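The adaptive fusion idea, weighting each algorithm's per-frame F0 estimate by a quality measure within a Kalman filter, can be sketched in a few lines. The scalar KF and the quality-to-noise mapping below are chosen for illustration; the paper's formulation is more sophisticated:

```python
# Fuse several F0 trackers with a scalar Kalman filter; higher per-frame
# quality implies lower assumed measurement noise, hence more trust.
import numpy as np

def fuse_f0(estimates, quality, q_process=4.0):
    """estimates, quality: (n_algorithms, n_frames) arrays; returns fused F0 (Hz)."""
    n_alg, n_frames = estimates.shape
    x, p = estimates[:, 0].mean(), 100.0        # initial state and variance
    fused = np.empty(n_frames)
    for t in range(n_frames):
        p += q_process                          # predict: F0 drifts slowly
        for a in range(n_alg):                  # update with each algorithm's estimate
            r = 1.0 / max(quality[a, t], 1e-6)  # quality -> measurement noise variance
            k = p / (p + r)
            x += k * (estimates[a, t] - x)
            p *= (1.0 - k)
        fused[t] = x
    return fused
```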
Tongue corticospinal modulation during attended verbal stimuli: priming and coarticulation effects.
D'Ausilio, Alessandro; Jarmolowska, Joanna; Busan, Pierpaolo; Bufalari, Ilaria; Craighero, Laila
2011-11-01
Humans can perceive continuous speech even through interruptions or brief noise bursts that cancel entire phonemes. This robust phenomenon has been classically associated with mechanisms of perceptual restoration. In parallel, recent experimental evidence suggests that the motor system may actively participate in speech perception, even contributing to phoneme discrimination. In the present study we aimed to verify whether the motor system also has a specific role in speech perceptual restoration. To this aim we recorded tongue corticospinal excitability during phoneme expectation induced by contextual information. Results showed that phoneme expectation engages the part of the individual's motor system specifically implicated in the production of the attended phoneme, exactly as happens during actual listening to that phoneme, suggesting the presence of a speech imagery-like process. Very interestingly, this motoric phoneme expectation is also modulated by subtle coarticulation cues of which the listener is not consciously aware. The present data indicate that the rehearsal of a specific phoneme requires the contribution of the motor system, just as happens during the rehearsal of actions executed by the limbs, and that this process is abolished when an incongruent phonemic cue is presented, similar to what occurs during observation of anomalous hand actions. We propose that, taken together, these effects indicate that during speech listening an attentional-like mechanism driven by the motor system, based on a feed-forward anticipatory process that constantly verifies incoming information, operates to enable perceptual restoration. Copyright © 2011 Elsevier Ltd. All rights reserved.
Brainstem origins for cortical 'what' and 'where' pathways in the auditory system.
Kraus, Nina; Nicol, Trent
2005-04-01
We have developed a data-driven conceptual framework that links two areas of science: the source-filter model of acoustics and cortical sensory processing streams. The source-filter model describes the mechanics behind speech production: the identity of the speaker is carried largely in the vocal cord source and the message is shaped by the ever-changing filters of the vocal tract. Sensory processing streams, popularly called 'what' and 'where' pathways, are well established in the visual system as a neural scheme for separately carrying different facets of visual objects, namely their identity and their position/motion, to the cortex. A similar functional organization has been postulated in the auditory system. Both speaker identity and the spoken message, which are simultaneously conveyed in the acoustic structure of speech, can be disentangled into discrete brainstem response components. We argue that these two response classes are early manifestations of auditory 'what' and 'where' streams in the cortex. This brainstem link forges a new understanding of the relationship between the acoustics of speech and cortical processing streams, unites two hitherto separate areas in science, and provides a model for future investigations of auditory function.
Vandewalle, Ellen; Boets, Bart; Ghesquière, Pol; Zink, Inge
2012-01-01
This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of children aged 6 years 3 months to 6 years 8 months attending grade 1: (1) children with specific language impairment (SLI) and literacy delay (n = 8), (2) children with SLI and normal literacy (n = 10) and (3) typically developing children (n = 14). Moreover, the relations between these auditory processing and speech perception skills and oral language and literacy skills in grade 1 and grade 3 were analyzed. The SLI group with literacy delay scored significantly lower than both other groups on speech perception, but not on temporal auditory processing. The two normal-reading groups did not differ in speech perception or auditory processing. Speech perception was significantly related to reading and spelling in grades 1 and 3 and made a unique predictive contribution to reading growth in grade 3, even after controlling for reading level, phonological ability, auditory processing and oral language skills in grade 1. These findings indicated that speech perception also had a unique direct impact upon reading development, not only through its relation with phonological awareness. Moreover, speech perception seemed to be more associated with the development of literacy skills and less with oral language ability. Copyright © 2011 Elsevier Ltd. All rights reserved.
Planning and production of grammatical and lexical verbs in multi-word messages.
Michel Lange, Violaine; Messerschmidt, Maria; Harder, Peter; Siebner, Hartwig Roman; Boye, Kasper
2017-01-01
Grammatical words represent the part of grammar that can be most directly contrasted with the lexicon. Aphasiological studies, linguistic theories and psycholinguistic studies suggest that they are processed at different stages in speech production. Models of sentence production propose that at the formulation stage, lexical words are processed at the functional level while grammatical words are processed at a later positional level. In this study we consider proposals made by linguistic theories and psycholinguistic models to derive two predictions for the processing of grammatical words compared to lexical words. First, based on the assumption that grammatical words are less crucial for communication and therefore receive less attention, it is predicted that they show shorter articulation times and/or higher error rates than lexical words. Second, based on the assumption that grammatical words differ from lexical words in being dependent on a lexical host, it is hypothesized that the retrieval of a grammatical word must be put on hold until its lexical host is available, and it is predicted that this is reflected in longer reaction times (RTs) for grammatical compared to lexical words. We investigated these predictions by comparing fully homonymous sentences differing only in verb status (grammatical vs. lexical), elicited by a specific context. We measured RTs, duration and accuracy rate. No difference in duration was observed. Longer RTs and a lower accuracy rate for grammatical words were found, reflecting grammatical word properties as defined by linguistic theories and psycholinguistic models. Importantly, this study provides insight into the span of encoding and grammatical encoding processes in speech production. PMID:29091940
Stimulating Language: Insights from TMS
ERIC Educational Resources Information Center
Devlin, Joseph T.; Watkins, Kate E.
2007-01-01
Fifteen years ago, Pascual-Leone and colleagues used transcranial magnetic stimulation (TMS) to investigate speech production in pre-surgical epilepsy patients and in doing so, introduced a novel tool into language research. TMS can be used to non-invasively stimulate a specific cortical region and transiently disrupt information processing. These…
Brown, Meredith; Kuperberg, Gina R.
2015-01-01
Language and thought dysfunction are central to the schizophrenia syndrome. They are evident in the major symptoms of psychosis itself, particularly as disorganized language output (positive thought disorder) and auditory verbal hallucinations (AVHs), and they also manifest as abnormalities in both high-level semantic and contextual processing and low-level perception. However, the literatures characterizing these abnormalities have largely been separate and have sometimes provided mutually exclusive accounts of aberrant language in schizophrenia. In this review, we propose that recent generative probabilistic frameworks of language processing can provide crucial insights that link these four lines of research. We first outline neural and cognitive evidence that real-time language comprehension and production normally involve internal generative circuits that propagate probabilistic predictions to perceptual cortices — predictions that are incrementally updated based on prediction error signals as new inputs are encountered. We then explain how disruptions to these circuits may compromise communicative abilities in schizophrenia by reducing the efficiency and robustness of both high-level language processing and low-level speech perception. We also argue that such disruptions may contribute to the phenomenology of thought-disordered speech and false perceptual inferences in the language system (i.e., AVHs). This perspective suggests a number of productive avenues for future research that may elucidate not only the mechanisms of language abnormalities in schizophrenia, but also promising directions for cognitive rehabilitation. PMID:26640435
Speech Perception as a Cognitive Process: The Interactive Activation Model.
ERIC Educational Resources Information Center
Elman, Jeffrey L.; McClelland, James L.
Research efforts to model speech perception in terms of a processing system in which knowledge and processing are distributed over large numbers of highly interactive--but computationally primitive--elements are described in this report. After discussing the properties of speech that demand a parallel interactive processing system, the report…
Su, Qiaotong; Galvin, John J.; Zhang, Guoping; Li, Yongxin
2016-01-01
Cochlear implant (CI) speech performance is typically evaluated using well-enunciated speech produced at a normal rate by a single talker. CI users often have greater difficulty with variations in speech production encountered in everyday listening. Within a single talker, speaking rate, amplitude, duration, and voice pitch information may be quite variable, depending on the production context. The coarse spectral resolution afforded by the CI limits perception of voice pitch, which is an important cue for speech prosody and for tonal languages such as Mandarin Chinese. In this study, sentence recognition from the Mandarin speech perception database was measured in adult and pediatric Mandarin-speaking CI listeners for a variety of speaking styles: voiced speech produced at slow, normal, and fast speaking rates; whispered speech; voiced emotional speech; and voiced shouted speech. Recognition of Mandarin Hearing in Noise Test sentences was also measured. Results showed that performance was significantly poorer with whispered speech relative to the other speaking styles and that performance was significantly better with slow speech than with fast or emotional speech. Results also showed that adult and pediatric performance was significantly poorer with Mandarin Hearing in Noise Test than with Mandarin speech perception sentences at the normal rate. The results suggest that adult and pediatric Mandarin-speaking CI patients are highly susceptible to whispered speech, due to the lack of lexically important voice pitch cues and perhaps other qualities associated with whispered speech. The results also suggest that test materials may contribute to differences in performance observed between adult and pediatric CI users. PMID:27363714
Situational influences on rhythmicity in speech, music, and their interaction.
Hawkins, Sarah
2014-12-19
Brain processes underlying the production and perception of rhythm indicate considerable flexibility in how physical signals are interpreted. This paper explores how that flexibility might play out in rhythmicity in speech and music. There is much in common across the two domains, but there are also significant differences. Interpretations are explored that reconcile some of the differences, particularly with respect to how functional properties modify the rhythmicity of speech, within limits imposed by its structural constraints. Functional and structural differences mean that music is typically more rhythmic than speech, and that speech will be more rhythmic when the emotions are more strongly engaged, or intended to be engaged. The influence of rhythmicity on attention is acknowledged, and it is suggested that local increases in rhythmicity occur at times when attention is required to coordinate joint action, whether in talking or music-making. Evidence is presented which suggests that while these short phases of heightened rhythmical behaviour are crucial to the success of transitions in communicative interaction, their modality is immaterial: they all function to enhance precise temporal prediction and hence tightly coordinated joint action. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Iconic hand gestures and the predictability of words in context in spontaneous speech.
Beattie, G; Shovelton, H
2000-11-01
This study presents a series of empirical investigations to test a theory of speech production proposed by Butterworth and Hadar (1989; revised in Hadar & Butterworth, 1997) that iconic gestures have a functional role in lexical retrieval in spontaneous speech. Analysis 1 demonstrated that words which were totally unpredictable (as measured by the Shannon guessing technique) were more likely to occur after pauses than after fluent speech, in line with earlier findings. Analysis 2 demonstrated that iconic gestures were associated with words of lower transitional probability than words not associated with gesture, even when grammatical category was controlled. This therefore provided new supporting evidence for Butterworth and Hadar's claims that gestures' lexical affiliates are indeed unpredictable lexical items. However, Analysis 3 found that iconic gestures were not occasioned by lexical accessing difficulties because although gestures tended to occur with words of significantly lower transitional probability, these lower transitional probability words tended to be uttered quite fluently. Overall, therefore, this study provided little evidence for Butterworth and Hadar's theoretical claim that the main function of the iconic hand gestures that accompany spontaneous speech is to assist in the process of lexical access. Instead, such gestures are reconceptualized in terms of communicative function.
Bosshardt, Hans-Georg
2002-01-01
This study investigated how silent reading and word memorization may affect the fluency of concurrently repeated words. The words silently read or memorized were phonologically similar or dissimilar to the words of the repetition task. Fourteen adults who stutter and 16 who do not participated in the experiment. The two groups were matched for age, education, sex, forward and backward memory span and vocabulary. It was found that the disfluencies of persons who stutter significantly increased during word repetition when similar words were read or memorized concurrently. In contrast, the disfluencies of persons who do not stutter were not significantly affected by either secondary task. These results indicate that the speech of persons who stutter is more sensitive to interference from concurrently performed cognitive processing than that of nonstuttering persons. It is proposed that the phonological and articulatory systems of persons who stutter are protected less efficiently from interference by attention-demanding processing within the central executive system. Alternative interpretations are also discussed. Readers will learn how modern speech production theories and the concept of modularity can account for stuttering, and will be able to explain the greater vulnerability of stutterers' speech fluency to concurrent cognitive processing.
Word pair classification during imagined speech using direct brain recordings
NASA Astrophysics Data System (ADS)
Martin, Stephanie; Brunner, Peter; Iturrate, Iñaki; Millán, José Del R.; Schalk, Gerwin; Knight, Robert T.; Pasley, Brian N.
2016-05-01
People who cannot communicate due to neurological disorders would benefit from an internal speech decoder. Here, we showed the ability to classify individual words during imagined speech from electrocorticographic signals. In a word imagery task, we used high gamma (70-150 Hz) time features with a support vector machine model to classify individual words from a pair of words. To account for temporal irregularities during speech production, we introduced a non-linear time alignment into the SVM kernel. Classification accuracy reached 88% in a two-class classification framework (50% chance level), and average classification accuracy across fifteen word-pairs was significant across five subjects (mean = 58%; p < 0.05). We also compared classification accuracy between imagined speech, overt speech and listening. As predicted, higher classification accuracy was obtained in the listening and overt speech conditions (mean = 89% and 86%, respectively; p < 0.0001), where speech stimuli were directly presented. The results provide evidence for a neural representation for imagined words in the temporal lobe, frontal lobe and sensorimotor cortex, consistent with previous findings in speech perception and production. These data represent a proof of concept study for basic decoding of speech imagery, and delineate a number of key challenges to usage of speech imagery neural representations for clinical applications. PMID:27165452
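A minimal sketch of two-class word classification with an SVM whose kernel tolerates temporal misalignment, echoing the study's non-linear time alignment. Here a dynamic-time-warping distance is wrapped in an exponential kernel (not guaranteed positive-definite, an illustrative shortcut rather than the authors' exact kernel), over toy one-dimensional "high-gamma" feature sequences of variable length.

```python
import numpy as np
from sklearn.svm import SVC

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

rng = np.random.default_rng(2)
def make_trial(freq):
    n = rng.integers(40, 60)                      # trials vary in duration
    t = np.linspace(0, 1, n)
    return np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.3, n)

# Two word classes with distinct temporal envelopes, jittered in length and noise.
trials = [make_trial(3) for _ in range(20)] + [make_trial(7) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

gram = np.array([[np.exp(-dtw(a, b) / 50.0) for b in trials] for a in trials])
clf = SVC(kernel="precomputed").fit(gram, labels)
print(f"toy training accuracy: {clf.score(gram, labels):.2f}")
```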
Gutierrez-Sigut, Eva; Payne, Heather; MacSweeney, Mairéad
2016-08-01
The neural systems supporting speech and sign processing are very similar, although not identical. In a previous fTCD study of hearing native signers (Gutierrez-Sigut, Daws, et al., 2015) we found stronger left lateralization for sign than speech. Given that this increased lateralization could not be explained by hand movement alone, the contribution of motor movement versus 'linguistic' processes to the strength of hemispheric lateralization during sign production remains unclear. Here we directly contrast lateralization strength of covert versus overt signing during phonological and semantic fluency tasks. To address the possibility that hearing native signers' elevated lateralization indices (LIs) were due to performing a task in their less dominant language, here we test deaf native signers, whose dominant language is British Sign Language (BSL). Signers were more strongly left lateralized for overt than covert sign generation. However, the strength of lateralization was not correlated with the amount of time producing movements of the right hand. Comparisons with previous data from hearing native English speakers suggest stronger laterality indices for sign than speech in both covert and overt tasks. This increased left lateralization may be driven by specific properties of sign production such as the increased use of self-monitoring mechanisms or the nature of phonological encoding of signs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Is complex signal processing for bone conduction hearing aids useful?
Kompis, Martin; Kurz, Anja; Pfiffner, Flurin; Senn, Pascal; Arnold, Andreas; Caversaccio, Marco
2014-05-01
To establish whether complex signal processing is beneficial for users of bone anchored hearing aids. Review and analysis of two studies from our own group, each comparing a speech processor with basic digital signal processing (either Baha Divino or Baha Intenso) and a processor with complex digital signal processing (either Baha BP100 or Baha BP110 power). The main differences between basic and complex signal processing are the number of audiologist accessible frequency channels and the availability and complexity of the directional multi-microphone noise reduction and loudness compression systems. Both studies show a small, statistically non-significant improvement of speech understanding in quiet with the complex digital signal processing. The average improvement for speech in noise is +0.9 dB, if speech and noise are emitted both from the front of the listener. If noise is emitted from the rear and speech from the front of the listener, the advantage of the devices with complex digital signal processing as opposed to those with basic signal processing increases, on average, to +3.2 dB (range +2.3 … +5.1 dB, p ≤ 0.0032). Complex digital signal processing does indeed improve speech understanding, especially in noise coming from the rear. This finding has been supported by another study, which has been published recently by a different research group. When compared to basic digital signal processing, complex digital signal processing can increase speech understanding of users of bone anchored hearing aids. The benefit is most significant for speech understanding in noise.
The alluring but misleading analogy between mirror neurons and the motor theory of speech.
Holt, Lori L; Lotto, Andrew J
2014-04-01
Speech is commonly claimed to relate to mirror neurons because of the alluring surface analogy of mirror neurons to the Motor Theory of speech perception, which posits that perception and production draw upon common motor-articulatory representations. We argue that the analogy fails and highlight examples of systems-level developmental approaches that have been more fruitful in revealing perception-production associations.
Rib Torque Does Not Assist Resting Tidal Expiration or Most Conversational Speech Expiration
ERIC Educational Resources Information Center
Hixon, Thomas J.
2006-01-01
Purpose: This research note discusses a common misconception in speech science and speech-language pathology textbooks that rib torque (i.e., "rotational stress") assists resting tidal expiration and conversational speech production. Method: The nature of this misconception is considered. Conclusion: An alternate conceptualization is offered that…
ERIC Educational Resources Information Center
Hodge, Megan M.; Gotzke, Carrie L.
2011-01-01
Listeners' identification of young children's productions of minimally contrastive words and predictive relationships between accurately identified words and intelligibility scores obtained from a 100-word spontaneous speech sample were determined for 36 children with typically developing speech (TDS) and 36 children with speech sound disorders…
Yu, Alan C. L.
2010-01-01
Variation is a ubiquitous feature of speech. Listeners must take into account context-induced variation to recover the interlocutor's intended message. When listeners fail to normalize for context-induced variation properly, deviant percepts become seeds for new perceptual and production norms. In question is how deviant percepts accumulate in a systematic fashion to give rise to sound change (i.e., new pronunciation norms) within a given speech community. The present study investigated subjects' classification of /s/ and /ʃ/ before /a/ or /u/ spoken by a male or a female voice. Building on modern cognitive theories of autism-spectrum condition, which see variation in autism-spectrum condition in terms of individual differences in cognitive processing style, we established a significant correlation between individuals' normalization for phonetic context (i.e., whether the following vowel is /a/ or /u/) and talker voice variation (i.e., whether the talker is male or female) in speech and their "autistic" traits, as measured by the Autism Spectrum Quotient (AQ). In particular, our mixed-effect logistic regression models show that women with low AQ (i.e., the least "autistic") do not normalize for phonetic coarticulation as much as men and high-AQ women. This study provides the first direct evidence that variability in humans' ability to compensate perceptually for context-induced variation in speech is governed by the individual's sex and cognitive processing style. These findings lend support to the hypothesis that the systematic infusion of new linguistic variants (i.e., the deviant percepts) originates from a sub-segment of the speech community that consistently under-compensates for contextual variation in speech. PMID:20808859
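A minimal sketch of testing whether compensation for phonetic context varies with AQ, on a simulated trial table. The study used mixed-effects logistic regression with by-subject random effects; the plain logit below (with a context-by-AQ interaction) only approximates that analysis, and all simulated coefficients are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "aq": rng.integers(5, 35, n),          # Autism Spectrum Quotient score
    "context_u": rng.integers(0, 2, n),    # following vowel: /u/ (1) vs /a/ (0)
})
# Simulated: /u/ context shifts "s" responses, less so for high-AQ listeners.
logit_p = -0.2 + (1.5 - 0.04 * df.aq) * df.context_u
df["resp_s"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("resp_s ~ context_u * aq", data=df).fit(disp=False)
print(model.params)   # the interaction term tracks AQ-dependent under-compensation
```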
Perception of temporally modified speech in auditory neuropathy.
Hassan, Dalia Mohamed
2011-01-01
Disrupted auditory nerve activity in auditory neuropathy (AN) significantly impairs the sequential processing of auditory information, resulting in poor speech perception. This study investigated the ability of AN subjects to perceive temporally modified consonant-vowel (CV) pairs and shed light on their phonological awareness skills. Four Arabic CV pairs were selected: /ki/-/gi/, /to/-/do/, /si/-/sti/ and /so/-/zo/. The formant transitions in consonants and the pauses between CV pairs were prolonged. Rhyming, segmentation and blending skills were tested using words at a natural rate of speech and with prolongation of the speech stream. Fourteen adult AN subjects were compared to a matched group of cochlear-impaired patients in their perception of acoustically processed speech. The AN group distinguished the CV pairs at a low speech rate, in particular with modification of the consonant duration. Phonological awareness skills deteriorated in adult AN subjects but improved with prolongation of the speech inter-syllabic time interval. A rehabilitation program for AN should consider temporal modification of speech, training for auditory temporal processing and the use of devices with innovative signal processing schemes. Verbal modifications as well as visual imaging appear to be promising compensatory strategies for remediating the affected phonological processing skills.
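A minimal sketch of the kind of temporal modification such rehabilitation schemes rely on: lengthening a signal without shifting its pitch via a phase-vocoder time stretch. The synthetic "voiced" signal and the 0.7 stretch factor are illustrative assumptions, not the study's stimuli.

```python
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
# Crude stand-in for a voiced CV syllable: a 120 Hz carrier with slow amplitude modulation.
y = np.sin(2 * np.pi * 120 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))

# rate < 1 lengthens the signal; 0.7 prolongs transitions by roughly 43%.
slow = librosa.effects.time_stretch(y, rate=0.7)
print(f"original: {len(y)/sr:.2f} s, slowed: {len(slow)/sr:.2f} s")
```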
Articulatory movements modulate auditory responses to speech
Agnew, Z.K.; McGettigan, C.; Banks, B.; Scott, S.K.
2013-01-01
Production of actions is highly dependent on concurrent sensory information. In speech production, for example, movement of the articulators is guided by both auditory and somatosensory input. It has been demonstrated in non-human primates that self-produced vocalizations and those of others are differentially processed in the temporal cortex. The aim of the current study was to investigate how auditory and motor responses differ for self-produced and externally produced speech. Using functional neuroimaging, subjects were asked to produce sentences aloud, to silently mouth while listening to a different speaker producing the same sentence, to passively listen to sentences being read aloud, or to read sentences silently. We show that separate regions of the superior temporal cortex display distinct response profiles to speaking aloud, mouthing while listening, and passive listening. Responses in anterior superior temporal cortices in both hemispheres are greater for passive listening compared with both mouthing while listening, and speaking aloud. This is the first demonstration that articulation, whether or not it has auditory consequences, modulates responses of the dorsolateral temporal cortex. In contrast posterior regions of the superior temporal cortex are recruited during both articulation conditions. In dorsal regions of the posterior superior temporal gyrus, responses to mouthing and reading aloud were equivalent, and in more ventral posterior superior temporal sulcus, responses were greater for reading aloud compared with mouthing while listening. These data demonstrate an anterior–posterior division of superior temporal regions where anterior fields are suppressed during motor output, potentially for the purpose of enhanced detection of the speech of others. We suggest posterior fields are engaged in auditory processing for the guidance of articulation by auditory information. PMID:22982103
Walenski, Matthew; Swinney, David
2009-01-01
The central question underlying this study revolves around how children process co-reference relationships—such as those evidenced by pronouns (him) and reflexives (himself)—and how a slowed rate of speech input may critically affect this process. Previous studies of child language processing have demonstrated that typical language developing (TLD) children as young as 4 years of age process co-reference relations in a manner similar to adults on-line. In contrast, off-line measures of pronoun comprehension suggest a developmental delay for pronouns (relative to reflexives). The present study examines dependency relations in TLD children (ages 5–13) and investigates how a slowed rate of speech input affects the unconscious (on-line) and conscious (off-line) parsing of these constructions. For the on-line investigations (using a cross-modal picture priming paradigm), results indicate that at a normal rate of speech TLD children demonstrate adult-like syntactic reflexes. At a slowed rate of speech the typical language developing children displayed a breakdown in automatic syntactic parsing (again, similar to the pattern seen in unimpaired adults). As demonstrated in the literature, our off-line investigations (sentence/picture matching task) revealed that these children performed much better on reflexives than on pronouns at a regular speech rate. However, at the slow speech rate, performance on pronouns was substantially improved, whereas performance on reflexives was not different than at the regular speech rate. We interpret these results in light of a distinction between fast automatic processes (relied upon for on-line processing in real time) and conscious reflective processes (relied upon for off-line processing), such that slowed speech input disrupts the former, yet improves the latter. PMID:19343495
Engaged listeners: shared neural processing of powerful political speeches
Häcker, Frank E. K.; Honey, Christopher J.; Hasson, Uri
2015-01-01
Powerful speeches can captivate audiences, whereas weaker speeches fail to engage their listeners. What is happening in the brains of a captivated audience? Here, we assess audience-wide functional brain dynamics during listening to speeches of varying rhetorical quality. The speeches were given by German politicians and evaluated as rhetorically powerful or weak. Listening to each of the speeches induced similar neural response time courses, as measured by inter-subject correlation analysis, in widespread brain regions involved in spoken language processing. Crucially, alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex. Thus, during powerful speeches, listeners as a group are more coupled to each other, suggesting that powerful speeches are more potent in taking control of the listeners’ brain responses. Weaker speeches were processed more heterogeneously, although they still prompted substantially correlated responses. These patterns of coupled neural responses bear resemblance to metaphors of resonance, which are often invoked in discussions of speech impact, and contribute to the literature on auditory attention under natural circumstances. Overall, this approach opens up possibilities for research on the neural mechanisms mediating the reception of entertaining or persuasive messages. PMID:25653012
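A minimal sketch of inter-subject correlation (ISC): correlate each listener's regional response time course with the average of all the others, then compare mean ISC across conditions. The data shapes and the "powerful vs. weak" coupling strengths are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def isc(ts):
    """Leave-one-out ISC for a subjects x timepoints response matrix."""
    scores = []
    for s in range(ts.shape[0]):
        others = np.delete(ts, s, axis=0).mean(axis=0)   # average of everyone else
        scores.append(np.corrcoef(ts[s], others)[0, 1])
    return float(np.mean(scores))

shared = rng.normal(size=300)                                # stimulus-driven component
powerful = 0.8 * shared + 0.6 * rng.normal(size=(20, 300))   # tightly coupled listeners
weak = 0.3 * shared + 1.0 * rng.normal(size=(20, 300))       # heterogeneous responses
print(f"ISC powerful speech: {isc(powerful):.2f}, weak speech: {isc(weak):.2f}")
```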
Processing of speech signals for physical and sensory disabilities.
Levitt, H
1995-01-01
Assistive technology involving voice communication is used primarily by people who are deaf, hard of hearing, or who have speech and/or language disabilities. It is also used to a lesser extent by people with visual or motor disabilities. A very wide range of devices has been developed for people with hearing loss. These devices can be categorized not only by the modality of stimulation [i.e., auditory, visual, tactile, or direct electrical stimulation of the auditory nerve (auditory-neural)] but also in terms of the degree of speech processing that is used. At least four such categories can be distinguished: assistive devices (a) that are not designed specifically for speech, (b) that take the average characteristics of speech into account, (c) that process articulatory or phonetic characteristics of speech, and (d) that embody some degree of automatic speech recognition. Assistive devices for people with speech and/or language disabilities typically involve some form of speech synthesis or symbol generation for severe forms of language disability. Speech synthesis is also used in text-to-speech systems for sightless persons. Other applications of assistive technology involving voice communication include voice control of wheelchairs and other devices for people with mobility disabilities. PMID:7479816
Loss tolerant speech decoder for telecommunications
NASA Technical Reports Server (NTRS)
Prieto, Jr., Jaime L. (Inventor)
1999-01-01
A method and device for extrapolating past signal-history data for insertion into missing data segments in order to conceal digital speech frame errors. The extrapolation method uses past-signal history that is stored in a buffer. The method is implemented with a device that utilizes a finite-impulse response (FIR) multi-layer feed-forward artificial neural network that is trained by back-propagation for one-step extrapolation of speech compression algorithm (SCA) parameters. Once a speech connection has been established, the speech compression algorithm device begins sending encoded speech frames. As the speech frames are received, they are decoded and converted back into speech signal voltages. During the normal decoding process, pre-processing of the required SCA parameters will occur and the results stored in the past-history buffer. If a speech frame is detected to be lost or in error, then extrapolation modules are executed and replacement SCA parameters are generated and sent as the parameters required by the SCA. In this way, the information transfer to the SCA is transparent, and the SCA processing continues as usual. The listener will not normally notice that a speech frame has been lost because of the smooth transition between the last-received, lost, and next-received speech frames.
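A minimal sketch of one-step extrapolation for frame-loss concealment: a small feed-forward network trained by back-propagation predicts the next codec parameter from a window of past values, standing in for the patent's FIR multi-layer network over speech compression algorithm (SCA) parameters. The window length, network size, and synthetic parameter track are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
# One synthetic SCA parameter track (e.g., a slowly varying spectral coefficient).
param = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.normal(size=500)

k = 8                                            # past-history window (the "FIR" taps)
X = np.array([param[i:i + k] for i in range(len(param) - k)])
y = param[k:]
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, y)

history = list(param[:100])                      # suppose frame 100 is lost in transit
predicted = net.predict([history[-k:]])[0]       # extrapolate the missing parameter
print(f"lost-frame estimate {predicted:.3f} vs actual {param[100]:.3f}")
```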
Modeling Driving Performance Using In-Vehicle Speech Data From a Naturalistic Driving Study.
Kuo, Jonny; Charlton, Judith L; Koppel, Sjaan; Rudin-Brown, Christina M; Cross, Suzanne
2016-09-01
We aimed to (a) describe the development and application of an automated approach for processing in-vehicle speech data from a naturalistic driving study (NDS), (b) examine the influence of child passenger presence on driving performance, and (c) model this relationship using in-vehicle speech data. Parent drivers frequently engage in child-related secondary behaviors, but the impact on driving performance is unknown. Applying automated speech-processing techniques to NDS audio data would facilitate the analysis of in-vehicle driver-child interactions and their influence on driving performance. Speech activity detection and speaker diarization algorithms were applied to audio data from a Melbourne-based NDS involving 42 families. Multilevel models were developed to evaluate the effect of speech activity and the presence of child passengers on driving performance. Speech activity was significantly associated with velocity and steering angle variability. Child passenger presence alone was not associated with changes in driving performance. However, speech activity in the presence of two child passengers was associated with the most variability in driving performance. The effects of in-vehicle speech on driving performance in the presence of child passengers appear to be heterogeneous, and multiple factors may need to be considered in evaluating their impact. This goal can potentially be achieved within large-scale NDS through the automated processing of observational data, including speech. Speech-processing algorithms enable new perspectives on driving performance to be gained from existing NDS data, and variables that were once labor-intensive to process can be readily utilized in future research. © 2016, Human Factors and Ergonomics Society.
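A minimal sketch of the speech-activity-detection step: flag frames whose short-time energy exceeds an adaptive threshold. Production NDS pipelines (and the speaker diarization stage) are far more robust; the frame size, synthetic signal, and threshold rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
sr, frame = 16000, 400                               # 16 kHz audio, 25 ms frames
audio = 0.01 * rng.normal(size=sr * 4)               # 4 s of cabin noise
audio[sr:2 * sr] += np.sin(2 * np.pi * 200 * np.arange(sr) / sr)  # 1 s of "speech"

frames = audio[: len(audio) // frame * frame].reshape(-1, frame)
energy = (frames ** 2).mean(axis=1)                  # short-time energy per frame
threshold = energy.min() * 10                        # adaptive noise-floor threshold
speech = energy > threshold
print(f"speech detected in {speech.mean():.0%} of frames")
```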
Visemic Processing in Audiovisual Discrimination of Natural Speech: A Simultaneous fMRI-EEG Study
ERIC Educational Resources Information Center
Dubois, Cyril; Otzenberger, Helene; Gounot, Daniel; Sock, Rudolph; Metz-Lutz, Marie-Noelle
2012-01-01
In a noisy environment, visual perception of articulatory movements improves natural speech intelligibility. Parallel to phonemic processing based on auditory signal, visemic processing constitutes a counterpart based on "visemes", the distinctive visual units of speech. Aiming at investigating the neural substrates of visemic processing in a…
High-frequency energy in singing and speech
NASA Astrophysics Data System (ADS)
Monson, Brian Bruce
While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production. In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments wherein listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners were also subjected to gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. The combination of these experiments has revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.
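A minimal sketch of quantifying high-frequency energy: the fraction of spectral power above 5 kHz, estimated with Welch's method on a synthetic wideband signal. The 5 kHz band edge follows the dissertation's split; the signal itself is a stand-in assumption.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(7)
sr = 44100
t = np.arange(sr * 2) / sr
low = np.sin(2 * np.pi * 300 * t)                    # voiced low-frequency component
high = 0.1 * rng.normal(size=len(t))                 # broadband "fricative-like" component
signal = low + high

f, pxx = welch(signal, fs=sr, nperseg=2048)          # power spectral density estimate
hf_ratio = pxx[f >= 5000].sum() / pxx.sum()
print(f"fraction of energy above 5 kHz: {hf_ratio:.1%}")
```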
Vilela, Nadia; Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Sanches, Seisse Gabriela Gandolfi; Wertzner, Haydée Fiszbein; Carvallo, Renata Mota Mamede
2016-02-01
To identify a cutoff value based on the Percentage of Consonants Correct-Revised index that could indicate the likelihood of a child with a speech-sound disorder also having a (central) auditory processing disorder. Language, audiological and (central) auditory processing evaluations were administered. The participants were 27 subjects with speech-sound disorders aged 7 years to 10 years 11 months, who were divided into two groups according to their (central) auditory processing evaluation results. When a (central) auditory processing disorder was present in association with a speech disorder, the children tended to have lower scores on phonological assessments. A greater severity of speech disorder was related to a greater probability of the child having a (central) auditory processing disorder. The use of a cutoff value for the Percentage of Consonants Correct-Revised index successfully distinguished between children with and without a (central) auditory processing disorder. The severity of speech-sound disorder in children was influenced by the presence of (central) auditory processing disorder. The attempt to identify a cutoff value based on a severity index was successful.
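A minimal sketch of the Percentage of Consonants Correct index and a cutoff rule, assuming target and produced consonant sequences have already been aligned by a clinician. The 0.85 cutoff is a hypothetical placeholder, not the study's validated value.

```python
def pcc(target_consonants, produced_consonants):
    """Fraction of target consonants produced correctly (position-aligned)."""
    correct = sum(t == p for t, p in zip(target_consonants, produced_consonants))
    return correct / len(target_consonants)

target   = list("krtslpkmnrst")                      # consonants the words require
produced = list("krtslpkmnrtt")                      # what the child actually said
score = pcc(target, produced)

CUTOFF = 0.85                                        # hypothetical decision threshold
flag = "refer for (C)APD evaluation" if score < CUTOFF else "no referral indicated"
print(f"PCC = {score:.2f}: {flag}")
```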
Walking while talking: Young adults flexibly allocate resources between speech and gait.
Raffegeau, Tiphanie E; Haddad, Jeffrey M; Huber, Jessica E; Rietdyk, Shirley
2018-05-26
Walking while talking is an ideal multitask behavior for assessing how healthy young adults manage concurrent tasks, as it is well-practiced, cognitively demanding, and has real consequences for impaired performance in either task. Since the association between cognitive tasks and gait appears stronger when the gait task is more challenging, gait challenge was systematically manipulated in this study. The aim was to understand how young adults accomplish the multitask behavior of walking while talking as the gait challenge was systematically manipulated. Sixteen young adults (21 ± 1.6 years, 9 males) performed three gait tasks with and without speech: unobstructed gait (easy), obstacle crossing (moderate), and obstacle crossing while carrying a tray (difficult). Participants also provided a speech sample while seated as a baseline indicator of speech. The speech task was to speak extemporaneously about a topic (e.g., first car). Gait speed and the duration of silent pauses during speaking were determined; silent pauses reflect cognitive processes involved in speech production and language planning. When speaking and walking without obstacles, gait speed decreased (relative to walking without speaking) but silent pause duration did not change (relative to seated speech). These changes are consistent with the idea that, in the easy gait task, participants placed greater value on speech pauses than on gait speed, likely due to the negative social consequences of impaired speech. In the moderate and difficult gait tasks both parameters changed: gait speed decreased and silent pauses increased. Walking while talking is a cognitively demanding task for healthy young adults, despite being a well-practiced habitual activity. These findings are consistent with the integrated model of task prioritization from Yogev-Seligmann et al. [1]. Copyright © 2018 Elsevier B.V. All rights reserved.
How our own speech rate influences our perception of others.
Bosker, Hans Rutger
2017-08-01
In conversation, our own speech and that of others follow each other in rapid succession. Effects of the surrounding context on speech perception are well documented but, despite the ubiquity of the sound of our own voice, it is unknown whether our own speech also influences our perception of other talkers. This study investigated context effects induced by our own speech through 6 experiments, specifically targeting rate normalization (i.e., perceiving phonetic segments relative to surrounding speech rate). Experiment 1 revealed that hearing prerecorded fast or slow context sentences altered the perception of ambiguous vowels, replicating earlier work. Experiment 2 demonstrated that talking at a fast or slow rate prior to target presentation also altered target perception, though the effect of preceding speech rate was reduced. Experiment 3 showed that silent talking (i.e., inner speech) at fast or slow rates did not modulate the perception of others, suggesting that the effect of self-produced speech rate in Experiment 2 arose through monitoring of the external speech signal. Experiment 4 demonstrated that, when participants were played back their own (fast/slow) speech, no reduction of the effect of preceding speech rate was observed, suggesting that the additional task of speech production may be responsible for the reduced effect in Experiment 2. Finally, Experiments 5 and 6 replicate Experiments 2 and 3 with new participant samples. Taken together, these results suggest that variation in speech production may induce variation in speech perception, thus carrying implications for our understanding of spoken communication in dialogue settings. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Issues in Perceptual Speech Analysis in Cleft Palate and Related Disorders: A Review
ERIC Educational Resources Information Center
Sell, Debbie
2005-01-01
Perceptual speech assessment is central to the evaluation of speech outcomes associated with cleft palate and velopharyngeal dysfunction. However, the complexity of this process is perhaps sometimes underestimated. To draw together the many different strands in the complex process of perceptual speech assessment and analysis, and make…
Phonologically Driven Variability: The Case of Determiners
ERIC Educational Resources Information Center
Bürki, Audrey; Laganaro, Marina; Alario, F.-Xavier
2014-01-01
Speakers usually produce words in connected speech. In such contexts, the form in which many words are uttered is influenced by the phonological properties of neighboring words. The current article examines the representations and processes underlying the production of phonologically constrained word form variations. For this purpose, we consider…
Orthography Influences the Perception and Production of Speech
ERIC Educational Resources Information Center
Rastle, Kathleen; McCormick, Samantha F.; Bayliss, Linda; Davis, Colin J.
2011-01-01
One intriguing question in language research concerns the extent to which orthographic information impacts on spoken word processing. Previous research has faced a number of methodological difficulties and has not reached a definitive conclusion. Our research addresses these difficulties by capitalizing on recent developments in the area of word…
The Universality of Acquisitional Phonology.
ERIC Educational Resources Information Center
Salus, Peter H.
This paper is concerned with the Aristotelian notion of "universal" as applied to phonological phenomena. It is claimed that speech production in children and adults, in normal and deviant speakers, and in a variety of languages, can all be described according to the same universal phonological rules which constitute the universal process of…
Encoding of Human Action in Broca's Area
ERIC Educational Resources Information Center
Fazio, Patrik; Cantagallo, Anna; Craighero, Laila; D'Ausilio, Alessandro; Roy, Alice C.; Pozzo, Thierry; Calzolari, Ferdinando; Granieri, Enrico; Fadiga, Luciano
2009-01-01
Broca's area has been considered, for over a century, as the brain centre responsible for speech production. Modern neuroimaging and neuropsychological evidence have suggested a wider functional role is played by this area. In addition to the evidence that it is involved in syntactical analysis, mathematical calculation and music processing, it…
The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing
Gow, David W.
2012-01-01
Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing. PMID:22498237
Van der Haegen, Lise; Acke, Frederic; Vingerhoets, Guy; Dhooge, Ingeborg; De Leenheer, Els; Cai, Qing; Brysbaert, Marc
2016-12-01
Auditory speech perception, speech production and reading lateralize to the left hemisphere in the majority of healthy right-handers. In this study, we investigated to what extent sensory input underlies the side of language dominance. We measured the lateralization of the three core subprocesses of language in patients who had profound hearing loss in the right ear from birth and in matched control subjects. They took part in a semantic decision listening task involving speech and sound stimuli (auditory perception), a word generation task (speech production) and a passive reading task (reading). The results show that a lack of sensory auditory input on the right side, which is strongly connected to the contralateral left hemisphere, does not lead to atypical lateralization of speech perception. Speech production and reading were also typically left lateralized in all but one patient, contradicting previous small scale studies. Other factors such as genetic constraints presumably overrule the role of sensory input in the development of (a)typical language lateralization. Copyright © 2015 Elsevier Ltd. All rights reserved.
Provine, Robert R.; Emmorey, Karen
2008-01-01
The placement of laughter in the speech of hearing individuals is not random but “punctuates” speech, occurring during pauses and at phrase boundaries where punctuation would be placed in a transcript of a conversation. For speakers, language is dominant in the competition for the vocal tract since laughter seldom interrupts spoken phrases. For users of American Sign Language, however, laughter and language do not compete in the same way for a single output channel. This study investigated whether laughter occurs simultaneously with signing, or punctuates signing, as it does speech, in 11 signed conversations (with two to five participants) that had at least one instance of audible, vocal laughter. Laughter occurred 2.7 times more often during pauses and at phrase boundaries than simultaneously with a signed utterance. Thus, the production of laughter involves higher order cognitive or linguistic processes rather than the low-level regulation of motor processes competing for a single vocal channel. In an examination of other variables, the social dynamics of deaf and hearing people were similar, with “speakers” (those signing) laughing more than their audiences and females laughing more than males. PMID:16891353
Carlson, Matthew T.; Sonderegger, Morgan; Bane, Max
2014-01-01
We explored how phonological network structure influences the age of words’ first appearance in children’s (14–50 months) speech, using a large, longitudinal corpus of spontaneous child-caregiver interactions. We represent the caregiver lexicon as a network in which each word is connected to all of its phonological neighbors, and consider both words’ local neighborhood density (degree), and also their embeddedness among interconnected neighborhoods (clustering coefficient and coreness). The larger-scale structure reflected in the latter two measures is implicated in current theories of lexical development and processing, but its role in lexical development has not yet been explored. Multilevel discrete-time survival analysis revealed that children are more likely to produce new words whose network properties support lexical access for production: high degree, but low clustering coefficient and coreness. These effects appear to be strongest at earlier ages and largely absent from 30 months on. These results suggest that both a word’s local connectivity in the lexicon and its position in the lexicon as a whole influences when it is learned, and they underscore how general lexical processing mechanisms contribute to productive vocabulary development. PMID:25089073
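A minimal sketch of the three network measures over a toy phonological-neighbor graph built with networkx: degree (neighborhood density), clustering coefficient, and coreness (k-core number). Neighbors here are same-length words differing by one substitution, a simplification of full add/delete/substitute neighborhoods.

```python
import networkx as nx

words = ["cat", "bat", "hat", "cot", "cut", "dog", "bag"]
G = nx.Graph()
G.add_nodes_from(words)
for a in words:
    for b in words:
        if a < b and len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1:
            G.add_edge(a, b)                         # one-substitution neighbors

degree = dict(G.degree())                            # local neighborhood density
clustering = nx.clustering(G)                        # interconnectedness of neighbors
coreness = nx.core_number(G)                         # embeddedness in the lexicon
for w in words:
    print(f"{w}: degree={degree[w]}, clustering={clustering[w]:.2f}, core={coreness[w]}")
```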
Keilmann, Annerose; Friese, Barbara; Lässig, Anne; Hoffmann, Vanessa
2018-04-01
The introduction of neonatal hearing screening and the increasingly early age at which children can receive a cochlear implant have intensified the need for a validated questionnaire to assess the speech production of children aged 0‒18 months. Such a questionnaire has been created: the LittlEARS® Early Speech Production Questionnaire (LEESPQ). This study aimed to validate a second, revised edition of the LEESPQ. Questionnaires were returned for 362 children with normal hearing. Completed questionnaires were analysed to determine whether the LEESPQ is reliable, prognostically accurate and internally consistent, and whether gender or multilingualism affects total scores. Total scores correlated positively with age. The LEESPQ is reliable, accurate and consistent, and independent of gender or lingual status. A norm curve was created. This second version of the LEESPQ is a valid tool to assess the speech production development of children with normal hearing aged 0‒18 months, regardless of their gender. As such, the LEESPQ may be a useful tool to monitor the development of paediatric hearing device users. The second version of the LEESPQ is a valid instrument for assessing the early speech production of children aged 0‒18 months.
Review of Visual Speech Perception by Hearing and Hearing-Impaired People: Clinical Implications
ERIC Educational Resources Information Center
Woodhouse, Lynn; Hickson, Louise; Dodd, Barbara
2009-01-01
Background: Speech perception is often considered specific to the auditory modality, despite convincing evidence that speech processing is bimodal. The theoretical and clinical roles of speech-reading for speech perception, however, have received little attention in speech-language therapy. Aims: The role of speech-read information for speech…
Method and apparatus for obtaining complete speech signals for speech recognition applications
NASA Technical Reports Server (NTRS)
Abrash, Victor (Inventor); Cesari, Federico (Inventor); Franco, Horacio (Inventor); George, Christopher (Inventor); Zheng, Jing (Inventor)
2009-01-01
The present invention relates to a method and apparatus for obtaining complete speech signals for speech recognition applications. In one embodiment, the method continuously records an audio stream comprising a sequence of frames to a circular buffer. When a user command to commence or terminate speech recognition is received, the method obtains a number of frames of the audio stream occurring before or after the user command in order to identify an augmented audio signal for speech recognition processing. In further embodiments, the method analyzes the augmented audio signal in order to locate starting and ending speech endpoints that bound at least a portion of speech to be processed for recognition. At least one of the speech endpoints is located using a Hidden Markov Model.
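A minimal sketch of the circular-buffer mechanism described in the abstract, assuming fixed-size audio frames and using Python's deque as the ring; the class and parameter names are invented for illustration and do not come from the patent:

```python
# Audio frames are recorded continuously into a fixed-size ring, so frames
# from *before* a start command can be prepended to later frames, yielding
# the "augmented audio signal" handed to the recognizer and endpointer.
from collections import deque

class RingRecorder:
    def __init__(self, capacity_frames=100):
        self.buf = deque(maxlen=capacity_frames)  # oldest frames drop off

    def push(self, frame):
        self.buf.append(frame)  # called once per incoming audio frame

    def augmented_signal(self, frames_after_command, n_before=10):
        """Prepend the last n_before buffered frames to frames captured
        after the user command."""
        return list(self.buf)[-n_before:] + list(frames_after_command)
```

Endpoint detection with a Hidden Markov Model would then run over this augmented signal to locate the starting and ending speech boundaries.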
The impact of language co-activation on L1 and L2 speech fluency.
Bergmann, Christopher; Sprenger, Simone A; Schmid, Monika S
2015-10-01
Fluent speech depends on the availability of well-established linguistic knowledge and routines for speech planning and articulation. A lack of speech fluency in late second-language (L2) learners may point to a deficiency of these representations, due to incomplete acquisition. Experiments on bilingual language processing have shown, however, that there are strong reasons to believe that multilingual speakers experience co-activation of the languages they speak. We have studied to what degree language co-activation affects fluency in the speech of bilinguals, comparing a monolingual German control group with two bilingual groups: 1) first-language (L1) attriters, who had fully acquired German before emigrating to an L2 English environment, and 2) immersed L2 learners of German (L1: English). We have analysed the temporal fluency and the incidence of disfluency markers (pauses, repetitions and self-corrections) in spontaneous film retellings. Our findings show that learners speak more slowly than controls and attriters. Also, on each count, the speech of at least one of the bilingual groups contains more disfluency markers than the retellings of the control group. Generally speaking, both bilingual groups, learners and attriters, are equally (dis)fluent and significantly more disfluent than the monolingual speakers. Given that the L1 attriters are unaffected by incomplete acquisition, we interpret these findings as evidence for language competition during speech production. Copyright © 2015. Published by Elsevier B.V.
Automatic intelligibility classification of sentence-level pathological speech
Kim, Jangwon; Kumar, Naveen; Tsiartas, Andreas; Li, Ming; Narayanan, Shrikanth S.
2014-01-01
Pathological speech usually refers to the condition of speech distortion resulting from atypicalities in voice and/or in the articulatory mechanisms owing to disease, illness or other physical or biological insult to the production system. Although automatic evaluation of speech intelligibility and quality could come in handy in these scenarios to assist experts in diagnosis and treatment design, the many sources and types of variability often make it a very challenging computational processing problem. In this work we propose novel sentence-level features to capture abnormal variation in the prosodic, voice quality and pronunciation aspects in pathological speech. In addition, we propose a post-classification posterior smoothing scheme which refines the posterior of a test sample based on the posteriors of other test samples. Finally, we perform feature-level fusions and subsystem decision fusion for arriving at a final intelligibility decision. The performances are tested on two pathological speech datasets, the NKI CCRT Speech Corpus (advanced head and neck cancer) and the TORGO database (cerebral palsy or amyotrophic lateral sclerosis), by evaluating classification accuracy without overlapping subjects’ data among training and test partitions. Results show that the feature sets of each of the voice quality subsystem, prosodic subsystem, and pronunciation subsystem, offer significant discriminating power for binary intelligibility classification. We observe that the proposed posterior smoothing in the acoustic space can further reduce classification errors. The smoothed posterior score fusion of subsystems shows the best classification performance (73.5% for unweighted, and 72.8% for weighted, average recalls of the binary classes). PMID:25414544
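The post-classification posterior smoothing idea can be sketched as follows, assuming a k-nearest-neighbor weighting in an acoustic feature space and a fixed mixing weight; the paper's exact refinement rule may differ:

```python
# Refine each test sample's class posterior toward the average posterior of
# its nearest neighbors among the other test samples (illustrative scheme).
import numpy as np

def smooth_posteriors(posteriors, features, k=5, alpha=0.5):
    """posteriors: (N, C) class posteriors; features: (N, D) acoustic features."""
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)               # exclude self-distance
    idx = np.argsort(dists, axis=1)[:, :k]        # k nearest other samples
    neighbor_mean = posteriors[idx].mean(axis=1)  # (N, C) neighbor average
    return alpha * posteriors + (1 - alpha) * neighbor_mean
```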
Choi, Ja Young; Hu, Elly R; Perrachione, Tyler K
2018-04-01
The nondeterministic relationship between speech acoustics and abstract phonemic representations imposes a challenge for listeners to maintain perceptual constancy despite the highly variable acoustic realization of speech. Talker normalization facilitates speech processing by reducing the degrees of freedom for mapping between encountered speech and phonemic representations. While this process has been proposed to facilitate the perception of ambiguous speech sounds, it is currently unknown whether talker normalization is affected by the degree of potential ambiguity in acoustic-phonemic mapping. We explored the effects of talker normalization on speech processing in a series of speeded classification paradigms, parametrically manipulating the potential for inconsistent acoustic-phonemic relationships across talkers for both consonants and vowels. Listeners identified words with varying potential acoustic-phonemic ambiguity across talkers (e.g., beet/boat vs. boot/boat) spoken by single or mixed talkers. Auditory categorization of words was always slower when listening to mixed talkers compared to a single talker, even when there was no potential acoustic ambiguity between target sounds. Moreover, the processing cost imposed by mixed talkers was greatest when words had the most potential acoustic-phonemic overlap across talkers. Models of acoustic dissimilarity between target speech sounds did not account for the pattern of results. These results suggest (a) that talker normalization incurs the greatest processing cost when disambiguating highly confusable sounds and (b) that talker normalization appears to be an obligatory component of speech perception, taking place even when the acoustic-phonemic relationships across sounds are unambiguous.
EEG oscillations entrain their phase to high-level features of speech sound.
Zoefel, Benedikt; VanRullen, Rufin
2016-01-01
Phase entrainment of neural oscillations, the brain's adjustment to rhythmic stimulation, is a central component in recent theories of speech comprehension: the alignment between brain oscillations and speech sound improves speech intelligibility. However, phase entrainment to everyday speech sound could also be explained by oscillations passively following the low-level periodicities (e.g., in sound amplitude and spectral content) of auditory stimulation, and not by an adjustment to the speech rhythm per se. Recently, using novel speech/noise mixture stimuli, we have shown that behavioral performance can entrain to speech sound even when high-level features (including phonetic information) are not accompanied by fluctuations in sound amplitude and spectral content. In the present study, we report that neural phase entrainment might underlie our behavioral findings. We observed phase-locking between electroencephalogram (EEG) and speech sound in response not only to original (unprocessed) speech but also to our constructed "high-level" speech/noise mixture stimuli. Phase entrainment to original speech and speech/noise sound did not differ in the degree of entrainment, but rather in the actual phase difference between EEG signal and sound. Phase entrainment was not abolished when speech/noise stimuli were presented in reverse (which disrupts semantic processing), indicating that acoustic (rather than linguistic) high-level features play a major role in the observed neural entrainment. Our results provide further evidence for phase entrainment as a potential mechanism underlying speech processing and segmentation, and for the involvement of high-level processes in the adjustment to the rhythm of speech. Copyright © 2015 Elsevier Inc. All rights reserved.
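A common way to quantify this kind of entrainment is the phase-locking value between band-limited EEG and the speech signal's amplitude envelope; the following sketch assumes Hilbert-derived phases and a 2–8 Hz band, which are generic analysis choices rather than the authors' exact pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(eeg, envelope, fs, band=(2.0, 8.0)):
    """PLV between an EEG channel and the speech envelope (same length, fs Hz)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    ph_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))
    ph_env = np.angle(hilbert(filtfilt(b, a, envelope)))
    return np.abs(np.mean(np.exp(1j * (ph_eeg - ph_env))))  # 0 none, 1 perfect
```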
NASA Astrophysics Data System (ADS)
Tallal, Paula; Miller, Steve L.; Bedi, Gail; Byma, Gary; Wang, Xiaoqin; Nagarajan, Srikantan S.; Schreiner, Christoph; Jenkins, William M.; Merzenich, Michael M.
1996-01-01
A speech processing algorithm was developed to create more salient versions of the rapidly changing elements in the acoustic waveform of speech that have been shown to be deficiently processed by language-learning impaired (LLI) children. LLI children received extensive daily training, over a 4-week period, with listening exercises in which all speech was translated into this synthetic form. They also received daily training with computer "games" designed to adaptively drive improvements in temporal processing thresholds. Significant improvements in speech discrimination and language comprehension abilities were demonstrated in two independent groups of LLI children.
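A very rough sketch of the two manipulations described, slowing the speech and boosting its rapid envelope changes, is given below; it assumes librosa for time stretching, and the stretch factor, emphasis band, and gain are illustrative stand-ins for a far more sophisticated original algorithm:

```python
import librosa
import numpy as np
from scipy.signal import butter, filtfilt

y, sr = librosa.load("speech.wav", sr=16000)       # assumed input file

# Amplify regions where the amplitude envelope changes quickly (3-30 Hz).
envelope = np.abs(y)                               # crude amplitude envelope
b, a = butter(2, [3 / (sr / 2), 30 / (sr / 2)], btype="band")
fast_mod = filtfilt(b, a, envelope)                # fast envelope fluctuations
emphasized = y * (1 + 2.0 * np.clip(fast_mod, 0, None))

# Stretch the result to roughly 1.5x its original duration.
processed = librosa.effects.time_stretch(emphasized, rate=0.67)
```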
ERIC Educational Resources Information Center
Ashwell, Tim; Elam, Jesse R.
2017-01-01
The ultimate aim of our research project was to use the Google Web Speech API to automate scoring of elicited imitation (EI) tests. However, in order to achieve this goal, we had to take a number of preparatory steps. We needed to assess how accurate this speech recognition tool is in recognizing native speakers' production of the test items; we…
Don’t speak too fast! Processing of fast rate speech in children with specific language impairment
Bedoin, Nathalie; Krifi-Papoz, Sonia; Herbillon, Vania; Caillot-Bascoul, Aurélia; Gonzalez-Monge, Sibylle; Boulenger, Véronique
2018-01-01
Background Perception of speech rhythm requires the auditory system to track temporal envelope fluctuations, which carry syllabic and stress information. Reduced sensitivity to rhythmic acoustic cues has been evidenced in children with Specific Language Impairment (SLI), impeding syllabic parsing and speech decoding. Our study investigated whether these children experience specific difficulties processing fast rate speech as compared with typically developing (TD) children. Method Sixteen French children with SLI (8–13 years old) with mainly expressive phonological disorders and with preserved comprehension and 16 age-matched TD children performed a judgment task on sentences produced 1) at normal rate, 2) at fast rate or 3) time-compressed. Sensitivity index (d′) to semantically incongruent sentence-final words was measured. Results Overall, children with SLI perform significantly worse than TD children. Importantly, as revealed by the significant Group × Speech Rate interaction, children with SLI find it more challenging than TD children to process both naturally and artificially accelerated speech. The two groups do not significantly differ in normal rate speech processing. Conclusion In agreement with rhythm-processing deficits in atypical language development, our results suggest that children with SLI face difficulties adjusting to rapid speech rate. These findings are interpreted in light of temporal sampling and prosodic phrasing frameworks and of oscillatory mechanisms underlying speech perception. PMID:29373610
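The sensitivity index d′ reported above is standard signal-detection arithmetic: the z-transformed hit rate minus the z-transformed false-alarm rate. A short sketch with invented response counts:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    def rate(x, n):  # small correction so rates of 0 or 1 stay finite
        return (x + 0.5) / (n + 1.0)
    hit_rate = rate(hits, hits + misses)
    fa_rate = rate(false_alarms, false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(18, 2, 4, 16))  # e.g., one child's incongruity judgments
```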
Brozovich, Faith A; Heimberg, Richard G
2013-12-01
The present study investigated whether post-event processing (PEP) involving mental imagery about a past speech is particularly detrimental for socially anxious individuals who are currently anticipating giving a speech. One hundred fourteen high and low socially anxious participants were told they would give a 5 min impromptu speech at the end of the experimental session. They were randomly assigned to one of three manipulation conditions: post-event processing about a past speech incorporating imagery (PEP-Imagery), semantic post-event processing about a past speech (PEP-Semantic), or a control condition, (n=19 per experimental group, per condition [high vs low socially anxious]). After the condition inductions, individuals' anxiety, their predictions of performance in the anticipated speech, and their interpretations of other ambiguous social events were measured. Consistent with predictions, high socially anxious individuals in the PEP-Imagery condition displayed greater anxiety than individuals in the other conditions immediately following the induction and before the anticipated speech task. They also interpreted ambiguous social scenarios in a more socially anxious manner than socially anxious individuals in the control condition. High socially anxious individuals made more negative predictions about their upcoming speech performance than low anxious participants in all conditions. The impact of imagery during post-event processing in social anxiety and its implications are discussed. © 2013.
Derix, Johanna; Iljina, Olga; Weiske, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Ball, Tonio
2014-01-01
Exchange of thoughts by means of expressive speech is fundamental to human communication. However, the neuronal basis of real-life communication in general, and of verbal exchange of ideas in particular, has rarely been studied until now. Here, our aim was to establish an approach for exploring the neuronal processes related to cognitive “idea” units (IUs) in conditions of non-experimental speech production. We investigated whether such units corresponding to single, coherent chunks of speech with syntactically-defined borders, are useful to unravel the neuronal mechanisms underlying real-world human cognition. To this aim, we employed simultaneous electrocorticography (ECoG) and video recordings obtained in pre-neurosurgical diagnostics of epilepsy patients. We transcribed non-experimental, daily hospital conversations, identified IUs in transcriptions of the patients' speech, classified the obtained IUs according to a previously-proposed taxonomy focusing on memory content, and investigated the underlying neuronal activity. In each of our three subjects, we were able to collect a large number of IUs which could be assigned to different functional IU subclasses with a high inter-rater agreement. Robust IU-onset-related changes in spectral magnitude could be observed in high gamma frequencies (70–150 Hz) on the inferior lateral convexity and in the superior temporal cortex regardless of the IU content. A comparison of the topography of these responses with mouth motor and speech areas identified by electrocortical stimulation showed that IUs might be of use for extraoperative mapping of eloquent cortex (average sensitivity: 44.4%, average specificity: 91.1%). High gamma responses specific to memory-related IU subclasses were observed in the inferior parietal and prefrontal regions. IU-based analysis of ECoG recordings during non-experimental communication thus elicits topographically- and functionally-specific effects. We conclude that segmentation of spontaneous real-world speech in linguistically-motivated units is a promising strategy for elucidating the neuronal basis of mental processing during non-experimental communication. PMID:24982625
The Segmentation Problem in the Study of Impromptu Speech.
ERIC Educational Resources Information Center
Loman, Bengt
A fundamental problem in the study of spontaneous speech is how to segment it for analysis. The segments should be relevant for the study of linguistic structures, speech planning, speech production, or communication strategies. Operational rules for segmentation should consider a wide variety of criteria and be hierarchically ordered. This is…
ERIC Educational Resources Information Center
McGrath, Lauren M.; Hutaff-Lee, Christa; Scott, Ashley; Boada, Richard; Shriberg, Lawrence D.; Pennington, Bruce F.
2008-01-01
This study focuses on the comorbidity between attention-deficit/hyperactivity disorder (ADHD) symptoms and speech sound disorder (SSD). SSD is a developmental disorder characterized by speech production errors that impact intelligibility. Previous research addressing this comorbidity has typically used heterogeneous groups of speech-language…
Perceptual Bias in Speech Error Data Collection: Insights from Spanish Speech Errors
ERIC Educational Resources Information Center
Perez, Elvira; Santiago, Julio; Palma, Alfonso; O'Seaghdha, Padraig G.
2007-01-01
This paper studies the reliability and validity of naturalistic speech errors as a tool for language production research. Possible biases when collecting naturalistic speech errors are identified and specific predictions derived. These patterns are then contrasted with published reports from Germanic languages (English, German and Dutch) and one…
Central Presbycusis: A Review and Evaluation of the Evidence
Humes, Larry E.; Dubno, Judy R.; Gordon-Salant, Sandra; Lister, Jennifer J.; Cacace, Anthony T.; Cruickshanks, Karen J.; Gates, George A.; Wilson, Richard H.; Wingfield, Arthur
2018-01-01
Background The authors reviewed the evidence regarding the existence of age-related declines in central auditory processes and the consequences of any such declines for everyday communication. Purpose This report summarizes the review process and presents its findings. Data Collection and Analysis The authors reviewed 165 articles germane to central presbycusis. Of the 165 articles, 132 articles with a focus on human behavioral measures for either speech or nonspeech stimuli were selected for further analysis. Results For 76 smaller-scale studies of speech understanding in older adults reviewed, the following findings emerged: (1) the three most commonly studied behavioral measures were speech in competition, temporally distorted speech, and binaural speech perception (especially dichotic listening); (2) for speech in competition and temporally degraded speech, hearing loss proved to have a significant negative effect on performance in most of the laboratory studies; (3) significant negative effects of age, unconfounded by hearing loss, were observed in most of the studies of speech in competing speech, time-compressed speech, and binaural speech perception; and (4) the influence of cognitive processing on speech understanding has been examined much less frequently, but when included, significant positive associations with speech understanding were observed. For 36 smaller-scale studies of the perception of nonspeech stimuli by older adults reviewed, the following findings emerged: (1) the three most frequently studied behavioral measures were gap detection, temporal discrimination, and temporal-order discrimination or identification; (2) hearing loss was seldom a significant factor; and (3) negative effects of age were almost always observed. For 18 studies reviewed that made use of test batteries and medium-to-large sample sizes, the following findings emerged: (1) all studies included speech-based measures of auditory processing; (2) 4 of the 18 studies included nonspeech stimuli; (3) for the speech-based measures, monaural speech in a competing-speech background, dichotic speech, and monaural time-compressed speech were investigated most frequently; (4) the most frequently used tests were the Synthetic Sentence Identification (SSI) test with Ipsilateral Competing Message (ICM), the Dichotic Sentence Identification (DSI) test, and time-compressed speech; (5) many of these studies using speech-based measures reported significant effects of age, but most of these studies were confounded by declines in hearing, cognition, or both; (6) for nonspeech auditory-processing measures, the focus was on measures of temporal processing in all four studies; (7) effects of cognition on nonspeech measures of auditory processing have been studied less frequently, with mixed results, whereas the effects of hearing loss on performance were minimal due to judicious selection of stimuli; and (8) there is a paucity of observational studies using test batteries and longitudinal designs. Conclusions Based on this review of the scientific literature, there is insufficient evidence to confirm the existence of central presbycusis as an isolated entity. On the other hand, recent evidence has been accumulating in support of the existence of central presbycusis as a multifactorial condition that involves age- and/or disease-related changes in the auditory system and in the brain. Moreover, there is a clear need for additional research in this area. PMID:22967738
Aspect level sentiment analysis using machine learning
NASA Astrophysics Data System (ADS)
Shubham, D.; Mithil, P.; Shobharani, Meesala; Sumathy, S.
2017-11-01
In the modern world, the growth of the web and smartphones has increased the use of online shopping. Overall feedback about a product is generated with the help of sentiment analysis using text processing. Opinion mining, or sentiment analysis, is used to collect and categorize product reviews. The proposed system uses aspect-level detection, in which features are extracted from the datasets. The system performs pre-processing operations such as tokenization, part-of-speech tagging and lemmatization on the data to find meaningful information, which is used to detect the polarity level and assign a rating to the product. The proposed model focuses on aspects to produce accurate results while filtering out spam reviews.
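The pre-processing steps named above (tokenization, part-of-speech tagging, lemmatization) can be sketched with NLTK; the review sentence is invented, and extracting nouns as candidate aspect terms is a simplification of aspect-level detection:

```python
# First run: nltk.download("punkt"), nltk.download("averaged_perceptron_tagger"),
# nltk.download("wordnet") to fetch the required models.
import nltk
from nltk.stem import WordNetLemmatizer

review = "The battery life is great but the camera quality disappoints."
tokens = nltk.word_tokenize(review)                  # tokenization
tagged = nltk.pos_tag(tokens)                        # part-of-speech tagging
lemmatizer = WordNetLemmatizer()
aspects = [lemmatizer.lemmatize(w.lower())           # lemmatization
           for w, tag in tagged if tag.startswith("NN")]
print(aspects)  # candidate aspect terms, e.g. ['battery', 'life', 'camera', 'quality']
```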
Tobacco Advertising and the First Amendment.
ERIC Educational Resources Information Center
Hanauer, Peter; Pertschuk, Michael
1988-01-01
Presents an argument for the recently proposed ban on advertising of tobacco products. Differentiates between commercial speech, used to sell products and services, and political speech, which relates to ideas. Argues that tobacco is the only legal product that is dangerous when used as intended, therefore First Amendment rights regarding other…
Automatic Speech Recognition from Neural Signals: A Focused Review.
Herff, Christian; Schultz, Tanja
2016-01-01
Speech interfaces have become widely accepted and are nowadays integrated into various real-life applications and devices. They have become a part of our daily life. However, speech interfaces presume the ability to produce intelligible speech, which might be impossible due to loud environments, concerns about disturbing bystanders, or an inability to produce speech (i.e., patients suffering from locked-in syndrome). For these reasons it would be highly desirable not to speak but simply to imagine saying words or sentences. Interfaces based on imagined speech would enable fast and natural communication without the need for audible speech and would give a voice to otherwise mute people. This focused review analyzes the potential of different brain imaging techniques to recognize speech from neural signals by applying Automatic Speech Recognition technology. We argue that modalities based on metabolic processes, such as functional Near Infrared Spectroscopy and functional Magnetic Resonance Imaging, are less suited for Automatic Speech Recognition from neural signals due to low temporal resolution but are very useful for the investigation of the underlying neural mechanisms involved in speech processes. In contrast, electrophysiologic activity is fast enough to capture speech processes and is therefore better suited for ASR. Our experimental results indicate the potential of these signals for speech recognition from neural data with a focus on invasively measured brain activity (electrocorticography). As a first example of Automatic Speech Recognition techniques applied to neural signals, we discuss the Brain-to-text system.
Schoof, Tim; Rosen, Stuart
2014-01-01
Normal-hearing older adults often experience increased difficulties understanding speech in noise. In addition, they benefit less from amplitude fluctuations in the masker. These difficulties may be attributed to an age-related auditory temporal processing deficit. However, a decline in cognitive processing likely also plays an important role. This study examined the relative contribution of declines in both auditory and cognitive processing to the speech in noise performance in older adults. Participants included older (60–72 years) and younger (19–29 years) adults with normal hearing. Speech reception thresholds (SRTs) were measured for sentences in steady-state speech-shaped noise (SS), 10-Hz sinusoidally amplitude-modulated speech-shaped noise (AM), and two-talker babble. In addition, auditory temporal processing abilities were assessed by measuring thresholds for gap, amplitude-modulation, and frequency-modulation detection. Measures of processing speed, attention, working memory, Text Reception Threshold (a visual analog of the SRT), and reading ability were also obtained. Of primary interest was the extent to which the various measures correlate with listeners' abilities to perceive speech in noise. SRTs were significantly worse for older adults in the presence of two-talker babble but not SS and AM noise. In addition, older adults showed some cognitive processing declines (working memory and processing speed) although no declines in auditory temporal processing. However, working memory and processing speed did not correlate significantly with SRTs in babble. Despite declines in cognitive processing, normal-hearing older adults do not necessarily have problems understanding speech in noise as SRTs in SS and AM noise did not differ significantly between the two groups. Moreover, while older adults had higher SRTs in two-talker babble, this could not be explained by age-related cognitive declines in working memory or processing speed. PMID:25429266
ERIC Educational Resources Information Center
Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram
2009-01-01
Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands…
Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise
2012-01-01
Purpose To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost four years later. Method Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 and followed up at 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors were used to predict later speech sound production, PA, and literacy outcomes. Results Group averages revealed below-average school-age articulation scores and low-average PA, but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom more than 10% of their speech sound errors were atypical had lower PA and literacy scores at school-age than children who produced fewer than 10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores. Conclusions Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschool may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschool distortions may be resistant to change over time, leading to persisting speech sound production problems. PMID:23184137
Ertmer, David J.; Jung, Jongmin
2012-01-01
Background Evidence of auditory-guided speech development can be heard as the prelinguistic vocalizations of young cochlear implant recipients become increasingly complex, phonetically diverse, and speech-like. In research settings, these changes are most often documented by collecting and analyzing speech samples. Sampling, however, may be too time-consuming and impractical for widespread use in clinical settings. The Conditioned Assessment of Speech Production (CASP; Ertmer & Stoel-Gammon, 2008) is an easily administered and time-efficient alternative to speech sample analysis. The current investigation examined the concurrent validity of the CASP and data obtained from speech samples recorded at the same intervals. Methods Nineteen deaf children who received CIs before their third birthdays participated in the study. Speech samples and CASP scores were gathered at 6, 12, 18, and 24 months post-activation. Correlation analyses were conducted to assess the concurrent validity of CASP scores and data from samples. Results CASP scores showed strong concurrent validity with scores from speech samples gathered across all recording sessions (6–24 months). Conclusions The CASP was found to be a valid, reliable, and time-efficient tool for assessing progress in vocal development during young CI recipients’ first 2 years of device experience. PMID:22628109
Turgeon, Christine; Prémont, Amélie; Trudeau-Fisette, Paméla; Ménard, Lucie
2015-05-01
Studies have reported strong links between speech production and perception. We aimed to evaluate the role of long- and short-term auditory feedback alteration on speech production. Eleven adults with normal hearing (controls) and 17 cochlear implant (CI) users (7 pre-lingually deaf and 10 post-lingually deaf adults) were recruited. Short-term auditory feedback deprivation was induced by turning off the CI or by providing masking noise. Acoustic and articulatory measures were obtained during the production of /u/, with and without a tube inserted between the lips (perturbation), and with and without auditory feedback. F1 values were significantly different between the implant OFF and ON conditions for the pre-lingually deaf participants. In the absence of auditory feedback, the pre-lingually deaf participants moved the tongue more forward. Thus, a lack of normal auditory experience of speech may affect the internal representation of a vowel.
Conceptual clusters in figurative language production.
Corts, Daniel P; Meyers, Kristina
2002-07-01
Although most prior research on figurative language examines comprehension, several recent studies on the production of such language have proved to be informative. One of the most noticeable traits of figurative language production is that it is produced at a somewhat random rate with occasional bursts of highly figurative speech (e.g., Corts & Pollio, 1999). The present article seeks to extend these findings by observing production during speech that involves a very high base rate of figurative language, making statistically defined bursts difficult to detect. In an analysis of three Baptist sermons, burst-like clusters of figurative language were identified. Further study indicated that these clusters largely involve a central root metaphor that represents the topic under consideration. An interaction of the coherence, along with a conceptual understanding of a topic and the relative importance of the topic to the purpose of the speech, is offered as the most likely explanation for the clustering of figurative language in natural speech.
Near-Term Fetuses Process Temporal Features of Speech
ERIC Educational Resources Information Center
Granier-Deferre, Carolyn; Ribeiro, Aurelie; Jacquet, Anne-Yvonne; Bassereau, Sophie
2011-01-01
The perception of speech and music requires processing of variations in spectra and amplitude over different time intervals. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, but whether they can process complex auditory streams, such as speech sequences and more specifically their temporal variations, fast or…
The Downside of Greater Lexical Influences: Selectively Poorer Speech Perception in Noise
ERIC Educational Resources Information Center
Lam, Boji P. W.; Xie, Zilong; Tessmer, Rachel; Chandrasekaran, Bharath
2017-01-01
Purpose: Although lexical information influences phoneme perception, the extent to which reliance on lexical information enhances speech processing in challenging listening environments is unclear. We examined the extent to which individual differences in lexical influences on phonemic processing impact speech processing in maskers containing…
Sleep Disrupts High-Level Speech Parsing Despite Significant Basic Auditory Processing.
Makov, Shiri; Sharon, Omer; Ding, Nai; Ben-Shachar, Michal; Nir, Yuval; Zion Golumbic, Elana
2017-08-09
The extent to which the sleeping brain processes sensory information remains unclear. This is particularly true for continuous and complex stimuli such as speech, in which information is organized into hierarchically embedded structures. Recently, novel metrics for assessing the neural representation of continuous speech have been developed using noninvasive brain recordings that have thus far only been tested during wakefulness. Here we investigated, for the first time, the sleeping brain's capacity to process continuous speech at different hierarchical levels using a newly developed Concurrent Hierarchical Tracking (CHT) approach that allows monitoring the neural representation and processing-depth of continuous speech online. Speech sequences were compiled with syllables, words, phrases, and sentences occurring at fixed time intervals such that different linguistic levels correspond to distinct frequencies. This enabled us to distinguish their neural signatures in brain activity. We compared the neural tracking of intelligible versus unintelligible (scrambled and foreign) speech across states of wakefulness and sleep using high-density EEG in humans. We found that neural tracking of stimulus acoustics was comparable across wakefulness and sleep and similar across all conditions regardless of speech intelligibility. In contrast, neural tracking of higher-order linguistic constructs (words, phrases, and sentences) was only observed for intelligible speech during wakefulness and could not be detected at all during nonrapid eye movement or rapid eye movement sleep. These results suggest that, whereas low-level auditory processing is relatively preserved during sleep, higher-level hierarchical linguistic parsing is severely disrupted, thereby revealing the capacity and limits of language processing during sleep. SIGNIFICANCE STATEMENT Despite the persistence of some sensory processing during sleep, it is unclear whether high-level cognitive processes such as speech parsing are also preserved. We used a novel approach for studying the depth of speech processing across wakefulness and sleep while tracking neuronal activity with EEG. We found that responses to the auditory sound stream remained intact; however, the sleeping brain did not show signs of hierarchical parsing of the continuous stream of syllables into words, phrases, and sentences. The results suggest that sleep imposes a functional barrier between basic sensory processing and high-level cognitive processing. This paradigm also holds promise for studying residual cognitive abilities in a wide array of unresponsive states. Copyright © 2017 the authors.
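The frequency-tagging logic behind the CHT approach can be illustrated with synthetic data: if syllables arrive at a fixed 4 Hz and phrases at 1 Hz, neural tracking of each level appears as a spectral peak at the corresponding frequency. The rates and the simulated "EEG" below are illustrative assumptions:

```python
import numpy as np

fs, dur = 250, 60.0                        # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
eeg = (np.sin(2 * np.pi * 4 * t)           # syllable-rate tracking
       + 0.5 * np.sin(2 * np.pi * 1 * t)   # phrase-rate tracking
       + np.random.randn(t.size))          # background noise

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f0 in (0.5, 1.0, 2.0, 4.0):            # sentence, phrase, word, syllable rates
    print(f"{f0} Hz peak:", spectrum[np.argmin(np.abs(freqs - f0))])
```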
ERIC Educational Resources Information Center
Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve
2018-01-01
To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…
Temporal processing of speech in a time-feature space
NASA Astrophysics Data System (ADS)
Avendano, Carlos
1997-09-01
The performance of speech communication systems often degrades under realistic environmental conditions. Adverse environmental factors include additive noise sources, room reverberation, and transmission channel distortions. This work studies the processing of speech in the temporal-feature or modulation spectrum domain, aiming to alleviate the effects of such disturbances. Speech reflects the geometry of the vocal organs, and the linguistically dominant component is in the shape of the vocal tract. At any given point in time, the shape of the vocal tract is reflected in the short-time spectral envelope of the speech signal. The rate of change of the vocal tract shape appears to be important for the identification of linguistic components. This rate of change, or the rate of change of the short-time spectral envelope, can be described by the modulation spectrum, i.e. the spectrum of the time trajectories described by the short-time spectral envelope. For a wide range of frequency bands, the modulation spectrum of speech exhibits a maximum at about 4 Hz, the average syllabic rate. Disturbances often have modulation frequency components outside the speech range, and could in principle be attenuated without significantly affecting the range with relevant linguistic information. Early efforts to exploit the modulation spectrum domain (temporal processing), such as the dynamic cepstrum or RASTA processing, used ad hoc designs and appear to be suboptimal. As a major contribution, in this dissertation we aim for a systematic, data-driven design of temporal processing. First we analytically derive and discuss some properties and merits of temporal processing for speech signals. We attempt to formalize the concept and provide a theoretical background which has been lacking in the field. In the experimental part we apply temporal processing to a number of problems including adaptive noise reduction in cellular telephone environments, reduction of reverberation for speech enhancement, and improvements in automatic recognition of speech degraded by linear distortions and reverberation.
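As a sketch of the quantity just defined, the modulation spectrum can be computed by taking the spectrum of each band's short-time log-envelope trajectory; the STFT parameters below are illustrative choices, not the dissertation's settings:

```python
import numpy as np
from scipy.signal import stft

def modulation_spectrum(x, fs, nperseg=400, hop=160):
    """Average modulation spectrum of signal x; peaks near 4 Hz for clean speech."""
    f, t, Z = stft(x, fs, nperseg=nperseg, noverlap=nperseg - hop)
    env = np.log(np.abs(Z) + 1e-10)                  # log envelope per band
    frame_rate = fs / hop                            # envelope sampling rate (Hz)
    env = env - env.mean(axis=1, keepdims=True)      # remove each band's DC
    mod = np.abs(np.fft.rfft(env, axis=1))
    mod_freqs = np.fft.rfftfreq(env.shape[1], 1 / frame_rate)
    return mod_freqs, mod.mean(axis=0)               # averaged over bands
```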
Multi-sensory learning and learning to read.
Blomert, Leo; Froyen, Dries
2010-09-01
The basis of literacy acquisition in alphabetic orthographies is the learning of the associations between the letters and the corresponding speech sounds. In spite of this primacy in learning to read, there is only scarce knowledge on how this audiovisual integration process works and which mechanisms are involved. Recent electrophysiological studies of letter-speech sound processing have revealed that normally developing readers take years to automate these associations and dyslexic readers hardly exhibit automation of these associations. It is argued that the reason for this effortful learning may reside in the nature of the audiovisual process that is recruited for the integration of in principle arbitrarily linked elements. It is shown that letter-speech sound integration does not resemble the processes involved in the integration of natural audiovisual objects such as audiovisual speech. The automatic symmetrical recruitment of the assumedly uni-sensory visual and auditory cortices in audiovisual speech integration does not occur for letter and speech sound integration. It is also argued that letter-speech sound integration only partly resembles the integration of arbitrarily linked unfamiliar audiovisual objects. Letter-sound integration and artificial audiovisual objects share the necessity of a narrow time window for integration to occur. However, they differ from these artificial objects, because they constitute an integration of partly familiar elements which acquire meaning through the learning of an orthography. Although letter-speech sound pairs share similarities with audiovisual speech processing as well as with unfamiliar, arbitrary objects, it seems that letter-speech sound pairs develop into unique audiovisual objects that furthermore have to be processed in a unique way in order to enable fluent reading and thus very likely recruit other neurobiological learning mechanisms than the ones involved in learning natural or arbitrary unfamiliar audiovisual associations. Copyright 2010 Elsevier B.V. All rights reserved.
Neural Tuning to Low-Level Features of Speech throughout the Perisylvian Cortex.
Berezutskaya, Julia; Freudenburg, Zachary V; Güçlü, Umut; van Gerven, Marcel A J; Ramsey, Nick F
2017-08-16
Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus toward anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study, we investigate what happens to these neural representations past the superior temporal gyrus and how they engage higher-level language processing areas such as inferior frontal gyrus. We used low-level sound features to model neural responses to speech outside of the primary auditory cortex. Two complementary imaging techniques were used with human participants (both males and females): electrocorticography (ECoG) and fMRI. Both imaging techniques showed tuning of the perisylvian cortex to low-level speech features. With ECoG, we found evidence of propagation of the temporal features of speech sounds along the ventral pathway of language processing in the brain toward inferior frontal gyrus. Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex toward inferior frontal gyrus were associated with linguistic features such as voice onset time, duration of the formant transitions, and phoneme, syllable, and word boundaries. The present findings provide the groundwork for a comprehensive bottom-up account of speech comprehension in the human brain. SIGNIFICANCE STATEMENT We know that, during natural speech comprehension, a broad network of perisylvian cortical regions is involved in sound and language processing. Here, we investigated the tuning to low-level sound features within these regions using neural responses to a short feature film. We also looked at whether the tuning organization along these brain regions showed any parallel to the hierarchy of language structures in continuous speech. Our results show that low-level speech features propagate throughout the perisylvian cortex and potentially contribute to the emergence of "coarse" speech representations in inferior frontal gyrus typically associated with high-level language processing. These findings add to the previous work on auditory processing and underline a distinctive role of inferior frontal gyrus in natural speech comprehension. Copyright © 2017 the authors.
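The general strategy of predicting neural responses from low-level sound features can be sketched as a regularized linear encoding model; the synthetic data, feature dimensionality, and ridge penalty below are stand-ins, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((6000, 32))                 # time x acoustic features
y = X @ rng.standard_normal(32) + 0.5 * rng.standard_normal(6000)  # "response"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)  # keep time order
model = Ridge(alpha=10.0).fit(X_tr, y_tr)
print("held-out prediction r:", np.corrcoef(model.predict(X_te), y_te)[0, 1])
```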
Terband, H.; Maassen, B.; Guenther, F.H.; Brumberg, J.
2014-01-01
Background/Purpose Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. Method In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Results Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. Conclusions These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. PMID:24491630
Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela
2015-01-01
Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes. PMID:26233047
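Spectrally rippled noise like that used in SRD and SMD tasks can be sketched by imposing a sinusoidal log-amplitude ripple across log frequency on a random-phase spectrum; the reference frequency, ripple density, and depth below are illustrative parameters:

```python
import numpy as np

def ripple_noise(fs=44100, dur=0.5, density_cpo=2.0, depth_db=20.0, phase=0.0):
    """Noise with a sinusoidal spectral ripple of density_cpo cycles/octave."""
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    octaves = np.log2(np.maximum(freqs, 1.0) / 350.0)   # octaves re 350 Hz
    gain_db = (depth_db / 2) * np.sin(2 * np.pi * density_cpo * octaves + phase)
    spec = 10 ** (gain_db / 20) * np.exp(2j * np.pi * np.random.rand(freqs.size))
    return np.fft.irfft(spec, n)
```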
Dynamic action units slip in speech production errors
Goldstein, Louis; Pouplier, Marianne; Chen, Larissa; Saltzman, Elliot; Byrd, Dani
2008-01-01
In the past, the nature of the compositional units proposed for spoken language has largely diverged from the types of control units pursued in the domains of other skilled motor tasks. A classic source of evidence as to the units structuring speech has been patterns observed in speech errors – “slips of the tongue”. The present study reports, for the first time, on kinematic data from tongue and lip movements during speech errors elicited in the laboratory using a repetition task. Our data are consistent with the hypothesis that speech production results from the assembly of dynamically defined action units – gestures – in a linguistically structured environment. The experimental results support both the presence of gestural units and the dynamical properties of these units and their coordination. This study of speech articulation shows that it is possible to develop a principled account of spoken language within a more general theory of action. PMID:16822494
Toutios, Asterios; Narayanan, Shrikanth S
2016-01-01
Real-time magnetic resonance imaging (rtMRI) of the moving vocal tract during running speech production is an important emerging tool for speech production research providing dynamic information of a speaker's upper airway from the entire mid-sagittal plane or any other scan plane of interest. There have been several advances in the development of speech rtMRI and corresponding analysis tools, and their application to domains such as phonetics and phonological theory, articulatory modeling, and speaker characterization. An important recent development has been the open release of a database that includes speech rtMRI data from five male and five female speakers of American English each producing 460 phonetically balanced sentences. The purpose of the present paper is to give an overview and outlook of the advances in rtMRI as a tool for speech research and technology development.