Sample records for speech act coding

  1. Automating annotation of information-giving for analysis of clinical conversation.

    PubMed

    Mayfield, Elijah; Laws, M Barton; Wilson, Ira B; Penstein Rosé, Carolyn

    2014-02-01

    Coding of clinical communication for fine-grained features such as speech acts has produced a substantial literature. However, annotation by humans is laborious and expensive, limiting application of these methods. We aimed to show that through machine learning, computers could code certain categories of speech acts with sufficient reliability to make useful distinctions among clinical encounters. The data were transcripts of 415 routine outpatient visits of HIV patients which had previously been coded for speech acts using the Generalized Medical Interaction Analysis System (GMIAS); 50 had also been coded for larger scale features using the Comprehensive Analysis of the Structure of Encounters System (CASES). We aggregated selected speech acts into information-giving and requesting, then trained the machine to automatically annotate using logistic regression classification. We evaluated reliability by per-speech act accuracy. We used multiple regression to predict patient reports of communication quality from post-visit surveys using the patient and provider information-giving to information-requesting ratio (briefly, information-giving ratio) and patient gender. Automated coding produces moderate reliability with human coding (accuracy 71.2%, κ=0.57), with high correlation between machine and human prediction of the information-giving ratio (r=0.96). The regression significantly predicted four of five patient-reported measures of communication quality (r=0.263-0.344). The information-giving ratio is a useful and intuitive measure for predicting patient perception of provider-patient communication quality. These predictions can be made with automated annotation, which is a practical option for studying large collections of clinical encounters with objectivity, consistency, and low cost, providing greater opportunity for training and reflection for care providers.
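
The two summary statistics this abstract leans on, the kappa agreement between human and machine coders and the information-giving ratio, can be sketched in a few lines. A minimal illustration with invented toy labels; the GMIAS categories and the paper's actual feature set are not reproduced:

```python
# Toy sketch: Cohen's kappa (chance-corrected coder agreement) and the
# information-giving to information-requesting ratio. Labels are illustrative.
from collections import Counter

def cohens_kappa(human, machine):
    """Agreement between two coders, corrected for chance agreement."""
    assert len(human) == len(machine)
    n = len(human)
    observed = sum(h == m for h, m in zip(human, machine)) / n
    h_counts, m_counts = Counter(human), Counter(machine)
    labels = set(human) | set(machine)
    expected = sum(h_counts[l] * m_counts[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

def information_giving_ratio(acts):
    """Ratio of information-giving to information-requesting speech acts."""
    c = Counter(acts)
    return c["give"] / c["request"]

human   = ["give", "give", "request", "give", "other", "request"]
machine = ["give", "other", "request", "give", "other", "give"]
print(round(cohens_kappa(human, machine), 3))   # moderate agreement on this toy pair
print(information_giving_ratio(human))          # 3 "give" / 2 "request" = 1.5
```

In the paper the classifier supplying the machine labels is logistic regression over transcript features; any such classifier's output can be scored against human codes this way.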

  2. Analysis of Parent, Teacher, and Consultant Speech Exchanges and Educational Outcomes of Students With Autism During COMPASS Consultation.

    PubMed

    Ruble, Lisa; Birdwhistell, Jessie; Toland, Michael D; McGrew, John H

    2011-01-01

    The significant increase in the numbers of students with autism, combined with the need for better trained teachers (National Research Council, 2001), calls for research on the effectiveness of alternative methods, such as consultation, that have the potential to improve service delivery. Data from 2 randomized controlled single-blind trials indicate that an autism-specific consultation planning framework known as the collaborative model for promoting competence and success (COMPASS) is effective in increasing child Individual Education Program (IEP) outcomes (Ruble, Dalrymple, & McGrew, 2010; Ruble, McGrew, & Toland, 2011). In this study, we describe the verbal interactions, defined as speech acts and speech act exchanges, that take place during COMPASS consultation, and examine the associations between speech exchanges and child outcomes. We applied the Psychosocial Processes Coding Scheme (Leaper, 1991) to code speech acts. Speech act exchanges were overwhelmingly affiliative and failed to show statistically significant relationships with child IEP outcomes and teacher adherence, but did correlate positively with IEP quality.

  3. Analysis of Parent, Teacher, and Consultant Speech Exchanges and Educational Outcomes of Students With Autism During COMPASS Consultation

    PubMed Central

    RUBLE, LISA; BIRDWHISTELL, JESSIE; TOLAND, MICHAEL D.; MCGREW, JOHN H.

    2011-01-01

    The significant increase in the numbers of students with autism, combined with the need for better trained teachers (National Research Council, 2001), calls for research on the effectiveness of alternative methods, such as consultation, that have the potential to improve service delivery. Data from 2 randomized controlled single-blind trials indicate that an autism-specific consultation planning framework known as the collaborative model for promoting competence and success (COMPASS) is effective in increasing child Individual Education Program (IEP) outcomes (Ruble, Dalrymple, & McGrew, 2010; Ruble, McGrew, & Toland, 2011). In this study, we describe the verbal interactions, defined as speech acts and speech act exchanges, that take place during COMPASS consultation, and examine the associations between speech exchanges and child outcomes. We applied the Psychosocial Processes Coding Scheme (Leaper, 1991) to code speech acts. Speech act exchanges were overwhelmingly affiliative and failed to show statistically significant relationships with child IEP outcomes and teacher adherence, but did correlate positively with IEP quality. PMID:22639523

  4. School Dress Codes v. The First Amendment: Ganging up on Student Attire.

    ERIC Educational Resources Information Center

    Jahn, Karon L.

    Do school dress codes written with the specific purpose of limiting individual dress preferences, including dress associated with gangs, infringe on speech freedoms granted by the First Amendment of the U.S. Constitution? Although the Supreme Court has extended its protection of political speech to nonverbal acts of communication, it has…

  5. Speech Acts during Friends' and Non-Friends' Spontaneous Conversations in Preschool Dyads with High-Functioning Autism Spectrum Disorder versus Typical Development

    ERIC Educational Resources Information Center

    Bauminger-Zviely, Nirit; Golan-Itshaky, Adi; Tubul-Lavy, Gila

    2017-01-01

    In this study, we videotaped two 10-min. free-play interactions and coded speech acts (SAs) in peer talk of 51 preschoolers (21 ASD, 30 typical), interacting with friend versus non-friend partners. Groups were matched for maternal education, IQ (verbal/nonverbal), and CA. We compared SAs by group (ASD/typical), by partner's friendship status…

  6. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    PubMed

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. 
Using magnetoencephalography, we demonstrate anatomically distinct cortical representations of modulated noise in normal-hearing and hearing-impaired listeners. This work provides the first link among hearing thresholds, the amplitude of cortical representations of modulated sounds, and the ability to understand speech in modulated background noise. In light of previous work, we propose that magnified cortical representations of modulated sounds disrupt the separation of speech from modulated background noise in auditory cortex. Copyright © 2017 Millman et al.

  7. 29 CFR 1401.21 - Information policy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... excluded by subsection 552(b) of title 5, United States Code, matters covered by the Privacy Act, or other... routine public distribution, e.g., pamphlets, speeches, and educational or training materials, will be...

  8. Development of a coding form for approach control/pilot voice communications.

    DOT National Transportation Integrated Search

    1995-05-01

    The Aviation Topics Speech Acts Taxonomy (ATSAT) is a tool for categorizing pilot/controller communications according to their purpose and for classifying communication errors. Air traffic controller communications that deviate from FAA Air Traffic C...

  9. Speech Acts During Friends' and Non-friends' Spontaneous Conversations in Preschool Dyads with High-Functioning Autism Spectrum Disorder versus Typical Development.

    PubMed

    Bauminger-Zviely, Nirit; Golan-Itshaky, Adi; Tubul-Lavy, Gila

    2017-05-01

    In this study, we videotaped two 10-min. free-play interactions and coded speech acts (SAs) in peer talk of 51 preschoolers (21 ASD, 30 typical), interacting with friend versus non-friend partners. Groups were matched for maternal education, IQ (verbal/nonverbal), and CA. We compared SAs by group (ASD/typical), by partner's friendship status (friend/non-friend), and by partner's disability status. Main results yielded a higher amount and diversity of SAs in the typical than the ASD group (mainly in assertive acts, organizational devices, object-dubbing, and pretend-play); yet, those categories, among others, showed better performance with friends versus non-friends. Overall, a more nuanced perception of the pragmatic deficit in ASD should be adopted, highlighting friendship as an important context for children's development of SAs.

  10. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOEpatents

    Holzrichter, J.F.; Ng, L.C.

    1998-03-17

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.

  11. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOEpatents

    Holzrichter, John F.; Ng, Lawrence C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.

  12. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzrichter, J.F.; Ng, L.C.

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.

  13. Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain

    PubMed Central

    Gross, Joachim; Hoogenboom, Nienke; Thut, Gregor; Schyns, Philippe; Panzeri, Stefano; Belin, Pascal; Garrod, Simon

    2013-01-01

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations. PMID:24391472
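
The entrainment analyses summarized above rest on measures of phase locking between the speech envelope and band-limited cortical signals. Below is a minimal sketch of one standard such measure, the phase-locking value computed from Hilbert analytic phases, on synthetic signals; the study's actual MEG pipeline (filtering, source localization, statistics) is not reproduced:

```python
# Sketch: phase-locking value (PLV) between a "speech envelope" and a
# band-limited "cortical" signal, both synthetic. PLV = 1 means the
# instantaneous phase difference is constant across the whole recording.
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV in [0, 1] from the Hilbert analytic phases of x and y."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 200
t = np.arange(0, 5, 1 / fs)
envelope = np.sin(2 * np.pi * 4 * t)          # 4 Hz theta-band "speech envelope"
locked = np.sin(2 * np.pi * 4 * t + 0.5)      # entrained oscillation, fixed lag
rng = np.random.default_rng(0)
unlocked = rng.standard_normal(t.size)        # no entrainment at all

print(phase_locking_value(envelope, locked) > 0.95)   # strong locking
print(phase_locking_value(envelope, unlocked) < 0.5)  # near-zero locking
```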

  14. Argument Structure, Speech Acts, and Roles in Child-Adult Dispute Episodes.

    ERIC Educational Resources Information Center

    Prescott, Barbara L.

    A study identified discourse patterns in potential disputes, deflected disputes, incomplete, and completed disputes from a one-hour conversation involving two 3-year-old female children and one female adult. These varied dispute episodes were identified, coded, and analyzed using a pragmatic model of adult argumentation focusing on the structures,…

  15. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding in the Context of a Dual-Stream Model

    ERIC Educational Resources Information Center

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  16. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  17. Speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the coding techniques are equally applicable to any voice signal, whether or not it carries any intelligible information, as the term speech implies. Other terms that are commonly used are speech compression and voice compression, since the fundamental idea behind speech coding is to reduce (compress) the transmission rate (or, equivalently, the bandwidth) and/or reduce storage requirements. In this document the terms speech and voice shall be used interchangeably.
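
As a concrete instance of the waveform-coding approach this survey describes, the sketch below implements mu-law companding in the style of G.711 telephony, which quantizes speech on a logarithmic scale so that quiet samples keep more resolution. The 8-bit quantizer and the sine-wave stand-in for speech are illustrative simplifications:

```python
# Sketch: mu-law companding (G.711-style), a classic waveform speech coder.
# Encode maps a sample in [-1, 1] to an 8-bit code; decode inverts it.
import numpy as np

MU = 255.0

def mulaw_encode(x):
    """x in [-1, 1] -> 8-bit code in 0..255 via logarithmic companding."""
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    return np.round((y + 1) / 2 * 255).astype(np.uint8)

def mulaw_decode(code):
    """Invert the companding: 8-bit code -> sample in [-1, 1]."""
    y = code.astype(np.float64) / 255 * 2 - 1
    return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU

t = np.linspace(0, 1, 8000)
speech = 0.5 * np.sin(2 * np.pi * 220 * t)       # stand-in for a speech waveform
decoded = mulaw_decode(mulaw_encode(speech))
print(np.max(np.abs(decoded - speech)) < 0.02)   # coarse 8-bit codes, small error
```

One byte per sample at 8 kHz gives the classic 64 kb/s telephony rate; the parametric coders discussed elsewhere in these records reach far lower rates by transmitting model parameters instead of samples.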

  18. Axon guidance pathways served as common targets for human speech/language evolution and related disorders.

    PubMed

    Lei, Huimeng; Yan, Zhangming; Sun, Xiaohong; Zhang, Yue; Wang, Jianhong; Ma, Caihong; Xu, Qunyuan; Wang, Rui; Jarvis, Erich D; Sun, Zhirong

    2017-11-01

    Human and several nonhuman species share the rare ability of modifying acoustic and/or syntactic features of sounds produced, i.e. vocal learning, which is the important neurobiological and behavioral substrate of human speech/language. This convergent trait was suggested to be associated with significant genomic convergence, best manifested at the ROBO-SLIT axon guidance pathway. Here we verified the significance of such genomic convergence and assessed its functional relevance to human speech/language using human genetic variation data. In normal human populations, we found the affected amino acid sites were well fixed and accompanied by significantly more associated protein-coding SNPs in the same genes than in the remaining genes. Diseased individuals with speech/language disorders have significantly more low-frequency protein-coding SNPs, but these preferentially occurred outside the affected genes. Such patients' SNPs were enriched in several functional categories, including two axon guidance pathways (mediated by netrin and semaphorin) that interact with ROBO-SLITs. Four of the six patients have homozygous missense SNPs in the PRAME gene family, one of the youngest gene families in the human lineage, which possibly acts upon retinoic acid receptor signaling, similarly to FOXP2, to modulate axon guidance. Taken together, we suggest that the axon guidance pathways (e.g. ROBO-SLIT, PRAME gene family) served as common targets for human speech/language evolution and related disorders. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding, and code-excited linear prediction, yielding speech of high quality but requires 600 million multiplications and additions per second at encoding rate of 4.8 kb/s. Vector adaptive/predictive coding technique bridges gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
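
Adaptive/predictive coding and code-excited linear prediction both build on short-term linear prediction of the speech waveform. Below is a minimal sketch of the Levinson-Durbin recursion that solves for the predictor coefficients, exercised on a synthetic second-order autoregressive signal rather than real speech; the paper's actual 9.6 kb/s coder is not reproduced:

```python
# Sketch: linear predictive coding (LPC) coefficients via Levinson-Durbin.
# Each sample is predicted as a weighted sum of its `order` predecessors.
import numpy as np

def lpc(x, order):
    """Levinson-Durbin recursion on the autocorrelation sequence of x."""
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)   # a[1:] hold the prediction coefficients
    err = r[0]                # prediction error energy
    for m in range(1, order + 1):
        k = (r[m] - np.dot(a[1:m], r[m - 1:0:-1])) / err  # reflection coeff.
        a[1:m] = a[1:m] - k * a[m - 1:0:-1]
        a[m] = k
        err *= 1 - k * k
    return a[1:]

# Synthetic AR(2) signal: x[n] = 0.5*x[n-1] - 0.3*x[n-2] + noise.
rng = np.random.default_rng(1)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for n in range(2, len(x)):
    x[n] = 0.5 * x[n - 1] - 0.3 * x[n - 2] + e[n]

coeffs = lpc(x, 2)
print(np.round(coeffs, 1))  # close to the true generator [0.5, -0.3]
```

In a real coder these coefficients are recomputed every frame (hence "adaptive"), and the residual after prediction is what gets quantized or matched against a codebook.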

  20. Hateful Help--A Practical Look at the Issue of Hate Speech.

    ERIC Educational Resources Information Center

    Shelton, Michael W.

    Many college and university administrators have responded to the recent increase in hateful incidents on campus by putting hate speech codes into place. The establishment of speech codes has sparked a heated debate over the impact that such codes have upon free speech and First Amendment values. Some commentators have suggested that viewing hate…

  21. Speech Acts across Cultures: Challenges to Communication in a Second Language. Studies on Language Acquisition, 11.

    ERIC Educational Resources Information Center

    Gass, Susan M., Ed.; Neu, Joyce, Ed.

    Articles on speech acts and intercultural communication include: "Investigating the Production of Speech Act Sets" (Andrew Cohen); "Non-Native Refusals: A Methodological Perspective" (Noel Houck, Susan M. Gass); "Natural Speech Act Data versus Written Questionnaire Data: How Data Collection Method Affects Speech Act…

  22. Elements of a Plan-Based Theory of Speech Acts. Technical Report No. 141.

    ERIC Educational Resources Information Center

    Cohen, Philip R.; Perrault, C. Raymond

    This report proposes that people often plan their speech acts to affect their listeners' beliefs, goals, and emotional states and that such language use can be modeled by viewing speech acts as operators in a planning system, allowing both physical and speech acts to be integrated into plans. Methodological issues of how speech acts should be…

  23. The Effects of Direct and Indirect Speech Acts on Native English and ESL Speakers' Perception of Teacher Written Feedback

    ERIC Educational Resources Information Center

    Baker, Wendy; Hansen Bricker, Rachel

    2010-01-01

    This study explores how second language (L2) learners perceive indirect (hedging or indirect speech acts) and direct written teacher feedback. Though research suggests that indirect speech acts may be more difficult to interpret than direct speech acts ([Champagne, 2001] and [Holtgraves, 1999]), using indirect speech acts is often encouraged in…

  24. Impact of dynamic rate coding aspects of mobile phone networks on forensic voice comparison.

    PubMed

    Alzqhoul, Esam A S; Nair, Balamurali B T; Guillemin, Bernard J

    2015-09-01

    Previous studies have shown that landline and mobile phone networks are different in their ways of handling the speech signal, and therefore in their impact on it. But the same is also true of the different networks within the mobile phone arena. There are two major mobile phone technologies currently in use today, namely the global system for mobile communications (GSM) and code division multiple access (CDMA) and these are fundamentally different in their design. For example, the quality of the coded speech in the GSM network is a function of channel quality, whereas in the CDMA network it is determined by channel capacity (i.e., the number of users sharing a cell site). This paper examines the impact on the speech signal of a key feature of these networks, namely dynamic rate coding, and its subsequent impact on the task of likelihood-ratio-based forensic voice comparison (FVC). Surprisingly, both FVC accuracy and precision are found to be better for both GSM- and CDMA-coded speech than for uncoded. Intuitively one expects FVC accuracy to increase with increasing coded speech quality. This trend is shown to occur for the CDMA network, but, surprisingly, not for the GSM network. Further, in respect to comparisons between these two networks, FVC accuracy for CDMA-coded speech is shown to be slightly better than for GSM-coded speech, particularly when the coded-speech quality is high, but in terms of FVC precision the two networks are shown to be very similar. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
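
Likelihood-ratio-based FVC, as evaluated in this paper, weighs the evidence under a same-speaker model against a different-speaker (population) model. The sketch below is a deliberately simplified single-feature Gaussian version with invented numbers; real systems model many cepstral features per speaker and calibrate the resulting ratios:

```python
# Sketch: a likelihood ratio for forensic voice comparison using
# one-dimensional Gaussian models. All numbers below are toy values.
import math

def gaussian_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def likelihood_ratio(measurement, suspect_mean, suspect_sd, pop_mean, pop_sd):
    """LR > 1 supports same-speaker; LR < 1 supports different-speaker."""
    return (gaussian_pdf(measurement, suspect_mean, suspect_sd)
            / gaussian_pdf(measurement, pop_mean, pop_sd))

# Toy scenario: a formant-like feature (Hz) measured on a disputed recording.
lr = likelihood_ratio(1050, suspect_mean=1040, suspect_sd=30,
                      pop_mean=1200, pop_sd=120)
print(lr > 1)  # True: the measurement fits the suspect model better
```

The paper's accuracy and precision results describe how such ratios behave when the measurement has passed through GSM or CDMA speech coding first.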

  25. A Comparison of LBG and ADPCM Speech Compression Techniques

    NASA Astrophysics Data System (ADS)

    Bachu, Rajesh G.; Patel, Jignasa; Barkana, Buket D.

    Speech compression is the technology of converting human speech into an efficiently encoded representation that can later be decoded to produce a close approximation of the original signal. In all speech there is a degree of predictability and speech coding techniques exploit this to reduce bit rates yet still maintain a suitable level of quality. This paper is a study and implementation of Linde-Buzo-Gray Algorithm (LBG) and Adaptive Differential Pulse Code Modulation (ADPCM) algorithms to compress speech signals. In here we implemented the methods using MATLAB 7.0. The methods we used in this study gave good results and performance in compressing the speech and listening tests showed that efficient and high quality coding is achieved.
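
The LBG algorithm implemented in the paper grows a vector-quantization codebook by repeatedly splitting centroids and refining them with nearest-neighbor reassignment. A minimal sketch on synthetic 2-D "feature" vectors (the paper uses MATLAB; Python here for illustration, and the codebook size is assumed to be a power of two):

```python
# Sketch: Linde-Buzo-Gray (LBG) codebook training for vector quantization.
# Start from the global mean, split each centroid into a perturbed pair,
# then refine with k-means-style iterations; repeat until the target size.
import numpy as np

def lbg(training, codebook_size, eps=0.01, iters=20):
    codebook = training.mean(axis=0, keepdims=True)
    while len(codebook) < codebook_size:
        # Split every centroid into a (1+eps)/(1-eps) pair, then refine.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # Assign each training vector to its nearest centroid.
            d = np.linalg.norm(training[:, None] - codebook[None], axis=2)
            nearest = d.argmin(axis=1)
            for i in range(len(codebook)):
                members = training[nearest == i]
                if len(members):
                    codebook[i] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(0)
# Two well-separated clusters standing in for frames of speech features.
training = np.vstack([rng.normal(0, 0.1, (100, 2)),
                      rng.normal(3, 0.1, (100, 2))])
cb = lbg(training, 2)
print(sorted(np.round(cb[:, 0]).tolist()))  # centroids land near 0 and 3
```

Compression then consists of transmitting the index of the nearest codeword for each frame instead of the frame itself.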

  26. Spotlight on Speech Codes 2012: The State of Free Speech on Our Nation's Campuses

    ERIC Educational Resources Information Center

    Foundation for Individual Rights in Education (NJ1), 2012

    2012-01-01

    The U.S. Supreme Court has called America's colleges and universities "vital centers for the Nation's intellectual life," but the reality today is that many of these institutions severely restrict free speech and open debate. Speech codes--policies prohibiting student and faculty speech that would, outside the bounds of campus, be…

  27. Automated Discovery of Speech Act Categories in Educational Games

    ERIC Educational Resources Information Center

    Rus, Vasile; Moldovan, Cristian; Niraula, Nobal; Graesser, Arthur C.

    2012-01-01

    In this paper we address the important task of automated discovery of speech act categories in dialogue-based, multi-party educational games. Speech acts are important in dialogue-based educational systems because they help infer the student speaker's intentions (the task of speech act classification) which in turn is crucial to providing adequate…

  28. Pulse Vector-Excitation Speech Encoder

    NASA Technical Reports Server (NTRS)

    Davidson, Grant; Gersho, Allen

    1989-01-01

    Proposed pulse vector-excitation speech encoder (PVXC) encodes analog speech signals into digital representation for transmission or storage at rates below 5 kilobits per second. Produces high quality of reconstructed speech, but with less computation than required by comparable speech-encoding systems. Has some characteristics of multipulse linear predictive coding (MPLPC) and of code-excited linear prediction (CELP). System uses mathematical model of vocal tract in conjunction with set of excitation vectors and perceptually-based error criterion to synthesize natural-sounding speech.

  29. Neural Coding of Formant-Exaggerated Speech in the Infant Brain

    ERIC Educational Resources Information Center

    Zhang, Yang; Koerner, Tess; Miller, Sharon; Grice-Patil, Zach; Svec, Adam; Akbari, David; Tusler, Liz; Carney, Edward

    2011-01-01

    Speech scientists have long proposed that formant exaggeration in infant-directed speech plays an important role in language acquisition. This event-related potential (ERP) study investigated neural coding of formant-exaggerated speech in 6-12-month-old infants. Two synthetic /i/ vowels were presented in alternating blocks to test the effects of…

  30. Pragmatic Study of Directive Speech Acts in Stories in Alquran

    ERIC Educational Resources Information Center

    Santosa, Rochmat Budi; Nurkamto, Joko; Baidan, Nashruddin; Sumarlam

    2016-01-01

    This study aims at describing the directive speech acts in the verses that contain the stories in the Qur'an. Specifically, the objectives of this study are to assess the sub directive speech acts contained in the verses of the stories and the dominant directive speech acts. The research target is the verses ("ayat") containing stories…

  31. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  32. Governing sexual behaviour through humanitarian codes of conduct.

    PubMed

    Matti, Stephanie

    2015-10-01

    Since 2001, there has been a growing consensus that sexual exploitation and abuse of intended beneficiaries by humanitarian workers is a real and widespread problem that requires governance. Codes of conduct have been promoted as a key mechanism for governing the sexual behaviour of humanitarian workers and, ultimately, preventing sexual exploitation and abuse (PSEA). This article presents a systematic study of PSEA codes of conduct adopted by humanitarian non-governmental organisations (NGOs) and how they govern the sexual behaviour of humanitarian workers. It draws on Foucault's analytics of governance and speech act theory to examine the findings of a survey of references to codes of conduct made on the websites of 100 humanitarian NGOs, and to analyse some features of the organisation-specific PSEA codes identified. © 2015 The Author(s). Disasters © Overseas Development Institute, 2015.

  13. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise

    PubMed Central

    White-Schwoch, Travis; Davies, Evan C.; Thompson, Elaine C.; Carr, Kali Woodruff; Nicol, Trent; Bradlow, Ann R.; Kraus, Nina

    2015-01-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But learning rarely occurs under ideal listening conditions—children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3–5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features—even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. 
Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response properties in this age group. These normative metrics may be useful clinically to evaluate auditory processing difficulties during early childhood. PMID:26113025

  14. Multipath search coding of stationary signals with applications to speech

    NASA Astrophysics Data System (ADS)

    Fehn, H. G.; Noll, P.

    1982-04-01

This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper explains the performance of these coders and compares them both with conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated with illustrations. The paper also reports results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.
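Codebook coding (vector quantization), the first of the three MSC classes, can be sketched with a plain Lloyd/k-means trained codebook. The block length, codebook size, and training loop below are illustrative assumptions, not the coders evaluated in the paper.

```python
import numpy as np

def train_codebook(vectors, bits, iters=20, seed=0):
    """Lloyd (k-means) training of a 2**bits codebook -- the essence
    of codebook coding / vector quantization."""
    rng = np.random.default_rng(seed)
    book = vectors[rng.choice(len(vectors), 2 ** bits, replace=False)].copy()
    for _ in range(iters):
        # nearest-codeword assignment
        d = ((vectors[:, None, :] - book[None, :, :]) ** 2).sum(axis=2)
        idx = d.argmin(axis=1)
        # centroid update (empty cells left unchanged)
        for k in range(len(book)):
            if (idx == k).any():
                book[k] = vectors[idx == k].mean(axis=0)
    return book

def encode(vectors, book):
    """One codebook index per input block."""
    d = ((vectors[:, None, :] - book[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def decode(indices, book):
    """Reconstruct each block as its codeword."""
    return book[indices]
```

With 4-sample blocks and a 16-entry codebook, each block costs 4 bits, i.e. the paper's rate of one bit per sample.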

  15. Together They Stand: Interpreting Not-At-Issue Content.

    PubMed

    Frazier, Lyn; Dillon, Brian; Clifton, Charles

    2018-06-01

Potts unified the account of appositives, parentheticals, expressives, and honorifics as 'Not-At-Issue' (NAI) content, treating them as a natural class semantically in behaving like root (unembedded) structures, typically expressing speaker commitments, and being interpreted independently of At-Issue content. We propose that NAI content expresses a complete speech act distinct from the speech act of the containing utterance. The speech act hypothesis leads us to expect the semantic properties Potts established. We present experimental confirmation of two intuitive observations made by Potts: first that speech act adverbs should be acceptable as NAI content, supporting the speech act hypothesis; and second, that when two speech acts are expressed as successive sentences, the comprehender assumes they are related by some discourse coherence relation, whereas an NAI speech act need not bear a restrictive discourse coherence relation to its containing utterance, though overall sentences containing relevant content are rated more acceptable than those that do not. The speech act hypothesis accounts for these effects, and further accounts for why judgments of syntactic complexity or evaluations of whether or not a statement is true interact with the at-issue status of the material being judged or evaluated.

  16. 'Just wait then and see what he does': a speech act analysis of healthcare professionals' interaction coaching with parents of children with autism spectrum disorders.

    PubMed

    McKnight, Lindsay M; O'Malley-Keighran, Mary-Pat; Carroll, Clare

    2016-11-01

    There is evidence indicating that parent training programmes including interaction coaching of parents of children with autism spectrum disorders (ASD) can increase parental responsiveness, promote language development and social interaction skills in children with ASD. However, there is a lack of research exploring precisely how healthcare professionals use language in interaction coaching. To identify the speech acts of healthcare professionals during individual video-recorded interaction coaching sessions of a Hanen-influenced parent training programme with parents of children with ASD. This retrospective study used speech act analysis. Healthcare professional participants included two speech-language therapists and one occupational therapist. Sixteen videos were transcribed and a speech act analysis was conducted to identify the form and functions of the language used by the healthcare professionals. Descriptive statistics provided frequencies and percentages for the different speech acts used across the 16 videos. Six types of speech acts used by the healthcare professionals during coaching sessions were identified. These speech acts were, in order of frequency: Instructing, Modelling, Suggesting, Commanding, Commending and Affirming. The healthcare professionals were found to tailor their interaction coaching to the learning needs of the parents. A pattern was observed in which more direct speech acts were used in instances where indirect speech acts did not achieve the intended response. The study provides an insight into the nature of interaction coaching provided by healthcare professionals during a parent training programme. It identifies the types of language used during interaction coaching. It also highlights additional important aspects of interaction coaching such as the ability of healthcare professionals to adjust the directness of the coaching in order to achieve the intended parental response to the child's interaction. 
The findings may be used to increase the awareness of healthcare professionals about the types of speech acts used during interaction coaching as well as the manner in which coaching sessions are conducted. © 2016 Royal College of Speech and Language Therapists.

  17. Done Wrong or Said Wrong? Young Children Understand the Normative Directions of Fit of Different Speech Acts

    ERIC Educational Resources Information Center

    Rakoczy, Hannes; Tomasello, Michael

    2009-01-01

    Young children use and comprehend different kinds of speech acts from the beginning of their communicative development. But it is not clear how they understand the conventional and normative structure of such speech acts. In particular, imperative speech acts have a world-to-word direction of fit, such that their fulfillment means that the world…

  18. Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant.

    PubMed

    Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa

    2016-07-01

The objective of the evaluation of auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. To investigate the differences in the process of auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. This is a prospective, descriptive, cross-sectional cohort study. We selected ten cochlear implant users, who were characterized by hearing threshold and by the application of speech perception tests and the Hearing Handicap Inventory for Adults. There was no significant difference when comparing the variables subject age, age at acquisition of hearing loss, etiology, time of hearing deprivation, time of cochlear implant use, and mean hearing threshold with the cochlear implant with the shift in speech coding strategy. There was no relationship between lack of handicap perception and improvement in speech perception with either speech coding strategy. There was no significant difference between the strategies evaluated, and no relation was observed between them and the variables studied.

  19. Ultra-narrow bandwidth voice coding

    DOEpatents

Holzrichter, John F [Berkeley, CA]; Ng, Lawrence C [Danville, CA]

    2007-01-09

    A system of removing excess information from a human speech signal and coding the remaining signal information, transmitting the coded signal, and reconstructing the coded signal. The system uses one or more EM wave sensors and one or more acoustic microphones to determine at least one characteristic of the human speech signal.

  20. Speech Act Theory and Business Communication Conventions.

    ERIC Educational Resources Information Center

    Ewald, Helen Rothschild; Stine, Donna

    1983-01-01

    Applies speech act theory to business writing to determine why certain letters and memos succeed while others fail. Specifically, shows how speech act theorist H. P. Grice's rules or maxims illuminate the writing process in business communication. (PD)

  1. Language Recognition via Sparse Coding

    DTIC Science & Technology

    2016-09-08

a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the...significantly improve the discriminative quality of sparse-coded speech features. In Section 4, we evaluate the proposed approaches against an i-vector

  2. Spatiotemporal dynamics of auditory attention synchronize with speech

    PubMed Central

    Wöstmann, Malte; Herrmann, Björn; Maess, Burkhard

    2016-01-01

    Attention plays a fundamental role in selectively processing stimuli in our environment despite distraction. Spatial attention induces increasing and decreasing power of neural alpha oscillations (8–12 Hz) in brain regions ipsilateral and contralateral to the locus of attention, respectively. This study tested whether the hemispheric lateralization of alpha power codes not just the spatial location but also the temporal structure of the stimulus. Participants attended to spoken digits presented to one ear and ignored tightly synchronized distracting digits presented to the other ear. In the magnetoencephalogram, spatial attention induced lateralization of alpha power in parietal, but notably also in auditory cortical regions. This alpha power lateralization was not maintained steadily but fluctuated in synchrony with the speech rate and lagged the time course of low-frequency (1–5 Hz) sensory synchronization. Higher amplitude of alpha power modulation at the speech rate was predictive of a listener’s enhanced performance of stream-specific speech comprehension. Our findings demonstrate that alpha power lateralization is modulated in tune with the sensory input and acts as a spatiotemporal filter controlling the read-out of sensory content. PMID:27001861

  3. Sensorineural hearing loss amplifies neural coding of envelope information in the central auditory system of chinchillas

    PubMed Central

    Zhong, Ziwei; Henry, Kenneth S.; Heinz, Michael G.

    2014-01-01

    People with sensorineural hearing loss often have substantial difficulty understanding speech under challenging listening conditions. Behavioral studies suggest that reduced sensitivity to the temporal structure of sound may be responsible, but underlying neurophysiological pathologies are incompletely understood. Here, we investigate the effects of noise-induced hearing loss on coding of envelope (ENV) structure in the central auditory system of anesthetized chinchillas. ENV coding was evaluated noninvasively using auditory evoked potentials recorded from the scalp surface in response to sinusoidally amplitude modulated tones with carrier frequencies of 1, 2, 4, and 8 kHz and a modulation frequency of 140 Hz. Stimuli were presented in quiet and in three levels of white background noise. The latency of scalp-recorded ENV responses was consistent with generation in the auditory midbrain. Hearing loss amplified neural coding of ENV at carrier frequencies of 2 kHz and above. This result may reflect enhanced ENV coding from the periphery and/or an increase in the gain of central auditory neurons. In contrast to expectations, hearing loss was not associated with a stronger adverse effect of increasing masker intensity on ENV coding. The exaggerated neural representation of ENV information shown here at the level of the auditory midbrain helps to explain previous findings of enhanced sensitivity to amplitude modulation in people with hearing loss under some conditions. Furthermore, amplified ENV coding may potentially contribute to speech perception problems in people with cochlear hearing loss by acting as a distraction from more salient acoustic cues, particularly in fluctuating backgrounds. PMID:24315815

  4. Speech Alarms Pilot Study

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Moses, Haifa

    2016-01-01

    Speech alarms have been used extensively in aviation and included in International Building Codes (IBC) and National Fire Protection Association's (NFPA) Life Safety Code. However, they have not been implemented on space vehicles. Previous studies conducted at NASA JSC showed that speech alarms lead to faster identification and higher accuracy. This research evaluated updated speech and tone alerts in a laboratory environment and in the Human Exploration Research Analog (HERA) in a realistic setup.

  5. Study of Discussion Record Analysis Using Temporal Data Crystallization and Its Application to TV Scene Analysis

    DTIC Science & Technology

    2015-03-31

analysis. For scene analysis, we use Temporal Data Crystallization (TDC), and for logical analysis, we use Speech Act theory and Toulmin Argumentation...utterance in the discussion record: (i) an utterance ID and a speaker ID; (ii) speech acts; (iii) argument structure. Speech act denotes...mediator is expected to use more OQs than CQs. When the speech act of an utterance is an argument, furthermore, we recognize the conclusion part

  6. Philosophical Perspectives on Values and Ethics in Speech Communication.

    ERIC Educational Resources Information Center

    Becker, Carl B.

    There are three very different concerns of communication ethics: (1) applied speech ethics, (2) ethical rules or standards, and (3) metaethical issues. In the area of applied speech ethics, communications theorists attempt to determine whether a speech act is moral or immoral by focusing on the content and effects of specific speech acts. Specific…

  7. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
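For contrast with the joint nonlinear optimization the abstract describes, the open-loop LPC analysis it builds on amounts to solving the autocorrelation normal equations and transmitting the prediction residual. The sketch below is that textbook baseline under assumed test parameters, not the authors' algorithm.

```python
import numpy as np

def lpc(frame, order):
    """Open-loop LPC via the autocorrelation method: solve R a = r for
    predictor coefficients a, where s[n] ~ sum_k a[k-1] * s[n-k]."""
    n = len(frame)
    r = np.array([frame[:n - k] @ frame[k:] for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

def residual(frame, a):
    """Prediction residual; this is what a predictive coder quantizes."""
    order = len(a)
    pred = np.zeros_like(frame)
    for n in range(order, len(frame)):
        pred[n] = a @ frame[n - 1::-1][:order]  # s[n-1], ..., s[n-order]
    return frame - pred
```

On a synthetic second-order autoregressive signal, the recovered coefficients match the generating filter and the residual energy is a small fraction of the signal energy (the prediction gain that makes low-rate coding possible).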

  8. Civility on Campus: Harassment Codes vs. Free Speech. ASHE Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Nordin, Virginia Davis

In response to the resurgence of racial incidents and increased "gay-bashing" on higher education campuses in recent years, campus authorities have instituted harassment codes, thereby giving rise to conflicts with free speech. Similar conflicts and challenges to free speech have arisen recently in a municipal context such as a St. Paul…

  9. Second Language Learners and Speech Act Comprehension

    ERIC Educational Resources Information Center

    Holtgraves, Thomas

    2007-01-01

Recognizing the specific speech act (Searle, 1969) that a speaker performs with an utterance is a fundamental feature of pragmatic competence. Past research has demonstrated that native speakers of English automatically recognize speech acts when they comprehend utterances (Holtgraves & Ashley, 2001). The present research examined whether this…

  10. Pragmatic Difficulties in the Production of the Speech Act of Apology by Iraqi EFL Learners

    ERIC Educational Resources Information Center

    Al-Ghazalli, Mehdi Falih; Al-Shammary, Mohanad A. Amert

    2014-01-01

    The purpose of this paper is to investigate the pragmatic difficulties encountered by Iraqi EFL university students in producing the speech act of apology. Although the act of apology is easy to recognize or use by native speakers of English, non-native speakers generally encounter difficulties in discriminating one speech act from another. The…

  11. The Learning of Complex Speech Act Behaviour.

    ERIC Educational Resources Information Center

    Olshtain, Elite; Cohen, Andrew

    1990-01-01

    Pre- and posttraining measurement of adult English-as-a-Second-Language learners' (N=18) apology speech act behavior found no clear-cut quantitative improvement after training, although there was an obvious qualitative approximation of native-like speech act behavior in terms of types of intensification and downgrading, choice of strategy, and…

  12. Developmental Differences in Speech Act Recognition: A Pragmatic Awareness Study

    ERIC Educational Resources Information Center

    Garcia, Paula

    2004-01-01

    With the growing acknowledgement of the importance of pragmatic competence in second language (L2) learning, language researchers have identified the comprehension of speech acts as they occur in natural conversation as essential to communicative competence (e.g. Bardovi-Harlig, 2001; Thomas, 1983). Nonconventional indirect speech acts are formed…

  13. A Motive of Rhetorics: Invention and Speech Acts.

    ERIC Educational Resources Information Center

    Schneider, Michael J.

    While rhetorical theory has long been concerned with the epistemological foundations of rhetorical abilities, the full potential of the structuralist perspective is far from realized. The study of speech acts and inventive processes discloses the underlying logic of linguistic performance. A speech act is conceptualized in terms of the…

  14. Objective speech quality assessment and the RPE-LTP coding algorithm in different noise and language conditions.

    PubMed

    Hansen, J H; Nandkumar, S

    1995-01-01

The formulation of reliable signal processing algorithms for speech coding and synthesis requires the selection of a prior criterion of performance. Though coding efficiency (bits/second) or computational requirements can be used, a final performance measure must always include speech quality. In this paper, three objective speech quality measures are considered with respect to quality assessment for American English, noisy American English, and noise-free versions of seven languages. The purpose is to determine whether objective quality measures can be used to quantify changes in quality for a given voice coding method, with a known subjective performance level, as background noise or language conditions are changed. The speech coding algorithm chosen is regular-pulse excitation with long-term prediction (RPE-LTP), which has been chosen as the standard voice compression algorithm for the European Digital Mobile Radio system. Three areas are considered for objective quality assessment: (i) vocoder performance for American English in a noise-free environment, (ii) speech quality variation for three additive background noise sources, and (iii) noise-free performance for seven languages: English, Japanese, Finnish, German, Hindi, Spanish, and French. It is suggested that although existing objective quality measures will never replace subjective testing, they can be a useful means of assessing changes in performance, identifying areas for improvement in algorithm design, and augmenting subjective quality tests for voice coding/compression algorithms in noise-free, noisy, and/or non-English applications.
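The abstract does not name its three objective measures, but a classic example of the genre is segmental SNR, which averages frame-level SNRs between the original and coded signal, clamped to a perceptual range. A minimal sketch, with the frame size and clamping limits chosen as assumptions:

```python
import numpy as np

def segmental_snr(clean, coded, frame=256, floor=-10.0, ceil=35.0):
    """Frame-averaged SNR in dB between a reference signal and its coded
    version, with each frame's SNR clamped to [floor, ceil] dB."""
    snrs = []
    for i in range(0, len(clean) - frame + 1, frame):
        s = clean[i:i + frame]
        e = s - coded[i:i + frame]
        num = float(s @ s)
        den = float(e @ e) + 1e-12          # guard against zero error
        snr = 10.0 * np.log10(num / den + 1e-12)
        snrs.append(min(max(snr, floor), ceil))
    return float(np.mean(snrs))
```

The clamping is what distinguishes segmental SNR from a single global SNR: near-silent or near-perfect frames cannot dominate the average, which tracks perceived quality more closely.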

  15. The Production of Speech Acts by EFL Learners.

    ERIC Educational Resources Information Center

    Cohen, Andrew D.; Olshtain, Elite

    A study is reported that describes ways in which nonnative speakers assess, plan, and execute speech acts in certain situations. The subjects, 15 advanced English foreign-language learners, were given 6 speech act situations (two apologies, two complaints, and two requests) in which they were to role play along with a native speaker. The…

  16. L'indirection: Procede d'expression et de persuasion en communication publique (Indirection: Process of Expression and Persuasion in Public Communication).

    ERIC Educational Resources Information Center

    Gauthier, Gilles

    2001-01-01

    Focuses on the indirection process presented in Searle's and Vanderveken's theory of speech acts: the performance of a primary speech act by means of the accomplishment of a secondary speech act. Discusses indirection mechanisms used in advertising and in political communication. (Author/VWL)

  17. Everyday listening questionnaire: correlation between subjective hearing and objective performance.

    PubMed

    Brendel, Martina; Frohne-Buechner, Carolin; Lesinski-Schiedat, Anke; Lenarz, Thomas; Buechner, Andreas

    2014-01-01

Clinical experience has demonstrated that speech understanding by cochlear implant (CI) recipients has improved over recent years with the development of new technology. The Everyday Listening Questionnaire 2 (ELQ 2) was designed to collect information regarding the challenges faced by CI recipients in everyday listening. The aim of this study was to compare self-assessment of CI users using ELQ 2 with objective speech recognition measures and to compare results between users of older and newer coding strategies. During their regular clinical review appointments, a group of representative adult CI recipients implanted with the Advanced Bionics implant system were asked to complete the questionnaire. The first 100 patients who agreed to participate in this survey were recruited independently of processor generation and speech coding strategy. Correlations between subjectively scored hearing performance in everyday listening situations and objectively measured speech perception abilities were examined relative to the speech coding strategies used. When subjects were grouped by strategy, there were significant differences between users of older 'standard' strategies and users of the newer, currently available strategies (HiRes and HiRes 120), especially in the categories of telephone use and music perception. Significant correlations were found between certain subjective ratings and the objective speech perception data in noise. There is a good correlation between subjective and objective data. Users of more recent speech coding strategies tend to have fewer problems in difficult hearing situations.

  18. Effects of various electrode configurations on music perception, intonation and speaker gender identification.

    PubMed

    Landwehr, Markus; Fürstenberg, Dirk; Walger, Martin; von Wedel, Hasso; Meister, Hartmut

    2014-01-01

    Advances in speech coding strategies and electrode array designs for cochlear implants (CIs) predominantly aim at improving speech perception. Current efforts are also directed at transmitting appropriate cues of the fundamental frequency (F0) to the auditory nerve with respect to speech quality, prosody, and music perception. The aim of this study was to examine the effects of various electrode configurations and coding strategies on speech intonation identification, speaker gender identification, and music quality rating. In six MED-EL CI users electrodes were selectively deactivated in order to simulate different insertion depths and inter-electrode distances when using the high definition continuous interleaved sampling (HDCIS) and fine structure processing (FSP) speech coding strategies. Identification of intonation and speaker gender was determined and music quality rating was assessed. For intonation identification HDCIS was robust against the different electrode configurations, whereas fine structure processing showed significantly worse results when a short electrode depth was simulated. In contrast, speaker gender recognition was not affected by electrode configuration or speech coding strategy. Music quality rating was sensitive to electrode configuration. In conclusion, the three experiments revealed different outcomes, even though they all addressed the reception of F0 cues. Rapid changes in F0, as seen with intonation, were the most sensitive to electrode configurations and coding strategies. In contrast, electrode configurations and coding strategies did not show large effects when F0 information was available over a longer time period, as seen with speaker gender. Music quality relies on additional spectral cues other than F0, and was poorest when a shallow insertion was simulated.

  19. Speech-Act Theory as a New Way of Conceptualizing the "Student Experience"

    ERIC Educational Resources Information Center

    Fisher, Andrew

    2010-01-01

    This article has four aims. The first is to characterize the key features of speech-act theory, and, in particular, to show that there is a genuine distinction between the sound uttered when someone is speaking (locution), the effect the speech has (perlocution) and the very "act" of speaking (the illocution). Secondly, it aims to…

  20. Effects of Culture and Gender in Comprehension of Speech Acts of Indirect Request

    ERIC Educational Resources Information Center

    Shams, Rabe'a; Afghari, Akbar

    2011-01-01

    This study investigates the comprehension of indirect request speech act used by Iranian people in daily communication. The study is an attempt to find out whether different cultural backgrounds and the gender of the speakers affect the comprehension of the indirect request of speech act. The sample includes thirty males and females in Gachsaran…

  1. The Role of the Right Hemisphere in Speech Act Comprehension

    ERIC Educational Resources Information Center

    Holtgraves, Thomas

    2012-01-01

    In this research the role of the RH in the comprehension of speech acts (or illocutionary force) was examined. Two split-screen experiments were conducted in which participants made lexical decisions for lateralized targets after reading a brief conversation remark. On one-half of the trials the target word named the speech act performed with the…

  2. Investigating the Speech Act of Correction in Iraqi EFL Context

    ERIC Educational Resources Information Center

    Darweesh, Abbas Deygan; Mehdi, Wafaa Sahib

    2016-01-01

The present paper investigates the performance of Iraqi students in the speech act of correction and how it is realized between interlocutors of unequal status. It attempts to achieve the following aims: (1) setting out the felicity conditions for the speech act of correction in terms of Searle's conditions; (2) identifying the semantic formulas that realize the…

  3. Combined electric and acoustic hearing performance with Zebra® speech processor: speech reception, place, and temporal coding evaluation.

    PubMed

    Vaerenberg, Bart; Péan, Vincent; Lesbros, Guillaume; De Ceulaer, Geert; Schauwers, Karen; Daemers, Kristin; Gnansia, Dan; Govaerts, Paul J

    2013-06-01

To assess the auditory performance of Digisonic(®) cochlear implant users with electric stimulation (ES) and electro-acoustic stimulation (EAS) with special attention to the processing of low-frequency temporal fine structure. Six patients implanted with a Digisonic(®) SP implant and showing low-frequency residual hearing were fitted with the Zebra(®) speech processor providing both electric and acoustic stimulation. Assessment consisted of monosyllabic speech identification tests in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonating complex sounds (Vaerenberg et al., 2011). These tests investigate place and time coding through pitch discrimination. All tasks were performed with ES only and with EAS. Speech results in noise showed significant improvement with EAS when compared to ES. Whereas EAS did not yield better results in the harmonic intonation test, the improvements in the disharmonic intonation test were remarkable, suggesting better coding of pitch cues requiring phase locking. These results suggest that patients with residual hearing in the low-frequency range still have good phase-locking capacities, allowing them to process fine temporal information. ES relies mainly on place coding but provides poor low-frequency temporal coding, whereas EAS also provides temporal coding in the low-frequency range. Patients with residual phase-locking capacities can make use of these cues.

  4. Neural evidence for predictive coding in auditory cortex during speech production.

    PubMed

    Okada, Kayoko; Matchin, William; Hickok, Gregory

    2018-02-01

    Recent models of speech production suggest that motor commands generate forward predictions of the auditory consequences of those commands, that these forward predictions can be used to monitor and correct speech output, and that this system is hierarchically organized (Hickok, Houde, & Rong, Neuron, 69(3), 407-422, 2011; Pickering & Garrod, Behavioral and Brain Sciences, 36(4), 329-347, 2013). Recent psycholinguistic research has shown that internally generated speech (i.e., imagined speech) produces different types of errors than does overt speech (Oppenheim & Dell, Cognition, 106(1), 528-537, 2008; Oppenheim & Dell, Memory & Cognition, 38(8), 1147-1160, 2010). These studies suggest that articulated speech might involve predictive coding at additional levels relative to imagined speech. The current fMRI experiment investigates neural evidence of predictive coding in speech production. Twenty-four participants from UC Irvine were recruited for the study. Participants were scanned while they were visually presented with a sequence of words that they reproduced in sync with a visual metronome. On each trial, they were cued to either silently articulate the sequence or to imagine the sequence without overt articulation. As expected, silent articulation and imagined speech both engaged a left hemisphere network previously implicated in speech production. A contrast of silent articulation with imagined speech revealed greater activation for articulated speech in inferior frontal cortex, premotor cortex and the insula in the left hemisphere, consistent with greater articulatory load. Although both conditions were silent, this contrast also produced significantly greater activation in auditory cortex in dorsal superior temporal gyrus in both hemispheres. We suggest that these activations reflect forward predictions arising from additional levels of the perceptual/motor hierarchy that are involved in monitoring the intended speech output.

  5. Fine-coarse semantic processing in schizophrenia: a reversed pattern of hemispheric dominance.

    PubMed

    Zeev-Wolf, Maor; Goldstein, Abraham; Levkovitz, Yechiel; Faust, Miriam

    2014-04-01

    Left lateralization for language processing is a feature of neurotypical brains. In individuals with schizophrenia, lack of left lateralization is associated with the language impairments manifested in this population. Beeman's fine-coarse semantic coding model asserts left hemisphere specialization in fine (i.e., conventionalized) semantic coding and right hemisphere specialization in coarse (i.e., non-conventionalized) semantic coding. Applying this model to schizophrenia would suggest that language impairments in this population are a result of greater reliance on coarse semantic coding. We investigated this hypothesis and examined whether a reversed pattern of hemispheric involvement in fine-coarse semantic coding along the time course of activation could be detected in individuals with schizophrenia. Seventeen individuals with schizophrenia and 30 neurotypical participants were presented with two-word expressions of four types: literal, conventional metaphoric, and unrelated (exemplars of fine semantic coding), and novel metaphoric (an exemplar of coarse semantic coding). Expressions were separated by either a short (250 ms) or long (750 ms) delay. Findings indicate that whereas during novel metaphor processing controls displayed a left hemisphere advantage at the 250 ms delay and a right hemisphere advantage at 750 ms, individuals with schizophrenia displayed the opposite. For conventional metaphoric and unrelated expressions, controls showed a left hemisphere advantage across times, while individuals with schizophrenia showed a right hemisphere advantage. Furthermore, whereas individuals with schizophrenia were less accurate than controls at judging literal, conventional metaphoric, and unrelated expressions, they were more accurate when judging novel metaphors. Results suggest that individuals with schizophrenia display a reversed pattern of lateralization for semantic coding, which causes them to rely more heavily on coarse semantic coding. Thus, for individuals with schizophrenia, speech situations are always non-conventional, compelling them to constantly seek meanings and biasing them toward novel or atypical speech acts. This, in turn, may disadvantage them in conventionalized communication and result in language impairment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Bilingual Voicing: A Study of Code-Switching in the Reported Speech of Finnish Immigrants in Estonia

    ERIC Educational Resources Information Center

    Frick, Maria; Riionheimo, Helka

    2013-01-01

    Through a conversation analytic investigation of Finnish-Estonian bilingual (direct) reported speech (i.e., voicing) by Finns who live in Estonia, this study shows how code-switching is used as a double contextualization device. The code-switched voicings are shaped by the on-going interactional situation, serving its needs by opening up a context…

  7. Look at the Gato! Code-Switching in Speech to Toddlers

    ERIC Educational Resources Information Center

    Bail, Amelie; Morini, Giovanna; Newman, Rochelle S.

    2015-01-01

    We examined code-switching (CS) in the speech of twenty-four bilingual caregivers when speaking with their 18- to 24-month-old children. All parents code-switched at least once in a short play session, and some did so quite often (in over 1/3 of utterances). This CS included both inter-sentential and intra-sentential switches, suggesting that at least…

  8. 4800 B/S speech compression techniques for mobile satellite systems

    NASA Technical Reports Server (NTRS)

    Townes, S. A.; Barnwell, T. P., III; Rose, R. C.; Gersho, A.; Davidson, G.

    1986-01-01

    This paper discusses three 4800 bps digital speech compression techniques currently being investigated for application in the mobile satellite service. These three techniques, vector adaptive predictive coding, vector excitation coding, and the self-excited vocoder, are the most promising among a number of techniques being developed to provide near-toll-quality speech compression while keeping the bit rate low enough for a power- and bandwidth-limited satellite service.

  9. The Effect of Explicit Metapragmatic Instruction on Request Speech Act Awareness of Intermediate EFL Students at Institute Level

    ERIC Educational Resources Information Center

    Masouleh, Fatemeh Abdollahizadeh; Arjmandi, Masoumeh; Vahdany, Fereydoon

    2014-01-01

    This study deals with the application of pragmatics research to EFL teaching. The significance of the study lies in the need for language learners to use speech acts such as requests, which involve a series of strategies. Although the definition of different speech acts has been established since the 1960s, recently there has been a shift towards…

  10. Population responses in primary auditory cortex simultaneously represent the temporal envelope and periodicity features in natural speech.

    PubMed

    Abrams, Daniel A; Nicol, Trent; White-Schwoch, Travis; Zecker, Steven; Kraus, Nina

    2017-05-01

    Speech perception relies on a listener's ability to simultaneously resolve multiple temporal features in the speech signal. Little is known regarding neural mechanisms that enable the simultaneous coding of concurrent temporal features in speech. Here we show that two categories of temporal features in speech, the low-frequency speech envelope and periodicity cues, are processed by distinct neural mechanisms within the same population of cortical neurons. We measured population activity in primary auditory cortex of anesthetized guinea pig in response to three variants of a naturally produced sentence. Results show that the envelope of population responses closely tracks the speech envelope, and this cortical activity more closely reflects wider bandwidths of the speech envelope compared to narrow bands. Additionally, neuronal populations represent the fundamental frequency of speech robustly with phase-locked responses. Importantly, these two temporal features of speech are simultaneously observed within neuronal ensembles in auditory cortex in response to clear, conversation, and compressed speech exemplars. Results show that auditory cortical neurons are adept at simultaneously resolving multiple temporal features in extended speech sentences using discrete coding mechanisms. Copyright © 2017 Elsevier B.V. All rights reserved.
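
    The low-frequency speech envelope that these population responses track is commonly approximated by rectifying the waveform and low-pass filtering it. A minimal sketch of that idea follows; the function name and the moving-average smoother (standing in for a proper low-pass filter) are illustrative, not taken from the study:

    ```python
    import numpy as np

    def speech_envelope(signal, sr, cutoff_hz=16):
        """Approximate the low-frequency envelope: rectify the waveform,
        then smooth with a moving average spanning one cutoff period."""
        rectified = np.abs(signal)
        win = max(1, int(sr / cutoff_hz))
        kernel = np.ones(win) / win
        return np.convolve(rectified, kernel, mode="same")
    ```

    Narrower or wider analysis bands, as compared in the study, could be approximated by band-pass filtering the signal before rectification.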

  11. Dual routes for verbal repetition: articulation-based and acoustic-phonetic codes for pseudoword and word repetition, respectively.

    PubMed

    Yoo, Sejin; Chung, Jun-Young; Jeon, Hyeon-Ae; Lee, Kyoung-Min; Kim, Young-Bo; Cho, Zang-Hee

    2012-07-01

    Speech production is inextricably linked to speech perception, yet they are usually investigated in isolation. In this study, we employed a verbal-repetition task to identify the neural substrates of speech processing with two ends active simultaneously using functional MRI. Subjects verbally repeated auditory stimuli containing an ambiguous vowel sound that could be perceived as either a word or a pseudoword depending on the interpretation of the vowel. We found verbal repetition commonly activated the audition-articulation interface bilaterally at Sylvian fissures and superior temporal sulci. Contrasting word-versus-pseudoword trials revealed neural activities unique to word repetition in the left posterior middle temporal areas and activities unique to pseudoword repetition in the left inferior frontal gyrus. These findings imply that the tasks are carried out using different speech codes: an articulation-based code of pseudowords and an acoustic-phonetic code of words. It also supports the dual-stream model and imitative learning of vocabulary. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Speech reading and learning to read: a comparison of 8-year-old profoundly deaf children with good and poor reading ability.

    PubMed

    Harris, Margaret; Moreno, Constanza

    2006-01-01

    Nine children with severe-profound prelingual hearing loss and single-word reading scores not more than 10 months behind chronological age (Good Readers) were matched with 9 children whose reading lag was at least 15 months (Poor Readers). Good Readers had significantly higher spelling and reading comprehension scores. They produced significantly more phonetic errors (indicating the use of phonological coding) and more often correctly represented the number of syllables in spelling than Poor Readers. They also scored more highly on orthographic awareness and were better at speech reading. Speech intelligibility was the same in the two groups. Cluster analysis revealed that only three Good Readers showed strong evidence of phonetic coding in spelling although seven had good representation of syllables; only four had high orthographic awareness scores. However, all 9 children were good speech readers, suggesting that a phonological code derived through speech reading may underpin reading success for deaf children.

  13. Sinusoidal transform coding

    NASA Technical Reports Server (NTRS)

    Mcaulay, Robert J.; Quatieri, Thomas F.

    1988-01-01

    It has been shown that an analysis/synthesis system based on a sinusoidal representation of speech leads to synthetic speech that is essentially perceptually indistinguishable from the original. Strategies for coding the amplitudes, frequencies and phases of the sine waves have been developed that have led to a multirate coder operating at rates from 2400 to 9600 bps. The encoded speech is highly intelligible at all rates with a uniformly improving quality as the data rate is increased. A real-time fixed-point implementation has been developed using two ADSP2100 DSP chips. The methods used for coding and quantizing the sine-wave parameters for operation at the various frame rates are described.
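
    The analysis/synthesis idea the report describes, representing each frame by the amplitudes, frequencies, and phases of its strongest sine waves, can be sketched as below. This is a toy illustration, not the McAulay-Quatieri implementation: real sinusoidal coders also match peaks across frames, interpolate parameters, and quantize them to hit the target bit rate.

    ```python
    import numpy as np

    def analyze_frame(frame, sr, n_peaks=20):
        """Pick the strongest spectral peaks as (amplitude, frequency, phase)."""
        windowed = frame * np.hanning(len(frame))
        spec = np.fft.rfft(windowed)
        freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
        mags = np.abs(spec)
        # keep local maxima only, strongest first
        peaks = [i for i in range(1, len(mags) - 1)
                 if mags[i] > mags[i - 1] and mags[i] > mags[i + 1]]
        peaks.sort(key=lambda i: mags[i], reverse=True)
        return [(mags[i], freqs[i], np.angle(spec[i])) for i in peaks[:n_peaks]]

    def synthesize_frame(params, n, sr):
        """Resynthesize a frame as a sum of sine waves."""
        t = np.arange(n) / sr
        out = np.zeros(n)
        for amp, freq, phase in params:
            out += amp * np.cos(2 * np.pi * freq * t + phase)
        return out
    ```

    Coding the (amplitude, frequency, phase) triplets per frame at varying precision is what would yield a range of bit rates such as the 2400-9600 bps mentioned above.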

  14. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  15. Identifying Speech Acts in E-Mails: Toward Automated Scoring of the "TOEIC"® E-Mail Task. Research Report. ETS RR-12-16

    ERIC Educational Resources Information Center

    De Felice, Rachele; Deane, Paul

    2012-01-01

    This study proposes an approach to automatically score the "TOEIC"® Writing e-mail task. We focus on one component of the scoring rubric, which notes whether the test-takers have used particular speech acts such as requests, orders, or commitments. We developed a computational model for automated speech act identification and tested it…

  16. The Effect of Explicit vs. Implicit Instruction on Mastering the Speech Act of Thanking among Iranian Male and Female EFL Learners

    ERIC Educational Resources Information Center

    Ghaedrahmat, Mahdi; Alavi Nia, Parviz; Biria, Reza

    2016-01-01

    This pragmatic study investigated the speech act of thanking as used by non-native speakers of English. The study was an attempt to find whether the pragmatic awareness of Iranian EFL learners could be improved through explicit instruction of the structure of the speech act of "Thanking". In fact, this study aimed to find out if there…

  17. a New Architecture for Intelligent Systems with Logic Based Languages

    NASA Astrophysics Data System (ADS)

    Saini, K. K.; Saini, Sanju

    2008-10-01

    People communicate with each other in sentences that incorporate two kinds of information: propositions about some subject, and metalevel speech acts that specify how the propositional information is used—as an assertion, a command, a question, or a promise. By means of speech acts, a group of people who have different areas of expertise can cooperate and dynamically reconfigure their social interactions to perform tasks and solve problems that would be difficult or impossible for any single individual. This paper proposes a framework for intelligent systems that consist of a variety of specialized components together with logic-based languages that can express propositions and speech acts about those propositions. The result is a system with a dynamically changing architecture that can be reconfigured in various ways: by a human knowledge engineer who specifies a script of speech acts that determine how the components interact; by a planning component that generates the speech acts to redirect the other components; or by a committee of components, which might include human assistants, whose speech acts serve to redirect one another. The components communicate by sending messages to a Linda-like blackboard, in which components accept messages that are either directed to them or that they consider themselves competent to handle.
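
    The Linda-like blackboard described above can be sketched as a shared tuple space to which components post speech-act messages and from which they withdraw messages matching a pattern. The sketch below is a minimal, hypothetical illustration, not the paper's implementation:

    ```python
    import threading

    class Blackboard:
        """Minimal Linda-style tuple space: out() posts a tuple,
        inp() removes and returns the first matching tuple."""

        def __init__(self):
            self._tuples = []
            self._lock = threading.Lock()

        def out(self, *tup):
            with self._lock:
                self._tuples.append(tup)

        def inp(self, *pattern):
            """None fields in the pattern act as wildcards."""
            with self._lock:
                for i, tup in enumerate(self._tuples):
                    if len(tup) == len(pattern) and all(
                            p is None or p == f
                            for p, f in zip(pattern, tup)):
                        return self._tuples.pop(i)
            return None

    # A planning component posts a speech act; a component that
    # considers itself competent to handle it picks it up.
    bb = Blackboard()
    bb.out("request", "parser", "analyze sentence 12")
    msg = bb.inp("request", "parser", None)
    ```

    In a fuller system, each component would loop on `inp`, matching only the speech-act types it is competent to handle, which is how the dynamic reconfiguration described above could be realized.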

  18. Neural dynamics of speech act comprehension: an MEG study of naming and requesting.

    PubMed

    Egorova, Natalia; Pulvermüller, Friedemann; Shtyrov, Yury

    2014-05-01

    The neurobiological basis and temporal dynamics of communicative language processing pose important yet unresolved questions. It has previously been suggested that comprehension of the communicative function of an utterance, i.e. the so-called speech act, is supported by an ensemble of neural networks, comprising lexico-semantic, action and mirror neuron as well as theory of mind circuits, all activated in concert. It has also been demonstrated that recognition of the speech act type occurs extremely rapidly. These findings, however, were obtained in experiments with insufficient spatio-temporal resolution, thus possibly concealing important facets of the neural dynamics of the speech act comprehension process. Here, we used magnetoencephalography to investigate the comprehension of Naming and Request actions performed with utterances controlled for physical features, psycholinguistic properties and the probability of occurrence in variable contexts. The results show that different communicative actions are underpinned by a dynamic neural network, which differentiates between speech act types very early after the speech act onset. Within 50-90 ms, Requests engaged mirror-neuron action-comprehension systems in sensorimotor cortex, possibly for processing action knowledge and intentions. Still within the first 200 ms after stimulus onset (100-150 ms), Naming activated brain areas involved in referential semantic retrieval. Subsequently (200-300 ms), theory of mind and mentalising circuits were activated in medial prefrontal and temporo-parietal areas, possibly indexing processing of intentions and assumptions of both communication partners. This cascade of stages of processing information about actions and intentions, referential semantics, and theory of mind may underlie dynamic and interactive speech act comprehension.

  19. Speaking of Race, Speaking of Sex: Hate Speech, Civil Rights, and Civil Liberties.

    ERIC Educational Resources Information Center

    Gates, Henry Louis, Jr.; And Others

    The essays of this collection explore the restriction of speech and the hate speech codes that attempt to restrict bigoted or offensive speech and punish those who engage in it. These essays generally argue that speech restrictions are dangerous and counterproductive, but they acknowledge that it is very difficult to distinguish between…

  20. Verbal Short-Term Memory Span in Speech-Disordered Children: Implications for Articulatory Coding in Short-Term Memory.

    ERIC Educational Resources Information Center

    Raine, Adrian; And Others

    1991-01-01

    Children with speech disorders had lower short-term memory capacity and smaller word length effect than control children. Children with speech disorders also had reduced speech-motor activity during rehearsal. Results suggest that speech rate may be a causal determinant of verbal short-term memory capacity. (BC)

  1. Speech processing using maximum likelihood continuity mapping

    DOEpatents

    Hogden, John E.

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  2. Speech processing using maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.E.

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  3. Research on the optoacoustic communication system for speech transmission by variable laser-pulse repetition rates

    NASA Astrophysics Data System (ADS)

    Jiang, Hongyan; Qiu, Hongbing; He, Ning; Liao, Xin

    2018-06-01

    For optoacoustic communication from in-air platforms to submerged apparatus, a method based on speech recognition and variable laser-pulse repetition rates is proposed, which realizes character encoding and transmission for speech. First, the theory and spectral characteristics of the laser-generated underwater sound are analyzed; then character conversion and encoding for speech, as well as the pattern of codes for laser modulation, are studied; finally, experiments to verify the system design are carried out. Results show that the optoacoustic system, in which laser modulation is controlled by speech-to-character baseband codes, improves flexibility in receiving location for underwater targets as well as real-time performance in information transmission. In the overwater transmitter, a pulse laser is controlled to radiate by speech signals with several repetition rates randomly selected in the range of one to fifty Hz, and then in the underwater receiver the laser pulse repetition rate and data can be recovered from the preamble and information codes of the corresponding laser-generated sound. When the energy of the laser pulse is appropriate, real-time transmission of speaker-independent speech can be realized in this way, which addresses the problem of limited underwater bandwidth and provides a technical approach for air-sea communication.
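
    The character-to-pulse-rate idea described above can be illustrated with a toy code table. The mapping below is entirely hypothetical, chosen only to show the principle of assigning each character a repetition rate within the 1-50 Hz range; the paper's actual preamble and information code patterns are not reproduced here:

    ```python
    # Hypothetical code table: letters a-z mapped onto the
    # 1-50 Hz repetition-rate range described in the abstract.
    RATES_HZ = list(range(1, 51))

    def encode_char(ch):
        """Map a letter to a laser pulse repetition rate (Hz)."""
        idx = ord(ch.lower()) - ord('a')
        if not 0 <= idx < 26:
            raise ValueError("letters only in this sketch")
        return RATES_HZ[idx]

    def decode_rate(rate_hz):
        """Invert the mapping at the underwater receiver."""
        return chr(ord('a') + RATES_HZ.index(rate_hz))
    ```

    A real system would transmit a preamble at a known rate so the receiver can synchronize before decoding the information codes.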

  4. Directive Speech Act of Imamu in Katoba Discourse of Muna Ethnic

    NASA Astrophysics Data System (ADS)

    Ardianto, Ardianto; Hadirman, Hardiman

    2018-05-01

    One of the traditions of the Muna ethnic group is the katoba ritual, a tradition grounded in local knowledge whose existence has been maintained for generations to this day. The katoba is a ritual of becoming a Muslim, of repentance, and of forming the character of a child (male or female) who will enter adulthood (6-11 years), performed using directive speech. In the katoba ritual, the child undergoing katoba is introduced to the teachings of Islam, to customs, and to proper manners toward parents, siblings, and others, which the child is expected to practice in daily life. This study aims to describe and explain the directive speech acts of the imamu in the katoba discourse of the Muna ethnic group. This research uses a qualitative approach. Data are collected from a natural setting, namely katoba speech discourses. The data consist of two types: (a) speech data and (b) field note data. Data are analyzed using an interactive model with four stages: (1) data collection, (2) data reduction, (3) data display, and (4) conclusion and verification. The results show, first, that the forms of directive speech acts include declarative and imperative forms; second, that the functions of directive speech acts include teaching, explaining, suggesting, and expecting; and third, that the strategies of directive speech acts include both direct and indirect strategies. The results of this study could be applied in the development of character learning materials at schools, and katoba could also serve as local content (mulok) in the school curriculum.

  5. The Impact of Personal and/or Close Relationship Experience on Memorable Messages about Breast Cancer and the Perceived Speech Acts of the Sender

    PubMed Central

    Smith, Sandi W.; Atkin, Charles; Skubisz, Christine M.; Munday, Samantha; Stohl, Cynthia

    2009-01-01

    Background Memorable messages and their speech acts (purposes of the messages) can promote protection against breast cancer and guide health behaviors. Methods Participants reported their personal, friends’, and relatives’ experiences with breast cancer and a memorable message about breast cancer if one came to mind. Those with a memorable message reported its perceived speech acts. Results Individuals who had personal and friend or relative experience with breast cancer were significantly more likely to recall memorable messages than other respondents. The most frequently perceived speech acts were providing facts, providing advice, and giving hope. Conclusion This information should be used to form messages in future breast cancer protection campaigns. PMID:19431030

  6. Hate Speech and the First Amendment.

    ERIC Educational Resources Information Center

    Rainey, Susan J.; Kinsler, Waren S.; Kannarr, Tina L.; Reaves, Asa E.

    This document is comprised of California state statutes, federal legislation, and court litigation pertaining to hate speech and the First Amendment. The document provides an overview of California education code sections relating to the regulation of speech; basic principles of the First Amendment; government efforts to regulate hate speech,…

  7. The Cheerleaders' Mock Execution

    ERIC Educational Resources Information Center

    Trujillo-Jenks, Laura

    2011-01-01

    The fervor of student speech is demonstrated through different mediums and venues in public schools. In this case, a new principal encounters the mores of a community that believes in free speech, specifically student free speech. When a pep rally becomes a venue for hate speech, terroristic threats, and profanity, the student code of conduct…

  8. Noise suppression methods for robust speech processing

    NASA Astrophysics Data System (ADS)

    Boll, S. F.; Ravindra, H.; Randall, G.; Armantrout, R.; Power, R.

    1980-05-01

    Robust speech processing in practical operating environments requires effective suppression of environmental and processor noise. This report describes the technical findings and accomplishments during this reporting period for the research program funded to develop real-time, compressed speech analysis-synthesis algorithms whose performance is invariant under signal contamination. Fulfillment of this requirement is necessary to ensure reliable, secure compressed speech transmission within realistic military command and control environments. Overall contributions resulting from this research program include an understanding of how environmental noise degrades narrow-band coded speech, the development of appropriate real-time noise suppression algorithms, and the development of speech parameter identification methods that treat signal contamination as a fundamental element of the estimation process. This report describes the current research and results in the areas of noise suppression using dual-input adaptive noise cancellation, noise suppression using short-time Fourier transform algorithms, articulation rate change techniques, and an experiment which demonstrated that the spectral subtraction noise suppression algorithm can improve the intelligibility of 2400 bps, LPC-10 coded helicopter speech by 10.6 points.
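
    Spectral subtraction, the algorithm credited above with the intelligibility gain, estimates the noise magnitude spectrum from a noise-only stretch of signal and subtracts it from each frame's magnitude spectrum while keeping the noisy phase. The sketch below is a minimal illustration of that idea; the frame handling and floor value are assumptions, and practical versions add overlapped windows and oversubtraction factors:

    ```python
    import numpy as np

    def spectral_subtraction(noisy, noise_est, frame_len=256, floor=0.01):
        """Per-frame magnitude subtraction with a spectral floor.
        noise_est: a noise-only excerpt used to estimate the noise spectrum."""
        # average the noise magnitude spectrum over noise-only frames
        n_frames = len(noise_est) // frame_len
        noise_mag = np.mean(
            [np.abs(np.fft.rfft(noise_est[i * frame_len:(i + 1) * frame_len]))
             for i in range(n_frames)], axis=0)
        out = np.zeros(len(noisy))
        for start in range(0, len(noisy) - frame_len + 1, frame_len):
            frame = noisy[start:start + frame_len]
            spec = np.fft.rfft(frame)
            mag = np.abs(spec) - noise_mag            # subtract noise estimate
            mag = np.maximum(mag, floor * noise_mag)  # clamp to spectral floor
            # keep the noisy phase, invert back to the time domain
            out[start:start + frame_len] = np.fft.irfft(
                mag * np.exp(1j * np.angle(spec)), frame_len)
        return out
    ```

    The spectral floor keeps residual bins from going negative; choosing it trades residual noise against the "musical noise" artifacts the full report would have had to contend with.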

  9. Effects of synthetic speech output in the learning of graphic symbols of varied iconicity.

    PubMed

    Koul, Rajinder; Schlosser, Ralf

    To examine the effects of additional auditory feedback from synthetic speech on the learning of high translucent symbols versus low translucent symbols. Two adults with little or no functional speech and severe intellectual disabilities served as participants. A single-subject ABACA/ACABA design was used to study the relative effects of two treatments: symbol training in the presence and absence of synthetic speech output. The results clearly indicated that the two treatments, rather than extraneous variables were responsible for gains in the symbol learning. Both participants learned either more low translucent symbols or reached their maximum learning of low translucent symbols in the speech output condition. The results of this preliminary study replicate and extend the iconicity hypothesis to a new set of learning conditions involving speech output, and suggest that feedback from speech output may assist adults with profound intellectual disabilities in coding particularly those symbols whose association with their referent cannot be coded via their visual resemblance with the referent.

  10. Fine Structure Processing improves speech perception as well as objective and subjective benefits in pediatric MED-EL COMBI 40+ users.

    PubMed

    Lorens, Artur; Zgoda, Małgorzata; Obrycka, Anita; Skarżynski, Henryk

    2010-12-01

    Presently, there are only a few studies examining the benefits of fine structure information in coding strategies. Against this background, this study aims to assess the objective and subjective performance of children experienced with the C40+ cochlear implant using the CIS+ coding strategy who were upgraded to the OPUS 2 processor using FSP and HDCIS. In this prospective study, 60 children with more than 3.5 years of experience with the C40+ cochlear implant were upgraded to the OPUS 2 processor and fit and tested with HDCIS (Interval I). After 3 months of experience with HDCIS, they were fit with the FSP coding strategy (Interval II) and tested with all strategies (FSP, HDCIS, CIS+). After an additional 3-4 months, they were assessed on all three strategies and asked to choose their take-home strategy (Interval III). The children were tested at each interval using the Adaptive Auditory Speech Test, which measures speech reception threshold (SRT) in quiet and in noise. The children were also asked to rate on a Visual Analogue Scale their satisfaction and coding strategy preference when listening to speech and a pop song. However, since not all tests could be performed in a single visit, some children were not able to complete all tests at all intervals. At the study endpoint, speech in quiet showed a significant difference in SRT of 1.0 dB between FSP and HDCIS, with FSP performing better. FSP proved a better strategy compared with CIS+, showing lower SRT results by 5.2 dB. Speech-in-noise tests showed FSP to be significantly better than CIS+ by 0.7 dB, and HDCIS to be significantly better than CIS+ by 0.8 dB. Both satisfaction and coding strategy preference ratings also revealed that the FSP and HDCIS strategies were better than the CIS+ strategy when listening to speech and music. FSP was better than HDCIS when listening to speech.
This study demonstrates that long-term pediatric users of the COMBI 40+ are able to upgrade to a newer processor and coding strategy without compromising their listening performance and even improving their performance with FSP after a short time of experience. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  11. Walking the Talk on Campus Speech

    ERIC Educational Resources Information Center

    O'Neil, Robert M.

    2004-01-01

    A public university faced with intolerant student speech now risks being damned if it acts, but equally damned if it fails to act. To a greater degree than at any time in recent memory, the actions and policies of higher education institutions concerning student speech not only are being scrutinized, but they also are becoming the subject of legal…

  12. A Pragma-Stylistic Analysis of President Goodluck Ebele Jonathan Inaugural Speech

    ERIC Educational Resources Information Center

    Abuya, Eromosele John

    2012-01-01

    The study examined, through a pragma-stylistic approach to meaning, the linguistic acts that manifest in the Inaugural Speech of Goodluck Ebele Jonathan as the democratically elected president in the May 2011 General Elections in Nigeria. Hence, the study focused on the speech act types of locution, illocution, and perlocution in the…

  13. Speech-Act and Text-Act Theory: "Theme-ing" in Freshman Composition.

    ERIC Educational Resources Information Center

    Horner, Winifred B.

    In contrast to a speech-act theory that is limited by a simple speaker/hearer relationship, a text-act theory of written language allows for the historical or personal context of a writer and reader, both in the written work itself and in the act of reading. This theory can be applied to theme writing, essay examinations, and revision in the…

  14. Symbolic Speech

    ERIC Educational Resources Information Center

    Podgor, Ellen S.

    1976-01-01

    The concept of symbolic speech emanates from the 1967 case of United States v. O'Brien. These discussions of flag desecration, grooming and dress codes, nude entertainment, buttons and badges, and musical expression show that the courts place symbolic speech in different strata from verbal communication. (LBH)

  15. [A comparison of time resolution among auditory, tactile and promontory electrical stimulation--superiority of cochlear implants as human communication aids].

    PubMed

    Matsushima, J; Kumagai, M; Harada, C; Takahashi, K; Inuyama, Y; Ifukube, T

    1992-09-01

    Our previous reports showed that second formant information, using a speech coding method, could be transmitted through an electrode on the promontory. However, second formant information can also be transmitted by tactile stimulation. Therefore, to find out whether electrical stimulation of the auditory nerve would be superior to tactile stimulation for our speech coding method, the time resolutions of the two modes of stimulation were compared. The results showed that the time resolution of electrical promontory stimulation was three times better than the time resolution of tactile stimulation of the finger. This indicates that electrical stimulation of the auditory nerve is much better for our speech coding method than tactile stimulation of the finger.

  16. Spotlight on Speech Codes 2011: The State of Free Speech on Our Nation's Campuses

    ERIC Educational Resources Information Center

    Foundation for Individual Rights in Education (NJ1), 2011

    2011-01-01

    Each year, the Foundation for Individual Rights in Education (FIRE) conducts a rigorous survey of restrictions on speech at America's colleges and universities. The survey and accompanying report explore the extent to which schools are meeting their legal and moral obligations to uphold students' and faculty members' rights to freedom of speech,…

  17. Spotlight on Speech Codes 2009: The State of Free Speech on Our Nation's Campuses

    ERIC Educational Resources Information Center

    Foundation for Individual Rights in Education (NJ1), 2009

    2009-01-01

    Each year, the Foundation for Individual Rights in Education (FIRE) conducts a wide, detailed survey of restrictions on speech at America's colleges and universities. The survey and resulting report explore the extent to which schools are meeting their obligations to uphold students' and faculty members' rights to freedom of speech, freedom of…

  18. Spotlight on Speech Codes 2010: The State of Free Speech on Our Nation's Campuses

    ERIC Educational Resources Information Center

    Foundation for Individual Rights in Education (NJ1), 2010

    2010-01-01

    Each year, the Foundation for Individual Rights in Education (FIRE) conducts a rigorous survey of restrictions on speech at America's colleges and universities. The survey and resulting report explore the extent to which schools are meeting their legal and moral obligations to uphold students' and faculty members' rights to freedom of speech,…

  19. Shared acoustic codes underlie emotional communication in music and speech-Evidence from deep transfer learning.

    PubMed

    Coutinho, Eduardo; Schuller, Björn

    2017-01-01

    Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, such that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an Affective Sciences point of view, determining the degree of overlap between the two domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a machine learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available for developing music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and Transfer Learning between these domains. We establish a comparative framework including intra-domain (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained on one modality and tested on the other). In the cross-domain context, we evaluated two strategies: direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation transfer based on Denoising Auto Encoders) for reducing the gap between the feature-space distributions. Our results demonstrate excellent cross-domain generalisation performance with and without feature-representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for speech intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.
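The intra- versus cross-domain evaluation protocol described in this record can be sketched with toy data. The regressor and feature values below are illustrative stand-ins, not the models or data from the study:

```python
# Sketch of intra- vs cross-domain evaluation (illustrative toy data).
# A 1-D least-squares regressor stands in for the emotion models.

def fit(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def mse(model, xs, ys):
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Toy "acoustic feature" -> arousal pairs; the two domains share one
# underlying acoustic code (here y = x + 0.1), mirroring the paper's
# shared-code hypothesis.
music_x, music_y = [0.1, 0.4, 0.7, 1.0], [0.2, 0.5, 0.8, 1.1]
speech_x, speech_y = [0.2, 0.5, 0.8, 1.1], [0.3, 0.6, 0.9, 1.2]

intra = mse(fit(speech_x, speech_y), speech_x, speech_y)  # train/test: speech
cross = mse(fit(music_x, music_y), speech_x, speech_y)    # music -> speech
```

Because the toy domains share the same mapping, the cross-domain error matches the intra-domain error; with real features, narrowing that gap is the role the study assigns to the denoising-autoencoder feature-representation transfer.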

  20. Design of a robust baseband LPC coder for speech transmission over 9.6 kbit/s noisy channels

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Russell, W. H.; Higgins, A. L.

    1982-04-01

    This paper describes the design of a baseband Linear Predictive Coder (LPC) which transmits speech over 9.6 kbit/sec synchronous channels with random bit errors of up to 1%. Presented are the results of our investigation of a number of aspects of the baseband LPC coder with the goal of maximizing the quality of the transmitted speech. Important among these aspects are: bandwidth of the baseband, coding of the baseband residual, high-frequency regeneration, and error protection of important transmission parameters. The paper discusses these and other issues, presents the results of speech-quality tests conducted during the various stages of optimization, and describes the details of the optimized speech coder. This optimized speech coding algorithm has been implemented as a real-time full-duplex system on an array processor. Informal listening tests of the real-time coder have shown that the coder produces good speech quality in the absence of channel bit errors and introduces only a slight degradation in quality for channel bit error rates of up to 1%.
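As a rough illustration of the LPC analysis underlying such a coder, the following sketch computes predictor coefficients from the signal's autocorrelation via the Levinson-Durbin recursion. This is a textbook formulation, not the paper's optimized real-time implementation:

```python
# Textbook LPC analysis: autocorrelation + Levinson-Durbin recursion.

def autocorr(x, max_lag):
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k))
            for k in range(max_lag + 1)]

def levinson_durbin(r, order):
    # Solves the Toeplitz normal equations for coefficients a, where
    # x[n] is predicted as -sum(a[k] * x[n-k] for k in 1..order).
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + sum(a[j] * r[i - j] for j in range(1, i))) / err
        # Symmetric coefficient update (reads the old a throughout).
        a = [a[j] + k * a[i - j] if 1 <= j < i else a[j]
             for j in range(order + 1)]
        a[i] = k
        err *= 1.0 - k * k
    return a, err

# A decaying exponential is a pure first-order AR process (pole at 0.9),
# so a first-order LPC fit recovers that coefficient as a[1] = -0.9.
x = [0.9 ** i for i in range(200)]
a, err = levinson_durbin(autocorr(x, 1), 1)
```

With a higher order and a windowed speech frame, the same recursion yields the short-term predictor whose baseband residual the coder then quantizes and transmits.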

  1. The Cost of Speech Codes.

    ERIC Educational Resources Information Center

    Riley, Gresham

    1993-01-01

    It is argued that the arguments currently advanced for limiting speech on college campuses would also compromise academic freedom, and that a distinction needs to be made between the right of free speech and the wisdom of exercising that right on any given occasion. (MSE)

  2. Transitioning from analog to digital audio recording in childhood speech sound disorders.

    PubMed

    Shriberg, Lawrence D; McSweeny, Jane L; Anderson, Bruce E; Campbell, Thomas F; Chial, Michael R; Green, Jordan R; Hauner, Katherina K; Moore, Christopher A; Rusiewicz, Heather L; Wilson, David L

    2005-06-01

    Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing a reference database for research in childhood speech sound disorders. Two research transcribers with different levels of experience glossed, transcribed, and prosody-voice coded conversational speech samples from eight children with mild to severe speech disorders of unknown origin. The samples were recorded, stored, and played back using representative analog and digital audio systems. Effect sizes calculated for an array of analog versus digital comparisons ranged from negligible to medium, with a trend for participants' speech competency scores to be slightly lower for samples obtained and transcribed using the digital system. We discuss the implications of these and other findings for research and clinical practice.
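The analog-versus-digital comparisons in this record are summarized as effect sizes. A common such measure is Cohen's d with a pooled standard deviation; the formula below is the standard one, assumed here rather than taken from the paper, and the scores are hypothetical:

```python
import math

def cohens_d(a, b):
    # Cohen's d: standardized mean difference using the pooled sample SD.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical competency scores under analog vs digital conditions;
# a small positive d mirrors the "slightly lower for digital" trend.
analog = [82.0, 85.0, 88.0, 91.0]
digital = [81.0, 84.0, 87.0, 90.0]
d = cohens_d(analog, digital)
```

By the usual conventions, |d| near 0.2 is a small effect and near 0.5 a medium one, matching the "negligible to medium" range the record reports.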

  3. Transitioning from analog to digital audio recording in childhood speech sound disorders

    PubMed Central

    Shriberg, Lawrence D.; McSweeny, Jane L.; Anderson, Bruce E.; Campbell, Thomas F.; Chial, Michael R.; Green, Jordan R.; Hauner, Katherina K.; Moore, Christopher A.; Rusiewicz, Heather L.; Wilson, David L.

    2014-01-01

    Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing a reference database for research in childhood speech sound disorders. Two research transcribers with different levels of experience glossed, transcribed, and prosody-voice coded conversational speech samples from eight children with mild to severe speech disorders of unknown origin. The samples were recorded, stored, and played back using representative analog and digital audio systems. Effect sizes calculated for an array of analog versus digital comparisons ranged from negligible to medium, with a trend for participants’ speech competency scores to be slightly lower for samples obtained and transcribed using the digital system. We discuss the implications of these and other findings for research and clinical practise. PMID:16019779

  4. [Prosody, speech input and language acquisition].

    PubMed

    Jungheim, M; Miller, S; Kühn, D; Ptok, M

    2014-04-01

    In order to acquire language, children require speech input. The prosody of the speech input plays an important role. In most cultures adults modify their code when communicating with children; compared to normal speech, this code differs especially with regard to prosody. For this review, a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that highlights meaningful sequences acoustically, so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems able to support language acquisition through the correspondence of prosodic and syntactic units. However, no findings have been reported indicating that the linguistically reduced CDS could hinder first language acquisition.

  5. Speech coding at 4800 bps for mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Gersho, Allen; Chan, Wai-Yip; Davidson, Grant; Chen, Juin-Hwey; Yong, Mei

    1988-01-01

    A speech compression project has recently been completed to develop a speech coding algorithm suitable for operation in a mobile satellite environment, aimed at providing telephone-quality natural speech at 4.8 kbps. The work has resulted in two alternative techniques which achieve reasonably good communications quality at 4.8 kbps while tolerating vehicle noise and rather severe channel impairments. The algorithms are embodied in a compact self-contained prototype consisting of two AT&T 32-bit floating-point DSP32 digital signal processors (DSPs). A Motorola 68HC11 microcomputer chip serves as the board controller and interface handler. On a wire-wrapped card, the prototype's circuit footprint amounts to only 200 sq cm, and it consumes about 9 watts of power.

  6. 76 FR 69737 - Information Collections Being Reviewed by the Federal Communications Commission for Extension...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-09

    ...: Telecommunications Relay Services and Speech-to-Speech Services for Individuals with Hearing and Speech Disabilities... Services and Speech-to-Speech Services for Individuals with Hearing and Speech Disabilities; Americans with Disabilities Act of 1990, CC Docket No. 98-67, CG Docket No. 10-123, Second Report and Order, Order on...

  7. Equality marker in the language of bali

    NASA Astrophysics Data System (ADS)

    Wajdi, Majid; Subiyanto, Paulus

    2018-01-01

    The language of Bali can be grouped among the more elaborate languages of the world because of its speech levels, low and high, as the language of Java also has. The low and high speech levels of Balinese are language codes that can be used to show and express the social relationship between or among its speakers. This paper focuses on describing, analyzing, and interpreting the use of the low code of Balinese in daily communication in the speech community of Pegayaman, Bali. Observational and documentation methods were applied to provide the data for the research, using recording and field-note techniques. Recordings of spoken language and the text of a Balinese novel were transcribed into written form to ease the process of analysis. Symmetric use of the low code expresses social equality between or among the participants involved in the communication; it also implies social intimacy between or among speakers of Balinese. The regular and patterned use of the low code of Balinese is not merely a communication strategy but a kind of communication agreement, or contract, between the participants. By using the low code in their social and communication activities, the participants share and express social equality and intimacy with one another.

  8. Prediction Errors but Not Sharpened Signals Simulate Multivoxel fMRI Patterns during Speech Perception

    PubMed Central

    Davis, Matthew H.

    2016-01-01

    Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. 
The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains. PMID:27846209

  9. Status Report on Speech Research, July 1994-December 1995.

    ERIC Educational Resources Information Center

    Fowler, Carol A., Ed.

    This publication (one of a series) contains 19 articles which report the status and progress of studies on the nature of speech, instruments for its investigation, and practical applications. Articles are: "Speech Perception Deficits in Poor Readers: Auditory Processing or Phonological Coding?" (Maria Mody and others); "Auditory…

  10. Family Worlds: Couple Satisfaction, Parenting Style, and Mothers' and Fathers' Speech to Young Children.

    ERIC Educational Resources Information Center

    Pratt, Michael W.; And Others

    1992-01-01

    Investigated relations between certain family context variables and the conversational behavior of 36 parents who were playing with their 3 year olds. Transcripts were coded for types of conversational functions and structure of parent speech. Marital satisfaction was associated with aspects of parent speech. (LB)

  11. Linguistic and pragmatic constraints on utterance interpretation

    NASA Astrophysics Data System (ADS)

    Hinkelman, Elizabeth A.

    1990-05-01

    In order to model how people understand language, it is necessary to understand not only grammar and logic but also how people use language to affect their environment. This area of study is known as natural language pragmatics. Speech acts, for instance, are the offers, promises, announcements, etc., that people make by talking. The same expression may be different acts in different contexts, and yet not every expression performs every act. We want to understand how people are able to recognize others' intentions and implications in saying something. Previous plan-based theories of speech act interpretation do not account for the conventional aspect of speech acts. They can, however, be made sensitive to both linguistic and propositional information. This dissertation presents a method of speech act interpretation which uses patterns of linguistic features (e.g., mood, verb form, sentence adverbials, thematic roles) to identify a range of speech act interpretations for the utterance. These are then filtered and elaborated by inferences about agents' goals and plans. In many cases the plan reasoning consists of short, local inference chains (that are in fact conversational implicatures), and extended reasoning is necessary only for the most difficult cases. The method is able to accommodate a wide range of cases, from those which seem very idiomatic to those which must be analyzed using knowledge about the world and human behavior. It explains how "Can you pass the salt?" can be a request while "Are you able to pass the salt?" is not.

  12. Predictions of Speech Chimaera Intelligibility Using Auditory Nerve Mean-Rate and Spike-Timing Neural Cues.

    PubMed

    Wirtzfeld, Michael R; Ibrahim, Rasha A; Bruce, Ian C

    2017-10-01

    Perceptual studies of speech intelligibility have shown that slow variations of acoustic envelope (ENV) in a small set of frequency bands provide adequate information for good perceptual performance in quiet, whereas acoustic temporal fine-structure (TFS) cues play a supporting role in background noise. However, the implications for neural coding are prone to misinterpretation because the mean-rate neural representation can contain recovered ENV cues from cochlear filtering of TFS. We investigated ENV recovery and spike-time TFS coding using objective measures of simulated mean-rate and spike-timing neural representations of chimaeric speech, in which either the ENV or the TFS is replaced by another signal. We (a) evaluated the levels of mean-rate and spike-timing neural information for two categories of chimaeric speech, one retaining ENV cues and the other TFS; (b) examined the level of recovered ENV from cochlear filtering of TFS speech; (c) examined and quantified the contribution to recovered ENV from spike-timing cues using a lateral inhibition network (LIN); and (d) constructed linear regression models with objective measures of mean-rate and spike-timing neural cues and subjective phoneme perception scores from normal-hearing listeners. The mean-rate neural cues from the original ENV and recovered ENV partially accounted for perceptual score variability, with additional variability explained by the recovered ENV from the LIN-processed TFS speech. The best model predictions of chimaeric speech intelligibility were found when both the mean-rate and spike-timing neural cues were included, providing further evidence that spike-time coding of TFS cues is important for intelligibility when the speech envelope is degraded.
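The ENV/TFS split behind chimaeric speech can be sketched crudely without a Hilbert transform by using a moving-RMS envelope. This is an illustrative approximation only, not the auditory-chimaera vocoder processing used in the study:

```python
import math

def envelope(x, win=64):
    # Crude envelope: moving RMS over a sliding window
    # (a stand-in for the Hilbert envelope).
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - win):i + win]
        out.append(math.sqrt(sum(s * s for s in seg) / len(seg)))
    return out

def chimaera(env_source, tfs_source, eps=1e-9):
    # Impose the envelope of one signal on the fine structure (carrier)
    # of another: normalise the carrier by its own envelope, then rescale.
    e = envelope(env_source)
    c = envelope(tfs_source)
    return [ev * t / (cv + eps) for ev, t, cv in zip(e, tfs_source, c)]

# Amplitude ramp as the ENV source, constant-amplitude tone as TFS source:
# the output should carry the ramp's envelope on the tone's carrier.
n = 2000
ramp = [(0.1 + 0.9 * i / n) * math.sin(2 * math.pi * 0.05 * i)
        for i in range(n)]
tone = [math.sin(2 * math.pi * 0.11 * i) for i in range(n)]
y = chimaera(ramp, tone)
```

In the toy example the output grows in amplitude like the ramp while oscillating at the tone's frequency, which is exactly the swap of ENV and TFS that the chimaera construction performs.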

  13. Premotor neural correlates of predictive motor timing for speech production and hand movement: evidence for a temporal predictive code in the motor system.

    PubMed

    Johari, Karim; Behroozmand, Roozbeh

    2017-05-01

    The predictive coding model suggests that neural processing of sensory information is facilitated for temporally-predictable stimuli. This study investigated how temporal processing of visually-presented sensory cues modulates movement reaction time and neural activities in speech and hand motor systems. Event-related potentials (ERPs) were recorded in 13 subjects while they were visually cued to prepare to produce a steady vocalization of a vowel sound or press a button, in a randomized order, and to initiate the cued movement following the onset of a go signal on the screen. The experiment was conducted in two counterbalanced blocks in which the time interval between visual cue and go signal was temporally predictable (fixed delay at 1000 ms) or unpredictable (variable between 1000 and 2000 ms). Results of the behavioral response analysis indicated that movement reaction time was significantly decreased for temporally-predictable stimuli in both speech and hand modalities. We identified premotor ERP activities with a left-lateralized parietal distribution for hand and a frontocentral distribution for speech that were significantly suppressed in response to temporally-predictable compared with unpredictable stimuli. The premotor ERPs were elicited approximately 100 ms before movement onset and were significantly correlated with speech and hand motor reaction times only in response to temporally-predictable stimuli. These findings suggest that the motor system establishes a predictive code to facilitate movement in response to temporally-predictable sensory stimuli. Our data suggest that the premotor ERP activities are robust neurophysiological biomarkers of such predictive coding mechanisms. These findings provide novel insights into the temporal processing mechanisms of speech and hand motor systems.

  14. Speech input system for meat inspection and pathological coding used thereby

    NASA Astrophysics Data System (ADS)

    Abe, Shozo

    Meat inspection is one of the exclusive and important jobs of veterinarians, though it is not well known to the general public. As the inspection must be conducted skillfully during a series of continuous operations in a slaughterhouse, the development of automatic inspection systems has long been required. We employed a hands-free speech input system to record the inspection data, because inspectors must use both hands to handle the internal organs of cattle and check their health condition by the naked eye. The data collected by the inspectors are transferred to a speech recognizer and then stored as controllable data for each animal inspected. Control of the terms to be input, such as pathological conditions, and their coding are also important in this speech input system, and practical examples are shown.

  15. Shared acoustic codes underlie emotional communication in music and speech—Evidence from deep transfer learning

    PubMed Central

    Schuller, Björn

    2017-01-01

    Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, such that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an Affective Sciences point of view, determining the degree of overlap between the two domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a machine learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available for developing music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and Transfer Learning between these domains. We establish a comparative framework including intra-domain (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained on one modality and tested on the other). In the cross-domain context, we evaluated two strategies: direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation transfer based on Denoising Auto Encoders) for reducing the gap between the feature-space distributions. Our results demonstrate excellent cross-domain generalisation performance with and without feature-representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for speech intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain. PMID:28658285

  16. The Pragmatics of Greetings: Teaching Speech Acts in the EFL Classroom

    ERIC Educational Resources Information Center

    Zeff, B. Bricklin

    2016-01-01

    As a language teacher, Bricklin Zeff has long realized that knowing the words of a language is only part of speaking it. Knowing how to interpret a communicative act is equally important, and it needs to be taught explicitly. Therefore, he makes this learning a regular part of the class experience. Greetings are one of the few speech acts that…

  17. Age-related changes to spectral voice characteristics affect judgments of prosodic, segmental, and talker attributes for child and adult speech.

    PubMed

    Dilley, Laura C; Wieland, Elizabeth A; Gamache, Jessica L; McAuley, J Devin; Redford, Melissa A

    2013-02-01

    As children mature, changes in voice spectral characteristics co-vary with changes in speech, language, and behavior. In this study, spectral characteristics were manipulated to alter the perceived ages of talkers' voices while leaving critical acoustic-prosodic correlates intact, to determine whether perceived age differences were associated with differences in judgments of prosodic, segmental, and talker attributes. Speech was modified by lowering formants and fundamental frequency, for 5-year-old children's utterances, or raising them, for adult caregivers' utterances. Next, participants differing in awareness of the manipulation (Experiment 1A) or amount of speech-language training (Experiment 1B) made judgments of prosodic, segmental, and talker attributes. Experiment 2 investigated the effects of spectral modification on intelligibility. Finally, in Experiment 3, trained analysts used formal prosody coding to assess prosodic characteristics of spectrally modified and unmodified speech. Differences in perceived age were associated with differences in ratings of speech rate, fluency, intelligibility, likeability, anxiety, cognitive impairment, and speech-language disorder/delay; effects of training and awareness of the manipulation on ratings were limited. There were no significant effects of the manipulation on intelligibility or formally coded prosody judgments. Age-related voice characteristics can greatly affect judgments of speech and talker characteristics, raising cautionary notes for developmental research and clinical work.

  18. A high quality voice coder with integrated echo canceller and voice activity detector for mobile satellite applications

    NASA Technical Reports Server (NTRS)

    Kondoz, A. M.; Evans, B. G.

    1993-01-01

    In the last decade, low bit rate speech coding research has received much attention resulting in newly developed, good quality, speech coders operating at as low as 4.8 Kb/s. Although speech quality at around 8 Kb/s is acceptable for a wide variety of applications, at 4.8 Kb/s more improvements in quality are necessary to make it acceptable to the majority of applications and users. In addition to the required low bit rate with acceptable speech quality, other facilities such as integrated digital echo cancellation and voice activity detection are now becoming necessary to provide a cost effective and compact solution. In this paper we describe a CELP speech coder with integrated echo canceller and a voice activity detector all of which have been implemented on a single DSP32C with 32 KBytes of SRAM. The quality of CELP coded speech has been improved significantly by a new codebook implementation which also simplifies the encoder/decoder complexity making room for the integration of a 64-tap echo canceller together with a voice activity detector.
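The integrated echo canceller mentioned in this record is typically an adaptive FIR filter. A minimal LMS sketch follows; this is the generic textbook algorithm with synthetic signals, not the authors' 64-tap DSP32C implementation:

```python
import random

def lms_echo_canceller(far_end, mic, taps=8, mu=0.05):
    # Adapt FIR weights w so that w * far_end mimics the echo picked up
    # by the microphone; the residual e is the echo-cancelled output.
    w = [0.0] * taps
    buf = [0.0] * taps
    residual = []
    for f, m in zip(far_end, mic):
        buf = [f] + buf[:-1]                      # shift in newest sample
        echo_est = sum(wi * bi for wi, bi in zip(w, buf))
        e = m - echo_est
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]
        residual.append(e)
    return residual, w

rng = random.Random(0)
far = [rng.uniform(-1.0, 1.0) for _ in range(5000)]
# Synthetic echo path: a two-tap FIR, with no near-end speech or noise,
# so the residual should converge toward zero.
mic = [0.5 * far[i] + 0.3 * (far[i - 1] if i else 0.0)
       for i in range(len(far))]
res, w = lms_echo_canceller(far, mic)
```

After convergence the adapted weights approximate the synthetic echo path (0.5, 0.3) and the residual is essentially silent, which is the behaviour an integrated canceller needs before the near-end speech is coded.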

  19. Research in speech communication.

    PubMed

    Flanagan, J

    1995-10-24

    Advances in digital speech processing are now supporting application and deployment of a variety of speech technologies for human/machine communication. In fact, new businesses are rapidly forming about these technologies. But these capabilities are of little use unless society can afford them. Happily, explosive advances in microelectronics over the past two decades have assured affordable access to this sophistication as well as to the underlying computing technology. The research challenges in speech processing remain in the traditionally identified areas of recognition, synthesis, and coding. These three areas have typically been addressed individually, often with significant isolation among the efforts. But they are all facets of the same fundamental issue--how to represent and quantify the information in the speech signal. This implies deeper understanding of the physics of speech production, the constraints that the conventions of language impose, and the mechanism for information processing in the auditory system. In ongoing research, therefore, we seek more accurate models of speech generation, better computational formulations of language, and realistic perceptual guides for speech processing--along with ways to coalesce the fundamental issues of recognition, synthesis, and coding. Successful solution will yield the long-sought dictation machine, high-quality synthesis from text, and the ultimate in low bit-rate transmission of speech. It will also open the door to language-translating telephony, where the synthetic foreign translation can be in the voice of the originating talker.

  20. Fingerspelled and Printed Words Are Recoded into a Speech-based Code in Short-term Memory.

    PubMed

    Sehyr, Zed Sevcikova; Petrich, Jennifer; Emmorey, Karen

    2017-01-01

    We conducted three immediate serial recall experiments that manipulated type of stimulus presentation (printed or fingerspelled words) and word similarity (speech-based or manual). Matched deaf American Sign Language signers and hearing non-signers participated (mean reading age = 14-15 years). Speech-based similarity effects were found for both stimulus types indicating that deaf signers recoded both printed and fingerspelled words into a speech-based phonological code. A manual similarity effect was not observed for printed words indicating that print was not recoded into fingerspelling (FS). A manual similarity effect was observed for fingerspelled words when similarity was based on joint angles rather than on handshape compactness. However, a follow-up experiment suggested that the manual similarity effect was due to perceptual confusion at encoding. Overall, these findings suggest that FS is strongly linked to English phonology for deaf adult signers who are relatively skilled readers. This link between fingerspelled words and English phonology allows for the use of a more efficient speech-based code for retaining fingerspelled words in short-term memory and may strengthen the representation of English vocabulary.

  1. Emotion recognition from speech: tools and challenges

    NASA Astrophysics Data System (ADS)

    Al-Talabani, Abdulbasit; Sellahewa, Harin; Jassim, Sabah A.

    2015-05-01

    Human emotion recognition from speech is studied frequently for its importance in many applications, e.g. human-computer interaction. There is wide diversity and little agreement about the basic emotions or emotion-related states on the one hand, and about where the emotion-related information lies in the speech signal on the other. These diversities motivate our investigation of extracting Meta-features using the PCA approach, or using a non-adaptive random projection (RP), which significantly reduces the large-dimensional speech feature vectors that may contain a wide range of emotion-related information. Subsets of Meta-features are fused to increase the performance of the recognition model, which adopts the score-based LDC classifier. We demonstrate that our scheme outperforms state-of-the-art results when tested on non-prompted databases or acted databases (i.e. when subjects act specific emotions while uttering a sentence). However, the large gap between accuracy rates achieved on the different types of speech datasets raises questions about the way emotions modulate speech. In particular, we argue that emotion recognition from speech should not be treated as a classification problem. We demonstrate the presence of a spectrum of different emotions in the same speech portion, especially in the non-prompted datasets, which tend to be more "natural" than the acted datasets, where the subjects attempt to suppress all but one emotion.
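
    The non-adaptive random projection (RP) mentioned above can be illustrated with a minimal Johnson-Lindenstrauss-style sketch. The Gaussian entries and the 1/sqrt(d_out) scaling are one common construction, not necessarily the authors' choice; the function names are assumptions.

```python
import random, math

def random_projection_matrix(d_in, d_out, seed=0):
    """Non-adaptive random projection: entries drawn i.i.d. N(0, 1), scaled
    by 1/sqrt(d_out) so Euclidean distances are roughly preserved."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) / math.sqrt(d_out) for _ in range(d_in)]
            for _ in range(d_out)]

def project(matrix, vec):
    """Map a d_in-dimensional feature vector down to d_out dimensions."""
    return [sum(r * v for r, v in zip(row, vec)) for row in matrix]
```

    Unlike PCA, the matrix is fixed without looking at the data, which is what makes the reduction "non-adaptive".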

  2. Crew Communication as a Factor in Aviation Accidents

    NASA Technical Reports Server (NTRS)

    Goguen, J.; Linde, C.; Murphy, M.

    1986-01-01

    The crew communication process is analyzed. Planning and explanation are shown to be well-structured discourse types, described by formal rules. These formal rules are integrated with those describing the other most important discourse type within the cockpit: the command-and-control speech act chain. The latter is described as a sequence of speech acts for making requests (including orders and suggestions), for making reports, for supporting or challenging statements, and for acknowledging previous speech acts. Mitigation level, a linguistic indication of indirectness and tentativeness in speech, was an important variable in several hypotheses: the speech of subordinates is more mitigated than the speech of superiors; the speech of all crewmembers is less mitigated when they know that they are in either a problem or emergency situation; and mitigation is a factor in failures of crewmembers to initiate discussion of new topics or have suggestions ratified by the captain. Test results also show that planning and explanation are more frequently performed by captains than by other crewmembers, are done more during crew-recognized problems, and are done less during crew-recognized emergencies.

  3. Nebraska Speech, Debate, and Drama Manuals.

    ERIC Educational Resources Information Center

    Nebraska School Activities Association, Lincoln.

    Prepared and designed to provide general information in the administration of speech activities in the Nebraska schools, this manual offers rules and regulations for speech events, high school debate, and one act plays. The section on speech events includes information about general regulations, the scope of competition, district contests, the…

  4. A software tool for analyzing multichannel cochlear implant signals.

    PubMed

    Lai, Wai Kong; Bögli, Hans; Dillier, Norbert

    2003-10-01

    A useful and convenient means to analyze the radio frequency (RF) signals being sent by a speech processor to a cochlear implant would be to actually capture and display them with appropriate software. This is particularly useful for development or diagnostic purposes. sCILab (Swiss Cochlear Implant Laboratory) is such a PC-based software tool intended for the Nucleus family of Multichannel Cochlear Implants. Its graphical user interface provides a convenient and intuitive means for visualizing and analyzing the signals encoding speech information. Both numerical and graphic displays are available for detailed examination of the captured CI signals, as well as an acoustic simulation of these CI signals. sCILab has been used in the design and verification of new speech coding strategies, and has also been applied as an analytical tool in studies of how different parameter settings of existing speech coding strategies affect speech perception. As a diagnostic tool, it is also useful for troubleshooting problems with the external equipment of the cochlear implant systems.

  5. Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps

    NASA Technical Reports Server (NTRS)

    Gerson, Ira A.; Jasiuk, Mark A.

    1990-01-01

    Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback of CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
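
    The efficiency of the VSELP codebook comes from its structure: each codevector is a ±1-weighted sum of a small set of basis vectors, so 2**M codevectors are represented by only M stored vectors, and the search can be organized over sign flips. A minimal sketch of the codebook construction (enumeration only, not the fast search; names are assumptions):

```python
from itertools import product

def vselp_codebook(basis):
    """Build a VSELP-style codebook: every codevector is a +/-1-weighted
    sum of the M basis vectors, giving 2**M codevectors from M vectors."""
    n = len(basis[0])
    book = []
    for signs in product((1.0, -1.0), repeat=len(basis)):
        book.append([sum(s * b[k] for s, b in zip(signs, basis))
                     for k in range(n)])
    return book
```

    Note that flipping every sign negates the codevector, so half the codebook is the mirror image of the other half; a search can exploit this and the one-sign-flip neighbor structure to avoid recomputing full synthesis per entry.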

  6. Modelling the Architecture of Phonetic Plans: Evidence from Apraxia of Speech

    ERIC Educational Resources Information Center

    Ziegler, Wolfram

    2009-01-01

    In theories of spoken language production, the gestural code prescribing the movements of the speech organs is usually viewed as a linear string of holistic, encapsulated, hard-wired, phonetic plans, e.g., of the size of phonemes or syllables. Interactions between phonetic units on the surface of overt speech are commonly attributed to either the…

  7. Do North Carolina Students Have Freedom of Speech? A Review of Campus Speech Codes

    ERIC Educational Resources Information Center

    Robinson, Jenna Ashley

    2010-01-01

    America's colleges and universities are supposed to be strongholds of classically liberal ideals, including the protection of individual rights and openness to debate and inquiry. Too often, this is not the case. Across the country, universities deny students and faculty their fundamental rights to freedom of speech and expression. The report…

  8. An issue hiding in plain sight: when are speech-language pathologists special educators rather than related services providers?

    PubMed

    Giangreco, Michael F; Prelock, Patricia A; Turnbull, H Rutherford

    2010-10-01

    Under the Individuals With Disabilities Education Act (IDEA; as amended, 2004), speech-language pathology services may be either special education or a related service. Given the absence of guidance documents or research on this issue, the purposes of this clinical exchange are to (a) present and analyze the IDEA definitions related to speech-language pathologists (SLPs) and their roles, (b) offer a rationale for the importance of and distinction between their roles, (c) propose an initial conceptualization (i.e., flow chart) to distinguish between when an SLP should function as a related services provider versus a special educator, and (d) suggest actions to develop and disseminate a clearer shared understanding of this issue. Federal definitions of special education and related services as related to SLPs are discussed in terms of determining special education eligibility, meeting student needs, ensuring SLPs are following their code of ethics and scope of practice, and facilitating appropriate personnel utilization and service delivery planning. Clarifying the distinction between special education and related services should lead to increased likelihood of appropriate services for students with disabilities, improved working conditions for SLPs, and enhanced collaboration among team members. This clinical exchange is meant to promote dialogue and research about this underexamined issue.

  9. Sociolinguistics and Language Acquisition.

    ERIC Educational Resources Information Center

    Wolfson, Nessa, Ed.; Judd, Elliot, Ed.

    The following are included in this collection of essays on patterns of rules of speaking, and sociolinguistics and second language learning and teaching: "How to Tell When Someone Is Saying 'No' Revisited" (Joan Rubin); "Apology: A Speech-Act Set" (Elite Olshtain and Andrew Cohen); "Interpreting and Performing Speech Acts in a Second Language: A…

  10. Pragmatic Elements in EFL Course Books

    ERIC Educational Resources Information Center

    Ulum, Ömer Gökhan

    2015-01-01

    Pragmatic development or competence has been great concern particularly for the recent decades. Regarding this issue, questioning the existence and delivery of speech acts in EFL course books may be sententious, as learners employ them for pragmatic input. Although much research has been conducted referring to speech acts, comparably little…

  11. 76 FR 3887 - Notice of Public Information Collection Being Reviewed by the Federal Communications Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-21

    .... SUPPLEMENTARY INFORMATION: OMB Control Number: 3060-1043. Title: Telecommunications Relay Services and Speech-to-Speech Services for Individuals with Hearing and Speech Disabilities, CG Docket No. 03-123, FCC 04-137...], Telecommunications Services for Hearing-Impaired and Speech-Impaired Individuals; The Americans with Disabilities Act...

  12. Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions

    NASA Astrophysics Data System (ADS)

    Wrench, Alan A.

    Available from UMI in association with The British Library. Requires signed TDF. The Linear Prediction model was first applied to speech two and a half decades ago. Since then it has been the subject of intense research and continues to be one of the principal tools in the analysis of speech. Its mathematical tractability makes it a suitable subject for study and its proven success in practical applications makes the study worthwhile. The model is known to be unsuited to speech corrupted by background noise. This has led many researchers to investigate ways of enhancing the speech signal prior to Linear Predictive analysis. In this thesis this body of work is extended. The chosen application is low bit-rate (2.4 kbits/sec) speech coding. For this task the performance of the Linear Prediction algorithm is crucial because there is insufficient bandwidth to encode the error between the modelled speech and the original input. A review of the fundamentals of Linear Prediction and an independent assessment of the relative performance of methods of Linear Prediction modelling are presented. A new method is proposed which is fast and facilitates stability checking; however, its stability is shown to be unacceptably poorer than that of existing methods. A novel supposition governing the positioning of the analysis frame relative to a voiced speech signal is proposed and supported by observation. The problem of coding noisy speech is examined. Four frequency domain speech processing techniques are developed and tested. These are: (i) Combined Order Linear Prediction Spectral Estimation; (ii) Frequency Scaling According to an Aural Model; (iii) Amplitude Weighting Based on Perceived Loudness; (iv) Power Spectrum Squaring. These methods are compared with the Recursive Linearised Maximum a Posteriori method. Following on from work done in the frequency domain, a time domain implementation of spectrum squaring is developed. 
In addition, a new method of power spectrum estimation is developed based on the Minimum Variance approach. This new algorithm is shown to be closely related to Linear Prediction but produces slightly broader spectral peaks. Spectrum squaring is applied to both the new algorithm and standard Linear Prediction and their relative performance is assessed. (Abstract shortened by UMI.).
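
    The Linear Prediction modelling assessed in the thesis is conventionally computed from frame autocorrelations with the Levinson-Durbin recursion; a minimal sketch of that standard baseline follows (this is not one of the thesis's proposed methods, and the function name is an assumption):

```python
def levinson_durbin(r, order):
    """Solve the LP normal equations from autocorrelation lags r[0..order]
    via the Levinson-Durbin recursion. Returns predictor coefficients a
    (so s[n] ~ sum_k a[k] * s[n-1-k]) and the final residual energy."""
    a = [0.0] * order
    err = r[0]
    for i in range(order):
        # Reflection coefficient for stage i.
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / err
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a = new_a
        err *= (1.0 - k * k)
    return a, err
```

    For an AR(1) signal with autocorrelation r[k] = 0.9**k, the recursion recovers a = [0.9, 0.0] and residual energy 1 - 0.9**2 = 0.19.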

  13. A Cross-Cultural Comparative Study of Apology Strategies Employed by Iranian EFL Learners and English Native Speakers

    ERIC Educational Resources Information Center

    Abedi, Elham

    2016-01-01

    The development of speech-act theory has provided the hearers with a better understanding of what speakers intend to perform in the act of communication. One type of speech act is apologizing. When an action or utterance has resulted in an offense, the offender needs to apologize. In the present study, an attempt was made to compare the apology…

  14. Status Report on Speech Research: A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for its Investigation, and Practical Applications, April 1-September 30, 1983.

    ERIC Educational Resources Information Center

    Studdert-Kennedy, Michael, Ed.; O'Brien, Nancy, Ed.

    Prepared as part of a regular series on the status and progress of studies on the nature of speech, instrumentation for its evaluation, and practical applications for speech research, this compilation contains 14 reports. Topics covered in the reports include the following: (1) phonetic coding and order memory in relation to reading proficiency,…

  15. Age-related changes to spectral voice characteristics affect judgments of prosodic, segmental, and talker attributes for child and adult speech

    PubMed Central

    Dilley, Laura C.; Wieland, Elizabeth A.; Gamache, Jessica L.; McAuley, J. Devin; Redford, Melissa A.

    2013-01-01

    Purpose: As children mature, changes in voice spectral characteristics covary with changes in speech, language, and behavior. Spectral characteristics were manipulated to alter the perceived ages of talkers’ voices while leaving critical acoustic-prosodic correlates intact, to determine whether perceived age differences were associated with differences in judgments of prosodic, segmental, and talker attributes. Method: Speech was modified by lowering formants and fundamental frequency, for 5-year-old children’s utterances, or raising them, for adult caregivers’ utterances. Next, participants differing in awareness of the manipulation (Exp. 1a) or amount of speech-language training (Exp. 1b) made judgments of prosodic, segmental, and talker attributes. Exp. 2 investigated the effects of spectral modification on intelligibility. Finally, in Exp. 3 trained analysts used formal prosody coding to assess prosodic characteristics of spectrally-modified and unmodified speech. Results: Differences in perceived age were associated with differences in ratings of speech rate, fluency, intelligibility, likeability, anxiety, cognitive impairment, and speech-language disorder/delay; effects of training and awareness of the manipulation on ratings were limited. There were no significant effects of the manipulation on intelligibility or formally coded prosody judgments. Conclusions: Age-related voice characteristics can greatly affect judgments of speech and talker characteristics, raising cautionary notes for developmental research and clinical work. PMID:23275414

  16. Research in speech communication.

    PubMed Central

    Flanagan, J

    1995-01-01

    Advances in digital speech processing are now supporting application and deployment of a variety of speech technologies for human/machine communication. In fact, new businesses are rapidly forming about these technologies. But these capabilities are of little use unless society can afford them. Happily, explosive advances in microelectronics over the past two decades have assured affordable access to this sophistication as well as to the underlying computing technology. The research challenges in speech processing remain in the traditionally identified areas of recognition, synthesis, and coding. These three areas have typically been addressed individually, often with significant isolation among the efforts. But they are all facets of the same fundamental issue--how to represent and quantify the information in the speech signal. This implies deeper understanding of the physics of speech production, the constraints that the conventions of language impose, and the mechanism for information processing in the auditory system. In ongoing research, therefore, we seek more accurate models of speech generation, better computational formulations of language, and realistic perceptual guides for speech processing--along with ways to coalesce the fundamental issues of recognition, synthesis, and coding. Successful solution will yield the long-sought dictation machine, high-quality synthesis from text, and the ultimate in low bit-rate transmission of speech. It will also open the door to language-translating telephony, where the synthetic foreign translation can be in the voice of the originating talker. PMID:7479806

  17. Signal Prediction With Input Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin

    1999-01-01

    A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
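
    The report's simultaneous, recursive solution for the predictor coefficients uses a conventional recursive least-squares algorithm. A bare-bones RLS linear predictor is sketched below; the forgetting factor `lam` and initialization constant `delta` are assumed parameter names, and this omits the excitation codebook and variable-frame machinery the report describes.

```python
def rls_predictor(signal, order, lam=1.0, delta=1e6):
    """Estimate linear-predictor coefficients w, where
    s[n] ~ sum_k w[k] * s[n-1-k], by recursive least squares."""
    w = [0.0] * order
    # Inverse-correlation matrix, initialized to delta * I.
    P = [[(delta if i == j else 0.0) for j in range(order)]
         for i in range(order)]
    for n in range(order, len(signal)):
        x = [signal[n - 1 - k] for k in range(order)]   # regressor
        d = signal[n]                                   # desired sample
        Px = [sum(P[i][j] * x[j] for j in range(order)) for i in range(order)]
        denom = lam + sum(x[i] * Px[i] for i in range(order))
        g = [p / denom for p in Px]                     # gain vector
        e = d - sum(w[i] * x[i] for i in range(order))  # a priori error
        w = [w[i] + g[i] * e for i in range(order)]
        P = [[(P[i][j] - g[i] * Px[j]) / lam for j in range(order)]
             for i in range(order)]
    return w
```

    Fed a signal that exactly obeys a second-order recursion, the estimate converges to the true coefficients within a few samples.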

  18. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
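
    The pattern-matching operation the VLSI chip accelerates is the nearest-codevector search of vector quantization. In software it is a brute-force minimum-distortion loop, sketched here (function name is illustrative):

```python
def vq_encode(vector, codebook):
    """Vector quantization pattern matching: return the index of the
    codevector with minimum squared Euclidean distortion."""
    best_idx, best_d = 0, float("inf")
    for i, cv in enumerate(codebook):
        d = sum((v - c) ** 2 for v, c in zip(vector, cv))
        if d < best_d:
            best_idx, best_d = i, d
    return best_idx
```

    Only the winning index is transmitted, which is what gives VQ-based coders like the Vector PCM system their low bit rates; the per-vector distance computations are the part worth committing to hardware.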

  19. A variable rate speech compressor for mobile applications

    NASA Technical Reports Server (NTRS)

    Yeldener, S.; Kondoz, A. M.; Evans, B. G.

    1990-01-01

    One of the most promising speech coders at bit rates of 9.6 to 4.8 kbits/s is CELP. Code Excited Linear Prediction (CELP) has dominated the 9.6 to 4.8 kbits/s region during the past 3 to 4 years. Its setback, however, is its expensive implementation. As an alternative to CELP, the Base-Band CELP (CELP-BB) was developed, which produced good quality speech comparable to CELP with single-chip-implementable complexity, as reported previously. Its robustness was also improved to tolerate errors up to 1.0 pct. and maintain intelligibility up to 5.0 pct. and more. Although CELP-BB produces good quality speech at around 4.8 kbits/s, it has a fundamental problem when updating the pitch filter memory. A sub-optimal solution is proposed for this problem. Below 4.8 kbits/s, however, CELP-BB suffers from noticeable quantization noise as a result of the large vector dimensions used. Efficient representation of speech below 4.8 kbits/s is achieved by introducing Sinusoidal Transform Coding (STC) to represent the LPC excitation, which is called Sine Wave Excited LPC (SWELP). In this case, natural-sounding good quality synthetic speech is obtained at around 2.4 kbits/s.

  20. Effects of irrelevant sounds on phonological coding in reading comprehension and short-term memory.

    PubMed

    Boyle, R; Coltheart, V

    1996-05-01

    The effects of irrelevant sounds on reading comprehension and short-term memory were studied in two experiments. In Experiment 1, adults judged the acceptability of written sentences during irrelevant speech, accompanied and unaccompanied singing, instrumental music, and in silence. Sentences varied in syntactic complexity: Simple sentences contained a right-branching relative clause (The applause pleased the woman that gave the speech) and syntactically complex sentences included a centre-embedded relative clause (The hay that the farmer stored fed the hungry animals). Unacceptable sentences either sounded acceptable (The dog chased the cat that eight up all his food) or did not (The man praised the child that sight up his spinach). Decision accuracy was impaired by syntactic complexity but not by irrelevant sounds. Phonological coding was indicated by increased errors on unacceptable sentences that sounded correct. These error rates were unaffected by irrelevant sounds. Experiment 2 examined effects of irrelevant sounds on ordered recall of phonologically similar and dissimilar word lists. Phonological similarity impaired recall. Irrelevant speech reduced recall but did not interact with phonological similarity. The results of these experiments question assumptions about the relationship between speech input and phonological coding in reading and the short-term store.

  1. Educators' Perspectives on Facilitating Computer-Assisted Speech Intervention in Early Childhood Settings

    ERIC Educational Resources Information Center

    Crowe, Kathryn; Cumming, Tamara; McCormack, Jane; Baker, Elise; McLeod, Sharynne; Wren, Yvonne; Roulstone, Sue; Masso, Sarah

    2017-01-01

    Early childhood educators are frequently called on to support preschool-aged children with speech sound disorders and to engage these children in activities that target their speech production. This study explored factors that acted as facilitators and/or barriers to the provision of computer-based support for children with speech sound disorders…

  2. A Study of Korean EFL Learners' Apology Speech Acts: Strategy and Pragmatic Transfer Influenced by Sociolinguistic Variations.

    ERIC Educational Resources Information Center

    Yang, Tae-Kyoung

    2002-01-01

    Examines how apology speech act strategies frequently used in daily life are transferred in the framework of interlanguage pragmatics and sociolinguistics and how they are influenced by sociolinguistic variations such as social status, social distance, severity of offense, and formal or private relationships. (Author/VWL)

  3. The Sociolinguistic Patterns of Native Arabic Speakers: Implications for Teaching Arabic as a Foreign Language.

    ERIC Educational Resources Information Center

    Hussein, Anwar A.

    1995-01-01

    Presents a descriptive analysis of speech acts in Arabic: forms of address, apologies, requests, expressions of gratitude, disagreement, greetings, refusals, partings, and telephone etiquette. Results reveal that linguistic formulas of each speech act were determined by social distance, formality of the situation, age, level of education, and…

  4. Testing University Learners' Interlanguage Pragmatic Competence in a Chinese EFL Context

    ERIC Educational Resources Information Center

    Xu, Lan; Wannaruk, Anchalee

    2016-01-01

    Speech acts are the major concern of interlanguage pragmatists. The present study aimed to 1) examine the reliability and validity of an interlanguage pragmatic (ILP) competence test on speech acts in a Chinese EFL context, and 2) investigate EFL learners' variations of ILP competence by language proficiency. Altogether 390 students participated…

  5. Questioning Mechanisms During Tutoring, Conversation, and Human-Computer Interaction

    DTIC Science & Technology

    1992-10-14

    project on the grant, we are analyzing sequences of speech act categories in dialogues between children. The 90 dialogues occur in the context of free play, a puzzle task, versus a 20-questions game. Our goal is to assess the extent to which various computational models can predict speech act category…

  6. Interlanguage Pragmatics in Russian: The Speech Act of Request in Email

    ERIC Educational Resources Information Center

    Krulatz, Anna M.

    2012-01-01

    As face-threatening speech acts, requests are of particular interest to second language acquisition scholars. They affect the interlocutors' public self-images, and thus require a careful consideration of the social distance between the interlocutors, their status, and the level of the imposition, factors that are weighed differently in…

  7. One Speaker, Two Languages. Cross-Disciplinary Perspectives on Code-Switching.

    ERIC Educational Resources Information Center

    Milroy, Lesley, Ed.; Muysken, Pieter, Ed.

    Fifteen articles review code-switching in the four major areas: policy implications in specific institutional and community settings; perspectives of social theory of code-switching as a form of speech behavior in particular social contexts; the grammatical analysis of code-switching, including factors that constrain switching even within a…

  8. Real-time speech encoding based on Code-Excited Linear Prediction (CELP)

    NASA Technical Reports Server (NTRS)

    Leblanc, Wilfrid P.; Mahmoud, S. A.

    1988-01-01

    This paper reports on the work proceeding with regard to the development of a real-time voice codec for the terrestrial and satellite mobile radio environments. The codec is based on a complexity reduced version of code-excited linear prediction (CELP). The codebook search complexity was reduced to only 0.5 million floating point operations per second (MFLOPS) while maintaining excellent speech quality. Novel methods to quantize the residual and the long and short term model filters are presented.

  9. Speech perception of young children using nucleus 22-channel or CLARION cochlear implants.

    PubMed

    Young, N M; Grohne, K M; Carrasco, V N; Brown, C

    1999-04-01

    This study compares the auditory perceptual skill development of 23 congenitally deaf children who received the Nucleus 22-channel cochlear implant with the SPEAK speech coding strategy, and 20 children who received the CLARION Multi-Strategy Cochlear Implant with the Continuous Interleaved Sampler (CIS) speech coding strategy. All were under 5 years old at implantation. Preimplantation, there were no significant differences between the groups in age, length of hearing aid use, or communication mode. Auditory skills were assessed at 6 months and 12 months after implantation. Postimplantation, the mean scores on all speech perception tests were higher for the Clarion group. These differences were statistically significant for the pattern perception and monosyllable subtests of the Early Speech Perception battery at 6 months, and for the Glendonald Auditory Screening Procedure at 12 months. Multiple regression analysis revealed that device type accounted for the greatest variance in performance after 12 months of implant use. We conclude that children using the CIS strategy implemented in the Clarion implant may develop better auditory perceptual skills during the first year postimplantation than children using the SPEAK strategy with the Nucleus device.

  10. Zipf's Law in Short-Time Timbral Codings of Speech, Music, and Environmental Sound Signals

    PubMed Central

    Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Álvaro

    2012-01-01

    Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis on the intrinsic characteristics of most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, these database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. PMID:22479497
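
    A rank-frequency analysis of the kind reported can be reproduced in a few lines: count code-word (here, token) frequencies, sort by rank, and fit the Zipf exponent by least squares in log-log space. The function name and the fitting choice are illustrative assumptions, not the paper's exact procedure:

```python
from collections import Counter
import math

def zipf_exponent(tokens):
    """Fit the exponent s of a Zipfian rank-frequency law f(r) ~ r**(-s)
    by least squares on log(rank) vs log(frequency)."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r + 1) for r in range(len(freqs))]  # log rank, 1-based
    ys = [math.log(f) for f in freqs]                  # log frequency
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den  # negate the slope to report the exponent s
```

    On data generated with frequencies proportional to 1/rank, the fitted exponent comes out close to one, matching the heavy-tailed distribution the paper reports for timbral code-words.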

  11. How Does Reading Performance Modulate the Impact of Orthographic Knowledge on Speech Processing? A Comparison of Normal Readers and Dyslexic Adults

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Nelis, Aubéline; Kolinsky, Régine

    2014-01-01

    Studies on proficient readers showed that speech processing is affected by knowledge of the orthographic code. Yet, the automaticity of the orthographic influence depends on task demand. Here, we addressed this automaticity issue in normal and dyslexic adult readers by comparing the orthographic effects obtained in two speech processing tasks that…

  12. Perception of the Auditory-Visual Illusion in Speech Perception by Children with Phonological Disorders

    ERIC Educational Resources Information Center

    Dodd, Barbara; McIntosh, Beth; Erdener, Dogu; Burnham, Denis

    2008-01-01

    An example of the auditory-visual illusion in speech perception, first described by McGurk and MacDonald, is the perception of [ta] when listeners hear [pa] in synchrony with the lip movements for [ka]. One account of the illusion is that lip-read and heard speech are combined in an articulatory code since people who mispronounce words respond…

  13. Hearing and Speech Sciences in Educational Environment Mapping in Brazil: education, work and professional experience.

    PubMed

    Celeste, Letícia Corrêa; Zanoni, Graziela; Queiroga, Bianca; Alves, Luciana Mendonça

    2017-03-09

    To map the profile of Brazilian Speech Therapists who report working in Educational Speech Therapy, with regard to training, practice, and professional experience. Retrospective study, based on secondary analysis of the Federal Council of Hearing and Speech Sciences database, covering questionnaires that reported work in the Educational Environment. 312 questionnaires were completed, 93.3% of them by women aged 30-39 years. Most Speech Therapists continued their studies, opting mostly for specialization. Almost 50% of respondents have worked in the specialty for less than six years, most significantly in the public service (especially municipal) and in the private sector. The Speech Therapist active in the Educational area in Brazil is predominantly female, values continuing education after graduation, and seeks specialization mostly in the areas of Audiology and Orofacial Motor function. Most have up to 10 years of work experience, divided mainly between public (municipal) and private schools. Speech Therapists in the Educational area work mostly in Elementary and Primary school, with varied workloads.

  14. Countering Propaganda in the Global War on Terrorism: What can a Democracy do?

    DTIC Science & Technology

    2008-05-01

    Sedition Acts: “Over nineteen hundred prosecutions and other judicial proceedings during the war, involving speeches, newspaper articles, pamphlets...” Table-of-contents excerpts: ...Liberties and National Security; Protection of Free Speech: Zechariah Chafee, Limits of Free Speech; Countering Propaganda (Protectionism).

  15. Integrating Pragmatics Instruction in a Content-Based Classroom

    ERIC Educational Resources Information Center

    Krulatz, Anna

    2014-01-01

    The issue of teaching pragmatics in foreign and second language classrooms has received a lot of attention in recent years. Its origins can be dated back to the Cross-Cultural Speech Act Realization Project (CCSARP) led by Blum-Kulka, House and Kasper (1989) and the research on interlanguage speech acts that followed (for a comprehensive…

  16. Detecting and Understanding the Impact of Cognitive and Interpersonal Conflict in Computer Supported Collaborative Learning Environments

    ERIC Educational Resources Information Center

    Prata, David Nadler; Baker, Ryan S. J. d.; Costa, Evandro d. B.; Rose, Carolyn P.; Cui, Yue; de Carvalho, Adriana M. J. B.

    2009-01-01

    This paper presents a model which can automatically detect a variety of student speech acts as students collaborate within a computer supported collaborative learning environment. In addition, an analysis is presented which gives substantial insight as to how students' learning is associated with students' speech acts, knowledge that will…

  17. An Attempt to Raise Japanese EFL Learners' Pragmatic Awareness Using Online Discourse Completion Tasks

    ERIC Educational Resources Information Center

    Tanaka, Hiroya; Oki, Nanaho

    2015-01-01

    This practical paper discusses the effect of explicit instruction to raise Japanese EFL learners' pragmatic awareness using online discourse completion tasks. The five-part tasks developed by the authors use American TV drama scenes depicting particular speech acts and include explicit instruction in these speech acts. 46 Japanese EFL college…

  18. An Investigation of Refusal Strategies as Used by Bahdini Kurdish and Syriac Aramaic Speakers

    ERIC Educational Resources Information Center

    Shareef, Dilgash M.; Qyrio, Marina Isteefan; Ali, Chiman Nadheer

    2018-01-01

    For the purpose of achieving successful communication, issues such as the appropriateness of speech acts and face saving become essential. Therefore, it is very important to achieve a high level of pragmatic competence in speech acts. Bearing this in mind, this study was conducted to investigate the preferred refusal strategies Kurdish and…

  19. Discussing Course Literature Online: Analysis of Macro Speech Acts in an Asynchronous Computer Conference

    ERIC Educational Resources Information Center

    Kosunen, Riitta

    2009-01-01

    This paper presents a macro speech act analysis of computer-mediated conferencing on a university course on language pedagogy. Students read scholarly articles on language learning and discussed them online, in order to make sense of them collaboratively in preparation for a reflective essay. The study explores how the course participants made use…

  20. Perceptions of Refusals to Invitations: Exploring the Minds of Foreign Language Learners

    ERIC Educational Resources Information Center

    Felix-Brasdefer, J. Cesar

    2008-01-01

    Descriptions of speech act realisations of native and non-native speakers abound in the cross-cultural and interlanguage pragmatics literature. Yet, what is lacking is an analysis of the cognitive processes involved in the production of speech acts. This study examines the cognitive processes and perceptions of learners of Spanish when refusing…

  1. The design of an adaptive predictive coder using a single-chip digital signal processor

    NASA Astrophysics Data System (ADS)

    Randolph, M. A.

    1985-01-01

    A speech coding processor architecture design study has been performed in which the Texas Instruments TMS32010 was selected from among three commercially available digital signal processing integrated circuits and evaluated in an implementation study of real-time Adaptive Predictive Coding (APC). The TMS32010 was compared with the AT&T Bell Laboratories DSP I and the Nippon Electric Co. µPD7720 and was found to be the most suitable for a single-chip implementation of APC. A preliminary system design based on the TMS32010 has been carried out, and several of the hardware and software design issues are discussed. Particular attention was paid to the design of an external memory controller which permits rapid sequential access of external RAM. As a result, it has been determined that a compact hardware implementation of the APC algorithm is feasible based on the TMS32010. Originator-supplied keywords include: vocoders, speech compression, adaptive predictive coding, digital signal processing microcomputers, speech processor architectures, and special purpose processor.
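
    The core loop of adaptive predictive coding (short-term linear prediction per frame, with the prediction residual quantized for transmission) can be sketched as follows. This is an algorithm illustration only, not the study's fixed-point TMS32010 implementation; the predictor order and quantizer step size are arbitrary choices.

```python
import math

def lpc_coeffs(frame, order):
    """Autocorrelation method + Levinson-Durbin recursion.
    Returns a[0..order] with a[0] == 1, so the prediction error is
    e[n] = sum(a[j] * x[n-j] for j in 0..order)."""
    n = len(frame)
    r = [sum(frame[i] * frame[i + k] for i in range(n - k))
         for k in range(order + 1)]
    a = [1.0] + [0.0] * order
    err = r[0]
    for m in range(1, order + 1):
        acc = sum(a[j] * r[m - j] for j in range(m))
        k = -acc / err
        new_a = a[:]
        for j in range(1, m):
            new_a[j] = a[j] + k * a[m - j]
        new_a[m] = k
        a = new_a
        err *= 1.0 - k * k
    return a

def apc_encode(frame, order=2, step=0.05):
    """Short-term prediction residual, uniformly quantized
    (order and step are arbitrary illustration values)."""
    a = lpc_coeffs(frame, order)
    residual = []
    for i in range(len(frame)):
        pred = -sum(a[j] * frame[i - j]
                    for j in range(1, order + 1) if i - j >= 0)
        residual.append(round((frame[i] - pred) / step))
    return a, residual

# Demo on a synthetic sinusoid: a 2nd-order predictor captures it almost
# exactly, so the quantized residual carries far less energy than the signal.
x = [math.sin(0.3 * i) for i in range(240)]
a, res = apc_encode(x, order=2)
```

    A real APC coder adds pitch (long-term) prediction, noise shaping, and bit packing; the point here is only the predict-then-quantize-residual structure.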

  2. Effect of the speed of a single-channel dynamic range compressor on intelligibility in a competing speech task

    NASA Astrophysics Data System (ADS)

    Stone, Michael A.; Moore, Brian C. J.

    2003-08-01

    Using a "noise-vocoder" cochlear implant simulator [Shannon et al., Science 270, 303-304 (1995)], the effect of the speed of dynamic range compression on speech intelligibility was assessed, using normal-hearing subjects. The target speech had a level 5 dB above that of the competing speech. Initially, baseline performance was measured with no compression active, using between 4 and 16 processing channels. Then, performance was measured using a fast-acting compressor and a slow-acting compressor, each operating prior to the vocoder simulation. The fast system produced significant gain variation over syllabic timescales. The slow system produced significant gain variation only over the timescale of sentences. With no compression active, about six channels were necessary to achieve 50% correct identification of words in sentences. Sixteen channels produced near-maximum performance. Slow-acting compression produced no significant degradation relative to the baseline. However, fast-acting compression consistently reduced performance relative to that for the baseline, over a wide range of performance levels. It is suggested that fast-acting compression degrades performance for two reasons: (1) because it introduces correlated fluctuations in amplitude in different frequency bands, which tends to produce perceptual fusion of the target and background sounds and (2) because it reduces amplitude modulation depth and intensity contrasts.
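
    The fast/slow distinction comes down to the attack and release time constants of the gain smoother: a fast compressor tracks syllabic level changes, a slow one only sentence-scale trends. As an illustrative sketch (not the authors' implementation; the threshold, ratio, and time constants below are arbitrary), a feed-forward compressor gain track can be computed as:

```python
import math

def compress_gain(levels_db, threshold_db, ratio, attack_s, release_s, fs):
    """Feed-forward compressor gain track (in dB) with one-pole
    attack/release smoothing of the static gain-reduction curve."""
    atk = math.exp(-1.0 / (attack_s * fs))
    rel = math.exp(-1.0 / (release_s * fs))
    gains, smoothed = [], 0.0      # smoothed gain reduction, dB (<= 0)
    for level in levels_db:
        over = max(0.0, level - threshold_db)
        target = -over * (1.0 - 1.0 / ratio)      # static compression curve
        coef = atk if target < smoothed else rel  # attack when reducing more
        smoothed = coef * smoothed + (1.0 - coef) * target
        gains.append(smoothed)
    return gains

# 4 Hz alternation between -10 and -30 dB mimics syllabic level changes.
fs = 1000  # control-rate samples per second
levels = [-10.0 if (i // 125) % 2 == 0 else -30.0 for i in range(2000)]
fast = compress_gain(levels, -20.0, 3.0, attack_s=0.005, release_s=0.1, fs=fs)
slow = compress_gain(levels, -20.0, 3.0, attack_s=2.0, release_s=2.0, fs=fs)
ptp = lambda g: max(g) - min(g)
print(round(ptp(fast[1000:]), 1), round(ptp(slow[1000:]), 1))
```

    On this input the fast setting shows several dB of gain swing per syllable-like cycle while the slow setting barely moves, which is the gain-variation contrast the study exploits.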

  3. Explicit authenticity and stimulus features interact to modulate BOLD response induced by emotional speech.

    PubMed

    Drolet, Matthis; Schubotz, Ricarda I; Fischer, Julia

    2013-06-01

    Context has been found to have a profound effect on the recognition of social stimuli and correlated brain activation. The present study was designed to determine whether knowledge about emotional authenticity influences emotion recognition expressed through speech intonation. Participants classified emotionally expressive speech in an fMRI experimental design as sad, happy, angry, or fearful. For some trials, stimuli were cued as either authentic or play-acted in order to manipulate participant top-down belief about authenticity, and these labels were presented both congruently and incongruently to the emotional authenticity of the stimulus. Contrasting authentic versus play-acted stimuli during uncued trials indicated that play-acted stimuli spontaneously up-regulate activity in the auditory cortex and regions associated with emotional speech processing. In addition, a clear interaction effect of cue and stimulus authenticity showed up-regulation in the posterior superior temporal sulcus and the anterior cingulate cortex, indicating that cueing had an impact on the perception of authenticity. In particular, when a cue indicating an authentic stimulus was followed by a play-acted stimulus, additional activation occurred in the temporoparietal junction, probably pointing to increased load on perspective taking in such trials. While actual authenticity has a significant impact on brain activation, individual belief about stimulus authenticity can additionally modulate the brain response to differences in emotionally expressive speech.

  4. Taking a Stand for Speech.

    ERIC Educational Resources Information Center

    Moore, Wayne D.

    1995-01-01

    Asserts that freedom of speech issues were among the first major confrontations in U.S. constitutional law. Maintains that lessons from the controversies surrounding the Sedition Act of 1798 have continuing practical relevance. Describes and discusses the significance of freedom of speech to the U.S. political system. (CFR)

  5. Free Speech Yearbook 1977.

    ERIC Educational Resources Information Center

    Phifer, Gregg, Ed.

    The eleven articles in this collection explore various aspects of freedom of speech. Topics include the lack of knowledge on the part of many judges regarding the complex act of communication; the legislatures and free speech in colonial Connecticut and Rhode Island; contributions of sixteenth century Anabaptist heretics to First Amendment…

  6. Health Insurance Portability and Accountability Act (HIPAA) legislation and its implication on speech privacy design in health care facilities

    NASA Astrophysics Data System (ADS)

    Tocci, Gregory C.; Storch, Christopher A.

    2005-09-01

    The Health Insurance Portability and Accountability Act (HIPAA) of 1996 (104th Congress, H.R. 3103, January 3, 1996) requires, among many things, that individual patient records and information be protected from unnecessary disclosure. This responsibility is assigned to the U.S. Department of Health and Human Services (HHS), which has issued a Privacy Rule most recently dated August 2002, with a revision proposed in 2005 to strengthen penalties for inappropriate breaches of patient privacy. Despite this, speech privacy in many instances need not be guaranteed by a health care facility. Nevertheless, the regulation implies that due regard be given to speech privacy in both facility design and operation. This presentation will explore the practical aspects of implementing speech privacy in health care facilities and make recommendations for certain specific speech privacy situations.

  7. Deriving Word Order in Code-Switching: Feature Inheritance and Light Verbs

    ERIC Educational Resources Information Center

    Shim, Ji Young

    2013-01-01

    This dissertation investigates code-switching (CS), the concurrent use of more than one language in conversation, commonly observed in bilingual speech. Assuming that code-switching is subject to universal principles, just like monolingual grammar, the dissertation provides a principled account of code-switching, with particular emphasis on OV~VO…

  8. JND measurements of the speech formants parameters and its implication in the LPC pole quantization

    NASA Astrophysics Data System (ADS)

    Orgad, Yaakov

    1988-08-01

    The inherent sensitivity of auditory perception is explicitly used with the objective of designing an efficient speech encoder. Speech can be modelled by a filter representing the vocal tract shape that is driven by an excitation signal representing glottal air flow. This work concentrates on the filter encoding problem, assuming that excitation signal encoding is optimal. Linear predictive coding (LPC) techniques were used to model a short speech segment by an all-pole filter; each pole was directly related to the speech formants. Measurements were made of the auditory just noticeable difference (JND) corresponding to the natural speech formants, with the LPC filter poles as the best candidates to represent the speech spectral envelope. The JND is the maximum precision required in speech quantization; it was defined on the basis of the shift of one pole parameter of a single frame of a speech segment necessary to induce subjective perception of the distortion, with 0.75 probability. The average JND in LPC filter poles in natural speech was found to increase with increasing pole bandwidth and, to a lesser extent, frequency. The JND measurements showed a large spread of the residuals around the average values, indicating that inter-formant coupling and, perhaps, other, not yet fully understood, factors were not taken into account at this stage of the research. A future treatment should consider these factors. The average JNDs obtained in this work were used to design pole quantization tables for speech coding and provided a better bit rate than the standard reflection-coefficient quantizer; a 30-bits-per-frame pole quantizer yielded a speech quality similar to that obtained with a standard 41-bits-per-frame reflection coefficient quantizer. Owing to the complexity of the numerical root extraction system, the practical implementation of the pole quantization approach remains to be proved.
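
    The pole-formant correspondence the study relies on can be made concrete. For a sampled all-pole filter with a conjugate pole pair at radius r and angle theta, the standard approximations give the formant centre frequency F = theta*fs/(2*pi) and the -3 dB bandwidth B = -fs*ln(r)/pi. A minimal sketch of this mapping (not the study's quantizer), with an arbitrary example formant:

```python
import math

def pole_to_formant(r, theta, fs):
    """Map a complex LPC pole r*exp(j*theta) (0 < r < 1) to formant centre
    frequency and -3 dB bandwidth in Hz, using the standard all-pole
    approximations F = theta*fs/(2*pi) and B = -fs*ln(r)/pi."""
    return theta * fs / (2 * math.pi), -fs * math.log(r) / math.pi

def formant_to_pole(freq_hz, bw_hz, fs):
    """Inverse mapping: place a pole for a given formant frequency/bandwidth."""
    return math.exp(-math.pi * bw_hz / fs), 2 * math.pi * freq_hz / fs

# Round-trip a 500 Hz formant with 60 Hz bandwidth at fs = 8000 Hz.
r, theta = formant_to_pole(500.0, 60.0, 8000)
f, b = pole_to_formant(r, theta, 8000)
```

    Quantizing (r, theta) or (F, B) with step sizes derived from the measured JNDs is then a table-design problem on top of this mapping.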

  9. The development of the Nucleus Freedom Cochlear implant system.

    PubMed

    Patrick, James F; Busby, Peter A; Gibson, Peter J

    2006-12-01

    Cochlear Limited (Cochlear) released the fourth-generation cochlear implant system, Nucleus Freedom, in 2005. Freedom is based on 25 years of experience in cochlear implant research and development and incorporates advances in medicine, implantable materials, electronic technology, and sound coding. This article presents the development of Cochlear's implant systems, with an overview of the first 3 generations, and details of the Freedom system: the CI24RE receiver-stimulator, the Contour Advance electrode, the modular Freedom processor, the available speech coding strategies, the input processing options of Smart Sound to improve the signal before coding as electrical signals, and the programming software. Preliminary results from multicenter studies with the Freedom system are reported, demonstrating better levels of performance compared with the previous systems. The final section presents the most recent implant reliability data, with the early findings at 18 months showing improved reliability of the Freedom implant compared with the earlier Nucleus 3 System. Also reported are some of the findings of Cochlear's collaborative research programs to improve recipient outcomes. Included are studies showing the benefits from bilateral implants, electroacoustic stimulation using an ipsilateral and/or contralateral hearing aid, advanced speech coding, and streamlined speech processor programming.

  10. Tuning time-frequency methods for the detection of metered HF speech

    NASA Astrophysics Data System (ADS)

    Nelson, Douglas J.; Smith, Lawrence H.

    2002-12-01

    Speech is metered if the stresses occur at a nearly regular rate. Metered speech is common in poetry, and it can occur naturally in speech, if the speaker is spelling a word or reciting words or numbers from a list. In radio communications, the CQ request, call sign and other codes are frequently metered. In tactical communications and air traffic control, location, heading and identification codes may be metered. Moreover metering may be expected to survive even in HF communications, which are corrupted by noise, interference and mistuning. For this environment, speech recognition and conventional machine-based methods are not effective. We describe Time-Frequency methods which have been adapted successfully to the problem of mitigation of HF signal conditions and detection of metered speech. These methods are based on modeled time and frequency correlation properties of nearly harmonic functions. We derive these properties and demonstrate a performance gain over conventional correlation and spectral methods. Finally, in addressing the problem of HF single sideband (SSB) communications, the problems of carrier mistuning, interfering signals, such as manual Morse, and fast automatic gain control (AGC) must be addressed. We demonstrate simple methods which may be used to blindly mitigate mistuning and narrowband interference, and effectively invert the fast automatic gain function.
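
    One simple way to quantify a "nearly regular" stress rate is to autocorrelate an energy contour and look for a strong peak at a nonzero lag. This is a sketch of the general idea only; the paper's detectors are built on time-frequency correlation properties of nearly harmonic functions, not on this envelope autocorrelation.

```python
import random

def meter_score(envelope):
    """Largest normalized autocorrelation of a (mean-removed) energy contour
    at any nonzero lag; near-periodic stress patterns score close to 1."""
    n = len(envelope)
    mean = sum(envelope) / n
    x = [v - mean for v in envelope]
    e0 = sum(v * v for v in x)
    return max(sum(x[i] * x[i + lag] for i in range(n - lag)) / e0
               for lag in range(2, n // 2))

random.seed(1)
metered = [1.0 if i % 10 == 0 else 0.0 for i in range(200)]  # regular stresses
irregular = [random.random() for _ in range(200)]            # no regular rate
```

    A metered contour scores near 1 at the lag matching the stress period, while an irregular contour shows no dominant lag.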

  11. Language choice in bimodal bilingual development.

    PubMed

    Lillo-Martin, Diane; de Quadros, Ronice M; Chen Pichler, Deborah; Fieldsteel, Zoe

    2014-01-01

    Bilingual children develop sensitivity to the language used by their interlocutors at an early age, reflected in differential use of each language by the child depending on their interlocutor. Factors such as discourse context and relative language dominance in the community may mediate the degree of language differentiation in preschool age children. Bimodal bilingual children, acquiring both a sign language and a spoken language, have an even more complex situation. Their Deaf parents vary considerably in access to the spoken language. Furthermore, in addition to code-mixing and code-switching, they use code-blending-expressions in both speech and sign simultaneously-an option uniquely available to bimodal bilinguals. Code-blending is analogous to code-switching sociolinguistically, but is also a way to communicate without suppressing one language. For adult bimodal bilinguals, complete suppression of the non-selected language is cognitively demanding. We expect that bimodal bilingual children also find suppression difficult, and use blending rather than suppression in some contexts. We also expect relative community language dominance to be a factor in children's language choices. This study analyzes longitudinal spontaneous production data from four bimodal bilingual children and their Deaf and hearing interlocutors. Even at the earliest observations, the children produced more signed utterances with Deaf interlocutors and more speech with hearing interlocutors. However, while three of the four children produced >75% speech alone in speech target sessions, they produced <25% sign alone in sign target sessions. All four produced bimodal utterances in both, but more frequently in the sign sessions, potentially because they find suppression of the dominant language more difficult. Our results indicate that these children are sensitive to the language used by their interlocutors, while showing considerable influence from the dominant community language.

  13. Balancing the Pendulum of Freedom

    DTIC Science & Technology

    2008-03-25

    Excerpts cite Geoffrey R. Stone, “Perilous Times: Free Speech in Wartime from the Sedition Act of 1798 to the War on Terrorism” (linked from the Woodrow...), and “Eugene V. Debs,” and quote: “deemed a violation of the Fourth Amendment… No federal official is authorized to commit a crime on behalf of the government.” The Supreme Court has…

  14. Voice Modulations in German Ironic Speech

    ERIC Educational Resources Information Center

    Scharrer, Lisa; Christmann, Ursula; Knoll, Monja

    2011-01-01

    Previous research has shown that in different languages ironic speech is acoustically modulated compared to literal speech, and these modulations are assumed to aid the listener in the comprehension process by acting as cues that mark utterances as ironic. The present study was conducted to identify paraverbal features of German "ironic…

  15. Children's Comprehension and Use of Indirect Speech Acts: The Case of Soliciting Praise.

    ERIC Educational Resources Information Center

    Kovac, Ceil

    Children in school cooperate in the evaluation of their products and activities by teachers and other students by calling attention to these products and activities with various language strategies. The requests that someone notice something and/or praise it are the data base for this study. The unmarked speech act for this request type is in the…

  16. Processing of Basic Speech Acts Following Localized Brain Damage: A New Light on the Neuroanatomy of Language

    ERIC Educational Resources Information Center

    Soroker, N.; Kasher, A.; Giora, R.; Batori, G.; Corn, C.; Gil, M.; Zaidel, E.

    2005-01-01

    We examined the effect of localized brain lesions on processing of the basic speech acts (BSAs) of question, assertion, request, and command. Both left and right cerebral damage produced significant deficits relative to normal controls, and left brain damaged patients performed worse than patients with right-sided lesions. This finding argues…

  17. How to Ask for a Favor: An Exploration of Speech Act Pragmatics in Heritage Russian

    ERIC Educational Resources Information Center

    Dubinina, Irina Yevgenievna

    2012-01-01

    Heritage language (HL) is a linguistic system that arises in the context of early childhood bilingualism, both sequential and simultaneous, when one of the languages is not fully acquired. The performance of speech acts in HLs is yet to be understood, and this dissertation is a first step in this direction. The study investigates the pragmatic…

  18. Constructing a Scale to Assess L2 Written Speech Act Performance: WDCT and E-Mail Tasks

    ERIC Educational Resources Information Center

    Chen, Yuan-shan; Liu, Jianda

    2016-01-01

    This study reports the development of a scale to evaluate the speech act performance by intermediate-level Chinese learners of English. A qualitative analysis of the American raters' comments was conducted on learner scripts in response to a total of 16 apology and request written discourse completion task (WDCT) situations. The results showed…

  19. Ways of Examining Speech Acts in Young African American Children: Considering Inside-Out and Outside-In Approaches

    ERIC Educational Resources Information Center

    DeJarnette, Glenda; Rivers, Kenyatta O.; Hyter, Yvette D.

    2015-01-01

    To develop a framework for further study of pragmatic behavior in young children from African American English (AAE) speaking backgrounds, one aspect of pragmatic behavior is explored in this article, specifically, speech acts. The aims of this article are to (1) examine examples of how external taxonomies (i.e., an "etic" or…

  20. Getting Your Speech Act Together: The Pragmatic Ability of Second Language Learners. Working Papers on Bilingualism, No. 17.

    ERIC Educational Resources Information Center

    Rintell, Ellen

    A role-playing procedure for elicitation of speech acts was designed to study aspects of the communicative competence of second language learners, namely, their language variation with respect to deference when the age and sex of the addressee are systematically manipulated. Sixteen Spanish-speaking adult learners of English as a second language…

  1. “Down the Language Rabbit Hole with Alice”: A Case Study of a Deaf Girl with a Cochlear Implant

    PubMed Central

    Andrews, Jean F.; Dionne, Vickie

    2011-01-01

    Alice, a deaf girl who was implanted after three years of age, was exposed to four weeks of storybook sessions conducted in American Sign Language (ASL) and speech (English). Two research questions were addressed: (1) how did she use sign bimodal/bilingualism, code-switching, and code-mixing during reading activities, and (2) what sign-bilingual code-switching and code-mixing strategies did she use while attending to stories delivered under two treatments: ASL only and speech only. Retelling scores were collected to determine the type and frequency of her code-switching/code-mixing strategies between the two languages after Alice was read a story in ASL and in spoken English. Qualitative descriptive methods were utilized; teacher, clinician, and student transcripts of the reading and retelling sessions were recorded. Results showed that Alice frequently used code-switching and code-mixing strategies while retelling the stories under both treatments. Her spoken retellings of the stories increased under both the ASL story reading and the spoken-English-only reading; the ASL story reading did not decrease her retelling scores in spoken English. Professionals are encouraged to consider the benefits of early sign bimodal/bilingualism to enhance the overall speech, language, and reading proficiency of deaf children with cochlear implants. PMID:22135677

  2. Ethnography of Communication: Cultural Codes and Norms.

    ERIC Educational Resources Information Center

    Carbaugh, Donal

    The primary tasks of the ethnographic researcher are to discover, describe, and comparatively analyze different speech communities' ways of speaking. Two general abstractions occurring in ethnographic analyses are normative and cultural. Communicative norms are formulated in analyzing and explaining the "patterned use of speech."…

  3. Variable frame rate transmission - A review of methodology and application to narrow-band LPC speech coding

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Makhoul, J.; Schwartz, R. M.; Huggins, A. W. F.

    1982-04-01

    The variable frame rate (VFR) transmission methodology developed, implemented, and tested in the years 1973-1978 for efficiently transmitting linear predictive coding (LPC) vocoder parameters extracted from the input speech at a fixed frame rate is reviewed. With the VFR method, parameters are transmitted only when their values have changed sufficiently over the interval since their preceding transmission. Two distinct approaches to automatic implementation of the VFR method are discussed. The first bases the transmission decisions on comparisons between the parameter values of the present frame and the last transmitted frame. The second, which is based on a functional perceptual model of speech, compares the parameter values of all the frames that lie in the interval between the present frame and the last transmitted frame against a linear model of parameter variation over that interval. Also considered is the application of VFR transmission to the design of narrow-band LPC speech coders with average bit rates of 2000-2400 bits/s.
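
    The first approach above (compare the current frame against the last transmitted frame) reduces to a simple thresholding loop. A minimal sketch, with a hypothetical per-parameter max-difference criterion and an arbitrary threshold (real VFR coders weight each LPC parameter perceptually):

```python
def vfr_transmit(frames, threshold):
    """First VFR approach: transmit a parameter frame only when it differs
    enough (max absolute difference here, an arbitrary criterion) from the
    last transmitted frame."""
    sent, last = [], None
    for i, frame in enumerate(frames):
        if last is None or max(abs(a - b) for a, b in zip(frame, last)) > threshold:
            sent.append((i, frame))
            last = frame
    return sent

# A slowly drifting one-parameter track: most frames need not be sent.
frames = [[0.01 * i] for i in range(10)]
sent = vfr_transmit(frames, threshold=0.05)
print([i for i, _ in sent])  # → [0, 6]
```

    The receiver holds or interpolates the last transmitted frame across the skipped interval, which is what lowers the average bit rate.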

  4. Coding strategies for cochlear implants under adverse environments

    NASA Astrophysics Data System (ADS)

    Tahmina, Qudsia

    Cochlear implants are electronic prosthetic devices that restores partial hearing in patients with severe to profound hearing loss. Although most coding strategies have significantly improved the perception of speech in quite listening conditions, there remains limitations on speech perception under adverse environments such as in background noise, reverberation and band-limited channels, and we propose strategies that improve the intelligibility of speech transmitted over the telephone networks, reverberated speech and speech in the presence of background noise. For telephone processed speech, we propose to examine the effects of adding low-frequency and high- frequency information to the band-limited telephone speech. Four listening conditions were designed to simulate the receiving frequency characteristics of telephone handsets. Results indicated improvement in cochlear implant and bimodal listening when telephone speech was augmented with high frequency information and therefore this study provides support for design of algorithms to extend the bandwidth towards higher frequencies. The results also indicated added benefit from hearing aids for bimodal listeners in all four types of listening conditions. Speech understanding in acoustically reverberant environments is always a difficult task for hearing impaired listeners. Reverberated sounds consists of direct sound, early reflections and late reflections. Late reflections are known to be detrimental to speech intelligibility. In this study, we propose a reverberation suppression strategy based on spectral subtraction to suppress the reverberant energies from late reflections. Results from listening tests for two reverberant conditions (RT60 = 0.3s and 1.0s) indicated significant improvement when stimuli was processed with SS strategy. 
The proposed strategy operates with little to no prior information on the signal or the room characteristics and can therefore potentially be implemented in real-time CI speech processors. For speech in background noise, we propose a mechanism underlying the contribution of harmonics to the benefit of electroacoustic stimulation in cochlear implants. The proposed strategy is based on harmonic modeling and uses a synthesis-driven approach to synthesize the harmonics in voiced segments of speech. Based on objective measures, results indicated improvement in speech quality. This study warrants further work on the development of algorithms to regenerate the harmonics of voiced segments in the presence of noise.
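    The late-reflection suppression described in this record is a form of spectral subtraction. A generic magnitude-domain sketch follows; the spectral floor constant and the reverberant-energy estimate are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def spectral_subtract(noisy_mag, reverb_mag_est, floor=0.01):
    """Magnitude-domain spectral subtraction sketch: subtract an estimate
    of the late-reverberant energy from each spectral bin, flooring the
    result to a small fraction of the input to avoid negative magnitudes."""
    cleaned = noisy_mag - reverb_mag_est
    return np.maximum(cleaned, floor * noisy_mag)

# Bins whose estimated reverberant energy exceeds the signal are floored.
out = spectral_subtract(np.array([1.0, 0.5, 0.2]), np.array([0.3, 0.3, 0.3]))
print(out)  # → [0.7, 0.2, 0.002]
```

    In a full system the cleaned magnitudes would be recombined with the noisy phase and inverse-transformed back to the time domain.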

  5. Applying the Verona coding definitions of emotional sequences (VR-CoDES) to code medical students' written responses to written case scenarios: Some methodological and practical considerations.

    PubMed

    Ortwein, Heiderose; Benz, Alexander; Carl, Petra; Huwendiek, Sören; Pander, Tanja; Kiessling, Claudia

    2017-02-01

    To investigate whether the Verona Coding Definitions of Emotional Sequences to code health providers' responses (VR-CoDES-P) can be used for assessment of medical students' responses to patients' cues and concerns provided in written case vignettes. Student responses in direct speech to patient cues and concerns were analysed in 21 different case scenarios using VR-CoDES-P. A total of 977 student responses were available for coding, and 857 responses were codable with the VR-CoDES-P. In 74.6% of responses, the students used either a "reducing space" statement only or a "providing space" statement immediately followed by a "reducing space" statement. Overall, the most frequent response was explicit information advice (ERIa) followed by content exploring (EPCEx) and content acknowledgement (EPCAc). VR-CoDES-P were applicable to written responses of medical students when they were phrased in direct speech. The application of VR-CoDES-P is reliable and feasible when using the differentiation of "providing" and "reducing space" responses. Communication strategies described by students in non-direct speech were difficult to code and produced many missings. VR-CoDES-P are useful for analysis of medical students' written responses when focusing on emotional issues. Students need precise instructions for their response in the given test format. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Neural Representations Used by Brain Regions Underlying Speech Production

    ERIC Educational Resources Information Center

    Segawa, Jennifer Anne

    2013-01-01

    Speech utterances are phoneme sequences but may not always be represented as such in the brain. For instance, electropalatography evidence indicates that as speaking rate increases, gestures within syllables are manipulated separately but those within consonant clusters act as one motor unit. Moreover, speech error data suggest that a syllable's…

  7. 76 FR 14661 - Notice of Public Information Collection(s) Being Submitted for Review and Approval to the Office...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-17

    ... and Speech-Impaired Individuals; the Americans with Disabilities Act of 1990, Public Law 101-336, 104..., 2007, the Commission released a Report and Order, IP-Enabled Services; Implementation of Sections 225... Persons with Disabilities; Telecommunications Relay Services and Speech-to- Speech Services for...

  8. Discourse Analysis and Language Learning [Summary of a Symposium].

    ERIC Educational Resources Information Center

    Hatch, Evelyn

    1981-01-01

    A symposium on discourse analysis and language learning is summarized. Discourse analysis can be divided into six fields of research: syntax, the amount of syntactic organization required for different types of discourse, large speech events, intra-sentential cohesion in text, speech acts, and unequal power discourse. Research on speech events and…

  9. Do Proficiency and Study-Abroad Experience Affect Speech Act Production? Analysis of Appropriateness, Accuracy, and Fluency

    ERIC Educational Resources Information Center

    Taguchi, Naoko

    2011-01-01

    This cross-sectional study examined the effect of general proficiency and study-abroad experience in production of speech acts among learners of L2 English. Participants were 25 native speakers of English and 64 Japanese college students of English divided into three groups. Group 1 (n = 22) had lower proficiency and no study-abroad experience.…

  10. Examination of Learner and Situation Level Variables: Choice of Speech Act and Request Strategy by Spanish L2 Learners

    ERIC Educational Resources Information Center

    Kuriscak, Lisa

    2015-01-01

    This study focuses on variation within a group of learners of Spanish (N = 253) who produced requests and complaints via a written discourse completion task. It examines the effects of learner and situational variables on production--the effect of proficiency and addressee-gender on speech-act choice and the effect of perception of imposition on…

  11. The contribution of the cerebellum to speech production and speech perception: clinical and functional imaging data.

    PubMed

    Ackermann, Hermann; Mathiak, Klaus; Riecker, Axel

    2007-01-01

    A classical tenet of clinical neurology proposes that cerebellar disorders may give rise to speech motor disorders (ataxic dysarthria), but spare perceptual and cognitive aspects of verbal communication. During the past two decades, however, a variety of higher-order deficits of speech production, e.g., more or less exclusive agrammatism, amnesic or transcortical motor aphasia, have been noted in patients with vascular cerebellar lesions, and transient mutism following resection of posterior fossa tumors in children may develop into similar constellations. Perfusion studies provided evidence for cerebello-cerebral diaschisis as a possible pathomechanism in these instances. Tight functional connectivity between the language-dominant frontal lobe and the contralateral cerebellar hemisphere represents a prerequisite of such long-distance effects. Recent functional imaging data point at a contribution of the right cerebellar hemisphere, concomitant with language-dominant dorsolateral and medial frontal areas, to the temporal organization of a prearticulatory verbal code ('inner speech'), in terms of the sequencing of syllable strings at a speaker's habitual speech rate. Besides motor control, this network also appears to be engaged in executive functions, e.g., subvocal rehearsal mechanisms of verbal working memory, and seems to be recruited during distinct speech perception tasks. Taken together, thus, a prearticulatory verbal code bound to reciprocal right cerebellar/left frontal interactions might represent a common platform for a variety of cerebellar engagements in cognitive functions. The distinct computational operation provided by cerebellar structures within this framework appears to be the concatenation of syllable strings into coarticulated sequences.

  12. Coding of sounds in the auditory system and its relevance to signal processing and coding in cochlear implants.

    PubMed

    Moore, Brian C J

    2003-03-01

    To review how the properties of sounds are "coded" in the normal auditory system and to discuss the extent to which cochlear implants can and do represent these codes. Data are taken from published studies of the response of the cochlea and auditory nerve to simple and complex stimuli, in both the normal and the electrically stimulated ear. REVIEW CONTENT: The review describes: 1) the coding in the normal auditory system of overall level (which partly determines perceived loudness), spectral shape (which partly determines perceived timbre and the identity of speech sounds), periodicity (which partly determines pitch), and sound location; 2) the role of the active mechanism in the cochlea, and particularly the fast-acting compression associated with that mechanism; 3) the neural response patterns evoked by cochlear implants; and 4) how the response patterns evoked by implants differ from those observed in the normal auditory system in response to sound. A series of specific issues is then discussed, including: 1) how to compensate for the loss of cochlear compression; 2) the effective number of independent channels in a normal ear and in cochlear implantees; 3) the importance of independence of responses across neurons; 4) the stochastic nature of normal neural responses; 5) the possible role of across-channel coincidence detection; and 6) potential benefits of binaural implantation. Current cochlear implants do not adequately reproduce several aspects of the neural coding of sound in the normal auditory system. Improved electrode arrays and coding systems may lead to improved coding and, it is hoped, to better performance.

  13. Interactive Activation Model of Speech Perception.

    DTIC Science & Technology

    1984-11-01

    Elman, J.L., & McClelland, J.L. Speech perception as a cognitive process: The interactive activation model of speech perception. In...attempts to provide a machine solution to the problem of speech perception. A second kind of model, growing out of Cognitive Psychology, attempts to...architectures to cognitive and perceptual problems. We also owe a debt to what we might call the computational connectionists -- those who have applied highly

  14. Implementation and Performance Exploration of a Cross-Genre Part of Speech Tagging Methodology to Determine Dialog Act Tags in the Chat Domain

    DTIC Science & Technology

    2010-09-01

    the flies.”) or a present tense verb when describing what an airplane does (“An airplane flies.”). This disambiguation is, in general, computationally...as part-of-speech and dialog-act tagging, and yet the volume of data created makes human analysis impractical. We present a cross-genre part-of...acceptable automatic dialog-act determination. Furthermore, we show that a simple naïve Bayes classifier achieves the same performance in a fraction of
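    A multinomial naïve Bayes tagger of the kind this report benchmarks can be sketched from scratch with word counts and add-one smoothing. The toy dialog-act tags and utterances below are hypothetical, not the report's chat-domain data:

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """Train a multinomial naive Bayes model from (tokens, tag) pairs."""
    word_counts = defaultdict(Counter)
    tag_counts = Counter()
    vocab = set()
    for tokens, tag in examples:
        tag_counts[tag] += 1
        word_counts[tag].update(tokens)
        vocab.update(tokens)
    return word_counts, tag_counts, vocab

def classify_nb(model, tokens):
    """Pick the tag maximizing log P(tag) + sum log P(word | tag),
    with add-one (Laplace) smoothing over the training vocabulary."""
    word_counts, tag_counts, vocab = model
    total = sum(tag_counts.values())
    best_tag, best_score = None, -math.inf
    for tag, n in tag_counts.items():
        score = math.log(n / total)
        denom = sum(word_counts[tag].values()) + len(vocab)
        for w in tokens:
            score += math.log((word_counts[tag][w] + 1) / denom)
        if score > best_score:
            best_tag, best_score = tag, score
    return best_tag

examples = [
    (["how", "are", "you"], "question"),
    (["what", "time", "is", "it"], "question"),
    (["hello", "there"], "greeting"),
    (["hi", "everyone"], "greeting"),
]
model = train_nb(examples)
print(classify_nb(model, ["what", "is", "that"]))  # → question
```

    Part of the appeal noted in the abstract is exactly this simplicity: training and classification are single passes over counts, hence the speed advantage over heavier taggers.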

  15. Speech transport for packet telephony and voice over IP

    NASA Astrophysics Data System (ADS)

    Baker, Maurice R.

    1999-11-01

    Recent advances in packet switching, internetworking, and digital signal processing technologies have converged to allow realizable practical implementations of packet telephony systems. This paper provides a tutorial on transmission engineering for packet telephony covering the topics of speech coding/decoding, speech packetization, packet data network transport, and impairments which may negatively impact end-to-end system quality. Particular emphasis is placed upon Voice over Internet Protocol given the current popularity and ubiquity of IP transport.

  16. [Attention deficit and understanding of non-literal meanings: the interpretation of indirect speech acts and idioms].

    PubMed

    Crespo, N; Manghi, D; García, G; Cáceres, P

    To report on the oral comprehension of the non-literal meanings of indirect speech acts and idioms in everyday speech by children with attention deficit hyperactivity disorder (ADHD). The subjects in this study consisted of a sample of 29 Chilean schoolchildren aged between 6 and 13 with ADHD and a control group of children without ADHD sharing similar socio-demographic characteristics. A quantitative method was utilised: comprehension was measured individually by means of an interactive instrument. The children listened to a dialogue taken from a cartoon series that included indirect speech acts and idioms and they had to choose one of the three options they were given: literal, non-literal or distracter. The children without ADHD identified the non-literal meaning more often, especially in idioms. Likewise, it should be pointed out that whereas the children without ADHD increased their scores as their ages went up, those with ADHD remained at the same point. ADHD not only interferes in the inferential comprehension of non-literal meanings but also inhibits the development of this skill in subjects affected by it.

  17. 42 CFR 485.701 - Basis and scope.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Therapy and Speech-Language Pathology Services § 485.701 Basis and scope. This subpart implements section 1861(p)(4) of the Act, which— (a) Defines outpatient physical therapy and speech pathology services; (b...

  18. A Cross-Cultural Study of Offering Advice Speech Acts by Iranian EFL Learners and English Native Speakers: Pragmatic Transfer in Focus

    ERIC Educational Resources Information Center

    Babaie, Sherveh; Shahrokhi, Mohsen

    2015-01-01

    The purpose of the present study was to compare the speech act of offering advice as realized by Iranian EFL learners and English native speakers. The study, more specifically, attempted to find out whether there was any pragmatic transfer from Persian (L1) among Iranian EFL learners while offering advice in English. It also examined whether…

  19. Sparse gammatone signal model optimized for English speech does not match the human auditory filters.

    PubMed

    Strahl, Stefan; Mertins, Alfred

    2008-07-18

    Evidence that neurosensory systems use sparse signal representations, together with the improved performance of signal processing algorithms using sparse signal models, has raised interest in sparse signal coding in recent years. For natural audio signals like speech and environmental sounds, gammatone atoms have been derived as expansion functions that generate a nearly optimal sparse signal model (Smith, E., Lewicki, M., 2006. Efficient auditory coding. Nature 439, 978-982). Furthermore, gammatone functions are established models for the human auditory filters. Thus far, a practical application of a sparse gammatone signal model has been prevented by the fact that deriving the sparsest representation is, in general, computationally intractable. In this paper, we applied an accelerated version of the matching pursuit algorithm for gammatone dictionaries, allowing real-time and large data set applications. We show that a sparse signal model in general has advantages in audio coding and that a sparse gammatone signal model encodes speech more efficiently, in terms of sparseness, than a sparse modified discrete cosine transform (MDCT) signal model. We also show that the optimal gammatone parameters derived for English speech do not match the human auditory filters, suggesting that signal processing applications should derive the parameters individually for each applied signal class instead of using psychometrically derived parameters. For brain research, it means that care should be taken in directly transferring findings of optimality from technical to biological systems.
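    Matching pursuit itself is simple to state: greedily pick the dictionary atom most correlated with the residual and subtract its projection. A minimal, unaccelerated sketch over an arbitrary unit-norm dictionary (the paper's gammatone-specific acceleration is not shown) is:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter):
    """Greedy matching pursuit: at each step, select the unit-norm atom
    (row of `dictionary`, shape (n_atoms, signal_len)) with the largest
    absolute correlation to the residual and remove its projection."""
    residual = signal.astype(float)
    selection = []
    for _ in range(n_iter):
        corr = dictionary @ residual
        k = int(np.argmax(np.abs(corr)))
        coeff = corr[k]
        selection.append((k, coeff))
        residual = residual - coeff * dictionary[k]
    return selection, residual

# With an orthonormal dictionary the signal is recovered exactly.
sel, res = matching_pursuit(np.array([3.0, 0.0, 4.0]), np.eye(3), 2)
print(sel)  # → [(2, 4.0), (0, 3.0)]
```

    The cost per iteration is dominated by the correlation step; the acceleration the paper describes exploits the analytic structure of gammatone atoms to avoid recomputing all correlations from scratch.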

  20. Teaching Speech Organization and Outlining Using a Color-Coded Approach.

    ERIC Educational Resources Information Center

    Hearn, Ralene

    The organization/outlining unit in the basic Public Speaking course can be made more interesting by using a color-coded instructional method that captivates students, facilitates understanding, and provides the opportunity for interesting reinforcement activities. The two part lesson includes a mini-lecture with a color-coded outline and a two…

  1. Adapted cuing technique: facilitating sequential phoneme production.

    PubMed

    Klick, S L

    1994-09-01

    ACT is a visual cuing technique designed to facilitate dyspraxic speech by highlighting the sequential production of phonemes. In using ACT, cues are presented in such a way as to suggest sequential, coarticulatory movement in an overall pattern of motion. While using ACT, the facilitator's hand moves forward and back along the side of her (or his) own face. Finger movements signal specific speech sounds in formations loosely based on the manual alphabet for the hearing impaired. The best movements suggest the flowing, interactive nature of coarticulated phonemes. The synergistic nature of speech is suggested by coordinated hand motions which tighten and relax, move quickly or slowly, reflecting the motions of the vocal tract at various points during production of phonemic sequences. General principles involved in using ACT include a primary focus on speech-in-motion, the monitoring and fading of cues, and the presentation of stimuli based on motor-task analysis of phonemic sequences. Phonemic sequences are cued along three dimensions: place, manner, and vowel-related mandibular motion. Cuing vowels is a central feature of ACT. Two parameters of vowel production, focal point of resonance and mandibular closure, are cued. The facilitator's hand motions reflect the changing shape of the vocal tract and the trajectory of the tongue that result from the coarticulation of vowels and consonants. Rigid presentation of the phonemes is secondary to the facilitator's primary focus on presenting the overall sequential movement. The facilitator's goal is to self-tailor ACT in response to the changing needs and abilities of the client.(ABSTRACT TRUNCATED AT 250 WORDS)

  2. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    PubMed Central

    Rallapalli, Varsha H.

    2016-01-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRenv) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRenv has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRenv. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRenv computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRenv in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
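    The core sEPSM quantity is the ratio of AC-coupled envelope power in the speech-plus-noise signal (above the noise-alone envelope floor) to the noise envelope power. A simplified single-band sketch of that computation follows; real implementations apply a modulation filter bank first and compute the ratio per modulation band, which is omitted here:

```python
import numpy as np

def envelope_power(env):
    """AC-coupled envelope power, normalized by the squared mean (DC)."""
    dc = np.mean(env)
    return np.mean((env - dc) ** 2) / dc**2

def snr_env_db(env_speech_noise, env_noise):
    """Single-band SNRenv sketch in dB: speech-plus-noise envelope power
    minus the noise-alone envelope floor, over the noise envelope power."""
    p_sn = envelope_power(env_speech_noise)
    p_n = envelope_power(env_noise)
    return 10 * np.log10(max(p_sn - p_n, 1e-12) / p_n)

# Speech modulations raise the S+N envelope power above the noise floor.
print(snr_env_db(np.array([1.0, 1.5, 0.5, 1.0]),
                 np.array([1.0, 1.2, 0.8, 1.0])))
```

    The neural version described in the abstract replaces these acoustic envelopes with modulation-domain estimates derived from shuffled correlograms of model spike trains.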

  3. Acoustic signals for emergency evacuation.

    DOT National Transportation Integrated Search

    1979-01-01

    Previous studies of binaural hearing suggested that speech sounds are less resistant to masking than are nonspeech sounds; experiments demonstrated that, when the nonspeech sounds are given a message to convey, they act more like speech. Earlier rese...

  4. Will Microfilm and Computers Replace Clippings?

    ERIC Educational Resources Information Center

    Oppendahl, Alison; And Others

    Four speeches are presented, each of which deals with the use of computers to organize and retrieve news stories. The first speech relates in detail the step-by-step process devised by the "Free Press" in Detroit to analyze, categorize, code, film, process, and retrieve news stories through the use of the electronic film retrieval…

  5. Comparisons of Young Children's Private Speech Profiles: Analogical Versus Nonanalogical Reasoners.

    ERIC Educational Resources Information Center

    Manning, Brenda H.; White, C. Stephen

    The primary intention of this study was to compare private speech profiles of young children classified as analogical reasoners (AR) with young children classified as nonanalogical reasoners (NAR). The secondary purpose was to investigate Berk's (1986) research methodology and categorical scheme for the collection and coding of private speech…

  6. Transitioning from Analog to Digital Audio Recording in Childhood Speech Sound Disorders

    ERIC Educational Resources Information Center

    Shriberg, Lawrence D.; Mcsweeny, Jane L.; Anderson, Bruce E.; Campbell, Thomas F.; Chial, Michael R.; Green, Jordan R.; Hauner, Katherina K.; Moore, Christopher A.; Rusiewicz, Heather L.; Wilson, David L.

    2005-01-01

    Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing…

  7. Cultivating American- and Japanese-Style Relatedness through Mother-Child Conversation

    ERIC Educational Resources Information Center

    Crane, Lauren Shapiro; Fernald, Anne

    2017-01-01

    This study investigated whether European American and Japanese mothers' speech to preschoolers contained exchange- and alignment-oriented structures that reflect and possibly support culture-specific models of self-other relatedness. In each country 12 mothers were observed in free play with their 3-year-olds. Maternal speech was coded for…

  8. Freedom of Speech Wins in Wisconsin

    ERIC Educational Resources Information Center

    Downs, Donald Alexander

    2006-01-01

    One might derive, from the eradication of a particularly heinous speech code, some encouragement that all is not lost in the culture wars. A core of dedicated scholars, working from within, made it obvious, to all but the most radical left, that imposing social justice by restricting thought and expression was a recipe for tyranny. Donald…

  9. Preliminary Analysis of Automatic Speech Recognition and Synthesis Technology.

    DTIC Science & Technology

    1983-05-01

    ...INDUSTRIAL/MILITARY SPEECH SYNTHESIS PRODUCTS...sequence. The SC-01 Speech Synthesizer contains 64 different phonemes which are accessed by a 6-bit code; the proper sequential combinations of those...connected speech input with widely differing emotional states, diverse accents, and substantial nonperiodic background noise input. As noted previously

  10. MILCOM '85 - Military Communications Conference, Boston, MA, October 20-23, 1985, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    The present conference on the development status of communications systems in the context of electronic warfare gives attention to topics in spread spectrum code acquisition, digital speech technology, fiber-optics communications, free space optical communications, the networking of HF systems, and applications and evaluation methods for digital speech. Also treated are issues in local area network system design, coding techniques and applications, technology applications for HF systems, receiver technologies, software development status, channel simulation/prediction methods, C3 networking, spread spectrum networks, the improvement of communication efficiency and reliability through technical control methods, mobile radio systems, and adaptive antenna arrays. Finally, communications system cost analyses, spread spectrum performance, voice and image coding, switched networks, and microwave GaAs ICs are considered.

  11. Speech acts and performances of scientific citizenship: Examining how scientists talk about therapeutic cloning.

    PubMed

    Marks, Nicola J

    2014-07-01

    Scientists play an important role in framing public engagement with science. Their language can facilitate or impede particular interactions taking place with particular citizens: scientists' "speech acts" can "perform" different types of "scientific citizenship". This paper examines how scientists in Australia talked about therapeutic cloning during interviews and during the 2006 parliamentary debates on stem cell research. Some avoided complex labels, thereby facilitating public examination of this field. Others drew on language that only opens a space for publics to become educated, not to participate in a more meaningful way. Importantly, public utterances made by scientists here contrast with common international utterances: they did not focus on the therapeutic but the research promises of therapeutic cloning. Social scientists need to pay attention to the performative aspects of language in order to promote genuine citizen involvement in techno-science. Speech Act Theory is a useful analytical tool for this.

  12. Speech-Language Services in Public Schools: How Policy Ambiguity Regarding Eligibility Criteria Impacts Speech-Language Pathologists in a Litigious and Resource Constrained Environment

    ERIC Educational Resources Information Center

    Sylvan, Lesley

    2014-01-01

    Public school districts must determine which students are eligible to receive special education and related services under the Individuals with Disabilities Education Act (IDEA). This study, which involves 39 interviews with speech-language pathologists and school administrators, examines how eligibility recommendations are made for one widely…

  13. Moral Blow to the Marine Corps: The Repeal of the Don’t Ask Don’t Tell Policy

    DTIC Science & Technology

    2011-04-08

    else individually has to accept homosexual acts as acceptable (based on freedom of speech and freedom of religion). The DOD report stated that of...free speech rights being curtailed would lead them to withdraw their endorsement. The issue here is really freedom of speech. Chaplains already...effects regardless of the intent. Freedom of speech and religion are our most important rights as Americans. By trying to protect one group are we

  14. IEP goals for school-age children with speech sound disorders.

    PubMed

    Farquharson, Kelly; Tambyraja, Sherine R; Justice, Laura M; Redle, Erin E

    2014-01-01

    The purpose of the current study was to describe the current state of practice for writing Individualized Education Program (IEP) goals for children with speech sound disorders (SSDs). IEP goals for 146 children receiving services for SSDs within public school systems across two states were coded for their dominant theoretical framework and overall quality. A dichotomous scheme was used for theoretical framework coding: cognitive-linguistic or sensory-motor. Goal quality was determined by examining 7 specific indicators outlined by an empirically tested rating tool. In total, 147 long-term and 490 short-term goals were coded. The results revealed no dominant theoretical framework for long-term goals, whereas short-term goals largely reflected a sensory-motor framework. In terms of quality, the majority of speech production goals were functional and generalizable in nature, but were not able to be easily targeted during common daily tasks or by other members of the IEP team. Short-term goals were consistently rated higher in quality domains when compared to long-term goals. The current state of practice for writing IEP goals for children with SSDs indicates that theoretical framework may be eclectic in nature and likely written to support the individual needs of children with speech sound disorders. Further investigation is warranted to determine the relations between goal quality and child outcomes. (1) Identify two predominant theoretical frameworks and discuss how they apply to IEP goal writing. (2) Discuss quality indicators as they relate to IEP goals for children with speech sound disorders. (3) Discuss the relationship between long-term goals level of quality and related theoretical frameworks. (4) Identify the areas in which business-as-usual IEP goals exhibit strong quality.

  15. Doctors' voices in patients' narratives: coping with emotions in storytelling.

    PubMed

    Lucius-Hoene, Gabriele; Thiele, Ulrike; Breuning, Martina; Haug, Stephanie

    2012-09-01

    To understand doctors' impacts on the emotional coping of patients, their stories about encounters with doctors are used. These accounts reflect meaning-making processes and biographically contextualized experiences. We investigate how patients characterize their doctors by voicing them in their stories, thus assigning them functions in their coping process. 394 narrated scenes with reported speech of doctors were extracted from interviews with 26 patients with type 2 diabetes and 30 with chronic pain. Constructed speech acts were investigated by means of positioning and narrative analysis, and assigned into thematic categories by a bottom-up coding procedure. Patients use narratives as coping strategies when confronted with illness and their encounters with doctors by constructing them in a supportive and face-saving way. In correspondence with the variance of illness conditions, differing moral problems in dealing with doctors arise. Different evaluative stances towards the same events within interviews show that positionings are not fixed, but vary according to contexts and purposes. Our narrative approach deepens the standardized and predominantly cognitive statements of questionnaires in research on doctor-patient relations by individualized emotional and biographical aspects of patients' perspective. Doctors should be trained to become aware of their impact in patients' coping processes.

  16. Evaluation of inner-outer space distinction and verbal hallucinations in schizophrenia.

    PubMed

    Stephane, Massoud; Kuskowski, Michael; McClannahan, Kate; Surerus, Christa; Nelson, Katie

    2010-09-01

    Verbal hallucinations could result from attributing one's own inner speech to another. Inner speech is usually experienced in inner space, whereas hallucinations are often experienced in outer space. To clarify this paradox, we investigated schizophrenia patients' ability to distinguish between speech experienced in inner space, and speech experienced in outer space. 32 schizophrenia patients and 26 matched healthy controls underwent a two-stage experiment. First, they read sentences aloud or silently. Afterwards, they were required to distinguish between the sentences read aloud (experienced in outer space), the sentences read silently (experienced in inner space), and new sentences not previously read (no space coding). The sentences were in the first, second, or third person in equal proportions. Linear mixed models were used to investigate the effects of group, sentence location, pronoun, and hallucinations status. Schizophrenia patients were similar to controls in recognition capacity of sentences without space coding. They exhibited both inner-outer and outer-inner space confusion (they confused silently read sentences for sentences read aloud, and vice versa). Patients who experienced hallucinations inside their head were more likely to have outer-inner space bias. For speech generated by one's own brain, schizophrenia patients have bidirectional failure of inner-outer space distinction (inner-outer and outer-inner space biases); this might explain why hallucinations (abnormal inner speech) could be experienced in outer space. Furthermore, the direction of inner-outer space indistinction could determine the spatial location of the experienced hallucinations (inside or outside the head).

  17. Curses in Acts: Hearing the Apostles’ Words of Judgment Alongside ‘Magical’ Spell Texts

    PubMed Central

    Kent, Benedict H. M.

    2017-01-01

    Scholars of Luke–Acts have struggled to define the apostles’ proclamations of judgment on those who threatened the early Christian community. Ananias and Sapphira (Acts 4.32–5.11), Simon magus (8.4-25) and Bar-Jesus (13.4-12) all fall victim to the apostles’ words of power, yet scholars have typically shied away from categorizing their speeches as curses. Close analysis of the structure, style, phonaesthetic and dramatic aspects of the Greek texts suggests, however, that Luke indeed intends the apostles’ speeches to be heard as curses whilst simultaneously presenting them as legitimate acts of power. A comparison with Greek and Coptic ‘magical’ texts helps to place the curses of Acts in the context of cursing traditions in the wider ancient Mediterranean world. PMID:29278250

  18. From In-Session Behaviors to Drinking Outcomes: A Causal Chain for Motivational Interviewing

    ERIC Educational Resources Information Center

    Moyers, Theresa B.; Martin, Tim; Houck, Jon M.; Christopher, Paulette J.; Tonigan, J. Scott

    2009-01-01

    Client speech in favor of change within motivational interviewing sessions has been linked to treatment outcomes, but a causal chain has not yet been demonstrated. Using a sequential behavioral coding system for client speech, the authors found that, at both the session and utterance levels, specific therapist behaviors predict client change talk.…

  19. Speech and Prosody Characteristics of Adolescents and Adults with High-Functioning Autism and Asperger Syndrome.

    ERIC Educational Resources Information Center

    Shriberg, Lawrence D.; Paul, Rhea; McSweeny, Jane L.; Klin, Ami; Cohen, Donald J.; Volkmar, Fred R.

    2001-01-01

    This study compared the speech and prosody-voice profiles for 30 male speakers with either high-functioning autism (HFA) or Asperger syndrome (AS), and 53 typically developing male speakers. Both HFA and AS groups had more residual articulation distortion errors and utterances coded as inappropriate for phrasing, stress, and resonance. AS speakers…

  20. 36 CFR 1192.61 - Public information system.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... COMPLIANCE BOARD AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY GUIDELINES FOR TRANSPORTATION VEHICLES... or digitized human speech messages, to announce stations and provide other passenger information... transportation system personnel, or recorded or digitized human speech messages, to announce train, route, or...

  1. Auditory Support in Linguistically Diverse Classrooms: Factors Related to Bilingual Text-to-Speech Use

    ERIC Educational Resources Information Center

    Van Laere, E.; Braak, J.

    2017-01-01

    Text-to-speech technology can act as an important support tool in computer-based learning environments (CBLEs) as it provides auditory input, next to on-screen text. Particularly for students who use a language at home other than the language of instruction (LOI) applied at school, text-to-speech can be useful. The CBLE E-Validiv offers content in…

  2. Summer 1967 Clinics for Speech Handicapped Children. Evaluation of New York City Title I Educational Projects 1966-67.

    ERIC Educational Resources Information Center

    Fox, David J.; And Others

    Individualized and intensive daily therapy was provided to 870 New York City pupils with severe speech handicaps in this summer program funded by the Elementary and Secondary Education Act, Title I. The evaluation focuses on pupil's progress in correction of speech problems, the effectiveness of the clinical methods, the reactions of the staff and…

  3. The Impact of Interrupted Use of a Speech Generating Device on the Communication Acts of a Child with Autism Spectrum Disorder: A Case Study

    ERIC Educational Resources Information Center

    Neeley, Richard A.; Pulliam, Mary Hannah; Catt, Merrill; McDaniel, D. Mike

    2015-01-01

    This case study examined the initial and renewed impact of speech generating devices on the expressive communication behaviors of a child with autism spectrum disorder. The study spanned six years of interrupted use of two speech generating devices. The child's communication behaviors were analyzed from video recordings and included communication…

  4. Effect of Acting Experience on Emotion Expression and Recognition in Voice: Non-Actors Provide Better Stimuli than Expected.

    PubMed

    Jürgens, Rebecca; Grass, Annika; Drolet, Matthis; Fischer, Julia

    Both in the performative arts and in emotion research, professional actors are assumed to be capable of delivering emotions comparable to spontaneous emotional expressions. This study examines the effects of acting training on vocal emotion depiction and recognition. We predicted that professional actors express emotions in a more realistic fashion than non-professional actors. However, professional acting training may lead to a particular speech pattern; this might account for vocal expressions by actors that are less comparable to authentic samples than the ones by non-professional actors. We compared 80 emotional speech tokens from radio interviews with 80 re-enactments by professional and inexperienced actors, respectively. We analyzed recognition accuracies for emotion and authenticity ratings and compared the acoustic structure of the speech tokens. Both play-acted conditions yielded similar recognition accuracies and possessed more variable pitch contours than the spontaneous recordings. However, professional actors exhibited signs of different articulation patterns compared to non-trained speakers. Our results indicate that for emotion research, emotional expressions by professional actors are not better suited than those from non-actors.

  5. New cochlear implant research coding strategy based on the MP3(000™) strategy to reintroduce the virtual channel effect.

    PubMed

    Neben, Nicole; Lenarz, Thomas; Schuessler, Mark; Harpel, Theo; Buechner, Andreas

    2013-05-01

    Results for speech recognition in noise tests when using a new research coding strategy designed to introduce the virtual channel effect provided no advantage over MP3(000™). Although statistically significant smaller just noticeable differences (JNDs) were obtained, the findings for pitch ranking proved to have little clinical impact. The aim of this study was to explore whether modifications to MP3000 by including sequential virtual channel stimulation would lead to further improvements in hearing, particularly for speech recognition in background noise and in competing-talker conditions, and to compare results for pitch perception and melody recognition, as well as informally collect subjective impressions on strategy preference. Nine experienced cochlear implant subjects were recruited for the prospective study. Two variants of the experimental strategy were compared to MP3000. The study design was a single-blinded ABCCBA cross-over trial paradigm with 3 weeks of take-home experience for each user condition. Comparing results of pitch-ranking, a significantly reduced JND was identified. No significant effect of coding strategy on speech understanding in noise or competing-talker materials was found. Melody recognition skills were the same under all user conditions.

  6. Neural mechanisms underlying auditory feedback control of speech

    PubMed Central

    Reilly, Kevin J.; Guenther, Frank H.

    2013-01-01

    The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557

  7. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    NASA Astrophysics Data System (ADS)

    Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan

    2005-12-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
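
    The analyze-quantize-invert pipeline described here can be sketched in a few lines. The following is a minimal illustration only: it substitutes a trivially invertible FFT for the paper's invertible auditory model and uses coarse uniform quantization; all names and parameters are hypothetical.

    ```python
    import numpy as np

    def encode(signal, n_bits=6, frame=256):
        """Analysis stage: transform each frame into an invertible
        frequency-domain representation, then uniformly quantize it.
        (A real system would use an invertible auditory model here.)"""
        frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
        spec = np.fft.rfft(frames, axis=1)      # invertible transform
        scale = np.abs(spec).max()
        levels = 2 ** (n_bits - 1)
        q = np.round(spec / scale * levels)     # coarse uniform quantization
        return q, scale, levels

    def decode(q, scale, levels, frame=256):
        """Synthesis stage: dequantize and invert the transform."""
        spec = q / levels * scale
        return np.fft.irfft(spec, n=frame, axis=1).ravel()

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1024)
    q, scale, levels = encode(x)
    y = decode(q, scale, levels)
    snr = 10 * np.log10(np.sum(x**2) / np.sum((x - y)**2))  # fidelity in dB
    ```

    The point of the inversion step is exactly as the abstract describes: quantization happens in the transformed (perceptual) domain with a simple distortion criterion, and the decoder maps the coded representation back to the acoustic domain.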

  8. 49 CFR 38.61 - Public information system.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Transportation Office of the Secretary of Transportation AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY... transportation system personnel, or recorded or digitized human speech messages, to announce stations and provide... public address system to permit transportation system personnel, or recorded or digitized human speech...

  9. Speech research: Studies on the nature of speech, instrumentation for its investigation, and practical applications

    NASA Astrophysics Data System (ADS)

    Liberman, A. M.

    1982-03-01

    This report is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation and practical applications. Manuscripts cover the following topics: Speech perception and memory coding in relation to reading ability; The use of orthographic structure by deaf adults: Recognition of finger-spelled letters; Exploring the information support for speech; The stream of speech; Using the acoustic signal to make inferences about place and duration of tongue-palate contact; Patterns of human interlimb coordination emerge from the properties of nonlinear limit cycle oscillatory processes: Theory and data; Motor control: Which themes do we orchestrate?; Exploring the nature of motor control in Down's syndrome; Periodicity and auditory memory: A pilot study; Reading skill and language skill: On the role of sign order and morphological structure in memory for American Sign Language sentences; Perception of nasal consonants with special reference to Catalan; and Speech production characteristics of the hearing impaired.

  10. Smart command recognizer (SCR) - For development, test, and implementation of speech commands

    NASA Technical Reports Server (NTRS)

    Simpson, Carol A.; Bunnell, John W.; Krones, Robert R.

    1988-01-01

    The SCR, a rapid prototyping system for the development, testing, and implementation of speech commands in a flight simulator or test aircraft, is described. A single unit performs all functions needed during these three phases of system development, while the use of common software and speech command data structure files greatly reduces the preparation time for successive development phases. As a smart peripheral to a simulation or flight host computer, the SCR interprets the pilot's spoken input and passes command codes to the simulation or flight computer.

  11. Contextual modulation of reading rate for direct versus indirect speech quotations.

    PubMed

    Yao, Bo; Scheepers, Christoph

    2011-12-01

    In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, the processing consequences of this distinction are largely unclear. In two experiments, participants were asked to either orally (Experiment 1) or silently (Experiment 2, eye-tracking) read written stories that contained either a direct speech or an indirect speech quotation. The context preceding those quotations described a situation that implied either a fast-speaking or a slow-speaking quoted protagonist. It was found that this context manipulation affected reading rates (in both oral and silent reading) for direct speech quotations, but not for indirect speech quotations. This suggests that readers are more likely to engage in perceptual simulations of the reported speech act when reading direct speech as opposed to meaning-equivalent indirect speech quotations, as part of a more vivid representation of the former. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Levels of Code Switching on EFL Student's Daily Language; Study of Language Production

    ERIC Educational Resources Information Center

    Zainuddin

    2016-01-01

    This study is aimed at describing the levels of code switching in EFL students' daily conversation. The topic was chosen due to the fact that code-switching phenomena are commonly found in the daily speech of the Indonesian community, such as in teenager talk, television serial dialogues and mass media. Therefore, qualitative data were collected by using…

  13. Authentic and Play-Acted Vocal Emotion Expressions Reveal Acoustic Differences

    PubMed Central

    Jürgens, Rebecca; Hammerschmidt, Kurt; Fischer, Julia

    2011-01-01

    Play-acted emotional expressions are a frequent aspect in our life, ranging from deception to theater, film, and radio drama, to emotion research. To date, however, it remained unclear whether play-acted emotions correspond to spontaneous emotion expressions. To test whether acting influences the vocal expression of emotion, we compared radio sequences of naturally occurring emotions to actors’ portrayals. It was hypothesized that play-acted expressions were performed in a more stereotyped and aroused fashion. Our results demonstrate that speech segments extracted from play-acted and authentic expressions differ in their voice quality. Additionally, the play-acted speech tokens revealed a more variable F0-contour. Despite these differences, the results did not support the hypothesis that the variation was due to changes in arousal. This analysis revealed that differences in perception of play-acted and authentic emotional stimuli reported previously cannot simply be attributed to differences in arousal, but rather to slight and implicitly perceptible differences in encoding. PMID:21847385

  14. What Should Be the Unique Application of Teaching in Departments of Speech Communication Located in Urban Environments?

    ERIC Educational Resources Information Center

    Galvin, Kathleen M.

    This paper focuses on certain approaches which an urban speech department can use as it contributes to the preparation of urban school teachers to communicate effectively with their students. The contents include: "Verbal and Nonverbal Codes," which discusses the teacher as an encoder of verbal messages and emphasizes that teachers must learn to…

  15. Hate Speech, the First Amendment, and Professional Codes of Conduct: Where to Draw the Line?

    ERIC Educational Resources Information Center

    Mello, Jeffrey A.

    2008-01-01

    This article presents a teaching case that involves the presentation of an actual incident in which a state commission on judicial performance had to balance a judge's First Amendment rights to protected free speech against his public statements about a societal class/group that were deemed to be derogatory and inflammatory and, hence, cast…

  16. Status report on speech research. A report on the status and progress of studies of the nature of speech, instrumentation for its investigation and practical applications

    NASA Astrophysics Data System (ADS)

    Studdert-Kennedy, M.; Obrien, N.

    1983-05-01

    This report is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. Manuscripts cover the following topics: The influence of subcategorical mismatches on lexical access; The Serbo-Croatian orthography constrains the reader to a phonologically analytic strategy; Grammatical priming effects between pronouns and inflected verb forms; Misreadings by beginning readers of Serbo-Croatian; Bi-alphabetism and word recognition; Orthographic and phonemic coding for word identification: Evidence for Hebrew; Stress and vowel duration effects on syllable recognition; Phonetic and auditory trading relations between acoustic cues in speech perception: Further results; Linguistic coding by deaf children in relation to beginning reading success; Determinants of spelling ability in deaf and hearing adults: Access to linguistic structures; A dynamical basis for action systems; On the space-time structure of human interlimb coordination; Some acoustic and physiological observations on diphthongs; Relationship between pitch control and vowel articulation; Laryngeal vibrations: A comparison between high-speed filming and glottographic techniques; Compensatory articulation in hearing impaired speakers: A cinefluorographic study; and Review (Pierre Delattre: Studies in comparative phonetics.)

  17. Fifty years of progress in speech coding standards

    NASA Astrophysics Data System (ADS)

    Cox, Richard

    2004-10-01

    Over the past 50 years, speech coding has taken root worldwide. Early applications were for the military and transmission for telephone networks. The military gave equal priority to intelligibility and low bit rate. The telephone network gave priority to high quality and low delay. These illustrate three of the four areas in which requirements must be set for any speech coder application: bit rate, quality, delay, and complexity. While the military could afford relatively expensive terminal equipment for secure communications, the telephone network needed low cost for massive deployment in switches and transmission equipment worldwide. Today speech coders are at the heart of the wireless phones and telephone answering systems we use every day. In addition to the technology and technical invention that has occurred, standards make it possible for all these different systems to interoperate. The primary areas of standardization are the public switched telephone network, wireless telephony, and secure telephony for government and military applications. With the advent of IP telephony there are additional standardization efforts and challenges. In this talk the progress in all areas is reviewed as well as a reflection on Jim Flanagan's impact on this field during the past half century.

  18. Influence of musical training on understanding voiced and whispered speech in noise.

    PubMed

    Ruggles, Dorea R; Freyman, Richard L; Oxenham, Andrew J

    2014-01-01

    This study tested the hypothesis that the previously reported advantage of musicians over non-musicians in understanding speech in noise arises from more efficient or robust coding of periodic voiced speech, particularly in fluctuating backgrounds. Speech intelligibility was measured in listeners with extensive musical training, and in those with very little musical training or experience, using normal (voiced) or whispered (unvoiced) grammatically correct nonsense sentences in noise that was spectrally shaped to match the long-term spectrum of the speech, and was either continuous or gated with a 16-Hz square wave. Performance was also measured in clinical speech-in-noise tests and in pitch discrimination. Musicians exhibited enhanced pitch discrimination, as expected. However, no systematic or statistically significant advantage for musicians over non-musicians was found in understanding either voiced or whispered sentences in either continuous or gated noise. Musicians also showed no statistically significant advantage in the clinical speech-in-noise tests. Overall, the results provide no evidence for a significant difference between young adult musicians and non-musicians in their ability to understand speech in noise.

  19. Provider-patient adherence dialogue in HIV care: results of a multisite study.

    PubMed

    Laws, M Barton; Beach, Mary Catherine; Lee, Yoojin; Rogers, William H; Saha, Somnath; Korthuis, P Todd; Sharp, Victoria; Wilson, Ira B

    2013-01-01

    Few studies have analyzed physician-patient adherence dialogue about ARV treatment in detail. We comprehensively describe physician-patient visits in HIV care, focusing on ARV-related dialogue, using a system that assigns each utterance both a topic code and a speech act code. Observational study using audio recordings of routine outpatient visits by people with HIV at specialty clinics. Providers were 34 physicians and 11 non-M.D. practitioners. Of 415 patients, 66% were male, 59% African-American. 78% reported currently taking ARVs. About 10% of utterances concerned ARV treatment. Among those using ARVs, 15% had any adherence problem solving dialogue. ARV problem solving talk included significantly more directives and control parameter utterances by providers than other topics. Providers were verbally dominant, asked five times as many questions as patients, and made 21 times as many directive utterances. Providers asked few open questions, and rarely checked patients' understanding. Physicians respond to the challenges of caring for patients with HIV by adopting a somewhat physician-centered approach which is particularly evident in discussions about ARV adherence.
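
    The dual annotation scheme used here, where each utterance carries both a topic code and a speech act code, lends itself to simple tabulation. A toy sketch with entirely hypothetical codes and records (not the study's actual GMIAS categories):

    ```python
    from collections import Counter

    # Hypothetical utterance records: (speaker, topic_code, speech_act_code)
    utterances = [
        ("provider", "ARV", "question_open"),
        ("provider", "ARV", "directive"),
        ("patient",  "ARV", "give_info"),
        ("provider", "other", "give_info"),
        ("patient",  "other", "question_closed"),
        ("provider", "ARV", "directive"),
    ]

    # Tally speech acts per speaker, then derive simple summary measures
    acts = Counter((spk, act) for spk, _, act in utterances)
    arv_share = sum(t == "ARV" for _, t, _ in utterances) / len(utterances)
    directive_ratio = (acts[("provider", "directive")]
                       / max(1, acts[("patient", "directive")]))
    ```

    Counts like these underlie the summary measures reported above, such as the proportion of talk devoted to ARV treatment and the ratio of provider to patient directive utterances.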

  20. Declarations, accusations and judgement: examining conflict of interest discourses as performative speech-acts.

    PubMed

    Mayes, Christopher; Lipworth, Wendy; Kerridge, Ian

    2016-09-01

    Concerns over conflicts of interest (COI) in academic research and medical practice continue to provoke a great deal of discussion. What is most obvious in this discourse is that when COIs are declared, or perceived to exist in others, there is a focus on both the descriptive question of whether there is a COI and, subsequently, the normative question of whether it is good, bad or neutral. We contend, however, that in addition to the descriptive and normative, COI declarations and accusations can be understood as performatives. In this article, we apply J.L. Austin's performative speech-act theory to COI discourses and illustrate how this works using a contemporary case study of COI in biomedical publishing. We argue that using Austin's theory of performative speech-acts serves to highlight the social arrangements and role of authorities in COI discourse and so provides a rich framework to examine declarations, accusations and judgements of COI that often arise in the context of biomedical research and practice.

  1. A Coding System with Independent Annotations of Gesture Forms and Functions during Verbal Communication: Development of a Database of Speech and GEsture (DoSaGE)

    PubMed Central

    Kong, Anthony Pak-Hin; Law, Sam-Po; Kwan, Connie Ching-Yin; Lai, Christy; Lam, Vivian

    2014-01-01

    Gestures are commonly used together with spoken language in human communication. One major limitation of gesture investigations in the existing literature lies in the fact that the coding of forms and functions of gestures has not been clearly differentiated. This paper first described a recently developed Database of Speech and GEsture (DoSaGE) based on independent annotation of gesture forms and functions among 119 neurologically unimpaired right-handed native speakers of Cantonese (divided into three age and two education levels), and presented findings of an investigation examining how gesture use was related to age and linguistic performance. Consideration of these two factors, for which normative data are currently very limited or lacking in the literature, is relevant and necessary when one evaluates gesture employment among individuals with and without language impairment. Three speech tasks, including monologue of a personally important event, sequential description, and story-telling, were used for elicitation. The EUDICO Linguistic ANnotator (ELAN) software was used to independently annotate each participant’s linguistic information of the transcript, forms of gestures used, and the function for each gesture. About one-third of the subjects did not use any co-verbal gestures. While the majority of gestures were non-content-carrying, which functioned mainly for reinforcing speech intonation or controlling speech flow, the content-carrying ones were used to enhance speech content. Furthermore, individuals who are younger or linguistically more proficient tended to use fewer gestures, suggesting that normal speakers gesture differently as a function of age and linguistic performance. PMID:25667563

  2. A recursive linear predictive vocoder

    NASA Astrophysics Data System (ADS)

    Janssen, W. A.

    1983-12-01

    A non-real time 10 pole recursive autocorrelation linear predictive coding vocoder was created for use in studying effects of recursive autocorrelation on speech. The vocoder is composed of two interchangeable pitch detectors, a speech analyzer, and a speech synthesizer. The time between updating filter coefficients is allowed to vary from .125 msec to 20 msec. The best quality was found using .125 msec between each update. The greatest change in quality was noted when changing from 20 msec/update to 10 msec/update. Pitch period plots for the center clipping autocorrelation pitch detector and simplified inverse filtering technique are provided. Plots of speech into and out of the vocoder are given. Formant versus time three dimensional plots are shown. Effects of noise on pitch detection and formants are shown. Noise affects the voiced/unvoiced decision process, causing voiced speech to be reconstructed as unvoiced.
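
    The analysis side of such an LPC vocoder can be sketched as follows. This is a toy illustration, not the report's implementation: a 10-pole predictor is fit to a frame's autocorrelation, and the prediction residual (the excitation a synthesizer would re-create from pitch and voicing decisions) is extracted. All signals and parameters are synthetic.

    ```python
    import numpy as np

    def lpc_coeffs(frame, order=10):
        """Fit LPC coefficients via the autocorrelation normal equations
        (a production coder would use the Levinson-Durbin recursion)."""
        r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        return np.linalg.solve(R, r[1:order + 1])  # x[n] ~ sum a[k]*x[n-1-k]

    def residual(frame, a):
        """Prediction residual, i.e. the excitation left after inverse filtering."""
        order = len(a)
        pred = np.zeros_like(frame)
        for n in range(order, len(frame)):
            pred[n] = np.dot(a, frame[n - order:n][::-1])
        return frame - pred

    # Synthetic "voiced" frame: a resonant AR(2) process driven by pitch pulses
    exc = np.zeros(400)
    exc[::50] = 1.0
    x = np.zeros(400)
    for n in range(2, 400):
        x[n] = 1.6 * x[n - 1] - 0.9 * x[n - 2] + exc[n]

    a = lpc_coeffs(x, order=10)
    e = residual(x, a)   # residual energy is far below the frame energy
    ```

    The residual is nearly the sparse pulse train that excited the filter, which is why transmitting only filter coefficients plus pitch/voicing information achieves the low bit rates such vocoders target.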

  3. Degraded neural and behavioral processing of speech sounds in a rat model of Rett syndrome

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Borland, Michael S.; Buell, Elizabeth P.; Centanni, Tracy M.; Fink, Melyssa K.; Im, Kwok W.; Wilson, Linda G.; Kilgard, Michael P.

    2015-01-01

    Individuals with Rett syndrome have greatly impaired speech and language abilities. Auditory brainstem responses to sounds are normal, but cortical responses are highly abnormal. In this study, we used the novel rat Mecp2 knockout model of Rett syndrome to document the neural and behavioral processing of speech sounds. We hypothesized that both speech discrimination ability and the neural response to speech sounds would be impaired in Mecp2 rats. We expected that extensive speech training would improve speech discrimination ability and the cortical response to speech sounds. Our results reveal that speech responses across all four auditory cortex fields of Mecp2 rats were hyperexcitable, slower, and less able to follow rapidly presented sounds. While Mecp2 rats could accurately perform consonant and vowel discrimination tasks in quiet, they were significantly impaired at speech sound discrimination in background noise. Extensive speech training improved discrimination ability. Training shifted cortical responses in both Mecp2 and control rats to favor the onset of speech sounds. While training increased the response to low frequency sounds in control rats, the opposite occurred in Mecp2 rats. Although neural coding and plasticity are abnormal in the rat model of Rett syndrome, extensive therapy appears to be effective. These findings may help to explain some aspects of communication deficits in Rett syndrome and suggest that extensive rehabilitation therapy might prove beneficial. PMID:26321676

  4. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples--presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.
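
    For context, the likelihood computation at the heart of HMM-based recognition, which the abstract contrasts with articulatorily constrained models, is the forward recursion. A toy discrete-observation example with made-up two-state numbers:

    ```python
    import numpy as np

    def forward(obs, pi, A, B):
        """Forward algorithm: total likelihood of an observation sequence
        under a discrete HMM with initial probs pi, transitions A, emissions B."""
        alpha = pi * B[:, obs[0]]           # joint prob of state and first symbol
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        return alpha.sum()

    # Hypothetical two-state model, purely for illustration
    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
    B = np.array([[0.9, 0.1],    # state 0 mostly emits symbol 0
                  [0.2, 0.8]])   # state 1 mostly emits symbol 1
    p = forward([0, 1, 0], pi, A, B)
    ```

    Training such a model (e.g., with Baum-Welch) estimates pi, A, and B from data, which is the data-driven property the abstract credits for HMMs' robustness.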

  5. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    NASA Astrophysics Data System (ADS)

    Riera-Palou, Felip; den Brinker, Albertus C.

    2007-12-01

    This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality bit rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of the bit stream scalability does not come at the price of a reduced performance since the coder is competitive with standardized coders (MP3, AAC, SSC).

  6. Can bilingual two-year-olds code-switch?

    PubMed

    Lanza, E

    1992-10-01

    Sociolinguists have investigated language mixing as code-switching in the speech of bilingual children three years old and older. Language mixing by bilingual two-year-olds, however, has generally been interpreted in the child language literature as a sign of the child's lack of language differentiation. The present study applies perspectives from sociolinguistics to investigate the language mixing of a bilingual two-year-old acquiring Norwegian and English simultaneously in Norway. Monthly recordings of the child's spontaneous speech in interactions with her parents were made from the age of 2;0 to 2;7. An investigation into the formal aspects of the child's mixing and the context of the mixing reveals that she does differentiate her language use in contextually sensitive ways, hence that she can code-switch. This investigation stresses the need to examine more carefully the roles of dominance and context in the language mixing of young bilingual children.

  7. Revisiting place and temporal theories of pitch

    PubMed Central

    2014-01-01

    The nature of pitch and its neural coding have been studied for over a century. A popular debate has revolved around the question of whether pitch is coded via “place” cues in the cochlea, or via timing cues in the auditory nerve. In the most recent incarnation of this debate, the role of temporal fine structure has been emphasized in conveying important pitch and speech information, particularly because the lack of temporal fine structure coding in cochlear implants might explain some of the difficulties faced by cochlear implant users in perceiving music and pitch contours in speech. In addition, some studies have postulated that hearing-impaired listeners may have a specific deficit related to processing temporal fine structure. This article reviews some of the recent literature surrounding the debate, and argues that much of the recent evidence suggesting the importance of temporal fine structure processing can also be accounted for using spectral (place) or temporal-envelope cues. PMID:25364292

  8. A 4.8 kbps code-excited linear predictive coder

    NASA Technical Reports Server (NTRS)

    Tremain, Thomas E.; Campbell, Joseph P., Jr.; Welch, Vanoy C.

    1988-01-01

    A secure voice system (STU-3) capable of providing end-to-end secure voice communications was developed in 1984. The terminal for the new system will be built around the standard LPC-10 voice processor algorithm. Although the performance of the present STU-3 processor is considered good, its response to nonspeech sounds such as whistles, coughs, and impulse-like noises may not be completely acceptable. Speech in noisy environments also causes problems for the LPC-10 voice algorithm. In addition, there is always a demand for something better. It is hoped that LPC-10's 2.4 kbps voice performance will be complemented by a very high quality speech coder operating at a higher data rate. This new coder is one of a number of candidate algorithms being considered for an upgraded version of the STU-3 in late 1989. The paper considers the problems of designing a code-excited linear predictive (CELP) coder that provides very high quality speech at a 4.8 kbps data rate and can be implemented on today's hardware.

  9. Digital Coding and the Self-Proving Message

    ERIC Educational Resources Information Center

    Dettering, Richard

    1971-01-01

    The author suggests that "digital communication," which relies on arbitrary coding elements like the phones of speech, overshadows the importance of the analogic symbolism people use more extensively than realized. Non-verbal messages can be more convincing than verbal ones and can be used to predict patterns of future behavior. (Author/PD)

  10. From the analysis of verbal data to the analysis of organizations: organizing as a dialogical process.

    PubMed

    Lorino, Philippe

    2014-12-01

    The analysis of conversational turn-taking and its implications on time (the speaker cannot completely anticipate the future effects of her/his speech) and sociality (the speech is co-produced by the various speakers rather than by the speaking individual) can provide a useful basis to analyze complex organizing processes and collective action: the actor cannot completely anticipate the future effects of her/his acts and the act is co-produced by multiple actors. This translation from verbal to broader classes of interaction stresses the performativity of speeches, the importance of the situation, the role of semiotic mediations to make temporally and spatially distant "ghosts" present in the dialog, and the dissymmetrical relationship between successive conversational turns, due to temporal irreversibility.

  11. The Typology and Function of Private Speech in a Young Man with Intellectual Disabilities: An Empirical Case Study

    ERIC Educational Resources Information Center

    Lechler, Suzanne; Hare, Dougal Julian

    2015-01-01

    A naturalistic observational single case study was carried out to investigate the form and function of private speech (PS) in a young man with Dandy-Walker variant syndrome and trisomy 22. Video recordings were observed, transcribed and coded to identify all combinations of type and form of PS. Through comparison between theories of PS and the…

  12. Effects of Voice Coding and Speech Rate on a Synthetic Speech Display in a Telephone Information System

    DTIC Science & Technology

    1988-05-01

    Only fragmentary indexing text is available for this report. Recoverable content includes a figure caption, "Original limited-capacity channel model (From Broadbent, 1958)," and a note that analysis-synthesis methods electronically model the human voice, in contrast to digital recording, which permits an unlimited variety of human voices as sources.

  13. Sensory Information Processing

    DTIC Science & Technology

    1975-12-31

    Only table-of-contents fragments are available for this report, including: "Synthetic Speech Quality Using Binaural Reverberation" (Boll); "Noise Suppression with Linear Prediction Filtering" (Peterson); "Speech Processing to Reduce Noise and Improve Intelligibility" (Callahan); and "Linear Predictive Coding with a Glottal [title truncated]."

  14. Multiparticipant Chat Analysis: A Survey

    DTIC Science & Technology

    2013-02-26

    Only indexing fragments are available for this survey. They indicate coverage of language variation in chat (e.g., regional speech in Germany [6]; code-switching in German-speaking regions of Switzerland [84] and in Indian IRC channels [77]), of messages that may be missed in high-tempo situations [19], and of automated analysis of chat messages [13]. A cited work is Androutsopoulos and Ziegler, "Exploring language variation on the internet: Regional speech in a chat community."

  15. Cross-language Activation and the Phonetics of Code-switching

    NASA Astrophysics Data System (ADS)

    Piccinini, Page Elizabeth

    It is now well established that bilinguals have both languages activated to some degree at all times. This cross-language activation has been documented in several research paradigms, including picture naming, reading, and electrophysiological studies. What is less well understood is how the degree to which a language is activated can vary in different language environments or contexts. Furthermore, when investigating effects of order of acquisition and language dominance, past research has been mixed, as the two variables are often conflated. In this dissertation, I test how the degree of cross-language activation varies with context by examining phonetic productions in code-switching speech. Both spontaneous speech and scripted speech are analyzed. Follow-up perception experiments are conducted to see if listeners are able to anticipate language switches, potentially due to the phonetic cues in the signal. Additionally, by focusing on early bilinguals who are L1 Spanish but English dominant, I am able to see what plays a greater role in cross-language activation, order of acquisition or language dominance. I find that speakers do have intermediate phonetic productions in code-switching contexts relative to monolingual contexts. Effects are larger and more consistent in English than in Spanish. Similar effects are found in speech perception. Listeners are able to anticipate language switches from English to Spanish but not from Spanish to English. Together these results suggest that language dominance is a more important factor than order of acquisition in cross-language activation for early bilinguals. Future models of bilingual language organization and access should take into account both context and language dominance when modeling degrees of cross-language activation.

  16. Discovering Communicative Competencies in a Nonspeaking Child with Autism

    ERIC Educational Resources Information Center

    Stiegler, Lillian N.

    2007-01-01

    Purpose: This article is intended to demonstrate that adapted conversation analysis (CA) and speech act analysis (SAA) may be applied by speech-language pathologists (SLPs) to (a) identify communicative competencies in nonspeaking children with autism spectrum disorder (ASD), especially during particularly successful interactions, and (b) identify…

  17. CETA Vocational Linkage.

    ERIC Educational Resources Information Center

    Campbell-Thrane, Lucille

    An overview of cooperation between CETA (Comprehensive Employment and Training Act) and vocational education is presented in this speech, including a look at data on legislation, history, and funding sources. In light of CETA legislation's specificity on how local sponsors are to work with vocational educators, the speech gives excerpts and…

  18. 42 CFR 485.701 - Basis and scope.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Therapy and Speech-Language Pathology Services § 485.701 Basis and scope. This subpart implements section 1861(p)(4) of the Act, which— (a) Defines outpatient physical therapy and speech pathology services; (b... records; and (c) Authorizes the Secretary to establish by regulation other health and safety requirements...

  19. Results using the OPAL strategy in Mandarin speaking cochlear implant recipients.

    PubMed

    Vandali, Andrew E; Dawson, Pam W; Arora, Komal

    2017-01-01

    To evaluate the effectiveness of an experimental pitch-coding strategy for improving recognition of Mandarin lexical tone in cochlear implant (CI) recipients, adult CI recipients were tested on recognition of Mandarin tones in quiet and in speech-shaped noise at a signal-to-noise ratio of +10 dB; on Mandarin sentence speech-reception threshold (SRT) in speech-shaped noise; and on pitch discrimination of synthetic complex-harmonic tones in quiet. Two versions of the experimental strategy were examined: OPAL, a linear (1:1) mapping of fundamental frequency (F0) to the coded modulation rate; and OPAL+, a transposed mapping of high F0s to a lower coded rate. Outcomes were compared to results using the clinical ACE™ strategy. Participants were five Mandarin-speaking users of Nucleus® cochlear implants. A small but significant benefit in recognition of lexical tones was observed using OPAL compared to ACE in noise, but not in quiet, and not for OPAL+ compared to ACE or OPAL in quiet or noise. Sentence SRTs were significantly better using OPAL+ than ACE, and comparable between OPAL and ACE. No differences in pitch discrimination thresholds were observed across strategies. OPAL can provide benefits to Mandarin lexical tone recognition in moderately noisy conditions and preserve perception of Mandarin sentences in challenging noise conditions.
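
    The two F0-to-rate mappings contrasted above can be sketched as follows. This is a toy illustration only: the 300 Hz rate ceiling and the octave-wise transposition are assumptions made for the sketch, not parameters of the actual OPAL/OPAL+ strategies.

```python
def coded_rate(f0_hz, max_rate_hz=300.0, transpose=False):
    """Toy F0-to-modulation-rate mapping in the spirit of the OPAL/OPAL+
    description above (parameter values are illustrative assumptions).

    - linear mode (OPAL-like): the coded rate tracks F0 one-to-one.
    - transposed mode (OPAL+-like): F0s above `max_rate_hz` are shifted
      down by octaves until they fall at or below the ceiling."""
    rate = f0_hz
    if transpose:
        while rate > max_rate_hz:
            rate /= 2.0  # transpose down an octave
    return rate
```

    For example, a 220 Hz F0 is coded at 220 Hz in both modes, while a 400 Hz F0 stays at 400 Hz in linear mode but is transposed to 200 Hz in transposed mode.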

  20. Psychoacoustic cues to emotion in speech prosody and music.

    PubMed

    Coutinho, Eduardo; Dibben, Nicola

    2013-01-01

    There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.

  1. Spectral analysis method and sample generation for real time visualization of speech

    NASA Astrophysics Data System (ADS)

    Hobohm, Klaus

    A method for translating speech signals into optical patterns, characterized by high sound discriminability and learnability and designed to give deaf persons feedback for controlling their own speech, is presented. Important properties of the speech production and perception processes, and of the organs involved in these mechanisms, are recalled in order to define requirements for speech visualization. It is established that the spectral representation must do justice to the time, frequency, and amplitude resolution of hearing, and that continuous variations in the acoustic parameters of the speech signal must be depicted by continuous variations of the images. A color table was developed for dynamic illustration, and sonograms were generated with five spectral analysis methods, including Fourier transformation and linear predictive coding. To evaluate sonogram quality, test persons had to recognize consonant/vowel/consonant words; an optimized analysis method was achieved with a fast Fourier transformation and a postprocessor. A hardware concept for a real-time speech visualization system, based on multiprocessor technology in a personal computer, is presented.
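
    A sonogram of the kind described is built from short-time magnitude spectra of overlapping frames. The sketch below uses a dependency-free direct DFT (the work itself used FFT-based analysis, and the frame and hop sizes here are arbitrary illustrative choices); mapping the magnitudes onto a color table is assumed to happen downstream.

```python
import cmath


def magnitude_spectrum(frame):
    """Magnitude of the discrete Fourier transform of one speech frame,
    i.e. one column of a sonogram. A direct O(N^2) DFT keeps the sketch
    dependency-free."""
    n = len(frame)
    spectrum = []
    for k in range(n // 2 + 1):  # real input: keep non-negative bins
        acc = sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        spectrum.append(abs(acc))
    return spectrum


def sonogram(signal, frame_len=64, hop=32):
    """Sequence of magnitude spectra over overlapping frames; windowing
    and color mapping are omitted for brevity."""
    frames = (signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop))
    return [magnitude_spectrum(f) for f in frames]
```

    With a 64-sample frame and 32-sample hop, a 128-sample signal yields three overlapping spectra of 33 bins each.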

  2. Language, etayage et interactions therapeutiques: Actes du 5eme colloque d'orthophonie/logopedie (Language, Scaffolding and Therepeutic Interactions: Proceedings of the 5th Colloquium on Speech Therapy and Speech Pathology) (Neuchatel, Switzerland, September 25-26, 1998).

    ERIC Educational Resources Information Center

    Sovilla, J. Buttet, Ed.; de Weck, G., Ed.

    1998-01-01

    These articles on scaffolding in language and speech pathology/therapy are included in this issue: "Strategies d'etayage avec des enfants disphasiques: sont-elles specifiques?" ("Scaffolding Strategies for Dysphasic Children: Are They Specific?") (Genevieve de Weck); "Comparaison des strategies discursives d'etayage dans un conte et un recit…

  3. Constitutional restraints on the regulations of scientific speech and scientific research: commentary on "Democracy, individual rights and the regulation of science".

    PubMed

    Post, Robert

    2009-09-01

    The question of what constitutional constraints should apply to government efforts to regulate scientific speech is frequently contrasted to the question of what constitutional constraints should apply to government efforts to regulate scientific research. This comment argues that neither question is well formulated for constitutional analysis, which should instead turn on the relationship to constitutional values of specific acts of scientific speech and research.

  4. Examining the Role of Orthographic Coding Ability in Elementary Students with Previously Identified Reading Disability, Speech or Language Impairment, or Comorbid Language and Learning Disabilities

    ERIC Educational Resources Information Center

    Haugh, Erin Kathleen

    2017-01-01

    The purpose of this study was to examine the role orthographic coding might play in distinguishing between membership in groups of language-based disability types. The sample consisted of 36 second and third-grade subjects who were administered the PAL-II Receptive Coding and Word Choice Accuracy subtest as a measure of orthographic coding…

  5. Developmental profile of speech-language and communicative functions in an individual with the Preserved Speech Variant of Rett syndrome

    PubMed Central

    Marschik, Peter B.; Vollmann, Ralf; Bartl-Pokorny, Katrin D.; Green, Vanessa A.; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2018-01-01

    Objective We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant (PSV) of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. Methods For this study we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples, and picture stories to elicit narrative competences. Results Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Conclusion Future research should take into consideration a potentially considerable discordance between formal and functional language use by interpreting communicative acts on a more cautionary note. PMID:23870013

  6. Developmental profile of speech-language and communicative functions in an individual with the preserved speech variant of Rett syndrome.

    PubMed

    Marschik, Peter B; Vollmann, Ralf; Bartl-Pokorny, Katrin D; Green, Vanessa A; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2014-08-01

    We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. For this study, we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples and picture stories to elicit narrative competences. Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Future research should take into consideration a potentially considerable discordance between formal and functional language use by interpreting communicative acts on a more cautionary note.

  7. Human phoneme recognition depending on speech-intrinsic variability.

    PubMed

    Meyer, Bernd T; Jürgens, Tim; Wesker, Thorsten; Brand, Thomas; Kollmeier, Birger

    2010-11-01

    The influence of different sources of speech-intrinsic variation (speaking rate, effort, style and dialect or accent) on human speech perception was investigated. In listening experiments with 16 listeners, confusions of consonant-vowel-consonant (CVC) and vowel-consonant-vowel (VCV) sounds in speech-weighted noise were analyzed. Experiments were based on the OLLO logatome speech database, which was designed for a man-machine comparison. It contains utterances spoken by 50 speakers from five dialect/accent regions and covers several intrinsic variations. By comparing results depending on intrinsic and extrinsic variations (i.e., different levels of masking noise), the degradation induced by variabilities can be expressed in terms of the SNR. The spectral level distance between the respective speech segment and the long-term spectrum of the masking noise was found to be a good predictor for recognition rates, while phoneme confusions were influenced by the distance to spectrally close phonemes. An analysis based on transmitted information of articulatory features showed that voicing and manner of articulation are comparatively robust cues in the presence of intrinsic variations, whereas the coding of place is more degraded. The database and detailed results have been made available for comparisons between human speech recognition (HSR) and automatic speech recognizers (ASR).

  8. "Rate My Therapist": Automated Detection of Empathy in Drug and Alcohol Counseling via Speech and Language Processing.

    PubMed

    Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis G; Atkins, David C; Narayanan, Shrikanth S

    2015-01-01

    The technology for evaluating patient-provider interactions in psychotherapy (observational coding) has not changed in 70 years. It is labor-intensive, error prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally derived empathy ratings was evaluated against human ratings for each provider. Computationally derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracies, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies.
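
    The text-based pipeline described (transcript words in, an empathy classification out) can be sketched with a minimal bag-of-words logistic-regression classifier. Everything below, including the toy vocabulary and the plain gradient-descent trainer, is an illustrative assumption; the abstract does not specify the study's actual features or model at this level of detail.

```python
import math
from collections import Counter


def features(transcript, vocab):
    """Bag-of-words counts over a fixed (here, toy) vocabulary."""
    counts = Counter(transcript.lower().split())
    return [counts[w] for w in vocab]


def train_logistic(X, y, lr=0.5, epochs=200):
    """Plain stochastic-gradient logistic regression (no regularization)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            g = 1.0 / (1.0 + math.exp(-z)) - yi  # gradient of log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b


def predict(w, b, x):
    """True for a high-empathy classification, False for low."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5
```

    In practice such a model would be trained on the ASR transcripts and evaluated against the human observer codes, as the study describes.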

  9. Multilingual Practices in Contemporary and Historical Contexts: Interfaces between Code-Switching and Translation

    ERIC Educational Resources Information Center

    Kolehmainen, Leena; Skaffari, Janne

    2016-01-01

    This article serves as an introduction to a collection of four articles on multilingual practices in speech and writing, exploring both contemporary and historical sources. It not only introduces the articles but also discusses the scope and definitions of code-switching, attitudes towards multilingual interaction and, most pertinently, the…

  10. Neural Coding of Relational Invariance in Speech: Human Language Analogs to the Barn Owl.

    ERIC Educational Resources Information Center

    Sussman, Harvey M.

    1989-01-01

    The neuronal model shown to code sound-source azimuth in the barn owl by H. Wagner et al. in 1987 is used as the basis for a speculative brain-based human model, which can establish contrastive phonetic categories to solve the problem of perception "non-invariance." (SLD)

  11. Perception and Neural Coding of Harmonic Fusion in Ferrets

    DTIC Science & Technology

    2004-01-01

    Only indexing fragments are available for this report. They note that distinct percepts come under the rubric of pitch, because periodicity pitch underlies speakers' voices and speech prosody as well as musical sounds, and that spectral fusion is unclear for sounds with predominantly low-frequency spectra such as speech, music, and many animal vocalizations. A cited work is H. von Helmholtz (1863), "Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik" (Vieweg und Sohn).

  12. The Matrix Pencil and its Applications to Speech Processing

    DTIC Science & Technology

    2007-03-01

    Only fragmentary text is available for this report. The abstract begins: "Matrix pencils facilitate the study of differential equations resulting from oscillating systems. Certain problems in linear ordinary [text truncated]." The remaining fragments are reference-list entries (e.g., "Elementary Linear Algebra," 8th ed.; Wai C. Chu, "Speech Coding Algorithms"; Noble and Daniel, "Applied Linear Algebra").

  13. Simplified APC for Space Shuttle applications. [Adaptive Predictive Coding for speech transmission

    NASA Technical Reports Server (NTRS)

    Hutchins, S. E.; Batson, B. H.

    1975-01-01

    This paper describes an 8 kbps adaptive predictive digital speech transmission system which was designed for potential use in the Space Shuttle Program. The system was designed to provide good voice quality in the presence of both cabin noise on board the Shuttle and the anticipated bursty channel. Minimal increase in size, weight, and power over the current high data rate system was also a design objective.

  14. Modification and preliminary use of the five-minute speech sample in the postpartum: associations with postnatal depression and posttraumatic stress.

    PubMed

    Iles, Jane; Spiby, Helen; Slade, Pauline

    2014-10-01

    Little is known about what constitutes key components of partner support during the childbirth experience. This study modified the five minute speech sample, a measure of expressed emotion (EE), for use with new parents in the immediate postpartum. A coding framework was developed to rate the speech samples on dimensions of couple support. Associations were explored between these codes and subsequent symptoms of postnatal depression and posttraumatic stress. 372 couples were recruited in the early postpartum and individually provided short speech samples. Posttraumatic stress and postnatal depression symptoms were assessed via questionnaire measures at six and thirteen weeks. Two hundred and twelve couples completed all time-points. Key elements of supportive interactions were identified and reliably categorised. Mothers' posttraumatic stress was associated with criticisms of the partner during childbirth, general relationship criticisms and men's perception of helplessness. Postnatal depression was associated with absence of partner empathy and any positive comments regarding the partner's support. The content of new parents' descriptions of labour and childbirth, their partner during labour and birth and their relationship within the immediate postpartum may have significant implications for later psychological functioning. Interventions to enhance specific supportive elements between couples during the antenatal period merit development and evaluation.

  15. No evidence of somatotopic place of articulation feature mapping in motor cortex during passive speech perception.

    PubMed

    Arsenault, Jessica S; Buchsbaum, Bradley R

    2016-08-01

    The motor theory of speech perception has experienced a recent revival due to a number of studies implicating the motor system during speech perception. In a key study, Pulvermüller et al. (2006) showed that premotor/motor cortex differentially responds to the passive auditory perception of lip and tongue speech sounds. However, no study has yet attempted to replicate this important finding from nearly a decade ago. The objective of the current study was to replicate the principal finding of Pulvermüller et al. (2006) and generalize it to a larger set of speech tokens while applying a more powerful statistical approach using multivariate pattern analysis (MVPA). Participants performed an articulatory localizer as well as a speech perception task where they passively listened to a set of eight syllables while undergoing fMRI. Both univariate and multivariate analyses failed to find evidence for somatotopic coding in motor or premotor cortex during speech perception. Positive evidence for the null hypothesis was further confirmed by Bayesian analyses. Results consistently show that while the lip and tongue areas of the motor cortex are sensitive to movements of the articulators, they do not appear to preferentially respond to labial and alveolar speech sounds during passive speech perception.

  16. Fundamental frequency discrimination and speech perception in noise in cochlear implant simulations

    PubMed Central

    Carroll, Jeff; Zeng, Fan-Gang

    2007-01-01

    Increasing the number of channels at low frequencies improves discrimination of fundamental frequency (F0) in cochlear implants (Geurts and Wouters, 2004). We conducted three experiments to test whether improved F0 discrimination can be translated into increased speech intelligibility in noise in a cochlear implant simulation. The first experiment measured F0 discrimination and speech intelligibility in quiet as a function of channel density over different frequency regions. The results from this experiment showed a tradeoff in performance between F0 discrimination and speech intelligibility with a limited number of channels. The second experiment tested whether improving F0 discrimination and optimizing this tradeoff could improve speech performance with a competing talker. However, improved F0 discrimination did not improve speech intelligibility in noise. The third experiment identified the critical number of channels needed at low frequencies to improve speech intelligibility in noise. The results showed that, while 16 channels below 500 Hz were needed to observe any improvement in speech intelligibility in noise, even 32 channels did not achieve normal performance. Theoretically, these results suggest that without accurate spectral coding, F0 discrimination and speech perception in noise are two independent processes. Practically, the present results illustrate the need to increase the number of independent channels in cochlear implants. PMID:17604581
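
    Channel allocation in such noise-vocoder CI simulations is commonly done by spacing band edges equally along Greenwood's cochlear place-frequency map. The sketch below follows that common construction; the corner frequencies (80 Hz to 8 kHz) are assumptions chosen for illustration, since the abstract does not give the study's actual filterbank.

```python
import math


def greenwood_frequency(x):
    """Greenwood's place-to-frequency map for the human cochlea:
    f = A * (10**(a*x) - k), with x the relative position (0 = apex,
    1 = base) and the standard human constants A=165.4, a=2.1, k=0.88."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)


def channel_edges(n_channels, f_low=80.0, f_high=8000.0):
    """Cutoff frequencies for a vocoder filterbank with `n_channels`
    bands spaced equally along the cochlear map between f_low and
    f_high (corner frequencies are illustrative assumptions)."""
    def position(f):
        # inverse of Greenwood's function
        return math.log10(f / 165.4 + 0.88) / 2.1

    x_low, x_high = position(f_low), position(f_high)
    step = (x_high - x_low) / n_channels
    return [greenwood_frequency(x_low + i * step)
            for i in range(n_channels + 1)]
```

    A 16-channel simulation would then use the 17 resulting edge frequencies as its band cutoffs, with narrower bands near the apex (low frequencies) and wider bands toward the base.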

  17. Attentional Gain Control of Ongoing Cortical Speech Representations in a “Cocktail Party”

    PubMed Central

    Kerlin, Jess R.; Shahin, Antoine J.; Miller, Lee M.

    2010-01-01

    Normal listeners possess the remarkable perceptual ability to select a single speech stream among many competing talkers. However, few studies of selective attention have addressed the unique nature of speech as a temporally extended and complex auditory object. We hypothesized that sustained selective attention to speech in a multi-talker environment would act as gain control on the early auditory cortical representations of speech. Using high-density electroencephalography and a template-matching analysis method, we found selective gain to the continuous speech content of an attended talker, greatest at a frequency of 4–8 Hz, in auditory cortex. In addition, the difference in alpha power (8–12 Hz) at parietal sites across hemispheres indicated the direction of auditory attention to speech, as has been previously found in visual tasks. The strength of this hemispheric alpha lateralization, in turn, predicted an individual’s attentional gain of the cortical speech signal. These results support a model of spatial speech stream segregation, mediated by a supramodal attention mechanism, enabling selection of the attended representation in auditory cortex. PMID:20071526

  18. Performance of concatenated Reed-Solomon trellis-coded modulation over Rician fading channels

    NASA Technical Reports Server (NTRS)

    Moher, Michael L.; Lodge, John H.

    1990-01-01

    A concatenated coding scheme for providing very reliable data over mobile-satellite channels, at power levels similar to those used for vocoded speech, is described. The outer code is a shortened Reed-Solomon code, which provides error detection as well as error correction capabilities. The inner code is a one-dimensional, 8-state trellis code applied independently to both the in-phase and quadrature channels. To achieve the full error correction potential of this inner code, the code symbols are multiplexed with a pilot sequence that is used to provide dynamic channel estimation and coherent detection. The implementation structure of this scheme is discussed and its performance is estimated.

  19. Quantification and Systematic Characterization of Stuttering-Like Disfluencies in Acquired Apraxia of Speech.

    PubMed

    Bailey, Dallin J; Blomgren, Michael; DeLong, Catharine; Berggren, Kiera; Wambaugh, Julie L

    2017-06-22

    The purpose of this article is to quantify and describe stuttering-like disfluencies in speakers with acquired apraxia of speech (AOS), utilizing the Lidcombe Behavioural Data Language (LBDL). Additional purposes include measuring test-retest reliability and examining the effect of speech sample type on disfluency rates. Two types of speech samples were elicited from 20 persons with AOS and aphasia: repetition of mono- and multisyllabic words from a protocol for assessing AOS (Duffy, 2013), and connected speech tasks (Nicholas & Brookshire, 1993). Sampling was repeated at 1 and 4 weeks following initial sampling. Stuttering-like disfluencies were coded using the LBDL, which is a taxonomy that focuses on motoric aspects of stuttering. Disfluency rates ranged from 0% to 13.1% for the connected speech task and from 0% to 17% for the word repetition task. There was no significant effect of speech sampling time on disfluency rate in the connected speech task, but there was a significant effect of time for the word repetition task. There was no significant effect of speech sample type. Speakers demonstrated both major types of stuttering-like disfluencies as categorized by the LBDL (fixed postures and repeated movements). Connected speech samples yielded more reliable tallies over repeated measurements. Suggestions are made for modifying the LBDL for use in AOS in order to further add to systematic descriptions of motoric disfluencies in this disorder.

  20. Bilingual Language Assessment: Contemporary versus Recommended Practice in American Schools

    ERIC Educational Resources Information Center

    Arias, Graciela; Friberg, Jennifer

    2017-01-01

    Purpose: The purpose of this study was to identify current practices of school-based speech-language pathologists (SLPs) in the United States for bilingual language assessment and compare them to American Speech-Language-Hearing Association (ASHA) best practice guidelines and mandates of the Individuals with Disabilities Education Act (IDEA,…

  1. Apology Strategies Employed by Saudi EFL Teachers

    ERIC Educational Resources Information Center

    Alsulayyi, Marzouq Nasser

    2016-01-01

    This study examines the apology strategies used by 30 Saudi EFL teachers in Najran, the Kingdom of Saudi Arabia (KSA), paying special attention to variables such as social distance and power and offence severity. The study also delineates gender differences in the respondents' speech as opposed to studies that only examined speech act output by…

  2. 78 FR 36230 - 60-Day Notice of Proposed Information Collection: FHA-Insured Mortgage Loan Servicing of Payments...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-17

    ... described below. In accordance with the Paperwork Reduction Act, HUD is requesting comment from all... the proposed forms or other available information. Persons with hearing or speech impairments may... with hearing or speech impairments may access this number through TTY by calling the toll-free Federal...

  3. Early Intervening for Students with Speech Sound Disorders: Lessons from a School District

    ERIC Educational Resources Information Center

    Mire, Stephen P.; Montgomery, Judy K.

    2009-01-01

    The concept of early intervening services was introduced into public school systems with the implementation of the Individuals With Disabilities Education Improvement Act (IDEA) of 2004. This article describes a program developed for students with speech sound disorders that incorporated concepts of early intervening services, response to…

  4. Emotional Speech Acts and the Educational Perlocutions of Speech

    ERIC Educational Resources Information Center

    Gasparatou, Renia

    2016-01-01

    Over the past decades, there has been an ongoing debate about whether education should aim at the cultivation of emotional wellbeing of self-esteeming personalities or whether it should prioritise literacy and the cognitive development of students. However, it might be the case that the two are not easily distinguished in educational contexts. In…

  5. 25 CFR 700.525 - Use of government information or expertise.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... provisions of the Freedom of Information and the Privacy Acts (5 U.S.C. 552). An employee may not release.... (e) Commission personnel may not accept compensation for an article, speech, consultant service, or... position on a matter which is the subject of an employee's writing or speech, and the individual has been...

  6. Using others' words: conversational use of reported speech by individuals with aphasia and their communication partners.

    PubMed

    Hengst, Julie A; Frame, Simone R; Neuman-Stritzel, Tiffany; Gannaway, Rachel

    2005-02-01

    Reported speech, wherein one quotes or paraphrases the speech of another, has been studied extensively as a set of linguistic and discourse practices. Researchers agree that reported speech is pervasive, found across languages, and used in diverse contexts. However, to date, there have been no studies of the use of reported speech among individuals with aphasia. Grounded in an interactional sociolinguistic perspective, the study presented here documents and analyzes the use of reported speech by 7 adults with mild to moderately severe aphasia and their routine communication partners. Each of the 7 pairs was videotaped in 4 everyday activities at home or around the community, yielding over 27 hr of conversational interaction for analysis. A coding scheme was developed that identified 5 types of explicitly marked reported speech: direct, indirect, projected, indexed, and undecided. Analysis of the data documented reported speech as a common discourse practice used successfully by the individuals with aphasia and their communication partners. All participants produced reported speech at least once, and across all observations the target pairs produced 400 reported speech episodes (RSEs), 149 by individuals with aphasia and 251 by their communication partners. For all participants, direct and indirect forms were the most prevalent (70% of RSEs). Situated discourse analysis of specific episodes of reported speech used by 3 of the pairs provides detailed portraits of the diverse interactional, referential, social, and discourse functions of reported speech and explores ways that the pairs used reported speech to successfully frame talk despite their ongoing management of aphasia.

  7. An Adaptive Approach to a 2.4 kb/s LPC Speech Coding System.

    DTIC Science & Technology

    1985-07-01

    laryngeal cancer). Spectral estimation is at the foundation of speech analysis for all these goals, and accurate AR model estimation in noise is…

  8. Geo-Coding for the Mapping of Documents and Social Media Messages

    DTIC Science & Technology

    2013-08-22

    O.L. (2007). UBC-ALM: Combining KNN with SVD for WSD. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), Prague...and Yarowsky, D. (1992). One sense per discourse. In Proceedings of the 4th DARPA Speech and Natural Language Workshop, pp. 233-237, 1992. Retrieved...Part-of-Speech Tagging for Twitter: Annotation, Features, and Experiments. Proceedings of the Annual Meeting of the Association for Computational

  9. DEBLICOM: Deaf-Blind Communication & Control Systems: First Quarterly Progress Report.

    ERIC Educational Resources Information Center

    Kafafian, Haig

    Reported on is the first phase of development of DEBLICOM, a code for a two-way communication system for deaf-blind individuals who may be speech-impaired. Brief sections cover the following topics: alternatives to and considerations for the development of cutaneous codes for deaf-blind people; the DEBLICOM system which provides a means of…

  10. Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues.

    PubMed

    Peeters, David; Snijders, Tineke M; Hagoort, Peter; Özyürek, Aslı

    2017-01-27

    In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Self-Organization: Complex Dynamical Systems in the Evolution of Speech

    NASA Astrophysics Data System (ADS)

    Oudeyer, Pierre-Yves

    Human vocalization systems are characterized by complex structural properties. They are combinatorial, based on the systematic reuse of phonemes, and the set of repertoires in human languages is characterized by both strong statistical regularities—universals—and a great diversity. Besides, they are conventional codes culturally shared in each community of speakers. What are the origins of the forms of speech? What are the mechanisms that permitted their evolution in the course of phylogenesis and cultural evolution? How can a shared speech code be formed in a community of individuals? This chapter focuses on the way the concept of self-organization, and its interaction with natural selection, can throw light on these three questions. In particular, a computational model is presented which shows that a basic neural equipment for adaptive holistic vocal imitation, coupling directly motor and perceptual representations in the brain, can generate spontaneously shared combinatorial systems of vocalizations in a society of babbling individuals. Furthermore, we show how morphological and physiological innate constraints can interact with these self-organized mechanisms to account for both the formation of statistical regularities and diversity in vocalization systems.
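A minimal toy version of such self-organized convergence (not Oudeyer's neural model) can be written as a pairwise imitation dynamic: each interaction nudges a listener's vocalization prototype toward a speaker's, and the population drifts to a shared code.

```python
import random, statistics

def imitation_round(prototypes, rng, step=0.2):
    """One interaction: a randomly chosen listener nudges its vocalization
    prototype toward a randomly chosen speaker's (toy convergence dynamic)."""
    speaker, listener = rng.sample(range(len(prototypes)), 2)
    prototypes[listener] += step * (prototypes[speaker] - prototypes[listener])

rng = random.Random(1)
protos = [rng.random() for _ in range(20)]   # initially diverse vocalizations
before = statistics.pstdev(protos)
for _ in range(5000):
    imitation_round(protos, rng)
after = statistics.pstdev(protos)
assert after < before / 10                   # society converged on a shared code
```

No agent dictates the outcome; the shared value emerges from repeated local imitations, which is the essence of the self-organization argument, though the full model operates on structured vocalizations rather than scalars.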

  12. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  13. Gesturing with an injured brain: How gesture helps children with early brain injury learn linguistic constructions

    PubMed Central

    Özçalışkan, Şeyda; Levine, Susan C.; Goldin-Meadow, Susan

    2013-01-01

    Children with pre/perinatal unilateral brain lesions (PL) show remarkable plasticity for language development. Is this plasticity characterized by the same developmental trajectory that characterizes typically developing (TD) children, with gesture leading the way into speech? We explored this question, comparing 11 children with PL—matched to 30 TD children on expressive vocabulary—in the second year of life. Children with PL showed similarities to TD children for simple but not complex sentence types. Children with PL produced simple sentences across gesture and speech several months before producing them entirely in speech, exhibiting parallel delays in both gesture+speech and speech-alone. However, unlike TD children, children with PL produced complex sentence types first in speech-alone. Overall, the gesture-speech system appears to be a robust feature of language-learning for simple—but not complex—sentence constructions, acting as a harbinger of change in language development even when that language is developing in an injured brain. PMID:23217292

  14. Zoological Nomenclature and Speech Act Theory

    NASA Astrophysics Data System (ADS)

    Cambefort, Yves

    To know natural objects, it is necessary to give them names. This has always been done, from antiquity up to modern times. Today, the nomenclature system invented by Linnaeus in the eighteenth century is still in use, even if the philosophical principles underlying it have changed. Naming living objects still means giving them a sort of existence, since without a name they cannot be referred to, just as if they did not exist. Therefore, naming a living object is a process close to creating it. Naming is performed by means of a particular kind of text: original description written by specialists, and more often accompanied by other, ancillary texts whose purpose is to gain the acceptance and support of fellow zoologists. It is noteworthy that the actions performed by these texts are called "nomenclatural acts". These texts and acts, together with related scientific and social relationships, are examined here in the frame of speech act theory.

  15. Evaluation of speech errors in Putonghua speakers with cleft palate: a critical review of methodology issues.

    PubMed

    Jiang, Chenghui; Whitehill, Tara L

    2014-04-01

    Speech errors associated with cleft palate are well established for English and several other Indo-European languages. Few articles describing the speech of Putonghua (standard Mandarin Chinese) speakers with cleft palate have been published in English language journals. Although methodological guidelines have been published for the perceptual speech evaluation of individuals with cleft palate, there has been no critical review of methodological issues in studies of Putonghua speakers with cleft palate. A literature search was conducted to identify relevant studies published over the past 30 years in Chinese language journals. Only studies incorporating perceptual analysis of speech were included. Thirty-seven articles which met inclusion criteria were analyzed and coded on a number of methodological variables. Reliability was established by having all variables recoded for all studies. This critical review identified many methodological issues. These design flaws make it difficult to draw reliable conclusions about characteristic speech errors in this group of speakers. Specific recommendations are made to improve the reliability and validity of future studies, as well to facilitate cross-center comparisons.

  16. Phonology and Vocal Behavior in Toddlers with Autism Spectrum Disorders

    PubMed Central

    Schoen, Elizabeth; Paul, Rhea; Chawarska, Katarzyna

    2011-01-01

    The purpose of this study is to examine the phonological and other vocal productions of children, 18-36 months, with autism spectrum disorder (ASD) and to compare these productions to those of age-matched and language-matched controls. Speech samples were obtained from 30 toddlers with ASD, 11 age-matched toddlers and 23 language-matched toddlers during either parent-child or clinician-child play sessions. Samples were coded for a variety of speech-like and non-speech vocalization productions. Toddlers with ASD produced speech-like vocalizations similar to those of language-matched peers, but produced significantly more atypical non-speech vocalizations when compared to both control groups. Toddlers with ASD show speech-like sound production that is linked to their language level, in a manner similar to that seen in typical development. The main area of difference in vocal development in this population is in the production of atypical vocalizations. Findings suggest that toddlers with autism spectrum disorders might not tune into the language model of their environment. Failure to attend to the ambient language environment negatively impacts the ability to acquire spoken language. PMID:21308998

  17. Problems and Processes in Medical Encounters: The CASES method of dialogue analysis

    PubMed Central

    Laws, M. Barton; Taubin, Tatiana; Bezreh, Tanya; Lee, Yoojin; Beach, Mary Catherine; Wilson, Ira B.

    2013-01-01

    Objective To develop methods to reliably capture structural and dynamic temporal features of clinical interactions. Methods Observational study of 50 audio-recorded routine outpatient visits to HIV specialty clinics, using innovative analytic methods. The Comprehensive Analysis of the Structure of Encounters System (CASES) uses transcripts coded for speech acts, then imposes larger-scale structural elements: threads – the problems or issues addressed; and processes within threads – basic tasks of clinical care labeled Presentation, Information, Resolution (decision making) and Engagement (interpersonal exchange). Threads are also coded for the nature of resolution. Results 61% of utterances are in presentation processes. Provider verbal dominance is greatest in information and resolution processes, which also contain a high proportion of provider directives. About half of threads result in no action or decision. Information flows predominantly from patient to provider in presentation processes, and from provider to patient in information processes. Engagement is rare. Conclusions In these data, resolution is provider centered; more time for patient participation in resolution, or interpersonal engagement, would have to come from presentation. Practice Implications Awareness of the use of time in clinical encounters, and the interaction processes associated with various tasks, may help make clinical communication more efficient and effective. PMID:23391684
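The kind of tallying CASES supports (process proportions, provider verbal dominance) can be sketched over a speech-act-coded transcript. The utterance data and field names below are invented for illustration, not drawn from the study.

```python
from collections import Counter

# Each coded utterance: (speaker, process). The process labels follow the
# CASES process names; the transcript itself is fabricated.
utterances = [
    ("patient", "presentation"), ("provider", "presentation"),
    ("patient", "presentation"), ("provider", "information"),
    ("provider", "information"), ("patient", "information"),
    ("provider", "resolution"), ("provider", "resolution"),
    ("patient", "engagement"),
]

def process_proportions(utts):
    """Fraction of all utterances falling in each process."""
    counts = Counter(proc for _, proc in utts)
    total = len(utts)
    return {p: c / total for p, c in counts.items()}

def verbal_dominance(utts, process):
    """Provider share of utterances within one process."""
    in_proc = [spk for spk, proc in utts if proc == process]
    return sum(1 for s in in_proc if s == "provider") / len(in_proc)

props = process_proportions(utterances)
print(round(props["presentation"], 2))              # → 0.33
print(verbal_dominance(utterances, "resolution"))   # → 1.0
```

Run over a real coded corpus, the same two functions would reproduce summary figures of the kind the abstract reports (share of utterances per process, provider dominance by process).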

  18. Problems and processes in medical encounters: the cases method of dialogue analysis.

    PubMed

    Laws, M Barton; Taubin, Tatiana; Bezreh, Tanya; Lee, Yoojin; Beach, Mary Catherine; Wilson, Ira B

    2013-05-01

    To develop methods to reliably capture structural and dynamic temporal features of clinical interactions. Observational study of 50 audio-recorded routine outpatient visits to HIV specialty clinics, using innovative analytic methods. The Comprehensive Analysis of the Structure of Encounters System (CASES) uses transcripts coded for speech acts, then imposes larger-scale structural elements: threads--the problems or issues addressed; and processes within threads--basic tasks of clinical care labeled presentation, information, resolution (decision making) and engagement (interpersonal exchange). Threads are also coded for the nature of resolution. 61% of utterances are in presentation processes. Provider verbal dominance is greatest in information and resolution processes, which also contain a high proportion of provider directives. About half of threads result in no action or decision. Information flows predominantly from patient to provider in presentation processes, and from provider to patient in information processes. Engagement is rare. In these data, resolution is provider centered; more time for patient participation in resolution, or interpersonal engagement, would have to come from presentation. Awareness of the use of time in clinical encounters, and the interaction processes associated with various tasks, may help make clinical communication more efficient and effective. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Implications of Texas V. Johnson on Military Practice

    DTIC Science & Technology

    1991-01-01

    immunized by the constitutional guarantee of freedom of speech."28 Finally, under United States v. O'Brien,29 a state may restrict symbolic acts when...flag against the respondent's interest in freedom of speech.36 Concerning the first part of the Court's analysis, Texas advanced two interests which it

  20. When "No" Means "Yes": Agreeing and Disagreeing in Indian English Discourse.

    ERIC Educational Resources Information Center

    Valentine, Tamara M.

    This study examined the speech act of agreement and disagreement in the ordinary conversation of English-speakers in India. Data were collected in natural speech elicited from educated, bilingual speakers in cross-sex and same-sex conversations in a range of formal and informal settings. Subjects' ages ranged from 19 to about 60. Five agreement…

  1. 77 FR 28741 - The Housing and Economic Recovery Act of 2008 (HERA): Changes to the Section 8 Tenant-Based...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-15

    ... (this is not a toll-free number). Individuals with speech or hearing impairments may access this number... listed telephone number is not a toll-free number. Persons with hearing or speech impairments may access... identify and consider regulatory approaches that reduce burdens and maintain flexibility and freedom of...

  2. "Impeached or Questioned": The Uses of Speech in a Privileged Environment.

    ERIC Educational Resources Information Center

    Evans, Gill R.

    2001-01-01

    Examines the legal protections of academic freedom of speech in Britain along with the issues which surround Parliamentary privilege (a university campus is considered a privileged environment under the Education No. 2 Act). Explores implications for freedom to publish and for the freedom of others to comment upon and criticize what the academic…

  3. Cognitive Processing Speed, Working Memory, and the Intelligibility of Hearing Aid-Processed Speech in Persons with Hearing Impairment

    PubMed Central

    Yumba, Wycliffe Kabaywe

    2017-01-01

    Previous studies have demonstrated that successful listening with advanced signal processing in digital hearing aids is associated with individual cognitive capacity, particularly working memory capacity (WMC). This study aimed to examine the relationship between cognitive abilities (cognitive processing speed and WMC) and individual listeners’ responses to digital signal processing settings in adverse listening conditions. A total of 194 native Swedish speakers (83 women and 111 men), aged 33–80 years (mean = 60.75 years, SD = 8.89), with bilateral, symmetrical mild to moderate sensorineural hearing loss who had completed a lexical decision speed test (measuring cognitive processing speed) and semantic word-pair span test (SWPST, capturing WMC) participated in this study. The Hagerman test (capturing speech recognition in noise) was conducted using an experimental hearing aid with three digital signal processing settings: (1) linear amplification without noise reduction (NoP), (2) linear amplification with noise reduction (NR), and (3) non-linear amplification without NR (“fast-acting compression”). The results showed that cognitive processing speed was a better predictor of speech intelligibility in noise, regardless of the types of signal processing algorithms used. That is, there was a stronger association between cognitive processing speed and NR outcomes and fast-acting compression outcomes (in steady state noise). We observed a weaker relationship between working memory and NR, but WMC did not relate to fast-acting compression. WMC was a relatively weaker predictor of speech intelligibility in noise. These findings might have been different if the participants had been provided with training and or allowed to acclimatize to binary masking noise reduction or fast-acting compression. PMID:28861009

  4. Gesture and speech during shared book reading with preschoolers with specific language impairment.

    PubMed

    Lavelli, Manuela; Barachetti, Chiara; Florit, Elena

    2015-11-01

    This study examined (a) the relationship between gesture and speech produced by children with specific language impairment (SLI) and typically developing (TD) children, and their mothers, during shared book-reading, and (b) the potential effectiveness of gestures accompanying maternal speech on the conversational responsiveness of children. Fifteen preschoolers with expressive SLI were compared with fifteen age-matched and fifteen language-matched TD children. Child and maternal utterances were coded for modality, gesture type, gesture-speech informational relationship, and communicative function. Relative to TD peers, children with SLI used more bimodal utterances and gestures adding unique information to co-occurring speech. Some differences were mirrored in maternal communication. Sequential analysis revealed that only in the SLI group maternal reading accompanied by gestures was significantly followed by child's initiatives, and when maternal non-informative repairs were accompanied by gestures, they were more likely to elicit adequate answers from children. These findings support the 'gesture advantage' hypothesis in children with SLI, and have implications for educational and clinical practice.

  5. The minor third communicates sadness in speech, mirroring its use in music.

    PubMed

    Curtis, Meagan E; Bharucha, Jamshed J

    2010-06-01

    There is a long history of attempts to explain why music is perceived as expressing emotion. The relationship between pitches serves as an important cue for conveying emotion in music. The musical interval referred to as the minor third is generally thought to convey sadness. We reveal that the minor third also occurs in the pitch contour of speech conveying sadness. Bisyllabic speech samples conveying four emotions were recorded by 9 actresses. Acoustic analyses revealed that the relationship between the 2 salient pitches of the sad speech samples tended to approximate a minor third. Participants rated the speech samples for perceived emotion, and the use of numerous acoustic parameters as cues for emotional identification was modeled using regression analysis. The minor third was the most reliable cue for identifying sadness. Additional participants rated musical intervals for emotion, and their ratings verified the historical association between the musical minor third and sadness. These findings support the theory that human vocal expressions and music share an acoustic code for communicating sadness.
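The interval measurement behind this finding is simple to reproduce: the distance between two pitches in equal-tempered semitones is 12·log2(f2/f1), and a minor third is about 3 semitones. The frequencies below are illustrative, not the study's stimuli.

```python
import math

def interval_semitones(f1_hz, f2_hz):
    """Size of the interval between two pitches, in equal-tempered semitones."""
    return 12 * math.log2(f2_hz / f1_hz)

# A minor third spans 3 semitones, e.g. C4 (~261.63 Hz) up to E-flat4 (~311.13 Hz)
c4, e_flat4 = 261.63, 311.13
print(round(interval_semitones(c4, e_flat4), 2))  # → 3.0
```

Applying the same formula to the two salient pitches of a sad utterance is what lets the authors say the speech contour "approximates a minor third".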

  6. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and the minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic algorithm based codebook generation in vector quantization. The initial populations for the genetic algorithm are created by selecting random code vectors from the training set, and IP-HMM performs the recognition. The novelty lies in one of the genetic operations, crossover. The proposed speech recognition technique offers 97.14% accuracy.

  7. Vector quantization

    NASA Technical Reports Server (NTRS)

    Gray, Robert M.

    1989-01-01

    During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.
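The basic VQ operation the survey covers (map each input vector to its nearest code vector and transmit only the index) fits in a few lines. The codebook and data here are toy values chosen for illustration.

```python
def nearest(codebook, v):
    """Index of the code vector closest to v (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))

def quantize(codebook, vectors):
    """Replace each input vector by its nearest code vector; a coder would
    transmit only the index, i.e. log2(len(codebook)) bits per vector."""
    return [codebook[nearest(codebook, v)] for v in vectors]

codebook = [(0.0, 0.0), (1.0, 1.0)]            # 1 bit per 2-D vector (toy rate)
data = [(0.1, -0.2), (0.9, 1.1), (0.4, 0.5)]
print(quantize(codebook, data))                # → [(0.0, 0.0), (1.0, 1.0), (0.0, 0.0)]
```

The design problem the survey discusses is choosing the codebook so that this nearest-neighbor mapping minimizes expected distortion at a given rate.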

  8. Persistent Use of Mixed Code: An Exploration of Its Functions in Hong Kong Schools

    ERIC Educational Resources Information Center

    Low, Winnie W. M.; Lu, Dan

    2006-01-01

    Codemixing of Cantonese Chinese and English is a common speech behaviour used by bilingual people in Hong Kong. Though codemixing is repeatedly criticised as a cause of the decline of students' language competence, there is little hard evidence to indicate its detrimental effects. This study examines the use of mixed code in the context of the…

  9. Speech coding at low to medium bit rates

    NASA Astrophysics Data System (ADS)

    Leblanc, Wilfred Paul

    1992-09-01

    Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short term filter are developed by employing a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be both robust against input characteristics and in the presence of channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost using significant structure in the excitation codebooks while greatly reducing the search complexity. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short term filter, the adaptive codebook, and the excitation. Improvements in signal to noise ratio of 1-2 dB are realized in practice.
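Multistage VQ as described here can be sketched directly: each stage quantizes the residual left by the previous stages, so the effective codebook is the cross-product of small per-stage codebooks at a fraction of the search and storage cost. The two-stage toy codebooks below are invented and carry none of the thesis's joint codebook design procedure.

```python
def nearest_idx(codebook, v):
    """Index of the code vector closest to v (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))

def msvq_encode(stages, v):
    """One index per stage; each stage quantizes the residual left over
    from the previous stages (greedy, not joint, search)."""
    idxs, residual = [], list(v)
    for cb in stages:
        i = nearest_idx(cb, residual)
        idxs.append(i)
        residual = [r - c for r, c in zip(residual, cb[i])]
    return idxs

def msvq_decode(stages, idxs):
    """Reconstruction is the sum of the selected per-stage code vectors."""
    out = [0.0] * len(stages[0][0])
    for cb, i in zip(stages, idxs):
        out = [o + c for o, c in zip(out, cb[i])]
    return out

# Two stages of 2 entries each: 2 bits total, 2x2 = 4 effective codewords
stage1 = [(0.0, 0.0), (1.0, 1.0)]      # coarse stage
stage2 = [(0.0, 0.1), (0.1, -0.1)]     # refinement stage
idxs = msvq_encode([stage1, stage2], (1.05, 0.9))
print(idxs, msvq_decode([stage1, stage2], idxs))
```

Searching each small stage in turn is what keeps complexity low; the tree-search and joint-design techniques the thesis proposes recover most of the loss this greedy stage-by-stage search incurs.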

  10. Near-toll quality digital speech transmission in the mobile satellite service

    NASA Technical Reports Server (NTRS)

    Townes, S. A.; Divsalar, D.

    1986-01-01

    This paper discusses system considerations for near-toll quality digital speech transmission in a 5 kHz mobile satellite system channel. Tradeoffs are shown for power performance versus delay for a 4800 bps speech compression system in conjunction with a 16-state, rate-2/3 trellis-coded 8PSK modulation system. The suggested system has an additional 150 ms of delay beyond the propagation delay and requires an Eb/N0 of about 7 dB for a Ricean channel assumption with a line-of-sight to diffuse component ratio of 10, assuming ideal synchronization. An additional loss of 2 to 3 dB is expected for synchronization in a fading environment.

  11. Perception drives production across sensory modalities: A network for sensorimotor integration of visual speech.

    PubMed

    Venezia, Jonathan H; Fillmore, Paul; Matchin, William; Isenberg, A Lisette; Hickok, Gregory; Fridriksson, Julius

    2016-02-01

    Sensory information is critical for movement control, both for defining the targets of actions and providing feedback during planning or ongoing movements. This holds for speech motor control as well, where both auditory and somatosensory information have been shown to play a key role. Recent clinical research demonstrates that individuals with severe speech production deficits can show a dramatic improvement in fluency during online mimicking of an audiovisual speech signal suggesting the existence of a visuomotor pathway for speech motor control. Here we used fMRI in healthy individuals to identify this new visuomotor circuit for speech production. Participants were asked to perceive and covertly rehearse nonsense syllable sequences presented auditorily, visually, or audiovisually. The motor act of rehearsal, which is prima facie the same whether or not it is cued with a visible talker, produced different patterns of sensorimotor activation when cued by visual or audiovisual speech (relative to auditory speech). In particular, a network of brain regions including the left posterior middle temporal gyrus and several frontoparietal sensorimotor areas activated more strongly during rehearsal cued by a visible talker versus rehearsal cued by auditory speech alone. Some of these brain regions responded exclusively to rehearsal cued by visual or audiovisual speech. This result has significant implications for models of speech motor control, for the treatment of speech output disorders, and for models of the role of speech gesture imitation in development. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. An Issue Hiding in Plain Sight: When Are Speech-Language Pathologists Special Educators Rather than Related Services Providers?

    ERIC Educational Resources Information Center

    Giangreco, Michael F.; Prelock, Patricia A.; Turnbull, H. Rutherford, III

    2010-01-01

    Purpose: Under the Individuals With Disabilities Education Act (IDEA; as amended, 2004), speech-language pathology services may be either special education or a related service. Given the absence of guidance documents or research on this issue, the purposes of this clinical exchange are to (a) present and analyze the IDEA definitions related to…

  14. 75 FR 31334 - Real Estate Settlement Procedures Act (RESPA): Strengthening and Clarifying RESPA's “Required Use...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-03

    ... (this is not a toll-free number). Individuals with speech or hearing impairments may access this number through TTY by calling the Federal Information Relay Service at 800-877-8339 (this is not a toll-free...-free number). Persons with hearing or speech impairments may access this number through TTY by calling...

  15. Moral Imagination in Education: A Deweyan Proposal for Teachers Responding to Hate Speech

    ERIC Educational Resources Information Center

    Arneback, Emma

    2014-01-01

    This article is about moments when teachers experience hate speech in education and need to act. Based on John Dewey's work on moral philosophy and examples from teaching practice, I would like to contribute to the discussion about moral education by emphasizing the following: (1) the importance of experience, (2) the problem with prescribed…

  16. Le langage ecrit: Actes du 6e colloque d'orthophonie/logopedie (Written Language: Proceedings of the Sixth Colloquium on Speech Therapy/Speech Pathology) (Neuchatel, Switzerland, September 21-22, 2000).

    ERIC Educational Resources Information Center

    de Weck, Genevieve, Ed.; Sovilla, Jocelyne Buttet, Ed.

    This collection of papers discusses various theoretical, clinical, and assessment issues in reading and writing delays and disorders. Topics include the following: integrating different theoretical approaches (cognitive psychology, neuropsychology, constructivism) into clinical approaches to reading and writing difficulties; difficulties of…

  17. The equilibrium point hypothesis and its application to speech motor control.

    PubMed

    Perrier, P; Ostry, D J; Laboissière, R

    1996-04-01

    In this paper, we address a number of issues in speech research in the context of the equilibrium point hypothesis of motor control. The hypothesis suggests that movements arise from shifts in the equilibrium position of the limb or the speech articulator. The equilibrium is a consequence of the interaction of central neural commands, reflex mechanisms, muscle properties, and external loads, but it is under the control of central neural commands. These commands act to shift the equilibrium via centrally specified signals acting at the level of the motoneurone (MN) pool. In the context of a model of sagittal plane jaw and hyoid motion based on the lambda version of the equilibrium point hypothesis, we consider the implications of this hypothesis for the notion of articulatory targets. We suggest that simple linear control signals may underlie smooth articulatory trajectories. We also explore the phenomenon of intraarticulator coarticulation in jaw movement. We suggest that, even when no account is taken of upcoming context, apparent anticipatory changes in movement amplitude and duration may arise due to dynamics. We also present a number of simulations that show in different ways how variability in measured kinematics can arise in spite of constant-magnitude speech control signals.

  18. Asymmetric Dynamic Attunement of Speech and Gestures in the Construction of Children's Understanding.

    PubMed

    De Jonge-Hoekstra, Lisette; Van der Steen, Steffie; Van Geert, Paul; Cox, Ralf F A

    2016-01-01

    As children learn, they use speech to express words and their hands to gesture. This study investigates the interplay between real-time gestures and speech as children construct cognitive understanding during a hands-on science task. Twelve children (M = 6, F = 6) from Kindergarten (n = 5) and first grade (n = 7) participated in this study. Each verbal utterance and gesture during the task was coded on a complexity scale derived from dynamic skill theory. To explore the interplay between speech and gestures, we applied a cross recurrence quantification analysis (CRQA) to the two coupled time series of the skill levels of verbalizations and gestures. The analysis focused on (1) the temporal relation between gestures and speech, (2) the relative strength and direction of the interaction between gestures and speech, (3) the relative strength and direction between gestures and speech for different levels of understanding, and (4) relations between CRQA measures and other child characteristics. The results show that older and younger children differ in the (temporal) asymmetry of the gestures-speech interaction. For younger children, the balance leans more toward gestures leading speech in time, while for older children it leans more toward speech leading gestures. Secondly, at the group level, speech attracts gestures in a more dynamically stable fashion than vice versa, and this asymmetry between gestures and speech extends to lower and higher understanding levels. Yet, for older children, the mutual coupling between gestures and speech is more dynamically stable at the higher understanding levels. Gestures and speech are more synchronized in time as children get older. A higher score on schools' language tests is related to speech attracting gestures more rigidly and to more asymmetry between gestures and speech, but only for the less difficult understanding levels. A higher score on math or past science tasks is related to less asymmetry between gestures and speech. The picture that emerges from our analyses suggests that the relation between gestures, speech and cognition is more complex than previously thought. We suggest that temporal differences and asymmetry of influence between gestures and speech arise from the simultaneous coordination of synergies.
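
    A toy version of the cross recurrence quantification analysis (CRQA) used in this study can be sketched for two categorical skill-level series. The series and levels below are hypothetical, and real CRQA adds delay embedding plus diagonal- and vertical-line measures; this only shows the core matrix and a crude lead/lag asymmetry:

```python
import numpy as np

def cross_recurrence(a, b):
    """Cross-recurrence matrix of two categorical series:
    R[i, j] = 1 where series a at time i matches series b at time j."""
    a, b = np.asarray(a), np.asarray(b)
    return (a[:, None] == b[None, :]).astype(int)

gestures = [1, 1, 2, 2, 3, 3]   # hypothetical gesture skill levels over time
speech   = [1, 2, 2, 3, 3, 3]   # hypothetical speech skill levels over time
R = cross_recurrence(gestures, speech)
rr = R.mean()                   # recurrence rate
# recurrence above vs below the main diagonal hints at which
# modality tends to reach a level first, i.e. which leads in time
upper = int(np.triu(R, 1).sum())
lower = int(np.tril(R, -1).sum())
```

    Here `lower > upper`, meaning the speech series reaches each level earlier than gestures in this made-up example; the study's analysis quantifies such asymmetries far more carefully.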

  19. 29 CFR 779.314 - “Goods” and “services” defined.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... OF GENERAL POLICY OR INTERPRETATION NOT DIRECTLY RELATED TO REGULATIONS THE FAIR LABOR STANDARDS ACT... term “goods” is defined in section 3(i) of the Act and has been discussed above in § 779.14. The Act... consistent with its usage in ordinary speech, with the context in which it appears and with the legislative...

  20. 76 FR 2381 - Notice of Public Information Collection(s) Being Reviewed by the Federal Communications...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-13

    ... Services for Hearing-Impaired and Speech-Impaired Individuals; the Americans with Disabilities Act of 1990... required by the Paperwork Reduction Act (PRA) of 1995, 44 U.S.C. 3501-3520. Comments are requested... of information subject to the Paperwork Reduction Act (PRA) that does not display a valid OMB control...

  1. 78 FR 19632 - Administrative Claims Under the Federal Tort Claims Act and Related Statutes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-02

    ... Administration 20 CFR Parts 638 and 670 RIN 1290-AA25 Administrative Claims Under the Federal Tort Claims Act and... governing administrative claims under the Federal Tort Claims Act and related statutes. DATES: Effective... (this is not a toll-free number). Individuals with hearing or speech impairments may access this...

  2. 76 FR 42463 - Consolidated Redelegation of Authority to the Office of General Counsel

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-18

    ....) Individuals with speech or hearing impairments may access this number through TTY by calling 1-800-877- 8339...''). 9. To act upon appeals under the Freedom of Information Act, 5 U.S.C. 552, except appeals from..., the authority to act upon appeals emanating from Headquarters or Regional Offices under the Freedom of...

  3. 77 FR 21559 - Sunshine Act Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-10

    ... FEDERAL ELECTION COMMISSION Sunshine Act Meeting AGENCY: Federal Election Commission. DATE AND TIME: Thursday, April 12, 2012 at 10 a.m. PLACE: 999 E Street NW., Washington, DC (Ninth Floor). STATUS.... Draft Advisory Opinion 2012-11: Free Speech. Management and Administrative Matters. Individuals who plan...

  4. No, There Is No 150 ms Lead of Visual Speech on Auditory Speech, but a Range of Audiovisual Asynchronies Varying from Small Audio Lead to Large Audio Lag

    PubMed Central

    Schwartz, Jean-Luc; Savariaux, Christophe

    2014-01-01

    An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would be typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call “preparatory gestures”. However, when syllables are chained in sequences, as they are typically in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call “comodulatory gestures” providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction. PMID:25079216

  5. Temporal predictive mechanisms modulate motor reaction time during initiation and inhibition of speech and hand movement.

    PubMed

    Johari, Karim; Behroozmand, Roozbeh

    2017-08-01

    Skilled movement is mediated by motor commands executed with extremely fine temporal precision. The question of how the brain incorporates temporal information to perform motor actions has remained unanswered. This study investigated the effect of stimulus temporal predictability on response timing of speech and hand movement. Subjects performed a randomized vowel vocalization or button press task in two counterbalanced blocks in response to temporally predictable and unpredictable visual cues. Results indicated that speech and hand reaction time was decreased for predictable compared with unpredictable stimuli. This finding suggests that a temporal predictive code is established to capture the temporal dynamics of sensory cues in order to produce faster movements in response to predictable stimuli. In addition, results revealed a main effect of modality, indicating faster hand movement compared with speech. We suggest that this effect is accounted for by the inherent complexity of speech production compared with hand movement. Lastly, we found that movement inhibition was faster than initiation for both hand and speech, suggesting that movement initiation requires a longer processing time to coordinate activities across multiple regions in the brain. These findings provide new insights into the mechanisms of temporal information processing during initiation and inhibition of speech and hand movement. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. New Perspectives on Assessing Amplification Effects

    PubMed Central

    Souza, Pamela E.; Tremblay, Kelly L.

    2006-01-01

    Clinicians have long been aware of the range of performance variability with hearing aids. Despite improvements in technology, there remain many instances of well-selected and appropriately fitted hearing aids whereby the user reports minimal improvement in speech understanding. This review presents a multistage framework for understanding how a hearing aid affects performance. Six stages are considered: (1) acoustic content of the signal, (2) modification of the signal by the hearing aid, (3) interaction between sound at the output of the hearing aid and the listener's ear, (4) integrity of the auditory system, (5) coding of available acoustic cues by the listener's auditory system, and (6) correct identification of the speech sound. Within this framework, this review describes methodology and research on 2 new assessment techniques: acoustic analysis of speech measured at the output of the hearing aid and auditory evoked potentials recorded while the listener wears hearing aids. Acoustic analysis topics include the relationship between conventional probe microphone tests and probe microphone measurements using speech, appropriate procedures for such tests, and assessment of signal-processing effects on speech acoustics and recognition. Auditory evoked potential topics include an overview of physiologic measures of speech processing and the effect of hearing loss and hearing aids on cortical auditory evoked potential measurements in response to speech. Finally, the clinical utility of these procedures is discussed. PMID:16959734

  7. Musicians change their tune: how hearing loss alters the neural code.

    PubMed

    Parbery-Clark, Alexandra; Anderson, Samira; Kraus, Nina

    2013-08-01

    Individuals with sensorineural hearing loss have difficulty understanding speech, especially in background noise. This deficit remains even when audibility is restored through amplification, suggesting that mechanisms beyond a reduction in peripheral sensitivity contribute to the perceptual difficulties associated with hearing loss. Given that normal-hearing musicians have enhanced auditory perceptual skills, including speech-in-noise perception, coupled with heightened subcortical responses to speech, we aimed to determine whether similar advantages could be observed in middle-aged adults with hearing loss. Results indicate that musicians with hearing loss, despite self-perceptions of average performance for understanding speech in noise, have a greater ability to hear in noise relative to nonmusicians. This is accompanied by more robust subcortical encoding of sound (e.g., stimulus-to-response correlations and response consistency) as well as more resilient neural responses to speech in the presence of background noise (e.g., neural timing). Musicians with hearing loss also demonstrate unique neural signatures of spectral encoding relative to nonmusicians: enhanced neural encoding of the speech-sound's fundamental frequency but not of its upper harmonics. This stands in contrast to previous outcomes in normal-hearing musicians, who have enhanced encoding of the harmonics but not the fundamental frequency. Taken together, our data suggest that although hearing loss modifies a musician's spectral encoding of speech, the musician advantage for perceiving speech in noise persists in a hearing-impaired population by adaptively strengthening underlying neural mechanisms for speech-in-noise perception. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Neural coding of sound envelope in reverberant environments.

    PubMed

    Slama, Michaël C C; Delgutte, Bertrand

    2015-03-11

    Speech reception depends critically on temporal modulations in the amplitude envelope of the speech signal. Reverberation encountered in everyday environments can substantially attenuate these modulations. To assess the effect of reverberation on the neural coding of amplitude envelope, we recorded from single units in the inferior colliculus (IC) of unanesthetized rabbit using sinusoidally amplitude modulated (AM) broadband noise stimuli presented in simulated anechoic and reverberant environments. Although reverberation degraded both rate and temporal coding of AM in IC neurons, in most neurons, the degradation in temporal coding was smaller than the AM attenuation in the stimulus. This compensation could largely be accounted for by the compressive shape of the modulation input-output function (MIOF), which describes the nonlinear transformation of modulation depth from acoustic stimuli into neural responses. Additionally, in a subset of neurons, the temporal coding of AM was better for reverberant stimuli than for anechoic stimuli having the same modulation depth at the ear. Using hybrid anechoic stimuli that selectively possess certain properties of reverberant sounds, we show that this reverberant advantage is not caused by envelope distortion, static interaural decorrelation, or spectral coloration. Overall, our results suggest that the auditory system may possess dual mechanisms that make the coding of amplitude envelope relatively robust in reverberation: one general mechanism operating for all stimuli with small modulation depths, and another mechanism dependent on very specific properties of reverberant stimuli, possibly the periodic fluctuations in interaural correlation at the modulation frequency. Copyright © 2015 the authors 0270-6474/15/354452-17$15.00/0.
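
    The compensation attributed above to the compressive modulation input-output function (MIOF) can be illustrated numerically. The power-law form and exponent below are assumptions for illustration only, not the MIOFs fitted in the study; the point is that a compressive mapping makes the relative loss in neural modulation coding smaller than the relative attenuation of the stimulus envelope:

```python
import numpy as np

def miof(m_in, exponent=0.5):
    """Toy compressive modulation input-output function: neural
    modulation depth grows sublinearly with stimulus modulation
    depth (the power-law form and exponent are assumed)."""
    return m_in ** exponent

m_anechoic = 1.0
m_reverb = 0.25                 # reverberation attenuates the envelope 4x
stim_loss = m_reverb / m_anechoic                 # stimulus retains 25%
neural_loss = miof(m_reverb) / miof(m_anechoic)   # neural code retains 50%
```

    Any concave MIOF gives the same qualitative effect, which is why the compressive shape alone accounts for much of the observed robustness.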

  9. The impact of workplace factors on evidence-based speech-language pathology practice for children with autism spectrum disorders.

    PubMed

    Cheung, Gladys; Trembath, David; Arciuli, Joanne; Togher, Leanne

    2013-08-01

    Although researchers have examined barriers to implementing evidence-based practice (EBP) at the level of the individual, little is known about the effects workplaces have on speech-language pathologists' implementation of EBP. The aim of this study was to examine the impact of workplace factors on the use of EBP amongst speech-language pathologists who work with children with Autism Spectrum Disorder (ASD). This study sought to (a) explore views about EBP amongst speech-language pathologists who work with children with ASD, (b) identify workplace factors which, in the participants' opinions, acted as barriers or enablers to their provision of evidence-based speech-language pathology services, and (c) examine whether or not speech-language pathologists' responses to workplace factors differed based on the type of workplace or their years of experience. A total of 105 speech-language pathologists from across Australia completed an anonymous online questionnaire. The results indicate that, although the majority of speech-language pathologists agreed that EBP is necessary, they experienced barriers to their implementation of EBP including workplace culture and support, lack of time, cost of EBP, and the availability and accessibility of EBP resources. The barriers reported by speech-language pathologists were similar, regardless of their workplace (private practice vs organization) and years of experience.

  10. Development of a good-quality speech coder for transmission over noisy channels at 2.4 kb/s

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Berouti, M.; Higgins, A.; Russell, W.

    1982-03-01

    This report describes the development, study, and experimental results of a 2.4 kb/s speech coder called harmonic deviations (HDV) vocoder, which transmits good-quality speech over noisy channels with bit-error rates of up to 1%. The HDV coder is based on the linear predictive coding (LPC) vocoder, and it transmits additional information over and above the data transmitted by the LPC vocoder, in the form of deviations between the speech spectrum and the LPC all-pole model spectrum at a selected set of frequencies. At the receiver, the spectral deviations are used to generate the excitation signal for the all-pole synthesis filter. The report describes and compares several methods for extracting the spectral deviations from the speech signal and for encoding them. To limit the bit-rate of the HDV coder to 2.4 kb/s the report discusses several methods including orthogonal transformation and minimum-mean-square-error scalar quantization of log area ratios, two-stage vector-scalar quantization, and variable frame rate transmission. The report also presents the results of speech-quality optimization of the HDV coder at 2.4 kb/s.
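
    The LPC analysis underlying the HDV coder, and the spectral deviations it transmits, can be roughly sketched as follows (autocorrelation-method LPC; gain normalization and the selection of deviation frequencies are omitted, so this is only an illustration, not the report's coder):

```python
import numpy as np

def lpc(frame, order=8):
    """LPC coefficients by the autocorrelation method (toy sketch)."""
    n = len(frame)
    r = np.correlate(frame, frame, "full")[n - 1:n + order]   # r[0..order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))    # A(z) = 1 - sum_k a_k z^{-k}

rng = np.random.default_rng(1)
frame = np.sin(0.3 * np.arange(240)) + 0.1 * rng.normal(size=240)
A = lpc(frame)

# deviations (in dB) between the signal spectrum and the LPC all-pole
# model spectrum 1/|A(e^jw)|; gain matching is omitted in this sketch
S_db = 20 * np.log10(np.abs(np.fft.rfft(frame, 512)) + 1e-12)
M_db = -20 * np.log10(np.abs(np.fft.rfft(A, 512)) + 1e-12)
dev_db = S_db - M_db
```

    An HDV-style coder would quantize `dev_db` only at a selected set of frequencies and use it at the receiver to shape the excitation of the all-pole synthesis filter.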

  11. Statistical Analysis of Spectral Properties and Prosodic Parameters of Emotional Speech

    NASA Astrophysics Data System (ADS)

    Přibil, J.; Přibilová, A.

    2009-01-01

    The paper addresses the reflection of microintonation and spectral properties in male and female acted emotional speech. The microintonation component of speech melody is analyzed with regard to its spectral and statistical parameters. According to psychological research on emotional speech, different emotions are accompanied by different spectral noise. We control its amount by spectral flatness, according to which high-frequency noise is mixed into voiced frames during cepstral speech synthesis. Our experiments are aimed at statistical analysis of cepstral coefficient values and ranges of spectral flatness in three emotions (joy, sadness, anger) and a neutral state for comparison. Calculated histograms of the spectral flatness distribution are visually compared and modelled by a Gamma probability distribution. Histograms of the cepstral coefficient distribution are evaluated and compared using skewness and kurtosis. The statistical results show good correlation between male and female voices for all emotional states portrayed by several Czech and Slovak professional actors.
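
    Spectral flatness, the parameter whose distributions are modelled in this paper, has a standard definition: the geometric mean of the power spectrum divided by its arithmetic mean, near 1 for white noise and near 0 for a pure tone. A minimal sketch (frame length and signals are illustrative):

```python
import numpy as np

def spectral_flatness(x, nfft=256):
    """Spectral flatness: geometric over arithmetic mean of the
    power spectrum (1 = flat/noise-like, -> 0 = tonal)."""
    p = np.abs(np.fft.rfft(x, nfft)) ** 2 + 1e-12  # floor avoids log(0)
    return np.exp(np.mean(np.log(p))) / np.mean(p)

rng = np.random.default_rng(2)
noise = rng.normal(size=256)                       # noise-like frame
tone = np.sin(2 * np.pi * 0.1 * np.arange(256))    # tonal frame
sf_noise = spectral_flatness(noise)
sf_tone = spectral_flatness(tone)
```

    In the synthesis scheme described above, larger flatness values would call for mixing more high-frequency noise into voiced frames.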

  12. A technology prototype system for rating therapist empathy from audio recordings in addiction counseling.

    PubMed

    Xiao, Bo; Huang, Chewei; Imel, Zac E; Atkins, David C; Georgiou, Panayiotis; Narayanan, Shrikanth S

    2016-04-01

    Scaling up psychotherapy services such as addiction counseling is a critical societal need. One challenge is ensuring the quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech technology-based system to automate the assessment of therapist empathy, a key therapy quality index, from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated therapy-session level empathy codes using utterance level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert-annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, compared with a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality assurance and therapist training.

  14. Surveys 2. Eight State-of-the-Art Articles on Key Areas in Language Teaching. Cambridge Language Teaching Surveys.

    ERIC Educational Resources Information Center

    Kinsella, Valerie, Ed.

    The articles in this volume are an overview of work in a number of subjects and disciplines which contribute to the field of applied linguistics and language teaching. Specifically, they treat universal properties common to all languages, the historical developments and central issues in speech act theory, speech research on the various stages of…

  15. Leaving Mango Street: Speech, Action and the Construction of Narrative in Britton's Spectator Stance

    ERIC Educational Resources Information Center

    Crawford-Garrett, Katherine

    2009-01-01

    This paper attempts to unite "The House on Mango Street" by Sandra Cisneros with the participant and spectator theories of James Britton and D. W. Harding in the hopes that such a union will provide new insights into each. In particular, this article explores how the speech acts of Esperanza, the novel's protagonist, are indicative of a shifting…

  16. Functional overlap between regions involved in speech perception and in monitoring one's own voice during speech production.

    PubMed

    Zheng, Zane Z; Munhall, Kevin G; Johnsrude, Ingrid S

    2010-08-01

    The fluency and the reliability of speech production suggest a mechanism that links motor commands and sensory feedback. Here, we examined the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not and by examining the overlap with the network recruited during passive listening to speech sounds. We used real-time signal processing to compare brain activity when participants whispered a consonant-vowel-consonant word ("Ted") and either heard this clearly or heard voice-gated masking noise. We compared this to when they listened to yoked stimuli (identical recordings of "Ted" or noise) without speaking. Activity along the STS and superior temporal gyrus bilaterally was significantly greater if the auditory stimulus was (a) processed as the auditory concomitant of speaking and (b) did not match the predicted outcome (noise). The network exhibiting this Feedback Type x Production/Perception interaction includes a superior temporal gyrus/middle temporal gyrus region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts and that processes an error signal in speech-sensitive regions when this and the sensory data do not match.

  17. Functional overlap between regions involved in speech perception and in monitoring one’s own voice during speech production

    PubMed Central

    Zheng, Zane Z.; Munhall, Kevin G; Johnsrude, Ingrid S

    2009-01-01

    The fluency and reliability of speech production suggests a mechanism that links motor commands and sensory feedback. Here, we examine the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not, and examining the overlap with the network recruited during passive listening to speech sounds. We use real-time signal processing to compare brain activity when participants whispered a consonant-vowel-consonant word (‘Ted’) and either heard this clearly, or heard voice-gated masking noise. We compare this to when they listened to yoked stimuli (identical recordings of ‘Ted’ or noise) without speaking. Activity along the superior temporal sulcus (STS) and superior temporal gyrus (STG) bilaterally was significantly greater if the auditory stimulus was a) processed as the auditory concomitant of speaking and b) did not match the predicted outcome (noise). The network exhibiting this Feedback type by Production/Perception interaction includes an STG/MTG region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts, and that processes an error signal in speech-sensitive regions when this and the sensory data do not match. PMID:19642886

  18. Space station interior noise analysis program

    NASA Technical Reports Server (NTRS)

    Stusnick, E.; Burn, M.

    1987-01-01

    Documentation is provided for a microcomputer program which was developed to evaluate the effect of the vibroacoustic environment on speech communication inside a space station. The program, entitled Space Station Interior Noise Analysis Program (SSINAP), combines a Statistical Energy Analysis (SEA) prediction of sound and vibration levels within the space station with a speech intelligibility model based on the Modulation Transfer Function and the Speech Transmission Index (MTF/STI). The SEA model provides an effective analysis tool for predicting the acoustic environment based on proposed space station design. The MTF/STI model provides a method for evaluating speech communication in the relatively reverberant and potentially noisy environments that are likely to occur in space stations. The combinations of these two models provides a powerful analysis tool for optimizing the acoustic design of space stations from the point of view of speech communications. The mathematical algorithms used in SSINAP are presented to implement the SEA and MTF/STI models. An appendix provides an explanation of the operation of the program along with details of the program structure and code.
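
    The MTF/STI model combined in SSINAP can be sketched as follows: each octave band's modulation-transfer values are converted to an apparent signal-to-noise ratio, clipped, rescaled to a transmission index, and the band indices are averaged with weights. This is a simplified, illustrative version; the band weights below are placeholders, not the values from the STI standard (IEC 60268-16).

```python
import math

# Illustrative octave-band weights summing to 1.0; the STI standard
# (IEC 60268-16) defines its own values, these are placeholders.
BAND_WEIGHTS = [0.13, 0.14, 0.11, 0.12, 0.19, 0.17, 0.14]

def transmission_index(m):
    """Map one modulation-transfer value m (0..1) to a transmission index."""
    m = min(max(m, 1e-6), 1.0 - 1e-6)        # keep the log finite
    snr = 10.0 * math.log10(m / (1.0 - m))   # apparent signal-to-noise ratio, dB
    snr = min(max(snr, -15.0), 15.0)         # clip to +/- 15 dB
    return (snr + 15.0) / 30.0               # rescale to 0..1

def speech_transmission_index(mtf):
    """mtf: one list of modulation-transfer values per octave band."""
    band_ti = [sum(transmission_index(m) for m in band) / len(band) for band in mtf]
    return sum(w * ti for w, ti in zip(BAND_WEIGHTS, band_ti))

# Near-perfect transmission should score near 1; heavy degradation near 0.
clean = [[0.999] * 14 for _ in range(7)]
noisy = [[0.05] * 14 for _ in range(7)]
```

    In a reverberant, noisy space-station module, reverberation and noise both reduce the measured modulation transfer, which is what pushes the index down.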

  19. Report on the First APCA Government Affairs Seminar "The Clean Air Act."

    ERIC Educational Resources Information Center

    Beery, Williamina T.

    1973-01-01

    A summary of 18 speeches and sessions from the Government Affairs Seminar is given. Topics featured were emission standards for mobile sources, implementation strategies for stationary sources, non-degradation of air quality standards, and technology assessment and the National Environmental Policy Act. (BL)

  20. Local television news coverage of President Clinton's introduction of the Health Security Act.

    PubMed

    Dorfman, L; Schauffler, H H; Wilkerson, J; Feinson, J

    1996-04-17

    To investigate how local television news reported on health system reform during the week President Clinton presented his health system reform bill. Retrospective content analysis of the 1342-page Health Security Act of 1993, the printed text of President Clinton's speech before Congress on September 22, 1993, and a sample of local television news stories on health system reform broadcast during the week of September 19 through 25, 1993. The state of California. During the week, 316 television news stories on health system reform were aired during the 166 local news broadcasts sampled. Health system reform was the second most frequently reported topic, second to stories on violent crime. News stories on health system reform averaged 1 minute 38 seconds in length, compared with 57 seconds for violent crime. Fifty-seven percent of the local news stories focused on interest group politics. Compared with the content of the Health Security Act, local news broadcasts devoted a significantly greater portion of their stories to financing, eligibility, and preventive services. Local news stories gave significantly less attention to cost-saving mechanisms, long-term care benefits, and changes in Medicare and Medicaid, and less than 2% of stories mentioned quality assurance mechanisms, malpractice reform, or new public health initiatives. Of the 316 televised news stories, 53 reported on the president's speech, covering many of the same topics emphasized in the speech (financing, organization and administration, and eligibility) and de-emphasizing many of the same topics (Medicare and Medicaid, quality assurance, and malpractice reform). Two percent of the president's speech covered partisan politics; 45% of the local news stories on the speech featured challenges from partisan politicians. Although health system reform was the focus of a large number of local television news stories during the week, in-depth explanation was scarce. In general, the news stories provided superficial coverage framed largely in terms of the risks and costs of reform to specific stakeholders.

  1. Filtering, Coding, and Compression with Malvar Wavelets

    DTIC Science & Technology

    1993-12-01

    speech coding techniques being investigated by the military (38). Imagery: Space imagery often requires adaptive restoration to deblur out-of-focus...and blurred image, find an estimate of the ideal image using a priori information about the blur, noise, and the ideal image" (12). The research for...recording can be described as the original signal convolved with impulses, which appear as echoes in the seismic event. The term deconvolution indicates

  2. Multipath/RFI/modulation study for DRSS-RFI problem: Voice coding and intelligibility testing for a satellite-based air traffic control system

    NASA Technical Reports Server (NTRS)

    Birch, J. N.; Getzin, N.

    1971-01-01

    Analog and digital voice coding techniques for application to an L-band satellite-based air traffic control (ATC) system for over-ocean deployment are examined. In addition to performance, the techniques are compared on the basis of cost, size, weight, power consumption, availability, reliability, and multiplexing features. Candidate systems are chosen on the basis of minimum required RF bandwidth and received carrier-to-noise density ratios. A detailed survey of automated and nonautomated intelligibility testing methods and devices is presented and comparisons given. Subjective evaluation of speech systems by preference tests is considered. Conclusions and recommendations are developed regarding the selection of the voice system. Likewise, conclusions and recommendations are developed for the appropriate use of intelligibility tests, speech quality measurements, and preference tests within the framework of the proposed ATC system.

  3. Reading your own lips: common-coding theory and visual speech perception.

    PubMed

    Tye-Murray, Nancy; Spehar, Brent P; Myerson, Joel; Hale, Sandra; Sommers, Mitchell S

    2013-02-01

    Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one's own previous behavior activates motor plans to an even greater degree than does observing someone else's behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.

  4. Do perceived context pictures automatically activate their phonological code?

    PubMed

    Jescheniak, Jörg D; Oppermann, Frank; Hantsch, Ansgar; Wagner, Valentin; Mädebach, Andreas; Schriefers, Herbert

    2009-01-01

    Morsella and Miozzo (Morsella, E., & Miozzo, M. (2002). Evidence for a cascade model of lexical access in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 555-563) have reported that the to-be-ignored context pictures become phonologically activated when participants name a target picture, and took this finding as support for cascaded models of lexical retrieval in speech production. In a replication and extension of their experiment in German, we failed to obtain priming effects from context pictures phonologically related to a to-be-named target picture. By contrast, corresponding context words (i.e., the names of the respective pictures) and the same context pictures, when used in an identity condition, did reliably facilitate the naming process. This pattern calls into question the generality of the claim advanced by Morsella and Miozzo that perceptual processing of pictures in the context of a naming task automatically leads to the activation of corresponding lexical-phonological codes.

  5. Small intragenic deletion in FOXP2 associated with childhood apraxia of speech and dysarthria.

    PubMed

    Turner, Samantha J; Hildebrand, Michael S; Block, Susan; Damiano, John; Fahey, Michael; Reilly, Sheena; Bahlo, Melanie; Scheffer, Ingrid E; Morgan, Angela T

    2013-09-01

    Relatively little is known about the neurobiological basis of speech disorders although genetic determinants are increasingly recognized. The first gene for primary speech disorder was FOXP2, identified in a large, informative family with verbal and oral dyspraxia. Subsequently, many de novo and familial cases with a severe speech disorder associated with FOXP2 mutations have been reported. These mutations include sequencing alterations, translocations, uniparental disomy, and genomic copy number variants. We studied eight probands with speech disorder and their families. Family members were phenotyped using a comprehensive assessment of speech, oral motor function, language, literacy skills, and cognition. Coding regions of FOXP2 were screened to identify novel variants. Segregation of the variant was determined in the probands' families. Variants were identified in two probands. One child with severe motor speech disorder had a small de novo intragenic FOXP2 deletion. His phenotype included features of childhood apraxia of speech and dysarthria, oral motor dyspraxia, receptive and expressive language disorder, and literacy difficulties. The other variant was found in a family in two of three family members with stuttering, and also in the mother with oral motor impairment. This variant was considered a benign polymorphism as it was predicted to be non-pathogenic with in silico tools and found in database controls. This is the first report of a small intragenic deletion of FOXP2 that is likely to be the cause of severe motor speech disorder associated with language and literacy problems. Copyright © 2013 Wiley Periodicals, Inc.

  6. Action planning and predictive coding when speaking

    PubMed Central

    Wang, Jun; Mathalon, Daniel H.; Roach, Brian J.; Reilly, James; Keedy, Sarah; Sweeney, John A.; Ford, Judith M.

    2014-01-01

    Across the animal kingdom, sensations resulting from an animal's own actions are processed differently from sensations resulting from external sources, with self-generated sensations being suppressed. A forward model has been proposed to explain this process across sensorimotor domains. During vocalization, reduced processing of one's own speech is believed to result from a comparison of speech sounds to corollary discharges of intended speech production generated from efference copies of commands to speak. Until now, anatomical and functional evidence validating this model in humans has been indirect. Using EEG with anatomical MRI to facilitate source localization, we demonstrate that inferior frontal gyrus activity during the 300 ms before speaking was associated with suppressed processing of speech sounds in auditory cortex around 100 ms after speech onset (N1). These findings indicate that an efference copy from speech areas in prefrontal cortex is transmitted to auditory cortex, where it is used to suppress processing of anticipated speech sounds. About 100 ms after N1, a subsequent auditory cortical component (P2) was not suppressed during talking. The combined N1 and P2 effects suggest that although sensory processing is suppressed as reflected in N1, perceptual gaps are filled as reflected in the lack of P2 suppression, explaining the discrepancy between sensory suppression and preserved sensory experiences. These findings, coupled with the coherence between relevant brain regions before and during speech, provide new mechanistic understanding of the complex interactions between action planning and sensory processing that provide for differentiated tagging and monitoring of one's own speech, processes disrupted in neuropsychiatric disorders. PMID:24423729

  7. Cracking the Language Code: Neural Mechanisms Underlying Speech Parsing

    PubMed Central

    McNealy, Kristin; Mazziotta, John C.; Dapretto, Mirella

    2013-01-01

    Word segmentation, detecting word boundaries in continuous speech, is a critical aspect of language learning. Previous research in infants and adults demonstrated that a stream of speech can be readily segmented based solely on the statistical and speech cues afforded by the input. Using functional magnetic resonance imaging (fMRI), the neural substrate of word segmentation was examined on-line as participants listened to three streams of concatenated syllables, containing either statistical regularities alone, statistical regularities and speech cues, or no cues. Despite the participants’ inability to explicitly detect differences between the speech streams, neural activity differed significantly across conditions, with left-lateralized signal increases in temporal cortices observed only when participants listened to streams containing statistical regularities, particularly the stream containing speech cues. In a second fMRI study, designed to verify that word segmentation had implicitly taken place, participants listened to trisyllabic combinations that occurred with different frequencies in the streams of speech they just heard (“words,” 45 times; “partwords,” 15 times; “nonwords,” once). Reliably greater activity in left inferior and middle frontal gyri was observed when comparing words with partwords and, to a lesser extent, when comparing partwords with nonwords. Activity in these regions, taken to index the implicit detection of word boundaries, was positively correlated with participants’ rapid auditory processing skills. These findings provide a neural signature of on-line word segmentation in the mature brain and an initial model with which to study developmental changes in the neural architecture involved in processing speech cues during language learning. PMID:16855090
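
    The statistical regularities these participants exploited are standardly modeled as transitional probabilities between adjacent syllables, with word boundaries placed where the probability dips. A minimal sketch of that idea; the syllable stream, the three invented "words" (tupiro, golabu, dakomi), and the threshold are illustrative, not the study's stimuli.

```python
from collections import Counter

def transition_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair in the stream."""
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

def segment(syllables, tps, threshold):
    """Place a word boundary wherever the transition probability dips below threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Concatenated stream with no pauses: within-word transitions are certain
# (TP = 1.0) while between-word transitions are not, marking the boundaries.
stream = ["tu", "pi", "ro", "go", "la", "bu", "da", "ko", "mi",
          "tu", "pi", "ro", "da", "ko", "mi", "go", "la", "bu",
          "tu", "pi", "ro"]
tps = transition_probabilities(stream)
words = segment(stream, tps, threshold=0.75)
```

    Here every within-word pair has TP 1.0 and every between-word pair 0.5, so the dips recover exactly the three embedded words.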

  8. Speech enhancement based on neural networks improves speech intelligibility in noise for cochlear implant users.

    PubMed

    Goehring, Tobias; Bolner, Federico; Monaghan, Jessica J M; van Dijk, Bas; Zarowski, Andrzej; Bleeck, Stefan

    2017-02-01

    Speech understanding in noisy environments is still one of the major challenges for cochlear implant (CI) users in everyday life. We evaluated a speech enhancement algorithm based on neural networks (NNSE) for improving speech intelligibility in noise for CI users. The algorithm decomposes the noisy speech signal into time-frequency units, extracts a set of auditory-inspired features and feeds them to the neural network to produce an estimation of which frequency channels contain more perceptually important information (higher signal-to-noise ratio, SNR). This estimate is used to attenuate noise-dominated and retain speech-dominated CI channels for electrical stimulation, as in traditional n-of-m CI coding strategies. The proposed algorithm was evaluated by measuring the speech-in-noise performance of 14 CI users using three types of background noise. Two NNSE algorithms were compared: a speaker-dependent algorithm, that was trained on the target speaker used for testing, and a speaker-independent algorithm, that was trained on different speakers. Significant improvements in the intelligibility of speech in stationary and fluctuating noises were found relative to the unprocessed condition for the speaker-dependent algorithm in all noise types and for the speaker-independent algorithm in 2 out of 3 noise types. The NNSE algorithms used noise-specific neural networks that generalized to novel segments of the same noise type and worked over a range of SNRs. The proposed algorithm has the potential to improve the intelligibility of speech in noise for CI users while meeting the requirements of low computational complexity and processing delay for application in CI devices. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
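
    The final selection step described above resembles an n-of-m strategy driven by per-channel SNR estimates. A minimal sketch, with the neural network's output replaced by a given list of SNR estimates; the hard 1.0/0.0 gains are a simplification of "attenuate noise-dominated, retain speech-dominated" channels.

```python
def select_channels(snr_estimates, n):
    """n-of-m selection: keep the n channels with the highest estimated SNR,
    zero the rest (a simplification; graded attenuation is also possible)."""
    order = sorted(range(len(snr_estimates)),
                   key=lambda i: snr_estimates[i], reverse=True)
    keep = set(order[:n])
    return [1.0 if i in keep else 0.0 for i in range(len(snr_estimates))]

def apply_gains(envelopes, gains):
    """Gate the per-channel envelopes before electrical stimulation."""
    return [e * g for e, g in zip(envelopes, gains)]

# Stand-in for the neural network's per-channel SNR estimates (dB).
snr = [5.0, -3.0, 10.0, 0.0, -8.0, 2.0]
gains = select_channels(snr, 3)
stimulated = apply_gains([1.0] * 6, gains)
```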

  9. Sixth Annual Conference on Computers, Freedom, and Privacy: The RealAudio Proceedings.

    ERIC Educational Resources Information Center

    Glover, Barbara; Meernik, Mary

    1996-01-01

    Reviews the sixth Conference on Computers, Freedom, and Privacy (CFP) held in March 1996. Highlights include the Communications Decency Act, part of the 1996 Telecommunications Reform Act; European views; Internet service providers; limiting online speech on campus; cryptography; the global information infrastructure; copyright; and China and the…

  10. 77 FR 18258 - Notice of FHA Debenture Call

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-27

    ... in accordance with authority provided in the National Housing Act. FOR FURTHER INFORMATION CONTACT... toll-free number. Persons with hearing or speech impairments may access this number through TTY by... section 207(j) of the National Housing Act, 12 U.S.C. 1713(j), and in accordance with HUD's regulation at...

  11. 18 CFR 401.103 - Request for existing records.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... routine distribution to the public shall be deemed to be a request for records pursuant to the Freedom of Information Act, whether or not the Freedom of Information Act is mentioned in the request, and shall be... public distribution, e.g., pamphlets, speeches, public information and educational materials, shall be...

  12. Eye’m talking to you: speakers’ gaze direction modulates co-speech gesture processing in the right MTG

    PubMed Central

    Toni, Ivan; Hagoort, Peter; Kelly, Spencer D.; Özyürek, Aslı

    2015-01-01

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts. PMID:24652857

  13. Analysis of glottal source parameters in Parkinsonian speech.

    PubMed

    Hanratty, Jane; Deegan, Catherine; Walsh, Mary; Kirkpatrick, Barry

    2016-08-01

    Diagnosis and monitoring of Parkinson's disease has a number of challenges as there is no definitive biomarker despite the broad range of symptoms. Research is ongoing to produce objective measures that can either diagnose Parkinson's or act as an objective decision support tool. Recent research on speech based measures have demonstrated promising results. This study aims to investigate the characteristics of the glottal source signal in Parkinsonian speech. An experiment is conducted in which a selection of glottal parameters are tested for their ability to discriminate between healthy and Parkinsonian speech. Results for each glottal parameter are presented for a database of 50 healthy speakers and a database of 16 speakers with Parkinsonian speech symptoms. Receiver operating characteristic (ROC) curves were employed to analyse the results and the area under the ROC curve (AUC) values were used to quantify the performance of each glottal parameter. The results indicate that glottal parameters can be used to discriminate between healthy and Parkinsonian speech, although results varied for each parameter tested. For the task of separating healthy and Parkinsonian speech, 2 out of the 7 glottal parameters tested produced AUC values of over 0.9.
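
    The AUC values reported for each glottal parameter can be computed directly from the two groups' values via the Mann-Whitney formulation, without explicitly tracing the ROC curve. A small sketch; the parameter values below are invented for illustration.

```python
def auc(healthy, parkinsonian):
    """Area under the ROC curve via the Mann-Whitney statistic: the probability
    that a randomly chosen Parkinsonian value exceeds a healthy one (ties count
    one half)."""
    wins = sum((p > h) + 0.5 * (p == h) for p in parkinsonian for h in healthy)
    return wins / (len(parkinsonian) * len(healthy))

# Invented glottal-parameter values: here the two groups separate cleanly,
# giving an AUC of 1.0 (values over 0.9 indicate strong discrimination).
healthy = [0.20, 0.30, 0.25, 0.40]
parkinsonian = [0.50, 0.60, 0.45, 0.70]
score = auc(healthy, parkinsonian)
```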

  14. Glove-talk II - a neural-network interface which maps gestures to parallel formant speech synthesizer controls.

    PubMed

    Fels, S S; Hinton, G E

    1997-01-01

    Glove-Talk II is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-Talk II uses several input devices, a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. With Glove-Talk II, the subject can speak slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.
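
    The gating arrangement described, one network weighting the outputs of the vowel and consonant experts, amounts to a soft mixture of the two networks' synthesizer controls. An illustrative single-gate sketch, not the trained three-network system; in Glove-Talk II the gate value itself comes from a trained network reading the hand data.

```python
import math

def sigmoid(x):
    """Squash a gate logit into a mixing weight in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def mix_controls(gate_logit, vowel_params, consonant_params):
    """Blend the two expert networks' formant-synthesizer controls:
    a gate near 1 favors the vowel network, near 0 the consonant network."""
    g = sigmoid(gate_logit)
    return [g * v + (1.0 - g) * c for v, c in zip(vowel_params, consonant_params)]

# An undecided gate (logit 0) splits the difference between the two experts.
blended = mix_controls(0.0, [1.0, 0.0], [0.0, 1.0])
```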

  15. Status Report on Speech Research: A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for its Investigation, and Practical Application.

    DTIC Science & Technology

    1986-03-01

    attributed to insufficient power in the experimental design: Two of the studies that failed to find evidence of sign-based coding when printed words...perception of [p]; so may a lesser amount of silence, insufficient to cue a [p] percept in itself, followed by transitions characteristic of [p] release...posterior pharyngeal wall has become visible through the nasal passage; the Velotrace is inserted using a procedure similar to that used for nasal

  16. Neural Oscillations Carry Speech Rhythm through to Comprehension

    PubMed Central

    Peelle, Jonathan E.; Davis, Matthew H.

    2012-01-01

    A key feature of speech is the quasi-regular rhythmic information contained in its slow amplitude modulations. In this article we review the information conveyed by speech rhythm, and the role of ongoing brain oscillations in listeners’ processing of this content. Our starting point is the fact that speech is inherently temporal, and that rhythmic information conveyed by the amplitude envelope contains important markers for place and manner of articulation, segmental information, and speech rate. Behavioral studies demonstrate that amplitude envelope information is relied upon by listeners and plays a key role in speech intelligibility. Extending behavioral findings, data from neuroimaging – particularly electroencephalography (EEG) and magnetoencephalography (MEG) – point to phase locking by ongoing cortical oscillations to low-frequency information (~4–8 Hz) in the speech envelope. This phase modulation effectively encodes a prediction of when important events (such as stressed syllables) are likely to occur, and acts to increase sensitivity to these relevant acoustic cues. We suggest a framework through which such neural entrainment to speech rhythm can explain effects of speech rate on word and segment perception (i.e., that the perception of phonemes and words in connected speech is influenced by preceding speech rate). Neuroanatomically, acoustic amplitude modulations are processed largely bilaterally in auditory cortex, with intelligible speech resulting in differential recruitment of left-hemisphere regions. Notable among these is lateral anterior temporal cortex, which we propose functions in a domain-general fashion to support ongoing memory and integration of meaningful input. Together, the reviewed evidence suggests that low-frequency oscillations in the acoustic speech signal form the foundation of a rhythmic hierarchy supporting spoken language, mirrored by phase-locked oscillations in the human brain. PMID:22973251
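
    The slow amplitude envelope discussed above can be extracted crudely by rectifying the waveform and smoothing over a window long enough to remove the carrier but short enough to preserve the ~4-8 Hz modulations. A rough sketch on a synthetic signal; a real analysis would use a Hilbert transform and proper band-pass filtering.

```python
import math

def amplitude_envelope(signal, sample_rate, window_ms=50):
    """Crude envelope: full-wave rectify, then moving-average over a window
    that smooths out the carrier but keeps slow (roughly 4-8 Hz) modulations."""
    n = max(1, int(sample_rate * window_ms / 1000))
    rect = [abs(s) for s in signal]
    running = sum(rect[:n])
    env = [running / n]
    for i in range(n, len(rect)):
        running += rect[i] - rect[i - n]
        env.append(running / n)
    return env

# A 4 Hz amplitude modulation on a 200 Hz carrier: the extracted envelope
# should follow the slow 4 Hz rhythm, not the carrier.
sr = 1000
sig = [0.5 * (1 + math.sin(2 * math.pi * 4 * i / sr))
       * math.sin(2 * math.pi * 200 * i / sr) for i in range(sr)]
env = amplitude_envelope(sig, sr)
```

    The resulting envelope swings between near zero at the modulation troughs and a clear peak at the crests, which is the kind of slow rhythm cortical oscillations are reported to phase-lock to.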

  17. Categorical speech processing in Broca's area: an fMRI study using multivariate pattern-based analysis.

    PubMed

    Lee, Yune-Sang; Turkeltaub, Peter; Granger, Richard; Raizada, Rajeev D S

    2012-03-14

    Although much effort has been directed toward understanding the neural basis of speech processing, the neural processes involved in the categorical perception of speech have been relatively less studied, and many questions remain open. In this functional magnetic resonance imaging (fMRI) study, we probed the cortical regions mediating categorical speech perception using an advanced brain-mapping technique, whole-brain multivariate pattern-based analysis (MVPA). Normal healthy human subjects (native English speakers) were scanned while they listened to 10 consonant-vowel syllables along the /ba/-/da/ continuum. Outside of the scanner, individuals' own category boundaries were measured to divide the fMRI data into /ba/ and /da/ conditions per subject. The whole-brain MVPA revealed that Broca's area and the left pre-supplementary motor area evoked distinct neural activity patterns between the two perceptual categories (/ba/ vs /da/). Broca's area was also found when the same analysis was applied to another dataset (Raizada and Poldrack, 2007), which previously yielded the supramarginal gyrus using a univariate adaptation-fMRI paradigm. The consistent MVPA findings from two independent datasets strongly indicate that Broca's area participates in categorical speech perception, with a possible role of translating speech signals into articulatory codes. The difference in results between univariate and multivariate pattern-based analyses of the same data suggest that processes in different cortical areas along the dorsal speech perception stream are distributed on different spatial scales.
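
    MVPA asks whether a classifier can tell the two perceptual categories apart from distributed voxel activity patterns. As an illustration of the idea only, here is a nearest-centroid decoder on toy two-voxel patterns; the study's actual classifier and feature space are not specified here.

```python
def centroid(patterns):
    """Mean activity per voxel across one condition's training patterns."""
    return [sum(col) / len(patterns) for col in zip(*patterns)]

def classify(pattern, centroids):
    """Assign a test pattern to the condition with the nearest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

# Toy two-voxel training patterns for the two perceptual categories.
centroids = {
    "ba": centroid([[1.0, 0.0], [0.9, 0.1]]),
    "da": centroid([[0.0, 1.0], [0.1, 0.9]]),
}
```

    Above-chance decoding of held-out patterns in a region (here, Broca's area) is what licenses the claim that the region carries category information.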

  18. The Indian Child Welfare Act: Unto the Seventh Generation. Conference Proceedings (Los Angeles, California, January 15-17, 1992). National Conference Proceedings Series.

    ERIC Educational Resources Information Center

    Johnson, Troy R., Ed.

    This proceedings contains edited transcripts of speeches and workshops given at a conference on the Indian Child Welfare Act (ICWA), held at UCLA in January 1992. Workshop titles were: fetal alcohol syndrome; responding to the family in Indian child welfare; joint in-service training for management of Indian Child Welfare Act cases; domestic…

  19. Bilinguals at the "cocktail party": dissociable neural activity in auditory-linguistic brain regions reveals neurobiological basis for nonnative listeners' speech-in-noise recognition deficits.

    PubMed

    Bidelman, Gavin M; Dexter, Lauren

    2015-04-01

    We examined a consistent deficit observed in bilinguals: poorer speech-in-noise (SIN) comprehension for their nonnative language. We recorded neuroelectric mismatch potentials in mono- and bi-lingual listeners in response to contrastive speech sounds in noise. Behaviorally, late bilinguals required ∼10 dB more favorable signal-to-noise ratios to match monolinguals' SIN abilities. Source analysis of cortical activity demonstrated monotonic increase in response latency with noise in superior temporal gyrus (STG) for both groups, suggesting parallel degradation of speech representations in auditory cortex. Contrastively, we found differential speech encoding between groups within inferior frontal gyrus (IFG), adjacent to Broca's area, where noise delays observed in nonnative listeners were offset in monolinguals. Notably, brain-behavior correspondences double dissociated between language groups: STG activation predicted bilinguals' SIN, whereas IFG activation predicted monolinguals' performance. We infer higher-order brain areas act compensatorily to enhance impoverished sensory representations but only when degraded speech recruits linguistic brain mechanisms downstream from initial auditory-sensory inputs. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Role of N-Methyl-D-Aspartate Receptors in Action-Based Predictive Coding Deficits in Schizophrenia.

    PubMed

    Kort, Naomi S; Ford, Judith M; Roach, Brian J; Gunduz-Bruce, Handan; Krystal, John H; Jaeger, Judith; Reinhart, Robert M G; Mathalon, Daniel H

    2017-03-15

    Recent theoretical models of schizophrenia posit that dysfunction of the neural mechanisms subserving predictive coding contributes to symptoms and cognitive deficits, and this dysfunction is further posited to result from N-methyl-D-aspartate glutamate receptor (NMDAR) hypofunction. Previously, by examining auditory cortical responses to self-generated speech sounds, we demonstrated that predictive coding during vocalization is disrupted in schizophrenia. To test the hypothesized contribution of NMDAR hypofunction to this disruption, we examined the effects of the NMDAR antagonist, ketamine, on predictive coding during vocalization in healthy volunteers and compared them with the effects of schizophrenia. In two separate studies, the N1 component of the event-related potential elicited by speech sounds during vocalization (talk) and passive playback (listen) were compared to assess the degree of N1 suppression during vocalization, a putative measure of auditory predictive coding. In the crossover study, 31 healthy volunteers completed two randomly ordered test days, a saline day and a ketamine day. Event-related potentials during the talk/listen task were obtained before infusion and during infusion on both days, and N1 amplitudes were compared across days. In the case-control study, N1 amplitudes from 34 schizophrenia patients and 33 healthy control volunteers were compared. N1 suppression to self-produced vocalizations was significantly and similarly diminished by ketamine (Cohen's d = 1.14) and schizophrenia (Cohen's d = .85). Disruption of NMDARs causes dysfunction in predictive coding during vocalization in a manner similar to the dysfunction observed in schizophrenia patients, consistent with the theorized contribution of NMDAR hypofunction to predictive coding deficits in schizophrenia. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
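
    The reported effect sizes are Cohen's d values for the reduction in N1 suppression. A sketch of both computations, using the pooled-standard-deviation convention; reading the suppression index as the listen-minus-talk N1 difference is one plausible simplification of the paper's measure, not its exact pipeline.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation (one common convention)."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

def n1_suppression(talk_n1, listen_n1):
    """Per-subject suppression index: listen minus talk N1 amplitude.
    N1 is negative-going, so suppression shows as a less negative talk
    response (an assumed, simplified reading of the measure)."""
    return [listen - talk for talk, listen in zip(talk_n1, listen_n1)]
```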

  1. Remarks by James P. Turner, Acting Assistant Attorney General, Civil Rights Division, before the First Annual Conference, National Fair Housing Alliance, Washington, D.C.

    ERIC Educational Resources Information Center

    Turner, James P.

    This speech by an official of the U.S. Department of Justice reports on the steps that the Department is taking through its Civil Rights Division to enforce the new Fair Housing Act Amendments, and discusses how the Act fosters a cooperative interagency approach to enforcement. Between passage of the Act and its effective date of March 12, 1989,…

  2. Solar 79 Northwest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, S

    The highlights of the many public programs are described and summaries of plenary session speeches are included. Names, addresses, and solar interest codes of conference registrants are included. Eleven technical papers or summaries are included. A separate citation was prepared for each one. (MHR)

  3. Mapping the Speech Code: Cortical Responses Linking the Perception and Production of Vowels

    PubMed Central

    Schuerman, William L.; Meyer, Antje S.; McQueen, James M.

    2017-01-01

    The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation. PMID:28439232

  4. Some Behavioral and Neurobiological Constraints on Theories of Audiovisual Speech Integration: A Review and Suggestions for New Directions

    PubMed Central

    Altieri, Nicholas; Pisoni, David B.; Townsend, James T.

    2012-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield’s feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration. PMID:21968081

  5. Some behavioral and neurobiological constraints on theories of audiovisual speech integration: a review and suggestions for new directions.

    PubMed

    Altieri, Nicholas; Pisoni, David B; Townsend, James T

    2011-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield's feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration.

  6. Technical devices for hearing-impaired individuals: cochlear implants and brain stem implants - developments of the last decade

    PubMed Central

    Müller, Joachim

    2005-01-01

    Over the past two decades, the fascinating possibilities of cochlear implants for congenitally deaf or deafened children and adults developed tremendously and created a rapidly developing interdisciplinary research field. The main advancements of cochlear implantation in the past decade are marked by significant improvement of hearing and speech understanding in CI users. These improvements are attributed to the enhancement of speech coding strategies. The implantation of more (and increasingly younger) children, as well as the restoration of binaural hearing abilities with cochlear implants, reflects the high standards reached by this development. Despite this progress, modern cochlear implants do not yet enable normal speech understanding, not even for the best patients. In particular, speech understanding in noise remains problematic [1]. Until the mid-1990s, research concentrated on unilateral implantation. Remarkable and effective improvements have been made with bilateral implantation since 1996. Nowadays an increasing number of patients enjoy these benefits. PMID:22073052

  7. Technical devices for hearing-impaired individuals: cochlear implants and brain stem implants - developments of the last decade.

    PubMed

    Müller, Joachim

    2005-01-01

    Over the past two decades, the fascinating possibilities of cochlear implants for congenitally deaf or deafened children and adults developed tremendously and created a rapidly developing interdisciplinary research field. The main advancements of cochlear implantation in the past decade are marked by significant improvement of hearing and speech understanding in CI users. These improvements are attributed to the enhancement of speech coding strategies. The implantation of more (and increasingly younger) children, as well as the restoration of binaural hearing abilities with cochlear implants, reflects the high standards reached by this development. Despite this progress, modern cochlear implants do not yet enable normal speech understanding, not even for the best patients. In particular, speech understanding in noise remains problematic [1]. Until the mid-1990s, research concentrated on unilateral implantation. Remarkable and effective improvements have been made with bilateral implantation since 1996. Nowadays an increasing number of patients enjoy these benefits.

  8. 36 CFR 1192.87 - Public information system.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... COMPLIANCE BOARD AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY GUIDELINES FOR TRANSPORTATION VEHICLES... digitized human speech messages, to announce stations and provide other passenger information. Alternative...

  9. 36 CFR 1192.121 - Public information system.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... COMPLIANCE BOARD AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY GUIDELINES FOR TRANSPORTATION VEHICLES... public address system permitting transportation system personnel, or recorded or digitized human speech...

  10. 36 CFR 1192.103 - Public information system.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... COMPLIANCE BOARD AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY GUIDELINES FOR TRANSPORTATION VEHICLES... speech messages, to announce stations and provide other passenger information. Alternative systems or...

  11. Checklist interruption and resumption: A linguistic study

    NASA Technical Reports Server (NTRS)

    Linde, Charlotte; Goguen, Joseph

    1987-01-01

    This study forms part of a project investigating the relationships among the formal structure of aviation procedures, the ways in which crew members are taught to execute them, and the ways in which they are actually performed in flight. Specifically, this report examines the interactions between the performance of checklists and interruptions, considering both interruptions by radio communications and by other crew members. The data consist of 14 crews' performance of a full mission simulation. Analysis shows a higher ratio of checklist speech acts to all speech acts within the span of the performance of the checklist. Further, it is not the number of interruptions but the length of interruptions that is associated with crew performance quality. Use of explicit holds is also associated with crew performance.

  12. Attentional Control Buffers the Effect of Public Speaking Anxiety on Performance.

    PubMed

    Jones, Christopher R; Fazio, Russell H; Vasey, Michael W

    2012-09-01

    We explored dispositional differences in the ability to self-regulate attentional processes in the domain of public speaking. Participants first completed measures of speech anxiety and attentional control. In a second session, participants prepared and performed a short speech. Fear of public speaking negatively impacted performance only for those low in attentional control. Thus, attentional control appears to act as a buffer that facilitates successful self-regulation despite performance anxiety.

  13. Attentional Control Buffers the Effect of Public Speaking Anxiety on Performance

    PubMed Central

    Jones, Christopher R.; Fazio, Russell H.; Vasey, Michael W.

    2011-01-01

    We explored dispositional differences in the ability to self-regulate attentional processes in the domain of public speaking. Participants first completed measures of speech anxiety and attentional control. In a second session, participants prepared and performed a short speech. Fear of public speaking negatively impacted performance only for those low in attentional control. Thus, attentional control appears to act as a buffer that facilitates successful self-regulation despite performance anxiety. PMID:22924093

  14. The analysis of verbal interaction sequences in dyadic clinical communication: a review of methods.

    PubMed

    Connor, Martin; Fletcher, Ian; Salmon, Peter

    2009-05-01

    To identify methods available for sequential analysis of dyadic verbal clinical communication and to review their methodological and conceptual differences. Critical review, based on literature describing sequential analyses of clinical and other relevant social interaction. Dominant approaches are based on analysis of communication according to its precise position in the series of utterances that constitute event-coded dialogue. For practical reasons, methods focus on very short-term processes, typically the influence of one party's speech on what the other says next. Studies of longer-term influences are rare. Some analyses have statistical limitations, particularly in disregarding heterogeneity between consultations, patients or practitioners. Additional techniques, including ones that can use information about timing and duration of speech from interval-coding are becoming available. There is a danger that constraints of commonly used methods shape research questions and divert researchers from potentially important communication processes including ones that operate over a longer-term than one or two speech turns. Given that no one method can model the complexity of clinical communication, multiple methods, both quantitative and qualitative, are necessary. Broadening the range of methods will allow the current emphasis on exploratory studies to be balanced by tests of hypotheses about clinically important communication processes.
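
    The short-term, position-based analyses described above typically reduce to estimating how likely each speech-act category is to follow another in an event-coded dialogue. A minimal sketch of such a first-order transition analysis (the category labels and coded sequence are invented for illustration):

```python
from collections import Counter, defaultdict

def transition_probabilities(codes):
    """Estimate first-order transition probabilities between utterance codes."""
    pair_counts = Counter(zip(codes, codes[1:]))  # count adjacent code pairs
    totals = defaultdict(int)
    for (prev, _), n in pair_counts.items():
        totals[prev] += n
    return {(p, q): n / totals[p] for (p, q), n in pair_counts.items()}

# Hypothetical event-coded consultation:
# G = information-giving, R = information-requesting
dialogue = ["R", "G", "G", "R", "G", "R", "G", "G"]
probs = transition_probabilities(dialogue)
print(probs[("R", "G")])  # how often a request is immediately followed by giving
```

    Note that pooling all consultations into one such matrix has exactly the statistical limitation the review raises: it disregards heterogeneity between consultations, patients, or practitioners.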

  15. Incorporating Speech Recognition into a Natural User Interface

    NASA Technical Reports Server (NTRS)

    Chapa, Nicholas

    2017-01-01

    The Augmented/Virtual Reality (AVR) Lab has been working to study the applicability of recent virtual and augmented reality hardware and software to KSC operations. This includes the Oculus Rift, HTC Vive, Microsoft HoloLens, and the Unity game engine. My project in this lab is to integrate voice recognition and voice commands into an easy-to-modify system that can be added to an existing portion of a Natural User Interface (NUI). A NUI is an intuitive and simple-to-use interface incorporating visual, touch, and speech recognition. The inclusion of speech recognition capability will allow users to perform actions or make inquiries using only their voice. Because users need only speak to control an on-screen object or enact a digital action, anyone can quickly become accustomed to the system. Multiple programs were tested for use in a speech command and recognition system. Sphinx4 translates speech to text using a Hidden Markov Model (HMM)-based language model, an acoustic model, and a word dictionary, and runs on Java. PocketSphinx offers similar functionality to Sphinx4 but runs on C. However, neither of these programs was ideal, as building a Java or C wrapper slowed performance. The most suitable speech recognition system tested was the Unity engine's Grammar Recognizer. A Context-Free Grammar (CFG) is written in an XML file to specify the structure of the phrases and words that the Unity Grammar Recognizer will accept. Using the Speech Recognition Grammar Specification (SRGS) 1.0 makes modifying the recognized combinations of words and phrases simple and quick. With SRGS 1.0, semantic information can also be added to the XML file, allowing even more control over how spoken words and phrases are interpreted by Unity. Additionally, using a CFG with SRGS 1.0 yields Finite State Machine (FSM) behavior, limiting the potential for incorrectly heard words or phrases.
    The purpose of my project was to investigate options for a speech recognition system. To that end, I attempted to integrate Sphinx4 into a user interface. Sphinx4 had great accuracy and is the only free program able to perform offline speech dictation. However, it had a limited dictionary of recognizable words, single-syllable words were almost impossible for it to hear, and because it ran on Java it could not be integrated into the Unity-based NUI. PocketSphinx ran much faster than Sphinx4, which would have made it ideal as a plugin to the Unity NUI; unfortunately, creating a C# wrapper for the C code made the program unusable with Unity, as the wrapper slowed code execution and class files became unreachable. The Unity Grammar Recognizer proved the ideal speech recognition interface: it is flexible in recognizing multiple variations of the same command, and it is the most accurate of the programs tested because it uses an XML grammar to specify speech structure rather than relying solely on a dictionary and language model. The Unity Grammar Recognizer will therefore be used with the NUI, and being written in C# further simplifies its incorporation.
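
    The grammar-constrained approach described above limits mis-hearings because the recognizer only ever emits phrases the grammar can generate. A rough sketch of the same idea, matching a tokenized utterance against a hand-written command grammar (the grammar, slot structure, and command names are invented; this stands in for, rather than reproduces, Unity's SRGS XML recognizer):

```python
# Each command is a sequence of "slots"; each slot lists its accepted words,
# analogous to alternative tokens in an SRGS rule.
GRAMMAR = {
    "open_menu":  [("open", "show"), ("the",), ("menu", "panel")],
    "close_menu": [("close", "hide"), ("the",), ("menu", "panel")],
}

def recognize(utterance):
    """Return the first command whose slot sequence matches the utterance."""
    words = utterance.lower().split()
    for command, slots in GRAMMAR.items():
        if len(words) == len(slots) and all(
            word in accepted for word, accepted in zip(words, slots)
        ):
            return command
    return None  # out-of-grammar input is rejected rather than misrecognized

print(recognize("show the panel"))   # matches the open_menu command
print(recognize("banana the menu"))  # rejected: not in the grammar
```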

  19. 29 CFR 452.7 - Bill of Rights, title I.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... DISCLOSURE ACT OF 1959 Other Provisions of the Act Affecting Title IV § 452.7 Bill of Rights, title I. The...) “Equal Rights,” section 101(a)(2) “Freedom of Speech and Assembly,” and section 101(a)(5) “Safeguards.... 522, 29 U.S.C. 411. 8 But the Secretary may bring suit to enforce section 104 (29 U.S.C. 414). 9 Act...

  20. Collaborative Dialogue in Learning Pragmatics: Pragmatic-Related Episodes as an Opportunity for Learning Request-Making

    ERIC Educational Resources Information Center

    Taguchi, Naoko; Kim, Youjin

    2016-01-01

    This study examined the effects of collaborative dialogue in learning the speech act of request. Seventy-four second-grade students at a girls' junior high school were divided into three groups. The "collaborative group" (n = 25) received explicit metapragmatic information on request (request head act and modifications) followed by a dialogue…

  1. 78 FR 19510 - Privacy Act of 1974; Republication To Delete and Update Privacy Act System of Records...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-01

    ... reported under that notice. FOR FURTHER INFORMATION CONTACT: The Chief Privacy Officer, 451 Seventh Street...). (This is not a toll-free number.) A telecommunication device for hearing- and speech-impaired... of General Routine Uses inadvertently reported repeated information that HUD proposes to exclude from...

  2. 25 CFR 700.525 - Use of government information or expertise.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... provisions of the Freedom of information and the Privacy Acts (5 U.S.C. 552). An employee may not release... terms of the Privacy Act in addition to any disciplinary penalties levied by the employee's supervisor. (e) Commission personnel may not accept compensation for an article, speech, consultant service, or...

  3. 21 CFR 20.23 - Request for existing records.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... request for records pursuant to the Freedom of Information Act, whether or not the Freedom of Information Act is mentioned in the request, and shall be governed by the provisions of this part. (b) Records or..., speeches, and educational materials, shall be furnished free of charge upon request as long as the supply...

  4. Paper-Based Textbooks with Audio Support for Print-Disabled Students.

    PubMed

    Fujiyoshi, Akio; Ohsawa, Akiko; Takaira, Takuya; Tani, Yoshiaki; Fujiyoshi, Mamoru; Ota, Yuko

    2015-01-01

    Utilizing invisible 2-dimensional codes and digital audio players with a 2-dimensional code scanner, we developed paper-based textbooks with audio support for students with print disabilities, called "multimodal textbooks." Multimodal textbooks can be read with the combination of the two modes: "reading printed text" and "listening to the speech of the text from a digital audio player with a 2-dimensional code scanner." Since multimodal textbooks look the same as regular textbooks and the price of a digital audio player is reasonable (about 30 euro), we think multimodal textbooks are suitable for students with print disabilities in ordinary classrooms.

  5. The association between Mycoplasma pneumoniae infection and speech and language impairment: A nationwide population-based study in Taiwan.

    PubMed

    Tsai, Ching-Shu; Chen, Vincent Chin-Hung; Yang, Yao-Hsu; Hung, Tai-Hsin; Lu, Mong-Liang; Huang, Kuo-You; Gossop, Michael

    2017-01-01

    Manifestations of Mycoplasma pneumoniae infection can range from self-limiting upper respiratory symptoms to various neurological complications, including speech and language impairment. However, an association between Mycoplasma pneumoniae infection and speech and language impairment has not been sufficiently explored. In this study, we aim to investigate the association between Mycoplasma pneumoniae infection and subsequent speech and language impairment in a nationwide population-based sample using Taiwan's National Health Insurance Research Database. We identified 5,406 children with Mycoplasma pneumoniae infection (International Classification of Disease, Revision 9, Clinical Modification code 4830) and compared them with 21,624 age-, sex-, urban- and income-matched controls on subsequent speech and language impairment. The mean follow-up interval for all subjects was 6.44 years (standard deviation = 2.42 years); the mean latency period between the initial Mycoplasma pneumoniae infection and presence of speech and language impairment was 1.96 years (standard deviation = 1.64 years). The results showed that Mycoplasma pneumoniae infection was significantly associated with greater incidence of speech and language impairment [hazard ratio (HR) = 1.49, 95% CI: 1.23-1.80]. In addition, a significantly increased hazard ratio of subsequent speech and language impairment was found in the groups younger than 6 years old, with no significant difference in the groups over the age of 6 years (HR = 1.43, 95% CI: 1.09-1.88 for age 0-3 years group; HR = 1.67, 95% CI: 1.25-2.23 for age 4-5 years group; HR = 1.14, 95% CI: 0.54-2.39 for age 6-7 years group; and HR = 0.83, 95% CI: 0.23-2.92 for age 8-18 years group). In conclusion, Mycoplasma pneumoniae infection is temporally associated with incident speech and language impairment.
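
    Hazard ratios like those above are normally estimated with Cox regression on the matched cohort data. As a back-of-the-envelope sketch only: under a constant-hazard (exponential) assumption, the incidence rate ratio per person-year approximates the hazard ratio, and a log-normal confidence interval can be attached to it. The event and person-year counts below are invented, not taken from the study:

```python
import math

def rate_ratio_ci(events_a, py_a, events_b, py_b, z=1.96):
    """Incidence rate ratio with an approximate 95% log-normal confidence interval."""
    rr = (events_a / py_a) / (events_b / py_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)  # SE of log(rate ratio)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts: exposed cohort vs. matched controls
rr, lo, hi = rate_ratio_ci(events_a=150, py_a=34000, events_b=400, py_b=136000)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```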

  6. Forerunner of the Science of Psychoanalysis? An Essay on the Spanish and Portuguese Inquisition.

    PubMed

    Simms, Norman

    2015-01-01

    The inquisitions in Spain and Portugal were state organs rather than church-run enterprises; their purpose was to modernize disparate jurisdictions during the final stages of the Reconquista (the return of Moorish areas to Christian administration) to ensure security and loyalty. So many Jews converted (under duress, or willingly for strategic reasons) and intermarried with middle-class and aristocratic families that their sincerity and loyalty were suspect. This meant going beyond traditional monitoring of ritual acts and social behaviour; there was a need to look below the surface, to interpret ambiguity, and to break codes of duplicity. Inquisitors developed techniques of a form of psychoanalysis before the discoveries of Freud: methods of questioning to bring out repressed beliefs and motivations, unriddling equivocal performance and speech-acts, and integrating fragments of information from family members, business associates and neighbours collected over many years. Torture, more threatened than actual, and lengthy incarceration punctuated by periods of exile and re-arrest after years of quiet, provoked desperate confessions and specious denunciations, all of which had to be subjected to intense scrutiny and analysis. The assumption was modern: a person's self was no longer equivalent to their words and actions; instead, there was a deep, dark, and traumatized inner self to be revealed.

  7. Functional Characterization of the Human Speech Articulation Network.

    PubMed

    Basilakos, Alexandra; Smith, Kimberly G; Fillmore, Paul; Fridriksson, Julius; Fedorenko, Evelina

    2018-05-01

    A number of brain regions have been implicated in articulation, but their precise computations remain debated. Using functional magnetic resonance imaging, we examine the degree of functional specificity of articulation-responsive brain regions to constrain hypotheses about their contributions to speech production. We find that articulation-responsive regions (1) are sensitive to articulatory complexity, but (2) are largely nonoverlapping with nearby domain-general regions that support diverse goal-directed behaviors. Furthermore, premotor articulation regions show selectivity for speech production over some related tasks (respiration control), but not others (nonspeech oral-motor [NSO] movements). This overlap between speech and nonspeech movements concords with electrocorticographic evidence that these regions encode articulators and their states, and with patient evidence whereby articulatory deficits are often accompanied by oral-motor deficits. In contrast, the superior temporal regions show strong selectivity for articulation relative to nonspeech movements, suggesting that these regions play a specific role in speech planning/production. Finally, articulation-responsive portions of posterior inferior frontal gyrus show some selectivity for articulation, in line with the hypothesis that this region prepares an articulatory code that is passed to the premotor cortex. Taken together, these results inform the architecture of the human articulation system.

  8. Speech Versus Speaking: The Experiences of People With Parkinson's Disease and Implications for Intervention.

    PubMed

    Yorkston, Kathryn; Baylor, Carolyn; Britton, Deanna

    2017-06-22

    In this project, we explore the experiences of people who report speech changes associated with Parkinson's disease as they describe taking part in everyday communication situations and report impressions related to speech treatment. Twenty-four community-dwelling adults with Parkinson's disease took part in face-to-face, semistructured interviews. Qualitative research methods were used to code and develop themes related to the interviews. Two major themes emerged. The first, called "speaking," included several subthemes: thinking about speaking, weighing value versus effort, feelings associated with speaking, the environmental context of speaking, and the impact of Parkinson's disease on speaking. The second theme involved "treatment experiences" and included subthemes: choosing not to have treatment, the clinician, drills and exercise, and suggestions for change. From the perspective of participants with Parkinson's disease, speaking is an activity requiring both physical and cognitive effort that takes place in a social context. Although many report positive experiences with speech treatment, some reported dissatisfaction with speech drills and exercises and a lack of focus on the social aspects of communication. Suggestions for improvement include increased focus on the cognitive demands of speaking and on the psychosocial aspects of communication.

  9. Speech Versus Speaking: The Experiences of People With Parkinson's Disease and Implications for Intervention

    PubMed Central

    Baylor, Carolyn; Britton, Deanna

    2017-01-01

    Purpose In this project, we explore the experiences of people who report speech changes associated with Parkinson's disease as they describe taking part in everyday communication situations and report impressions related to speech treatment. Method Twenty-four community-dwelling adults with Parkinson's disease took part in face-to-face, semistructured interviews. Qualitative research methods were used to code and develop themes related to the interviews. Results Two major themes emerged. The first, called “speaking,” included several subthemes: thinking about speaking, weighing value versus effort, feelings associated with speaking, the environmental context of speaking, and the impact of Parkinson's disease on speaking. The second theme involved “treatment experiences” and included subthemes: choosing not to have treatment, the clinician, drills and exercise, and suggestions for change. Conclusions From the perspective of participants with Parkinson's disease, speaking is an activity requiring both physical and cognitive effort that takes place in a social context. Although many report positive experiences with speech treatment, some reported dissatisfaction with speech drills and exercises and a lack of focus on the social aspects of communication. Suggestions for improvement include increased focus on the cognitive demands of speaking and on the psychosocial aspects of communication. PMID:28654939

  10. The Library Systems Act and Rules for Administering the Library Systems Act.

    ERIC Educational Resources Information Center

    Texas State Library, Austin. Library Development Div.

    This document contains the Texas Library Systems Act and rules for administering the Library Systems Act. Specifically, it includes the following documents: Texas Library Systems Act; Summary of Codes; Texas Administrative Code: Service Complaints and Protest Procedure; Criteria For Texas Library System Membership; and Certification Requirements…

  11. Homosexual Cohabitees Act, 18 June 1987.

    PubMed

    1989-01-01

    The purpose of this Act is to place homosexual cohabitees in the same legal position as heterosexual cohabitees. It provides that if 2 persons are living together in a homosexual relationship, the following legal provisions relating to cohabitation shall apply to them: 1) the Cohabitees (Joint Homes) Act (1987:232), 2) the Inheritance Code, 3) the Real Property Code, 4) Chapter 10, section 9, of the Code of Judicial Procedure, 5) Chapter 4, section 19, 1st paragraph, of the Code of Execution, 6) section 19, 1st paragraph, section 35, subsection 4, and point 2a, 7th paragraph, of the regulations relating to Section 36 of the Municipal Tax Act (1928:370), 7) the Inheritance and Gift Taxes Act (1941:416), 8) Section 6 of the Court Procedures (Miscellaneous Business) Act (1946:807), 9) the Tenant Owner Act (1971:479), 10) section 10 of the Legal Aid Act (1972:429), and 11) the Notice to Unknown Creditors Act (1981:131).

  12. 36 CFR 1192.35 - Public information system.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... COMPLIANCE BOARD AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY GUIDELINES FOR TRANSPORTATION VEHICLES... the driver, or recorded or digitized human speech messages, to announce stops and provide other...

  13. Speech and motor disturbances in Rett syndrome.

    PubMed

    Bashina, V M; Simashkova, N V; Grachev, V V; Gorbachevskaya, N L

    2002-01-01

    Rett syndrome is a severe, genetically determined disease of early childhood which produces a defined clinical phenotype in girls. The main clinical manifestations include lesions affecting speech functions, involving both expressive and receptive speech, as well as motor functions, producing apraxia of the arms and profound abnormalities of gait in the form of ataxia-apraxia. Most investigators note that patients have variability in the severity of derangement to large motor acts and in the damage to fine hand movements and speech functions. The aims of the present work were to study disturbances of speech and motor functions over 2-5 years in 50 girls aged 12 months to 14 years with Rett syndrome and to analyze the correlations between these disturbances. The results of comparing clinical data and EEG traces supported the stepwise involvement of frontal and parietal-temporal cortical structures in the pathological process. The ability to organize speech and motor activity is affected first, with subsequent development of lesions to gnostic functions, which are in turn followed by derangement of subcortical structures and the cerebellum and later by damage to structures in the spinal cord. A clear correlation was found between the severity of lesions to motor and speech functions and neurophysiological data: the higher the level of preservation of elements of speech and motor functions, the smaller were the contributions of theta activity and the greater the contributions of alpha and beta activities to the EEG. The possible pathogenetic mechanisms underlying the motor and speech disturbances in Rett syndrome are discussed.

  14. When infants talk, infants listen: pre-babbling infants prefer listening to speech with infant vocal properties.

    PubMed

    Masapollo, Matthew; Polka, Linda; Ménard, Lucie

    2016-03-01

    To learn to produce speech, infants must effectively monitor and assess their own speech output. Yet very little is known about how infants perceive speech produced by an infant, which has higher voice pitch and formant frequencies compared to adult or child speech. Here, we tested whether pre-babbling infants (at 4-6 months) prefer listening to vowel sounds with infant vocal properties over vowel sounds with adult vocal properties. A listening preference favoring infant vowels may derive from their higher voice pitch, which has been shown to attract infant attention in infant-directed speech (IDS). In addition, infants' nascent articulatory abilities may induce a bias favoring infant speech given that 4- to 6-month-olds are beginning to produce vowel sounds. We created infant and adult /i/ ('ee') vowels using a production-based synthesizer that simulates the act of speaking in talkers at different ages and then tested infants across four experiments using a sequential preferential listening task. The findings provide the first evidence that infants preferentially attend to vowel sounds with infant voice pitch and/or formants over vowel sounds with no infant-like vocal properties, supporting the view that infants' production abilities influence how they process infant speech. The findings with respect to voice pitch also reveal parallels between IDS and infant speech, raising new questions about the role of this speech register in infant development. Research exploring the underpinnings and impact of this perceptual bias can expand our understanding of infant language development. © 2015 John Wiley & Sons Ltd.

  15. Status report on speech research. A report on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications

    NASA Astrophysics Data System (ADS)

    Liberman, A. M.

    1985-10-01

    This interim status report on speech research discusses the following topics: On Vagueness and Fictions as Cornerstones of a Theory of Perceiving and Acting: A Comment on Walter (1983); The Informational Support for Upright Stance; Determining the Extent of Coarticulation-effects of Experimental Design; The Roles of Phoneme Frequency, Similarity, and Availability in the Experimental Elicitation of Speech Errors; On Learning to Speak; The Motor Theory of Speech Perception Revised; Linguistic and Acoustic Correlates of the Perceptual Structure Found in an Individual Differences Scaling Study of Vowels; Perceptual Coherence of Speech: Stability of Silence-cued Stop Consonants; Development of the Speech Perceptuomotor System; Dependence of Reading on Orthography-Investigations in Serbo-Croatian; The Relationship between Knowledge of Derivational Morphology and Spelling Ability in Fourth, Sixth, and Eighth Graders; Relations among Regular and Irregular, Morphologically-Related Words in the Lexicon as Revealed by Repetition Priming; Grammatical Priming of Inflected Nouns by the Gender of Possessive Adjectives; Grammatical Priming of Inflected Nouns by Inflected Adjectives; Deaf Signers and Serial Recall in the Visual Modality-Memory for Signs, Fingerspelling, and Print; Did Orthographies Evolve?; The Development of Children's Sensitivity to Factors Influencing Vowel Reading.

  16. Investigation of an HMM/ANN hybrid structure in pattern recognition application using cepstral analysis of dysarthric (distorted) speech signals.

    PubMed

    Polur, Prasad D; Miller, Gerald E

    2006-10-01

    Computer speech recognition of individuals with dysarthria, such as cerebral palsy patients, requires a robust technique that can handle conditions of very high variability and limited training data. In this study, application of a 10-state ergodic hidden Markov model (HMM)/artificial neural network (ANN) hybrid structure for a dysarthric speech (isolated word) recognition system, intended to act as an assistive tool, was investigated. A small-size vocabulary spoken by three cerebral palsy subjects was chosen. The effect of such a structure on the recognition rate of the system was investigated by comparing it with an ergodic hidden Markov model as a control tool. This was done in order to determine if this modified technique contributed to enhanced recognition of dysarthric speech. The speech was sampled at 11 kHz. Mel frequency cepstral coefficients were extracted using 15 ms frames and served as training input to the hybrid model setup. The subsequent results demonstrated that the hybrid model structure was quite robust in its ability to handle the large variability and non-conformity of dysarthric speech. The level of variability in input dysarthric speech patterns sometimes limits the reliability of the system. However, its application as a rehabilitation/control tool to assist dysarthric motor-impaired individuals holds sufficient promise.
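
    The record's front end (11 kHz sampling, 15 ms analysis frames feeding MFCC extraction) can be sketched minimally. This is not the study's code: the pre-emphasis coefficient and the no-overlap hop are assumptions, the abstract gives only the sampling rate and frame length, and the mel-filterbank/DCT stages that would follow are omitted.

```python
import numpy as np

def frame_signal(signal, sample_rate=11000, frame_ms=15, hop_ms=15):
    """Split a speech signal into fixed-length windowed analysis frames.

    Frame size follows the record's setup (11 kHz, 15 ms frames); the
    hop length and pre-emphasis coefficient are assumptions.
    """
    # first-order pre-emphasis, a conventional MFCC preprocessing step
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    frame_len = int(sample_rate * frame_ms / 1000)   # 165 samples at 11 kHz
    hop_len = int(sample_rate * hop_ms / 1000)
    n_frames = 1 + (len(emphasized) - frame_len) // hop_len
    # index matrix: one row of sample indices per frame
    idx = np.arange(frame_len)[None, :] + hop_len * np.arange(n_frames)[:, None]
    return emphasized[idx] * np.hamming(frame_len)   # windowed frames

# one second of noise stands in for a speech sample
frames = frame_signal(np.random.randn(11000))
```

Each row of `frames` would then be passed through a mel filterbank and DCT to yield the cepstral coefficients used as classifier input.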

  17. Glove-TalkII--a neural-network interface which maps gestures to parallel formant speech synthesizer controls.

    PubMed

    Fels, S S; Hinton, G E

    1998-01-01

    Glove-TalkII is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.
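
    The gating arrangement described here, in which a gating network weights the outputs of a vowel network and a consonant network, amounts to a soft mixture of experts. A minimal sketch, with toy stand-in callables in place of the original three neural networks; the gate function, expert outputs, and input values below are all invented for illustration:

```python
import numpy as np

def gated_output(x, vowel_net, consonant_net, gating_net):
    """Blend two expert networks with a scalar gate, in the spirit of
    Glove-TalkII's vowel/consonant split (stand-ins, not the original nets)."""
    g = gating_net(x)                  # in [0, 1]; 1 favors the vowel expert
    return g * vowel_net(x) + (1 - g) * consonant_net(x)

# toy stand-ins producing 10 formant-synthesizer control values each
vowel_net = lambda x: np.full(10, 0.8)
consonant_net = lambda x: np.full(10, 0.2)
# sigmoid gate on the mean input: high input -> vowel-like output
gating_net = lambda x: 1.0 / (1.0 + np.exp(-x.mean()))

out = gated_output(np.array([4.0, 4.0]), vowel_net, consonant_net, gating_net)
```

With a strongly "vowel-like" input the gate saturates near 1, so the blended controls sit close to the vowel expert's output, which is the smooth interpolation behavior the gating design provides.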

  18. Interventions en groupe et interactions. Actes du 3eme colloque d'orthophonie/logopedie (Neuchatel, 29-30 septembre, 1994) (Group Interventions and Interactions. Proceedings of the Colloquium on Speech Therapy (3rd, Neuchatel, Switzerland, September 29-30, 1994)).

    ERIC Educational Resources Information Center

    Py, Bernard, Ed.

    1995-01-01

    Conference papers on group methods of speech therapy include: "Donnees nouvelles sur les competences du jeune enfant. Proposition de nouveaux concepts" (New Data on the Competences of the Young Child. Proposition of New Concepts) (Hubert Montagner); "Interactions sociales et apprentissages: quels savoirs en jeu" (Social Interactions and Teaching:…

  19. Intelligibility in speech maskers with a binaural cochlear implant sound coding strategy inspired by the contralateral medial olivocochlear reflex.

    PubMed

    Lopez-Poveda, Enrique A; Eustaquio-Martín, Almudena; Stohl, Joshua S; Wolford, Robert D; Schatzer, Reinhold; Gorospe, José M; Ruiz, Santiago Santa Cruz; Benito, Fernando; Wilson, Blake S

    2017-05-01

    We have recently proposed a binaural cochlear implant (CI) sound processing strategy inspired by the contralateral medial olivocochlear reflex (the MOC strategy) and shown that it improves intelligibility in steady-state noise (Lopez-Poveda et al., 2016, Ear Hear 37:e138-e148). The aim here was to evaluate possible speech-reception benefits of the MOC strategy for speech maskers, a more natural type of interferer. Speech reception thresholds (SRTs) were measured in six bilateral and two single-sided deaf CI users with the MOC strategy and with a standard (STD) strategy. SRTs were measured in unilateral and bilateral listening conditions, and for target and masker stimuli located at azimuthal angles of (0°, 0°), (-15°, +15°), and (-90°, +90°). Mean SRTs were 2-5 dB better with the MOC than with the STD strategy for spatially separated target and masker sources. For bilateral CI users, the MOC strategy (1) facilitated the intelligibility of speech in competition with spatially separated speech maskers in both unilateral and bilateral listening conditions; and (2) led to an overall improvement in spatial release from masking in the two listening conditions. Insofar as speech is a more natural type of interferer than steady-state noise, the present results suggest that the MOC strategy holds potential for promising outcomes for CI users. Copyright © 2017. Published by Elsevier B.V.

  20. 36 CFR 1193.51 - Compatibility.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... BOARD TELECOMMUNICATIONS ACT ACCESSIBILITY GUIDELINES Requirements for Compatibility With Peripheral... user to easily turn any microphone on and off to allow the user to intermix speech with TTY use. (e...

  1. 36 CFR 1193.51 - Compatibility.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... BOARD TELECOMMUNICATIONS ACT ACCESSIBILITY GUIDELINES Requirements for Compatibility With Peripheral... user to easily turn any microphone on and off to allow the user to intermix speech with TTY use. (e...

  2. Limb versus speech motor control: a conceptual review.

    PubMed

    Grimme, Britta; Fuchs, Susanne; Perrier, Pascal; Schöner, Gregor

    2011-01-01

    This paper presents a comparative conceptual review of speech and limb motor control. Speech is essentially cognitive in nature and constrained by the rules of language, while limb movement is often oriented to physical objects. We discuss the issue of intrinsic vs. extrinsic variables underlying the representations of motor goals as well as whether motor goals specify terminal postures or entire trajectories. Timing and coordination are recognized as an area of strong interchange between the two domains. Although coordination among different motor acts within a sequence and coarticulation are central to speech motor control, they have received only limited attention in manipulatory movements. The biomechanics of speech production is characterized by the presence of soft tissue, a variable number of degrees of freedom, and the challenges of high rates of production, while limb movements deal more typically with inertial constraints from manipulated objects. This comparative review thus leads us to identify many strands of thinking that are shared across the two domains, but also points us to issues on which approaches in the two domains differ. We conclude that conceptual interchange between the fields of limb and speech motor control has been useful in the past and promises continued benefit.

  3. 26 CFR 7.48-3 - Election to apply the amendments made by sections 804 (a) and (b) of the Tax Reform Act of 1976...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the Act to movie and television films that are property described in section 50(a) of the Code and... sections 804 (a) and (b) of the Tax Reform Act of 1976 to property described in section 50(a) of the Code... described in section 50(a) of the Code. (a) General rule. Under section 804(e)(2) of the Tax Reform Act of...

  4. Negative pragmatic transfer in Chinese students' complimentary speech acts.

    PubMed

    Ren, Juanjuan; Gao, Xiaofang

    2012-02-01

    This study was designed to examine negative pragmatic transfer in the speech act of English compliments by Chinese learners of English as a foreign language, and to estimate the correlation between the amount of negative pragmatic transfer and the learners' English proficiency. Frequencies of students' performance showed that, both in the favored compliments and in the response strategies, differences were evident between Chinese English learners and native English speakers. This indicated that Chinese learners had trouble with the "slang" or "idioms" of the target language and tended to transfer their L1 pragmatic norms negatively into their L2 communication. Moreover, the favored compliment response strategies used by two groups of Chinese learners--who had different levels of English proficiency--differed, and negative pragmatic transfer decreased as proficiency in English increased.

  5. Justifying Suppression: The Sedition Act of 1798 as Evidence of Framers' Intent.

    ERIC Educational Resources Information Center

    Herbeck, Dale A.

    The Bill of Rights contains a set of simple statements about the rights which citizens may claim in disputes with the government. Those who suggest that the First Amendment has always represented a strong commitment to free speech ignore the historical lesson offered by the Sedition Act of 1798. The early American republic maintained careful…

  6. [Verbal and gestural communication in interpersonal interaction with Alzheimer's disease patients].

    PubMed

    Schiaratura, Loris Tamara; Di Pastena, Angela; Askevis-Leherpeux, Françoise; Clément, Sylvain

    2015-03-01

    Communication can be defined as a verbal and non-verbal exchange of thoughts and emotions. While the verbal communication deficit in Alzheimer's disease is well documented, very little is known about gestural communication, especially in interpersonal situations. This study examines the production of gestures and its relations with verbal aspects of communication. Three patients suffering from moderately severe Alzheimer's disease were compared to three healthy adults. Each participant was given a series of pictures and asked to explain which one she preferred and why. The interpersonal interaction was video recorded. Analyses concerned verbal production (quantity and quality) and gestures. Gestures were either non-representational (i.e., gestures of small amplitude punctuating speech or accentuating some parts of the utterance) or representational (i.e., referring to the object of the speech). Representational gestures were coded as iconic (depicting concrete aspects), metaphoric (depicting abstract meaning), or deictic (pointing toward an object). In comparison with healthy participants, patients showed a decrease in both the quantity and the quality of speech. Nevertheless, their production of gestures was always present. This pattern is in line with the conception that gestures and speech depend on different communicational systems, and is inconsistent with the assumption of a parallel dissolution of gesture and speech. Moreover, analyzing the articulation between the verbal and gestural dimensions suggests that representational gestures may compensate for speech deficits. This underlines the role of gestures in maintaining interpersonal communication.

  7. Snake oil salesmen or purveyors of knowledge: off-label promotions and the commercial speech doctrine.

    PubMed

    Bagley, Constance E; Mitts, Joshua; Tinsley, Richard J

    2013-01-01

    The Second Circuit's December 2012 decision in United States v. Caronia striking down the prohibition on off-label marketing of pharmaceutical drugs has profound implications for economic regulation in general, calling into question the constitutionality of restrictions on the offer and sale of securities under the Securities Act of 1933, the solicitation of shareholder proxies and periodic reporting under the Securities Exchange Act of 1934, mandatory labels on food, tobacco, and pesticides, and a wide range of privacy protections. In this Article we suggest that Caronia misconstrues the Supreme Court's holding in Sorrell v. IMS Health, which was motivated by concerns of favoring one industry participant over another rather than a desire to return to the anti-regulator fervor of the Lochner era. Reexamining the theoretical justification for limiting truthful commercial speech shows that a more nuanced approach to regulating off-label marketing with the purpose of promoting public health and safety would pass constitutional muster. We argue that as long as the government both has a rational basis for subjecting a particular industry to limits on commercial speech intended to further a legitimate public interest, rather than unfounded paternalism, and does not discriminate against disfavored industry participants, those limits should be subject to intermediate scrutiny under the Central Hudson standard. We believe that our articulation of the commercial speech doctrine post-Sorrell will help resolve the current split in the Circuits on the appropriate standard of review in cases involving both restrictions on commercial speech and mandated speech. 
Finally, we critique the FDA's 2011 Guidance for Responding to Unsolicited Requests for Off-Label Information (draft) and present a proposal for new rules for regulating the off-label marketing of pharmaceutical drugs based on transparency, the sophistication of the listener and the type of information offered, and the requirement that the pharmaceutical company comply with ongoing duties of training, monitoring, reporting, and auditing.

  8. 11 CFR 5.4 - Availability of records.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... public distribution, e.g. campaign guidelines, FEC Record, press releases, speeches, notices to... records subject to the Act and the maximum availability of such records to the public, nothing herein...

  9. U.S. Pacific Command > Leadership

    Science.gov Websites


  10. Contribution of auditory working memory to speech understanding in mandarin-speaking cochlear implant users.

    PubMed

    Tao, Duoduo; Deng, Rui; Jiang, Ye; Galvin, John J; Fu, Qian-Jie; Chen, Bing

    2014-01-01

    To investigate how auditory working memory relates to speech perception performance by Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception were measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, (d) Chinese lexical tone recognition in quiet. Self-reported school rank regarding performance in schoolwork was also collected. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding. 
The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical importance of voice pitch cues (albeit poorly coded by the CI) did not influence the relationship between working memory and speech perception.
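
    The partial correlation analyses described above (correlating two measures while controlling for demographic variables) can be sketched by residualizing both variables against the covariates. This is the standard regression-residual formulation, not the authors' actual code, and the confound variable and sample below are invented:

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation between x and y after regressing out control variables.

    `controls` is an (n, k) array of covariates (e.g., demographics)."""
    Z = np.column_stack([np.ones(len(x)), controls])     # design with intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]    # residualize x
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]    # residualize y
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
age = rng.normal(size=200)               # hypothetical shared confound
x = age + rng.normal(size=200) * 0.1     # both measures driven by the confound
y = age + rng.normal(size=200) * 0.1
raw = np.corrcoef(x, y)[0, 1]            # near 1: spurious association
adj = partial_corr(x, y, age[:, None])   # near 0 once the confound is removed
```

The contrast between `raw` and `adj` is exactly why the study partialled out demographics before interpreting the working-memory correlations.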

  11. The Effects of Word Length on Memory for Pictures: Evidence for Speech Coding in Young Children.

    ERIC Educational Resources Information Center

    Hulme, Charles; And Others

    1986-01-01

    Three experiments demonstrate that children four to ten years old, when presented with a serial recall task using pictures of common objects having short or long names, showed consistently better recall of pictures with short names. (HOD)

  12. Different Timescales for the Neural Coding of Consonant and Vowel Sounds

    PubMed Central

    Perez, Claudia A.; Engineer, Crystal T.; Jakkamsetti, Vikram; Carraway, Ryan S.; Perry, Matthew S.

    2013-01-01

    Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders. PMID:22426334
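
    The contrast the study draws, identical spike counts but different spike timing, can be illustrated with two toy spike trains: a count-based code cannot separate them, while a timing-preserving comparison can. The trains below are invented for illustration and are not recorded data:

```python
import numpy as np

# Two toy responses with identical spike counts but different spike timing,
# mirroring the study's contrast (counts encode vowels, timing consonants).
# Spike trains are 1-ms bins over 100 ms.
resp_a = np.zeros(100); resp_a[[5, 10, 15, 20]] = 1    # early spikes
resp_b = np.zeros(100); resp_b[[60, 70, 80, 90]] = 1   # late spikes

count_a, count_b = resp_a.sum(), resp_b.sum()
count_distance = abs(count_a - count_b)          # 0: counts cannot separate them

timing_distance = np.abs(resp_a - resp_b).sum()  # timing separates them
```

A decoder restricted to spike counts sees the two responses as identical, while one that keeps the binned timing distinguishes them easily, which is the pattern reported for vowels versus consonants.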

  13. Decoding Articulatory Features from fMRI Responses in Dorsal Speech Regions.

    PubMed

    Correia, Joao M; Jansma, Bernadette M B; Bonte, Milene

    2015-11-11

    The brain's circuitry for perceiving and producing speech may show a notable level of overlap that is crucial for normal development and behavior. The extent to which sensorimotor integration plays a role in speech perception remains highly controversial, however. Methodological constraints related to experimental designs and analysis methods have so far prevented the disentanglement of neural responses to acoustic versus articulatory speech features. Using a passive listening paradigm and multivariate decoding of single-trial fMRI responses to spoken syllables, we investigated brain-based generalization of articulatory features (place and manner of articulation, and voicing) beyond their acoustic (surface) form in adult human listeners. For example, we trained a classifier to discriminate place of articulation within stop syllables (e.g., /pa/ vs /ta/) and tested whether this training generalizes to fricatives (e.g., /fa/ vs /sa/). This novel approach revealed generalization of place and manner of articulation at multiple cortical levels within the dorsal auditory pathway, including auditory, sensorimotor, motor, and somatosensory regions, suggesting the representation of sensorimotor information. Additionally, generalization of voicing included the right anterior superior temporal sulcus associated with the perception of human voices as well as somatosensory regions bilaterally. Our findings highlight the close connection between brain systems for speech perception and production, and in particular, indicate the availability of articulatory codes during passive speech perception. Sensorimotor integration is central to verbal communication and provides a link between auditory signals of speech perception and motor programs of speech production. It remains highly controversial, however, to what extent the brain's speech perception system actively uses articulatory (motor), in addition to acoustic/phonetic, representations. 
In this study, we examine the role of articulatory representations during passive listening using carefully controlled stimuli (spoken syllables) in combination with multivariate fMRI decoding. Our approach enabled us to disentangle brain responses to acoustic and articulatory speech properties. In particular, it revealed articulatory-specific brain responses of speech at multiple cortical levels, including auditory, sensorimotor, and motor regions, suggesting the representation of sensorimotor information during passive speech perception. Copyright © 2015 the authors.
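
    The generalization analysis described above (train a classifier on one syllable class, test on another that shares the same articulatory feature) can be sketched with synthetic data and a nearest-centroid classifier standing in for the study's decoder. The feature layout and all data below are invented:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit one centroid per class; a simple stand-in for the study's classifier."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    labels = list(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[d.argmin(axis=0)]

rng = np.random.default_rng(1)

# Synthetic "voxel patterns": dimension 0 carries place of articulation
# (0 = labial, 1 = alveolar); the remaining dimensions are noise.
def syllables(place, n=50):
    X = rng.normal(size=(n, 20))
    X[:, 0] += 3 * place
    return X

X_stops = np.vstack([syllables(0), syllables(1)])   # /pa/ vs /ta/
y = np.repeat([0, 1], 50)
X_fric = np.vstack([syllables(0), syllables(1)])    # /fa/ vs /sa/

model = nearest_centroid_fit(X_stops, y)            # train on stops
acc = (nearest_centroid_predict(model, X_fric) == y).mean()  # test on fricatives
```

Above-chance `acc` here plays the role of the paper's evidence that the decoded feature (place of articulation) generalizes beyond the acoustic form it was trained on.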

  14. Fostering a Culture of Engagement. (Military Review, September-October 2009)

    DTIC Science & Technology

    2009-10-01

    the publication of any information that could even remotely be considered to aid the enemy.”8 A year later, the Sedition Act made criticism of the...Kenneth Payne, “The Media as an Instrument of War,” Parameters (Spring 2005), 81-93. 31. Transcript of speech given by GEN Martin E. Dempsey, Commanding...General TRADOC, to the U.S. Army War College, Carlisle Barracks, PA, 25 March 2009. <www.tradoc.army.mil/pao/Speeches/Gen%20Dempsey%202008-09/AWC%20

  15. 75 FR 15441 - Privacy Act of 1974; Report of an Altered System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-29

    ..., speech pathologists, health care administration personnel, nurses, allied health personnel, medical technologists, chiropractors, clinical psychologists, and other health personnel may be included. CATEGORIES OF...

  16. U.S. Pacific Command > Contact > Directory

    Science.gov Websites


  17. The Army Lawyer

    DTIC Science & Technology

    2009-07-01

    newspapers of the leftist persuasions were banned under the Espionage Act of 1917 and the Sedition Act of 1918. Id. 38 Id. 39 Id.; see also Near v... speech ). 40 VAUGHN, supra note 25, at 85. The 1941 War Powers Act banned publishing material on subjects such as military plans, intelligence...to trigger a war crime under Article 85, para. 3(e). Protocol I, supra note 4, art. 85(3). 170 Protocol I, supra note 4, art. 48. Article 48’s

  18. Audio-vocal responses of vocal fundamental frequency and formant during sustained vowel vocalizations in different noises.

    PubMed

    Lee, Shao-Hsuan; Hsiao, Tzu-Yu; Lee, Guo-She

    2015-06-01

    Sustained vocalizations of the vowels [a] and [i] and the syllable [mə] were collected from twenty normal-hearing individuals. During vocalization, five audio-vocal feedback conditions were introduced separately to the speakers: no masking, wearing supra-aural headphones only, speech-noise masking, high-pass noise masking, and broad-band-noise masking. Power spectral analysis of vocal fundamental frequency (F0) was used to evaluate the modulations of F0, and linear predictive coding was used to acquire the first two formants. The results showed that while the formant frequencies were not significantly shifted, low-frequency modulations (<3 Hz) of F0 significantly increased with reduced audio-vocal feedback across speech sounds and were significantly correlated with speakers' auditory awareness of their own voices. For sustained speech production, motor control of F0 may depend on a feedback mechanism, while articulation appears to rely more on a feedforward mechanism. Power spectral analysis of F0 might be applied to evaluate audio-vocal control in various hearing and neurological disorders in the future. Copyright © 2015 Elsevier B.V. All rights reserved.
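
    The core measurement, the power of low-frequency (<3 Hz) modulations in an F0 contour, can be sketched with an FFT. The contour sampling rate, duration, and test signals below are assumptions for illustration, not the paper's analysis parameters:

```python
import numpy as np

def low_freq_modulation_power(f0_contour, fs=100.0, cutoff=3.0):
    """Fraction of F0-modulation power below `cutoff` Hz.

    An assumed 100 Hz F0 sampling rate; in the spirit of the record's
    power spectral analysis, not the authors' implementation."""
    x = f0_contour - f0_contour.mean()           # remove DC (the mean F0)
    spectrum = np.abs(np.fft.rfft(x)) ** 2       # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return spectrum[freqs < cutoff].sum() / spectrum.sum()

t = np.arange(0, 5, 0.01)                        # 5 s contour sampled at 100 Hz
slow = 120 + 2 * np.sin(2 * np.pi * 1.0 * t)     # 1 Hz drift: below cutoff
fast = 120 + 2 * np.sin(2 * np.pi * 6.0 * t)     # 6 Hz wobble: above cutoff
slow_ratio = low_freq_modulation_power(slow)
fast_ratio = low_freq_modulation_power(fast)
```

The slow contour concentrates essentially all of its modulation power below 3 Hz, while the fast one concentrates it above; the study's finding is that the below-cutoff share grows as auditory feedback is degraded.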

  19. Automatic conversational scene analysis in children with Asperger syndrome/high-functioning autism and typically developing peers.

    PubMed

    Tavano, Alessandro; Pesarin, Anna; Murino, Vittorio; Cristani, Marco

    2014-01-01

    Individuals with Asperger syndrome/High Functioning Autism fail to spontaneously attribute mental states to the self and others, a life-long phenotypic characteristic known as mindblindness. We hypothesized that mindblindness would affect the dynamics of conversational interaction. Using generative models, in particular Gaussian mixture models and observed influence models, conversations were coded as interacting Markov processes, operating on novel speech/silence patterns, termed Steady Conversational Periods (SCPs). SCPs assume that whenever an agent's process changes state (e.g., from silence to speech), it causes a general transition of the entire conversational process, forcing inter-actant synchronization. SCPs fed into observed influence models, which captured the conversational dynamics of children and adolescents with Asperger syndrome/High Functioning Autism, and age-matched typically developing participants. Analyzing the parameters of the models by means of discriminative classifiers, the dialogs of patients were successfully distinguished from those of control participants. We conclude that meaning-free speech/silence sequences, reflecting inter-actant synchronization, at least partially encode typical and atypical conversational dynamics. This suggests a direct influence of theory of mind abilities onto basic speech initiative behavior.
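
    The speech/silence state sequences underlying SCPs are modeled as interacting Markov processes. For a single speaker, a minimal sketch is the maximum-likelihood transition matrix of a two-state speech/silence chain; the observed influence models couple several such chains, which is omitted here, and the toy sequence below is invented:

```python
import numpy as np

def transition_matrix(states, n_states=2):
    """Maximum-likelihood transition matrix of a first-order Markov chain
    over silence(0)/speech(1) states; a simplified stand-in for the
    record's observed influence models over SCPs."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):   # count each observed transition
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)  # normalize rows

# toy speech/silence pattern: long speech runs, short silences (invented)
seq = [1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1]
P = transition_matrix(seq)
# P[1, 1] is the probability of staying in speech; the classifiers in the
# study operate on parameters of (coupled) models like this one.
```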

  20. A false sense of security: safety behaviors erode objective speech performance in individuals with social anxiety disorder.

    PubMed

    Rowa, Karen; Paulitzki, Jeffrey R; Ierullo, Maria D; Chiang, Brenda; Antony, Martin M; McCabe, Randi E; Moscovitch, David A

    2015-05-01

    In the current study, 55 participants with a diagnosis of generalized social anxiety disorder (SAD), 23 participants with a diagnosis of an anxiety disorder other than SAD with no comorbid SAD, and 50 healthy controls completed a speech task as well as self-reported measures of safety behavior use. Speeches were videotaped and coded for global and specific indicators of performance by two raters who were blind to participants' diagnostic status. Results suggested that the objective performance of people with SAD was poorer than that of both control groups, who did not differ from each other. Moreover, self-reported use of safety behaviors during the speech strongly mediated the relationship between diagnostic group and observers' performance ratings. These results are consistent with contemporary cognitive-behavioral and interpersonal models of SAD and suggest that socially anxious individuals' performance skills may be undermined by the use of safety behaviors. These data provide further support for recommendations from previous studies that the elimination of safety behaviors ought to be a priority in cognitive behavioral therapy for SAD. Copyright © 2014. Published by Elsevier Ltd.

  1. 77 FR 34221 - Air Quality Designations for the 2008 Ozone National Ambient Air Quality Standards for Several...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-11

    ... Regulatory Review B. Paperwork Reduction Act C. Regulatory Flexibility Act D. Unfunded Mandates Reform Act E... preamble. APA Administrative Procedure Act CAA Clean Air Act CFR Code of Federal Regulations D.C. District... Authority Rule U.S. United States U.S.C. United States Code VCS Voluntary Consensus Standards VOC Volatile...

  2. U.S. Pacific Command > Organization > Organization Chart

    Science.gov Websites


  3. U.S. Pacific Command > Resources > Useful Links

    Science.gov Websites


  4. Free All Speech Act of 2014

    THOMAS, 113th Congress

    Sen. Cruz, Ted [R-TX]

    2014-06-03

    Senate - 06/03/2014 Read twice and referred to the Committee on Commerce, Science, and Transportation.

  5. Measuring Syntactic Complexity in Spontaneous Spoken Swedish

    ERIC Educational Resources Information Center

    Roll, Mikael; Frid, Johan; Horne, Merle

    2007-01-01

    Hesitation disfluencies after phonetically prominent stranded function words are thought to reflect the cognitive coding of complex structures. Speech fragments following the Swedish function word "att" "that" were analyzed syntactically, and divided into two groups: one with "att" in disfluent contexts, and the other with "att" in fluent…

  6. Voice synthesis application

    NASA Astrophysics Data System (ADS)

    Lightstone, P. C.; Davidson, W. M.

    1982-04-01

    The military detection assessment laboratory houses an experimental field system which assesses different alarm indicators such as fence disturbance sensors, MILES cables, and microwave Racons. A speech synthesis board was purchased that could be interfaced, by means of a computer, to an alarm logger, making verbal acknowledgement of alarms possible. Different products and different types of voice synthesis were analyzed before a linear predictive coding device produced by Telesensory Speech Systems of Palo Alto, California was chosen. This device, called the Speech 1000 Board, has a dedicated 8085 processor. A multiplexer card was designed and the Sp 1000 was interfaced through the card to a TMS 990/100M Texas Instruments microcomputer. It was also necessary to design software capable of recognizing and flagging an alarm on any 1 of 32 possible lines. The experimental field system was then packaged with a dc power supply, LED indicators, speakers, and switches, and deployed in the field, where it performed reliably.

  7. Effects of prior information on decoding degraded speech: an fMRI study.

    PubMed

    Clos, Mareike; Langner, Robert; Meyer, Martin; Oechslin, Mathias S; Zilles, Karl; Eickhoff, Simon B

    2014-01-01

    Expectations and prior knowledge are thought to support the perceptual analysis of incoming sensory stimuli, as proposed by the predictive-coding framework. The current fMRI study investigated the effect of prior information on brain activity during the decoding of degraded speech stimuli. When prior information enabled the comprehension of the degraded sentences, the left middle temporal gyrus and the left angular gyrus were activated, highlighting a role of these areas in meaning extraction. In contrast, the activation of the left inferior frontal gyrus (area 44/45) appeared to reflect the search for meaningful information in degraded speech material that could not be decoded because of mismatches with the prior information. Our results show that degraded sentences evoke instantaneously different percepts and activation patterns depending on the type of prior information, in line with prediction-based accounts of perception. Copyright © 2012 Wiley Periodicals, Inc.

  8. "Are You with Me?" A Metadiscursive Analysis of Interactive Strategies in College Students' Course Presentations

    ERIC Educational Resources Information Center

    Agnes, Magnuczne Godo

    2012-01-01

    In recent years increasing research attention has been devoted to the definition and development of presentation skills. As an interactive oral discourse type, the presentation is characterised by specific speech acts, of which cooperative acts have proved to be of a highly developmental nature (Sazdovska, 2009). The aim of the present paper is to…

  9. 16 CFR 1011.4 - Forms of advance public notice of meetings; Public Calendar/Master Calendar and Federal Register.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... meetings, selected staff meetings, advisory committee meetings, and other activities such as speeches and... Safety Act (15 U.S.C. 2076(j)(8)). (b) Federal Register. Federal Register is the publication through... by the Government in the Sunshine Act (as provided in 16 CFR part 1013) or other applicable law, or...

  10. 16 CFR 1011.4 - Forms of advance public notice of meetings; Public Calendar/Master Calendar and Federal Register.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... meetings, selected staff meetings, advisory committee meetings, and other activities such as speeches and... Safety Act (15 U.S.C. 2076(j)(8)). (b) Federal Register. Federal Register is the publication through... by the Government in the Sunshine Act (as provided in 16 CFR part 1013) or other applicable law, or...

  11. Regulating Nation-State Cyber Attacks in Counterterrorism Operations

    DTIC Science & Technology

    2010-06-01

    24 e. The 1973 United Nations Convention to Prevent and Punish Acts of Terrorism in the Form of Crimes Against...International Convention for the Suppression of Acts of Nuclear Terrorism ......................28 n. The 1998 Rome Statute and the Crime of Aggression...Intelligence Agency, https://www.cia.gov/news-information/speeches-testimony/2000/cyberthreats_022300.html. 3 Peter Brookes, “The Cyberspy Threat

  12. Digitised evaluation of speech intelligibility using vowels in maxillectomy patients.

    PubMed

    Sumita, Y I; Hattori, M; Murase, M; Elbashti, M E; Taniguchi, H

    2018-03-01

    Among the functional disabilities that patients face following maxillectomy, speech impairment is a major factor influencing quality of life. Proper rehabilitation of speech, which may include prosthodontic and surgical treatments and speech therapy, requires accurate evaluation of speech intelligibility (SI). A simple, less time-consuming yet accurate evaluation is desirable both for maxillectomy patients and the various clinicians providing maxillofacial treatment. This study sought to determine the utility of digital acoustic analysis of vowels for the prediction of SI in maxillectomy patients, based on a comprehensive understanding of speech production in the vocal tract of maxillectomy patients and its perception. Speech samples were collected from 33 male maxillectomy patients (mean age 57.4 years) in two conditions, without and with a maxillofacial prosthesis, and formant data for the vowels /a/, /e/, /i/, /o/, and /u/ were calculated based on linear predictive coding. The frequency range of formant 2 (F2) was determined as the difference between the minimum and maximum frequency. An SI test was also conducted to reveal the relationship between SI score and F2 range. Statistical analyses were applied. F2 range and SI score were significantly different between the two conditions without and with a prosthesis (both P < .0001). F2 range was significantly correlated with SI score in both conditions (Spearman's r = .843, P < .0001; r = .832, P < .0001, respectively). These findings indicate that calculating the F2 range from five vowels has clinical utility for the prediction of SI after maxillectomy. © 2017 John Wiley & Sons Ltd.
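The acoustic measure in this record reduces to a simple computation: take the second-formant (F2) frequency of each of the five vowels, form the max-minus-min range, and rank-correlate that range with the SI score across patients. A minimal sketch, assuming hypothetical formant values and cohort data (the rank correlation here is simplified and does not handle tied values):

```python
def f2_range(f2_by_vowel):
    """F2 range (Hz): max minus min second-formant frequency across the vowels."""
    values = list(f2_by_vowel.values())
    return max(values) - min(values)

def spearman_rho(x, y):
    """Spearman rank correlation; simplified sketch, assumes no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

# Hypothetical patient: F2 values (Hz) per vowel, measured without a prosthesis
f2 = {"a": 1300, "e": 1900, "i": 2100, "o": 950, "u": 900}
print(f2_range(f2))  # 1200

# Hypothetical cohort: F2 ranges paired with SI scores
ranges = [600, 900, 1100, 1250, 1400]
si_scores = [40, 55, 70, 78, 90]
print(spearman_rho(ranges, si_scores))  # 1.0 on this perfectly monotonic toy data
```

In practice one would use `scipy.stats.spearmanr`, which handles ties and reports a p-value; the point here is only that the published measure is a max-minus-min range plus a rank correlation.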

  13. Perceptual learning of degraded speech by minimizing prediction error.

    PubMed

    Sohoglu, Ediz; Davis, Matthew H

    2016-03-22

    Human perception is shaped by past experience on multiple timescales. Sudden and dramatic changes in perception occur when prior knowledge or expectations match stimulus content. These immediate effects contrast with the longer-term, more gradual improvements that are characteristic of perceptual learning. Despite extensive investigation of these two experience-dependent phenomena, there is considerable debate about whether they result from common or dissociable neural mechanisms. Here we test single- and dual-mechanism accounts of experience-dependent changes in perception using concurrent magnetoencephalographic and EEG recordings of neural responses evoked by degraded speech. When speech clarity was enhanced by prior knowledge obtained from matching text, we observed reduced neural activity in a peri-auditory region of the superior temporal gyrus (STG). Critically, longer-term improvements in the accuracy of speech recognition following perceptual learning resulted in reduced activity in a nearly identical STG region. Moreover, short-term neural changes caused by prior knowledge and longer-term neural changes arising from perceptual learning were correlated across subjects with the magnitude of learning-induced changes in recognition accuracy. These experience-dependent effects on neural processing could be dissociated from the neural effect of hearing physically clearer speech, which similarly enhanced perception but increased rather than decreased STG responses. Hence, the observed neural effects of prior knowledge and perceptual learning cannot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception. Instead, our results support a predictive coding account of speech perception; computational simulations show how a single mechanism, minimization of prediction error, can drive immediate perceptual effects of prior knowledge and longer-term perceptual learning of degraded speech.
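The single-mechanism claim above can be illustrated with a toy gradient scheme: a percept estimate is nudged downhill on the squared prediction error against both the sensory input and the prior, so a matching, high-precision prior shifts the percept immediately, while slow changes in the weights would play the role of learning. This is a deliberately simplified sketch, not the authors' simulation; the weights and inputs are illustrative numbers:

```python
def settle_percept(sensory, prior, w_sensory=1.0, w_prior=1.0, lr=0.1, steps=200):
    """Gradient descent on total squared prediction error:
    E = w_sensory * (sensory - est)**2 + w_prior * (prior - est)**2.
    The estimate converges to a precision-weighted average of input and prior."""
    est = 0.0
    for _ in range(steps):
        grad = -2 * w_sensory * (sensory - est) - 2 * w_prior * (prior - est)
        est -= lr * grad
    return est

# Degraded input (noisy value 0.4) combined with an informative prior at 1.0:
print(round(settle_percept(0.4, 1.0), 3))               # 0.7, pulled toward the prior
# Raising prior precision (e.g. after reading matching text) pulls the percept further:
print(round(settle_percept(0.4, 1.0, w_prior=3.0), 3))  # 0.85
```

The fixed point is the precision-weighted mean (w_sensory*sensory + w_prior*prior)/(w_sensory + w_prior), which is why a stronger prior produces an immediate shift in the settled percept without any change to the mechanism itself.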

  14. Perceptual learning of degraded speech by minimizing prediction error

    PubMed Central

    Sohoglu, Ediz

    2016-01-01

    Human perception is shaped by past experience on multiple timescales. Sudden and dramatic changes in perception occur when prior knowledge or expectations match stimulus content. These immediate effects contrast with the longer-term, more gradual improvements that are characteristic of perceptual learning. Despite extensive investigation of these two experience-dependent phenomena, there is considerable debate about whether they result from common or dissociable neural mechanisms. Here we test single- and dual-mechanism accounts of experience-dependent changes in perception using concurrent magnetoencephalographic and EEG recordings of neural responses evoked by degraded speech. When speech clarity was enhanced by prior knowledge obtained from matching text, we observed reduced neural activity in a peri-auditory region of the superior temporal gyrus (STG). Critically, longer-term improvements in the accuracy of speech recognition following perceptual learning resulted in reduced activity in a nearly identical STG region. Moreover, short-term neural changes caused by prior knowledge and longer-term neural changes arising from perceptual learning were correlated across subjects with the magnitude of learning-induced changes in recognition accuracy. These experience-dependent effects on neural processing could be dissociated from the neural effect of hearing physically clearer speech, which similarly enhanced perception but increased rather than decreased STG responses. Hence, the observed neural effects of prior knowledge and perceptual learning cannot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception. Instead, our results support a predictive coding account of speech perception; computational simulations show how a single mechanism, minimization of prediction error, can drive immediate perceptual effects of prior knowledge and longer-term perceptual learning of degraded speech. PMID:26957596

  15. How reading differs from object naming at the neuronal level.

    PubMed

    Price, C J; McCrory, E; Noppeney, U; Mechelli, A; Moore, C J; Biggio, N; Devlin, J T

    2006-01-15

    This paper uses whole brain functional neuroimaging in neurologically normal participants to explore how reading aloud differs from object naming in terms of neuronal implementation. In the first experiment, we directly compared brain activation during reading aloud and object naming. This revealed greater activation for reading in bilateral premotor, left posterior superior temporal and precuneus regions. In a second experiment, we segregated the object-naming system into object recognition and speech production areas by factorially manipulating the presence or absence of objects (pictures of objects or their meaningless scrambled counterparts) with the presence or absence of speech production (vocal vs. finger press responses). This demonstrated that the areas associated with speech production (object naming and repetitively saying "OK" to meaningless scrambled pictures) corresponded exactly to the areas where responses were higher for reading aloud than object naming in Experiment 1. Collectively the results suggest that, relative to object naming, reading increases the demands on shared speech production processes. At a cognitive level, enhanced activation for reading in speech production areas may reflect the multiple and competing phonological codes that are generated from the sublexical parts of written words. At a neuronal level, it may reflect differences in the speed with which different areas are activated and integrate with one another.

  16. Status and progress of studies on the nature of speech, instrumentation for its investigation and practical applications

    NASA Astrophysics Data System (ADS)

    Liberman, A. M.

    1983-09-01

    This report is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. Manuscripts cover the following topics: The association between comprehension of spoken sentences and early reading ability: The role of phonetic representation; Phonetic coding and order memory in relation to reading proficiency: A comparison of short-term memory for temporal and spatial order information; Exploring the oral and written language errors made by language disabled children; Perceiving phonetic events; Converging evidence in support of common dynamical principles for speech and movement coordination; Phase transitions and critical behavior in human bimanual coordination; Timing and coarticulation for alveolo-palatals and sequences of alveolar +J in Catalan; V-to-C coarticulation in Catalan VCV sequences: An articulatory and acoustical study; Prosody and the /S/-/c/ distinction; Intersections of tone and intonation in Thai; Simultaneous measurements of vowels produced by a hearing-impaired speaker; Extending formant transitions may not improve aphasics' perception of stop consonant place of articulation; Against a role of chirp identification in duplex perception; Further evidence for the role of relative timing in speech: A reply to Barry; Review (Phonological intervention: Concepts and procedures); and Review (Temporal variables in speech).

  17. Temporal order processing of syllables in the left parietal lobe.

    PubMed

    Moser, Dana; Baker, Julie M; Sanchez, Carmen E; Rorden, Chris; Fridriksson, Julius

    2009-10-07

    Speech processing requires the temporal parsing of syllable order. Individuals suffering from posterior left hemisphere brain injury often exhibit temporal processing deficits as well as language deficits. Although the right posterior inferior parietal lobe has been implicated in temporal order judgments (TOJs) of visual information, there is limited evidence to support the role of the left inferior parietal lobe (IPL) in processing syllable order. The purpose of this study was to examine whether the left inferior parietal lobe is recruited during temporal order judgments of speech stimuli. Functional magnetic resonance imaging data were collected on 14 normal participants while they completed the following forced-choice tasks: (1) syllable order of multisyllabic pseudowords, (2) syllable identification of single syllables, and (3) gender identification of both multisyllabic and monosyllabic speech stimuli. Results revealed increased neural recruitment in the left inferior parietal lobe when participants made judgments about syllable order compared with both syllable identification and gender identification. These findings suggest that the left inferior parietal lobe plays an important role in processing syllable order and support the hypothesized role of this region as an interface between auditory speech and the articulatory code. Furthermore, a breakdown in this interface may explain some components of the speech deficits observed after posterior damage to the left hemisphere.

  18. Temporal Order Processing of Syllables in the Left Parietal Lobe

    PubMed Central

    Baker, Julie M.; Sanchez, Carmen E.; Rorden, Chris; Fridriksson, Julius

    2009-01-01

    Speech processing requires the temporal parsing of syllable order. Individuals suffering from posterior left hemisphere brain injury often exhibit temporal processing deficits as well as language deficits. Although the right posterior inferior parietal lobe has been implicated in temporal order judgments (TOJs) of visual information, there is limited evidence to support the role of the left inferior parietal lobe (IPL) in processing syllable order. The purpose of this study was to examine whether the left inferior parietal lobe is recruited during temporal order judgments of speech stimuli. Functional magnetic resonance imaging data were collected on 14 normal participants while they completed the following forced-choice tasks: (1) syllable order of multisyllabic pseudowords, (2) syllable identification of single syllables, and (3) gender identification of both multisyllabic and monosyllabic speech stimuli. Results revealed increased neural recruitment in the left inferior parietal lobe when participants made judgments about syllable order compared with both syllable identification and gender identification. These findings suggest that the left inferior parietal lobe plays an important role in processing syllable order and support the hypothesized role of this region as an interface between auditory speech and the articulatory code. Furthermore, a breakdown in this interface may explain some components of the speech deficits observed after posterior damage to the left hemisphere. PMID:19812331

  19. Conversation Analysis.

    ERIC Educational Resources Information Center

    Schiffrin, Deborah

    1990-01-01

    Summarizes the current state of research in conversation analysis, referring primarily to six different perspectives that have developed from the philosophy, sociology, anthropology, and linguistics disciplines. These include pragmatics; speech act theory; interactional sociolinguistics; ethnomethodology; ethnography of communication; and…

  20. Current Research in Southeast Asia.

    ERIC Educational Resources Information Center

    Beh, Yolanda

    1990-01-01

    Summaries of eight language-related research projects are presented from Brunei Darussalam, Indonesia, Malaysia, and Singapore. Topics include children's reading, nonstandard spoken Indonesian, English speech act performance, classroom verbal interaction, journal writing, and listening comprehension. (LB)

  1. Masking of errors in transmission of VAPC-coded speech

    NASA Technical Reports Server (NTRS)

    Cox, Neil B.; Froese, Edwin L.

    1990-01-01

    A subjective evaluation is provided of the bit error sensitivity of the message elements of a Vector Adaptive Predictive (VAPC) speech coder, along with an indication of the amenability of these elements to a popular error masking strategy (cross frame hold over). As expected, a wide range of bit error sensitivity was observed. The most sensitive message components were the short term spectral information and the most significant bits of the pitch and gain indices. The cross frame hold over strategy was found to be useful for pitch and gain information, but it was not beneficial for the spectral information unless severe corruption had occurred.

  2. Wireless communication and their mathematics

    NASA Astrophysics Data System (ADS)

    Komaki, Shozo

    2015-05-01

    Mobile phones and smartphones are penetrating into social use. To develop these systems, various types of theoretical work based on mathematics are carried out, covering radio propagation theory, traffic theory, security coding, wireless devices, and more. In this speech, I will discuss the related mathematics and the problems within it.

  3. Campus Intolerance, Then & Now: The Influence of Marcusian Ideology. Perspectives on Higher Education

    ERIC Educational Resources Information Center

    Lewy, Guenter

    2018-01-01

    Freedom of expression is imperiled on today's college campuses. Citizens and educators alike are concerned about the number of shout-downs and disinvitations and their silencing effect on intellectual diversity. The use of speech codes, "safe spaces," new rules demanding "trigger warnings," and condemning…

  4. Returning Fire

    ERIC Educational Resources Information Center

    Gould, Jon B.

    2007-01-01

    Last December saw another predictable report from the Foundation for Individual Rights in Education (FIRE), a self-described watchdog group, highlighting how higher education is supposedly under siege from a politically correct plague of so-called hate-speech codes. In that report, FIRE declared that as many as 96 percent of top-ranked colleges…

  5. American Campaign Oratory: Verbal Response Mode Use by Candidates in the 1980 American Presidential Primaries.

    ERIC Educational Resources Information Center

    Stiles, William B.; And Others

    1983-01-01

    Coded campaign speeches recorded during the 1980 American presidential primaries and college lectures using a taxonomy of verbal response modes. Both candidates and lecturers used mostly informative modes, but candidates used relatively more disclosures (subjective information) and fewer edifications (objective information). Candidates…

  6. It's Power, Stupid!

    ERIC Educational Resources Information Center

    Gray, Mary W.

    1994-01-01

    Sexual harassment is abuse of power. It should be prohibited in colleges and universities, not through constraints on academic freedom such as speech codes, but through enforcement of standards of ethical professional conduct. Faculty have an ethical obligation not to engage in harassment and to hold colleagues accountable if they do so. (MSE)

  7. English in Political Discourse of Post-Suharto Indonesia.

    ERIC Educational Resources Information Center

    Bernsten, Suzanne

    This paper illustrates increases in the use of English in political speeches in post-Suharto Indonesia by analyzing the phonological, morphological, and syntactic assimilation of loanwords (linguistic borrowing), as well as hybridization and code switching, and phenomena such as doubling and loan translations. The paper also examines the mixed…

  8. The Courts as Educational Policy Makers.

    ERIC Educational Resources Information Center

    Maready, William F.

    This report discusses the expanding role of Federal judges as educational policymakers. The report discusses court decisions related to interpretations by the Federal Courts of the U.S. Constitution. The report notes that court decisions have covered the following topics: dress codes, flying of the flag, freedom of speech, unwed mothers,…

  9. Spanish-English Speech Perception in Children and Adults: Developmental Trends

    ERIC Educational Resources Information Center

    Brice, Alejandro E.; Gorman, Brenda K.; Leung, Cynthia B.

    2013-01-01

    This study explored the developmental trends and phonetic category formation in bilingual children and adults. Participants included 30 fluent Spanish-English bilingual children, aged 8-11, and bilingual adults, aged 18-40. All completed gating tasks that incorporated code-mixed Spanish-English stimuli. There were significant differences in…

  10. Student Disciplinary Codes -- What Makes Them Tick.

    ERIC Educational Resources Information Center

    Johnson, Donald V.

    In this speech, the author describes how one school developed discipline guidelines with the cooperation of staff, parents, and students. Due process procedures, types of discipline, and an alternative out-of-school program for adjustment students (those who have experienced chronic or serious disciplinary problems in the school) are described.…

  11. Predictive top-down integration of prior knowledge during speech perception.

    PubMed

    Sohoglu, Ediz; Peelle, Jonathan E; Carlyon, Robert P; Davis, Matthew H

    2012-06-20

    A striking feature of human perception is that our subjective experience depends not only on sensory information from the environment but also on our prior knowledge or expectations. The precise mechanisms by which sensory information and prior knowledge are integrated remain unclear, with longstanding disagreement concerning whether integration is strictly feedforward or whether higher-level knowledge influences sensory processing through feedback connections. Here we used concurrent EEG and MEG recordings to determine how sensory information and prior knowledge are integrated in the brain during speech perception. We manipulated listeners' prior knowledge of speech content by presenting matching, mismatching, or neutral written text before a degraded (noise-vocoded) spoken word. When speech conformed to prior knowledge, subjective perceptual clarity was enhanced. This enhancement in clarity was associated with a spatiotemporal profile of brain activity uniquely consistent with a feedback process: activity in the inferior frontal gyrus was modulated by prior knowledge before activity in lower-level sensory regions of the superior temporal gyrus. In parallel, we parametrically varied the level of speech degradation, and therefore the amount of sensory detail, so that changes in neural responses attributable to sensory information and prior knowledge could be directly compared. Although sensory detail and prior knowledge both enhanced speech clarity, they had an opposite influence on the evoked response in the superior temporal gyrus. We argue that these data are best explained within the framework of predictive coding in which sensory activity is compared with top-down predictions and only unexplained activity propagated through the cortical hierarchy.

  12. Co-speech hand movements during narrations: What is the impact of right vs. left hemisphere brain damage?

    PubMed

    Hogrefe, Katharina; Rein, Robert; Skomroch, Harald; Lausberg, Hedda

    2016-12-01

    Persons with brain damage show deviant patterns of co-speech hand movement behaviour in comparison to healthy speakers. It has been claimed by several authors that gesture and speech rely on a single production mechanism that depends on the same neurological substrate, while others claim that the two modalities are closely related but separate production channels. Thus, findings so far are contradictory, and there is a lack of studies that systematically analyse the full range of hand movements that accompany speech in the condition of brain damage. In the present study, we aimed to fill this gap by comparing hand movement behaviour in persons with unilateral brain damage to the left and the right hemisphere and a matched control group of healthy persons. For hand movement coding, we applied Module I of NEUROGES, an objective and reliable analysis system that enables analysis of the full repertoire of hand movements independent of speech, making it specifically suited to the examination of persons with aphasia. The main results of our study show a decreased use of communicative conceptual gestures in persons with damage to the right hemisphere and an increased use of these gestures in persons with left brain damage and aphasia. These results not only suggest that the production of gesture and speech do not rely on the same neurological substrate but also underline the important role of right hemisphere functioning for gesture production. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    PubMed

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
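The template-based decoding step described in this record can be sketched directly: score each candidate word by the length of the longest common sub-sequence (LCS) between the incoming spike-label sequence and the word's stored template, then pick the best-scoring word. The sequences and word labels below are hypothetical placeholders, and the paper's actual similarity measure may normalize differently:

```python
def lcs_length(a, b):
    """Classic dynamic-programming length of the longest common sub-sequence."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ai in enumerate(a, 1):
        for j, bj in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ai == bj else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def recognize(spikes, templates):
    """Return the word whose template has the highest normalized LCS similarity."""
    def score(t):
        return lcs_length(spikes, t) / max(len(spikes), len(t))
    return max(templates, key=lambda word: score(templates[word]))

# Hypothetical spike-label sequences (each letter = one feature-detector neuron firing)
templates = {"one": "ABCD", "two": "BDCA", "three": "CCAB"}
print(lcs_length("ABCBDAB", "BDCABA"))  # 4
print(recognize("ABXCD", templates))    # one
```

Because the LCS tolerates insertions and deletions, a noisy sequence with spurious or missing spikes (the extra "X" above) can still match its clean template, which is the intuition behind the claimed noise robustness.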

  14. Intonational Rises and Dialog Acts in the Australian English Map Task.

    ERIC Educational Resources Information Center

    Fletcher, Janet; Stirling, Lesley; Mushin, Ilana; Wales, Roger

    2002-01-01

    Eight map task dialogs representative of general Australian English were coded for speaker turn and for dialog acts using a version of SWBD-DAMSL, a dialog act annotation scheme. High, low, simple, and complex rising tunes, and any corresponding dialog act codes were then compared. The Australian statement high rise (usually realized as a L…

  15. Contemporary Criticism and the Return of Zeno.

    ERIC Educational Resources Information Center

    Harris, Wendell V.

    1983-01-01

    Suggests that contemporary critical literary theories such as hermeneutics, reader-response, speech-act, structuralism, and deconstructionism share with pre-Platonic Eleatic thought a distrust of cause-and-effect reasoning and an emphasis on paradox. (MM)

  16. 36 CFR 1120.22 - Requests to which this subpart applies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the Freedom of Information Act, 5 U.S.C. 552, except with respect to records for which a less formal... speeches, press releases, and educational materials, shall be honored. No individual determination under...

  17. U.S. Pacific Command > About USPACOM > USPACOM Area of Responsibility

    Science.gov Websites


  18. Commander, U.S. Pacific Command > U.S. Pacific Command > Article View

    Science.gov Websites


  19. U.S. Pacific Command > Contact > Directory > J0 > Office of Inspector

    Science.gov Websites

    PACOM Professional Development Reading List History Defense Strategic Guidance (PDF) USPACOM Area of Speeches / Testimony Freedom of Information Act FOIA - Reading Room Submit FOIA Request Request Status FOIA

  20. 36 CFR 1120.22 - Requests to which this subpart applies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the Freedom of Information Act, 5 U.S.C. 552, except with respect to records for which a less formal... speeches, press releases, and educational materials, shall be honored. No individual determination under...

  1. 21 CFR 20.23 - Request for existing records.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... request for records pursuant to the Freedom of Information Act, whether or not the Freedom of Information..., speeches, and educational materials, shall be furnished free of charge upon request as long as the supply...

  2. Liars and Ghosts in the House of Congress: Frank's "Ad Hominem" Arguments in the Case against the Defense of Marriage Act.

    ERIC Educational Resources Information Center

    Clarke, Lynn E.

    2000-01-01

    Offers a critical analysis of Rep. Barney Frank's speech delivered in the House of Representatives concerning the "Defense of Marriage Act." Argues that Frank attempts to persuade colleagues by advancing two "ad hominem" arguments, one of which could potentially shift the focus from the need to defend marriages from same-sex…

  3. Dramatic Arts in the Secondary School. Michigan Speech Association Curriculum Guide Series, No. 1.

    ERIC Educational Resources Information Center

    Ratliffe, Sharon

    The 15 units in this curriculum guide for dramatic arts in the high school are intended to provide learning experiences to facilitate the student's personal development. Eight of the units deal with various aspects of acting, criticism, and history (e.g., building a character, presenting a one-act play, the history of the drama and the theater,…

  4. Bilingual processing of ASL-English code-blends: The consequences of accessing two lexical representations simultaneously

    PubMed Central

    Emmorey, Karen; Petrich, Jennifer; Gollan, Tamar H.

    2012-01-01

    Bilinguals who are fluent in American Sign Language (ASL) and English often produce code-blends - simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization times (Experiment 2) for code-blends versus ASL signs and English words produced alone. In production, code-blending did not slow lexical retrieval for ASL and actually facilitated access to low-frequency signs. However, code-blending delayed speech production because bimodal bilinguals synchronized English and ASL lexical onsets. In comprehension, code-blending speeded access to both languages. Bimodal bilinguals’ ability to produce code-blends without any cost to ASL implies that the language system either has (or can develop) a mechanism for switching off competition to allow simultaneous production of close competitors. Code-blend facilitation effects during comprehension likely reflect cross-linguistic (and cross-modal) integration at the phonological and/or semantic levels. The absence of any consistent processing costs for code-blending illustrates a surprising limitation on dual-task costs and may explain why bimodal bilinguals code-blend more often than they code-switch. PMID:22773886

  5. Homeland Security as a Stock Market: Antifragility as a Strategy for Homeland Security

    DTIC Science & Technology

    2013-12-01

    analogy.”27 Merriam-Webster defines metaphor as “a figure of speech in which one word or phrase literally denoting one kind of object or idea is used in...things agree with one another in some respects they will probably agree in others.”39 Wolfe expands this definition as “a figure of speech where one or...and Sedition Acts, the rise of the Red Scare in the 1950s and arguably the unsustainable post-9/11 buildup of homeland security resources and laws

  6. Status Report on Speech Research. A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for Its Investigation, and Practical Applications.

    DTIC Science & Technology

    1985-01-01

    vision. In R. Shaw & J. Bransford (Eds.), Perceiving, acting and knowing: Toward an ecological psychology. Hillsdale, NJ: Erlbaum. ...may vary across subjects and across languages (Bell-Berti, Raphael, Pisoni, & Sawusch, 1979; Wood, 1982). Second, the tongue has been observed to...Society of America, 75, S23-S24. (Abstract) Wood, S. (1982). X-ray and model studies of vowel articulation. Working Papers (Lund University

  7. VERBAL AND SPATIAL WORKING MEMORY LOAD HAVE SIMILARLY MINIMAL EFFECTS ON SPEECH PRODUCTION.

    PubMed

    Lee, Ogyoung; Redford, Melissa A

    2015-08-10

    The goal of the present study was to test the effects of working memory on speech production. Twenty American-English speaking adults produced syntactically complex sentences in tasks that taxed either verbal or spatial working memory. Sentences spoken under load were produced with more errors, fewer prosodic breaks, and at faster rates than sentences produced in the control conditions, but other acoustic correlates of rhythm and intonation did not change. Verbal and spatial working memory had very similar effects on production, suggesting that the different span tasks used to tax working memory merely shifted speakers' attention away from the act of speaking. This finding runs contrary to the hypothesis of incremental phonological/phonetic encoding, which predicts the manipulation of information in verbal working memory during speech production.

  8. Effect of high-frequency spectral components in computer recognition of dysarthric speech based on a Mel-cepstral stochastic model.

    PubMed

    Polur, Prasad D; Miller, Gerald E

    2005-01-01

    Computer speech recognition of individuals with dysarthria, such as cerebral palsy patients, requires a robust technique that can handle conditions of very high variability and limited training data. In this study, a hidden Markov model (HMM) was constructed and conditions investigated that would provide improved performance for a dysarthric speech (isolated word) recognition system intended to act as an assistive/control tool. In particular, we investigated the effect of high-frequency spectral components on the recognition rate of the system to determine if they contributed useful additional information to the system. A small-size vocabulary spoken by three cerebral palsy subjects was chosen. Mel-frequency cepstral coefficients extracted with the use of 15 ms frames served as training input to an ergodic HMM setup. Subsequent results demonstrated that no significant useful information was available to the system for enhancing its ability to discriminate dysarthric speech above 5.5 kHz in the current set of dysarthric data. The level of variability in input dysarthric speech patterns limits the reliability of the system. However, its application as a rehabilitation/control tool to assist dysarthric motor-impaired individuals such as cerebral palsy subjects holds sufficient promise.
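
    The front end the abstract names (mel-frequency cepstral coefficients over 15 ms frames, feeding an HMM) can be sketched as below. This is a minimal illustration using only NumPy; the sampling rate, filterbank size, and coefficient count are assumptions for the sketch, not the study's settings.

```python
import numpy as np

def mfcc(signal, sr=16000, frame_ms=15, n_filters=26, n_ceps=12):
    """Mel-frequency cepstral coefficients over fixed-length frames (sketch)."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    frames = frames * np.hamming(frame_len)          # taper each frame
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # power spectrum

    # Triangular mel filterbank between 0 Hz and the Nyquist frequency
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((frame_len + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, spec.shape[1]))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fbank[i - 1, k] = (k - lo) / max(c - lo, 1)   # rising slope
        for k in range(c, hi):
            fbank[i - 1, k] = (hi - k) / max(hi - c, 1)   # falling slope

    log_energy = np.log(spec @ fbank.T + 1e-10)
    # DCT-II over log filterbank energies yields the cepstral coefficients
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_filters)))
    return log_energy @ dct.T
```

    The resulting per-frame coefficient vectors would serve as the observation sequence for HMM training; truncating the spectrum (e.g., at 5.5 kHz) amounts to zeroing the upper filterbank channels.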

  9. Universal and language-specific sublexical cues in speech perception: a novel electroencephalography-lesion approach.

    PubMed

    Obrig, Hellmuth; Mentzel, Julia; Rossi, Sonja

    2016-06-01

    See Cappa (doi:10.1093/brain/aww090) for a scientific commentary on this article. The phonological structure of speech supports the highly automatic mapping of sound to meaning. While it is uncontroversial that phonotactic knowledge acts upon lexical access, it is unclear at what stage these combinatorial rules, governing phonological well-formedness in a given language, shape speech comprehension. Moreover, few studies have investigated the neuronal network affording this important step in speech comprehension. Therefore we asked 70 participants, half of whom suffered from a chronic left-hemispheric lesion, to listen to 252 different monosyllabic pseudowords. The material models universal preferences of phonotactic well-formedness by including naturally spoken pseudowords and digitally reversed exemplars. The latter partially violate phonological structure of all human speech and are rich in universally dispreferred phoneme sequences while preserving basic auditory parameters. Language-specific constraints were modelled in that half of the naturally spoken pseudowords complied with the phonotactics of the native language of the monolingual participants (German) while the other half did not. To ensure universal well-formedness and naturalness, the latter stimuli comply with Slovak phonotactics and all stimuli were produced by an early bilingual speaker. To maximally attenuate lexico-semantic influences, transparent pseudowords were avoided and participants had to detect immediate repetitions, a task orthogonal to the contrasts of interest. The results show that phonological 'well-formedness' modulates implicit processing of speech at different levels: universally dispreferred phonological structure elicits early, medium and late latency differences in the evoked potential. By contrast, the language-specific phonotactic contrast selectively modulates a medium latency component of the event-related potentials around 400 ms.
Using a novel event-related potential-lesion approach allowed us to furthermore supply first evidence that implicit processing of these different phonotactic levels relies on partially separable brain areas in the left hemisphere: contrasting forward to reversed speech the approach delineated an area comprising supramarginal and angular gyri. Conversely, the contrast between legal versus illegal phonotactics consistently projected to anterior and middle portions of the middle temporal and superior temporal gyri. Our data support the notion that phonological structure acts on different stages of phonologically and lexically driven steps of speech comprehension. In the context of previous work we propose context-dependent sensitivity to different levels of phonotactic well-formedness. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Pragmatic Transfer.

    ERIC Educational Resources Information Center

    Kasper, Gabriele

    1992-01-01

    Attempting to clarify the concept of pragmatic transfer, this article proposes as a basic distinction Leech/Thomas' dichotomy of sociopragmatics versus pragmalinguistics, presenting evidence for transfer at both levels. Issues discussed include pragmatic universals in speech act realization, conditions for pragmatic transfer, communicative…

  11. 30 CFR 905.773 - Requirements for permits and permit processing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., 42 U.S.C. 7401 et seq California Air Pollution Control Laws, Cal. Health & Safety Code section 39000... (11) Noise Control Act, 42 U.S.C. 4903 California Noise Control Act of 1973, Cal. Health & Safety Code... Pollution Control Laws, Cal. Health & Safety Code section 39000 et seq.; the Hazardous Waste Control Law...

  12. Automatic Coding of Dialogue Acts in Collaboration Protocols

    ERIC Educational Resources Information Center

    Erkens, Gijsbert; Janssen, Jeroen

    2008-01-01

    Although protocol analysis can be an important tool for researchers to investigate the process of collaboration and communication, the use of this method of analysis can be time consuming. Hence, an automatic coding procedure for coding dialogue acts was developed. This procedure helps to determine the communicative function of messages in online…
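
    An automatic dialogue-act coder of the kind the abstract describes is often built from surface cues. The sketch below is a hypothetical first-match rule set for illustration only; the categories and patterns are assumptions, not the rule set of Erkens and Janssen.

```python
import re

# Illustrative dialogue-act categories and cue patterns (hypothetical).
# Rules are tried in order; the first match wins.
RULES = [
    ("question",     re.compile(r"\?\s*$|^(who|what|when|where|why|how|do|does|can|could)\b", re.I)),
    ("agreement",    re.compile(r"^(yes|yeah|ok|okay|right|agreed|sure)\b", re.I)),
    ("disagreement", re.compile(r"^(no|nope|but|however)\b", re.I)),
    ("proposal",     re.compile(r"\b(let's|we should|maybe we|how about)\b", re.I)),
]

def code_dialogue_act(utterance):
    """Assign the first matching act label; default to 'statement'."""
    for label, pattern in RULES:
        if pattern.search(utterance):
            return label
    return "statement"
```

    Applying such a coder to every message in a chat protocol yields a label sequence whose distribution and transitions can then be analyzed in place of hand coding.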

  13. Speech parts as Poisson processes.

    PubMed

    Badalamenti, A F

    2001-09-01

    This paper presents evidence that six of the seven parts of speech occur in written text as Poisson processes, simple or recurring. The six major parts are nouns, verbs, adjectives, adverbs, prepositions, and conjunctions, with the interjection occurring too infrequently to support a model. The data consist of more than the first 5000 words of works by four major authors coded to label the parts of speech, as well as periods (sentence terminators). Sentence length is measured via the period and found to be normally distributed with no stochastic model identified for its occurrence. The models for all six speech parts but the noun significantly distinguish some pairs of authors and likewise for the joint use of all words types. Any one author is significantly distinguished from any other by at least one word type and sentence length very significantly distinguishes each from all others. The variety of word type use, measured by Shannon entropy, builds to about 90% of its maximum possible value. The rate constants for nouns are close to the fractions of maximum entropy achieved. This finding together with the stochastic models and the relations among them suggest that the noun may be a primitive organizer of written text.
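
    Two of the quantities the abstract relies on can be sketched simply: a Poisson check via the variance-to-mean ratio of per-window tag counts (approximately 1 under a simple Poisson model), and Shannon entropy of word-type use as a fraction of its maximum. Window size and tag names below are illustrative assumptions.

```python
from collections import Counter
import math

def dispersion_index(tags, target, window=50):
    """Variance-to-mean ratio of per-window counts of one part of speech;
    values near 1 are consistent with a simple Poisson process."""
    counts = [tags[i:i + window].count(target)
              for i in range(0, len(tags) - window + 1, window)]
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean if mean else float("nan")

def entropy_fraction(tags):
    """Shannon entropy of the tag distribution as a fraction of its maximum
    (log2 of the number of distinct tags)."""
    freq = Counter(tags)
    n = sum(freq.values())
    h = -sum((c / n) * math.log2(c / n) for c in freq.values())
    h_max = math.log2(len(freq))
    return h / h_max if h_max else 0.0
```

    On a tagged text, a dispersion index far from 1 for some part of speech would argue against the simple Poisson model for that word type.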

  14. Role of maternal gesture use in speech use by children with fragile X syndrome.

    PubMed

    Hahn, Laura J; Zimmer, B Jean; Brady, Nancy C; Swinburne Romine, Rebecca E; Fleming, Kandace K

    2014-05-01

    The purpose of this study was to investigate how maternal gesture relates to speech production by children with fragile X syndrome (FXS). Participants were 27 young children with FXS (23 boys, 4 girls) and their mothers. Videotaped home observations were conducted between the ages of 25 and 37 months (toddler period) and again between the ages of 60 and 71 months (child period). The videos were later coded for types of maternal utterances and maternal gestures that preceded child speech productions. Children were also assessed with the Mullen Scales of Early Learning at both ages. Maternal gesture use in the toddler period was positively related to expressive language scores at both age periods and was related to receptive language scores in the child period. Maternal proximal pointing, in comparison to other gestures, evoked more speech responses from children during the mother-child interactions, particularly when combined with wh-questions. This study adds to the growing body of research on the importance of contextual variables, such as maternal gestures, in child language development. Parental gesture use may be an easily added ingredient to parent-focused early language intervention programs.

  15. Is the Speech Transmission Index (STI) a robust measure of sound system speech intelligibility performance?

    NASA Astrophysics Data System (ADS)

    Mapp, Peter

    2002-11-01

    Although RaSTI is a good indicator of the speech intelligibility capability of auditoria and similar spaces, during the past 2-3 years it has been shown that RaSTI is not a robust predictor of sound system intelligibility performance. Instead, it is now recommended, within both national and international codes and standards, that full STI measurement and analysis be employed. However, new research is reported that indicates that STI is neither as flawless nor as robust as many believe. The paper highlights a number of potential error mechanisms. It is shown that the measurement technique and signal excitation stimulus can have a significant effect on the overall result and accuracy, particularly where DSP-based equipment is employed. It is also shown that in its current state of development, STI is not capable of appropriately accounting for a number of fundamental speech and system attributes, including typical sound system frequency response variations and anomalies. This is particularly shown to be the case when a system is operating under reverberant conditions. Comparisons between actual system measurements and corresponding word score data are reported, where errors of up to 50% occur. Implications for VA and PA system performance verification will be discussed.

  16. Automatic Conversational Scene Analysis in Children with Asperger Syndrome/High-Functioning Autism and Typically Developing Peers

    PubMed Central

    Tavano, Alessandro; Pesarin, Anna; Murino, Vittorio; Cristani, Marco

    2014-01-01

    Individuals with Asperger syndrome/High Functioning Autism fail to spontaneously attribute mental states to the self and others, a life-long phenotypic characteristic known as mindblindness. We hypothesized that mindblindness would affect the dynamics of conversational interaction. Using generative models, in particular Gaussian mixture models and observed influence models, conversations were coded as interacting Markov processes, operating on novel speech/silence patterns, termed Steady Conversational Periods (SCPs). SCPs assume that whenever an agent's process changes state (e.g., from silence to speech), it causes a general transition of the entire conversational process, forcing inter-actant synchronization. SCPs fed into observed influence models, which captured the conversational dynamics of children and adolescents with Asperger syndrome/High Functioning Autism, and age-matched typically developing participants. Analyzing the parameters of the models by means of discriminative classifiers, the dialogs of patients were successfully distinguished from those of control participants. We conclude that meaning-free speech/silence sequences, reflecting inter-actant synchronization, at least partially encode typical and atypical conversational dynamics. This suggests a direct influence of theory of mind abilities onto basic speech initiative behavior. PMID:24489674
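
    The core representation here, a per-slice speech/silence state sequence coded as an interacting Markov process, can be sketched as a first-order transition matrix estimated from one speaker's sequence. This is a minimal illustration of the representation, not the authors' Gaussian mixture / observed influence pipeline.

```python
import numpy as np

def transition_matrix(states, n_states=2):
    """Row-normalized first-order Markov transition counts for a state
    sequence, e.g. 0 = silence, 1 = speech per fixed time slice."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1  # count each observed transition a -> b
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize each row to probabilities; leave all-zero rows at zero
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)
```

    The entries of such matrices (and their coupling across speakers) are the kind of model parameters that a discriminative classifier could then use to separate groups of dialogs.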

  17. From Interdisciplinary to Integrated Care of the Child with Autism: The Essential Role for a Code of Ethics

    ERIC Educational Resources Information Center

    Cox, David J.

    2012-01-01

    To address the developmental deficits of children with autism, several disciplines have come to the forefront within intervention programs. These are speech-pathologists, psychologists/counselors, occupational-therapists/physical-therapists, special-education consultants, behavior analysts, and physicians/medical personnel. As the field of autism…

  18. Speech Perception Deficits in Poor Readers: Auditory Processing or Phonological Coding?

    ERIC Educational Resources Information Center

    Mody, Maria; And Others

    1997-01-01

    Forty second-graders, 20 good and 20 poor readers, completed a /ba/-/da/ temporal order judgment (TOJ) task. The groups did not differ in TOJ when /ba/ and /da/ were paired with more easily discriminated syllables. Poor readers' difficulties with /ba/-/da/ reflected perceptual confusion between phonetically similar syllables rather than difficulty…

  19. Predicting Phonetic Transcription Agreement: Insights from Research in Infant Vocalizations

    ERIC Educational Resources Information Center

    Ramsdell, Heather L.; Oller, D. Kimbrough; Ethington, Corinna A.

    2007-01-01

    The purpose of this study is to provide new perspectives on correlates of phonetic transcription agreement. Our research focuses on phonetic transcription and coding of infant vocalizations. The findings are presumed to be broadly applicable to other difficult cases of transcription, such as found in severe disorders of speech, which similarly…

  20. Searching for Syllabic Coding Units in Speech Perception

    ERIC Educational Resources Information Center

    Dumay, Nicolas; Content, Alain

    2012-01-01

    Two auditory priming experiments tested whether the effect of final phonological overlap relies on syllabic representations. Amount of shared phonemic information and syllabic status of the overlap between nonword primes and targets were varied orthogonally. In the related conditions, CV.CCVC items shared the last syllable (e.g., vi.klyd-p[image…

  1. The Effects of Prohibiting Gestures on Children's Lexical Retrieval Ability

    ERIC Educational Resources Information Center

    Pine, Karen J.; Bird, Hannah; Kirk, Elizabeth

    2007-01-01

    Two alternative accounts have been proposed to explain the role of gestures in thinking and speaking. The Information Packaging Hypothesis (Kita, 2000) claims that gestures are important for the conceptual packaging of information before it is coded into a linguistic form for speech. The Lexical Retrieval Hypothesis (Rauscher, Krauss & Chen, 1996)…

  2. Design and Evaluation of a Cochlear Implant Strategy Based on a “Phantom” Channel

    PubMed Central

    Nogueira, Waldo; Litvak, Leonid M.; Saoji, Aniket A.; Büchner, Andreas

    2015-01-01

    Unbalanced bipolar stimulation, delivered using charge balanced pulses, was used to produce “Phantom stimulation”, stimulation beyond the most apical contact of a cochlear implant’s electrode array. The Phantom channel was allocated audio frequencies below 300 Hz in a speech coding strategy, conveying energy some two octaves lower than the clinical strategy and hence delivering the fundamental frequency of speech and of many musical tones. A group of 12 Advanced Bionics cochlear implant recipients took part in a chronic study investigating the fitting of the Phantom strategy and speech and music perception when using Phantom. The evaluation of speech in noise was performed immediately after fitting Phantom for the first time (Session 1) and after one month of take-home experience (Session 2). A repeated-measures analysis of variance (ANOVA) with within-subject factors strategy (Clinical, Phantom) and time (Session 1, Session 2) revealed a significant interaction of time and strategy. Phantom obtained a significant improvement in speech intelligibility after one month of use. Furthermore, a trend towards better performance with Phantom (48%) than with F120 (37%) after 1 month of use failed to reach significance after type 1 error correction. Questionnaire results show a preference for Phantom when listening to music, likely driven by an improved balance between high and low frequencies. PMID:25806818

  3. Are written and spoken recall of text equivalent?

    PubMed

    Kellogg, Ronald T

    2007-01-01

    Writing is less practiced than speaking, graphemic codes are activated only in writing, and the retrieved representations of the text must be maintained in working memory longer because handwritten output is slower than speech. These extra demands on working memory could result in less effort being given to retrieval during written compared with spoken text recall. To test this hypothesis, college students read or heard Bartlett's "War of the Ghosts" and then recalled the text in writing or speech. Spoken recall produced more accurately recalled propositions and more major distortions (e.g., inferences) than written recall. The results suggest that writing reduces the retrieval effort given to reconstructing the propositions of a text.

  4. Using the Natural Language Paradigm (NLP) to increase vocalizations of older adults with cognitive impairments.

    PubMed

    Leblanc, Linda A; Geiger, Kaneen B; Sautter, Rachael A; Sidener, Tina M

    2007-01-01

    The Natural Language Paradigm (NLP) has proven effective in increasing spontaneous verbalizations for children with autism. This study investigated the use of NLP with older adults with cognitive impairments served at a leisure-based adult day program for seniors. Three individuals with limited spontaneous use of functional language participated in a multiple baseline design across participants. Data were collected on appropriate and inappropriate vocalizations with appropriate vocalizations coded as prompted or unprompted during baseline and treatment sessions. All participants experienced increases in appropriate speech during NLP with variable response patterns. Additionally, the two participants with substantial inappropriate vocalizations showed decreases in inappropriate speech. Implications for intervention in day programs are discussed.

  5. Echanges, interventions et actes de langage dans la structure de la conversation (Exchanges, Turns at Talk and Speech Acts in the Structure of Conversation).

    ERIC Educational Resources Information Center

    Roulet, Eddy

    1981-01-01

    Attempts to show how the surface structure of conversation can be described by means of a few principles and simple categories, regardless of its level of complexity. Accordingly, proposes a model that emphasizes the pragmatic functions of certain connectors and markers in the context of conversation exchanges. Societe Nouvelle Didier Erudition,…

  6. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    PubMed

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.
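
    The interrater reliability that coding tools like this report is commonly Cohen's kappa, chance-corrected agreement between two coders' parallel code sequences. A minimal sketch of that statistic (not part of CACTI itself):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two equal-length code sequences."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected chance agreement from each coder's marginal code frequencies
    fa, fb = Counter(codes_a), Counter(codes_b)
    expected = sum(fa[k] * fb.get(k, 0) for k in fa) / (n * n)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0
```

    For example, two coders agreeing on 4 of 5 utterances can still have kappa well below their raw 0.80 agreement once chance agreement is subtracted.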

  7. Examining the relationship between comprehension and production processes in code-switched language

    PubMed Central

    Guzzardo Tamargo, Rosa E.; Valdés Kroff, Jorge R.; Dussias, Paola E.

    2016-01-01

    We employ code-switching (the alternation of two languages in bilingual communication) to test the hypothesis, derived from experience-based models of processing (e.g., Boland, Tanenhaus, Carlson, & Garnsey, 1989; Gennari & MacDonald, 2009), that bilinguals are sensitive to the combinatorial distributional patterns derived from production and that they use this information to guide processing during the comprehension of code-switched sentences. An analysis of spontaneous bilingual speech confirmed the existence of production asymmetries involving two auxiliary + participle phrases in Spanish–English code-switches. A subsequent eye-tracking study with two groups of bilingual code-switchers examined the consequences of the differences in distributional patterns found in the corpus study for comprehension. Participants’ comprehension costs mirrored the production patterns found in the corpus study. Findings are discussed in terms of the constraints that may be responsible for the distributional patterns in code-switching production and are situated within recent proposals of the links between production and comprehension. PMID:28670049

  8. Examining the relationship between comprehension and production processes in code-switched language.

    PubMed

    Guzzardo Tamargo, Rosa E; Valdés Kroff, Jorge R; Dussias, Paola E

    2016-08-01

    We employ code-switching (the alternation of two languages in bilingual communication) to test the hypothesis, derived from experience-based models of processing (e.g., Boland, Tanenhaus, Carlson, & Garnsey, 1989; Gennari & MacDonald, 2009), that bilinguals are sensitive to the combinatorial distributional patterns derived from production and that they use this information to guide processing during the comprehension of code-switched sentences. An analysis of spontaneous bilingual speech confirmed the existence of production asymmetries involving two auxiliary + participle phrases in Spanish-English code-switches. A subsequent eye-tracking study with two groups of bilingual code-switchers examined the consequences of the differences in distributional patterns found in the corpus study for comprehension. Participants' comprehension costs mirrored the production patterns found in the corpus study. Findings are discussed in terms of the constraints that may be responsible for the distributional patterns in code-switching production and are situated within recent proposals of the links between production and comprehension.

  9. Experience with code-switching modulates the use of grammatical gender during sentence processing

    PubMed Central

    Valdés Kroff, Jorge R.; Dussias, Paola E.; Gerfen, Chip; Perrotti, Lauren; Bajo, M. Teresa

    2016-01-01

    Using code-switching as a tool to illustrate how language experience modulates comprehension, the visual world paradigm was employed to examine the extent to which gender-marked Spanish determiners facilitate upcoming target nouns in a group of Spanish-English bilingual code-switchers. The first experiment tested target Spanish nouns embedded in a carrier phrase (Experiment 1b) and included a control Spanish monolingual group (Experiment 1a). The second set of experiments included critical trials in which participants heard code-switches from Spanish determiners into English nouns (e.g., la house) either in a fixed carrier phrase (Experiment 2a) or in variable and complex sentences (Experiment 2b). Across the experiments, bilinguals revealed an asymmetric gender effect in processing, showing facilitation only for feminine target items. These results reflect the asymmetric use of gender in the production of code-switched speech. The extension of the asymmetric effect into Spanish (Experiment 1b) underscores the permeability between language modes in bilingual code-switchers. PMID:28663771

  10. Training Peer Partners to Use a Speech-Generating Device With Classmates With Autism Spectrum Disorder: Exploring Communication Outcomes Across Preschool Contexts.

    PubMed

    Thiemann-Bourque, Kathy S; McGuff, Sara; Goldstein, Howard

    2017-09-18

    This study examined effects of a peer-mediated intervention that provided training on the use of a speech-generating device for preschoolers with severe autism spectrum disorder (ASD) and peer partners. Effects were examined using a multiple probe design across 3 children with ASD and limited to no verbal skills. Three peers without disabilities were taught to Stay, Play, and Talk using a GoTalk 4+ (Attainment Company) and were then paired up with a classmate with ASD in classroom social activities. Measures included rates of communication acts, communication mode and function, reciprocity, and engagement with peers. Following peer training, intervention effects were replicated across 3 peers, who all demonstrated an increased level and upward trend in communication acts to their classmates with ASD. Outcomes also revealed moderate intervention effects and increased levels of peer-directed communication for 3 children with ASD in classroom centers. Additional analyses revealed higher rates of communication in the added context of preferred toys and snack. The children with ASD also demonstrated improved communication reciprocity and peer engagement. Results provide preliminary evidence on the benefits of combining peer-mediated and speech-generating device interventions to improve children's communication. Furthermore, it appears that preferred contexts are likely to facilitate greater communication and social engagement with peers.

  11. Belief Systems and Language Understanding

    DTIC Science & Technology

    1975-01-01

    Sedlak (1974), Searle (1969), Strawson (1964), and Wittgenstein (1958). [4] (1973), Charniak (1972), McCarthy and Hayes (1969), McDermott (197...in Speech Acts", Philosophical Review 73, 439-460. Wittgenstein, Ludwig 1958 Philosophical Investigations (New York: The Macmillan Co.) (Tr

  12. 77 FR 2990 - Federal Property Suitable as Facilities To Assist the Homeless

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-20

    ... 7266, Washington, DC 20410; telephone (202) 708-1234; TTY number for the hearing- and speech-impaired... of the Stewart B. McKinney Homeless Assistance Act (42 U.S.C. 11411), as amended, HUD is publishing...

  13. United States: challenges filed to anti-prostitution pledge requirement.

    PubMed

    Schleifer, Rebecca

    2005-12-01

    Two separate lawsuits were filed recently in US federal courts challenging a provision of US law requiring that non-governmental organizations have a policy "explicitly opposing prostitution" as a condition of receiving funding under the United States Leadership against HIV/AIDS, Tuberculosis, and Malaria Act of 2003 (US Global AIDS Act). US-based plaintiffs in both cases argue that the anti-prostitution pledge requirement in the Act violates US Constitutional guarantees of free speech and due process, and undermines proven, effective efforts to fight HIV/AIDS among sex workers.

  14. Is Ontario Moving to Provincial Negotiation of Teaching Contracts?

    ERIC Educational Resources Information Center

    Jefferson, Anne L.

    2008-01-01

    In Canada, the statutes governing public school teachers' collective bargaining are a combination of the provincial Labour Relations Act or Code and the respective provincial Education/School/Public Schools Act. As education is within the provincial, not federal, domain of legal responsibility, the specifics of each act or code can vary.…

  15. Effects of a metronome on the filled pauses of fluent speakers.

    PubMed

    Christenfeld, N

    1996-12-01

    Filled pauses (the "ums" and "uhs" that litter spontaneous speech) seem to be a product of the speaker paying deliberate attention to the normally automatic act of talking. This is the same sort of explanation that has been offered for stuttering. In this paper we explore whether a manipulation that has long been known to decrease stuttering, synchronizing speech to the beats of a metronome, will then also decrease filled pauses. Two experiments indicate that a metronome has a dramatic effect on the production of filled pauses. This effect is not due to any simplification or slowing of the speech and supports the view that a metronome causes speakers to attend more to how they are talking and less to what they are saying. It also lends support to the connection between stutters and filled pauses.

  16. 26 CFR 1.801-2 - Taxable years affected.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) INCOME TAXES (CONTINUED) Life Insurance Companies § 1.801-2 Taxable years affected. Section 1.801-1 is... Code are to the Internal Revenue Code of 1954, as amended by the Life Insurance Company Income Tax Act... Internal Revenue Code of 1954, as amended by the Life Insurance Company Income Tax Act of 1959 (73 Stat...

  17. 26 CFR 1.801-2 - Taxable years affected.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...) INCOME TAXES (CONTINUED) Life Insurance Companies § 1.801-2 Taxable years affected. Section 1.801-1 is... Code are to the Internal Revenue Code of 1954, as amended by the Life Insurance Company Income Tax Act... Internal Revenue Code of 1954, as amended by the Life Insurance Company Income Tax Act of 1959 (73 Stat...

  18. 26 CFR 1.801-2 - Taxable years affected.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...) INCOME TAXES (CONTINUED) Life Insurance Companies § 1.801-2 Taxable years affected. Section 1.801-1 is... Code are to the Internal Revenue Code of 1954, as amended by the Life Insurance Company Income Tax Act... Internal Revenue Code of 1954, as amended by the Life Insurance Company Income Tax Act of 1959 (73 Stat...

  19. 26 CFR 1.801-2 - Taxable years affected.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...) INCOME TAXES (CONTINUED) Life Insurance Companies § 1.801-2 Taxable years affected. Section 1.801-1 is... Code are to the Internal Revenue Code of 1954, as amended by the Life Insurance Company Income Tax Act... Internal Revenue Code of 1954, as amended by the Life Insurance Company Income Tax Act of 1959 (73 Stat...

  20. Ethics in the practice of speech-language pathology in health care settings.

    PubMed

    Kummer, Ann W; Turner, Jan

    2011-11-01

    Ethics refers to a moral philosophy or a set of moral principles that determine appropriate behavior in a society. Medical ethics includes a set of specific values that are considered in determining appropriate conduct in the practice of medicine or health care. Because the practice of medicine and medical speech-language pathology affects the health, well-being, and quality of life of the individuals served, adherence to a code of ethical conduct is critically important in the health care environment. When ethical dilemmas arise, consultation with a bioethics committee can be helpful in determining the best course of action. This article helps to define medical ethics and discusses the six basic values that are commonly considered in discussions of medical ethics. Common ethical mistakes in the practice of speech-language pathology are described. Finally, the value of a bioethics consultation for help in resolving complex ethical issues is discussed. © Thieme Medical Publishers.
