Sample records for acoustic vowel space

  1. Vowel Acoustic Space Development in Children: A Synthesis of Acoustic and Anatomic Data

    ERIC Educational Resources Information Center

    Vorperian, Houri K.; Kent, Ray D.

    2007-01-01

    Purpose: This article integrates published acoustic data on the development of vowel production. Age-specific data on formant frequencies are considered in the light of information on the development of the vocal tract (VT) to create an anatomic-acoustic description of the maturation of the vowel acoustic space for English. Method: Literature…

  2. Articulatory-acoustic vowel space: application to clear speech in individuals with Parkinson's disease.

    PubMed

    Whitfield, Jason A; Goberman, Alexander M

    2014-01-01

    Individuals with Parkinson disease (PD) often exhibit decreased range of movement secondary to the disease process, which has been shown to affect articulatory movements. A number of investigations have failed to find statistically significant differences between control and disordered groups, and between speaking conditions, using traditional vowel space area measures. The purpose of the current investigation was to evaluate both between-group (PD versus control) and within-group (habitual versus clear) differences in articulatory function using a novel vowel space measure, the articulatory-acoustic vowel space (AAVS). The novel AAVS is calculated from continuously sampled formant trajectories of connected speech. In the current study, habitual and clear speech samples from twelve individuals with PD along with habitual control speech samples from ten neurologically healthy adults were collected and acoustically analyzed. In addition, a group of listeners completed perceptual ratings of speech clarity for all samples. Individuals with PD were perceived to exhibit decreased speech clarity compared to controls. Similarly, the novel AAVS measure was significantly lower in individuals with PD. In addition, the AAVS measure significantly tracked changes between the habitual and clear conditions that were confirmed by perceptual ratings. In the current study, the novel AAVS measure is shown to be sensitive to disease-related group differences and within-person changes in articulatory function of individuals with PD. Additionally, these data confirm that individuals with PD can modulate the speech motor system to increase articulatory range of motion and speech clarity when given a simple prompt. The reader will be able to (i) describe articulatory behavior observed in the speech of individuals with Parkinson disease; (ii) describe traditional measures of vowel space area and how they relate to articulation; (iii) describe a novel measure of vowel space, the articulatory-acoustic vowel space (AAVS).
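The AAVS above is calculated from continuously sampled formant trajectories rather than from single vowel targets. The abstract does not reproduce the formula, so the sketch below uses one plausible proxy (the generalized variance of the F1-F2 trace, i.e., the area of its dispersion ellipse); the function name and this operationalization are assumptions, not the authors' published definition.

```python
import numpy as np

def aavs_proxy(f1_trace, f2_trace):
    """Proxy for an articulatory-acoustic vowel space measure.

    Assumption: the spread of continuously sampled F1/F2 values is
    summarized by the square root of the determinant of their 2x2
    covariance matrix (proportional to the dispersion-ellipse area).
    This is a sketch, not the published AAVS formula.
    """
    data = np.vstack([f1_trace, f2_trace])      # 2 x N formant samples (Hz)
    cov = np.cov(data)                          # 2 x 2 covariance matrix
    return float(np.sqrt(np.linalg.det(cov)))   # Hz^2; larger = wider space

# Hypothetical formant traces from connected speech (not study data)
rng = np.random.default_rng(0)
wide = aavs_proxy(rng.normal(500, 150, 2000), rng.normal(1500, 500, 2000))
narrow = aavs_proxy(rng.normal(500, 60, 2000), rng.normal(1500, 200, 2000))
print(wide > narrow)  # a more centralized talker yields a smaller value
```

On this proxy, a talker whose formants range more widely in connected speech scores higher, matching the direction of the reported PD-versus-control and habitual-versus-clear differences.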

  3. Effect of body position on vocal tract acoustics: Acoustic pharyngometry and vowel formants.

    PubMed

    Vorperian, Houri K; Kurtzweil, Sara L; Fourakis, Marios; Kent, Ray D; Tillman, Katelyn K; Austin, Diane

    2015-08-01

    The anatomic basis and articulatory features of speech production are often studied with imaging studies that are typically acquired in the supine body position. It is important to determine if changes in body orientation to the gravitational field alter vocal tract dimensions and speech acoustics. The purpose of this study was to assess the effect of body position (upright versus supine) on (1) oral and pharyngeal measurements derived from acoustic pharyngometry and (2) acoustic measurements of fundamental frequency (F0) and the first four formant frequencies (F1-F4) for the quadrilateral point vowels. Data were obtained for 27 male and female participants, aged 17 to 35 years. Acoustic pharyngometry showed a statistically significant effect of body position on volumetric measurements, with smaller values in the supine than upright position, but no changes in length measurements. Acoustic analyses of vowels showed significantly larger values in the supine than upright position for the variables of F0, F3, and the Euclidean distance from the centroid to each corner vowel in the F1-F2-F3 space. Changes in body position affected measurements of vocal tract volume but not length. Body position also affected the aforementioned acoustic variables, but the main vowel formants were preserved.
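The centroid-to-corner-vowel distance used in this study can be computed directly: average the corner vowels' (F1, F2, F3) coordinates to obtain the centroid, then take the Euclidean distance from it to each corner. A minimal sketch; the formant values below are illustrative textbook-style numbers, not the study's data.

```python
import math

def centroid_distances(vowels):
    """Euclidean distance from the centroid of the vowel set to each
    corner vowel in F1-F2-F3 space. `vowels` maps a vowel label to an
    (F1, F2, F3) tuple in Hz."""
    n = len(vowels)
    centroid = tuple(sum(v[i] for v in vowels.values()) / n for i in range(3))
    return {
        label: math.dist(v, centroid)   # Python 3.8+: N-dim Euclidean distance
        for label, v in vowels.items()
    }

# Illustrative adult male formant values (Hz), not data from this study
corners = {
    "i": (270, 2290, 3010),
    "ae": (660, 1720, 2410),
    "a": (730, 1090, 2440),
    "u": (300, 870, 2240),
}
dists = centroid_distances(corners)
print({k: round(d) for k, d in dists.items()})
```

Comparing these per-vowel distances between upright and supine recordings of the same speaker is the kind of contrast the study reports.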

  4. Acoustic and Durational Properties of Indian English Vowels

    ERIC Educational Resources Information Center

    Maxwell, Olga; Fletcher, Janet

    2009-01-01

    This paper presents findings of an acoustic phonetic analysis of vowels produced by speakers of English as a second language from northern India. The monophthongal vowel productions of a group of male speakers of Hindi and male speakers of Punjabi were recorded, and acoustic phonetic analyses of vowel formant frequencies and vowel duration were…

  5. Acoustic properties of vowel production in prelingually deafened Mandarin-speaking children with cochlear implants

    PubMed Central

    Yang, Jing; Brown, Emily; Fox, Robert A.; Xu, Li

    2015-01-01

    The present study examined the acoustic features of vowel production in Mandarin-speaking children with cochlear implants (CIs). The subjects included 14 native Mandarin-speaking, prelingually deafened children with CIs (2.9–8.3 years old) and 60 age-matched, normal-hearing (NH) children (3.1–9.0 years old). Each subject produced a list of monosyllables containing seven Mandarin vowels: [i, a, u, y, ɤ, ʅ, ɿ]. Midpoint F1 and F2 of each vowel token were extracted and normalized to eliminate the effects of different vocal tract sizes. Results showed that the CI children produced significantly longer vowels and less compact vowel categories than the NH children did. The CI children's acoustic vowel space was reduced due to a retracted production of the vowel [i]. The vowel space area showed a strong negative correlation with age at implantation (r = −0.80). The analysis of acoustic distance showed that the CI children produced corner vowels [a, u] similarly to the NH children, but other vowels (e.g., [ʅ, ɿ]) differently from the NH children, which suggests that CI children generally follow a similar developmental path of vowel acquisition as NH children. These findings highlight the importance of early implantation and have implications for clinical aural habilitation in young children with CIs. PMID:26627755
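A vowel space area bounded by the three corner vowels /i, a, u/, as used here, is simply the area of the triangle those vowels form in the F1-F2 plane, computable with the shoelace formula. A sketch with hypothetical (not the study's) formant values; the reported effect of a retracted /i/ (lowered F2) appears as a smaller area.

```python
def triangle_vsa(i, a, u):
    """Area (shoelace formula) of the F1-F2 triangle spanned by the
    corner vowels /i/, /a/, /u/; each argument is an (F1, F2) pair."""
    (x1, y1), (x2, y2), (x3, y3) = i, a, u
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Hypothetical mean formant values in Hz
full = triangle_vsa(i=(300, 2300), a=(800, 1300), u=(350, 900))
# A retracted /i/ (lower F2) shrinks the area, as reported for the CI group
retracted = triangle_vsa(i=(300, 1900), a=(800, 1300), u=(350, 900))
print(full > retracted)  # True
```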

  6. Temporal and acoustic characteristics of Greek vowels produced by adults with cerebral palsy

    NASA Astrophysics Data System (ADS)

    Botinis, Antonis; Orfanidou, Ioanna; Fourakis, Marios

    2005-09-01

    The present investigation examined the temporal and spectral characteristics of Greek vowels as produced by speakers with intact (NO) versus cerebral palsy affected (CP) neuromuscular systems. Six NO and six CP native speakers of Greek produced the Greek vowels [i, e, a, o, u] in the first syllable of CVCV nonsense words in a short carrier phrase. Stress could be on either the first or second syllable. There were three female and three male speakers in each group. In terms of temporal characteristics, the results showed that: vowels produced by CP speakers were longer than vowels produced by NO speakers; stressed vowels were longer than unstressed vowels; vowels produced by female speakers were longer than vowels produced by male speakers. In terms of spectral characteristics the results showed that the vowel space of the CP speakers was smaller than that of the NO speakers. This is similar to the results recently reported by Liu et al. [J. Acoust. Soc. Am. 117, 3879-3889 (2005)] for CP speakers of Mandarin. There was also a reduction of the acoustic vowel space defined by unstressed vowels, but this reduction was much more pronounced in the vowel productions of CP speakers than NO speakers.

  7. An Acoustic Analysis of the Vowel Space in Young and Old Cochlear-Implant Speakers

    ERIC Educational Resources Information Center

    Neumeyer, Veronika; Harrington, Jonathan; Draxler, Christoph

    2010-01-01

    The main purpose of this study was to compare acoustically the vowel spaces of two groups of cochlear implantees (CI) with two age-matched normal hearing groups. Five young test persons (15-25 years) and five older test persons (55-70 years) with CI and two control groups of the same age with normal hearing were recorded. The speech material…

  8. Degraded Vowel Acoustics and the Perceptual Consequences in Dysarthria

    NASA Astrophysics Data System (ADS)

    Lansford, Kaitlin L.

    Distorted vowel production is a hallmark characteristic of dysarthric speech, irrespective of the underlying neurological condition or dysarthria diagnosis. A variety of acoustic metrics have been used to study the nature of vowel production deficits in dysarthria; however, not all demonstrate sensitivity to the exhibited deficits. Less attention has been paid to quantifying the vowel production deficits associated with the specific dysarthrias. Attempts to characterize the relationship between naturally degraded vowel production in dysarthria and overall intelligibility have met with mixed results, leading some to question the nature of this relationship. It has been suggested that aberrant vowel acoustics may be an index of overall severity of the impairment and not an "integral component" of the intelligibility deficit. A limitation of previous work detailing perceptual consequences of disordered vowel acoustics is that overall intelligibility, not vowel identification accuracy, has been the perceptual measure of interest. A series of three experiments was conducted to address the problems outlined herein. The goals of the first experiment were to identify subsets of vowel metrics that reliably distinguish speakers with dysarthria from non-disordered speakers and differentiate the dysarthria subtypes. Vowel metrics that capture vowel centralization and reduced spectral distinctiveness among vowels differentiated dysarthric from non-disordered speakers. Vowel metrics generally failed to differentiate speakers according to their dysarthria diagnosis. The second and third experiments were conducted to evaluate the relationship between degraded vowel acoustics and the resulting percept. In the second experiment, correlation and regression analyses revealed vowel metrics that capture vowel centralization and distinctiveness and movement of the second formant frequency were most predictive of vowel identification accuracy and overall intelligibility. The third

  9. Examining Acoustic and Kinematic Measures of Articulatory Working Space: Effects of Speech Intensity.

    PubMed

    Whitfield, Jason A; Dromey, Christopher; Palmer, Panika

    2018-05-17

    The purpose of this study was to examine the effect of speech intensity on acoustic and kinematic vowel space measures and conduct a preliminary examination of the relationship between kinematic and acoustic vowel space metrics calculated from continuously sampled lingual marker and formant traces. Young adult speakers produced 3 repetitions of 2 different sentences at 3 different loudness levels. Lingual kinematic and acoustic signals were collected and analyzed. Acoustic and kinematic variants of several vowel space metrics were calculated from the formant frequencies and the position of 2 lingual markers. Traditional metrics included triangular vowel space area and the vowel articulation index. Acoustic and kinematic variants of sentence-level metrics based on the articulatory-acoustic vowel space and the vowel space hull area were also calculated. Both acoustic and kinematic variants of the sentence-level metrics significantly increased with an increase in loudness, whereas no statistically significant differences in traditional vowel-point metrics were observed for either the kinematic or acoustic variants across the 3 loudness conditions. In addition, moderate-to-strong relationships between the acoustic and kinematic variants of the sentence-level vowel space metrics were observed for the majority of participants. These data suggest that both kinematic and acoustic vowel space metrics that reflect the dynamic contributions of both consonant and vowel segments are sensitive to within-speaker changes in articulation associated with manipulations of speech intensity.
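The vowel space hull area mentioned here is the area of the convex hull enclosing all sampled points, whether those points are (F1, F2) formant samples or lingual marker positions. A self-contained sketch using Andrew's monotone-chain hull plus the shoelace formula; the sample points are hypothetical, chosen only to make the geometry easy to check.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns hull vertices in
    counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(points):
    """Shoelace area of the convex hull of 2-D samples (e.g., F1-F2
    pairs or lingual marker x-y positions)."""
    h = convex_hull(points)
    return abs(sum(h[i][0] * h[(i+1) % len(h)][1]
                   - h[(i+1) % len(h)][0] * h[i][1]
                   for i in range(len(h)))) / 2

# Hypothetical samples: a unit square plus an interior point
samples = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5)]
print(hull_area(samples))  # 1.0 -- interior points do not affect the hull
```

Because the hull is taken over continuously sampled traces from whole sentences, it captures consonant-to-vowel movement as well as vowel targets, which is why such sentence-level metrics responded to loudness when the traditional point-vowel metrics did not.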

  10. The effect of reduced vowel working space on speech intelligibility in Mandarin-speaking young adults with cerebral palsy

    NASA Astrophysics Data System (ADS)

    Liu, Huei-Mei; Tsao, Feng-Ming; Kuhl, Patricia K.

    2005-06-01

    The purpose of this study was to examine the effect of reduced vowel working space on dysarthric talkers' speech intelligibility using both acoustic and perceptual approaches. In experiment 1, the acoustic-perceptual relationship between vowel working space area and speech intelligibility was examined in Mandarin-speaking young adults with cerebral palsy. Subjects read aloud 18 bisyllabic words containing the vowels /i/, /a/, and /u/ using their normal speaking rate. Each talker's words were identified by three normal listeners. The percentages of correct vowel and word identification were calculated as vowel intelligibility and word intelligibility, respectively. Results revealed that talkers with cerebral palsy exhibited smaller vowel working space areas compared to ten age-matched controls. The vowel working space area was significantly correlated with vowel intelligibility (r=0.632, p<0.005) and with word intelligibility (r=0.684, p<0.005). Experiment 2 examined whether tokens of expanded vowel working spaces were perceived as better vowel exemplars and represented with greater perceptual spaces than tokens of reduced vowel working spaces. The results of the perceptual experiment support this prediction. The distorted vowels of talkers with cerebral palsy compose a smaller acoustic space that results in shrunken intervowel perceptual distances for listeners.
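The correlations reported above (e.g., r = 0.632 between vowel working space area and vowel intelligibility) are standard Pearson product-moment coefficients. A sketch of the computation; the per-talker values below are fabricated solely for illustration and are not the study's measurements.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-talker values: larger vowel space, higher intelligibility
vsa = [120, 180, 200, 260, 300, 150]    # vowel space area (illustrative units)
intel = [55, 70, 68, 85, 90, 60]        # percent words correct (illustrative)
print(round(pearson_r(vsa, intel), 3))
```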

  11. Acoustic Analysis of Nasal Vowels in Monguor Language

    NASA Astrophysics Data System (ADS)

    Zhang, Hanbin

    2017-09-01

    The purpose of the study is to analyze the spectrum characteristics and acoustic features of the nasal vowels [ɑ˜] and [ɔ˜] in the Monguor language. On the basis of an acoustic parameter database of Monguor speech, the study finds that five main zero-pole pairs appear for the nasal vowel [ɑ˜] and two zero-pole pairs appear for the nasal vowel [ɔ˜]. The results of regression analysis demonstrate that the duration of the nasal vowel [ɑ˜] or the nasal vowel [ɔ˜] can be predicted from its F1, F2, and F3, respectively.

  12. Contextual variation in the acoustic and perceptual similarity of North German and American English vowels

    NASA Astrophysics Data System (ADS)

    Strange, Winifred; Bohn, Ocke-Schwen; Nishi, Kanae; Trent, Sonja A.

    2005-09-01

    Strange et al. [J. Acoust. Soc. Am. 115, 1791-1807 (2004)] reported that North German (NG) front-rounded vowels in hVp syllables were acoustically intermediate between front and back American English (AE) vowels. However, AE listeners perceptually assimilated them as poor exemplars of back AE vowels. In this study, speaker- and context-independent cross-language discriminant analyses of NG and AE vowels produced in CVC syllables (C=labial, alveolar, velar stops) in sentences showed that NG front-rounded vowels fell within AE back-vowel distributions, due to the "fronting" of AE back vowels in alveolar/velar contexts. NG [ɪ, e, ɛ, ɔ] were located relatively "higher" in acoustic vowel space than their AE counterparts and varied in cross-language similarity across consonantal contexts. In a perceptual assimilation task, naive listeners classified NG vowels in terms of native AE categories and rated their goodness on a 7-point scale (very foreign to very English sounding). Both front- and back-rounded NG vowels were perceptually assimilated overwhelmingly to back AE categories and judged equally good exemplars. Perceptual assimilation patterns did not vary with context, and were not always predictable from acoustic similarity. These findings suggest that listeners adopt a context-independent strategy when judging the cross-language similarity of vowels produced and presented in continuous speech contexts.

  13. Contextual variation in the acoustic and perceptual similarity of North German and American English vowels.

    PubMed

    Strange, Winifred; Bohn, Ocke-Schwen; Nishi, Kanae; Trent, Sonja A

    2005-09-01

    Strange et al. [J. Acoust. Soc. Am. 115, 1791-1807 (2004)] reported that North German (NG) front-rounded vowels in hVp syllables were acoustically intermediate between front and back American English (AE) vowels. However, AE listeners perceptually assimilated them as poor exemplars of back AE vowels. In this study, speaker- and context-independent cross-language discriminant analyses of NG and AE vowels produced in CVC syllables (C=labial, alveolar, velar stops) in sentences showed that NG front-rounded vowels fell within AE back-vowel distributions, due to the "fronting" of AE back vowels in alveolar/velar contexts. NG [ɪ, e, ɛ, ɔ] were located relatively "higher" in acoustic vowel space than their AE counterparts and varied in cross-language similarity across consonantal contexts. In a perceptual assimilation task, naive listeners classified NG vowels in terms of native AE categories and rated their goodness on a 7-point scale (very foreign to very English sounding). Both front- and back-rounded NG vowels were perceptually assimilated overwhelmingly to back AE categories and judged equally good exemplars. Perceptual assimilation patterns did not vary with context, and were not always predictable from acoustic similarity. These findings suggest that listeners adopt a context-independent strategy when judging the cross-language similarity of vowels produced and presented in continuous speech contexts.

  14. Vowel Space Characteristics and Vowel Identification Accuracy

    ERIC Educational Resources Information Center

    Neel, Amy T.

    2008-01-01

    Purpose: To examine the relation between vowel production characteristics and intelligibility. Method: Acoustic characteristics of 10 vowels produced by 45 men and 48 women from the J. M. Hillenbrand, L. A. Getty, M. J. Clark, and K. Wheeler (1995) study were examined and compared with identification accuracy. Global (mean f0, F1, and F2;…

  15. Articulatory changes in muscle tension dysphonia: evidence of vowel space expansion following manual circumlaryngeal therapy.

    PubMed

    Roy, Nelson; Nissen, Shawn L; Dromey, Christopher; Sapir, Shimon

    2009-01-01

    In a preliminary study, we documented significant changes in formant transitions associated with successful manual circumlaryngeal treatment (MCT) of muscle tension dysphonia (MTD), suggesting improvement in speech articulation. The present study explores further the effects of MTD on vowel articulation by means of additional vowel acoustic measures. Pre- and post-treatment audio recordings of 111 women with MTD were analyzed acoustically using two measures: vowel space area (VSA) and vowel articulation index (VAI), constructed using the first (F1) and second (F2) formants of 4 point vowels /a, i, æ, u/, extracted from eight words within a standard reading passage. Pairwise t-tests revealed significant increases in both VSA and VAI, confirming that successful treatment of MTD is associated with vowel space expansion. Although MTD is considered a voice disorder, its treatment with MCT appears to positively affect vocal tract dynamics. While the precise mechanism underlying vowel space expansion remains unknown, improvements may be related to lowering of the larynx, expanding oropharyngeal space, and improving articulatory movements. The reader will be able to: (1) describe possible articulatory changes associated with successful treatment of muscle tension dysphonia; (2) describe two acoustic methods to assess vowel centralization and decentralization; and (3) understand the basis for viewing muscle tension dysphonia as a disorder not solely confined to the larynx.
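The VAI used here contrasts formants that rise with articulatory expansion against those that rise with centralization; it is commonly computed (following Sapir and colleagues, as the inverse of the formant centralization ratio) as VAI = (F2/i/ + F1/a/) / (F1/i/ + F1/u/ + F2/u/ + F2/a/). A sketch under that assumption, with illustrative formant values rather than the study's data:

```python
def vai(f1_i, f2_i, f1_u, f2_u, f1_a, f2_a):
    """Vowel articulation index: (F2/i/ + F1/a/) divided by
    (F1/i/ + F1/u/ + F2/u/ + F2/a/). Centralization lowers the
    numerator and raises the denominator, so smaller values indicate
    a more centralized vowel space."""
    return (f2_i + f1_a) / (f1_i + f1_u + f2_u + f2_a)

# Illustrative pre-/post-treatment values (Hz), not the study's data
pre  = vai(f1_i=350, f2_i=2100, f1_u=400, f2_u=1100, f1_a=750, f2_a=1350)
post = vai(f1_i=300, f2_i=2400, f1_u=350, f2_u=950, f1_a=850, f2_a=1300)
print(post > pre)  # True: vowel space expansion raises the VAI
```

Unlike VSA, the VAI is a ratio of formant sums, which makes it less sensitive to outlying single-vowel measurements.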

  16. A comprehensive three-dimensional cortical map of vowel space.

    PubMed

    Scharinger, Mathias; Idsardi, William J; Poe, Samantha

    2011-12-01

    Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral-medial, anterior-posterior, and inferior-superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom-up information but crucially involves featural-phonetic top-down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.

  17. Variability in English vowels is comparable in articulation and acoustics

    PubMed Central

    Noiray, Aude; Iskarous, Khalil; Whalen, D. H.

    2014-01-01

    The nature of the links between speech production and perception has been the subject of longstanding debate. The present study investigated the articulatory parameter of tongue height and the acoustic F1-F0 difference for the phonological distinction of vowel height in American English front vowels. Multiple repetitions of /i, ɪ, e, ε, æ/ in [(h)Vd] sequences were recorded in seven adult speakers. Articulatory (ultrasound) and acoustic data were collected simultaneously to provide a direct comparison of variability in vowel production in both domains. Results showed idiosyncratic patterns of articulation for contrasting the three front vowel pairs /i-ɪ/, /e-ε/ and /ε-æ/ across subjects, with the degree of variability in vowel articulation comparable to that observed in the acoustics for all seven participants. However, contrary to what was expected, some speakers showed reversals of tongue height for /ɪ/-/e/ that were also reflected in the acoustics, with F1 higher for /ɪ/ than for /e/. The data suggest the phonological distinction of height is conveyed via speaker-specific articulatory-acoustic patterns that do not strictly match feature descriptions. However, the acoustic signal is faithful to the articulatory configuration that generated it, carrying the crucial information for perceptual contrast. PMID:25101144

  18. Talker Differences in Clear and Conversational Speech: Acoustic Characteristics of Vowels

    ERIC Educational Resources Information Center

    Ferguson, Sarah Hargus; Kewley-Port, Diane

    2007-01-01

    Purpose: To determine the specific acoustic changes that underlie improved vowel intelligibility in clear speech. Method: Seven acoustic metrics were measured for conversational and clear vowels produced by 12 talkers--6 who previously were found (S. H. Ferguson, 2004) to produce a large clear speech vowel intelligibility effect for listeners with…

  19. Materials of acoustic analysis: sustained vowel versus sentence.

    PubMed

    Moon, Kyung Ray; Chung, Sung Min; Park, Hae Sang; Kim, Han Su

    2012-09-01

    The sustained vowel is a widely used material for acoustic analysis. However, vowel phonation does not sufficiently demonstrate sentence-based real-life phonation, and biases may occur depending on the test subject's intent during pronunciation. The purpose of this study was to investigate the differences between the results of acoustic analysis using each material. An individual prospective study. Two hundred two individuals (87 men and 115 women) with normal findings on videostroboscopy were enrolled. Acoustic analysis was done using the speech pattern element acquisition and display program. Fundamental frequency (Fx), amplitude (Ax), contact quotient (Qx), jitter, and shimmer were measured with sustained vowel-based acoustic analysis. Average fundamental frequency (FxM), average amplitude (AxM), average contact quotient (QxM), Fx perturbation (CFx), and amplitude perturbation (CAx) were measured with sentence-based acoustic analysis. Corresponding data of the two methods were compared with each other. SPSS (Statistical Package for the Social Sciences, Version 12.0; SPSS, Inc., Chicago, IL) software was used for statistical analysis. FxM was higher than Fx in men (Fx, 124.45 Hz; FxM, 133.09 Hz; P=0.000). In women, FxM seemed to be lower than Fx, but the results were not statistically significant (Fx, 210.58 Hz; FxM, 208.34 Hz; P=0.065). There was no statistically significant difference between Ax and AxM in either group. QxM was higher than Qx in men and women. Jitter was lower in men, but CFx was lower in women. Both shimmer and CAx were higher in men. Sustained vowel phonation could not be a complete substitute for real-life phonation in acoustic analysis. Characteristics of acoustic materials should be considered when choosing the material for acoustic analysis and interpreting the results. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
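Jitter and shimmer, the perturbation measures compared in this record, are conventionally defined as the mean absolute difference between consecutive cycle values (glottal periods for jitter, peak amplitudes for shimmer) relative to the mean value. A sketch of the local (percent) variant, with hypothetical cycle data; the specific analysis program used in the study may define its parameters differently.

```python
def local_perturbation(values):
    """Local jitter/shimmer (%): mean absolute difference between
    consecutive cycle values, divided by the mean value. Pass glottal
    periods (s) for jitter, peak amplitudes for shimmer."""
    diffs = [abs(a - b) for a, b in zip(values, values[1:])]
    return 100 * (sum(diffs) / len(diffs)) / (sum(values) / len(values))

# Hypothetical cycle-to-cycle glottal periods (s) for a ~125 Hz voice
periods = [0.0080, 0.0081, 0.0079, 0.0080, 0.0082]
jitter = local_perturbation(periods)
print(round(jitter, 2))
```

The same function applied to a list of per-cycle peak amplitudes yields local shimmer, which is why sustained vowels are convenient: cycle boundaries are far easier to track than in running speech.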

  20. Acoustic characteristics of different target vowels during the laryngeal telescopy.

    PubMed

    Shu, Min-Tsan; Lee, Kuo-Shen; Chang, Chin-Wen; Hsieh, Li-Chun; Yang, Cheng-Chien

    2014-10-01

    The aim of this study was to investigate the acoustic characteristics of target vowels phonated by persons with normal voice while undergoing laryngeal telescopy. The acoustic characteristics are compared to show the extent of possible differences and to speculate on their impact on phonation function. Thirty-four male subjects aged 20-39 years with normal voice were included in this study. The target vowels were /i/ and /ɛ/. Recording of voice samples was done under natural phonation and during laryngeal telescopy. The acoustic analysis included the parameters of fundamental frequency, jitter, shimmer, and noise-to-harmonic ratio. The sound of the target vowel /ɛ/ was perceived as identical in more than 90% of the subjects by the examiner and speech-language pathologist during the telescopy. Both /i/ and /ɛ/ sounds showed significant differences when compared with the results under natural phonation. There was no significant difference between /i/ and /ɛ/ during the telescopy. The present study showed that a change in target vowels during laryngeal telescopy makes no significant difference in the acoustic characteristics. The results may lead to the speculation that the phonation mechanism was not affected significantly by different vowels during the telescopy. This study may suggest that, on the principle of comfortable phonation, introduction of the target vowels /i/ and /ɛ/ is practical. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  21. An Acoustic Study of Vowels Produced by Alaryngeal Speakers in Taiwan.

    PubMed

    Liao, Jia-Shiou

    2016-11-01

    This study investigated the acoustic properties of 6 Taiwan Southern Min vowels produced by 10 laryngeal speakers (LA), 10 speakers with a pneumatic artificial larynx (PA), and 8 esophageal speakers (ES). Each of the 6 monophthongs of Taiwan Southern Min (/i, e, a, ɔ, u, ə/) was represented by a Taiwan Southern Min character and appeared randomly on a list 3 times (6 Taiwan Southern Min characters × 3 repetitions = 18 tokens). Each Taiwan Southern Min character in this study has the same syllable structure, /V/, and all were read with tone 1 (high and level). Acoustic measurements of the 1st formant, 2nd formant, and 3rd formant were taken for each vowel. Then, vowel space areas (VSAs) enclosed by /i, a, u/ were calculated for each group of speakers. The Euclidean distance between vowels in the pairs /i, a/, /i, u/, and /a, u/ was also calculated and compared across the groups. PA and ES have higher 1st or 2nd formant values than LA for each vowel. The distance is significantly shorter between vowels in the corner vowel pairs /i, a/ and /i, u/. PA and ES have a significantly smaller VSA compared with LA. In accordance with previous studies, alaryngeal speakers have higher formant frequency values than LA because they have a shortened vocal tract as a result of their total laryngectomy. Furthermore, the resonance frequencies are inversely related to the length of the vocal tract (based on the assumptions of source-filter theory). PA and ES have a smaller VSA and shorter distances between corner vowels compared with LA, which may be related to speech intelligibility. This hypothesis needs further support from future studies.
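The between-vowel Euclidean distances compared across the speaker groups are computed in the F1-F2 plane from each vowel's mean formants. A sketch; the formant values below are illustrative, not measurements from this study.

```python
import math

def pair_distances(means, pairs):
    """Euclidean distance in the F1-F2 plane between the mean formant
    positions of each vowel pair. `means` maps a vowel label to an
    (F1, F2) tuple in Hz."""
    return {pair: math.dist(means[pair[0]], means[pair[1]]) for pair in pairs}

# Illustrative mean formants (Hz) for the corner vowels
means = {"i": (280, 2250), "a": (780, 1300), "u": (320, 800)}
d = pair_distances(means, [("i", "a"), ("i", "u"), ("a", "u")])
print({f"{v1}-{v2}": round(dist) for (v1, v2), dist in d.items()})
```

Shrinkage of these pairwise distances in the alaryngeal groups is the same geometric fact as their smaller /i, a, u/ VSA, just expressed edge by edge instead of as an area.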

  22. Prosodic domain-initial effects on the acoustic structure of vowels

    NASA Astrophysics Data System (ADS)

    Fox, Robert Allen; Jacewicz, Ewa; Salmons, Joseph

    2003-10-01

    In the process of language change, vowels tend to shift in "chains," leading to reorganizations of entire vowel systems over time. A long research tradition has described such patterns, but little is understood about what factors motivate such shifts. Drawing data from changes in progress in American English dialects, the broad hypothesis is tested that changes in vowel systems are related to prosodic organization and stress patterns. Changes in vowels under greater prosodic prominence correlate directly with, and likely underlie, historical patterns of shift. This study examines acoustic characteristics of vowels at initial edges of prosodic domains [Fougeron and Keating, J. Acoust. Soc. Am. 101, 3728-3740 (1997)]. The investigation is restricted to three distinct prosodic levels: utterance (sentence-initial), phonological phrase (strong branch of a foot), and syllable (weak branch of a foot). The predicted changes in vowels /e/ and /ɛ/ in two American English dialects (from Ohio and Wisconsin) are examined along a set of acoustic parameters: duration, formant frequencies (including dynamic changes over time), and fundamental frequency (F0). In addition to traditional methodology which elicits list-like intonation, a design is adapted to examine prosodic patterns in more typical sentence intonations. [Work partially supported by NIDCD R03 DC005560-01.]

  23. Acoustic characteristics of the vowel systems of six regional varieties of American English

    PubMed Central

    Clopper, Cynthia G.; Pisoni, David B.; de Jong, Kenneth

    2012-01-01

    Previous research by speech scientists on the acoustic characteristics of American English vowel systems has typically focused on a single regional variety, despite decades of sociolinguistic research demonstrating the extent of regional phonological variation in the United States. In the present study, acoustic measures of duration and first and second formant frequencies were obtained from five repetitions of 11 different vowels produced by 48 talkers representing both genders and six regional varieties of American English. Results revealed consistent variation due to region of origin, particularly with respect to the production of low vowels and high back vowels. The Northern talkers produced shifted low vowels consistent with the Northern Cities Chain Shift, the Southern talkers produced fronted back vowels consistent with the Southern Vowel Shift, and the New England, Midland, and Western talkers produced the low back vowel merger. These findings indicate that the vowel systems of American English are better characterized in terms of the region of origin of the talkers than in terms of a single set of idealized acoustic-phonetic baselines of “General” American English and provide benchmark data for six regional varieties. PMID:16240825

  4. Acoustic characteristics of the vowel systems of six regional varieties of American English

    NASA Astrophysics Data System (ADS)

    Clopper, Cynthia G.; Pisoni, David B.; de Jong, Kenneth

    2005-09-01

    Previous research by speech scientists on the acoustic characteristics of American English vowel systems has typically focused on a single regional variety, despite decades of sociolinguistic research demonstrating the extent of regional phonological variation in the United States. In the present study, acoustic measures of duration and first and second formant frequencies were obtained from five repetitions of 11 different vowels produced by 48 talkers representing both genders and six regional varieties of American English. Results revealed consistent variation due to region of origin, particularly with respect to the production of low vowels and high back vowels. The Northern talkers produced shifted low vowels consistent with the Northern Cities Chain Shift, the Southern talkers produced fronted back vowels consistent with the Southern Vowel Shift, and the New England, Midland, and Western talkers produced the low back vowel merger. These findings indicate that the vowel systems of American English are better characterized in terms of the region of origin of the talkers than in terms of a single set of idealized acoustic-phonetic baselines of ``General'' American English and provide benchmark data for six regional varieties.

  5. Examining Acoustic and Kinematic Measures of Articulatory Working Space: Effects of Speech Intensity

    ERIC Educational Resources Information Center

    Whitfield, Jason A.; Dromey, Christopher; Palmer, Panika

    2018-01-01

    Purpose: The purpose of this study was to examine the effect of speech intensity on acoustic and kinematic vowel space measures and conduct a preliminary examination of the relationship between kinematic and acoustic vowel space metrics calculated from continuously sampled lingual marker and formant traces. Method: Young adult speakers produced 3…

  6. Colloquial Arabic vowels in Israel: a comparative acoustic study of two dialects.

    PubMed

    Amir, Noam; Amir, Ofer; Rosenhouse, Judith

    2014-10-01

    This study explores the acoustic properties of the vowel systems of two dialects of colloquial Arabic spoken in Israel. One dialect is spoken in the Galilee region in the north of Israel, and the other is spoken in the Triangle (Muthallath) region, in central Israel. These vowel systems have five short and five long vowels /i, i:, e, e:, a, a:, o, o:, u, u:/. Twenty men and twenty women from each region were included, uttering 30 vowels each. All speakers were adult Muslim native speakers of these two dialects. The studied vowels were uttered in non-pharyngeal and non-laryngeal environments in the context of CVC words, embedded in a carrier sentence. The acoustic parameters studied were the two first formants, F0, and duration. Results revealed that long vowels were approximately twice as long as short vowels and differed also in their formant values. The two dialects diverged mainly in the short vowels rather than in the long ones. An overlap was found between the two short vowel pairs /i/-/e/ and /u/-/o/. This study demonstrates the existence of dialectal differences in the colloquial Arabic vowel systems, underlining the need for further research into the numerous additional dialects found in the region.

  7. Revisiting the Canadian English vowel space

    NASA Astrophysics Data System (ADS)

    Hagiwara, Robert

    2005-04-01

    In order to fill a need for experimental-acoustic baseline measurements of Canadian English vowels, a database is currently being constructed in Winnipeg, Manitoba. The database derives from multiple repetitions of fifteen English vowels (eleven standard monophthongs, syllabic /r/ and three standard diphthongs) in /hVd/ and /hVt/ contexts, as spoken by multiple speakers. Frequencies of the first four formants are taken from three timepoints in every vowel token (25, 50, and 75% of vowel duration). Preliminary results (from five men and five women) confirm some features characteristic of Canadian English, but call others into question. For instance the merger of low back vowels appears to be complete for these speakers, but the result is a lower-mid and probably rounded vowel rather than the low back unround vowel often described. With these data Canadian Raising can be quantified as an average 200 Hz or 1.5 Bark downward shift in the frequency of F1 before voiceless /t/. Analysis of the database will lead to a more accurate picture of the Canadian English vowel system, as well as provide a practical and up-to-date point of reference for further phonetic and sociophonetic comparisons.
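    The Hz and Bark figures cited above can be related through a standard scale conversion. As a minimal sketch, using Traunmüller's approximation of the Bark scale (the example frequencies are illustrative, not measurements from this study):

```python
def hz_to_bark(f_hz):
    """Approximate Bark value for a frequency in Hz (Traunmueller's formula)."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

# Illustrative only: a ~200 Hz downward F1 shift in this region of the scale,
# e.g. from 900 Hz to 700 Hz, comes out on the order of 1.4 Bark.
shift_bark = hz_to_bark(900.0) - hz_to_bark(700.0)
```

    Because the Bark scale compresses higher frequencies, the same Hz shift corresponds to different Bark values depending on where it occurs.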

  8. Acoustic analyses of thyroidectomy-related changes in vowel phonation.

    PubMed

    Solomon, Nancy Pearl; Awan, Shaheen N; Helou, Leah B; Stojadinovic, Alexander

    2012-11-01

    Changes in vocal function that can occur after thyroidectomy were tracked with acoustic analyses of sustained vowel productions. The purpose was to determine which time-based or spectral/cepstral-based measures of two vowels were able to detect voice changes over time in patients undergoing thyroidectomy. Prospective, longitudinal, and observational clinical trial. Voice samples of sustained /ɑ/ and /i/ recorded from 70 adults before and approximately 2 weeks, 3 months, and 6 months after thyroid surgery were analyzed for jitter, shimmer, harmonic-to-noise ratio (HNR), cepstral peak prominence (CPP), low-to-high ratio of spectral energy (L/H ratio), and the standard deviations of CPP and L/H ratio. Three trained listeners rated vowel and sentence productions for the four data collection sessions for each participant. For analysis purposes, participants were categorized post hoc according to voice outcome (VO) at their first postthyroidectomy assessment session. Shimmer, HNR, and CPP differed significantly across sessions; follow-up analyses revealed the strongest effect for CPP. CPP for /ɑ/ and /i/ differed significantly between groups of participants with normal versus negative (adverse) VO and between the pre- and 2-week postthyroidectomy sessions for the negative VO group. HNR, CPP, and L/H ratio differed across vowels, but both /ɑ/ and /i/ were similarly effective in tracking voice changes over time and differentiating VO groups. This study indicated that shimmer, HNR, and CPP determined from vowel productions can be used to track changes in voice over time as patients undergo and subsequently recover from thyroid surgery, with CPP being the strongest variable for this purpose. Evidence did not clearly reveal whether acoustic voice evaluations should include both /ɑ/ and /i/ vowels, but they should specify which vowel is used to allow for comparisons across studies and multiple clinical assessments. Copyright © 2012 The Voice Foundation. All rights reserved.

  9. Quantitative and descriptive comparison of four acoustic analysis systems: vowel measurements.

    PubMed

    Burris, Carlyn; Vorperian, Houri K; Fourakis, Marios; Kent, Ray D; Bolt, Daniel M

    2014-02-01

    This study examines the accuracy and comparability of 4 trademarked acoustic analysis software packages (AASPs): Praat, WaveSurfer, TF32, and CSL, using synthesized and natural vowels. Features of AASPs are also described. Synthesized and natural vowels were analyzed using each AASP's default settings to secure 9 acoustic measures: fundamental frequency (F0), formant frequencies (F1-F4), and formant bandwidths (B1-B4). The discrepancy between the software-measured values and the input values (synthesized, previously reported, and manual measurements) was used to assess comparability and accuracy. Basic AASP features are described. Results indicate that Praat, WaveSurfer, and TF32 generate accurate and comparable F0 and F1-F4 data for synthesized vowels and adult male natural vowels. Results varied by vowel for women and children, with some serious errors. Bandwidth measurements by AASPs were highly inaccurate as compared with manual measurements and published data on formant bandwidths. Values of F0 and F1-F4 are generally consistent and fairly accurate for adult vowels and for some child vowels using the default settings in Praat, WaveSurfer, and TF32. Manipulation of default settings yields improved output values in TF32 and CSL. Caution is recommended especially before accepting F1-F4 results for children and B1-B4 results for all speakers.

  10. Acoustic Properties Predict Perception of Unfamiliar Dutch Vowels by Adult Australian English and Peruvian Spanish Listeners

    PubMed Central

    Alispahic, Samra; Mulak, Karen E.; Escudero, Paola

    2017-01-01

    Research suggests that the size of the second language (L2) vowel inventory relative to the native (L1) inventory may affect the discrimination and acquisition of L2 vowels. Models of non-native and L2 vowel perception stipulate that naïve listeners' non-native and L2 perceptual patterns may be predicted by the relationship in vowel inventory size between the L1 and the L2. Specifically, having a smaller L1 vowel inventory than the L2 impedes L2 vowel perception, while having a larger one often facilitates it. However, the Second Language Linguistic Perception (L2LP) model specifies that it is the L1–L2 acoustic relationships that predict non-native and L2 vowel perception, regardless of L1 vowel inventory. To test the effects of vowel inventory size vs. acoustic properties on non-native vowel perception, we compared XAB discrimination and categorization of five Dutch vowel contrasts between monolinguals whose L1 contains more (Australian English) or fewer (Peruvian Spanish) vowels than Dutch. No effect of language background was found, suggesting that L1 inventory size alone did not account for performance. Instead, participants in both language groups were more accurate in discriminating contrasts that were predicted to be perceptually easy based on L1–L2 acoustic relationships, and were less accurate for contrasts likewise predicted to be difficult. Further, cross-language discriminant analyses predicted listeners' categorization patterns which in turn predicted listeners' discrimination difficulty. Our results show that listeners with larger vowel inventories appear to activate multiple native categories as reflected in lower accuracy scores for some Dutch vowels, while listeners with a smaller vowel inventory seem to have higher accuracy scores for those same vowels. In line with the L2LP model, these findings demonstrate that L1–L2 acoustic relationships better predict non-native and L2 perceptual performance and that inventory size alone is not a good predictor.

  11. [Pilot study of the acoustic values of the vowels in Spanish as indicators of the severity of dysarthria].

    PubMed

    Delgado-Hernandez, J

    2017-02-01

    Acoustic analysis is a tool that provides objective data on speech changes in dysarthria. The aim was to evaluate, in ataxic dysarthria, the relationship of the vowel space area (VSA), the formant centralization ratio (FCR), and the mean of the primary distances with speech intelligibility. A sample of fourteen Spanish speakers, ten with dysarthria and four controls, was used. The first and second formant values of 140 vowels extracted from 140 words were analyzed. Intelligibility was assessed by seven listeners using a verbal stimulus identification task. The dysarthric subjects showed less contrast between mid and high vowels and between back vowels, and differed significantly from controls in VSA, FCR, and mean of the primary distances (p = 0.007, 0.005, and 0.030, respectively). Regression analyses showed a relationship of VSA and the mean of the primary distances with speech intelligibility (r = 0.60 and 0.74, respectively). Subjects with ataxic dysarthria thus show reduced contrast and vowel centralization when producing vowels. The acoustic measures studied in this preliminary work are highly sensitive in detecting dysarthria, but only the VSA and the mean of the primary distances provide information on the severity of this type of speech disturbance.
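    The VSA and FCR measures named above have standard formulations: VSA is the area of the /i/-/a/-/u/ triangle in the F1-F2 plane, and FCR (after Sapir and colleagues) is the ratio (F2u + F2a + F1i + F1u) / (F2i + F1a), which increases as vowels centralize. A minimal sketch; the corner-vowel formant values below are illustrative, not data from this study:

```python
def vowel_space_area(i, a, u):
    """Area (Hz^2) of the /i/-/a/-/u/ triangle; each argument is (F1, F2)."""
    (f1i, f2i), (f1a, f2a), (f1u, f2u) = i, a, u
    return 0.5 * abs(f1i * (f2a - f2u) + f1a * (f2u - f2i) + f1u * (f2i - f2a))

def fcr(i, a, u):
    """Formant centralization ratio: (F2u + F2a + F1i + F1u) / (F2i + F1a)."""
    (f1i, f2i), (f1a, f2a), (f1u, f2u) = i, a, u
    return (f2u + f2a + f1i + f1u) / (f2i + f1a)

# Illustrative adult corner-vowel formants in Hz, not values from this study:
i, a, u = (270.0, 2290.0), (730.0, 1090.0), (300.0, 870.0)
vsa = vowel_space_area(i, a, u)   # larger = more peripheral vowels
centralization = fcr(i, a, u)     # larger = more centralized vowels
```

    A shrinking VSA and a rising FCR move together, which is why both are used as indices of vowel centralization.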

  12. Vowel space development in a child acquiring English and Spanish from birth

    NASA Astrophysics Data System (ADS)

    Andruski, Jean; Kim, Sahyang; Nathan, Geoffrey; Casielles, Eugenia; Work, Richard

    2005-04-01

    To date, research on bilingual first language acquisition has tended to focus on the development of higher levels of language, with relatively few analyses of the acoustic characteristics of bilingual infants' and children's speech. Since monolingual infants begin to show perceptual divisions of vowel space that resemble adult native speakers' divisions by about 6 months of age [Kuhl et al., Science 255, 606-608 (1992)], bilingual children's vowel production may provide evidence of their awareness of language differences relatively early during language development. This paper will examine the development of vowel categories in a child whose mother is a native speaker of Castilian Spanish, and whose father is a native speaker of American English. Each parent speaks to the child only in her/his native language. For this study, recordings made at the ages of 2;5 and 2;10 were analyzed and F1-F2 measurements were made of vowels from the stressed syllables of content words. The development of vowel space is compared across ages within each language, and across languages at each age. In addition, the child's productions are compared with the mother's and father's vocalic productions, which provide the predominant input in Spanish and English, respectively.

  13. Quantitative and Descriptive Comparison of Four Acoustic Analysis Systems: Vowel Measurements

    ERIC Educational Resources Information Center

    Burris, Carlyn; Vorperian, Houri K.; Fourakis, Marios; Kent, Ray D.; Bolt, Daniel M.

    2014-01-01

    Purpose: This study examines accuracy and comparability of 4 trademarked acoustic analysis software packages (AASPs): Praat, WaveSurfer, TF32, and CSL by using synthesized and natural vowels. Features of AASPs are also described. Method: Synthesized and natural vowels were analyzed using each of the AASP's default settings to secure 9…

  14. Does Vowel Inventory Density Affect Vowel-to-Vowel Coarticulation?

    ERIC Educational Resources Information Center

    Mok, Peggy P. K.

    2013-01-01

    This study tests the output constraints hypothesis that languages with a crowded phonemic vowel space would allow less vowel-to-vowel coarticulation than languages with a sparser vowel space to avoid perceptual confusion. Mandarin has fewer vowel phonemes than Cantonese, but their allophonic vowel spaces are similarly crowded. The hypothesis…

  15. An Evaluation of Articulatory Working Space Area in Vowel Production of Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Bunton, Kate; Leddy, Mark

    2011-01-01

    Many adolescents and adults with Down syndrome have reduced speech intelligibility. Reasons for this reduction may relate to differences in anatomy and physiology, both of which are important for creating an intelligible speech signal. The purpose of this study was to document acoustic vowel space and articulatory working space for two adult…

  16. The Effect of Microphone Type on Acoustical Measures of Synthesized Vowels.

    PubMed

    Kisenwether, Jessica Sofranko; Sataloff, Robert T

    2015-09-01

    The purpose of this study was to compare microphones of different directionality, transducer type, and cost, with attention to their effects on acoustical measurements of period perturbation, amplitude perturbation, and noise using synthesized sustained vowel samples. This was a repeated measures design. Synthesized sustained vowel stimuli (with known acoustic characteristics and systematic changes in jitter, shimmer, and noise-to-harmonics ratio) were recorded by a variety of dynamic and condenser microphones. Files were then analyzed for mean fundamental frequency (fo), fo standard deviation, absolute jitter, shimmer in dB, peak-to-peak amplitude variation, and noise-to-harmonics ratio. Acoustical measures following recording were compared with the synthesized, known acoustical measures before recording. Although informal analyses showed some differences among microphones, and analyses of variance showed that type of microphone is a significant predictor, t-tests revealed that none of the microphones generated different means compared with the generated acoustical measures. In this sample, microphone type, directionality, and cost did not have a significant effect on the validity of acoustic measures. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
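    The perturbation measures listed above have simple time-domain definitions: absolute jitter is the mean absolute difference between consecutive pitch periods, and shimmer in dB is the mean absolute dB ratio of consecutive cycle peak amplitudes. A minimal sketch; the period and amplitude sequences below are illustrative, not recorded data:

```python
import math

def absolute_jitter(periods):
    """Mean absolute difference between consecutive pitch periods (seconds)."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return sum(diffs) / len(diffs)

def shimmer_db(amplitudes):
    """Mean absolute dB difference between consecutive cycle peak amplitudes."""
    steps = [abs(20.0 * math.log10(b / a))
             for a, b in zip(amplitudes, amplitudes[1:])]
    return sum(steps) / len(steps)

# Illustrative cycle-to-cycle values, not a recorded voice sample:
jitter_s = absolute_jitter([0.0100, 0.0101, 0.0099, 0.0100])  # seconds
shimmer = shimmer_db([1.00, 0.95, 1.00])                      # dB
```

    Synthesized stimuli like those in the study are useful precisely because these quantities are known in advance and can be compared against what each recording chain returns.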

  17. Functional Connectivity Associated with Acoustic Stability During Vowel Production: Implications for Vocal-Motor Control

    PubMed Central

    2015-01-01

    Vowels provide the acoustic foundation of communication through speech and song, but little is known about how the brain orchestrates their production. Positron emission tomography was used to study regional cerebral blood flow (rCBF) during sustained production of the vowel /a/. Acoustic and blood flow data from 13 normal, right-handed, native speakers of American English were analyzed to identify CBF patterns that predicted the stability of the first and second formants of this vowel. Formants are bands of resonance frequencies that provide vowel identity and contribute to voice quality. The results indicated that formant stability was directly associated with blood flow increases and decreases in both left- and right-sided brain regions. Secondary brain regions (those associated with the regions predicting formant stability) were more likely to have an indirect negative relationship with first formant variability, but an indirect positive relationship with second formant variability. These results are not definitive maps of vowel production, but they do suggest that the level of motor control necessary to produce stable vowels is reflected in the complexity of an underlying neural system. These results also extend a systems approach to functional image analysis, previously applied to normal and ataxic speech rate, which is based solely on identifying patterns of brain activity associated with specific performance measures. Understanding the complex relationships between multiple brain regions and the acoustic characteristics of vocal stability may provide insight into the pathophysiology of the dysarthrias, vocal disorders, and other speech changes in neurological and psychiatric disorders. PMID:25295385

  18. Acoustic variability within and across German, French, and American English vowels: phonetic context effects.

    PubMed

    Strange, Winifred; Weber, Andrea; Levy, Erika S; Shafiro, Valeriy; Hisagi, Miwako; Nishi, Kanae

    2007-08-01

    Cross-language perception studies report influences of speech style and consonantal context on perceived similarity and discrimination of non-native vowels by inexperienced and experienced listeners. Detailed acoustic comparisons of distributions of vowels produced by native speakers of North German (NG), Parisian French (PF) and New York English (AE) in citation (di)syllables and in sentences (surrounded by labial and alveolar stops) are reported here. Results of within- and cross-language discriminant analyses reveal striking dissimilarities across languages in the spectral/temporal variation of coarticulated vowels. As expected, vocalic duration was most important in differentiating NG vowels; it did not contribute to PF vowel classification. Spectrally, NG long vowels showed little coarticulatory change, but back/low short vowels were fronted/raised in alveolar context. PF vowels showed greater coarticulatory effects overall; back and front rounded vowels were fronted, low and mid-low vowels were raised in both sentence contexts. AE mid to high back vowels were extremely fronted in alveolar contexts, with little change in mid-low and low long vowels. Cross-language discriminant analyses revealed varying patterns of spectral (dis)similarity across speech styles and consonantal contexts that could, in part, account for AE listeners' perception of German and French front rounded vowels, and "similar" mid-high to mid-low vowels.

  19. Acoustic Typology of Vowel Inventories and Dispersion Theory: Insights from a Large Cross-Linguistic Corpus

    ERIC Educational Resources Information Center

    Becker-Kristal, Roy

    2010-01-01

    This dissertation examines the relationship between the structural, phonemic properties of vowel inventories and their acoustic phonetic realization, with particular focus on the adequacy of Dispersion Theory, which maintains that inventories are structured so as to maximize perceptual contrast between their component vowels. In order to assess…

  20. Effect of Domain Initial Strengthening on Vowel Height and Backness Contrasts in French: Acoustic and Ultrasound Data

    ERIC Educational Resources Information Center

    Georgeton, Laurianne; Antolík, Tanja Kocjancic; Fougeron, Cécile

    2016-01-01

    Purpose: Phonetic variation due to domain initial strengthening was investigated with respect to the acoustic and articulatory distinctiveness of vowels within a subset of the French oral vowel system /i, e, ?, a, o, u/, organized along 4 degrees of height for the front vowels and 2 degrees of backness at the close and midclose height levels.…

  1. Effects of Long-Term Tracheostomy on Spectral Characteristics of Vowel Production.

    ERIC Educational Resources Information Center

    Kamen, Ruth Saletsky; Watson, Ben C.

    1991-01-01

    Eight preschool children who underwent tracheotomy during the prelingual period were compared to matched controls on a variety of speech measures. Children with tracheotomies showed reduced acoustic vowel space, suggesting they were limited in their ability to produce extreme vocal tract configurations for vowels postdecannulation. Oral motor…

  2. Exploration of Acoustic Features for Automatic Vowel Discrimination in Spontaneous Speech

    ERIC Educational Resources Information Center

    Tyson, Na'im R.

    2012-01-01

    In an attempt to understand what acoustic/auditory feature sets motivated transcribers towards certain labeling decisions, I built machine learning models that were capable of discriminating between canonical and non-canonical vowels excised from the Buckeye Corpus. Specifically, I wanted to model when the dictionary form and the transcribed-form…

  3. Vowel change across three age groups of speakers in three regional varieties of American English

    PubMed Central

    Jacewicz, Ewa; Fox, Robert A.; Salmons, Joseph

    2011-01-01

    This acoustic study examines sound (vowel) change in apparent time across three successive generations of 123 adult female speakers ranging in age from 20 to 65 years old, representing three regional varieties of American English, typical of western North Carolina, central Ohio and southeastern Wisconsin. A set of acoustic measures characterized the dynamic nature of formant trajectories, the amount of spectral change over the course of vowel duration and the position of the spectral centroid. The study found a set of systematic changes to /I, ε, æ/ including positional changes in the acoustic space (mostly lowering of the vowels) and significant variation in formant dynamics (increased monophthongization). This common sound change is evident in both emphatic (articulated clearly) and nonemphatic (casual) productions and occurs regardless of dialect-specific vowel dispersions in the vowel space. The cross-generational and cross-dialectal patterns of variation found here support an earlier report by Jacewicz, Fox, and Salmons (2011) which found this recent development in these three dialect regions in isolated citation-form words. While confirming the new North American Shift in different styles of production, the study underscores the importance of addressing the stress-related variation in vowel production in a careful and valid assessment of sound change. PMID:22125350

  4. A Comprehensive Three-Dimensional Cortical Map of Vowel Space

    ERIC Educational Resources Information Center

    Scharinger, Mathias; Idsardi, William J.; Poe, Samantha

    2011-01-01

    Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space…

  5. Effects of Levodopa on Vowel Articulation in Patients with Parkinson's Disease.

    PubMed

    Okada, Yukihiro; Murata, Miho; Toda, Tatsushi

    2016-04-27

    The effects of levodopa on articulatory dysfunction in patients with Parkinson's disease remain inconclusive. This study aimed to investigate the effects of levodopa on isolated vowel articulation and motor performance in patients with moderate to severe Parkinson's disease, excluding speech fluctuations caused by dyskinesia. Twenty-one patients (14 males and 7 females) and 21 age- and sex-matched healthy subjects were enrolled. Together with motor assessment, the patients phonated five Japanese isolated vowels (/a/, /i/, /u/, /e/, and /o/) 20 times before and 1 h after levodopa treatment. We performed frequency analyses of each vowel and measured the first and second formants, from which we constructed the pentagonal vowel space area, a good indicator of articulatory dysfunction for vowels. In control subjects, only speech samples were analyzed. To investigate the sequential relationship between plasma levodopa concentrations, motor performance, and acoustic measurements after treatment, entire drug-cycle tests were performed in 4 patients. The pentagonal vowel space area expanded significantly, together with motor amelioration, after levodopa treatment, although the enlargement did not reach the space area of the control subjects. Drug-cycle tests revealed that sequential increases or decreases in plasma levodopa levels after treatment correlated well with expansion or contraction of the vowel space area and with improvement or deterioration of motor manifestations. Levodopa expanded the vowel space area and ameliorated motor performance, suggesting that dysfunctions in vowel articulation and motor performance in patients with Parkinson's disease are based on dopaminergic pathology.
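    The pentagonal vowel space area described above is the area of the polygon whose vertices are the (F1, F2) points of the five vowels, and it can be computed with the shoelace formula. A minimal sketch; the formant values below are hypothetical placeholders, not the study's measurements, and vertices must be supplied in perimeter order:

```python
def polygon_area(points):
    """Shoelace area of a polygon given (F1, F2) vertices in perimeter order."""
    s = 0.0
    for k in range(len(points)):
        x1, y1 = points[k]
        x2, y2 = points[(k + 1) % len(points)]  # wrap back to the first vertex
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical (F1, F2) values in Hz for /i, e, a, o, u/, in perimeter order:
pentagon = [(300.0, 2300.0), (450.0, 2000.0), (800.0, 1300.0),
            (500.0, 900.0), (350.0, 1400.0)]
area_hz2 = polygon_area(pentagon)
```

    The same function covers the triangular (three-vowel) and quadrilateral (four-vowel) variants of vowel space area used elsewhere in this collection.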

  6. Cross-dialectal variation in formant dynamics of American English vowels

    PubMed Central

    Fox, Robert Allen; Jacewicz, Ewa

    2009-01-01

    This study aims to characterize the nature of the dynamic spectral change in vowels in three distinct regional varieties of American English spoken in western North Carolina, central Ohio, and southern Wisconsin. The vowels /ɪ, ε, e, æ, aɪ/ were produced by 48 women for a total of 1920 utterances and were contained in words of the structure /bVts/ and /bVdz/ in sentences which elicited nonemphatic and emphatic vowels. Measurements made at the vowel target (i.e., the central 60% of the vowel) produced a set of acoustic parameters which included position and movement in the F1 by F2 space, vowel duration, amount of spectral change [measured as vector length (VL) and trajectory length (TL)], and spectral rate of change. Results revealed expected variation in formant dynamics as a function of phonetic factors (vowel emphasis and consonantal context). However, for each vowel and for each measure employed, dialect was a strong source of variation in vowel-inherent spectral change. In general, the dialect-specific nature and amount of spectral change can be characterized quite effectively by position and movement in the F1 by F2 space, vowel duration, TL (but not VL, which underestimates formant movement), and spectral rate of change. PMID:19894839
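    The two spectral-change measures contrasted above follow directly from their definitions: VL is the straight-line F1-F2 distance from vowel onset to offset, while TL sums the distances between successive measurement points, so TL is never smaller than VL and a curved trajectory inflates TL but not VL (which is why VL underestimates formant movement). A minimal sketch with an illustrative trajectory, not data from the study:

```python
import math

def vector_length(traj):
    """VL: straight-line F1-F2 distance from vowel onset to offset (Hz)."""
    (f1_on, f2_on), (f1_off, f2_off) = traj[0], traj[-1]
    return math.hypot(f1_off - f1_on, f2_off - f2_on)

def trajectory_length(traj):
    """TL: summed F1-F2 distance over successive measurement points (Hz)."""
    return sum(math.hypot(b[0] - a[0], b[1] - a[1])
               for a, b in zip(traj, traj[1:]))

# Illustrative (F1, F2) samples across one vowel; the bend in the
# trajectory makes TL exceed VL. Spectral rate of change would then be
# TL divided by vowel duration.
traj = [(500.0, 1500.0), (550.0, 1700.0), (530.0, 1900.0)]
vl = vector_length(traj)
tl = trajectory_length(traj)
```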

  7. Vowel Space Characteristics of Speech Directed to Children With and Without Hearing Loss

    PubMed Central

    Wieland, Elizabeth A.; Burnham, Evamarie B.; Kondaurova, Maria; Bergeson, Tonya R.

    2015-01-01

    Purpose This study examined vowel characteristics in adult-directed (AD) and infant-directed (ID) speech to children with hearing impairment who received cochlear implants or hearing aids compared with speech to children with normal hearing. Method Mothers' AD and ID speech to children with cochlear implants (Study 1, n = 20) or hearing aids (Study 2, n = 11) was compared with mothers' speech to controls matched on age and hearing experience. The first and second formants of vowels /i/, /ɑ/, and /u/ were measured, and vowel space area and dispersion were calculated. Results In both studies, vowel space was modified in ID compared with AD speech to children with and without hearing loss. Study 1 showed larger vowel space area and dispersion in ID compared with AD speech regardless of infant hearing status. The pattern of effects of ID and AD speech on vowel space characteristics in Study 2 was similar to that in Study 1, but depended partly on children's hearing status. Conclusion Given previously demonstrated associations between expanded vowel space in ID compared with AD speech and enhanced speech perception skills, this research supports a focus on vowel pronunciation in developing intervention strategies for improving speech-language skills in children with hearing impairment. PMID:25658071
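    Vowel space dispersion, as used above, is commonly computed as the mean Euclidean distance of the vowel tokens from the centroid of the space (with area computed over the /i/-/ɑ/-/u/ triangle). A minimal sketch; the formant values below are illustrative, not the study's data:

```python
import math

def vowel_dispersion(points):
    """Mean Euclidean distance of (F1, F2) points from their centroid."""
    n = len(points)
    c1 = sum(p[0] for p in points) / n
    c2 = sum(p[1] for p in points) / n
    return sum(math.hypot(p[0] - c1, p[1] - c2) for p in points) / n

# Hypothetical (F1, F2) means in Hz for /i/, /a/, /u/ in expanded ID speech:
dispersion = vowel_dispersion([(320.0, 2500.0), (850.0, 1350.0), (340.0, 950.0)])
```

    Expanded ID speech shows up as both a larger polygon area and a larger dispersion, since the corner vowels sit farther from the centroid.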

  8. The influence of sexual orientation on vowel production (L)

    NASA Astrophysics Data System (ADS)

    Pierrehumbert, Janet B.; Bent, Tessa; Munson, Benjamin; Bradlow, Ann R.; Bailey, J. Michael

    2004-10-01

    Vowel production in gay, lesbian, bisexual (GLB), and heterosexual speakers was examined. Differences in the acoustic characteristics of vowels were found as a function of sexual orientation. Lesbian and bisexual women produced less fronted /u/ and /ɑ/ than heterosexual women. Gay men produced a more expanded vowel space than heterosexual men. However, the vowels of GLB speakers were not generally shifted toward vowel patterns typical of the opposite sex. These results are inconsistent with the conjecture that innate biological factors have a broadly feminizing influence on the speech of gay men and a broadly masculinizing influence on the speech of lesbian/bisexual women. They are consistent with the idea that innate biological factors influence GLB speech patterns indirectly by causing selective adoption of certain speech patterns characteristic of the opposite sex.

  9. Articulation Rate and Vowel Space Characteristics of Young Males with Fragile X Syndrome: Preliminary Acoustic Findings

    ERIC Educational Resources Information Center

    Zajac, David J.; Roberts, Joanne E.; Hennon, Elizabeth A.; Harris, Adrianne A.; Barnes, Elizabeth F.; Misenheimer, Jan

    2006-01-01

    Purpose: Increased speaking rate is a commonly reported perceptual characteristic among males with fragile X syndrome (FXS). The objective of this preliminary study was to determine articulation rate--one component of perceived speaking rate--and vowel space characteristics of young males with FXS. Method: Young males with FXS (n = 38), …

  10. The Impact of Contrastive Stress on Vowel Acoustics and Intelligibility in Dysarthria

    ERIC Educational Resources Information Center

    Connaghan, Kathryn P.; Patel, Rupal

    2017-01-01

    Purpose: To compare vowel acoustics and intelligibility in words produced with and without contrastive stress by speakers with spastic (mixed-spastic) dysarthria secondary to cerebral palsy (DYS[subscript CP]) and healthy controls (HCs). Method: Fifteen participants (9 men, 6 women; age M = 42 years) with DYS[subscript CP] and 15 HCs (9 men, 6…

  11. A magnetic resonance imaging study on the articulatory and acoustic speech parameters of Malay vowels

    PubMed Central

    2014-01-01

    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of the MRI images. Consequently, the articulatory parameters are effectively measured as tongue movement is observed, and the specific shape and position of the tongue are determined for all six uttered Malay vowels. Speech rehabilitation procedures demand a visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters against the acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects was performed. As the acoustic and articulatory parameters of the uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments reported a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production. PMID:25060583

  12. A magnetic resonance imaging study on the articulatory and acoustic speech parameters of Malay vowels.

    PubMed

    Zourmand, Alireza; Mirhassani, Seyed Mostafa; Ting, Hua-Nong; Bux, Shaik Ismail; Ng, Kwan Hoong; Bilgen, Mehmet; Jalaludin, Mohd Amin

    2014-07-25

    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of the MRI images. Consequently, the articulatory parameters are effectively measured as tongue movement is observed, and the specific shape and position of the tongue are determined for all six uttered Malay vowels. Speech rehabilitation procedures demand a visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters against the acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects was performed. As the acoustic and articulatory parameters of the uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments reported a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production.

  13. Vowel Development in an Emergent Mandarin-English Bilingual Child: A Longitudinal Study

    ERIC Educational Resources Information Center

    Yang, Jing; Fox, Robert A.; Jacewicz, Ewa

    2015-01-01

    This longitudinal case study documents the emergence of bilingualism in a young monolingual Mandarin boy on the basis of an acoustic analysis of his vowel productions recorded via a picture-naming task over 20 months following his enrollment in an all-English (L2) preschool at the age of 3;7. The study examined (1) his initial L2 vowel space, (2)…

  14. Emotions in freely varying and mono-pitched vowels, acoustic and EGG analyses.

    PubMed

    Waaramaa, Teija; Palo, Pertti; Kankare, Elina

    2015-12-01

    Vocal emotions are expressed either by speech or singing. The difference is that in singing the pitch is predetermined while in speech it may vary freely. It was of interest to study whether there were voice quality differences between freely varying and mono-pitched vowels expressed by professional actors. Given their profession, actors have to be able to express emotions both by speech and singing. Electroglottogram and acoustic analyses of emotional utterances embedded in expressions of freely varying vowels [a:], [i:], [u:] (96 samples) and mono-pitched protracted vowels (96 samples) were studied. Contact quotient (CQEGG) was calculated using 35%, 55%, and 80% threshold levels. Three different threshold levels were used in order to evaluate their effects on emotions. Genders were studied separately. The results suggested significant gender differences for CQEGG 80% threshold level. SPL, CQEGG, and F4 were used to convey emotions, but to a lesser degree, when F0 was predetermined. Moreover, females showed fewer significant variations than males. Both genders used more hypofunctional phonation type in mono-pitched utterances than in the expressions with freely varying pitch. The present material warrants further study of the interplay between CQEGG threshold levels and formant frequencies, and listening tests to investigate the perceptual value of the mono-pitched vowels in the communication of emotions.
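    The threshold-level dependence of the contact quotient described above can be sketched in a few lines. This is an illustrative criterion-level calculation on a toy cycle, not the authors' analysis code; the function name and the toy waveform are invented for the example:

```python
def contact_quotient(cycle, threshold=0.35):
    """CQ for one EGG cycle: the fraction of samples whose amplitude
    exceeds `threshold` of the peak-to-peak range above the cycle
    minimum (criterion-level method)."""
    lo, hi = min(cycle), max(cycle)
    level = lo + threshold * (hi - lo)
    contacted = sum(1 for s in cycle if s >= level)
    return contacted / len(cycle)

# Toy triangular cycle; CQ shrinks as the criterion level rises.
cycle = [0, 2, 4, 6, 8, 10, 8, 6, 4, 2]
for thr in (0.35, 0.55, 0.80):
    print(thr, contact_quotient(cycle, thr))
```

    Raising the criterion from 35% to 80% necessarily lowers the quotient, which is why the choice of threshold level matters when comparing CQEGG values across studies.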

  15. The Effects of Inventory on Vowel Perception in French and Spanish: An MEG Study

    ERIC Educational Resources Information Center

    Hacquard, Valentine; Walter, Mary Ann; Marantz, Alec

    2007-01-01

    Production studies have shown that speakers of languages with larger phoneme inventories expand their acoustic space relative to languages with smaller inventories [Bradlow, A. (1995). A comparative acoustic study of English and Spanish vowels. "Journal of the Acoustical Society of America," 97(3), 1916-1924; Jongman, A., Fourakis, M., & Sereno,…

  16. Stress-Induced Acoustic Variation in L2 and L1 Spanish Vowels.

    PubMed

    Romanelli, Sofía; Menegotto, Andrea; Smyth, Ron

    2018-05-28

    We assessed the effect of lexical stress on the duration and quality of Spanish word-final vowels /a, e, o/ produced by American English late intermediate learners of L2 Spanish, as compared to those of native L1 Argentine Spanish speakers. Participants read 54 real words ending in /a, e, o/, with either final or penultimate lexical stress, embedded in a text and a word list. We measured vowel duration and both F1 and F2 frequencies at 3 temporal points. Stressed vowels were longer than unstressed vowels in Spanish L1 and L2. L1 and L2 Spanish stressed /a/ and /e/ had higher F1 values than their unstressed counterparts. Only the L2 speakers showed evidence of rising offglides for /e/ and /o/. The L2 and L1 Spanish vowel space was compressed in the absence of stress. Lexical stress affected the vowel quality of L1 and L2 Spanish vowels. We provide an up-to-date account of the formant trajectories of Argentine River Plate Spanish word-final /a, e, o/ and offer experimental support to the claim that stress affects the quality of Spanish vowels in word-final contexts. © 2018 S. Karger AG, Basel.
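    Reading formant values at fixed proportional time points, as done for F1 and F2 here, can be sketched as follows. The 20/50/80% positions are an assumption for illustration (the abstract does not say which three points were used), and the function name and track values are invented:

```python
def sample_at_points(track, points=(0.2, 0.5, 0.8)):
    """Read a formant track (Hz values over the vowel's duration)
    at proportional temporal points, e.g. 20%, 50%, 80%."""
    n = len(track)
    return [track[min(int(p * n), n - 1)] for p in points]

# An invented F1 track for one vowel token
f1 = [400, 450, 500, 550, 600, 650, 700, 720, 710, 700]
print(sample_at_points(f1))  # values near onset, midpoint, offset
```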

  17. Vowel Acoustics in Dysarthria: Mapping to Perception

    ERIC Educational Resources Information Center

    Lansford, Kaitlin L.; Liss, Julie M.

    2014-01-01

    Purpose: The aim of the present report was to explore whether vowel metrics, demonstrated to distinguish dysarthric and healthy speech in a companion article (Lansford & Liss, 2014), are able to predict human perceptual performance. Method: Vowel metrics derived from vowels embedded in phrases produced by 45 speakers with dysarthria were…

  18. Vowels in clear and conversational speech: Talker differences in acoustic characteristics and intelligibility for normal-hearing listeners

    NASA Astrophysics Data System (ADS)

    Hargus Ferguson, Sarah; Kewley-Port, Diane

    2002-05-01

    Several studies have shown that when a talker is instructed to speak as though talking to a hearing-impaired person, the resulting ``clear'' speech is significantly more intelligible than typical conversational speech. Recent work in this lab suggests that talkers vary in how much their intelligibility improves when they are instructed to speak clearly. The few studies examining acoustic characteristics of clear and conversational speech suggest that these differing clear speech effects result from different acoustic strategies on the part of individual talkers. However, only two studies to date have directly examined differences among talkers producing clear versus conversational speech, and neither included acoustic analysis. In this project, clear and conversational speech was recorded from 41 male and female talkers aged 18-45 years. A listening experiment demonstrated that for normal-hearing listeners in noise, vowel intelligibility varied widely among the 41 talkers for both speaking styles, as did the magnitude of the speaking style effect. Acoustic analyses using stimuli from a subgroup of talkers shown to have a range of speaking style effects will be used to assess specific acoustic correlates of vowel intelligibility in clear and conversational speech. [Work supported by NIHDCD-02229.]

  19. English vowel identification and vowel formant discrimination by native Mandarin Chinese- and native English-speaking listeners: The effect of vowel duration dependence.

    PubMed

    Mi, Lin; Tao, Sha; Wang, Wenjing; Dong, Qi; Guan, Jingjing; Liu, Chang

    2016-03-01

    The purpose of this study was to examine the relationship between English vowel identification and English vowel formant discrimination for native Mandarin Chinese- and native English-speaking listeners. The identification of 12 English vowels was measured with the duration cue preserved or removed. The thresholds of vowel formant discrimination on the F2 of two English vowels, /ʌ/ and /i/, were also estimated using an adaptive-tracking procedure. Native Mandarin Chinese-speaking listeners showed significantly higher thresholds of vowel formant discrimination and lower identification scores than native English-speaking listeners. The duration effect on English vowel identification was similar between native Mandarin Chinese- and native English-speaking listeners. Moreover, regardless of listeners' language background, vowel identification was significantly correlated with vowel formant discrimination for the listeners who were less dependent on duration cues, whereas the correlation between vowel identification and vowel formant discrimination was not significant for the listeners who were highly dependent on duration cues. This study revealed individual variability in using multiple acoustic cues to identify English vowels for both native and non-native listeners. Copyright © 2016 Elsevier B.V. All rights reserved.
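    An adaptive-tracking procedure of the general kind mentioned above can be sketched as a 2-down/1-up staircase, which converges near the 70.7%-correct point. The rule, step factor, and reversal count here are assumptions for illustration; the abstract does not specify the authors' parameters:

```python
def staircase(respond, start=100.0, factor=0.5, reversals_needed=8):
    """Minimal 2-down/1-up adaptive track. `respond(delta)` returns True
    when the listener discriminates a formant difference of `delta` Hz.
    Two correct responses in a row halve delta; one error doubles it.
    Threshold estimate = mean of the last six reversal values."""
    delta, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < reversals_needed:
        if respond(delta):
            correct_run += 1
            if correct_run == 2:          # two in a row: make it harder
                correct_run = 0
                if direction == +1:       # easier -> harder is a reversal
                    reversals.append(delta)
                direction = -1
                delta *= factor
        else:
            correct_run = 0               # one error: make it easier
            if direction == -1:           # harder -> easier is a reversal
                reversals.append(delta)
            direction = +1
            delta /= factor
    return sum(reversals[-6:]) / 6

# Deterministic simulated listener with a true threshold of 20 Hz:
# the track oscillates between 12.5 and 25 Hz around that value.
print(staircase(lambda d: d > 20))  # 18.75
```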

  20. Cross-linguistic vowel variation in trilingual speakers of Saterland Frisian, Low German, and High German.

    PubMed

    Peters, Jörg; Heeringa, Wilbert J; Schoormann, Heike E

    2017-08-01

    The present study compares the acoustic realization of Saterland Frisian, Low German, and High German vowels by trilingual speakers in the Saterland. The Saterland is a rural municipality in northwestern Germany. It offers the unique opportunity to study trilingualism with languages that differ both by their vowel inventories and by external factors, such as their social status and the autonomy of their speech communities. The objective of the study was to examine whether the trilingual speakers differ in their acoustic realizations of vowel categories shared by the three languages and whether those differences can be interpreted as effects of either the differences in the vowel systems or of external factors. Monophthongs produced in a /hVt/ frame revealed that High German vowels show the most divergent realizations in terms of vowel duration and formant frequencies, whereas Saterland Frisian and Low German vowels show small differences. These findings suggest that vowels of different languages are likely to share the same phonological space when the speech communities largely overlap, as is the case with Saterland Frisian and Low German, but may resist convergence if at least one language is shared with a larger, monolingual speech community, as is the case with High German.

  1. Production and perception of whispered vowels

    NASA Astrophysics Data System (ADS)

    Kiefte, Michael

    2005-09-01

    Information normally associated with pitch, such as intonation, can still be conveyed in whispered speech despite the absence of voicing. For example, it is possible to whisper the question ``You are going today?'' without any syntactic information to distinguish this sentence from a simple declarative. It has been shown that pitch change in whispered speech is correlated with the simultaneous raising or lowering of several formants [e.g., M. Kiefte, J. Acoust. Soc. Am. 116, 2546 (2004)]. However, spectral peak frequencies associated with formants have been identified as important correlates to vowel identity. Spectral peak frequencies may serve two roles in the perception of whispered speech: to indicate both vowel identity and intended pitch. Data will be presented to examine the relative importance of several acoustic properties including spectral peak frequencies and spectral shape parameters in both the production and perception of whispered vowels. Speakers were asked to phonate and whisper vowels at three different pitches across a range of roughly a musical fifth. It will be shown that relative spectral change is preserved within vowels across intended pitches in whispered speech. In addition, several models of vowel identification by listeners will be presented. [Work supported by SSHRC.]

  2. Identification and discrimination of Spanish front vowels

    NASA Astrophysics Data System (ADS)

    Castellanos, Isabel; Lopez-Bascuas, Luis E.

    2004-05-01

    The idea that vowels are perceived less categorically than consonants is widely accepted. Ades [Psychol. Rev. 84, 524-530 (1977)] tried to explain this fact on the basis of the Durlach and Braida [J. Acoust. Soc. Am. 46, 372-383 (1969)] theory of intensity resolution. Since vowels seem to cover a broader perceptual range, context-coding noise for vowels should be greater than for consonants, leading to less categorical performance on the vocalic segments. However, relatively recent work by Macmillan et al. [J. Acoust. Soc. Am. 84, 1262-1280 (1988)] has cast doubt on the assumption of different perceptual ranges for vowels and consonants, even though context variance is acknowledged to be greater for the former. A possibility is that context variance increases as the number of long-term phonemic categories increases. To test this hypothesis we focused on Spanish as the target language. Spanish has fewer vowel categories than English, and the implication is that Spanish vowels will be more categorically perceived. Identification and discrimination experiments were conducted on a synthetic /i/-/e/ continuum, and the obtained functions were studied to assess whether Spanish vowels are more categorically perceived than English vowels. The results are discussed in the context of different theories of speech perception.

  3. Interspeaker Variability in Hard Palate Morphology and Vowel Production

    ERIC Educational Resources Information Center

    Lammert, Adam; Proctor, Michael; Narayanan, Shrikanth

    2013-01-01

    Purpose: Differences in vocal tract morphology have the potential to explain interspeaker variability in speech production. The potential acoustic impact of hard palate shape was examined in simulation, in addition to the interplay among morphology, articulation, and acoustics in real vowel production data. Method: High-front vowel production from…

  4. Articulatory characteristics of Hungarian ‘transparent’ vowels

    PubMed Central

    Benus, Stefan; Gafos, Adamantios I.

    2007-01-01

    Using a combination of magnetometry and ultrasound, we examined the articulatory characteristics of the so-called ‘transparent’ vowels [iː], [i], and [eː] in Hungarian vowel harmony. Phonologically, transparent vowels are front, but they can be followed by either front or back suffixes. However, a finer look reveals an underlying phonetic coherence in two respects. First, transparent vowels in back harmony contexts show a less advanced (more retracted) tongue body posture than phonemically identical vowels in front harmony contexts: e.g. [i] in buli-val is less advanced than [i] in bili-vel. Second, transparent vowels in monosyllabic stems selecting back suffixes are also less advanced than phonemically identical vowels in stems selecting front suffixes: e.g. [iː] in ír, taking back suffixes, compared to [iː] of hír, taking front suffixes, is less advanced when these stems are produced in bare form (no suffixes). We thus argue that the phonetic degree of tongue body horizontal position correlates with the phonological alternation in suffixes. A hypothesis that emerges from this work is that a plausible phonetic basis for transparency can be found in quantal characteristics of the relation between articulation and acoustics of transparent vowels. More broadly, the proposal is that the phonology of transparent vowels is better understood when their phonological patterning is studied together with their articulatory and acoustic characteristics. PMID:18389086

  5. English vowel learning by speakers of Mandarin

    NASA Astrophysics Data System (ADS)

    Thomson, Ron I.

    2005-04-01

    One of the most influential models of second language (L2) speech perception and production [Flege, Speech Perception and Linguistic Experience (York, Baltimore, 1995) pp. 233-277] argues that during initial stages of L2 acquisition, perceptual categories sharing the same or nearly the same acoustic space as first language (L1) categories will be processed as members of that L1 category. Previous research has generally been limited to testing these claims on binary L2 contrasts, rather than larger portions of the perceptual space. This study examines the development of 10 English vowel categories by 20 Mandarin L1 learners of English. Imitations of English vowel stimuli by these learners were recorded at 6 data collection points over the course of one year. Using a statistical pattern recognition model, these productions were then assessed against native speaker norms. The degree to which the learners' perception/production shifted toward the target English vowels and the degree to which they matched L1 categories in ways predicted by theoretical models are discussed. The results of this experiment suggest that previous claims about perceptual assimilation of L2 categories to L1 categories may be too strong.

  6. Recognition of Emotions in Mexican Spanish Speech: An Approach Based on Acoustic Modelling of Emotion-Specific Vowels

    PubMed Central

    Caballero-Morales, Santiago-Omar

    2013-01-01

    An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists in the phoneme acoustic modelling of emotion-specific vowels. For this, a standard phoneme-based Automatic Speech Recognition (ASR) system was built with Hidden Markov Models (HMMs), where different phoneme HMMs were built for the consonants and emotion-specific vowels associated with four emotional states (anger, happiness, neutral, sadness). Then, estimation of the emotional state from a spoken sentence is performed by counting the number of emotion-specific vowels found in the ASR's output for the sentence. With this approach, accuracy of 87–100% was achieved for the recognition of emotional state of Mexican Spanish speech. PMID:23935410
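    The counting step of this approach is simple enough to sketch. The emotion-tagged vowel labels (e.g. 'a_anger') and the function name are invented for illustration; the paper's actual label inventory is not given in the abstract:

```python
def classify_emotion(asr_output):
    """Estimate the emotional state of a spoken sentence by counting
    emotion-specific vowel labels in the ASR phone output and
    returning the emotion with the most votes."""
    emotions = ("anger", "happiness", "neutral", "sadness")
    counts = {e: 0 for e in emotions}
    for phone in asr_output:
        for e in emotions:
            if phone.endswith("_" + e):
                counts[e] += 1
    return max(counts, key=counts.get)

# Invented ASR output: two anger-vowels outvote one neutral-vowel.
print(classify_emotion(["k", "a_anger", "s", "a_anger", "o_neutral"]))  # anger
```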

  7. Phonetic Modification of Vowel Space in Storybook Speech to Infants up to 2 Years of Age

    PubMed Central

    Burnham, Evamarie B.; Wieland, Elizabeth A.; Kondaurova, Maria V.; McAuley, J. Devin; Bergeson, Tonya R.

    2015-01-01

    Purpose A large body of literature has indicated vowel space area expansion in infant-directed (ID) speech compared with adult-directed (AD) speech, which may promote language acquisition. The current study tested whether this expansion occurs in storybook speech read to infants at various points during their first 2 years of life. Method In 2 studies, mothers read a storybook containing target vowels in ID and AD speech conditions. Study 1 was longitudinal, with 11 mothers recorded when their infants were 3, 6, and 9 months old. Study 2 was cross-sectional, with 48 mothers recorded when their infants were 3, 9, 13, or 20 months old (n = 12 per group). The 1st and 2nd formants of vowels /i/, /ɑ/, and /u/ were measured, and vowel space area and dispersion were calculated. Results Across both studies, 1st and/or 2nd formant frequencies shifted systematically for /i/ and /u/ vowels in ID compared with AD speech. No difference in vowel space area or dispersion was found. Conclusions The results suggest that a variety of communication and situational factors may affect phonetic modifications in ID speech, but that vowel space characteristics in speech to infants stay consistent across the first 2 years of life. PMID:25659121
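    Vowel space area and dispersion of the kind calculated here are commonly computed from the mean (F1, F2) point of each vowel. A minimal sketch follows (shoelace polygon area and mean distance from the centroid; the formant values are invented for illustration):

```python
def vowel_space_area(points):
    """Shoelace area of the polygon formed by per-vowel mean (F1, F2)
    points; for /i/, /ɑ/, /u/ this is the classic vowel triangle."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def dispersion(points):
    """Mean Euclidean distance of each vowel from the F1-F2 centroid."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
               for x, y in points) / len(points)

# Invented corner-vowel means (F1, F2) in Hz for /i/, /ɑ/, /u/
corner = [(300, 2300), (700, 1200), (350, 800)]
print(vowel_space_area(corner))  # 272500.0 (Hz^2)
```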

  8. Pacific northwest vowels: A Seattle neighborhood dialect study

    NASA Astrophysics Data System (ADS)

    Ingle, Jennifer K.; Wright, Richard; Wassink, Alicia

    2005-04-01

    According to current literature a large region encompassing nearly the entire west half of the U.S. belongs to one dialect region referred to as Western, which furthermore, according to Labov et al., ``... has developed a characteristic but not unique phonology.'' [http://www.ling.upenn.edu/phono-atlas/NationalMap/NationalMap.html] This paper will describe the vowel space of a set of Pacific Northwest American English speakers native to the Ballard neighborhood of Seattle, Wash. based on the acoustical analysis of high-quality Marantz CDR 300 recordings. Characteristics, such as low back merger and [u] fronting will be compared to findings by other studies. It is hoped that these recordings will contribute to a growing number of corpora of North American English dialects. All participants were born in Seattle and began their residence in Ballard between ages 0-8. They were recorded in two styles of speech: individually reading repetitions of a word list containing one token each of 10 vowels within carrier phrases, and in casual conversation for 40 min with a partner matched in age, gender, and social mobility. The goal was to create a compatible data set for comparison with current acoustic studies. F1 and F2 and vowel duration from LPC spectral analysis will be presented.
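    Formant values from LPC spectral analysis, as used in this study, come from the angles of the complex pole pairs of the prediction polynomial. A minimal sketch of the pole-to-frequency step for one second-order section follows; the coefficients, pole radius, and sampling rate are invented for the example:

```python
import cmath
import math

def pole_frequency(a1, a2, fs):
    """Resonance frequency (Hz) of a second-order LPC section
    1 + a1*z^-1 + a2*z^-2: the angle of its complex pole pair,
    converted from radians/sample to Hz."""
    disc = cmath.sqrt(a1 * a1 - 4.0 * a2)
    pole = (-a1 + disc) / 2.0              # one pole of the conjugate pair
    return abs(cmath.phase(pole)) * fs / (2.0 * math.pi)

# Toy check: place a pole pair at 500 Hz (fs = 8 kHz, radius 0.98)
r, theta = 0.98, 2.0 * math.pi * 500.0 / 8000.0
a1, a2 = -2.0 * r * math.cos(theta), r * r
print(round(pole_frequency(a1, a2, 8000), 1))  # 500.0
```

    In practice the LPC coefficients would be estimated from the windowed speech signal first; this sketch only shows how a fitted pole pair maps to a formant frequency.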

  9. Dimension-based statistical learning of vowels

    PubMed Central

    Liu, Ran; Holt, Lori L.

    2015-01-01

    Speech perception depends on long-term representations that reflect regularities of the native language. However, listeners rapidly adapt when speech acoustics deviate from these regularities due to talker idiosyncrasies such as foreign accents and dialects. To better understand these dual aspects of speech perception, we probe native English listeners’ baseline perceptual weighting of two acoustic dimensions (spectral quality and vowel duration) towards vowel categorization and examine how they subsequently adapt to an “artificial accent” that deviates from English norms in the correlation between the two dimensions. At baseline, listeners rely relatively more on spectral quality than vowel duration to signal vowel category, but duration nonetheless contributes. Upon encountering an “artificial accent” in which the spectral-duration correlation is perturbed relative to English language norms, listeners rapidly down-weight reliance on duration. Listeners exhibit this type of short-term statistical learning even in the context of nonwords, confirming that lexical information is not necessary to this form of adaptive plasticity in speech perception. Moreover, learning generalizes to both novel lexical contexts and acoustically-distinct altered voices. These findings are discussed in the context of a mechanistic proposal for how supervised learning may contribute to this type of adaptive plasticity in speech perception. PMID:26280268

  10. Acoustic-articulatory mapping in vowels by locally weighted regression

    PubMed Central

    McGowan, Richard S.; Berger, Michael A.

    2009-01-01

    A method for mapping between simultaneously measured articulatory and acoustic data is proposed. The method uses principal components analysis on the articulatory and acoustic variables, and mapping between the domains by locally weighted linear regression, or loess [Cleveland, W. S. (1979). J. Am. Stat. Assoc. 74, 829–836]. The latter method permits local variation in the slopes of the linear regression, assuming that the function being approximated is smooth. The methodology is applied to vowels of four speakers in the Wisconsin X-ray Microbeam Speech Production Database, with formant analysis. Results are examined in terms of (1) examples of forward (articulation-to-acoustics) mappings and inverse mappings, (2) distributions of local slopes and constants, (3) examples of correlations among slopes and constants, (4) root-mean-square error, and (5) sensitivity of formant frequencies to articulatory change. It is shown that the results are qualitatively correct and that loess performs better than global regression. The forward mappings show different root-mean-square error properties than the inverse mappings indicating that this method is better suited for the forward mappings than the inverse mappings, at least for the data chosen for the current study. Some preliminary results on sensitivity of the first two formant frequencies to the two most important articulatory principal components are presented. PMID:19813812
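    A one-dimensional sketch of the locally weighted regression (loess) idea used here, with tricube weights over the nearest fraction of the data, may help; this illustrates the Cleveland (1979) method in miniature, not the paper's multivariate, principal-components implementation (the function name and data are invented):

```python
def loess_point(x0, xs, ys, frac=0.5):
    """Loess prediction at x0: fit a weighted least-squares line to the
    nearest `frac` of the points, with tricube weights that fall off
    with distance from x0, and evaluate the line at x0."""
    n = len(xs)
    k = max(2, int(frac * n))
    near = sorted(range(n), key=lambda i: abs(xs[i] - x0))[:k]
    dmax = max(abs(xs[i] - x0) for i in near) or 1.0
    w = {i: (1 - (abs(xs[i] - x0) / dmax) ** 3) ** 3 for i in near}
    sw = sum(w.values())
    mx = sum(w[i] * xs[i] for i in near) / sw
    my = sum(w[i] * ys[i] for i in near) / sw
    num = sum(w[i] * (xs[i] - mx) * (ys[i] - my) for i in near)
    den = sum(w[i] * (xs[i] - mx) ** 2 for i in near) or 1.0
    slope = num / den
    return my + slope * (x0 - mx)

# On exactly linear data the local fit reproduces the line.
xs = list(range(10))
ys = [2 * x + 1 for x in xs]
print(loess_point(4.5, xs, ys))  # ~10.0
```

    Because each local fit has its own slope, the mapping can bend smoothly across the vowel space, which is the property that lets loess outperform a single global regression in the study above.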

  11. How Native Do They Sound? An Acoustic Analysis of the Spanish Vowels of Elementary Spanish Immersion Students

    ERIC Educational Resources Information Center

    Menke, Mandy R.

    2015-01-01

    Language immersion students' lexical, syntactic, and pragmatic competencies are well documented, yet their phonological skill has remained relatively unexplored. This study investigates the Spanish vowel productions of a cross-sectional sample of 35 one-way Spanish immersion students. Learner productions were analyzed acoustically and compared to…

  12. Speechant: A Vowel Notation System to Teach English Pronunciation

    ERIC Educational Resources Information Center

    dos Reis, Jorge; Hazan, Valerie

    2012-01-01

    This paper introduces a new vowel notation system aimed at aiding the teaching of English pronunciation. This notation system, designed as an enhancement to orthographic text, uses concepts borrowed from the representation of musical notes and is also linked to the acoustic characteristics of vowel sounds. Vowel timbre is…

  13. Regional dialect variation in the vowel systems of typically developing children

    PubMed Central

    Jacewicz, Ewa; Fox, Robert Allen; Salmons, Joseph

    2015-01-01

    Purpose To investigate regional dialect variation in the vowel systems of normally developing 8- to 12-year-old children. Method Thirteen vowels in isolated h_d words were produced by 94 children and 93 adults, males and females. All participants spoke American English and were born and raised in one of three distinct dialect regions in the United States: western North Carolina (Southern dialect), central Ohio (Midland) and southeastern Wisconsin (Northern Midwestern dialect). Acoustic analysis included formant frequencies (F1 and F2) measured at five equidistant time points in a vowel and formant movement (trajectory length). Results Children’s productions showed many dialect-specific features comparable to those in adult speakers, both in terms of vowel dispersion patterns and formant movement. Different features were also found, including systemic vowel changes, significant monophthongization of selected vowels, and greater formant movement in diphthongs. Conclusions The acoustic results provide evidence for regional distinctiveness in children’s vowel systems. Children acquire not only the systemic relations among vowels but also their dialect-specific patterns of formant dynamics. By directing attention to regional variation in the production of American English vowels, this work may prove helpful for better understanding and interpreting the development of vowel categories and vowel systems in children. PMID:20966384
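    Trajectory length as described here, with F1 and F2 measured at five equidistant time points, is the summed Euclidean distance between successive measurements in the formant plane. A minimal sketch, with invented formant values for one diphthongal vowel:

```python
def trajectory_length(f1, f2):
    """Vowel trajectory length: sum of Euclidean distances between
    successive (F1, F2) measurements taken at equidistant time points."""
    return sum(((f1[i + 1] - f1[i]) ** 2 + (f2[i + 1] - f2[i]) ** 2) ** 0.5
               for i in range(len(f1) - 1))

# Five equidistant measurement points (invented values, Hz)
f1 = [700, 650, 600, 550, 500]
f2 = [1200, 1400, 1600, 1800, 2000]
print(round(trajectory_length(f1, f2), 1))  # 824.6
```

    A monophthong with a stable formant pattern yields a length near zero, so the measure separates monophthongized from diphthongal productions, which is how it captures the dialect differences reported above.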

  14. Perceptual invariance of coarticulated vowels over variations in speaking rate.

    PubMed

    Stack, Janet W; Strange, Winifred; Jenkins, James J; Clarke, William D; Trent, Sonja A

    2006-04-01

    This study examined the perception and acoustics of a large corpus of vowels spoken in consonant-vowel-consonant syllables produced in citation-form (lists) and spoken in sentences at normal and rapid rates by a female adult. Listeners correctly categorized the speaking rate of sentence materials as normal or rapid (2% errors) but did not accurately classify the speaking rate of the syllables when they were excised from the sentences (25% errors). In contrast, listeners accurately identified the vowels produced in sentences spoken at both rates when presented the sentences and when presented the excised syllables blocked by speaking rate or randomized. Acoustical analysis showed that formant frequencies at syllable midpoint for vowels in sentence materials showed "target undershoot" relative to citation-form values, but little change over speech rate. Syllable durations varied systematically with vowel identity, speaking rate, and voicing of final consonant. Vowel-inherent-spectral-change was invariant in direction of change over rate and context for most vowels. The temporal location of maximum F1 frequency further differentiated spectrally adjacent lax and tense vowels. It was concluded that listeners were able to utilize these rate- and context-independent dynamic spectrotemporal parameters to identify coarticulated vowels, even when sentential information about speaking rate was not available.

  15. Parkinson's disease and the effect of lexical factors on vowel articulation.

    PubMed

    Watson, Peter J; Munson, Benjamin

    2008-11-01

    Lexical factors (i.e., word frequency and phonological neighborhood density) influence speech perception and production. It is unknown whether these factors are affected by Parkinson's disease (PD). Ten men with PD and ten healthy men read CVC words (varying orthogonally in word frequency and density) aloud while being audio-recorded. Acoustic analysis was performed on duration and Bark-scaled F1-F2 values of the vowels contained in the words. Vowel space was larger for low-frequency words from dense neighborhoods than from sparse ones for both groups. However, the participants with PD did not show an effect of density on dispersion for high-frequency words.

  16. Vowel Acoustics in Parkinson's Disease and Multiple Sclerosis: Comparison of Clear, Loud, and Slow Speaking Conditions

    ERIC Educational Resources Information Center

    Tjaden, Kris; Lam, Jennifer; Wilding, Greg

    2013-01-01

    Purpose: The impact of clear speech, increased vocal intensity, and rate reduction on acoustic characteristics of vowels was compared in speakers with Parkinson's disease (PD), speakers with multiple sclerosis (MS), and healthy controls. Method: Speakers read sentences in habitual, clear, loud, and slow conditions. Variations in clarity,…

  17. Vowels in infant-directed speech: More breathy and more variable, but not clearer.

    PubMed

    Miyazawa, Kouki; Shinya, Takahito; Martin, Andrew; Kikuchi, Hideaki; Mazuka, Reiko

    2017-09-01

    Infant-directed speech (IDS) is known to differ from adult-directed speech (ADS) in a number of ways, and it has often been argued that some of these IDS properties facilitate infants' acquisition of language. An influential study in support of this view is Kuhl et al. (1997), which found that vowels in IDS are produced with expanded first and second formants (F1/F2) on average, indicating that the vowels are acoustically further apart in IDS than in ADS. These results have been interpreted to mean that the way vowels are produced in IDS makes infants' task of learning vowel categories easier. The present paper revisits this interpretation by means of a thorough analysis of IDS vowels using a large-scale corpus of Japanese natural utterances. We will show that the expansion of F1/F2 values does occur in spontaneous IDS even when the vowels' prosodic position, lexical pitch accent, and lexical bias are accounted for. When IDS vowels are compared to carefully read speech (CS) by the same mothers, however, larger variability among IDS vowel tokens means that the acoustic distances among vowels are farther apart only in CS, but not in IDS when compared to ADS. Finally, we will show that IDS vowels are significantly more breathy than ADS or CS vowels. Taken together, our results demonstrate that even though expansion of formant values occurs in spontaneous IDS, this expansion cannot be interpreted as an indication that the acoustic distances among vowels are farther apart, as is the case in CS. Instead, we found that IDS vowels are characterized by breathy voice, which has been associated with the communication of emotional affect. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Processing voiceless vowels in Japanese: Effects of language-specific phonological knowledge

    NASA Astrophysics Data System (ADS)

    Ogasawara, Naomi

    2005-04-01

    There has been little research on processing allophonic variation in the field of psycholinguistics. This study focuses on processing the voiced/voiceless allophonic alternation of high vowels in Japanese. Three perception experiments were conducted to explore how listeners parse out vowels with the voicing alternation from other segments in the speech stream and how the different voicing statuses of the vowel affect listeners' word recognition process. The results from the three experiments show that listeners use phonological knowledge of their native language for phoneme processing and for word recognition. However, interactions of the phonological and acoustic effects are observed to be different in each process. The facilitatory phonological effect and the inhibitory acoustic effect cancel out one another in phoneme processing; while in word recognition, the facilitatory phonological effect overrides the inhibitory acoustic effect.

  19. The Acoustic Characteristics of Diphthongs in Indian English

    ERIC Educational Resources Information Center

    Maxwell, Olga; Fletcher, Janet

    2010-01-01

    This paper presents the results of an acoustic analysis of English diphthongs produced by three L1 speakers of Hindi and four L1 speakers of Punjabi. Formant trajectories of rising and falling diphthongs (i.e., vowels where there is a clear rising or falling trajectory through the F1/F2 vowel space) were analysed in a corpus of citation-form…

  20. Acoustic Analysis on the Palatalized Vowels of Modern Mongolian

    ERIC Educational Resources Information Center

    Bulgantamir, Sangidkhorloo

    2015-01-01

In Modern Mongolian, the palatalized vowels [a?, ??, ??] before palatalized consonants are considered phoneme allophones by most scholars. Nevertheless, these palatalized vowels have distinctive features that can be demonstrated by minimal pairs; the question remains open and has not been profoundly studied. The purpose of this…

  1. Perceptual Adaptation of Voice Gender Discrimination with Spectrally Shifted Vowels

    ERIC Educational Resources Information Center

    Li, Tianhao; Fu, Qian-Jie

    2011-01-01

    Purpose: To determine whether perceptual adaptation improves voice gender discrimination of spectrally shifted vowels and, if so, which acoustic cues contribute to the improvement. Method: Voice gender discrimination was measured for 10 normal-hearing subjects, during 5 days of adaptation to spectrally shifted vowels, produced by processing the…

  2. Deep Brain Stimulation of the Subthalamic Nucleus Parameter Optimization for Vowel Acoustics and Speech Intelligibility in Parkinson's Disease

    ERIC Educational Resources Information Center

    Knowles, Thea; Adams, Scott; Abeyesekera, Anita; Mancinelli, Cynthia; Gilmore, Greydon; Jog, Mandar

    2018-01-01

    Purpose: The settings of 3 electrical stimulation parameters were adjusted in 12 speakers with Parkinson's disease (PD) with deep brain stimulation of the subthalamic nucleus (STN-DBS) to examine their effects on vowel acoustics and speech intelligibility. Method: Participants were tested under permutations of low, mid, and high STN-DBS frequency,…

  3. Articulatory Changes in Vowel Production following STN DBS and Levodopa Intake in Parkinson's Disease

    PubMed Central

    Cantin, Léo; Prud'Homme, Michel; Langlois, Mélanie

    2015-01-01

    Purpose. To investigate the impact of deep brain stimulation of the subthalamic nucleus (STN DBS) and levodopa intake on vowel articulation in dysarthric speakers with Parkinson's disease (PD). Methods. Vowel articulation was assessed in seven Quebec French speakers diagnosed with idiopathic PD who underwent STN DBS. Assessments were conducted on- and off-medication, first prior to surgery and then 1 year later. All recordings were made on-stimulation. Vowel articulation was measured using acoustic vowel space and formant centralization ratio. Results. Compared to the period before surgery, vowel articulation was reduced after surgery when patients were off-medication, while it was better on-medication. The impact of levodopa intake on vowel articulation changed with STN DBS: before surgery, levodopa impaired articulation, while it no longer had a negative effect after surgery. Conclusions. These results indicate that while STN DBS could lead to a direct deterioration in articulation, it may indirectly improve it by reducing the levodopa dose required to manage motor symptoms. These findings suggest that, with respect to speech production, STN DBS and levodopa intake cannot be investigated separately because the two are intrinsically linked. Along with motor symptoms, speech production should be considered when optimizing therapeutic management of patients with PD. PMID:26558134

  4. Cross-modal associations in synaesthesia: Vowel colours in the ear of the beholder.

    PubMed

    Moos, Anja; Smith, Rachel; Miller, Sam R; Simmons, David R

    2014-01-01

    Human speech conveys many forms of information, but for some exceptional individuals (synaesthetes), listening to speech sounds can automatically induce visual percepts such as colours. In this experiment, grapheme-colour synaesthetes and controls were asked to assign colours, or shades of grey, to different vowel sounds. We then investigated whether the acoustic content of these vowel sounds influenced participants' colour and grey-shade choices. We found that both colour and grey-shade associations varied systematically with vowel changes. The colour effect was significant for both participant groups, but significantly stronger and more consistent for synaesthetes. Because not all vowel sounds that we used are "translatable" into graphemes, we conclude that acoustic-phonetic influences co-exist with established graphemic influences in the cross-modal correspondences of both synaesthetes and non-synaesthetes.

  5. Optimizing Vowel Formant Measurements in Four Acoustic Analysis Systems for Diverse Speaker Groups

    PubMed Central

    Derdemezis, Ekaterini; Kent, Ray D.; Fourakis, Marios; Reinicke, Emily L.; Bolt, Daniel M.

    2016-01-01

    Purpose This study systematically assessed the effects of select linear predictive coding (LPC) analysis parameter manipulations on vowel formant measurements for diverse speaker groups using 4 trademarked Speech Acoustic Analysis Software Packages (SAASPs): CSL, Praat, TF32, and WaveSurfer. Method Productions of 4 words containing the corner vowels were recorded from 4 speaker groups with typical development (male and female adults and male and female children) and 4 speaker groups with Down syndrome (male and female adults and male and female children). Formant frequencies were determined from manual measurements using a consensus analysis procedure to establish formant reference values, and from the 4 SAASPs (using both the default analysis parameters and with adjustments or manipulations to select parameters). Smaller differences between values obtained from the SAASPs and the consensus analysis implied more optimal analysis parameter settings. Results Manipulations of default analysis parameters in CSL, Praat, and TF32 yielded more accurate formant measurements, though the benefit was not uniform across speaker groups and formants. In WaveSurfer, manipulations did not improve formant measurements. Conclusions The effects of analysis parameter manipulations on accuracy of formant-frequency measurements varied by SAASP, speaker group, and formant. The information from this study helps to guide clinical and research applications of SAASPs. PMID:26501214

  6. Sparseness of vowel category structure: Evidence from English dialect comparison

    PubMed Central

    Scharinger, Mathias; Idsardi, William J.

    2014-01-01

    Current models of speech perception tend to emphasize either fine-grained acoustic properties or coarse-grained abstract characteristics of speech sounds. We argue for a particular kind of 'sparse' vowel representations and provide new evidence that these representations account for the successful access of the corresponding categories. In an auditory semantic priming experiment, American English listeners made lexical decisions on targets (e.g. load) preceded by semantically related primes (e.g. pack). Changes of the prime vowel that crossed a vowel-category boundary (e.g. peck) were not treated as a tolerable variation, as assessed by a lack of priming, although the phonetic categories of the two different vowels considerably overlap in American English. Compared to the outcome of the same experiment with New Zealand English listeners, where such prime variations were tolerated, our experiment supports the view that phonological representations are important in guiding the mapping process from the acoustic signal to an abstract mental representation. Our findings are discussed with regard to current models of speech perception and recent findings from brain imaging research. PMID:24653528

  7. Vowel reduction across tasks for male speakers of American English.

    PubMed

    Kuo, Christina; Weismer, Gary

    2016-07-01

    This study examined acoustic variation of vowels within speakers across speech tasks. The overarching goal of the study was to understand within-speaker variation as one index of the range of normal speech motor behavior for American English vowels. Ten male speakers of American English performed four speech tasks including citation form sentence reading with a clear-speech style (clear-speech), citation form sentence reading (citation), passage reading (reading), and conversational speech (conversation). Eight monophthong vowels in a variety of consonant contexts were studied. Clear-speech was operationally defined as the reference point for describing variation. Acoustic measures associated with the conventions of vowel targets were obtained and examined. These included temporal midpoint formant frequencies for the first three formants (F1, F2, and F3) and the derived Euclidean distances in the F1-F2 and F2-F3 planes. Results indicated that reduction toward the center of the F1-F2 and F2-F3 planes increased in magnitude across the tasks in the order of clear-speech, citation, reading, and conversation. The cross-task variation was comparable for all speakers despite fine-grained individual differences. The characteristics of systematic within-speaker acoustic variation across tasks have potential implications for the understanding of the mechanisms of speech motor control and motor speech disorders.
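The Euclidean-distance convention referenced above can be sketched as the distance of each vowel's formant means from the center of the F1-F2 plane, with reduction appearing as smaller distances. All values below are hypothetical illustrations, not data from the study.

```python
import math

def plane_center(vowels):
    """Center of the F1-F2 plane: mean of the per-vowel (F1, F2) means."""
    f1s = [f1 for f1, _ in vowels.values()]
    f2s = [f2 for _, f2 in vowels.values()]
    return sum(f1s) / len(f1s), sum(f2s) / len(f2s)

def distances_from_center(vowels):
    """Euclidean distance of each vowel from the plane center;
    smaller distances indicate reduction toward the center."""
    cx, cy = plane_center(vowels)
    return {v: math.hypot(f1 - cx, f2 - cy) for v, (f1, f2) in vowels.items()}

# Hypothetical midpoint (F1, F2) means in Hz for a clear-speech task...
clear = {"i": (310, 2200), "ae": (700, 1700), "a": (750, 1100), "u": (350, 900)}
# ...and a conversational task, reduced toward the center:
conversation = {"i": (360, 2000), "ae": (650, 1650), "a": (700, 1200), "u": (400, 1050)}

mean_clear = sum(distances_from_center(clear).values()) / 4
mean_conv = sum(distances_from_center(conversation).values()) / 4
print(mean_conv < mean_clear)  # → True (reduction relative to clear speech)
```

The same computation applies in the F2-F3 plane by substituting those formants.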

  8. Multichannel Compression: Effects of Reduced Spectral Contrast on Vowel Identification

    ERIC Educational Resources Information Center

    Bor, Stephanie; Souza, Pamela; Wright, Richard

    2008-01-01

    Purpose: To clarify if large numbers of wide dynamic range compression channels provide advantages for vowel identification and to measure its acoustic effects. Methods: Eight vowels produced by 12 talkers in the /hVd/ context were compressed using 1, 2, 4, 8, and 16 channels. Formant contrast indices (mean formant peak minus mean formant trough;…

  9. Tongue- and Jaw-Specific Contributions to Acoustic Vowel Contrast Changes in the Diphthong /ai/ in Response to Slow, Loud, And Clear Speech

    ERIC Educational Resources Information Center

    Mefferd, Antje S.

    2017-01-01

    Purpose: This study sought to determine decoupled tongue and jaw displacement changes and their specific contributions to acoustic vowel contrast changes during slow, loud, and clear speech. Method: Twenty typical talkers repeated "see a kite again" 5 times in 4 speech conditions (typical, slow, loud, clear). Speech kinematics were…

  10. Clear Speech Variants: An Acoustic Study in Parkinson's Disease.

    PubMed

    Lam, Jennifer; Tjaden, Kris

    2016-08-01

    The authors investigated how different variants of clear speech affect segmental and suprasegmental acoustic measures of speech in speakers with Parkinson's disease and a healthy control group. A total of 14 participants with Parkinson's disease and 14 control participants served as speakers. Each speaker produced 18 different sentences selected from the Sentence Intelligibility Test (Yorkston & Beukelman, 1996). All speakers produced stimuli in 4 speaking conditions (habitual, clear, overenunciate, and hearing impaired). Segmental acoustic measures included vowel space area and first moment (M1) coefficient difference measures for consonant pairs. Second formant slope of diphthongs and measures of vowel and fricative durations were also obtained. Suprasegmental measures included fundamental frequency, sound pressure level, and articulation rate. For the majority of adjustments, all variants of clear speech instruction differed from the habitual condition. The overenunciate condition elicited the greatest magnitude of change for segmental measures (vowel space area, vowel durations) and the slowest articulation rates. The hearing impaired condition elicited the greatest fricative durations and suprasegmental adjustments (fundamental frequency, sound pressure level). Findings have implications for a model of speech production for healthy speakers as well as for speakers with dysarthria. Findings also suggest that particular clear speech instructions may target distinct speech subsystems.
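The vowel space area measure mentioned above is conventionally computed as the area of the polygon traced by the corner-vowel (F1, F2) means, e.g., with the shoelace formula. A minimal sketch with hypothetical formant values:

```python
def vowel_space_area(corners):
    """Shoelace formula for the area of the polygon traced by
    corner-vowel (F1, F2) means, ordered around the perimeter."""
    n = len(corners)
    area = 0.0
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical corner-vowel (F1, F2) means in Hz, ordered /i/-/ae/-/a/-/u/:
habitual = [(310, 2200), (700, 1700), (750, 1100), (350, 900)]
print(vowel_space_area(habitual))  # → 368500.0 (Hz^2)
```

Larger areas under clear or overenunciated speech would then quantify the expansion reported in such studies.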

  11. A comparison of vowel normalization procedures for language variation research

    NASA Astrophysics Data System (ADS)

    Adank, Patti; Smits, Roel; van Hout, Roeland

    2004-11-01

An evaluation of vowel normalization procedures for the purpose of studying language variation is presented. The procedures were compared on how effectively they (a) preserve phonemic information, (b) preserve information about the talker's regional background (or sociolinguistic information), and (c) minimize anatomical/physiological variation in acoustic representations of vowels. Recordings were made for 80 female talkers and 80 male talkers of Dutch. These talkers were stratified according to their gender and regional background. The normalization procedures were applied to measurements of the fundamental frequency and the first three formant frequencies for a large set of vowel tokens. The normalization procedures were evaluated through statistical pattern analysis. The results show that normalization procedures that use information across multiple vowels ("vowel-extrinsic" information) to normalize a single vowel token performed better than those that include only information contained in the vowel token itself ("vowel-intrinsic" information). Furthermore, the results show that normalization procedures that operate on individual formants performed better than those that use information across multiple formants (e.g., "formant-extrinsic" F2-F1).
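A well-known procedure in the best-performing class identified here (vowel-extrinsic, formant-intrinsic) is Lobanov's z-score normalization, which standardizes each formant against that formant's mean and standard deviation over all of one talker's vowel tokens. A minimal sketch with hypothetical formant values:

```python
import statistics

def lobanov(tokens_hz):
    """Lobanov z-score normalization: a vowel-extrinsic,
    formant-intrinsic procedure. Each formant is standardized
    against its own mean and s.d. across all of one talker's
    vowel tokens."""
    f1s = [f1 for f1, _ in tokens_hz]
    f2s = [f2 for _, f2 in tokens_hz]
    m1, s1 = statistics.mean(f1s), statistics.stdev(f1s)
    m2, s2 = statistics.mean(f2s), statistics.stdev(f2s)
    return [((f1 - m1) / s1, (f2 - m2) / s2) for f1, f2 in tokens_hz]

# Hypothetical (F1, F2) tokens in Hz from one talker (/i/, /a/, /u/):
print(lobanov([(300, 2300), (750, 1200), (350, 800)]))
```

Because the z-scores are computed per talker, anatomically driven differences in absolute formant values are removed while relative vowel positions are preserved.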

  13. The Acoustic Correlates of Breathy Voice: A Study of Source-Vowel Interaction.

    NASA Astrophysics Data System (ADS)

    Lin, Yeong-Fen Emily

    This thesis is the result of an investigation of source-vowel interaction from the point of view of perception. Major objectives include the identification of the acoustic correlates of breathy voice and the disclosure of the interdependent relationship between the perception of vowel identity and breathiness. Two experiments were conducted to achieve these objectives. In the first experiment, voice samples from one control group and seven patient groups were compared. The control group consisted of five female and five male adults. The ten normal speakers were recruited to perform a sustained vowel phonation task with constant pitch and loudness. The voice samples of seventy patients were retrieved from a hospital database, with vowels extracted from sentences repeated by patients at their habitual pitch and loudness. The seven patient groups were defined by unique combinations of the patients' measures of mean flow rate and glottal resistance. Eighteen acoustic variables were treated with a three-way (Gender x Group x Vowel) ANOVA. Parameters showing a significant female-male difference as well as group differences, especially those between the presumed breathy group and the other groups, were identified as relevant to the distinction of breathy voice. As a result, F1-F3 amplitude difference and slope were found to be most effective in distinguishing breathy voice. Other acoustic correlates of breathy voice included F1 bandwidth, RMS-H1 amplitude difference, and F1-F2 amplitude difference and slope. In the second experiment, a formant synthesizer was used to generate vowel stimuli with varying spectral tilt and F1 bandwidth. Thirteen native American English speakers made dissimilarity judgements on paired stimuli in terms of vowel identity and breathiness. Listeners' perceptual vowel spaces were found to be affected by changes in the acoustic correlates of breathy voice. The threshold of detecting a change of vocal quality in the breathiness domain was also

  14. Toward a Systematic Evaluation of Vowel Target Events across Speech Tasks

    ERIC Educational Resources Information Center

    Kuo, Christina

    2011-01-01

    The core objective of this study was to examine whether acoustic variability of vowel production in American English, across speaking tasks, is systematic. Ten male speakers who spoke a relatively homogeneous Wisconsin dialect produced eight monophthong vowels (in hVd and CVC contexts) in four speaking tasks, including clear-speech, citation form,…

  15. Malaysian English: An Instrumental Analysis of Vowel Contrasts

    ERIC Educational Resources Information Center

    Pillai, Stefanie; Don, Zuraidah Mohd.; Knowles, Gerald; Tang, Jennifer

    2010-01-01

    This paper makes an instrumental analysis of English vowel monophthongs produced by 47 female Malaysian speakers. The focus is on the distribution of Malaysian English vowels in the vowel space, and the extent to which there is phonetic contrast between traditionally paired vowels. The results indicate that, like neighbouring varieties of English,…

  16. A comparison of vowel formant frequencies in the babbling of infants exposed to Canadian English and Canadian French

    NASA Astrophysics Data System (ADS)

    Mattock, Karen; Rvachew, Susan; Polka, Linda; Turner, Sara

    2005-04-01

    It is well established that normally developing infants typically enter the canonical babbling stage of production between 6 and 8 months of age. However, whether the linguistic environment affects babbling, either in terms of the phonetic inventory of vowels produced by infants [Oller & Eilers (1982)] or the acoustics of vowel formants [Boysson-Bardies et al. (1989)], is controversial. The spontaneous speech of 42 Canadian English- and Canadian French-learning infants aged 8 to 11, 12 to 15, and 16 to 18 months was recorded and digitized to yield a total of 1253 vowels that were spectrally analyzed and statistically compared for differences in first and second formant frequencies. Language-specific influences on vowel acoustics were hypothesized. Preliminary results reveal changes in formant frequencies as a function of age and language background. There is evidence of decreases over age in the F1 values of French but not English infants' vowels, and decreases over age in the F2 values of English but not French infants' vowels. The notion of an age-related shift in infants' attention to language-specific acoustic features, and the implications of this for early vocal development as well as for the production of Canadian English and Canadian French vowels, will be discussed.

  17. Pitch (F0) and formant profiles of human vowels and vowel-like baboon grunts: The role of vocalizer body size and voice-acoustic allometry

    NASA Astrophysics Data System (ADS)

    Rendall, Drew; Kollias, Sophie; Ney, Christina; Lloyd, Peter

    2005-02-01

    Key voice features, fundamental frequency (F0) and formant frequencies, can vary extensively between individuals. Much of the variation can be traced to differences in the size of the larynx and vocal-tract cavities, but whether these differences in turn simply reflect differences in speaker body size (i.e., neutral vocal allometry) remains unclear. Quantitative analyses were therefore undertaken to test the relationship between speaker body size and voice F0 and formant frequencies for human vowels. To test the taxonomic generality of the relationships, the same analyses were conducted on the vowel-like grunts of baboons, whose phylogenetic proximity to humans and similar vocal production biology and voice acoustic patterns recommend them for such comparative research. For adults of both species, males were larger than females and had lower mean voice F0 and formant frequencies. However, beyond this, F0 variation did not track body-size variation between the sexes in either species, nor within sexes in humans. In humans, formant variation correlated significantly with speaker height but only in males and not in females. Implications for general vocal allometry are discussed, as are implications for speech origins theories, and challenges to them, related to laryngeal position and vocal tract length.

  18. Investigating the effect of STN-DBS stimulation and different frequency settings on the acoustic-articulatory features of vowels.

    PubMed

    Yilmaz, Atilla; Sarac, Elif Tuğba; Aydinli, Fatma Esen; Yildizgoren, Mustafa Turgut; Okuyucu, Emine Esra; Serarslan, Yurdal

    2018-06-25

    Parkinson's disease (PD) is the second most frequent progressive neurodegenerative disorder. In addition to motor symptoms, nonmotor symptoms and voice and speech disorders can also develop in 90% of PD patients. The aim of our study was to investigate the effects of DBS and different DBS frequencies on the speech acoustics of vowels in PD patients. The study included 16 patients who underwent STN-DBS surgery due to PD. Voice recordings of the vowels [a], [e], [i], and [o] were made at stimulation frequencies of 230, 130, 90, and 60 Hz and at off-stimulation. The recordings were analyzed with the Praat software, and the effects on the first (F1), second (F2), and third (F3) formant frequencies were evaluated. A significant difference was found for the F1 value of the vowel [a] at 130 Hz compared to off-stimulation. However, no other significant differences were found among the three formant frequencies with regard to the stimulation frequencies and off-stimulation. In addition, though not statistically significant, stimulation at 60 and 230 Hz led to several differences in the formant frequencies of the other three vowels. Our results indicated that STN-DBS stimulation at 130 Hz had a significant positive effect on the articulation of [a] compared to off-stimulation. Although not statistically significant, stimulation at 60 and 230 Hz may also affect the articulation of [e], [i], and [o]; this effect needs to be investigated in future studies with larger numbers of participants.

  19. Mechanisms of Vowel Variation in African American English.

    PubMed

    Holt, Yolanda Feimster

    2018-02-15

    This research explored mechanisms of vowel variation in African American English by comparing 2 geographically distant groups of African American and White American English speakers for participation in the African American Shift and the Southern Vowel Shift. Thirty-two male (African American: n = 16, White American controls: n = 16) lifelong residents of cities in eastern and western North Carolina produced heed, hid, heyd, head, had, hod, hawed, whod, hood, hoed, hide, howed, hoyd, and heard 3 times each in random order. Formant frequency, duration, and acoustic analyses were completed for the vowels /i, ɪ, e, ɛ, æ, ɑ, ɔ, u, ʊ, o, aɪ, aʊ, oɪ, ɝ/ produced in the listed words. African American English speakers show vowel variation. In the west, the African American English speakers are participating in the Southern Vowel Shift and hod fronting of the African American Shift. In the east, neither the African American English speakers nor their White peers are participating in the Southern Vowel Shift. The African American English speakers show limited participation in the African American Shift. The results provide evidence of regional and socio-ethnic variation in African American English in North Carolina.

  20. Call me Alix, not Elix: vowels are more important than consonants in own-name recognition at 5 months.

    PubMed

    Bouchon, Camillia; Floccia, Caroline; Fux, Thibaut; Adda-Decker, Martine; Nazzi, Thierry

    2015-07-01

    Consonants and vowels differ acoustically and articulatorily, but also functionally: Consonants are more relevant for lexical processing, and vowels for prosodic/syntactic processing. These functional biases could be powerful bootstrapping mechanisms for learning language, but their developmental origin remains unclear. The relative importance of consonants and vowels at the onset of lexical acquisition was assessed in French-learning 5-month-olds by testing sensitivity to minimal phonetic changes in their own name. Infants' reactions to mispronunciations revealed sensitivity to vowel but not consonant changes. Vowels were also more salient (on duration and intensity) but less distinct (on spectrally based measures) than consonants. Lastly, vowel (but not consonant) mispronunciation detection was modulated by acoustic factors, in particular spectrally based distance. These results establish that consonant changes do not affect lexical recognition at 5 months, while vowel changes do; the consonant bias observed later in development does not emerge until after 5 months through additional language exposure. © 2014 John Wiley & Sons Ltd.

  1. Acoustic Analysis of Persian Vowels in Cochlear Implant Users: A Comparison With Hearing-impaired Children Using Hearing Aid and Normal-hearing Children.

    PubMed

    Jafari, Narges; Yadegari, Fariba; Jalaie, Shohreh

    2016-11-01

    Vowel production is in essence auditorily controlled; hence, the role of auditory feedback in vowel production is very important. The purpose of this study was to compare formant frequencies and vowel space in Persian-speaking deaf children with cochlear implantation (CI), hearing-impaired children with hearing aid (HA), and their normal-hearing (NH) peers. A total of 40 prelingually hearing-impaired children and 20 NH children participated in this study. Participants were native Persian speakers. The averages of the first formant frequency (F1) and second formant frequency (F2) of the six vowels were measured using Praat software (version 5.1.44). One-way analysis of variance (ANOVA) was used to analyze the differences between the three groups. The mean value of F1 for vowel /i/ was significantly different (between CI and NH children and also between HA and NH groups) (F(2, 57) = 9.229, P < 0.001). For vowel /a/, the mean value of F1 was significantly different (between HA and NH groups) (F(2, 57) = 3.707, P < 0.05). Regarding the second formant frequency, a post hoc Tukey test revealed that the differences were between HA and NH children (P < 0.05). F2 for vowel /o/ was significantly different (F(2, 57) = 4.572, P < 0.05). Also, the mean value of F2 for vowel /a/ was significantly different (F(2, 57) = 3.184, P < 0.05). About 1 year after implantation, the formants shift closer to those of the NH listeners, who tend to have more expanded vowel spaces than hearing-impaired listeners with hearing aids. Probably, this condition is because CI has a subtly positive impact on the place of articulation of vowels. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
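The between-group comparisons above report statistics of the form F(2, 57), i.e., a one-way ANOVA over three groups totaling 60 participants. The F statistic can be sketched by hand; the group data below are hypothetical, not the study's measurements:

```python
import statistics

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square, plus the degrees of freedom."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, (df_between, df_within)

# Three hypothetical groups of 20 F1 values each gives df = (2, 57):
ci = [320.0 + i for i in range(20)]
ha = [340.0 + i for i in range(20)]
nh = [300.0 + i for i in range(20)]
F, df = one_way_anova_F([ci, ha, nh])
print(df)  # → (2, 57)
```

A significant F is then typically followed by a post hoc test (e.g., Tukey's HSD, as in the study) to locate which group pairs differ.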

  2. From prosodic structure to acoustic saliency: A fMRI investigation of speech rate, clarity, and emphasis

    NASA Astrophysics Data System (ADS)

    Golfinopoulos, Elisa

    Acoustic variability in fluent speech can arise at many stages in speech production planning and execution. For example, at the phonological encoding stage, the grouping of phonemes into syllables determines which segments are coarticulated and, by consequence, segment-level acoustic variation. Likewise phonetic encoding, which determines the spatiotemporal extent of articulatory gestures, will affect the acoustic detail of segments. Functional magnetic resonance imaging (fMRI) was used to measure brain activity of fluent adult speakers in four speaking conditions: fast, normal, clear, and emphatic (or stressed) speech. These speech manner changes typically result in acoustic variations that do not change the lexical or semantic identity of productions but do affect the acoustic saliency of phonemes, syllables and/or words. Acoustic responses recorded inside the scanner were assessed quantitatively using eight acoustic measures and sentence duration was used as a covariate of non-interest in the neuroimaging analysis. Compared to normal speech, emphatic speech was characterized acoustically by a greater difference between stressed and unstressed vowels in intensity, duration, and fundamental frequency, and neurally by increased activity in right middle premotor cortex and supplementary motor area, and bilateral primary sensorimotor cortex. These findings are consistent with right-lateralized motor planning of prosodic variation in emphatic speech. Clear speech involved an increase in average vowel and sentence durations and average vowel spacing, along with increased activity in left middle premotor cortex and bilateral primary sensorimotor cortex. These findings are consistent with an increased reliance on feedforward control, resulting in hyper-articulation, under clear as compared to normal speech. Fast speech was characterized acoustically by reduced sentence duration and average vowel spacing, and neurally by increased activity in left anterior frontal

  3. Characteristics of the Lax Vowel Space in Dysarthria

    ERIC Educational Resources Information Center

    Tjaden, Kris; Rivera, Deanna; Wilding, Gregory; Turner, Greg S.

    2005-01-01

    It has been hypothesized that lax vowels may be relatively unaffected by dysarthria, owing to the reduced vocal tract shapes required for these phonetic events (G. S. Turner, K. Tjaden, & G. Weismer, 1995). It also has been suggested that lax vowels may be especially susceptible to speech mode effects (M. A. Picheny, N. I. Durlach, & L. D. Braida,…

  4. Synthesis fidelity and time-varying spectral change in vowels

    NASA Astrophysics Data System (ADS)

    Assmann, Peter F.; Katz, William F.

    2005-02-01

    Recent studies have shown that synthesized versions of American English vowels are less accurately identified when the natural time-varying spectral changes are eliminated by holding the formant frequencies constant over the duration of the vowel. A limitation of these experiments has been that vowels produced by formant synthesis are generally less accurately identified than the natural vowels after which they are modeled. To overcome this limitation, a high-quality speech analysis-synthesis system (STRAIGHT) was used to synthesize versions of 12 American English vowels spoken by adults and children. Vowels synthesized with STRAIGHT were identified as accurately as the natural versions, in contrast with previous results from our laboratory showing identification rates 9%-12% lower for the same vowels synthesized using the cascade formant model. Consistent with earlier studies, identification accuracy was not reduced when the fundamental frequency was held constant across the vowel. However, elimination of time-varying changes in the spectral envelope using STRAIGHT led to a greater reduction in accuracy (23%) than was previously found with cascade formant synthesis (11%). A statistical pattern recognition model, applied to acoustic measurements of the natural and synthesized vowels, predicted both the higher identification accuracy for vowels synthesized using STRAIGHT compared to formant synthesis, and the greater effects of holding the formant frequencies constant over time with STRAIGHT synthesis. Taken together, the experiment and modeling results suggest that formant estimation errors and incorrect rendering of spectral and temporal cues by cascade formant synthesis contribute to lower identification accuracy and underestimation of the role of time-varying spectral change in vowels.

  5. Cross-language categorization of French and German vowels by naive American listeners.

    PubMed

    Strange, Winifred; Levy, Erika S; Law, Franzo F

    2009-09-01

    American English (AE) speakers' perceptual assimilation of 14 North German (NG) and 9 Parisian French (PF) vowels was examined in two studies using citation-form disyllables (study 1) and sentences with vowels surrounded by labial and alveolar consonants in multisyllabic nonsense words (study 2). Listeners categorized multiple tokens of each NG and PF vowel as most similar to selected AE vowels and rated their category "goodness" on a nine-point Likert scale. Front, rounded vowels were assimilated primarily to back AE vowels, despite their acoustic similarity to front AE vowels. In study 1, they were considered poorer exemplars of AE vowels than were NG and PF back, rounded vowels; in study 2, front and back, rounded vowels were perceived as similar to each other. Assimilation of some front, unrounded and back, rounded NG and PF vowels varied with language, speaking style, and consonantal context. Differences in perceived similarity often could not be predicted from context-specific cross-language spectral similarities. Results suggest that listeners can access context-specific, phonetic details when listening to citation-form materials, but assimilate non-native vowels on the basis of context-independent phonological equivalence categories when processing continuous speech. Results are interpreted within the Automatic Selective Perception model of speech perception.

  6. A comparison of vowel productions in prelingually deaf children using cochlear implants, severe hearing-impaired children using conventional hearing aids and normal-hearing children.

    PubMed

    Baudonck, Nele; Van Lierde, K; Dhooge, I; Corthals, P

    2011-01-01

    The purpose of this study was to compare vowel productions by deaf cochlear implant (CI) children, hearing-impaired hearing aid (HA) children and normal-hearing (NH) children. 73 children [mean age: 9;14 years (years;months)] participated: 40 deaf CI children, 34 moderately to profoundly hearing-impaired HA children and 42 NH children. For the 3 corner vowels [a], [i] and [u], F(1), F(2) and the intrasubject SD were measured using the Praat software. Spectral separation between these vowel formants and vowel space were calculated. The significant effects in the CI group all pertain to a higher intrasubject variability in formant values, whereas the significant effects in the HA group all pertain to lower formant values. Both hearing-impaired subgroups showed a tendency toward greater intervowel distances and vowel space. Several subtle deviations in the vowel production of deaf CI children and hearing-impaired HA children could be established, using a well-defined acoustic analysis. CI children as well as HA children in this study tended to overarticulate, which hypothetically can be explained by a lack of auditory feedback and an attempt to compensate for it through proprioceptive feedback during articulatory maneuvers. Copyright © 2010 S. Karger AG, Basel.
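    The corner-vowel space computation described above (F1/F2 of [a], [i], [u]) can be sketched in a few lines. This is a minimal illustration, not the study's analysis script: the area of the triangle spanned by the three corner vowels in the F1-F2 plane is computed with the shoelace formula, and the formant values below are invented placeholders.

    ```python
    # Sketch (not the authors' code): corner-vowel space area from mean
    # F1/F2 values, via the shoelace formula over the (F2, F1) polygon.

    def vowel_space_area(formants):
        """Area (Hz^2) of the polygon through (F1, F2) corner-vowel points."""
        pts = [(f2, f1) for f1, f2 in formants]  # plot convention: x = F2, y = F1
        area = 0.0
        n = len(pts)
        for i in range(n):
            x1, y1 = pts[i]
            x2, y2 = pts[(i + 1) % n]
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    # Illustrative mean (F1, F2) values in Hz for /a/, /i/, /u/
    corners = [(850, 1350), (300, 2500), (350, 900)]
    print(vowel_space_area(corners))
    ```

    The same function generalizes to a quadrilateral vowel space if a fourth corner vowel is added, since the shoelace formula handles any simple polygon.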

  7. Vowel Height Allophony and Dorsal Place Contrasts in Cochabamba Quechua.

    PubMed

    Gallagher, Gillian

    2016-01-01

    This paper reports on the results of two studies investigating the role of allophony in cueing phonemic contrasts. In Cochabamba Quechua, the uvular-velar place distinction is often cued by additional differences in the height of the surrounding vowels. An acoustic study documents the lowering effect of a preceding tautomorphemic or a following heteromorphemic uvular on the high vowels /i u/. A discrimination study finds that vowel height is a significant cue to the velar-uvular place contrast. These findings support a view of contrasts as collections of distinguishing properties, as opposed to oppositions in a single distinctive feature. © 2016 S. Karger AG, Basel.

  8. Vowel Intelligibility in Children with and without Dysarthria: An Exploratory Study

    ERIC Educational Resources Information Center

    Levy, Erika S.; Leone, Dorothy; Moya-Gale, Gemma; Hsu, Sih-Chiao; Chen, Wenli; Ramig, Lorraine O.

    2016-01-01

    Children with dysarthria due to cerebral palsy (CP) present with decreased vowel space area and reduced word intelligibility. Although a robust relationship exists between vowel space and word intelligibility, little is known about the intelligibility of vowels in this population. This exploratory study investigated the intelligibility of American…

  9. Vowel category dependence of the relationship between palate height, tongue height, and oral area.

    PubMed

    Hasegawa-Johnson, Mark; Pizza, Shamala; Alwan, Abeer; Cha, Jul Setsu; Haker, Katherine

    2003-06-01

    This article evaluates intertalker variance of oral area, logarithm of the oral area, tongue height, and formant frequencies as a function of vowel category. The data consist of coronal magnetic resonance imaging (MRI) sequences and acoustic recordings of 5 talkers, each producing 11 different vowels. Tongue height (left, right, and midsagittal), palate height, and oral area were measured in 3 coronal sections anterior to the oropharyngeal bend and were subjected to multivariate analysis of variance, variance ratio analysis, and regression analysis. The primary finding of this article is that oral area (between palate and tongue) showed less intertalker variance during production of vowels with an oral place of articulation (palatal and velar vowels) than during production of vowels with a uvular or pharyngeal place of articulation. Although oral area variance is place dependent, percentage variance (log area variance) is not place dependent. Midsagittal tongue height in the molar region was positively correlated with palate height during production of palatal vowels, but not during production of nonpalatal vowels. Taken together, these results suggest that small oral areas are characterized by relatively talker-independent vowel targets and that meeting these talker-independent targets is important enough that each talker adjusts his or her own tongue height to compensate for talker-dependent differences in constriction anatomy. Computer simulation results are presented to demonstrate that these results may be explained by an acoustic control strategy: When talkers with very different anatomical characteristics try to match talker-independent formant targets, the resulting area variances are minimized near the primary vocal tract constriction.

  10. Mapping the Speech Code: Cortical Responses Linking the Perception and Production of Vowels

    PubMed Central

    Schuerman, William L.; Meyer, Antje S.; McQueen, James M.

    2017-01-01

    The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation. PMID:28439232

  11. Vowel selection and its effects on perturbation and nonlinear dynamic measures.

    PubMed

    Maccallum, Julia K; Zhang, Yu; Jiang, Jack J

    2011-01-01

    Acoustic analysis of voice is typically conducted on recordings of sustained vowel phonation. This study applied perturbation and nonlinear dynamic analyses to the vowels /a/, /i/, and /u/ in order to determine vowel selection effects on analysis. Forty subjects (20 males and 20 females) with normal voices participated in recording. Traditional parameters of fundamental frequency, signal-to-noise ratio, percent jitter, and percent shimmer were calculated for the signals using CSpeech. Nonlinear dynamic parameters of correlation dimension and second-order entropy were also calculated. Perturbation analysis results were largely incongruous in this study and in previous research. Fundamental frequency results corroborated previous work, indicating higher fundamental frequency for /i/ and /u/ and lower fundamental frequency for /a/. Signal-to-noise ratio results showed that /i/ and /u/ have greater harmonic levels than /a/. Results of nonlinear dynamic analysis suggested that more complex activity may be evident in /a/ than in /i/ or /u/. Percent jitter and percent shimmer may not be useful for description of acoustic differences between vowels. Fundamental frequency, signal-to-noise ratio, and nonlinear dynamic parameters may be applied to characterize /a/ as having lower frequency, higher noise, and greater nonlinear components than /i/ and /u/. Copyright © 2010 S. Karger AG, Basel.
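    The perturbation parameters named above have simple classical definitions. The sketch below is not CSpeech, just a minimal illustration of local percent jitter and percent shimmer computed from cycle-to-cycle period and peak-amplitude sequences; the input values are invented.

    ```python
    # Sketch of classic local perturbation measures: mean absolute
    # cycle-to-cycle difference, normalized by the overall mean, in percent.

    def percent_jitter(periods):
        """Mean absolute period difference / mean period, in percent."""
        diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
        return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

    def percent_shimmer(amps):
        """Mean absolute amplitude difference / mean amplitude, in percent."""
        diffs = [abs(a - b) for a, b in zip(amps, amps[1:])]
        return 100.0 * (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))

    periods = [0.0100, 0.0101, 0.0099, 0.0100]  # seconds, illustrative
    amps = [1.00, 0.98, 1.02, 1.00]             # peak amplitudes, illustrative
    print(round(percent_jitter(periods), 2))
    print(round(percent_shimmer(amps), 2))
    ```

    Real analysis packages offer several jitter/shimmer variants (e.g., period-perturbation quotients over longer windows); this local form is only the simplest member of that family.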

  12. Evaluating acoustic speaker normalization algorithms: evidence from longitudinal child data.

    PubMed

    Kohn, Mary Elizabeth; Farrington, Charlie

    2012-03-01

    Speaker vowel formant normalization, a technique that controls for variation introduced by physical differences between speakers, is necessary in variationist studies to compare speakers of different ages, genders, and physiological makeup in order to understand non-physiological variation patterns within populations. Many algorithms have been established to reduce variation introduced into vocalic data from physiological sources. The lack of real-time studies tracking the effectiveness of these normalization algorithms from childhood through adolescence inhibits exploration of child participation in vowel shifts. This analysis compares normalization techniques applied to data collected from ten African American children across five time points. Linear regressions compare the reduction in variation attributable to age and gender for each speaker for the vowels BEET, BAT, BOT, BUT, and BOAR. A normalization technique is successful if it maintains variation attributable to a reference sociolinguistic variable, while reducing variation attributable to age. Results indicate that normalization techniques which rely on both a measure of central tendency and range of the vowel space perform best at reducing variation attributable to age, although some variation attributable to age persists after normalization for some sections of the vowel space. © 2012 Acoustical Society of America
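    As an illustration of a normalization technique that combines a measure of central tendency with a range measure, the sketch below implements Lobanov z-score normalization. This is my choice of example (the abstract does not name a specific winning algorithm): each formant value is re-expressed relative to the speaker's own formant mean and standard deviation, removing much of the physiological scaling between speakers.

    ```python
    # Sketch of Lobanov-style speaker normalization: z-score each formant
    # within a speaker, using the speaker's mean (central tendency) and
    # standard deviation (a range measure).

    def lobanov(formant_values):
        """Z-score a speaker's formant values: (F - mean) / sd."""
        n = len(formant_values)
        mean = sum(formant_values) / n
        sd = (sum((f - mean) ** 2 for f in formant_values) / n) ** 0.5
        return [(f - mean) / sd for f in formant_values]

    # Illustrative F1 values (Hz) for one speaker's vowel tokens
    f1 = [300, 500, 700, 900]
    print([round(z, 2) for z in lobanov(f1)])
    ```

    In practice F1 and F2 are normalized separately, and the mean/sd are computed over a balanced set of vowel tokens so that one over-represented vowel does not skew the speaker statistics.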

  13. Cross-linguistic studies of children’s and adults’ vowel spaces

    PubMed Central

    Chung, Hyunju; Kong, Eun Jong; Edwards, Jan; Weismer, Gary; Fourakis, Marios; Hwang, Youngdeok

    2012-01-01

    This study examines cross-linguistic variation in the location of shared vowels in the vowel space across five languages (Cantonese, American English, Greek, Japanese, and Korean) and three age groups (2-year-olds, 5-year-olds, and adults). The vowels /a/, /i/, and /u/ were elicited in familiar words using a word repetition task. The productions of target words were recorded and transcribed by native speakers of each language. For correctly produced vowels, first and second formant frequencies were measured. In order to remove the effect of vocal tract size on these measurements, a normalization approach that calculates distance and angular displacement from the speaker centroid was adopted. Language-specific differences in the location of shared vowels in the formant values as well as the shape of the vowel spaces were observed for both adults and children. PMID:22280606
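    The normalization described above re-expresses each vowel token by its distance and angular displacement from the speaker's centroid. A minimal sketch of that idea follows; the paper's exact formulation may differ, and the formant values are illustrative.

    ```python
    # Sketch of centroid-based normalization: map each (F1, F2) token to
    # polar coordinates (distance, angle) about the speaker's centroid,
    # factoring out overall vocal tract size.
    import math

    def centroid_polar(tokens):
        """Map (F1, F2) tokens to (distance, angle) from the speaker centroid."""
        n = len(tokens)
        c1 = sum(f1 for f1, _ in tokens) / n
        c2 = sum(f2 for _, f2 in tokens) / n
        out = []
        for f1, f2 in tokens:
            dist = math.hypot(f1 - c1, f2 - c2)
            angle = math.atan2(f1 - c1, f2 - c2)  # radians; F1 offset as y, F2 as x
            out.append((dist, angle))
        return out

    # Illustrative (F1, F2) means for /a/, /i/, /u/ from one speaker
    tokens = [(850, 1350), (300, 2500), (350, 900)]
    for dist, angle in centroid_polar(tokens):
        print(round(dist, 1), round(angle, 2))
    ```

    Because angle is scale-invariant and distance scales with vocal tract size, comparing angles across speakers (or dividing distances by their speaker mean) separates vowel quality from anatomy, which is what makes the representation useful for child-adult comparisons.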

  14. The Relationship Between Acoustic Signal Typing and Perceptual Evaluation of Tracheoesophageal Voice Quality for Sustained Vowels.

    PubMed

    Clapham, Renee P; van As-Brooks, Corina J; van Son, Rob J J H; Hilgers, Frans J M; van den Brekel, Michiel W M

    2015-07-01

    To investigate the relationship between acoustic signal typing and perceptual evaluation of sustained vowels produced by tracheoesophageal (TE) speakers and the use of signal typing in the clinical setting. Two evaluators independently categorized 1.75-second segments of narrow-band spectrograms according to acoustic signal typing and independently evaluated the recording of the same segments on a visual analog scale according to overall perceptual acoustic voice quality. The relationship between acoustic signal typing and overall voice quality (as a continuous scale and as a four-point ordinal scale) was investigated and the proportion of inter-rater agreement as well as the reliability between the two measures is reported. The agreement between signal type (I-IV) and ordinal voice quality (four-point scale) was low but significant, and there was a significant linear relationship between the variables. Signal type correctly predicted less than half of the voice quality data. There was a significant main effect of signal type on continuous voice quality scores with significant differences in median quality scores between signal types I-IV, I-III, and I-II. Signal typing can be used as an adjunct to perceptual and acoustic evaluation of the same stimuli for TE speech as part of a multidimensional evaluation protocol. Signal typing in its current form provides limited predictive information on voice quality, and there is significant overlap between signal types II and III and perceptual categories. Future work should consider whether the current four signal types could be refined. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  15. Adult Second Language Learning of Spanish Vowels

    ERIC Educational Resources Information Center

    Cobb, Katherine; Simonet, Miquel

    2015-01-01

    The present study reports on the findings of a cross-sectional acoustic study of the production of Spanish vowels by three different groups of speakers: 1) native Spanish speakers; 2) native English intermediate learners of Spanish; and 3) native English advanced learners of Spanish. In particular, we examined the production of the five Spanish…

  16. Cross-language categorization of French and German vowels by naïve American listeners

    PubMed Central

    Strange, Winifred; Levy, Erika S.; Law, Franzo F.

    2009-01-01

    American English (AE) speakers’ perceptual assimilation of 14 North German (NG) and 9 Parisian French (PF) vowels was examined in two studies using citation-form disyllables (study 1) and sentences with vowels surrounded by labial and alveolar consonants in multisyllabic nonsense words (study 2). Listeners categorized multiple tokens of each NG and PF vowel as most similar to selected AE vowels and rated their category “goodness” on a nine-point Likert scale. Front, rounded vowels were assimilated primarily to back AE vowels, despite their acoustic similarity to front AE vowels. In study 1, they were considered poorer exemplars of AE vowels than were NG and PF back, rounded vowels; in study 2, front and back, rounded vowels were perceived as similar to each other. Assimilation of some front, unrounded and back, rounded NG and PF vowels varied with language, speaking style, and consonantal context. Differences in perceived similarity often could not be predicted from context-specific cross-language spectral similarities. Results suggest that listeners can access context-specific, phonetic details when listening to citation-form materials, but assimilate non-native vowels on the basis of context-independent phonological equivalence categories when processing continuous speech. Results are interpreted within the Automatic Selective Perception model of speech perception. PMID:19739759

  17. The Effect of Stress and Speech Rate on Vowel Coarticulation in Catalan Vowel-Consonant-Vowel Sequences

    ERIC Educational Resources Information Center

    Recasens, Daniel

    2015-01-01

    Purpose: The goal of this study was to ascertain the effect of changes in stress and speech rate on vowel coarticulation in vowel-consonant-vowel sequences. Method: Data on second formant coarticulatory effects as a function of changing /i/ versus /a/ were collected for five Catalan speakers' productions of vowel-consonant-vowel sequences with the…

  18. Rate and onset cues can improve cochlear implant synthetic vowel recognition in noise

    PubMed Central

    Mc Laughlin, Myles; Reilly, Richard B.; Zeng, Fan-Gang

    2013-01-01

    Understanding speech-in-noise is difficult for most cochlear implant (CI) users. Speech-in-noise segregation cues are well understood for acoustic hearing but not for electric hearing. This study investigated the effects of stimulation rate and onset delay on synthetic vowel-in-noise recognition in CI subjects. In experiment I, synthetic vowels were presented at 50, 145, or 795 pulse/s and noise at the same three rates, yielding nine combinations. Recognition improved significantly if the noise had a lower rate than the vowel, suggesting that listeners can use temporal gaps in the noise to detect a synthetic vowel. This hypothesis is supported by accurate prediction of synthetic vowel recognition using a temporal integration window model. Using lower rates a similar trend was observed in normal hearing subjects. Experiment II found that for CI subjects, a vowel onset delay improved performance if the noise had a lower or higher rate than the synthetic vowel. These results show that differing rates or onset times can improve synthetic vowel-in-noise recognition, indicating a need to develop speech processing strategies that encode or emphasize these cues. PMID:23464025

  19. Perception of steady-state vowels and vowelless syllables by adults and children

    NASA Astrophysics Data System (ADS)

    Nittrouer, Susan

    2005-04-01

    Vowels can be produced as long, isolated, and steady-state, but that is not how they are found in natural speech. Instead, natural speech consists of almost continuously changing (i.e., dynamic) acoustic forms from which mature listeners recover underlying phonetic form. Some theories suggest that children need steady-state information to recognize vowels (and so learn vowel systems), even though that information is sparse in natural speech. The current study examined whether young children can recover vowel targets from dynamic forms, or whether they need steady-state information. Vowel recognition was measured for adults and children (3, 5, and 7 years) for natural productions of /dæd/, /dUd/, /æ/, and /U/ edited to make six stimulus sets: three dynamic (whole syllables; syllables with the middle 50 percent replaced by cough; syllables with all but the first and last three pitch periods replaced by cough), and three steady-state (natural, isolated vowels; reiterated pitch periods from those vowels; reiterated pitch periods from the syllables). Adults scored nearly perfectly on all but the first/last three pitch period stimuli. Children performed nearly perfectly only when the entire syllable was heard, and performed similarly (near 80%) for all other stimuli. Consequently, children need dynamic forms to perceive vowels; steady-state forms are not preferred.

  20. Assessing Vowel Centralization in Dysarthria: A Comparison of Methods

    ERIC Educational Resources Information Center

    Fletcher, Annalise R.; McAuliffe, Megan J.; Lansford, Kaitlin L.; Liss, Julie M.

    2017-01-01

    Purpose: The strength of the relationship between vowel centralization measures and perceptual ratings of dysarthria severity has varied considerably across reports. This article evaluates methods of acoustic-perceptual analysis to determine whether procedural changes can strengthen the association between these measures. Method: Sixty-one…

  1. Relationships between objective acoustic indices and acoustic comfort evaluation in nonacoustic spaces

    NASA Astrophysics Data System (ADS)

    Kang, Jian

    2004-05-01

    Much attention has been paid to acoustic spaces such as concert halls and recording studios, whereas research on nonacoustic buildings/spaces has been rather limited, especially from the viewpoint of acoustic comfort. In this research a series of case studies has been carried out on this topic, considering various spaces including shopping mall atrium spaces, library reading rooms, football stadia, swimming spaces, churches, dining spaces, as well as urban open public spaces. The studies focus on the relationships between objective acoustic indices such as sound pressure level and reverberation time and perceptions of acoustic comfort. The results show that the acoustic atmosphere is an important consideration in such spaces and the evaluation of acoustic comfort may vary considerably even if the objective acoustic indices are the same. It is suggested that current guidelines and technical regulations are insufficient in terms of acoustic design of these spaces, and the relationships established from the case studies between objective and subjective aspects would be useful for developing further design guidelines. [Work supported partly by the British Academy.]

  2. Call Me Alix, Not Elix: Vowels Are More Important than Consonants in Own-Name Recognition at 5 Months

    ERIC Educational Resources Information Center

    Bouchon, Camillia; Floccia, Caroline; Fux, Thibaut; Adda-Decker, Martine; Nazzi, Thierry

    2015-01-01

    Consonants and vowels differ acoustically and articulatorily, but also functionally: Consonants are more relevant for lexical processing, and vowels for prosodic/syntactic processing. These functional biases could be powerful bootstrapping mechanisms for learning language, but their developmental origin remains unclear. The relative importance of…

  3. The relationship between native allophonic experience with vowel duration and perception of the English tense/lax vowel contrast by Spanish and Russian listeners.

    PubMed

    Kondaurova, Maria V; Francis, Alexander L

    2008-12-01

    Two studies explored the role of native language use of an acoustic cue, vowel duration, in both native and non-native contexts in order to test the hypothesis that non-native listeners' reliance on vowel duration instead of vowel quality to distinguish the English tense/lax vowel contrast could be explained by the role of duration as a cue in native phonological contrasts. In the first experiment, native Russian, Spanish, and American English listeners identified stimuli from a beat/bit continuum varying in nine perceptually equal spectral and duration steps. English listeners relied predominantly on spectrum, but showed some reliance on duration. Russian and Spanish speakers relied entirely on duration. In the second experiment, three tests examined listeners' use of vowel duration in native contrasts. Duration was equally important for the perception of lexical stress for all three groups. However, English listeners relied more on duration as a cue to postvocalic consonant voicing than did native Spanish or Russian listeners, and Spanish listeners relied on duration more than did Russian listeners. Results suggest that, although allophonic experience may contribute to cross-language perceptual patterns, other factors such as the application of statistical learning mechanisms and the influence of language-independent psychoacoustic proclivities cannot be ruled out.

  4. The relationship between native allophonic experience with vowel duration and perception of the English tense∕lax vowel contrast by Spanish and Russian listeners

    PubMed Central

    Kondaurova, Maria V.; Francis, Alexander L.

    2008-01-01

    Two studies explored the role of native language use of an acoustic cue, vowel duration, in both native and non-native contexts in order to test the hypothesis that non-native listeners’ reliance on vowel duration instead of vowel quality to distinguish the English tense∕lax vowel contrast could be explained by the role of duration as a cue in native phonological contrasts. In the first experiment, native Russian, Spanish, and American English listeners identified stimuli from a beat∕bit continuum varying in nine perceptually equal spectral and duration steps. English listeners relied predominantly on spectrum, but showed some reliance on duration. Russian and Spanish speakers relied entirely on duration. In the second experiment, three tests examined listeners’ use of vowel duration in native contrasts. Duration was equally important for the perception of lexical stress for all three groups. However, English listeners relied more on duration as a cue to postvocalic consonant voicing than did native Spanish or Russian listeners, and Spanish listeners relied on duration more than did Russian listeners. Results suggest that, although allophonic experience may contribute to cross-language perceptual patterns, other factors such as the application of statistical learning mechanisms and the influence of language-independent psychoacoustic proclivities cannot be ruled out. PMID:19206820

  5. Acoustic and Perceptual Analyses of Adductor Spasmodic Dysphonia in Mandarin-speaking Chinese.

    PubMed

    Chen, Zhipeng; Li, Jingyuan; Ren, Qingyi; Ge, Pingjiang

    2018-02-12

    The objective of this study was to examine the perceptual structure and acoustic characteristics of the speech of patients with adductor spasmodic dysphonia (ADSD) in Mandarin. In this case-control study, perceptual and acoustic analyses were used to estimate dysphonia level in Mandarin-speaking patients with ADSD (N = 20) and a control group (N = 20). For both groups, a sustained vowel and connected speech samples were obtained, and differences in perceptual and acoustic parameters between the two groups were assessed. On acoustic assessment, the percentage of phonatory breaks (PBs) in connected reading and the percentages of aperiodic segments and frequency shifts (FS) in vowel and reading productions were significantly worse in patients with ADSD than in controls, as were the mean harmonics-to-noise ratio and the fundamental frequency standard deviation of the vowel. On perceptual evaluation, ratings of speech and vowel productions in patients with ADSD were significantly higher than in controls. The percentage of aberrant acoustic events (PBs, frequency shifts, and aperiodic segments), the fundamental frequency standard deviation, and the mean harmonics-to-noise ratio were significantly correlated with perceptual ratings for the vowel and reading productions. The perceptual and acoustic parameters of connected vowel and reading productions in patients with ADSD are worse than those of normal controls, and can validly and reliably estimate dysphonia in ADSD in Mandarin-speaking Chinese. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  6. Cross-modal associations in synaesthesia: Vowel colours in the ear of the beholder

    PubMed Central

    Moos, Anja; Smith, Rachel; Miller, Sam R.; Simmons, David R.

    2014-01-01

    Human speech conveys many forms of information, but for some exceptional individuals (synaesthetes), listening to speech sounds can automatically induce visual percepts such as colours. In this experiment, grapheme–colour synaesthetes and controls were asked to assign colours, or shades of grey, to different vowel sounds. We then investigated whether the acoustic content of these vowel sounds influenced participants' colour and grey-shade choices. We found that both colour and grey-shade associations varied systematically with vowel changes. The colour effect was significant for both participant groups, but significantly stronger and more consistent for synaesthetes. Because not all vowel sounds that we used are “translatable” into graphemes, we conclude that acoustic–phonetic influences co-exist with established graphemic influences in the cross-modal correspondences of both synaesthetes and non-synaesthetes. PMID:25469218

  7. The Interplay between Input and Initial Biases: Asymmetries in Vowel Perception during the First Year of Life

    ERIC Educational Resources Information Center

    Pons, Ferran; Albareda-Castellot, Barbara; Sebastian-Galles, Nuria

    2012-01-01

    Vowels with extreme articulatory-acoustic properties act as natural referents. Infant perceptual asymmetries point to an underlying bias favoring these referent vowels. However, as language experience is gathered, distributional frequency of speech sounds could modify this initial bias. The perception of the /i/-/e/ contrast was explored in 144…

  8. The use of ultrasound in the study of articulatory properties of vowels in clear speech.

    PubMed

    Song, Jae Yung

    2017-01-01

    Although the acoustic properties of clear speech have been extensively studied, its underlying articulatory details have not been well understood. The purpose of the present study is twofold: To examine the specific articulatory processes of clear speech using ultrasound and to investigate whether and how the type of listener (hard of hearing, normal hearing) and the lexical property of words (frequency) interact in the production of clear speech. To this end, we examined productions of /ɑ/, /æ/ and /u/ from 16 speakers of US English. Overall, our ultrasound results suggested that the tongue's highest point moved in a direction that exaggerated the three vowels' phonological features, resulting in an expanded articulatory vowel space for the hard-of-hearing listener and low-frequency words. No interaction was found between the listener and word frequency, suggesting that the effects of word frequency hold constant across the two types of listeners.

  9. Are vowel errors influenced by consonantal context in the speech of persons with aphasia?

    NASA Astrophysics Data System (ADS)

    Gelfer, Carole E.; Bell-Berti, Fredericka; Boyle, Mary

    2004-05-01

    The literature suggests that vowels and consonants may be affected differently in the speech of persons with conduction aphasia (CA) or nonfluent aphasia with apraxia of speech (AOS). Persons with CA have shown similar error rates across vowels and consonants, while those with AOS have shown more errors for consonants than vowels. These data have been interpreted to suggest that consonants have greater gestural complexity than vowels. However, recent research [M. Boyle et al., Proc. International Cong. Phon. Sci., 3265-3268 (2003)] does not support this interpretation: persons with AOS and CA both had a high proportion of vowel errors, and vowel errors almost always occurred in the context of consonantal errors. To examine the notion that vowels are inherently less complex than consonants and are differentially affected in different types of aphasia, vowel production in different consonantal contexts for speakers with AOS or CA was examined. The target utterances, produced in carrier phrases, were bVC and bV syllables, allowing us to examine whether vowel production is influenced by consonantal context. Listener judgments were obtained for each token, and error productions were grouped according to the intended utterance and error type. Acoustical measurements were made from spectrographic displays.

  10. Evaluating iPhone recordings for acoustic voice assessment.

    PubMed

    Lin, Emily; Hornibrook, Jeremy; Ormond, Tika

    2012-01-01

    This study examined the viability of using iPhone recordings for acoustic measurements of voice quality. Acoustic measures were compared between voice signals simultaneously recorded from 11 normal speakers (6 females and 5 males) through an iPhone (model A1303, Apple, USA) and a comparison recording system. Comparisons were also conducted between the pre- and post-operative voices recorded from 10 voice patients (4 females and 6 males) through the iPhone. Participants were aged between 27 and 79 years. Measures from iPhone and comparison signals were found to be highly correlated. Findings of the effects of vowel type on the selected measures were consistent between the two recording systems and congruent with previous findings. Analysis of the patient data revealed that a selection of acoustic measures, such as vowel space area and voice perturbation measures, consistently demonstrated a positive change following phonosurgery. The present findings indicated that the iPhone device tested was useful for tracking voice changes for clinical management. Preliminary findings regarding factors such as gender and type of pathology suggest that intra-subject, instead of norm-referenced, comparisons of acoustic measures would be more useful in monitoring the progression of a voice disorder or tracking the treatment effect. Copyright © 2012 S. Karger AG, Basel.
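
The vowel space area mentioned in this record is conventionally computed as the area of the polygon spanned by the corner vowels' mean (F1, F2) values. A minimal sketch using the shoelace formula follows; the corner-vowel formant values are illustrative textbook-style averages, not data from the study:

```python
import numpy as np

def vowel_space_area(formants):
    """Polygon area (Hz^2) from (F1, F2) vertices listed in order
    around the perimeter, via the shoelace formula."""
    pts = np.asarray(formants, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Illustrative corner-vowel means (Hz) for /i/, /a/, /u/:
triangle = [(270, 2290), (730, 1090), (300, 870)]
area = vowel_space_area(triangle)   # shrinks as the vowels centralize
```

The same function handles a quadrilateral space (e.g., adding /æ/) as long as the vertices are listed in perimeter order.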

  11. Cross-language comparisons of contextual variation in the production and perception of vowels

    NASA Astrophysics Data System (ADS)

    Strange, Winifred

    2005-04-01

    In the last two decades, a considerable amount of research has investigated second-language (L2) learners' problems with the perception and production of non-native vowels. Most studies have used stimuli in which the vowels are produced and presented in simple, citation-form (list) monosyllabic or disyllabic utterances. In my laboratory, we have investigated the spectral (static/dynamic formant patterns) and temporal (syllable duration) variation in vowel productions as a function of speech style (list/sentence utterances), speaking rate (normal/rapid), sentence focus (narrow focus/post-focus), and phonetic context (voicing/place of surrounding consonants). Data will be presented for a set of languages that include large and small vowel inventories; stress-, syllable-, and mora-timed prosody; and variation in the phonological/phonetic function of vowel length, diphthongization, and palatalization. Results show language-specific patterns of contextual variation that affect the cross-language acoustic similarity of vowels. Research on cross-language patterns of perceived phonetic similarity by naive listeners suggests that listeners' knowledge of native-language (L1) patterns of contextual variation influences their L1/L2 similarity judgments and, subsequently, their discrimination of L2 contrasts. Implications of these findings for assessing L2 learners' perception of vowels and for developing laboratory training procedures to improve L2 vowel perception will be discussed. [Work supported by NIDCD.]

  12. Spectral timbre perception in ferrets: discrimination of artificial vowels under different listening conditions.

    PubMed

    Bizley, Jennifer K; Walker, Kerry M M; King, Andrew J; Schnupp, Jan W H

    2013-01-01

    Spectral timbre is an acoustic feature that enables human listeners to determine the identity of a spoken vowel. Despite its importance to sound perception, little is known about the neural representation of sound timbre and few psychophysical studies have investigated timbre discrimination in non-human species. In this study, ferrets were positively conditioned to discriminate artificial vowel sounds in a two-alternative-forced-choice paradigm. Animals quickly learned to discriminate the vowel sound /u/ from /ε/ and were immediately able to generalize across a range of voice pitches. They were further tested in a series of experiments designed to assess how well they could discriminate these vowel sounds under different listening conditions. First, a series of morphed vowels was created by systematically shifting the location of the first and second formant frequencies. Second, the ferrets were tested with single formant stimuli designed to assess which spectral cues they could be using to make their decisions. Finally, vowel discrimination thresholds were derived in the presence of noise maskers presented from either the same or a different spatial location. These data indicate that ferrets show robust vowel discrimination behavior across a range of listening conditions and that this ability shares many similarities with human listeners.

  14. LEARNING NONADJACENT DEPENDENCIES IN PHONOLOGY: TRANSPARENT VOWELS IN VOWEL HARMONY

    PubMed Central

    Finley, Sara

    2015-01-01

    Nonadjacent dependencies are an important part of the structure of language. While the majority of syntactic and phonological processes occur at a local domain, there are several processes that appear to apply at a distance, posing a challenge for theories of linguistic structure. This article addresses one of the most common nonadjacent phenomena in phonology: transparent vowels in vowel harmony. Vowel harmony occurs when adjacent vowels are required to share the same phonological feature value (e.g., V[+F] C V[+F]). However, transparent vowels create a second-order nonadjacent pattern because agreement between two vowels can ‘skip’ the transparent neutral vowel in addition to consonants (e.g., V[+F] C VT[−F] C V[+F], where VT is the transparent vowel). Adults are shown to display initial learning biases against second-order nonadjacency in experiments that use an artificial grammar learning paradigm. Experiments 1–3 show that adult learners fail to learn the second-order long-distance dependency created by the transparent vowel (as compared to a control condition). In experiments 4–5, training in terms of overall exposure as well as the frequency of relevant transparent items was increased. With adequate exposure, learners reliably generalize to novel words containing transparent vowels. The experiments suggest that learners are sensitive to the structure of phonological representations, even when learning occurs at a relatively rapid pace. PMID:26146423

  15. Effects of Talker Variability on Vowel Recognition in Cochlear Implants

    ERIC Educational Resources Information Center

    Chang, Yi-ping; Fu, Qian-Jie

    2006-01-01

    Purpose: To investigate the effects of talker variability on vowel recognition by cochlear implant (CI) users and by normal-hearing (NH) participants listening to 4-channel acoustic CI simulations. Method: CI users were tested with their clinically assigned speech processors. For NH participants, 3 CI processors were simulated, using different…

  16. Enhancement of temporal periodicity cues in cochlear implants: Effects on prosodic perception and vowel identification

    NASA Astrophysics Data System (ADS)

    Green, Tim; Faulkner, Andrew; Rosen, Stuart; Macherey, Olivier

    2005-07-01

    Standard continuous interleaved sampling processing and a modified processing strategy designed to enhance temporal cues to voice pitch were compared on tests of intonation perception and vowel perception, both in implant users and in acoustic simulations. In standard processing, 400 Hz low-pass envelopes modulated either pulse trains (implant users) or noise carriers (simulations). In the modified strategy, slow-rate envelope modulations, which convey dynamic spectral variation crucial for speech understanding, were extracted by low-pass filtering (32 Hz). In addition, during voiced speech, higher-rate temporal modulation in each channel was provided by 100% amplitude modulation with a sawtooth-like waveform whose periodicity followed the fundamental frequency (F0) of the input. Channel levels were determined by the product of the lower- and higher-rate modulation components. Both in acoustic simulations and in implant users, the ability to use intonation information to identify sentences as question or statement was significantly better with modified processing. However, while there was no difference in vowel recognition in the acoustic simulation, implant users performed worse with modified processing both in vowel recognition and in formant frequency discrimination. It appears that, while enhancing pitch perception, modified processing harmed the transmission of spectral information.

  17. On the Role of Cognitive Abilities in Second Language Vowel Learning.

    PubMed

    Ghaffarvand Mokari, Payam; Werner, Stefan

    2018-03-01

    This study investigated the role of four cognitive abilities in second language (L2) vowel learning: inhibitory control, attention control, phonological short-term memory (PSTM), and acoustic short-term memory (AM). The participants were 40 Azerbaijani learners of Standard Southern British English. Their perception of L2 vowels was tested through a perceptual discrimination task before and after five sessions of high-variability phonetic training. Inhibitory control was significantly correlated with gains from training in the discrimination of L2 vowel pairs. However, there were no significant correlations between attention control, AM, or PSTM and gains from training. These findings suggest a potential role for inhibitory control in L2 phonological learning. We suggest that inhibitory control facilitates the processing of L2 sounds by allowing learners to ignore interfering information from the L1 during training, leading to better L2 segmental learning.

  18. Vowel Formant Values in Hearing and Hearing-Impaired Children: A Discriminant Analysis

    ERIC Educational Resources Information Center

    Ozbic, Martina; Kogovsek, Damjana

    2010-01-01

    Hearing-impaired speakers show changes in vowel production and formant pitch and variability, as well as more cases of overlapping between vowels and more restricted formant space, than hearing speakers; consequently their speech is less intelligible. The purposes of this paper were to determine the differences in vowel formant values between 32…

  19. Perceptual effects of dialectal and prosodic variation in vowels

    NASA Astrophysics Data System (ADS)

    Fox, Robert Allen; Jacewicz, Ewa; Hatcher, Kristin; Salmons, Joseph

    2005-09-01

    As was reported earlier [Fox et al., J. Acoust. Soc. Am. 114, 2396 (2003)], certain vowels in the Ohio and Wisconsin dialects of American English are shifting in different directions. In addition, we have found that the acoustic characteristics of these vowels (e.g., duration and formant frequencies) changed systematically under varying degrees of prosodic prominence, with somewhat different changes occurring within each dialect. The question addressed in the current study is whether naive listeners from these two dialects are sensitive to both the dialect variations and to the prosodically induced spectral differences. Listeners from Ohio and Wisconsin listened to the stimulus tokens [beIt] and [bɛt] produced in each of three prosodic contexts (representing three different levels of prominence). These words were produced by speakers from Ohio or from Wisconsin (none of the listeners were also speakers). Listeners identified the stimulus tokens in terms of vowel quality and indicated whether each was a good, fair, or poor exemplar of that phonetic category. Results showed that both phonetic quality decisions and goodness ratings were systematically and significantly affected by speaker dialect, listener dialect, and prosodic context. Implications for the source and nature of ongoing vowel changes in these two dialects will be discussed. [Work partially supported by NIDCD R03 DC005560-01.]

  20. High-speed imaging of vocal fold vibrations and larynx movements within vocalizations of different vowels.

    PubMed

    Maurer, D; Hess, M; Gross, M

    1996-12-01

    Theoretic investigations of the "source-filter" model have indicated a pronounced acoustic interaction of glottal source and vocal tract. Empirical investigations of formant pattern variations apart from changes in vowel identity have demonstrated a direct relationship between the fundamental frequency and the patterns. As a consequence of both findings, independence of phonation and articulation may be limited in the speech process. Within the present study, possible interdependence of phonation and phoneme was investigated: vocal fold vibrations and larynx position for vocalizations of different vowels in a healthy man and woman were examined by high-speed light-intensified digital imaging. We found 1) different movements of the vocal folds for vocalizations of different vowel identities within one speaker and at similar fundamental frequency, and 2) constant larynx position within vocalization of one vowel identity, but different positions for vocalizations of different vowel identities. A possible relationship between the vocal fold vibrations and the phoneme is discussed.

  1. Acoustic Analysis of Speech of Cochlear Implantees and Its Implications

    PubMed Central

    Patadia, Rajesh; Govale, Prajakta; Rangasayee, R.; Kirtane, Milind

    2012-01-01

    Objectives Cochlear implantees have improved speech production skills compared with those using hearing aids, as reflected in their acoustic measures. When compared to normal-hearing controls, implanted children had a fronted vowel space, and their /s/ and /∫/ noise frequencies overlapped. Acoustic analysis of speech provides an objective index of perceived differences in speech production, which can be precursory in planning therapy. The objective of this study was to compare acoustic characteristics of speech in cochlear implantees with those of normal-hearing age-matched peers to understand the implications. Methods Group 1 consisted of 15 children with prelingual bilateral severe-profound hearing loss (age, 5-11 years; implanted between 4 and 10 years). Prior to implantation, behind-the-ear hearing aids were used; both before and after implantation, subjects received at least 1 year of aural intervention. Group 2 consisted of 15 normal-hearing age-matched peers. Sustained productions of vowels and words with selected consonants were recorded. Using Praat software for acoustic analysis, digitized speech tokens were measured for F1, F2, and F3 of vowels; centre frequency (Hz) and energy concentration (dB) in burst; voice onset time (VOT in ms) for stops; centre frequency (Hz) of noise in /s/; and rise time (ms) for affricates. A t-test was used to find significant differences between groups. Results Significant differences were found in VOT for /b/, in F1 and F2 of /e/, and in F3 of /u/. No significant differences were found for centre frequency of burst, energy concentration for stops, centre frequency of noise in /s/, or rise time for affricates. These findings suggest that the auditory feedback provided by cochlear implants enables subjects to monitor their production of speech sounds. Conclusion Acoustic analysis of speech is an essential method for discerning characteristics which have or have not been improved by cochlear implantation and thus for planning intervention. PMID:22701768

  2. Formant Centralization Ratio: A Proposal for a New Acoustic Measure of Dysarthric Speech

    ERIC Educational Resources Information Center

    Sapir, Shimon; Ramig, Lorraine O.; Spielman, Jennifer L.; Fox, Cynthia

    2010-01-01

    Purpose: The vowel space area (VSA) has been used as an acoustic metric of dysarthric speech, but with varying degrees of success. In this study, the authors aimed to test an alternative metric to the VSA--the "formant centralization ratio" (FCR), which is hypothesized to more effectively differentiate dysarthric from healthy speech and register…
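
For reference, the formant centralization ratio proposed by Sapir and colleagues is computed from mean corner-vowel formants as (F2u + F2a + F1i + F1u) / (F2i + F1a): vowel centralization inflates the numerator terms and shrinks the denominator, raising the ratio. A minimal sketch with illustrative (not study) values:

```python
def formant_centralization_ratio(f1, f2):
    """FCR = (F2u + F2a + F1i + F1u) / (F2i + F1a).
    f1, f2: dicts mapping 'i', 'u', 'a' to mean formant frequencies (Hz).
    Higher values indicate more centralized (less distinct) vowels."""
    return (f2['u'] + f2['a'] + f1['i'] + f1['u']) / (f2['i'] + f1['a'])

# Illustrative peripheral (healthy-like) vs. centralized (dysarthric-like) means:
peripheral = formant_centralization_ratio(
    {'i': 270, 'a': 730, 'u': 300}, {'i': 2290, 'a': 1090, 'u': 870})
central = formant_centralization_ratio(
    {'i': 400, 'a': 600, 'u': 400}, {'i': 1800, 'a': 1200, 'u': 1100})
```

Because the FCR is a ratio of formant sums, it is less sensitive than the vowel space area to inter-speaker scale differences, which is part of the rationale the record describes.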

  3. Audiovisual Vowel Monitoring and the Word Superiority Effect in Children

    ERIC Educational Resources Information Center

    Fort, Mathilde; Spinelli, Elsa; Savariaux, Christophe; Kandel, Sonia

    2012-01-01

    The goal of this study was to explore whether viewing the speaker's articulatory gestures contributes to lexical access in children (ages 5-10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts with white noise masking the acoustic signal. The results indicated that…

  4. Phonetic Modification of Vowel Space in Storybook Speech to Infants up to 2 Years of Age

    ERIC Educational Resources Information Center

    Burnham, Evamarie B.; Wieland, Elizabeth A.; Kondaurova, Maria V.; McAuley, J. Devin; Bergeson, Tonya R.; Dilley, Laura C.

    2015-01-01

    Purpose: A large body of literature has indicated vowel space area expansion in infant-directed (ID) speech compared with adult-directed (AD) speech, which may promote language acquisition. The current study tested whether this expansion occurs in storybook speech read to infants at various points during their first 2 years of life. Method: In 2…

  5. Vowel perception by noise masked normal-hearing young adults

    NASA Astrophysics Data System (ADS)

    Richie, Carolyn; Kewley-Port, Diane; Coughlin, Maureen

    2005-08-01

    This study examined vowel perception by young normal-hearing (YNH) adults in various listening conditions designed to simulate mild-to-moderate sloping sensorineural hearing loss. YNH listeners were individually age- and gender-matched to young hearing-impaired (YHI) listeners tested in a previous study [Richie et al., J. Acoust. Soc. Am. 114, 2923-2933 (2003)]. YNH listeners were tested in three conditions designed to create audibility equal to that of the YHI listeners: a low signal level with and without a simulated hearing loss, and a high signal level with a simulated hearing loss. Listeners discriminated changes in synthetic vowel tokens /ɪ e ɛ ʌ æ/ when F1 or F2 varied in frequency. Comparison of YNH with YHI results failed to reveal significant differences between groups in performance on vowel discrimination under conditions of similar audibility, achieved by using noise masking to elevate the hearing thresholds of the YNH listeners and frequency-specific gain for the YHI listeners. Further, analysis of learning curves suggests that while the YHI listeners completed an average of 46% more test blocks than YNH listeners, the YHI achieved a level of discrimination similar to that of the YNH within the same number of blocks. Apparently, when age and gender are closely matched between young hearing-impaired and normal-hearing adults, performance on vowel tasks may be explained by audibility alone.

  6. Representations of Spectral Differences between Vowels in Tonotopic Regions of Auditory Cortex

    ERIC Educational Resources Information Center

    Fisher, Julia

    2017-01-01

    This work examines the link between low-level cortical acoustic processing and higher-level cortical phonemic processing. Specifically, using functional magnetic resonance imaging, it looks at 1) whether or not the vowels [ɑ] and [i] are distinguishable in regions of interest defined by the first two resonant frequencies (formants) of those…

  7. Auditory, Visual, and Auditory-Visual Perception of Vowels by Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Hack, Zarita Caplan; Erber, Norman P.

    1982-01-01

    Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…

  8. Children's discrimination of vowel sequences

    NASA Astrophysics Data System (ADS)

    Coady, Jeffry A.; Kluender, Keith R.; Evans, Julia

    2003-10-01

    Children's ability to discriminate sequences of steady-state vowels was investigated. Vowels (as in "beet," "bat," "bought," and "boot") were synthesized at durations of 40, 80, 160, 320, 640, and 1280 ms. Four different vowel sequences were created by concatenating different orders of vowels for each duration, separated by 10-ms intervening silences. Thus, sequences differed in vowel order and duration (rate). Sequences were 12 s in duration, with amplitude ramped linearly over the first and last 2 s. Sequence pairs included both same trials (identical sequences) and different trials (sequences with vowels in different orders). Sequences with vowels of equal duration were presented on individual trials. Children aged 7;0 to 10;6 listened to pairs of sequences (with 100 ms between sequences) and responded whether the sequences sounded the same or different. Results indicate that children are best able to discriminate sequences of intermediate-duration vowels, typical of conversational speaking rate. Children were less accurate with both shorter and longer vowels. Results are discussed in terms of auditory processing (shortest vowels) and memory (longest vowels). [Research supported by NIDCD DC-05263, DC-04072, and DC-005650.]

  9. English Vowel Spaces Produced by Japanese Speakers: The Smaller Point Vowels' and the Greater Schwas'

    ERIC Educational Resources Information Center

    Tomita, Kaoru; Yamada, Jun; Takatsuka, Shigenobu

    2010-01-01

    This study investigated how Japanese-speaking learners of English pronounce the three point vowels /i/, /u/, and /a/ appearing in the first and second monosyllabic words of English noun phrases, and the schwa /ə/ appearing in English disyllabic words. First and second formant (F1 and F2) values were measured for four Japanese…

  10. The Effect of Parkinson Disease Tremor Phenotype on Cepstral Peak Prominence and Transglottal Airflow in Vowels and Speech.

    PubMed

    Burk, Brittany R; Watts, Christopher R

    2018-02-19

    The physiological manifestations of Parkinson disease are heterogeneous, as evidenced by disease subtypes. Dysphonia has been well documented as an early and progressively significant impairment associated with the disease. The purpose of this study was to investigate how acoustic and aerodynamic measures of vocal function were affected by Parkinson tremor subtype (phenotype) in an effort to better understand the heterogeneity of voice impairment severity in Parkinson disease. This is a prospective case-control study. Thirty-two speakers with Parkinson disease assigned to tremor and nontremor phenotypes and 10 healthy controls were recruited. Sustained vowels and connected speech were recorded from each speaker. Acoustic measures of cepstral peak prominence (CPP) and aerodynamic measures of transglottal airflow (TAF) were calculated from the recorded acoustic and aerodynamic waveforms. Speakers with a nontremor dominant phenotype exhibited significantly (P < 0.05) lower CPP and higher TAF in vowels compared with the tremor dominant phenotype and control speakers, who were not different from each other. No significant group differences were observed for CPP or TAF in connected speech. When producing vowels, participants with nontremor dominant phenotype exhibited reduced phonation periodicity and elevated TAF compared with tremor dominant and control participants. This finding is consistent with differential limb-motor and cognitive impairments between tremor and nontremor phenotypes reported in the extant literature. Results suggest that sustained vowel production may be sensitive to phonatory control as a function of Parkinson tremor phenotype in mild to moderate stages of the disease. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
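
The cepstral peak prominence used in this study quantifies how far the dominant rahmonic peak of the cepstrum rises above a regression line fitted through the surrounding cepstral values. Below is a simplified single-frame sketch of that general recipe (the pitch range and test signals are illustrative; clinical implementations such as Praat's smoothed CPPS add frame averaging and smoothing on top of this):

```python
import numpy as np

def cepstral_peak_prominence(x, fs, f0_range=(60.0, 300.0)):
    """CPP (dB): height of the largest cepstral peak in the pitch-period
    range above a straight line fitted to the cepstrum over that range."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    spectrum_db = 20.0 * np.log10(np.abs(np.fft.fft(x * np.hanning(n))) + 1e-12)
    ceps = np.fft.ifft(spectrum_db).real        # real cepstrum, dB units
    quef = np.arange(n) / fs                    # quefrency axis (seconds)
    band = (quef >= 1.0 / f0_range[1]) & (quef <= 1.0 / f0_range[0])
    slope, intercept = np.polyfit(quef[band], ceps[band], 1)
    peak = np.argmax(ceps[band])
    return ceps[band][peak] - (slope * quef[band][peak] + intercept)

fs = 16000
voiced = np.zeros(fs); voiced[::160] = 1.0      # 100 Hz pulse train (glottal-like)
noise = np.random.default_rng(0).standard_normal(fs)
```

A strongly periodic signal has a pronounced cepstral peak at its pitch period (here 10 ms) and therefore a clearly higher CPP than noise, consistent with the reduced phonation periodicity, and hence lower CPP, reported for the nontremor phenotype.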

  11. Acoustic Correlates of Emphatic Stress in Central Catalan

    ERIC Educational Resources Information Center

    Nadeu, Marianna; Hualde, Jose Ignacio

    2012-01-01

    A common feature of public speech in Catalan is the placement of prominence on lexically unstressed syllables ("emphatic stress"). This paper presents an acoustic study of radio speech data. Instances of emphatic stress were perceptually identified. Within-word comparison between vowels with emphatic stress and vowels with primary lexical stress…

  12. The Neural Representation of Consonant-Vowel Transitions in Adults Who Wear Hearing Aids

    PubMed Central

    Tremblay, Kelly L.; Kalstein, Laura; Billings, Curtis J.; Souza, Pamela E.

    2006-01-01

    Hearing aids help compensate for disorders of the ear by amplifying sound; however, their effectiveness also depends on the central auditory system's ability to represent and integrate spectral and temporal information delivered by the hearing aid. The authors report that the neural detection of time-varying acoustic cues contained in speech can be recorded in adult hearing aid users using the acoustic change complex (ACC). Seven adults (50–76 years) with mild to severe sensorineural hearing loss participated in the study. When presented with 2 identifiable consonant-vowel (CV) syllables (“shee” and “see”), the neural detection of CV transitions (as indicated by the presence of a P1-N1-P2 response) was different for each speech sound. More specifically, the latency of the evoked neural response coincided in time with the onset of the vowel, similar to the latency patterns the authors previously reported in normal-hearing listeners. PMID:16959736

  13. Effect of Vowel Identity and Onset Asynchrony on Concurrent Vowel Identification

    ERIC Educational Resources Information Center

    Hedrick, Mark S.; Madix, Steven G.

    2009-01-01

    Purpose: The purpose of the current study was to determine the effects of vowel identity and temporal onset asynchrony on identification of vowels overlapped in time. Method: Fourteen listeners with normal hearing, with a mean age of 24 years, participated. The listeners were asked to identify both of a pair of 200-ms vowels (referred to as…

  14. Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging “periodicity-tagged” segregation of competing speech in rooms

    PubMed Central

    Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M.

    2015-01-01

    The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once), in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into “auditory objects.” Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus (VCN) of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors, on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double vowels' spectral energy into two streams (corresponding to the two vowels), on the basis of temporal discharge patterns, is impaired by reverberation; specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights into the perceptual organization of complex acoustic scenes under realistically challenging listening conditions.

  15. Perceptual “vowel spaces” of cochlear implant users: Implications for the study of auditory adaptation to spectral shift

    PubMed Central

    Harnsberger, James D.; Svirsky, Mario A.; Kaiser, Adam R.; Pisoni, David B.; Wright, Richard; Meyer, Ted A.

    2012-01-01

    Cochlear implant (CI) users differ in their ability to perceive and recognize speech sounds. Two possible reasons for such individual differences may lie in their ability to discriminate formant frequencies or to adapt to the spectrally shifted information presented by cochlear implants, a basalward shift related to the implant’s depth of insertion in the cochlea. In the present study, we examined these two alternatives using a method-of-adjustment (MOA) procedure with 330 synthetic vowel stimuli varying in F1 and F2 that were arranged in a two-dimensional grid. Subjects were asked to label the synthetic stimuli that matched ten monophthongal vowels in visually presented words. Subjects then provided goodness ratings for the stimuli they had chosen. The subjects’ responses to all ten vowels were used to construct individual perceptual “vowel spaces.” If CI users fail to adapt completely to the basalward spectral shift, then the formant frequencies of their vowel categories should be shifted lower in both F1 and F2. However, with one exception, no systematic shifts were observed in the vowel spaces of CI users. Instead, the vowel spaces differed from one another in the relative size of their vowel categories. The results suggest that differences in formant frequency discrimination may account for the individual differences in vowel perception observed in cochlear implant users. PMID:11386565

  16. A Cross-Language Study of Acoustic Predictors of Speech Intelligibility in Individuals With Parkinson's Disease

    PubMed Central

    Choi, Yaelin

    2017-01-01

    Purpose The present study aimed to compare acoustic models of speech intelligibility in individuals with the same disease (Parkinson's disease [PD]) and presumably similar underlying neuropathologies but with different native languages (American English [AE] and Korean). Method A total of 48 speakers from the 4 speaker groups (AE speakers with PD, Korean speakers with PD, healthy English speakers, and healthy Korean speakers) were asked to read a paragraph in their native languages. Four acoustic variables were analyzed: acoustic vowel space, voice onset time contrast scores, normalized pairwise variability index, and articulation rate. Speech intelligibility scores were obtained from scaled estimates of sentences extracted from the paragraph. Results The findings indicated that the multiple regression models of speech intelligibility were different in Korean and AE, even with the same set of predictor variables and with speakers matched on speech intelligibility across languages. Analysis of the descriptive data for the acoustic variables showed the expected compression of the vowel space in speakers with PD in both languages, lower normalized pairwise variability index scores in Korean compared with AE, and no differences within or across language in articulation rate. Conclusions The results indicate that the basis of an intelligibility deficit in dysarthria is likely to depend on the native language of the speaker and listener. Additional research is required to explore other potential predictor variables, as well as additional language comparisons to pursue cross-linguistic considerations in classification and diagnosis of dysarthria types. PMID:28821018
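    The normalized pairwise variability index named among the predictors above has a standard definition: 100 times the mean of successive duration differences, each normalized by the pair's mean duration. A minimal sketch (illustrative only, not the study's implementation; durations assumed to be vocalic-interval durations in ms):

    ```python
    def npvi(durations):
        """Normalized pairwise variability index (nPVI) over successive
        interval durations (e.g., vocalic intervals, in ms)."""
        if len(durations) < 2:
            raise ValueError("nPVI needs at least two intervals")
        diffs = [
            abs(d1 - d2) / ((d1 + d2) / 2)
            for d1, d2 in zip(durations, durations[1:])
        ]
        return 100 * sum(diffs) / len(diffs)

    # Perfectly even durations give 0; alternating long/short raises the index.
    print(npvi([100, 100, 100]))      # → 0.0
    print(npvi([150, 50, 150, 50]))   # → 100.0
    ```

    Higher nPVI values indicate more stress-timed-like rhythm, which is why the index separates Korean from AE in the descriptive data above.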

  17. Effects of head geometry simplifications on acoustic radiation of vowel sounds based on time-domain finite-element simulations.

    PubMed

    Arnela, Marc; Guasch, Oriol; Alías, Francesc

    2013-10-01

    One of the key effects to model in voice production is the acoustic radiation of sound waves emanating from the mouth. Three-dimensional numerical simulations can naturally account for this effect, and can also consider all geometrical head details, by extending the computational domain beyond the vocal tract. Despite this advantage, many approximations to the head geometry are often made for simplicity, and impedance load models are still used to reduce the computational cost. In this work, the impact of some of these simplifications on radiation effects is examined for vowel production in the frequency range 0-10 kHz, by comparison with radiation from a realistic head. As a result, recommendations are given on their validity, depending on whether high-frequency energy (above 5 kHz) should be taken into account.

  18. Vowel Deletion in Latvian.

    ERIC Educational Resources Information Center

    Karins, A. Krisjanis

    1995-01-01

    Investigates variable deletion of short vowels in word-final unstressed syllables in Latvian spoken in Riga. Affected vowels were almost always inflectional endings and results indicated that internal phonological and prosodic factors (especially distance from main word stress) were the strongest constraints on vowel deletion, along with the…

  19. Sex differences in the acoustic structure of vowel-like grunt vocalizations in baboons and their perceptual discrimination by baboon listeners

    NASA Astrophysics Data System (ADS)

    Rendall, Drew; Owren, Michael J.; Weerts, Elise; Hienz, Robert D.

    2004-01-01

    This study quantifies sex differences in the acoustic structure of vowel-like grunt vocalizations in baboons (Papio spp.) and tests the basic perceptual discriminability of these differences to baboon listeners. Acoustic analyses were performed on 1028 grunts recorded from 27 adult baboons (11 males and 16 females) in southern Africa, focusing specifically on the fundamental frequency (F0) and formant frequencies. The mean F0 and the mean frequencies of the first three formants were all significantly lower in males than they were in females, more dramatically so for F0. Experiments using standard psychophysical procedures subsequently tested the discriminability of adult male and adult female grunts. After learning to discriminate the grunt of one male from that of one female, five baboon subjects subsequently generalized this discrimination both to new call tokens from the same individuals and to grunts from novel males and females. These results are discussed in the context of both the possible vocal anatomical basis for sex differences in call structure and the potential perceptual mechanisms involved in their processing by listeners, particularly as these relate to analogous issues in human speech production and perception.

  20. Exceptionality in vowel harmony

    NASA Astrophysics Data System (ADS)

    Szeredi, Daniel

    Vowel harmony has been of great interest in phonological research. It has been widely accepted that vowel harmony is a phonetically natural phenomenon, which means that it is a common pattern because it provides advantages to the speaker in articulation and to the listener in perception. Exceptional patterns have proved a challenge to the phonetically grounded analysis because they, by their nature, introduce phonetically disadvantageous sequences to the surface form, consisting of harmonically different vowels. Such forms are found, for example, in the Finnish stem tuoli 'chair' or in the Hungarian suffixed form hi:d-hoz 'to the bridge', both word forms containing a mix of front and back vowels. Recent evidence suggests that there might be a phonetic-level explanation for some exceptional patterns: some vowels participating in irregular stems (like the vowel [i] in the Hungarian stem hi:d 'bridge' above) may differ in some small phonetic detail from vowels in regular stems. The main question has not been raised, though: does this phonetic detail matter for speakers? Would they use these minor differences when they have to categorize a new word as regular or irregular? A different recent trend explains morphophonological exceptionality by looking at the phonotactic regularities characteristic of classes of stems based on their morphological behavior. Studies have shown that speakers are aware of these regularities and use them as cues when they have to decide what class a novel stem belongs to. These sublexical phonotactic regularities have already been shown to be present in some exceptional patterns of vowel harmony, but many questions remain open: how is learning the static generalization linked to learning the allomorph-selection facet of vowel harmony? How much does the effect of consonants on vowel harmony matter, compared to the effect of vowel-to-vowel correspondences? This dissertation aims to test these two ideas.

  1. The Effectiveness of Vowel Production Training with Real-Time Spectrographic Displays for Children with Profound Hearing Impairment.

    NASA Astrophysics Data System (ADS)

    Ertmer, David Joseph

    1994-01-01

    The effectiveness of vowel production training which incorporated direct instruction in combination with spectrographic models and feedback was assessed for two children who exhibited profound hearing impairment. A multiple-baseline design across behaviors, with replication across subjects, was implemented to determine if vowel production accuracy improved following the introduction of treatment. Listener judgments of vowel correctness were obtained during the baseline, training, and follow-up phases of the study. Data were analyzed through visual inspection of changes in levels of accuracy, changes in trends of accuracy, and changes in variability of accuracy within and across phases. One subject showed significant improvement for all three trained vowel targets; the second subject, for the first trained target only (Kolmogorov-Smirnov Two-Sample Test). Performance trends during training sessions suggest that continued treatment would have resulted in further improvement for both subjects. Vowel duration, fundamental frequency, and the frequency locations of the first and second formants were measured before and after training. Acoustic analysis revealed highly individualized changes in the frequency locations of F1 and F2. Vowels which received the most training were maintained at higher levels than those which were introduced later in training. Some generalization of practiced vowel targets to untrained words was observed in both subjects. A bias towards judging productions as "correct" was observed for both subjects during self-evaluation tasks using spectrographic feedback.

  2. Vowel Devoicing in Shanghai.

    ERIC Educational Resources Information Center

    Zee, Eric

    A phonetic study of vowel devoicing in the Shanghai dialect of Chinese explored the phonetic conditions under which the high, closed vowels and the apical vowel in Shanghai are most likely to become devoiced. The phonetic conditions may be segmental or suprasegmental. Segmentally, the study sought to determine whether a certain type of pre-vocalic…

  3. Feedforward and feedback control in apraxia of speech: effects of noise masking on vowel production.

    PubMed

    Maas, Edwin; Mailend, Marja-Liisa; Guenther, Frank H

    2015-04-01

    This study was designed to test two hypotheses about apraxia of speech (AOS) derived from the Directions Into Velocities of Articulators (DIVA) model (Guenther et al., 2006): the feedforward system deficit hypothesis and the feedback system deficit hypothesis. The authors used noise masking to minimize auditory feedback during speech. Six speakers with AOS and aphasia, 4 with aphasia without AOS, and 2 groups of speakers without impairment (younger and older adults) participated. Acoustic measures of vowel contrast, variability, and duration were analyzed. Younger, but not older, speakers without impairment showed significantly reduced vowel contrast with noise masking. Relative to older controls, the AOS group showed longer vowel durations overall (regardless of masking condition) and a greater reduction in vowel contrast under masking conditions. There were no significant differences in variability. Three of the 6 speakers with AOS demonstrated the group pattern. Speakers with aphasia without AOS did not differ from controls in contrast, duration, or variability. The greater reduction in vowel contrast with masking noise for the AOS group is consistent with the feedforward system deficit hypothesis but not with the feedback system deficit hypothesis; however, effects were small and not present in all individual speakers with AOS. Theoretical implications and alternative interpretations of these findings are discussed.

  4. Feedforward and Feedback Control in Apraxia of Speech: Effects of Noise Masking on Vowel Production

    PubMed Central

    Mailend, Marja-Liisa; Guenther, Frank H.

    2015-01-01

    Purpose This study was designed to test two hypotheses about apraxia of speech (AOS) derived from the Directions Into Velocities of Articulators (DIVA) model (Guenther et al., 2006): the feedforward system deficit hypothesis and the feedback system deficit hypothesis. Method The authors used noise masking to minimize auditory feedback during speech. Six speakers with AOS and aphasia, 4 with aphasia without AOS, and 2 groups of speakers without impairment (younger and older adults) participated. Acoustic measures of vowel contrast, variability, and duration were analyzed. Results Younger, but not older, speakers without impairment showed significantly reduced vowel contrast with noise masking. Relative to older controls, the AOS group showed longer vowel durations overall (regardless of masking condition) and a greater reduction in vowel contrast under masking conditions. There were no significant differences in variability. Three of the 6 speakers with AOS demonstrated the group pattern. Speakers with aphasia without AOS did not differ from controls in contrast, duration, or variability. Conclusion The greater reduction in vowel contrast with masking noise for the AOS group is consistent with the feedforward system deficit hypothesis but not with the feedback system deficit hypothesis; however, effects were small and not present in all individual speakers with AOS. Theoretical implications and alternative interpretations of these findings are discussed. PMID:25565143

  5. Language experience and consonantal context effects on perceptual assimilation of French vowels by American-English learners of French1

    PubMed Central

    Levy, Erika S.

    2009-01-01

    Recent research has called for an examination of perceptual assimilation patterns in second-language speech learning. This study examined the effects of language learning and consonantal context on perceptual assimilation of Parisian French (PF) front rounded vowels ∕y∕ and ∕œ∕ by American English (AE) learners of French. AE listeners differing in their French language experience (no experience, formal instruction, formal-plus-immersion experience) performed an assimilation task involving PF ∕y, œ, u, o, i, ε, a∕ in bilabial ∕rabVp∕ and alveolar ∕radVt∕ contexts, presented in phrases. PF front rounded vowels were assimilated overwhelmingly to back AE vowels. For PF ∕œ∕, assimilation patterns differed as a function of language experience and consonantal context. However, PF ∕y∕ revealed no experience effect in alveolar context. In bilabial context, listeners with extensive experience assimilated PF ∕y∕ to ∕ju∕ less often than listeners with no or only formal experience, a pattern predicting the poorest ∕u-y∕ discrimination for the most experienced group. An “internal consistency” analysis indicated that responses were most consistent with extensive language experience and in bilabial context. Acoustical analysis revealed that acoustical similarities among PF vowels alone cannot explain context-specific assimilation patterns. Instead it is suggested that native-language allophonic variation influences context-specific perceptual patterns in second-language learning. PMID:19206888

  6. Documentation of the space station/aircraft acoustic apparatus

    NASA Technical Reports Server (NTRS)

    Clevenson, Sherman A.

    1987-01-01

    This paper documents the design and construction of the Space Station/Aircraft Acoustic Apparatus (SS/AAA). Its capabilities both as a space station acoustic simulator and as an aircraft acoustic simulator are described. Also indicated are the considerations which ultimately resulted in man-rating the SS/AAA. In addition, the results of noise surveys and reverberation time and absorption coefficient measurements are included.

  7. Correlations of decision weights and cognitive function for the masked discrimination of vowels by young and old adults

    PubMed Central

    Lutfi, Robert A.

    2014-01-01

    Older adults are often reported in the literature to have greater difficulty than younger adults understanding speech in noise [Helfer and Wilber (1988). J. Acoust. Soc. Am, 859–893]. The poorer performance of older adults has been attributed to a general deterioration of cognitive processing, deterioration of cochlear anatomy, and/or greater difficulty segregating speech from noise. The current work used perturbation analysis [Berg (1990). J. Acoust. Soc. Am., 149–158] to provide a more specific assessment of the effect of cognitive factors on speech perception in noise. Sixteen older (age 56–79 years) and seventeen younger (age 19–30 years) adults discriminated a target vowel masked by randomly selected masker vowels immediately preceding and following the target. Relative decision weights on target and maskers resulting from the analysis revealed large individual differences across participants despite similar performance scores in many cases. On the most difficult vowel discriminations, the older adult decision weights were significantly correlated with inhibitory control (Color Word Interference test) and pure-tone threshold averages (PTA). Young adult decision weights were not correlated with any measures of peripheral (PTA) or central function (inhibition or working memory). PMID:25256580
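    Perturbation analysis of the kind cited above (Berg, 1990) estimates a listener's relative decision weights from trial-by-trial stimulus perturbations. A heavily simplified sketch, using the per-component correlation between perturbation and response as a stand-in for the full regression-based analysis (all data and names here are hypothetical):

    ```python
    from statistics import mean, pstdev

    def decision_weights(perturbations, responses):
        """Simplified perturbation analysis: the relative decision weight on
        each stimulus component is taken as the correlation between that
        component's trial-by-trial level perturbation and the listener's
        binary response. `perturbations` is a list of per-trial lists (one
        value per component); `responses` is a list of 0/1 decisions."""
        n_components = len(perturbations[0])
        weights = []
        for k in range(n_components):
            x = [trial[k] for trial in perturbations]
            r = [float(resp) for resp in responses]
            cov = mean(xi * ri for xi, ri in zip(x, r)) - mean(x) * mean(r)
            denom = pstdev(x) * pstdev(r)
            weights.append(cov / denom if denom else 0.0)
        return weights

    # A listener whose decisions track component 0 and ignore component 1:
    print(decision_weights([[1, 0], [0, 1], [1, 1], [0, 0]], [1, 0, 1, 0]))
    ```

    A large weight on the target component and small weights on the maskers indicate effective segregation, so the weights can be correlated with cognitive measures as in the study above.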

  8. Articulatory Changes in Muscle Tension Dysphonia: Evidence of Vowel Space Expansion Following Manual Circumlaryngeal Therapy

    ERIC Educational Resources Information Center

    Roy, Nelson; Nissen, Shawn L.; Dromey, Christopher; Sapir, Shimon

    2009-01-01

    In a preliminary study, we documented significant changes in formant transitions associated with successful manual circumlaryngeal treatment (MCT) of muscle tension dysphonia (MTD), suggesting improvement in speech articulation. The present study explores further the effects of MTD on vowel articulation by means of additional vowel acoustic…

  9. The right ear advantage revisited: speech lateralisation in dichotic listening using consonant-vowel and vowel-consonant syllables.

    PubMed

    Sætrevik, Bjørn

    2012-01-01

    The dichotic listening task is typically administered by presenting a consonant-vowel (CV) syllable to each ear and asking the participant to report the syllable heard most clearly. The results tend to show more reports of the right ear syllable than of the left ear syllable, an effect called the right ear advantage (REA). The REA is assumed to be due to the crossing over of auditory fibres and the processing of language stimuli being lateralised to left temporal areas. However, the tendency for most dichotic listening experiments to use only CV syllable stimuli limits the extent to which the conclusions can be generalised to also apply to other speech phonemes. The current study re-examines the REA in dichotic listening by using both CV and vowel-consonant (VC) syllables and combinations thereof. Results showed a replication of the REA response pattern for both CV and VC syllables, thus indicating that the general assumption of left-side localisation of processing can be applied for both types of stimuli. Further, on trials where a CV is presented in one ear and a VC is presented in the other ear, the CV is selected more often than the VC, indicating that these phonemes have an acoustic or processing advantage.

  10. Acoustic analysis of changes in articulation proficiency in patients with advanced head and neck cancer treated with chemoradiotherapy.

    PubMed

    Jacobi, Irene; van Rossum, Maya A; van der Molen, Lisette; Hilgers, Frans J M; van den Brekel, Michiel W M

    2013-12-01

    Our aim was to characterize articulation proficiency and differences between tumor sites before and after chemoradiotherapy for advanced head and neck cancer with the help of acoustic measures. Our further goal was to improve objective speech measures and gain insight into muscle functioning before and after treatment. In 34 patients with laryngeal or hypopharyngeal, nasal or nasopharyngeal, or oral or oropharyngeal cancer, we acoustically analyzed nasality, vowel space, precision, and strength of articulation in 12 speech sounds (/a/, /i/, /u/, /p/, /s/, /z/, /l/, /t/, /tj/, /k/, /x/, /r/) before treatment and 10 weeks and 1 year after treatment. Outcomes were compared between assessment points and between tumor sites. Nasality in nonlaryngeal sites was significantly reduced by treatment. Most affected in articulation were the oral or oropharyngeal cancer sites, followed by the nasal or nasopharyngeal sites. One year after treatment, vowel space had not recovered and consonant articulation had weakened. Laryngeal sites were less affected in articulation by tumor or treatment. Analyses of articulatory-acoustic features are a useful instrument for assessing articulation and speech quality objectively. Assessment of a number of sounds representing various articulation manners, places, and tongue shapes revealed patterns of speech deterioration after chemoradiotherapy. The results suggest that patients' speech could benefit from articulation exercises to address changes in muscle coordination and/or sensitivity and to counteract side effects and "underexercise" atrophy.
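    Vowel space of the kind measured above is commonly quantified as the area of the polygon spanned by the corner vowels (here /a/, /i/, /u/) in the F1-F2 plane, via the shoelace formula. A minimal sketch with hypothetical formant values, not data from the study:

    ```python
    def vowel_space_area(formants):
        """Area of the polygon spanned by corner vowels in the F1-F2 plane
        (shoelace formula). `formants` is an ordered list of (F1, F2)
        pairs in Hz tracing the polygon; the result is in Hz^2."""
        n = len(formants)
        s = 0.0
        for i in range(n):
            f1_a, f2_a = formants[i]
            f1_b, f2_b = formants[(i + 1) % n]
            s += f2_a * f1_b - f2_b * f1_a
        return abs(s) / 2

    # Hypothetical corner-vowel formants (Hz) for /i/, /a/, /u/:
    corners = [(300, 2300), (750, 1300), (350, 800)]
    print(vowel_space_area(corners))  # → 312500.0 Hz²
    ```

    A shrinking area between assessment points is then a direct index of the vowel-space compression reported above.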

  11. [Acoustic analysis and characteristics of vocal range in Beijing Opera actors].

    PubMed

    Qu, C; Liu, Y

    2000-02-01

    To get the objective acoustic parameters of the voice of Beijing Opera actors and set a foundation for the training and protection of the special professional voice. Seventy-three (age 16-57 years) professional actors and students were asked to produce sustained comfortable vowels /a/ and /i/, and to sing two pieces of songs which were in the category of Xipi and Erhuang respectively. Dr. Speech for windows version 3.0 was used to get the acoustic parameters of the vowels and the songs. F0 of the vowels /a/ and /i/ of different Hangdangs were Chou (272.6 +/- 42.0) Hz (mean +/- s), (304.2 +/- 22.1) Hz; Xiaosheng (499.3 +/- 34.0) Hz, (485.4 +/- 18.7) Hz; Laosheng (335.6 +/- 60.0) Hz, (317.9 +/- 45.1) Hz; Hualian (319.0 +/- 61.3) Hz, (340.1 +/- 68.8) Hz; Laodan (427.6 +/- 47.2) Hz, (437.7 +/- 45.8) Hz; Huadan (535.8 +/- 48.8) Hz, (561.6 +/- 29.2) Hz; Qingyi (548.0 +/- 69.5) Hz, (543.5 +/- 79.3) Hz; these and other acoustic parameters of vowels such as Jitter, Shimmer and NNE were all within the normal range given by the software. The vocal range of Beijing Opera actors was from 1.7 to 2.8 oct, and most of the highest and the lowest pitches were higher than that of tenor or soprano. These findings may help to provide insight regarding the acoustic characteristics of the voice of Beijing Opera actors.

  12. An EMA/EPG Study of Vowel-to-Vowel Articulation across Velars in Southern British English

    ERIC Educational Resources Information Center

    Fletcher, Janet

    2004-01-01

    Recent studies have attested that the extent of transconsonantal vowel-to-vowel coarticulation is at least partly dependent on degree of prosodic accentuation, in languages like English. A further important factor is the mutual compatibility of consonant and vowel gestures associated with the segments in question. In this study two speakers of…

  13. Pitch effects on vowel roughness and spectral noise for subjects in four musical voice classifications.

    PubMed

    Newman, R A; Emanuel, F W

    1991-08-01

    This study was designed to investigate the effects of vocal F0 on vowel spectral noise level (SNL) and perceived vowel roughness for subjects in high- and low-pitch voice categories. The subjects were 40 adult singers (10 each sopranos, altos, tenors, and basses). Each produced the vowel /a/ in isolation at a comfortable speaking pitch, and at each of seven assigned pitches spaced at whole-tone intervals over a musical octave within his or her singing pitch range. The eight /a/ productions were repeated by each subject on a second test day. The SNL differences between repeated test samples (different days) were not statistically significant for any subject group. For the vowel samples produced at a comfortable pitch, a relatively large SNL was associated with samples phonated by the subjects of each sex who manifested the relatively low singing pitch range. Regarding the vowel samples produced at the assigned-pitch levels, it was found that both vowel SNL and perceived vowel roughness decreased as test-pitch level was raised over a range of one octave. The relationship between vocal pitch and either vowel roughness or SNL approached linearity for each of the four subject groups.

  14. Vowel reduction in word-final position by early and late Spanish-English bilinguals.

    PubMed

    Byers, Emily; Yavas, Mehmet

    2017-01-01

    Vowel reduction is a prominent feature of American English, as well as other stress-timed languages. As a phonological process, vowel reduction neutralizes multiple vowel quality contrasts in unstressed syllables. For bilinguals whose native language is not characterized by large spectral and durational differences between tonic and atonic vowels, systematically reducing unstressed vowels to the central vowel space can be problematic. Failure to maintain this pattern of stressed-unstressed syllables in American English is one key element that contributes to a "foreign accent" in second language speakers. Reduced vowels, or "schwas," have also been identified as particularly vulnerable to the co-articulatory effects of adjacent consonants. The current study examined the effects of adjacent sounds on the spectral and temporal qualities of schwa in word-final position. Three groups of English-speaking adults were tested: Miami-based monolingual English speakers, early Spanish-English bilinguals, and late Spanish-English bilinguals. Subjects performed a reading task to examine their schwa productions in fluent speech when schwas were preceded by consonants from various points of articulation. Results indicated that monolingual English and late Spanish-English bilingual groups produced targeted vowel qualities for schwa, whereas early Spanish-English bilinguals lacked homogeneity in their vowel productions. This extends prior claims that schwa is targetless for F2 position for native speakers to highly-proficient bilingual speakers. Though spectral qualities lacked homogeneity for early Spanish-English bilinguals, early bilinguals produced schwas with near native-like vowel duration. In contrast, late bilinguals produced schwas with significantly longer durations than English monolinguals or early Spanish-English bilinguals. Our results suggest that the temporal properties of a language are better integrated into second language phonologies than spectral qualities.

  15. An acoustic glottal source for vocal tract physical models

    NASA Astrophysics Data System (ADS)

    Hannukainen, Antti; Kuortti, Juha; Malinen, Jarmo; Ojalammi, Antti

    2017-11-01

    A sound source is proposed for the acoustic measurement of physical models of the human vocal tract. The physical models are produced by fast prototyping, based on magnetic resonance imaging during prolonged vowel production. The sound source, accompanied by custom signal processing algorithms, is used for two kinds of measurements from physical models of the vocal tract: (i) amplitude frequency response and resonant frequency measurements, and (ii) signal reconstructions at the source output according to a target pressure waveform with measurements at the mouth position. The proposed source and the software are validated by computational acoustics experiments and measurements on a physical model of the vocal tract corresponding to the vowels [] of a male speaker.

  16. Acoustic changes in voice after tonsillectomy.

    PubMed

    Saida, H; Hirose, H

    1996-01-01

    The vocal tract from the glottis to the lips is considered to be a resonator, and the voice changes depending upon the shape of the vocal tract. In this report, we examined the change in pharyngeal size and the acoustic features of the voice after tonsillectomy. Subjects were 20 patients. The distance between both anterior pillars (glossopalatine arches) and between both posterior pillars (pharyngopalatine arches) was measured weekly. For acoustic measurements, the five Japanese vowels and Japanese conversational sentences were recorded and analyzed. The distance between both anterior pillars became wider 2 weeks postoperatively, and tended to become narrower thereafter. The distance between both posterior pillars became wider even after 4 weeks postoperatively. No consistent changes in F0, F1 and F2 were found after surgery. Although there was a tendency for a decrease in F3, tonsillectomy did not appear to change the acoustic features of the Japanese vowels remarkably. It was assumed that the subjects may adjust the shape of the vocal tract to produce consistent speech sounds after the surgery using auditory feedback.

  17. A Meta-Analysis: Acoustic Measurement of Roughness and Breathiness

    ERIC Educational Resources Information Center

    v. Latoszek, Ben Barsties; Maryn, Youri; Gerrits, Ellen; De Bodt, Marc

    2018-01-01

    Purpose: Over the last 5 decades, many acoustic measures have been created to measure roughness and breathiness. The aim of this study is to present a meta-analysis of correlation coefficients (r) between auditory-perceptual judgment of roughness and breathiness and various acoustic measures in both sustained vowels and continuous speech. Method:…

  18. On the number of channels needed to classify vowels: Implications for cochlear implants

    NASA Astrophysics Data System (ADS)

    Fourakis, Marios; Hawks, John W.; Davis, Erin

    2005-09-01

    In cochlear implants the incoming signal is analyzed by a bank of filters. Each filter is associated with an electrode to constitute a channel. The present research seeks to determine the number of channels needed for optimal vowel classification. Formant measurements of vowels produced by men and women [Hillenbrand et al., J. Acoust. Soc. Am. 97, 3099-3111 (1995)] were converted to channel assignments. The number of channels varied from 4 to 20 over two frequency ranges (180-4000 and 180-6000 Hz) in equal bark steps. Channel assignments were submitted to linear discriminant analysis (LDA). Classification accuracy increased with the number of channels, ranging from 30% with 4 channels to 98% with 20 channels, both for the female voice. To determine asymptotic performance, LDA classification scores were plotted against the number of channels and fitted with quadratic equations. The number of channels at which no further improvement occurred was determined, averaging 19 across all conditions with little variation. This number of channels seems to resolve the frequency range spanned by the first three formants finely enough to maximize vowel classification. This resolution may not be achieved using six or eight channels as previously proposed. [Work supported by NIH.]
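    Equal-bark channel assignment of the kind described above can be sketched with the Traunmüller (1990) approximation of the bark scale (the study's exact filter design is not specified here, and the formant values in the example are illustrative, not the Hillenbrand data):

    ```python
    import bisect

    def hz_to_bark(f):
        """Traunmüller (1990) approximation of the bark scale."""
        return 26.81 * f / (1960.0 + f) - 0.53

    def channel_edges(n_channels, f_lo=180.0, f_hi=4000.0):
        """Edges (in bark) dividing [f_lo, f_hi] Hz into equal-bark channels."""
        z_lo, z_hi = hz_to_bark(f_lo), hz_to_bark(f_hi)
        step = (z_hi - z_lo) / n_channels
        return [z_lo + i * step for i in range(n_channels + 1)]

    def assign_channel(f, edges):
        """0-based channel index of frequency f (Hz), clamped to range."""
        z = hz_to_bark(f)
        return min(max(bisect.bisect_right(edges, z) - 1, 0), len(edges) - 2)

    edges = channel_edges(20)
    # Illustrative adult male /i/: F1 near 300 Hz, F2 near 2300 Hz.
    print(assign_channel(300.0, edges), assign_channel(2300.0, edges))
    ```

    Each vowel token is thus reduced to a small tuple of channel indices, which is the representation submitted to the LDA classifier in the study above.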

  19. [Acoustic characteristics of adductor spasmodic dysphonia].

    PubMed

    Yang, Yang; Wang, Li-Ping

    2008-06-01

    To explore the acoustic characteristics of adductor spasmodic dysphonia, the acoustic characteristics, including the acoustic signal of recorded voice, three-dimensional sonogram patterns and subjective voice assessment, of 10 patients (7 women, 3 men) with adductor spasmodic dysphonia were compared with those of 10 healthy volunteers (5 women, 5 men). The main clinical manifestations of adductor spasmodic dysphonia were disorders of sound quality, rhythm and fluency: tension dysphonia when reading, acoustic jitter, momentary fluctuations of frequency and volume, voice squeezing, interruption, voice prolongation, and loss of normal chime. Among the 10 patients, there was 1 case of mild dysphonia (abnormal syllables < 25%), 6 of moderate dysphonia (abnormal syllables 25%-49%), 1 of severe dysphonia (abnormal syllables 50%-74%) and 2 of extremely severe dysphonia (abnormal syllables ≥ 75%). The average reading time in the 10 patients was 49 s, with prolonged reading time and aphonic interruptions in the acoustic signal, whereas the average reading time in the healthy control group was 30 s, without voice interruption. The aphonia ratio averaged 42%. Each patient's symptomatic syllables were evident in the three-dimensional sonogram: voice onset time was prolonged, and vowel formants were irregular, interrupted or even absent. The consonants of symptomatic syllables occasionally showed absence or prolongation of the frication segment of affricates. In conclusion, the acoustic characteristics of adductor spasmodic dysphonia are disorders of sound quality, rhythm and fluency, and the three-dimensional sonograms of symptomatic syllables show distinctive changes in the affected vowel or consonant phonemes.

  20. The Vowel Harmony in the Sinhala Language

    ERIC Educational Resources Information Center

    Petryshyn, Ivan

    2005-01-01

    The Sinhala language is characterized by the melodic shifty stress or its essence, the opposition between long and short vowels, the Ablaut variants of the vowels and the syllabic alphabet which, of course, might impact the vowel harmony and can be a feature of all the leveled Indo-European languages. The vowel harmony is a well-known concept in…

  1. Pretreatment Acoustic Predictors of Gender, Femininity, and Naturalness Ratings in Individuals With Male-to-Female Gender Identity.

    PubMed

    Hardy, Teresa L D; Boliek, Carol A; Wells, Kristopher; Dearden, Carol; Zalmanowitz, Connie; Rieger, Jana M

    2016-05-01

    The purpose of this study was to describe the pretreatment acoustic characteristics of individuals with male-to-female gender identity (IMtFGI) and investigate the ability of the acoustic measures to predict ratings of gender, femininity, and vocal naturalness. This retrospective descriptive study included 2 groups of participants. Speakers were IMtFGI who had not previously received communication feminization treatment (N = 25). Listeners were members of the lay community (N = 30). Acoustic data were retrospectively obtained from pretreatment recordings, and pretreatment recordings also served as stimuli for 3 perceptual rating tasks (completed by listeners). Acoustic data generally were within normal limits for male speakers. All but 2 speakers were perceived to be male, limiting information about the relationship between acoustic measures and gender perception. Fundamental frequency (reading) significantly predicted femininity ratings (p = .000). A total of 3 stepwise regression models indicated that minimum frequency (range task), second vowel formant (sustained vowel), and shimmer percentage (sustained vowel) together significantly predicted naturalness ratings (p = .005, p = .003, and p = .002, respectively). Study aims were achieved with the exception of acoustic predictors of gender perception, which could be described for only 2 speakers. Future research should investigate measures of prosody, voice quality, and other aspects of communication as predictors of gender, femininity, and naturalness.

  2. A Vowel Is a Vowel: Generalizing Newly Learned Phonotactic Constraints to New Contexts

    ERIC Educational Resources Information Center

    Chambers, Kyle E.; Onishi, Kristine H.; Fisher, Cynthia

    2010-01-01

    Adults can learn novel phonotactic constraints from brief listening experience. We investigated the representations underlying phonotactic learning by testing generalization to syllables containing new vowels. Adults heard consonant-vowel-consonant study syllables in which particular consonants were artificially restricted to the onset or coda…

  3. Vowel reduction in word-final position by early and late Spanish-English bilinguals

    PubMed Central

    2017-01-01

    Vowel reduction is a prominent feature of American English, as well as other stress-timed languages. As a phonological process, vowel reduction neutralizes multiple vowel quality contrasts in unstressed syllables. For bilinguals whose native language is not characterized by large spectral and durational differences between tonic and atonic vowels, systematically reducing unstressed vowels to the central vowel space can be problematic. Failure to maintain this pattern of stressed-unstressed syllables in American English is one key element that contributes to a “foreign accent” in second language speakers. Reduced vowels, or “schwas,” have also been identified as particularly vulnerable to the co-articulatory effects of adjacent consonants. The current study examined the effects of adjacent sounds on the spectral and temporal qualities of schwa in word-final position. Three groups of English-speaking adults were tested: Miami-based monolingual English speakers, early Spanish-English bilinguals, and late Spanish-English bilinguals. Subjects performed a reading task to examine their schwa productions in fluent speech when schwas were preceded by consonants from various points of articulation. Results indicated that monolingual English and late Spanish-English bilingual groups produced targeted vowel qualities for schwa, whereas early Spanish-English bilinguals lacked homogeneity in their vowel productions. This extends prior claims that schwa is targetless for F2 position for native speakers to highly proficient bilingual speakers. Though spectral qualities lacked homogeneity for early Spanish-English bilinguals, early bilinguals produced schwas with near native-like vowel duration. In contrast, late bilinguals produced schwas with significantly longer durations than English monolinguals or early Spanish-English bilinguals. Our results suggest that the temporal properties of a language are better integrated into second language phonologies than spectral…

  4. Speaker compensation for local perturbation of fricative acoustic feedback.

    PubMed

    Casserly, Elizabeth D

    2011-04-01

    Feedback perturbation studies of speech acoustics have revealed a great deal about how speakers monitor and control their productions of segmental (e.g., formant frequencies) and non-segmental (e.g., pitch) linguistic elements. The majority of previous work, however, overlooks the role of acoustic feedback in consonant production and makes use of acoustic manipulations that affect either entire utterances or the entire acoustic signal, rather than more temporally and phonetically restricted alterations. This study, therefore, seeks to expand the feedback perturbation literature by examining perturbation of consonant acoustics that is applied in a time-restricted and phonetically specific manner. The spectral center of the alveopalatal fricative [ʃ] produced in vowel-fricative-vowel nonwords was incrementally raised until it reached the potential for [s]-like frequencies, but the characteristics of high-frequency energy outside the target fricative remained unaltered. An "offline," more widely accessible signal processing method was developed to perform this manipulation. The local feedback perturbation resulted in changes to speakers' fricative production that were more variable, idiosyncratic, and restricted than the compensation seen in more global acoustic manipulations reported in the literature. Implications and interpretations of the results, as well as future directions for research based on the findings, are discussed.

  5. Cepstral analysis of normal and pathological voice in Spanish adults. Smoothed cepstral peak prominence in sustained vowels versus connected speech.

    PubMed

    Delgado-Hernández, Jonathan; León-Gómez, Nieves M; Izquierdo-Arteaga, Laura M; Llanos-Fumero, Yanira

    In recent years, the use of cepstral measures for acoustic evaluation of voice has increased. One of the most investigated parameters is smoothed cepstral peak prominence (CPPs). The objectives of this paper are to establish the usefulness of this acoustic measure in the objective evaluation of alterations of the voice in Spanish and to determine what type of voice sample (sustained vowels or connected speech) is the most sensitive in evaluating the severity of dysphonia. Forty subjects participated in this study: 20 controls and 20 with dysphonia. Two voice samples were recorded for each subject (one sustained vowel /a/ and four phonetically balanced sentences) and the CPPs was calculated using the Praat programme. Three raters perceptually evaluated the voice samples with the Grade parameter of the GRBAS scale. Significantly lower values were found in the dysphonic voices, both for /a/ (t[38] = 4.85, P < .001) and for the phrases (t[38] = 5.75, P < .001). In relation to the type of voice sample most suitable for evaluating the severity of voice alterations, a strong correlation was found between the acoustic-perceptual scale and CPPs calculated from connected speech (r_s = -0.73) and a moderate correlation with CPPs calculated from the sustained vowel (r_s = -0.56). The results of this preliminary study suggest that CPPs is a good measure to detect dysphonia and to objectively assess the severity of alterations in the voice. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.

  6. Study of acoustic correlates associate with emotional speech

    NASA Astrophysics Data System (ADS)

    Yildirim, Serdar; Lee, Sungbok; Lee, Chul Min; Bulut, Murtaza; Busso, Carlos; Kazemzadeh, Ebrahim; Narayanan, Shrikanth

    2004-10-01

    This study investigates the acoustic characteristics of four different emotions expressed in speech. The aim is to obtain detailed acoustic knowledge on how a speech signal is modulated by changes from neutral to a certain emotional state. Such knowledge is necessary for automatic emotion recognition and classification and emotional speech synthesis. Speech data obtained from two semi-professional actresses are analyzed and compared. Each subject produces 211 sentences with four different emotions: neutral, sad, angry, and happy. We analyze changes in temporal and acoustic parameters such as magnitude and variability of segmental duration, fundamental frequency and the first three formant frequencies as a function of emotion. Acoustic differences among the emotions are also explored with mutual information computation, multidimensional scaling and acoustic likelihood comparison with normal speech. Results indicate that speech associated with anger and happiness is characterized by longer duration, shorter interword silence, higher pitch and rms energy with wider ranges. Sadness is distinguished from other emotions by lower rms energy and longer interword silence. Interestingly, the differences in formant pattern between [happiness/anger] and [neutral/sadness] are better reflected in back vowels such as /a/ (as in "father") than in front vowels. Detailed results on intra- and interspeaker variability will be reported.
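    Two of the parameters compared across emotions in this record, rms energy and segmental duration, can be computed directly from waveform samples. A minimal sketch; the sampling rate and signal values are invented for illustration:

```python
import math

def rms_energy(samples):
    """Root-mean-square energy of one analysis frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def duration_seconds(samples, sample_rate):
    """Segment duration implied by a sample count."""
    return len(samples) / sample_rate

# Illustrative comparison: a louder frame yields higher rms energy.
SR = 16000
quiet = [0.1 * math.sin(2 * math.pi * 220 * t / SR) for t in range(512)]
loud  = [0.8 * math.sin(2 * math.pi * 220 * t / SR) for t in range(512)]
```

Per-emotion statistics such as "higher rms energy with wider ranges" would then be means and ranges of these frame-level values over each utterance.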

  7. Perturbation and Nonlinear Dynamic Analysis of Acoustic Phonatory Signal in Parkinsonian Patients Receiving Deep Brain Stimulation

    ERIC Educational Resources Information Center

    Lee, Victoria S.; Zhou, Xiao Ping; Rahn, Douglas A., III; Wang, Emily Q.; Jiang, Jack J.

    2008-01-01

    Nineteen PD patients who received deep brain stimulation (DBS), 10 non-surgical (control) PD patients, and 11 non-pathologic age- and gender-matched subjects performed sustained vowel phonations. The following acoustic measures were obtained on the sustained vowel phonations: correlation dimension (D[subscript 2]), percent jitter, percent shimmer,…

  8. Embedded Vowels: Remedying the Problems Arising out of Embedded Vowels in the English Writings of Arab Learners

    ERIC Educational Resources Information Center

    Khan, Mohamed Fazlulla

    2013-01-01

    L1 habits often tend to interfere with the process of learning a second language. The vowel habits of Arab learners of English are one such interference. Arabic orthography is such that certain vowels indicated by diacritics are often omitted, since an experienced reader of Arabic knows, by habit, the exact vowel sound in each phonetic…

  9. The Vietnamese Vowel System

    ERIC Educational Resources Information Center

    Emerich, Giang Huong

    2012-01-01

    In this dissertation, I provide a new analysis of the Vietnamese vowel system as a system with fourteen monophthongs and nineteen diphthongs based on phonetic and phonological data. I propose that these Vietnamese contour vowels, /ie/, /ɯ?/ and /uo/, should be grouped with the eleven monophthongs /i e ɛ a ɐ ? ? ɯ…

  10. Acoustic cue weighting in the singleton vs geminate contrast in Lebanese Arabic: The case of fricative consonants.

    PubMed

    Al-Tamimi, Jalal; Khattab, Ghada

    2015-07-01

    This paper is the first reported investigation of the role of non-temporal acoustic cues in the singleton-geminate contrast in Lebanese Arabic, alongside the more frequently reported temporal cues. The aim is to explore the extent to which singleton and geminate consonants show qualitative differences in a language where phonological length is prominent and where moraic structure governs segment timing and syllable weight. Twenty speakers (ten male, ten female) were recorded producing trochaic disyllables with medial singleton and geminate fricatives preceded by phonologically short and long vowels. The following acoustic measures were applied to the medial fricative and surrounding vowels: absolute duration; intensity; fundamental frequency; spectral peak and shape, dynamic amplitude, and voicing patterns of medial fricatives; and vowel quality and voice quality correlates of surrounding vowels. Discriminant analysis and receiver operating characteristic (ROC) curves were used to assess each acoustic cue's contribution to the singleton-geminate contrast. Classification rates of 89% and ROC curves with an area under the curve of 96% confirmed the major role played by temporal cues, with non-temporal cues contributing to the contrast but to a much lesser extent. These results confirm that the underlying contrast for gemination in Arabic is temporal, but highlight [+tense] (fortis) as a secondary feature.
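    For a single acoustic cue, the area under the ROC curve used in this record equals the probability that a randomly chosen geminate token outranks a randomly chosen singleton token on that cue (with ties counting half). A small sketch; the duration values are invented for illustration:

```python
def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve, computed as the fraction of
    (positive, negative) pairs the score orders correctly; ties count 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Medial fricative durations in ms (invented): geminates vs singletons.
geminate  = [182, 170, 195, 160, 175]
singleton = [ 95, 110,  88, 120, 101]
auc = roc_auc(geminate, singleton)   # perfectly separated classes -> 1.0
```

An AUC near 1.0 marks a cue that separates the two length categories almost perfectly, while an AUC near 0.5 marks a cue with no discriminative power.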

  11. Acoustic levitation for high temperature containerless processing in space

    NASA Technical Reports Server (NTRS)

    Rey, C. A.; Sisler, R.; Merkley, D. R.; Danley, T. J.

    1990-01-01

    New facilities for high-temperature containerless processing in space are described, including the acoustic levitation furnace (ALF), the high-temperature acoustic levitator (HAL), and the high-pressure acoustic levitator (HPAL). In the current ALF development, the maximum temperature capabilities of the levitation furnaces are 1750 C, and in the HAL development with a cold wall furnace they will exceed 2000-2500 C. The HPAL demonstrated feasibility of precursor space flight experiments on the ground in a 1 g pressurized-gas environment. Testing of lower density materials up to 1300 C has also been accomplished. It is suggested that advances in acoustic levitation techniques will result in the production of new materials such as ceramics, alloys, and optical and electronic materials.

  12. Lip Movements for an Unfamiliar Vowel: Mandarin Front Rounded Vowel Produced by Japanese Speakers

    ERIC Educational Resources Information Center

    Saito, Haruka

    2016-01-01

    Purpose: The study was aimed at investigating what kind of lip positions are selected by Japanese adult participants for an unfamiliar Mandarin rounded vowel /y/ and if their lip positions are related to and/or differentiated from those for their native vowels. Method: Videotaping and post hoc tracking measurements for lip positions, namely…

  13. Digitised evaluation of speech intelligibility using vowels in maxillectomy patients.

    PubMed

    Sumita, Y I; Hattori, M; Murase, M; Elbashti, M E; Taniguchi, H

    2018-03-01

    Among the functional disabilities that patients face following maxillectomy, speech impairment is a major factor influencing quality of life. Proper rehabilitation of speech, which may include prosthodontic and surgical treatments and speech therapy, requires accurate evaluation of speech intelligibility (SI). A simple, less time-consuming yet accurate evaluation is desirable both for maxillectomy patients and the various clinicians providing maxillofacial treatment. This study sought to determine the utility of digital acoustic analysis of vowels for the prediction of SI in maxillectomy patients, based on a comprehensive understanding of speech production in the vocal tract of maxillectomy patients and its perception. Speech samples were collected from 33 male maxillectomy patients (mean age 57.4 years) in two conditions, without and with a maxillofacial prosthesis, and formant data for the vowels /a/, /e/, /i/, /o/, and /u/ were calculated based on linear predictive coding. The frequency range of formant 2 (F2) was determined as the difference between the minimum and maximum frequency. An SI test was also conducted to reveal the relationship between SI score and F2 range. Statistical analyses were applied. F2 range and SI score were significantly different between the two conditions without and with a prosthesis (both P < .0001). F2 range was significantly correlated with SI score in both conditions (Spearman's r = .843, P < .0001; r = .832, P < .0001, respectively). These findings indicate that calculating the F2 range from 5 vowels has clinical utility for the prediction of SI after maxillectomy. © 2017 John Wiley & Sons Ltd.
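    The F2-range measure in this record is simply the spread between the highest and lowest second-formant frequencies across the five vowels. A minimal sketch with invented formant values, plus a no-ties Spearman rank correlation of the kind used to relate F2 range to SI scores:

```python
def f2_range(f2_by_vowel):
    """Spread between the highest and lowest F2 across a vowel set (Hz)."""
    values = list(f2_by_vowel.values())
    return max(values) - min(values)

def spearman_rho(xs, ys):
    """Spearman rank correlation for samples without ties."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Invented F2 values (Hz) for /a e i o u/ without and with a prosthesis:
without = {"a": 1250, "e": 1900, "i": 2100, "o": 1000, "u": 950}
with_p  = {"a": 1300, "e": 2100, "i": 2450, "o": 900,  "u": 820}
```

In this invented example the prosthesis widens the F2 range, mirroring the direction of the effect reported in the record.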

  14. Acoustic correlates of sexual orientation and gender-role self-concept in women's speech.

    PubMed

    Kachel, Sven; Simpson, Adrian P; Steffens, Melanie C

    2017-06-01

    Compared to studies of male speakers, relatively few studies have investigated acoustic correlates of sexual orientation in women. The present investigation focuses on shedding more light on intra-group variability in lesbians and straight women by using a fine-grained analysis of sexual orientation and collecting data on psychological characteristics (e.g., gender-role self-concept). For a large-scale women's sample (overall n = 108), recordings of spontaneous and read speech were analyzed for median fundamental frequency and acoustic vowel space features. Two studies showed no acoustic differences between lesbians and straight women, but there was evidence of acoustic differences within sexual orientation groups. Intra-group variability in median f0 was found to depend on the exclusivity of sexual orientation; F1 and F2 in /iː/ (study 1) and median f0 (study 2) were acoustic correlates of gender-role self-concept, at least for lesbians. Other psychological characteristics (e.g., sexual orientation of female friends) were also reflected in lesbians' speech. Findings suggest that acoustic features indexicalizing sexual orientation can only be successfully interpreted in combination with a fine-grained analysis of psychological characteristics.

  15. The Influence of Working Memory on Reading Comprehension in Vowelized versus Non-Vowelized Arabic

    ERIC Educational Resources Information Center

    Elsayyad, Hossam; Everatt, John; Mortimore, Tilly; Haynes, Charles

    2017-01-01

    Unlike English, short vowel sounds in Arabic are represented by diacritics rather than letters. According to the presence and absence of these vowel diacritics, the Arabic script can be considered more or less transparent in comparison with other orthographies. The purpose of this study was to investigate the contribution of working memory to…

  16. The Duration of Auditory Sensory Memory for Vowel Processing: Neurophysiological and Behavioral Measures.

    PubMed

    Yu, Yan H; Shafer, Valerie L; Sussman, Elyse S

    2018-01-01

    Speech perception behavioral research suggests that rates of sensory memory decay are dependent on stimulus properties at more than one level (e.g., acoustic level, phonemic level). The neurophysiology of sensory memory decay rate has rarely been examined in the context of speech processing. In a lexical tone study, we showed that long-term memory representation of lexical tone slows the decay rate of sensory memory for these tones. Here, we tested the hypothesis that long-term memory representation of vowels slows the rate of auditory sensory memory decay in a similar way to that of lexical tone. Event-related potential (ERP) responses were recorded to Mandarin non-words contrasting the vowels /i/ vs. /u/ and /y/ vs. /u/ from first-language (L1) Mandarin and L1 American English participants under short and long interstimulus interval (ISI) conditions (short ISI: an average of 575 ms, long ISI: an average of 2675 ms). Results revealed poorer discrimination of the vowel contrasts for English listeners than Mandarin listeners, but with different patterns for behavioral perception and neural discrimination. As predicted, English listeners showed the poorest discrimination and identification for the vowel contrast /y/ vs. /u/, and poorer performance in the long ISI condition. In contrast to Yu et al. (2017), however, we found no effect of ISI reflected in the neural responses, specifically the mismatch negativity (MMN), P3a and late negativity ERP amplitudes. We did see a language group effect, with Mandarin listeners generally showing larger MMN and English listeners showing larger P3a. The behavioral results revealed that native language experience plays a role in echoic sensory memory trace maintenance, but the failure to find an effect of ISI on the ERP results suggests that vowel and lexical tone memory traces decay at different rates. Highlights: We examined the interaction between auditory sensory memory decay and language experience. We compared MMN, P3a, LN…

  17. Perceptual Adaptation of Voice Gender Discrimination with Spectrally Shifted Vowels

    PubMed Central

    Li, Tianhao; Fu, Qian-Jie

    2013-01-01

    Purpose: To determine whether perceptual adaptation improves voice gender discrimination of spectrally shifted vowels and, if so, which acoustic cues contribute to the improvement. Method: Voice gender discrimination was measured for 10 normal-hearing subjects, during 5 days of adaptation to spectrally shifted vowels, produced by processing the speech of 5 male and 5 female talkers with 16-channel sine-wave vocoders. The subjects were randomly divided into 2 groups; one subjected to 50-Hz, and the other to 200-Hz, temporal envelope cutoff frequencies. No preview or feedback was provided. Results: There was significant adaptation in voice gender discrimination with the 200-Hz cutoff frequency, but significant improvement was observed only for 3 female talkers with F0 > 180 Hz and 3 male talkers with F0 < 170 Hz. There was no significant adaptation with the 50-Hz cutoff frequency. Conclusions: Temporal envelope cues are important for voice gender discrimination under spectral shift conditions with perceptual adaptation, but spectral shift may limit the exclusive use of spectral information and/or the use of formant structure on voice gender discrimination. The results have implications for cochlear implant users and for understanding voice gender discrimination. PMID:21173392

  18. Perceptual adaptation of voice gender discrimination with spectrally shifted vowels.

    PubMed

    Li, Tianhao; Fu, Qian-Jie

    2011-08-01

    To determine whether perceptual adaptation improves voice gender discrimination of spectrally shifted vowels and, if so, which acoustic cues contribute to the improvement. Voice gender discrimination was measured for 10 normal-hearing subjects, during 5 days of adaptation to spectrally shifted vowels, produced by processing the speech of 5 male and 5 female talkers with 16-channel sine-wave vocoders. The subjects were randomly divided into 2 groups; one subjected to 50-Hz, and the other to 200-Hz, temporal envelope cutoff frequencies. No preview or feedback was provided. There was significant adaptation in voice gender discrimination with the 200-Hz cutoff frequency, but significant improvement was observed only for 3 female talkers with F(0) > 180 Hz and 3 male talkers with F(0) < 170 Hz. There was no significant adaptation with the 50-Hz cutoff frequency. Temporal envelope cues are important for voice gender discrimination under spectral shift conditions with perceptual adaptation, but spectral shift may limit the exclusive use of spectral information and/or the use of formant structure on voice gender discrimination. The results have implications for cochlear implant users and for understanding voice gender discrimination.

  19. Acoustic interference and recognition space within a complex assemblage of dendrobatid frogs

    PubMed Central

    Amézquita, Adolfo; Flechas, Sandra Victoria; Lima, Albertina Pimentel; Gasser, Herbert; Hödl, Walter

    2011-01-01

    In species-rich assemblages of acoustically communicating animals, heterospecific sounds may constrain not only the evolution of signal traits but also the much less-studied signal-processing mechanisms that define the recognition space of a signal. To test the hypothesis that the recognition space is optimally designed, i.e., that it is narrower toward the species that represent the higher potential for acoustic interference, we studied an acoustic assemblage of 10 diurnally active frog species. We characterized their calls, estimated pairwise correlations in calling activity, and, to model the recognition spaces of five species, conducted playback experiments with 577 synthetic signals on 531 males. Acoustic co-occurrence was not related to multivariate distance in call parameters, suggesting a minor role for spectral or temporal segregation among species uttering similar calls. In most cases, the recognition space overlapped but was greater than the signal space, indicating that signal-processing traits do not act as strictly matched filters against sounds other than homospecific calls. Indeed, the range of the recognition space was strongly predicted by the acoustic distance to neighboring species in the signal space. Thus, our data provide compelling evidence of a role of heterospecific calls in evolutionarily shaping the frogs' recognition space within a complex acoustic assemblage without obvious concomitant effects on the signal. PMID:21969562

  20. Dynamic spectral structure specifies vowels for children and adults

    PubMed Central

    Nittrouer, Susan

    2008-01-01

    When it comes to making decisions regarding vowel quality, adults seem to weight dynamic syllable structure more strongly than static structure, although disagreement exists over the nature of the most relevant kind of dynamic structure: spectral change intrinsic to the vowel or structure arising from movements between consonant and vowel constrictions. Results have been even less clear regarding the signal components children use in making vowel judgments. In this experiment, listeners of four different ages (adults, and 3-, 5-, and 7-year-old children) were asked to label stimuli that sounded either like steady-state vowels or like CVC syllables which sometimes had middle sections masked by coughs. Four vowel contrasts were used, crossed for type (front/back or closed/open) and consonant context (strongly or only slightly constraining of vowel tongue position). All listeners recognized vowel quality with high levels of accuracy in all conditions, but children were disproportionately hampered by strong coarticulatory effects when only steady-state formants were available. Results clarified past studies, showing that dynamic structure is critical to vowel perception for all aged listeners, but particularly for young children, and that it is the dynamic structure arising from vocal-tract movement between consonant and vowel constrictions that is most important. PMID:17902868

  1. Digitized Speech Characteristics in Patients with Maxillectomy Defects.

    PubMed

    Elbashti, Mahmoud E; Sumita, Yuka I; Hattori, Mariko; Aswehlee, Amel M; Taniguchi, Hisashi

    2017-12-06

    Accurate evaluation of speech characteristics through formant frequency measurement is important for proper speech rehabilitation in patients after maxillectomy. This study aimed to evaluate the utility of digital acoustic analysis and vowel pentagon space for the prediction of speech ability after maxillectomy, by comparing the acoustic characteristics of vowel articulation in three classes of maxillectomy defects. Aramany's classifications I, II, and IV were used to group 27 male patients after maxillectomy. Digital acoustic analysis of five Japanese vowels (/a/, /e/, /i/, /o/, and /u/) was performed using a speech analysis system. First formant (F1) and second formant (F2) frequencies were calculated using an autocorrelation method. Data were plotted on an F1-F2 plane for each patient, and the F1 and F2 ranges were calculated. The vowel pentagon spaces were also determined. One-way ANOVA was applied to compare all results between the three groups. Class II maxillectomy patients had a significantly higher F2 range than did Class I and Class IV patients (p = 0.002). In contrast, there was no significant difference in the F1 range between the three classes. The vowel pentagon spaces were significantly larger in Class II maxillectomy patients than in Class I and Class IV patients (p = 0.014). The results of this study indicate that the acoustic characteristics of maxillectomy patients are affected by the defect area. This finding may provide information for obturator design based on vowel articulation and defect class. © 2017 by the American College of Prosthodontists.
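    The vowel pentagon space in this record is the area of the polygon traced by the five vowels' points on the F1-F2 plane, which can be computed with the shoelace formula. The formant values below are invented for illustration:

```python
def polygon_area(points):
    """Shoelace formula: area of a simple polygon given vertices in order."""
    n = len(points)
    twice = sum(points[i][0] * points[(i + 1) % n][1]
                - points[(i + 1) % n][0] * points[i][1]
                for i in range(n))
    return abs(twice) / 2.0

# Five (F2, F1) vertices in Hz, ordered around the pentagon (invented values):
pentagon = [(2250, 280),   # /i/
            (2000, 460),   # /e/
            (1300, 750),   # /a/
            (900, 480),    # /o/
            (850, 310)]    # /u/
area_hz2 = polygon_area(pentagon)  # pentagon area in Hz^2
```

A larger area indicates a more widely dispersed set of vowel targets, which is the sense in which Class II patients' pentagons were "larger" above.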

  2. Early sound symbolism for vowel sounds.

    PubMed

    Spector, Ferrinne; Maurer, Daphne

    2013-01-01

    Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound-shape mapping. In this study, we investigated the influence of vowels on sound-shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded-jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  3. Early sound symbolism for vowel sounds

    PubMed Central

    Spector, Ferrinne; Maurer, Daphne

    2013-01-01

    Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape. PMID:24349684

  4. Neural Processing of Acoustic Duration and Phonological German Vowel Length: Time Courses of Evoked Fields in Response to Speech and Nonspeech Signals

    ERIC Educational Resources Information Center

    Tomaschek, Fabian; Truckenbrodt, Hubert; Hertrich, Ingo

    2013-01-01

    Recent experiments showed that the perception of vowel length by German listeners exhibits the characteristics of categorical perception. The present study sought to find the neural activity reflecting categorical vowel length and the short-long boundary by examining the processing of non-contrastive durations and categorical length using MEG.…

  5. Preliminary study of acoustic analysis for evaluating speech-aid oral prostheses: Characteristic dips in octave spectrum for comparison of nasality.

    PubMed

    Chang, Yen-Liang; Hung, Chao-Ho; Chen, Po-Yueh; Chen, Wei-Chang; Hung, Shih-Han

    2015-10-01

    Acoustic analysis is often used in speech evaluation but seldom for the evaluation of oral prostheses designed for reconstruction of surgical defects. This study aimed to introduce the application of acoustic analysis for patients with velopharyngeal insufficiency (VPI) due to oral surgery and rehabilitated with oral speech-aid prostheses. The pre- and postprosthetic rehabilitation acoustic features of sustained vowel sounds from two patients with VPI were analyzed and compared using the acoustic analysis software Praat. There were significant differences in the octave spectrum of sustained vowel speech sound between pre- and postprosthetic rehabilitation. Acoustic measurements of sustained vowels for patients before and after prosthetic treatment showed no significant differences for all parameters of fundamental frequency, jitter, shimmer, noise-to-harmonics ratio, formant frequency, F1 bandwidth, and band energy difference. The decrease in objective nasality perceptions correlated very well with the decrease in dips of the spectra for the male patient with a higher speech bulb height. Acoustic analysis may be a potential technique for evaluating the functions of oral speech-aid prostheses, which eliminate dysfunctions due to the surgical defect and contribute to a high percentage of intelligible speech. Octave spectrum analysis may also be a valuable tool for detecting changes in nasality characteristics of the voice during prosthetic treatment of VPI. Copyright © 2014. Published by Elsevier B.V.

  6. Direct Mapping of Acoustics to Phonology: On the Lexical Encoding of Front Rounded Vowels in L1 English-L2 French Acquisition

    ERIC Educational Resources Information Center

    Darcy, Isabelle; Dekydtspotter, Laurent; Sprouse, Rex A.; Glover, Justin; Kaden, Christiane; McGuire, Michael; Scott, John H. G.

    2012-01-01

    It is well known that adult US-English-speaking learners of French experience difficulties acquiring high /y/-/u/ and mid /oe/-/[openo]/ front vs. back rounded vowel contrasts in French. This study examines the acquisition of these French vowel contrasts at two levels: phonetic categorization and lexical representations. An ABX categorization task…

  7. Medial-Vowel Writing Difficulty in Korean Syllabic Writing: A Characteristic Sign of Alzheimer's Disease

    PubMed Central

    Yoon, Ji Hye; Jeong, Yong

    2018-01-01

    Background and Purpose Korean-speaking patients with a brain injury may show agraphia that differs from that of English-speaking patients due to the unique features of Hangul syllabic writing. Each grapheme in Hangul must be arranged from left to right and/or top to bottom within a square space to form a syllable, which requires greater visuospatial abilities than when writing the letters constituting an alphabetic writing system. Among the Hangul grapheme positions within a syllable, the position of a vowel is important because it determines the writing direction and the whole configuration in Korean syllabic writing. Given the visuospatial characteristics of the Hangul vowel, individuals with early-onset Alzheimer's disease (EOAD) may show differences between the difficulty of writing Hangul vowels and that of writing consonants, owing to prominent visuospatial dysfunctions caused by parietal lesions. Methods Eighteen patients with EOAD and 18 age-and-education-matched healthy adults participated in this study. The participants were requested to listen to and write 30 monosyllabic characters that consisted of an initial consonant, medial vowel, and final consonant with a one-to-one phoneme-to-grapheme correspondence. We measured the writing time for each grapheme, the pause time between writing the initial consonant and the medial vowel (P1), and the pause time between writing the medial vowel and the final consonant (P2). Results All grapheme writing and pause times were significantly longer in the EOAD group than in the controls. P1 was also significantly longer than P2 in the EOAD group. Conclusions Patients with EOAD might require a higher judgment ability and longer processing time for determining the visuospatial grapheme position before writing medial vowels. This finding suggests that a longer pause time before writing medial vowels is an early marker of visuospatial dysfunction in patients with EOAD. PMID:29504296

  8. Medial-Vowel Writing Difficulty in Korean Syllabic Writing: A Characteristic Sign of Alzheimer's Disease.

    PubMed

    Yoon, Ji Hye; Jeong, Yong; Na, Duk L

    2018-04-01

    Korean-speaking patients with a brain injury may show agraphia that differs from that of English-speaking patients due to the unique features of Hangul syllabic writing. Each grapheme in Hangul must be arranged from left to right and/or top to bottom within a square space to form a syllable, which requires greater visuospatial abilities than when writing the letters constituting an alphabetic writing system. Among the Hangul grapheme positions within a syllable, the position of a vowel is important because it determines the writing direction and the whole configuration in Korean syllabic writing. Given the visuospatial characteristics of the Hangul vowel, individuals with early-onset Alzheimer's disease (EOAD) may show differences between the difficulty of writing Hangul vowels and that of writing consonants, owing to prominent visuospatial dysfunctions caused by parietal lesions. Eighteen patients with EOAD and 18 age-and-education-matched healthy adults participated in this study. The participants were requested to listen to and write 30 monosyllabic characters that consisted of an initial consonant, medial vowel, and final consonant with a one-to-one phoneme-to-grapheme correspondence. We measured the writing time for each grapheme, the pause time between writing the initial consonant and the medial vowel (P1), and the pause time between writing the medial vowel and the final consonant (P2). All grapheme writing and pause times were significantly longer in the EOAD group than in the controls. P1 was also significantly longer than P2 in the EOAD group. Patients with EOAD might require a higher judgment ability and longer processing time for determining the visuospatial grapheme position before writing medial vowels. This finding suggests that a longer pause time before writing medial vowels is an early marker of visuospatial dysfunction in patients with EOAD. Copyright © 2018 Korean Neurological Association.

  9. Discrimination of synthesized English vowels by American and Korean listeners

    NASA Astrophysics Data System (ADS)

    Yang, Byunggon

    2004-05-01

    This study explored the discrimination of synthesized English vowel pairs by 27 American and Korean listeners, both male and female. The average formant values of nine monophthongs produced by ten American English male speakers were employed to synthesize the vowels. Subjects were then instructed explicitly to respond to AX discrimination tasks in which the standard vowel was followed by another vowel whose original formant values had been incremented or decremented. The highest and lowest formant values of the same vowel quality were collected and compared to examine patterns of vowel discrimination. Results showed that the American and Korean groups discriminated the vowel pairs almost identically, and the center formant frequency values of their high and low boundaries fell almost exactly on those of the standards. In addition, the acceptable range of the same vowel quality was similar across the language and gender groups. The acceptable thresholds of each vowel formed an oval, maintaining perceptual contrast with adjacent vowels. Pedagogical implications of these findings are discussed.

  10. Central Tendency and Dispersion Measures of the Fundamental Frequencies of Four Vowels as Produced by Two-Year-Old and Four-Year-Old Children.

    NASA Astrophysics Data System (ADS)

    Monroe, Roberta Lynn

    The intrinsic fundamental frequency effect among vowels is a vocalic phenomenon of adult speech in which high vowels have higher fundamental frequencies than low vowels. Acoustic investigations of children's speech have shown that variability of the speech signal decreases as children's ages increase. Fundamental frequency measures have been suggested as an indirect metric for the development of laryngeal stability and coordination. Studies of the intrinsic fundamental frequency effect have been conducted among 8- and 9-year-old children and in infants. The present study investigated this effect among 2- and 4-year-old children. Eight 2-year-old and eight 4-year-old children produced four vowels, /ae/, /i/, /u/, and /a/, in CVC syllables. Three measures of fundamental frequency were taken: mean fundamental frequency, the intra-utterance standard deviation of the fundamental frequency, and the extent to which the cycle-to-cycle pattern of the fundamental frequency was predicted by a linear trend. An analysis of variance was performed to compare the two age groups, the four vowels, and the earlier and later repetitions of the CVC syllables. A significant difference between the two age groups was detected using the intra-utterance standard deviation of the fundamental frequency. Mean fundamental frequencies and linear trend analysis showed that voicing of the preceding consonant determined the statistical significance of the age-group comparisons. Statistically significant differences among the fundamental frequencies of the four vowels were not detected for either age group.
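The three fundamental frequency measures described above (mean F0, intra-utterance SD, and the fit of a linear trend to the cycle-to-cycle F0 pattern) can be sketched as follows; the F0 track values are illustrative, not data from the study:

```python
import statistics

# Hypothetical cycle-to-cycle F0 track (Hz) for one CVC utterance.
f0_track = [250, 252, 255, 253, 258, 260, 262, 261, 265, 268]

mean_f0 = statistics.mean(f0_track)   # measure 1: mean F0
sd_f0 = statistics.stdev(f0_track)    # measure 2: intra-utterance SD

# Measure 3: fit a straight line to the F0 contour and report R^2,
# i.e., how well a linear trend predicts the cycle-to-cycle pattern.
n = len(f0_track)
xs = range(n)
mean_x = statistics.mean(xs)
cov = sum((x - mean_x) * (y - mean_f0) for x, y in zip(xs, f0_track))
var_x = sum((x - mean_x) ** 2 for x in xs)
slope = cov / var_x
intercept = mean_f0 - slope * mean_x
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, f0_track))
ss_tot = sum((y - mean_f0) ** 2 for y in f0_track)
r_squared = 1 - ss_res / ss_tot

print(f"mean F0 = {mean_f0:.1f} Hz, SD = {sd_f0:.2f} Hz, R^2 = {r_squared:.3f}")
```

Per the study's logic, a smaller intra-utterance SD and a contour better captured by the linear trend would both point toward greater laryngeal stability.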

  11. Non-native Speech Perception Training Using Vowel Subsets: Effects of Vowels in Sets and Order of Training

    PubMed Central

    Nishi, Kanae; Kewley-Port, Diane

    2008-01-01

    Purpose Nishi and Kewley-Port (2007) trained Japanese listeners to perceive nine American English monophthongs and showed that a protocol using all nine vowels (fullset) produced better results than the one using only the three more difficult vowels (subset). The present study extended the target population to Koreans and examined whether protocols combining the two stimulus sets would provide more effective training. Method Three groups of five Korean listeners were trained on American English vowels for nine days using one of the three protocols: fullset only, first three days on subset then six days on fullset, or first six days on fullset then three days on subset. Participants' performance was assessed by pre- and post-training tests, as well as by a mid-training test. Results 1) Fullset training was also effective for Koreans; 2) no advantage was found for the two combined protocols over the fullset only protocol, and 3) sustained “non-improvement” was observed for training using one of the combined protocols. Conclusions In using subsets for training American English vowels, care should be taken not only in the selection of subset vowels, but also for the training orders of subsets. PMID:18664694

  12. Perceptual integration of acoustic cues to laryngeal contrasts in Korean fricatives.

    PubMed

    Lee, Sarah; Katz, Jonah

    2016-02-01

    This paper provides evidence that multiple acoustic cues involving the presence of low-frequency energy integrate in the perception of Korean coronal fricatives. This finding helps explain a surprising asymmetry between the production and perception of these fricatives found in previous studies: lower F0 onset in the following vowel leads to a response bias for plain [s] over fortis [s*], despite the fact that there is no evidence for a corresponding acoustic asymmetry in the production of [s] and [s*]. A fixed classification task using the Garner paradigm provides evidence that low F0 in a following vowel and the presence of voicing during frication perceptually integrate. This suggests that Korean listeners in previous experiments were responding to an "intermediate perceptual property" of stimuli, despite the fact that the individual acoustic components of that property are not all present in typical Korean fricative productions. The finding also broadens empirical support for the general idea of perceptual integration to a language, a different manner of consonant, and a situation where covariance of the acoustic cues under investigation is not generally present in a listener's linguistic input.

  13. Palatalization and Intrinsic Prosodic Vowel Features in Russian

    ERIC Educational Resources Information Center

    Ordin, Mikhail

    2011-01-01

    The presented study is aimed at investigating the interaction of palatalization and intrinsic prosodic features of the vowel in CVC (consonant+vowel+consonant) syllables in Russian. The universal nature of intrinsic prosodic vowel features was confirmed with the data from the Russian language. It was found that palatalization of the consonants…

  14. Technical Aspects of Acoustical Engineering for the ISS [International Space Station

    NASA Technical Reports Server (NTRS)

    Allen, Christopher S.

    2009-01-01

    It is important to control acoustic levels on manned space flight vehicles and habitats to protect crew hearing, allow for voice communications, and ensure a healthy and habitable environment in which to work and live. For the International Space Station (ISS) this is critical because of the long-duration crew stays of approximately 6 months. NASA and the JSC Acoustics Office set acoustic requirements that must be met for hardware to be certified for flight. Modules must meet the NC-50 requirement, and other component hardware is given smaller allocations to meet. In order to meet these requirements, many aspects of noise generation and control must be considered. This presentation has been developed to give insight into the various technical activities performed at JSC to ensure that a suitable acoustic environment is provided for the ISS crew. Examples discussed include fan noise, acoustic flight material development, on-orbit acoustic monitoring, and a specific hardware development and acoustical design case, the ISS Crew Quarters.

  15. Consonant and Vowel Identification in Cochlear Implant Users Measured by Nonsense Words: A Systematic Review and Meta-Analysis.

    PubMed

    Rødvik, Arne Kirkhorn; von Koss Torkildsen, Janne; Wie, Ona Bø; Storaker, Marit Aarvaag; Silvola, Juha Tapio

    2018-04-17

    The purpose of this systematic review and meta-analysis was to establish a baseline of the vowel and consonant identification scores in prelingually and postlingually deaf users of multichannel cochlear implants (CIs) tested with consonant-vowel-consonant and vowel-consonant-vowel nonsense syllables. Six electronic databases were searched for peer-reviewed articles reporting consonant and vowel identification scores in CI users measured by nonsense words. Relevant studies were independently assessed and screened by 2 reviewers. Consonant and vowel identification scores were presented in forest plots and compared between studies in a meta-analysis. Forty-seven articles with 50 studies, including 647 participants (581 postlingually deaf and 66 prelingually deaf), met the inclusion criteria of this study. The mean performance on vowel identification tasks for the postlingually deaf CI users was 76.8% (N = 5), which was higher than the mean performance for the prelingually deaf CI users (67.7%; N = 1). The mean performance on consonant identification tasks for the postlingually deaf CI users was higher (58.4%; N = 44) than for the prelingually deaf CI users (46.7%; N = 6). The most common consonant confusions were found between those with the same manner of articulation (/k/ as /t/, /m/ as /n/, and /p/ as /t/). There were no statistically significant differences between the scores for prelingually and postlingually deaf CI users. The consonants that were incorrectly identified were typically confused with other consonants sharing the same acoustic properties, namely, voicing, duration, nasality, and silent gaps. A univariate metaregression model, although not statistically significant, indicated that duration of implant use in postlingually deaf adults predicts a substantial portion of their consonant identification ability. As there is no ceiling

  16. International Space Station Acoustics - A Status Report

    NASA Technical Reports Server (NTRS)

    Allen, Christopher S.; Denham, Samuel A.

    2011-01-01

    It is important to control acoustic noise aboard the International Space Station (ISS) to provide a satisfactory environment for voice communications, crew productivity, and restful sleep, and to minimize the risk of temporary and permanent hearing loss. Acoustic monitoring is an important part of the noise control process on ISS, providing critical data for trend analysis, noise exposure analysis, and validation of acoustic analysis and predictions, and providing strong evidence for ensuring crew health and safety, thus allowing Flight Certification. For this purpose, sound level meter (SLM) measurements and acoustic noise dosimetry are routinely performed. And since the primary noise sources on ISS include the environmental control and life support system (fans and airflow) and the active thermal control system (pumps and water flow), acoustic monitoring will indicate changes in hardware noise emissions that may indicate system degradation or performance issues. This paper provides the current acoustic levels in the ISS modules and sleep stations, and is an update to the status presented in 2003. Many new modules and sleep stations have been added to the ISS since that time. In addition, noise mitigation efforts have reduced noise levels in some areas. As a result, the acoustic levels on the ISS have improved.

  17. Automated acoustic analysis of task dependency in adductor spasmodic dysphonia versus muscle tension dysphonia.

    PubMed

    Roy, Nelson; Mazin, Alqhazo; Awan, Shaheen N

    2014-03-01

    Distinguishing muscle tension dysphonia (MTD) from adductor spasmodic dysphonia (ADSD) can be difficult. Unlike MTD, ADSD is described as "task-dependent," implying that dysphonia severity varies depending upon the demands of the vocal task, with connected speech thought to be more symptomatic than sustained vowels. This study used an acoustic index of dysphonia severity (i.e., the Cepstral Spectral Index of Dysphonia [CSID]) to: 1) assess the value of "task dependency" to distinguish ADSD from MTD, and 2) examine associations between the CSID and listener ratings. Case-Control Study. CSID estimates of dysphonia severity for connected speech and sustained vowels of patients with ADSD (n = 36) and MTD (n = 45) were compared. The diagnostic precision of task dependency (as evidenced by differences in CSID-estimated dysphonia severity between connected speech and sustained vowels) was examined. In ADSD, CSID-estimated severity for connected speech (M = 39.2, SD = 22.0) was significantly worse than for sustained vowels (M = 29.3, SD = 21.9), [P = .020]. Whereas in MTD, no significant difference in CSID-estimated severity was observed between connected speech (M = 55.1, SD = 23.8) and sustained vowels (M = 50.0, SD = 27.4), [P = .177]. CSID evidence of task dependency correctly identified 66.7% of ADSD cases (sensitivity) and 64.4% of MTD cases (specificity). CSID and listener ratings were significantly correlated. Task dependency in ADSD, as revealed by differences in acoustically-derived estimates of dysphonia severity between connected speech and sustained vowel production, is a potentially valuable diagnostic marker. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
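The diagnostic-precision figures above follow the standard definitions of sensitivity (true positives / all ADSD cases) and specificity (true negatives / all MTD cases). A minimal sketch of that computation; the label lists are reconstructed from the reported counts (24 of 36 ADSD cases flagged, 29 of 45 MTD cases not flagged), not the study's raw data:

```python
def sensitivity_specificity(truth, predicted, positive="ADSD"):
    """Return (sensitivity, specificity) for a binary diagnostic marker."""
    pairs = list(zip(truth, predicted))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    tn = sum(t != positive and p != positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# 36 ADSD cases (24 correctly flagged as task-dependent) followed by
# 45 MTD cases (29 correctly not flagged).
truth = ["ADSD"] * 36 + ["MTD"] * 45
predicted = ["ADSD"] * 24 + ["MTD"] * 12 + ["MTD"] * 29 + ["ADSD"] * 16
sens, spec = sensitivity_specificity(truth, predicted)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```

Here the "prediction" is whether CSID-estimated severity for connected speech exceeds that for sustained vowels, the task-dependency marker the study evaluates.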

  18. Acoustic vibration analysis for utilization of woody plant in space environment

    NASA Astrophysics Data System (ADS)

    Chida, Yukari; Yamashita, Masamichi; Hashimoto, Hirofumi; Sato, Seigo; Tomita-Yokotani, Kaori; Baba, Keiichi; Suzuki, Toshisada; Motohashi, Kyohei; Sakurai, Naoki; Nakagawa-izumi, Akiko

    2012-07-01

    We propose raising woody plants for space agriculture on Mars; space agriculture would utilize wood within its ecosystem. The shape a tree takes when grown in a space environment, under low- or microgravity conditions, is unknown. Angiosperm trees form tension wood to maintain their shape. Tension wood formation is deeply related to gravity, but the details of the mechanism of its formation have not yet been clarified. To clarify this mechanism, a space experiment on the International Space Station (ISS) is the best first step, and an easy method must be established for the crews who will conduct the experiments on the ISS. Here, we propose to investigate the suitability of acoustic vibration analysis for the ISS experiment. Two types of Japanese cherry tree, weeping and upright types of Prunus sp., were analyzed by the acoustic vibration method. The coefficient of variation (CV) of sound speed was calculated from the acoustic vibration analysis. The amounts of lignin and decomposed lignin were estimated by the Klason and Py-GC/MS methods, respectively. The relationships between the results of the acoustic vibration analysis and the inner components of the tested woody materials were investigated, and the expected correlations were confirmed. Our results indicate that acoustic vibration analysis would be useful as a nondestructive method for determining internal composition in a space environment.

  19. A Comparison of Persian Vowel Production in Hearing-Impaired Children Using a Cochlear Implant and Normal-Hearing Children.

    PubMed

    Jafari, Narges; Drinnan, Michael; Mohamadi, Reyhane; Yadegari, Fariba; Nourbakhsh, Mandana; Torabinezhad, Farhad

    2016-05-01

    Normal-hearing (NH) acuity and auditory feedback control are crucial for human voice production and articulation. The lack of auditory feedback in individuals with profound hearing impairment changes their vowel production. The purpose of this study was to compare Persian vowel production in deaf children with cochlear implants (CIs) with that in NH children. The participants were 20 children (12 girls and 8 boys) aged 5 years 1 month to 9 years. All patients had congenital hearing loss and received a multichannel CI at an average age of 3 years; they had at least 6 months of experience with their current device. The control group consisted of 20 NH children (12 girls and 8 boys) aged 5 to 9 years. The two groups were matched by age. Participants were native Persian speakers who were asked to produce the vowels /i/, /e/, /ӕ/, /u/, /o/, and /a/. The averages of the first formant frequency (F1) and second formant frequency (F2) of the six vowels were measured using Praat software (Version 5.1.44, Boersma & Weenink, 2012). Independent-samples t tests were conducted to assess differences in F1 and F2 values and in the area of the vowel space between the two groups. Mean values of F1 were increased in CI children; the mean values of F1 for the vowels /i/ and /a/ and of F2 for the vowels /a/ and /o/ were significantly different (P < 0.05). The changes in F1 and F2 showed a centralized vowel space for CI children. F1 is increased in CI children, probably because CI children tend to overarticulate. We hypothesize that this is due to a lack of auditory feedback: hearing-impaired children attempt to compensate via proprioceptive feedback during the articulatory process. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  20. Identification and Multiplicity of Double Vowels in Cochlear Implant Users

    ERIC Educational Resources Information Center

    Kwon, Bomjun J.; Perry, Trevor T.

    2014-01-01

    Purpose: The present study examined cochlear implant (CI) users' perception of vowels presented concurrently (i.e., "double vowels") to further our understanding of auditory grouping in electric hearing. Method: Identification of double vowels and single vowels was measured with 10 CI subjects. Fundamental frequencies (F0s) of…

  1. Dissociation of tone and vowel processing in Mandarin idioms.

    PubMed

    Hu, Jiehui; Gao, Shan; Ma, Weiyi; Yao, Dezhong

    2012-09-01

    Using event-related potentials, this study measured the access of suprasegmental (tone) and segmental (vowel) information in spoken word recognition with Mandarin idioms. Participants performed a delayed-response acceptability task, in which they judged the correctness of the last word of each idiom, which might deviate from the correct word in either tone or vowel. Results showed that, compared with the correct idioms, a larger early negativity appeared only for vowel violation. Additionally, a larger N400 effect was observed for vowel mismatch than tone mismatch. A control experiment revealed that these differences were not due to low-level physical differences across conditions; instead, they represented the greater constraining power of vowels than tones in the lexical selection and semantic integration of the spoken words. Furthermore, tone violation elicited a more robust late positive component than vowel violation, suggesting different reanalyses of the two types of information. In summary, the current results support a functional dissociation of tone and vowel processing in spoken word recognition. Copyright © 2012 Society for Psychophysiological Research.

  2. International Space Station Crew Quarters Ventilation and Acoustic Design Implementation

    NASA Technical Reports Server (NTRS)

    Broyan, James L., Jr.; Cady, Scott M; Welsh, David A.

    2010-01-01

    The International Space Station (ISS) United States Operational Segment has four permanent rack-sized ISS Crew Quarters (CQs) providing a private crew member space. The CQs use Node 2 cabin air for ventilation/thermal cooling, as opposed to conditioned ducted air from the ISS Common Cabin Air Assembly (CCAA) or the ISS fluid cooling loop. Consequently, the CQ can only increase the air flow rate to reduce the temperature delta between the cabin and the CQ interior. However, increasing airflow causes increased acoustic noise, so efficient airflow distribution is an important design parameter. The CQ utilizes a two-fan push-pull configuration to ensure fresh air at the crew member's head position and reduce acoustic exposure. The CQ ventilation ducts are conduits to the louder Node 2 cabin aisle way, which required significant acoustic mitigation controls. The CQ interior needs to be below noise criteria curve 40 (NC-40). The design implementation of the CQ ventilation system and acoustic mitigation are closely inter-related and require consideration of crew comfort balanced with use of interior habitable volume, accommodation of fan failures, and possible crew uses that impact ventilation and acoustic performance. Each CQ required 13% of its total volume and approximately 6% of its total mass to reduce acoustic noise. This paper illustrates the types of model analysis, assumptions, vehicle interactions, and trade-offs required for CQ ventilation and acoustics. Additionally, on-orbit ventilation system performance and initial crew feedback are presented. This approach is applicable to any private enclosed space that the crew will occupy.

  3. Perception of Vowel Length by Japanese- and English-Learning Infants

    ERIC Educational Resources Information Center

    Mugitani, Ryoko; Pons, Ferran; Fais, Laurel; Dietrich, Christiane; Werker, Janet F.; Amano, Shigeaki

    2009-01-01

    This study investigated vowel length discrimination in infants from 2 language backgrounds, Japanese and English, in which vowel length is either phonemic or nonphonemic. Experiment 1 revealed that English 18-month-olds discriminate short and long vowels although vowel length is not phonemically contrastive in English. Experiments 2 and 3 revealed…

  4. Measurement and Characterization of Space Shuttle Solid Rocket Motor Plume Acoustics

    NASA Technical Reports Server (NTRS)

    Kenny, Robert Jeremy

    2009-01-01

    NASA's models for predicting lift-off acoustics of launch vehicles are currently being updated using several numerical and empirical inputs. One empirical input comes from free-field acoustic data measured at three Space Shuttle Reusable Solid Rocket Motor (RSRM) static firings. The measurements were collected through a joint collaboration between NASA - Marshall Space Flight Center, Wyle Labs, and ATK Launch Systems. For the first time, NASA measured large-thrust solid rocket motor plume acoustics for evaluation of both noise sources and acoustic radiation properties. Over sixty acoustic free-field measurements were taken over the three static firings to support evaluation of acoustic radiation near the rocket plume, far-field acoustic radiation patterns, plume acoustic power efficiencies, and apparent noise source locations within the plume. At approximately 67 m off nozzle centerline and 70 m downstream of the nozzle exit plane, the measured overall sound pressure level of the RSRM was 155 dB. Peak overall levels in the far field were over 140 dB at 300 m and 50 deg off the RSRM thrust centerline. The successful collaboration has yielded valuable data that are being implemented into NASA's lift-off acoustic models, which will then be used to update predictions for Ares I and Ares V liftoff acoustic environments.

  5. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English

    ERIC Educational Resources Information Center

    Banzina, Elina; Dilley, Laura C.; Hewitt, Lynne E.

    2016-01-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated Russian learners' of English production of SS and UU syllables. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found,…

  6. Sex-Related Acoustic Changes in Voiceless English Fricatives

    ERIC Educational Resources Information Center

    Fox, Robert Allen; Nissen, Shawn L.

    2005-01-01

    This investigation is a comprehensive acoustic study of 4 voiceless fricatives (/f [theta] s [esh]/) in English produced by adults and pre-and postpubescent children aged 6-14 years. Vowel duration, amplitude, and several different spectral measures (including spectral tilt and spectral moments) were examined. Of specific interest was the pattern…

  7. Effects of Word Position on the Acoustic Realization of Vietnamese Final Consonants.

    PubMed

    Tran, Thi Thuy Hien; Vallée, Nathalie; Granjon, Lionel

    2018-05-28

    A variety of studies have shown differences between phonetic features of consonants according to their prosodic and/or syllable (onset vs. coda) positions. However, differences are not always found, and interactions between the various factors involved are complex and not well understood. Our study compares acoustical characteristics of coda consonants in Vietnamese, taking into account their position within words. Traditionally described as monosyllabic, Vietnamese is partially polysyllabic at the lexical level. In this language, tautosyllabic consonant sequences are prohibited, and adjacent consonants are found only at syllable boundaries, either within polysyllabic words (CVC.CVC) or across monosyllabic words (CVC#CVC). This study is designed to examine whether syllable boundary types (interword vs. intraword) have an effect on the acoustic realization of codas. The results show significant acoustic differences in consonant realizations according to syllable boundary type, suggesting different coarticulation patterns between nuclei and codas. In addition, as Vietnamese voiceless stops are generally unreleased in coda position, with no burst to carry consonantal information, our results show that the second half of a vowel contains acoustic cues that aid in discriminating the place of articulation of the following consonant. © 2018 S. Karger AG, Basel.

  8. Enhancing Vowel Discrimination Using Constructed Spelling

    ERIC Educational Resources Information Center

    Stewart, Katherine; Hayashi, Yusuke; Saunders, Kathryn

    2010-01-01

    In a computerized task, an adult with intellectual disabilities learned to construct consonant-vowel-consonant words in the presence of corresponding spoken words. During the initial assessment, the participant demonstrated high accuracy on one word group (containing the vowel-consonant units "it" and "un") but low accuracy on the other group…

  9. Speaker-Sex Discrimination for Voiced and Whispered Vowels at Short Durations.

    PubMed

    Smith, David R R

    2016-01-01

    Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure which in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar for very short durations but, as stimulus duration increases, voiced vowel performance would improve relative to whispered vowel performance as pitch information becomes available. This pattern of results was shown for women's but not for men's voices. A whispered vowel needs to have a duration three times longer than a voiced vowel before listeners can reliably tell whether it is spoken by a man or woman (∼30 ms vs. ∼10 ms). Listeners were half as sensitive to information about speaker sex when it was carried by whispered rather than voiced vowels.

  10. Speaker-Sex Discrimination for Voiced and Whispered Vowels at Short Durations

    PubMed Central

    2016-01-01

    Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure which in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar for very short durations but, as stimulus duration increases, voiced vowel performance would improve relative to whispered vowel performance as pitch information becomes available. This pattern of results was shown for women's but not for men's voices. A whispered vowel needs to have a duration three times longer than a voiced vowel before listeners can reliably tell whether it is spoken by a man or woman (∼30 ms vs. ∼10 ms). Listeners were half as sensitive to information about speaker sex when it was carried by whispered rather than voiced vowels. PMID:27757218

  11. International Space Station Acoustics - A Status Report

    NASA Technical Reports Server (NTRS)

    Allen, Christopher S.

    2015-01-01

    It is important to control acoustic noise aboard the International Space Station (ISS) to provide a satisfactory environment for voice communications, crew productivity, alarm audibility, and restful sleep, and to minimize the risk of temporary and permanent hearing loss. Acoustic monitoring is an important part of the noise control process on ISS, providing critical data for trend analysis, noise exposure analysis, and validation of acoustic analyses and predictions, as well as strong evidence for ensuring crew health and safety, thus allowing Flight Certification. For this purpose, sound level meter (SLM) measurements and acoustic noise dosimetry are routinely performed. Since the primary noise sources on ISS include the environmental control and life support system (fans and airflow) and the active thermal control system (pumps and water flow), acoustic monitoring can also reveal changes in hardware noise emissions that may indicate system degradation or performance issues. This paper provides the current acoustic levels in the ISS modules and sleep stations and is an update to the status presented in 2011. Since that last status report, many payloads (science experiment hardware) have been added, and a significant number of quiet ventilation fans have replaced noisier fans in the Russian Segment. Noise mitigation efforts are also planned to reduce the noise levels of the T2 treadmill and of Node 3 in general. As a result, the acoustic levels on the ISS continue to improve.

  12. Acoustic levitation and manipulation for space applications

    NASA Technical Reports Server (NTRS)

    Wang, T. G.

    1979-01-01

    A wide spectrum of experiments to be performed in space in a microgravity environment require levitation and manipulation of liquid or molten samples. A novel acoustic method has been developed at JPL for controlling liquid samples without physical contact. This method utilizes the static pressure generated by three orthogonal acoustic standing waves excited within an enclosure. Furthermore, this method allows the sample to be rotated and/or oscillated by modifying the phase angles and/or the amplitude of the acoustic field. This technique has been proven both in our laboratory and in the microgravity environment provided by KC-135 flights. Samples placed within our chamber, driven at the (1,0,0), (0,1,0), and (0,0,1) modes, were indeed levitated, rotated, and oscillated.
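
The (1,0,0)-type modes mentioned above are axial resonances of a rectangular enclosure, whose frequencies follow the standard cavity formula f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2). A minimal sketch of that relation; the chamber dimensions used here are hypothetical, not taken from the paper:

```python
import math

def cavity_mode_freq(mode, dims, c=343.0):
    """Resonance frequency (Hz) of standing-wave mode (nx, ny, nz) in a
    rectangular enclosure with interior dimensions dims = (Lx, Ly, Lz) in m."""
    return (c / 2.0) * math.sqrt(sum((n / L) ** 2 for n, L in zip(mode, dims)))

# Hypothetical 0.12 m cubic chamber: the three orthogonal axial modes
# (1,0,0), (0,1,0), (0,0,1) used for levitation are degenerate.
dims = (0.12, 0.12, 0.12)
for mode in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    print(mode, round(cavity_mode_freq(mode, dims), 1))
```

For a cubic chamber the three axial modes fall at the same frequency, consistent with driving all three simultaneously to build up a static levitation pressure along each axis.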

  13. Acoustic evolution of old Italian violins from Amati to Stradivari.

    PubMed

    Tai, Hwan-Ching; Shen, Yen-Ping; Lin, Jer-Horng; Chung, Dai-Ting

    2018-06-05

    The shape and design of the modern violin are largely influenced by two makers from Cremona, Italy: the instrument was invented by Andrea Amati and then improved by Antonio Stradivari. Although the construction methods of Amati and Stradivari have been carefully examined, the underlying acoustic qualities that contribute to their popularity are little understood. According to Geminiani, a Baroque violinist, the ideal violin tone should "rival the most perfect human voice." To investigate whether Amati and Stradivari violins produce voice-like features, we recorded the scales of 15 antique Italian violins as well as male and female singers. The frequency response curves are similar between the Andrea Amati violin and human singers, up to ∼4.2 kHz. By linear predictive coding analyses, the first two formants of the Amati exhibit vowel-like qualities (F1/F2 = 503/1,583 Hz), mapping to the central region on the vowel diagram. Its third and fourth formants (F3/F4 = 2,602/3,731 Hz) resemble those produced by male singers. Using F1 to F4 values to estimate the corresponding vocal tract length, we observed that antique Italian violins generally resemble basses/baritones, but Stradivari violins are closer to tenors/altos. Furthermore, the vowel qualities of Stradivari violins show reduced backness and height. The unique formant properties displayed by Stradivari violins may represent the acoustic correlate of their distinctive brilliance perceived by musicians. Our data demonstrate that the pioneering designs of Cremonese violins exhibit voice-like qualities in their acoustic output. Copyright © 2018 the Author(s). Published by PNAS.
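
The vocal tract length estimate mentioned above can be illustrated with the uniform-tube (quarter-wave resonator) approximation, Fn = (2n - 1)c/(4L). This is the common textbook model, not necessarily the exact method of the paper, and the speed-of-sound value is an assumption:

```python
def vtl_from_formants(formants, c=35000.0):
    """Average vocal-tract length estimate (cm) from formants F1..Fn,
    modelling the tract as a uniform quarter-wave resonator:
    Fn = (2n - 1) * c / (4 * L)  =>  L = (2n - 1) * c / (4 * Fn).
    c is an assumed speed of sound in cm/s."""
    lengths = [(2 * n - 1) * c / (4.0 * f) for n, f in enumerate(formants, start=1)]
    return sum(lengths) / len(lengths)

# F1-F4 of the Andrea Amati violin reported in the abstract above.
print(round(vtl_from_formants([503, 1583, 2602, 3731]), 1))  # 16.8
```

Applied to the quoted Amati formants, the estimate comes out near 16.8 cm, within the range of an adult human vocal tract, which is consistent with the voice-like interpretation.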

  14. The time course of learning during a vowel discrimination task by hearing-impaired and masked normal-hearing listeners

    NASA Astrophysics Data System (ADS)

    Davis, Carrie; Kewley-Port, Diane; Coughlin, Maureen

    2002-05-01

    Vowel discrimination was compared between a group of young, well-trained listeners with mild-to-moderate sensorineural hearing impairment (YHI), and a matched group of normal hearing, noise-masked listeners (YNH). Unexpectedly, discrimination of F1 and F2 in the YHI listeners was equal to or better than that observed in YNH listeners in three conditions of similar audibility [Davis et al., J. Acoust. Soc. Am. 109, 2501 (2001)]. However, in the same time interval, the YHI subjects completed an average of 55% more blocks of testing than the YNH group. New analyses were undertaken to examine the time course of learning during the vowel discrimination task, to determine whether performance was affected by number of trials. Learning curves for a set of vowels in the F1 and F2 regions showed no significant differences between the YHI and YNH listeners. Thus while the YHI subjects completed more trials overall, they achieved a level of discrimination similar to that of their normal-hearing peers within the same number of blocks. Implications of discrimination performance in relation to hearing status and listening strategies will be discussed. [Work supported by NIHDCD-02229.]

  15. Computerized Analysis of Acoustic Characteristics of Patients with Internal Nasal Valve Collapse Before and After Functional Rhinoplasty

    PubMed Central

    Rezaei, Fariba; Omrani, Mohammad Reza; Abnavi, Fateme; Mojiri, Fariba; Golabbakhsh, Marzieh; Barati, Sohrab; Mahaki, Behzad

    2015-01-01

    Acoustic analysis of the sounds produced during speech provides significant information about the physiology of the larynx and vocal tract. Analysis of the voice power spectrum is a fundamental, sensitive method of acoustic assessment that provides valuable information about the voice source and the characteristics of vocal tract resonance cavities. Changes in long-term average spectrum (LTAS) spectral tilt and harmonics-to-noise ratio (HNR) were analyzed to assess voice quality before and after functional rhinoplasty in patients with internal nasal valve collapse. Before and 3 months after functional rhinoplasty, 12 participants were evaluated, and HNR and LTAS spectral tilt were estimated for the vowels /a/ and /i/. After surgery, HNR increased and LTAS spectral tilt decreased. Mean LTAS spectral tilt in vowel /a/ decreased from 2.37 ± 1.04 to 2.28 ± 1.17 (P = 0.388) and in vowel /i/ from 4.16 ± 1.65 to 2.73 ± 0.69 (P = 0.008). Mean HNR in vowel /a/ increased from 20.71 ± 3.93 to 25.06 ± 2.67 (P = 0.002) and in vowel /i/ from 21.28 ± 4.11 to 25.26 ± 3.94 (P = 0.002). Modification of the vocal tract allowed the vocal cords to close sufficiently, showing that although rhinoplasty does not affect the larynx directly, it changes the structure of the vocal tract and consequently the resonance of voice production. The aim of this study was to investigate changes in voice parameters after functional rhinoplasty in patients with internal nasal valve collapse by computerized analysis of acoustic characteristics. PMID:26955564
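
HNR is commonly estimated from the peak r of the normalized autocorrelation within a plausible pitch-lag range, via HNR = 10 * log10(r / (1 - r)), following Boersma's formulation. A toy sketch on a synthetic signal; the sampling rate, pitch range, and noise level are all assumptions, and production tools such as Praat add windowing and interpolation on top of this idea:

```python
import math
import random

def hnr_autocorr(x, min_lag, max_lag):
    """Harmonics-to-noise ratio (dB) from the peak normalized
    autocorrelation r in a pitch-lag range: HNR = 10*log10(r/(1-r))."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    energy = sum(v * v for v in x) / n
    best = 0.0
    for lag in range(min_lag, max_lag + 1):
        acf = sum(x[i] * x[i + lag] for i in range(n - lag)) / (n - lag)
        best = max(best, acf / energy)
    best = min(best, 0.999999)  # guard against log blow-up for clean signals
    return 10.0 * math.log10(best / (1.0 - best))

random.seed(1)
fs, f0 = 10000, 125  # assumed sampling rate and pitch (period = 80 samples)
signal = [math.sin(2 * math.pi * f0 * i / fs) + 0.05 * random.gauss(0, 1)
          for i in range(2000)]
print(round(hnr_autocorr(signal, 60, 100), 1))  # high HNR: mostly periodic
```

With the small additive noise used here the estimate lands in the low-20s of dB, comparable in magnitude to the post-surgery vowel HNR values reported above.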

  16. Analysis of Acoustic Features in Speakers with Cognitive Disorders and Speech Impairments

    NASA Astrophysics Data System (ADS)

    Saz, Oscar; Simón, Javier; Rodríguez, W. Ricardo; Lleida, Eduardo; Vaquero, Carlos

    2009-12-01

    This work presents the results of an analysis of the acoustic features (formants and the three suprasegmental features: tone, intensity, and duration) of vowel production in a group of 14 young speakers with different kinds of speech impairments due to physical and cognitive disorders. A corpus of unimpaired children's speech is used to determine the reference values for these features in speakers without any kind of speech impairment within the same domain as the impaired speakers, namely 57 isolated words. The signal processing to extract the formant and pitch values is based on a Linear Prediction Coefficients (LPC) analysis of the segments considered as vowels in a Hidden Markov Model (HMM) based Viterbi forced alignment. Intensity and duration are also based on the output of the automated segmentation. The main conclusion of the work is that intelligibility of vowel production is lowered in impaired speakers even when the vowel is perceived as correct by human labelers. The decrease in intelligibility is due to a 30% increase in confusability in the formant map, a 50% reduction in the discriminative power of energy between stressed and unstressed vowels, and a 50% increase in the standard deviation of vowel length. On the other hand, impaired speakers keep good control of tone in the production of stressed and unstressed vowels.
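
The LPC analysis mentioned above can be sketched in miniature with the autocorrelation method and the Levinson-Durbin recursion. This order-2 toy on a synthetic one-resonance signal is illustrative only; real formant tracking uses higher orders, pre-emphasis, and windowing:

```python
import cmath
import math

def lpc(x, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns coefficients a[1..order] with x[n] ~ sum_k a[k] * x[n-k]."""
    n = len(x)
    r = [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(order + 1)]
    a = [0.0] * (order + 1)
    err = r[0]
    for k in range(1, order + 1):
        lam = (r[k] - sum(a[j] * r[k - j] for j in range(1, k))) / err
        new_a = a[:]
        new_a[k] = lam
        for j in range(1, k):
            new_a[j] = a[j] - lam * a[k - j]
        a = new_a
        err *= (1.0 - lam * lam)
    return a[1:]

def resonance_from_lpc2(coeffs, fs):
    """Resonance frequency (Hz) from the complex-pole angle of an
    order-2 all-pole filter 1 / (1 - a1*z^-1 - a2*z^-2)."""
    a1, a2 = coeffs
    disc = cmath.sqrt(a1 * a1 + 4.0 * a2)  # complex for a resonant pole pair
    pole = (a1 + disc) / 2.0
    return abs(cmath.phase(pole)) * fs / (2.0 * math.pi)

# Synthetic single-resonance signal: impulse response of a two-pole
# resonator with pole radius 0.98 at 500 Hz (fs = 10 kHz).
fs, f0, radius = 10000, 500.0, 0.98
theta = 2.0 * math.pi * f0 / fs
a1_true, a2_true = 2.0 * radius * math.cos(theta), -radius * radius
x = [1.0, a1_true]
for _ in range(1000):
    x.append(a1_true * x[-1] + a2_true * x[-2])

est = lpc(x, 2)
print(round(resonance_from_lpc2(est, fs)))  # recovers ~500 Hz
```

Only the quadratic (order-2) pole-solving case is handled here so the sketch stays dependency-free; a practical implementation would solve for the roots of a higher-order LPC polynomial and keep poles with formant-like bandwidths.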

  17. Dynamic Spectral Structure Specifies Vowels for Adults and Children

    PubMed Central

    Nittrouer, Susan; Lowenstein, Joanna H.

    2014-01-01

    The dynamic specification account of vowel recognition suggests that formant movement between vowel targets and consonant margins is used by listeners to recognize vowels. This study tested that account by measuring the contributions to vowel recognition of dynamic (i.e., time-varying) spectral structure and of coarticulatory effects on stationary structure. Adults and children (four- and seven-year-olds) were tested with three kinds of consonant-vowel-consonant syllables: (1) unprocessed; (2) sine waves that preserved both stationary coarticulated and dynamic spectral structure; and (3) vocoded signals that primarily preserved the stationary, but not the dynamic, structure. Sections of two lengths were removed from syllable middles: (1) half the vocalic portion; and (2) all but the first and last three pitch periods. Adults performed accurately with unprocessed and sine-wave signals, as long as half the syllable remained; their recognition was poorer for vocoded signals, but above chance. Seven-year-olds performed more poorly than adults with both sorts of processed signals, but disproportionately worse with vocoded than sine-wave signals. Most four-year-olds were unable to recognize vowels at all with vocoded signals. The conclusions were that both dynamic and stationary coarticulated structures support vowel recognition for adults, but children attend to dynamic spectral structure more strongly because early phonological organization favors whole words. PMID:25536845

  18. Audiovisual Perception of Congruent and Incongruent Dutch Front Vowels

    ERIC Educational Resources Information Center

    Valkenier, Bea; Duyne, Jurriaan Y.; Andringa, Tjeerd C.; Baskent, Deniz

    2012-01-01

    Purpose: Auditory perception of vowels in background noise is enhanced when combined with visually perceived speech features. The objective of this study was to investigate whether the influence of visual cues on vowel perception extends to incongruent vowels, in a manner similar to the McGurk effect observed with consonants. Method:…

  19. The influence of different native language systems on vowel discrimination and identification

    NASA Astrophysics Data System (ADS)

    Kewley-Port, Diane; Bohn, Ocke-Schwen; Nishi, Kanae

    2005-04-01

    The ability to identify the vowel sounds of a language reliably depends on the ability to discriminate between vowels at a more sensory level. This study examined how the complexity of the vowel systems of three native languages (L1) influenced listeners' perception of American English (AE) vowels. AE has a fairly complex vowel system with 11 monophthongs. In contrast, Japanese has only 5 spectrally different vowels, while Swedish has 9 and Danish has 12. Six listeners from each L1, with less than 4 months of exposure to English-speaking environments, participated. Their performance in two tasks was compared to that of 6 AE listeners. As expected, there were large differences in a linguistic identification task using 4 confusable AE low vowels. Japanese listeners performed quite poorly compared to listeners with more complex L1 vowel systems. Thresholds for formant discrimination for the 3 groups were very similar to those of native AE listeners. Thus it appears that sensory abilities for discriminating vowels are only slightly affected by native vowel systems, and that vowel confusions occur at a more central, linguistic level. [Work supported by funding from NIHDCD-02229 and the American-Scandinavian Foundation.]

  20. A narrow band pattern-matching model of vowel perception

    NASA Astrophysics Data System (ADS)

    Hillenbrand, James M.; Houde, Robert A.

    2003-02-01

    The purpose of this paper is to propose and evaluate a new model of vowel perception which assumes that vowel identity is recognized by a template-matching process involving the comparison of narrow band input spectra with a set of smoothed spectral-shape templates that are learned through ordinary exposure to speech. In the present simulation of this process, the input spectra are computed over a sufficiently long window to resolve individual harmonics of voiced speech. Prior to template creation and pattern matching, the narrow band spectra are amplitude equalized by a spectrum-level normalization process, and the information-bearing spectral peaks are enhanced by a "flooring" procedure that zeroes out spectral values below a threshold function consisting of a center-weighted running average of spectral amplitudes. Templates for each vowel category are created simply by averaging the narrow band spectra of like vowels spoken by a panel of talkers. In the present implementation, separate templates are used for men, women, and children. The pattern matching is implemented with a simple city-block distance measure given by the sum of the channel-by-channel differences between the narrow band input spectrum (level-equalized and floored) and each vowel template. Spectral movement is taken into account by computing the distance measure at several points throughout the course of the vowel. The input spectrum is assigned to the vowel template that results in the smallest difference accumulated over the sequence of spectral slices. The model was evaluated using a large database consisting of 12 vowels in /hVd/ context spoken by 45 men, 48 women, and 46 children. The narrow band model classified vowels in this database with a degree of accuracy (91.4%) approaching that of human listeners.
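
The city-block matching step described here is straightforward to state in code. This is a minimal sketch with hypothetical 4-channel spectra, not the model's actual channel count or learned templates:

```python
def city_block_distance(spectrum, template):
    """Sum of channel-by-channel absolute differences between a
    narrow band input spectrum and a smoothed template."""
    return sum(abs(s - t) for s, t in zip(spectrum, template))

def classify(slices, templates):
    """Assign a sequence of spectral slices to the vowel whose template
    gives the smallest distance accumulated over all slices."""
    def total(tpl):
        return sum(city_block_distance(sl, tpl) for sl in slices)
    return min(templates, key=lambda v: total(templates[v]))

# Hypothetical 4-channel spectra (levels after equalization/flooring);
# the real model uses many more channels and per-talker-group templates.
templates = {"i": [10, 2, 30, 5], "a": [25, 20, 5, 2]}
slices = [[11, 3, 28, 4], [9, 1, 31, 6]]  # two time slices of one token
print(classify(slices, templates))  # i
```

Accumulating the distance over several slices is what lets the model fold spectral movement into an otherwise static template match.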

  1. The Role of Consonant/Vowel Organization in Perceptual Discrimination

    ERIC Educational Resources Information Center

    Chetail, Fabienne; Drabs, Virginie; Content, Alain

    2014-01-01

    According to a recent hypothesis, the CV pattern (i.e., the arrangement of consonant and vowel letters) constrains the mental representation of letter strings, with each vowel or vowel cluster being the core of a unit. Six experiments with the same/different task were conducted to test whether this structure is extracted prelexically. In the…

  2. Effect of the loss of auditory feedback on segmental parameters of vowels of postlingually deafened speakers.

    PubMed

    Schenk, Barbara S; Baumgartner, Wolf Dieter; Hamzavi, Jafar Sasan

    2003-12-01

    The most obvious and best documented changes in the speech of postlingually deafened speakers are in rate, fundamental frequency, and volume (energy). These changes are due to the lack of auditory feedback. But auditory feedback affects more than the suprasegmental parameters of speech. The aim of this study was to determine the change at the segmental level of speech in terms of vowel formants. Twenty-three postlingually deafened and 18 normally hearing speakers were recorded reading a German text. The frequencies of the first and second formants and the vowel spaces of selected vowels in word-in-context condition were compared. All first formant frequencies (F1) of the postlingually deafened speakers were significantly different from those of the normally hearing speakers. The values of F1 were higher for the vowels /e/ (418±61 Hz compared with 359±52 Hz, P=0.006) and /o/ (459±58 Hz compared with 390±45 Hz, P=0.0003) and lower for /a/ (765±115 Hz compared with 851±146 Hz, P=0.038). The second formant frequency (F2) showed a significant increase only for the vowel /e/ (2016±347 Hz compared with 2279±250 Hz, P=0.012). The postlingually deafened speakers were divided into two subgroups according to duration of deafness (shorter/longer than 10 years). There was no significant difference in formant changes between the two groups. Our report demonstrates that auditory feedback also affects segmental features of the speech of postlingually deafened people.
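
Vowel space comparisons of this kind are often quantified as the area of the polygon spanned by the vowels in the F1-F2 plane, computed with the shoelace formula. The corner-vowel values below are hypothetical, not the study's data:

```python
def vowel_space_area(points):
    """Area of the polygon spanned by corner vowels in the F1-F2 plane,
    via the shoelace formula; points are (F1, F2) pairs in polygon order."""
    total = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Hypothetical /i a u/ corner-vowel formants (F1, F2) in Hz.
corners = [(300, 2300), (800, 1200), (350, 800)]
print(vowel_space_area(corners))  # 347500.0 (Hz^2)
```

The systematic F1 shifts reported above would move the polygon's corners and hence shrink or distort this area, which is why vowel space measures are a common summary of articulatory range.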

  3. Fundamental frequency and perturbation measures of sustained vowels in Malaysian Malay children between 7 and 12 years old.

    PubMed

    Ting, Hua-Nong; Chia, See-Yan; Manap, Hany Hazfiza; Ho, Ai-Hui; Tiu, Kian-Yean; Abdul Hamid, Badrulzaman

    2012-07-01

    This study investigated the fundamental frequency (F0) and perturbation measures of sustained vowels in 360 native Malaysian Malay children aged between 7 and 12 years using acoustic analysis. Praat software (Boersma and Weenink, University of Amsterdam, The Netherlands) was used to analyze the F0 and perturbation measures of the sustained vowels. Statistical analyses were conducted to determine significant differences in F0 and perturbation measures across the vowels, sex, and age groups. The mean F0 of Malaysian Malay male and female children was 240±34.88 and 254.48±23.35 Hz, respectively. The jitter (Jitt), relative average perturbation (RAP), five-point period perturbation quotient (PPQ5), shimmer (Shim), and 11-point amplitude perturbation quotient (APQ11) of Malaysian male children were 0.43±0.26%, 0.25±0.16%, 0.26±0.15%, 2.48±1.61%, and 1.75±1.04%, respectively. For female children, the Jitt, RAP, PPQ5, Shim, and APQ11 were 0.42±0.22%, 0.25±0.14%, 0.25±0.13%, 2.47±1.53%, and 1.75±1.10%, respectively. No significant differences in F0 were found across the Malay vowels for either males or females. Malay females had significantly higher F0 than Malay males at the ages of 8, 10, and 12 years. Malaysian Malay children showed a nonsystematic decrement in F0 across the age groups, and significant differences in F0 were found across the age groups. Significant differences in perturbation measures were observed across the vowels in certain age groups of Malay males and females. Generally, no significant differences in perturbation measures between the sexes were observed in any age group or vowel, and no significant differences in any perturbation measure across the age groups were found in either Malaysian Malay male or female children. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
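
The perturbation measures reported here follow standard definitions: local jitter is the mean absolute difference between consecutive pitch periods relative to the mean period, and RAP measures each period's deviation from a 3-point running average. A sketch under those definitions, with an invented period sequence for illustration:

```python
def jitter_local(periods):
    """Local jitter (%): mean absolute difference between consecutive
    pitch periods, relative to the mean period."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    mean_t = sum(periods) / len(periods)
    return 100.0 * (sum(diffs) / len(diffs)) / mean_t

def rap(periods):
    """Relative average perturbation (%): mean deviation of each period
    from its 3-point running average, relative to the mean period."""
    devs = [abs(periods[i] - (periods[i - 1] + periods[i] + periods[i + 1]) / 3.0)
            for i in range(1, len(periods) - 1)]
    mean_t = sum(periods) / len(periods)
    return 100.0 * (sum(devs) / len(devs)) / mean_t

# Invented pitch-period sequence (ms) from a sustained vowel.
periods = [4.00, 4.02, 3.98, 4.01, 3.99, 4.00]
print(round(jitter_local(periods), 2), round(rap(periods), 2))  # about 0.6 and 0.44
```

Shimmer and the amplitude quotients (APQ11) apply the same formulas to peak amplitudes rather than periods, and PPQ5 extends RAP's running average from 3 to 5 points.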

  4. Effect of Vowel Context on the Recognition of Initial Consonants in Kannada.

    PubMed

    Kalaiah, Mohan Kumar; Bhat, Jayashree S

    2017-09-01

    The present study was carried out to investigate the effect of vowel context on the recognition of Kannada consonants in quiet by young adults. A total of 17 young adults with normal hearing in both ears participated in the study. The stimuli included consonant-vowel syllables spoken by 12 native speakers of Kannada. The consonant recognition task was carried out as a closed-set, fourteen-alternative forced-choice procedure. The present study showed an effect of vowel context on the perception of consonants. The maximum consonant recognition score was obtained in the /o/ vowel context, followed by the /a/ and /u/ vowel contexts, and then the /e/ context. The poorest consonant recognition score was obtained in the /i/ vowel context. Vowel context has an effect on the recognition of Kannada consonants, and this vowel effect was unique to Kannada consonants.

  5. International Space Station USOS Crew Quarters Ventilation and Acoustic Design Implementation

    NASA Technical Reports Server (NTRS)

    Broyan, James Lee, Jr.

    2009-01-01

    The International Space Station (ISS) United States Operational Segment (USOS) has four permanent rack-sized ISS Crew Quarters (CQ) providing private crewmember space. The CQ uses Node 2 cabin air for ventilation/thermal cooling, as opposed to conditioned ducted air from the ISS Temperature Humidity Control System or ISS fluid cooling loop connections. Consequently, the CQ can only increase the air flow rate to reduce the temperature delta between the cabin and the CQ interior. However, increasing airflow causes increased acoustic noise, so efficient airflow distribution is an important design parameter. The CQ utilizes a two-fan push-pull configuration to ensure fresh air at the crewmember's head position and reduce acoustic exposure. The CQ interior needs to be below Noise Curve 40 (NC-40). The CQ ventilation ducts are open to the significantly louder Node 2 cabin aisle way, which required significant acoustic mitigation controls. The design implementation of the CQ ventilation system and acoustic mitigation are closely inter-related and require consideration of crew comfort balanced with use of interior habitable volume, accommodation of fan failures, and possible crew uses that impact ventilation and acoustic performance. This paper illustrates the types of model analysis, assumptions, vehicle interactions, and trade-offs required for CQ ventilation and acoustics. Additionally, on-orbit ventilation system performance and initial crew feedback are presented. This approach is applicable to any private enclosed space that the crew will occupy.

  6. The first radial-mode Lorentzian Landau damping of dust acoustic space-charge waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Myoung-Jae; Jung, Young-Dae, E-mail: ydjung@hanyang.ac.kr; Department of Applied Physics and Department of Bionanotechnology, Hanyang University, Ansan, Kyunggi-Do 15588

    2016-05-15

    The dispersion properties and the first radial-mode Lorentzian Landau damping of a dust acoustic space-charge wave propagating in a cylindrical waveguide dusty plasma containing nonthermal electrons and ions are investigated by employing normal mode analysis and the method of separation of variables. It is found that the frequency of the dust acoustic space-charge wave increases with both the wave number and the radius of the cylindrical plasma. However, the nonthermal property of the Lorentzian plasma is found to suppress the wave frequency of the dust acoustic space-charge wave. The Landau damping rate of the dust acoustic space-charge wave is derived in a cylindrical waveguide dusty plasma. The damping of the space-charge wave is found to be enhanced as the radius of the cylindrical plasma and the nonthermal property increase. The maximum Lorentzian Landau damping rate is also found in a cylindrical waveguide dusty plasma. The variation of the wave frequency and the Landau damping rate due to the nonthermal character and geometric effects is also discussed.

  7. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex

    PubMed Central

    Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are composed of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198

  8. Acoustic Characteristics in Epiglottic Cyst.

    PubMed

    Lee, YeonWoo; Kim, GeunHyo; Wang, SooGeun; Jang, JeonYeob; Cha, Wonjae; Choi, HongSik; Kim, HyangHee

    2018-05-03

    The purpose of this study was to analyze the acoustic characteristics associated with deformation of the vocal tract due to a large epiglottic cyst, and to confirm the relation between the anatomical change and the resonant function of the vocal tract. Eight men with epiglottic cysts were enrolled in this study. The jitter, shimmer, noise-to-harmonic ratio, and first two formants were analyzed for the vowels /a:/, /e:/, /i:/, /o:/, and /u:/. These values were analyzed before and after laryngeal microsurgery. The F1 value of /a:/ was significantly raised after surgery. No significant differences were found in the formant frequencies of the other vowels or in jitter, shimmer, or noise-to-harmonic ratio. The results of this study could be used to analyze changes in the resonance of the vocal tract due to epiglottic cysts. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  9. Greek perception and production of an English vowel contrast: A preliminary study

    NASA Astrophysics Data System (ADS)

    Podlipský, Václav J.

    2005-04-01

    This study focused on language-independent principles functioning in the acquisition of second language (L2) contrasts. Specifically, it tested Bohn's Desensitization Hypothesis [in Speech Perception and Linguistic Experience: Issues in Cross-Language Research, edited by W. Strange (York Press, Baltimore, 1995)], which predicted that Greek speakers of English as an L2 would base their perceptual identification of English /i/ and /I/ on durational differences. Synthetic vowels differing orthogonally in duration and spectrum between the /i/ and /I/ endpoints served as stimuli for a forced-choice identification test. To assess L2 proficiency and to evaluate the possibility of cross-language category assimilation, productions of English /i/, /I/, and /ɛ/ and of Greek /i/ and /e/ were elicited and analyzed acoustically. The L2 utterances were also rated for degree of foreign accent. Two native speakers of Modern Greek with low experience in English and two with intermediate experience participated. Six native English (NE) listeners and 6 NE speakers tested in an earlier study constituted the control groups. Heterogeneous perceptual behavior was observed for the L2 subjects. It is concluded that until acquisition in completely naturalistic settings is tested, possible interference of formally induced meta-linguistic differentiation between a "short" and a "long" vowel cannot be eliminated.

  10. Articulatory-to-Acoustic Relations in Talkers with Dysarthria: A First Analysis

    ERIC Educational Resources Information Center

    Mefferd, Antje

    2015-01-01

    Purpose: The primary purpose of this study was to determine the strength of interspeaker and intraspeaker articulatory-to-acoustic relations of vowel contrast produced by talkers with dysarthria and controls. Methods: Six talkers with amyotrophic lateral sclerosis (ALS), six talkers with Parkinson's disease (PD), and 12 controls repeated a…

  11. Post interaural neural net-based vowel recognition

    NASA Astrophysics Data System (ADS)

    Jouny, Ismail I.

    2001-10-01

    Interaural head-related transfer functions are used to process speech signatures prior to neural-net-based recognition. Data representing the head-related transfer function of a dummy head were collected at MIT and made available on the Internet. These data are used to pre-process vowel signatures to mimic the effects of the human ear on speech perception. Signatures representing various vowels of the English language are then presented to a multi-layer perceptron, trained using the back-propagation algorithm, for recognition purposes. The focus of this paper is to assess the effects of the human interaural system on vowel recognition performance, particularly when using a classification system that mimics the human brain, such as a neural net.

  12. Acoustic emissions verification testing of International Space Station experiment racks at the NASA Glenn Research Center Acoustical Testing Laboratory

    NASA Astrophysics Data System (ADS)

    Akers, James C.; Passe, Paul J.; Cooper, Beth A.

    2005-09-01

    The Acoustical Testing Laboratory (ATL) at the NASA John H. Glenn Research Center (GRC) in Cleveland, OH, provides acoustic emission testing and noise control engineering services for a variety of specialized customers, particularly developers of equipment and science experiments manifested for NASA's manned space missions. The ATL's primary customer has been the Fluids and Combustion Facility (FCF), a multirack microgravity research facility being developed at GRC for the U.S. Laboratory Module of the International Space Station (ISS). Since opening in September 2000, the ATL has conducted acoustic emission testing of components, subassemblies, and partially populated FCF engineering model racks. The culmination of this effort has been the acoustic emission verification tests on the FCF Combustion Integrated Rack (CIR) and Fluids Integrated Rack (FIR), employing a procedure that incorporates ISO 11201 (``Acoustics-Noise emitted by machinery and equipment-Measurement of emission sound pressure levels at a work station and at other specified positions-Engineering method in an essentially free field over a reflecting plane''). This paper will provide an overview of the test methodology, software, and hardware developed to perform the acoustic emission verification tests on the CIR and FIR flight racks, as well as lessons learned from these tests.

  13. Mandarin compound vowels produced by prelingually deafened children with cochlear implants.

    PubMed

    Yang, Jing; Xu, Li

    2017-06-01

    Compound vowels, including diphthongs and triphthongs, have complex, dynamic spectral features. The production of compound vowels by children with cochlear implants (CIs) has not been studied previously. The present study examined the dynamic features of compound vowels in native Mandarin-speaking children with CIs. Fourteen prelingually deafened children with CIs (aged 2.9-8.3 years) and 14 age-matched, normal-hearing (NH) children produced monosyllables containing six Mandarin compound vowels (i.e., /aɪ/, /aʊ/, /uo/, /iɛ/, /iaʊ/, /ioʊ/). The frequency values of the first two formants were measured at nine equidistant time points over the course of the vowel duration. All formant frequency values were normalized and then used to calculate vowel trajectory length and overall spectral rate of change. The results revealed that the CI children produced significantly longer durations than the NH children for all six compound vowels. The CI children's ability to produce formant movement for the compound vowels varied considerably. Some CI children produced relatively static formant trajectories for certain diphthongs, whereas others produced certain vowels with greater formant movement than did the NH children. As a group, the CI children roughly followed the NH children on the pattern of magnitude of formant movement, but they showed a slower rate of formant change than did the NH children. The findings suggested that prelingually deafened children with CIs, during the early stage of speech acquisition, had not established appropriate targets and articulatory coordination for compound vowel productions. This preliminary study may shed light on rehabilitation of prelingually deafened children with CIs. Copyright © 2017 Elsevier B.V. All rights reserved.
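    The two dynamic measures named above follow directly from the nine-point formant tracks. A minimal sketch (hypothetical helper, shown with a short unnormalized track in Hz for clarity, rather than the study's normalized values):

    ```python
    import math

    def trajectory_metrics(f1, f2, duration_s):
        """Vowel trajectory length (summed Euclidean steps through F1/F2
        space, sampled at equidistant time points) and overall spectral
        rate of change (trajectory length divided by vowel duration)."""
        steps = [math.hypot(b1 - a1, b2 - a2)
                 for a1, b1, a2, b2 in zip(f1, f1[1:], f2, f2[1:])]
        length = sum(steps)
        return length, length / duration_s

    # hypothetical 3-point track: F1 rises 200 Hz, F2 is flat, 200-ms vowel
    tl, roc = trajectory_metrics([500, 600, 700], [1000, 1000, 1000], 0.2)
    # tl -> 200.0 (Hz), roc -> 1000.0 (Hz/s)
    ```

    A flatter formant trajectory shortens the summed path, so a relatively static diphthong yields both a smaller trajectory length and a lower rate of change.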

  14. Effects of frequency shifts and visual gender information on vowel category judgments

    NASA Astrophysics Data System (ADS)

    Glidden, Catherine; Assmann, Peter F.

    2003-10-01

    Visual morphing techniques were used together with a high-quality vocoder to study the audiovisual contribution of talker gender to the identification of frequency-shifted vowels. A nine-step continuum ranging from ``bit'' to ``bet'' was constructed from natural recorded syllables spoken by an adult female talker. Upward and downward frequency shifts in spectral envelope (scale factors of 0.85 and 1.0) were applied in combination with shifts in fundamental frequency, F0 (scale factors of 0.5 and 1.0). Downward frequency shifts generally resulted in malelike voices whereas upward shifts were perceived as femalelike. Two separate nine-step visual continua from ``bit'' to ``bet'' were also constructed, one from a male face and the other from a female face, each producing the end-point words. Each step along the two visual continua was paired with the corresponding step on the acoustic continuum, creating natural audiovisual utterances. Category boundary shifts were found for both acoustic cues (F0 and formant frequency shifts) and visual cues (visual gender). The visual gender effect was larger when acoustic and visual information were matched appropriately. These results suggest that visual information provided by the speech signal plays an important supplemental role in talker normalization.

  15. Intrinsic fundamental frequency of vowels is moderated by regional dialect

    PubMed Central

    Jacewicz, Ewa; Fox, Robert Allen

    2015-01-01

    There has been a long-standing debate whether the intrinsic fundamental frequency (IF0) of vowels is an automatic consequence of articulation or whether it is independently controlled by speakers to perceptually enhance vowel contrasts along the height dimension. This paper provides evidence from regional variation in American English that IF0 difference between high and low vowels is, in part, controlled and varies across dialects. The sources of this F0 control are socio-cultural and cannot be attributed to differences in the vowel inventory size. The socially motivated enhancement was found only in prosodically prominent contexts. PMID:26520352

  16. Importance of envelope modulations during consonants and vowels in segmentally interrupted sentences

    PubMed Central

    Fogerty, Daniel

    2014-01-01

    The present study investigated the importance of overall segment amplitude and intrinsic segment amplitude modulation of consonants and vowels to sentence intelligibility. Sentences were processed according to three conditions that replaced consonant or vowel segments with noise matched to the long-term average speech spectrum. Segments were replaced with (1) low-level noise that distorted the overall sentence envelope, (2) segment-level noise that restored the overall syllabic amplitude modulation of the sentence, and (3) segment-modulated noise that further restored faster temporal envelope modulations during the vowel. Results from the first experiment demonstrated an incremental benefit with increasing resolution of the vowel temporal envelope. However, amplitude modulations of replaced consonant segments had a comparatively minimal effect on overall sentence intelligibility scores. A second experiment selectively noise-masked preserved vowel segments in order to equate overall performance of consonant-replaced sentences to that of the vowel-replaced sentences. Results demonstrated no significant effect of restoring consonant modulations during the interrupting noise when existing vowel cues were degraded. A third experiment demonstrated greater perceived sentence continuity with the preservation or addition of vowel envelope modulations. Overall, results support previous investigations demonstrating the importance of vowel envelope modulations to the intelligibility of interrupted sentences. PMID:24606291
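    Condition (3), segment-modulated noise, can be approximated as follows. This is a hedged sketch, not the study's procedure: the noise here is white rather than matched to the long-term average speech spectrum, and the envelope extractor (a moving RMS with an assumed window size) is only one of several reasonable choices.

    ```python
    import numpy as np

    def replace_with_modulated_noise(signal, start, stop, win=50, rng=None):
        """Replace signal[start:stop] with noise carrying the segment's
        own amplitude envelope (rough stand-in for condition 3 above).
        `win` is an assumed envelope-smoothing window in samples; spectral
        matching to the long-term speech spectrum is omitted."""
        rng = np.random.default_rng(0) if rng is None else rng
        seg = np.asarray(signal[start:stop], dtype=float)
        # crude amplitude envelope: moving RMS of the original segment
        pad = np.pad(seg**2, (win // 2, win - win // 2 - 1), mode="edge")
        env = np.sqrt(np.convolve(pad, np.ones(win) / win, mode="valid"))
        # unit-RMS white noise, then impose the segment's envelope
        noise = rng.standard_normal(seg.size)
        noise *= env / (np.sqrt(np.mean(noise**2)) + 1e-12)
        out = np.array(signal, dtype=float, copy=True)
        out[start:stop] = noise
        return out
    ```

    Dropping the envelope step (scaling the noise by a single segment-level RMS instead) would correspond to condition (2), where only the overall syllabic amplitude modulation of the sentence is preserved.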

  17. The effects of indexical and phonetic variation on vowel perception in typically developing 9- to 12-year-old children

    PubMed Central

    Jacewicz, Ewa; Fox, Robert Allen

    2015-01-01

    Purpose To investigate how linguistic knowledge interacts with indexical knowledge in older children's perception under demanding listening conditions created by extensive talker variability. Method Twenty-five 9- to 12-year-old children, 12 from North Carolina (NC) and 13 from Wisconsin (WI), identified 12 vowels in isolated hVd-words produced by 120 talkers representing the two dialects (NC and WI), both genders, and three age groups (generations) of residents from the same geographic locations as the listeners. Results Identification rates were higher for responses to talkers from the same dialect as the listeners and for female speech. Listeners were sensitive to systematic positional variations in vowels and their dynamic structure (formant movement) associated with generational differences in vowel pronunciation resulting from sound change in a speech community. The overall identification rate was 71.7%, which is 8.5 percentage points lower than for the adults responding to the same stimuli in Jacewicz and Fox (2012). Conclusions Typically developing older children are successful in dealing with both phonetic and indexical variation related to talker dialect, gender, and generation. They are less consistent than the adults, most likely due to their less efficient encoding of acoustic-phonetic information in the speech of multiple talkers and relative inexperience with indexical variation. PMID:24686520

  18. Space vehicle acoustics prediction improvement for payloads. [space shuttle

    NASA Technical Reports Server (NTRS)

    Dandridge, R. E.

    1979-01-01

    The modal analysis method was extensively modified for the prediction of space vehicle noise reduction in the shuttle payload enclosure, and this program was adapted to the IBM 360 computer. The predicted noise reduction levels for two test cases were compared with experimental results to determine the validity of the analytical model for predicting space vehicle payload noise environments in the 10 Hz one-third octave band regime. The prediction approach for the two test cases generally gave reasonable magnitudes and trends when compared with the measured noise reduction spectra. The discrepancies in the predictions could be corrected primarily by improved modeling of the vehicle structural walls and of the enclosed acoustic space to obtain a more accurate assessment of normal modes. Techniques for improving and expanding the noise prediction for a payload environment are also suggested.

  19. On the effects of L2 perception and of individual differences in L1 production on L2 pronunciation

    PubMed Central

    Kartushina, Natalia; Frauenfelder, Ulrich H.

    2014-01-01

    The speech of late second language (L2) learners is generally marked by an accent. The dominant theoretical perspective attributes accents to deficient L2 perception arising from a transfer of L1 phonology, which is thought to influence L2 perception and production. In this study, we evaluate the explanatory role of L2 perception in L2 production and explore alternative explanations arising from the L1 phonological system, such as, for example, the role of L1 production. Specifically, we examine the role of an individual’s L1 productions in the production of L2 vowel contrasts. Fourteen Spanish adolescents studying French at school were assessed on their perception and production of the mid-close/mid-open contrasts, /ø-œ/ and /e-ε/, which are acoustically distinct from Spanish sounds and similar to them, respectively. The participants’ native productions were explored to assess (1) the variability in the production of native vowels (i.e., the compactness of vowel categories in F1/F2 acoustic space), and (2) the position of the vowels in the acoustic space. The results revealed that although poorly perceived contrasts were generally produced poorly, there was no correlation between individual performance in perception and production, and no effect of L2 perception on L2 production in mixed-effects regression analyses. This result is consistent with a growing body of psycholinguistic and neuroimaging research that suggests partial dissociations between L2 perception and production. In contrast, individual differences in the compactness and position of native vowels predicted L2 production accuracy. These results point to the existence of a surface transfer of individual L1 phonetic realizations to L2 space and demonstrate that pre-existing features of the native space in production partly determine how new sounds can be accommodated in that space. PMID:25414678

  20. Acoustical levitation for space processing. [weightless molten material manipulation

    NASA Technical Reports Server (NTRS)

    Wang, T. G.; Saffren, M. M.; Elleman, D. D.

    1974-01-01

    It is pointed out that many space-manufacturing processes will require the manipulation of weightless molten material within a container in such a way that the material does not touch the container wall. A description is given of an acoustical method which can be used for the positioning and shaping of any molten material including nonconductors such as glasses. The new approach makes use of an acoustical standing wave which is excited within an enclosure or resonator.

  1. Structural Generalizations over Consonants and Vowels in 11-Month-Old Infants

    ERIC Educational Resources Information Center

    Pons, Ferran; Toro, Juan M.

    2010-01-01

    Recent research has suggested consonants and vowels serve different roles during language processing. While statistical computations are preferentially made over consonants but not over vowels, simple structural generalizations are easily made over vowels but not over consonants. Nevertheless, the origins of this asymmetry are unknown. Here we…

  2. Stress Effects in Vowel Perception as a Function of Language-Specific Vocabulary Patterns.

    PubMed

    Warner, Natasha; Cutler, Anne

    2017-01-01

    Evidence from spoken word recognition suggests that for English listeners, distinguishing full versus reduced vowels is important, but discerning stress differences involving the same full vowel (as in mu- from music or museum) is not. In Dutch, in contrast, the latter distinction is important. This difference arises from the relative frequency of unstressed full vowels in the two vocabularies. The goal of this paper is to determine how this difference in the lexicon influences the perception of stressed versus unstressed vowels. All possible sequences of two segments (diphones) in Dutch and in English were presented to native listeners in gated fragments. We recorded identification performance over time throughout the speech signal. The data were here analysed specifically for patterns in perception of stressed versus unstressed vowels. The data reveal significantly larger stress effects (whereby unstressed vowels are harder to identify than stressed vowels) in English than in Dutch. Both language-specific and shared patterns appear regarding which vowels show stress effects. We explain the larger stress effect in English as reflecting the processing demands caused by the difference in use of unstressed vowels in the lexicon. The larger stress effect in English is due to relative inexperience with processing unstressed full vowels. © 2016 S. Karger AG, Basel.

  3. Acoustic Cues to Perception of Word Stress by English, Mandarin, and Russian Speakers

    ERIC Educational Resources Information Center

    Chrabaszcz, Anna; Winn, Matthew; Lin, Candise Y.; Idsardi, William J.

    2014-01-01

    Purpose: This study investigated how listeners' native language affects their weighting of acoustic cues (such as vowel quality, pitch, duration, and intensity) in the perception of contrastive word stress. Method: Native speakers (N = 45) of typologically diverse languages (English, Russian, and Mandarin) performed a stress identification…

  4. Mechanisms of Vowel Variation in African American English

    ERIC Educational Resources Information Center

    Holt, Yolanda Feimster

    2018-01-01

    Purpose: This research explored mechanisms of vowel variation in African American English by comparing 2 geographically distant groups of African American and White American English speakers for participation in the African American Shift and the Southern Vowel Shift. Method: Thirty-two male (African American: n = 16, White American controls: n =…

  5. Children Use Vowels to Help Them Spell Consonants

    ERIC Educational Resources Information Center

    Hayes, Heather; Treiman, Rebecca; Kessler, Brett

    2006-01-01

    English spelling is highly inconsistent in terms of simple sound-to-spelling correspondences but is more consistent when context is taken into account. For example, the choice between "ch" and "tch" is determined by the preceding vowel ("coach," "roach" vs. "catch," "hatch"). We investigated children's sensitivity to vowel context when spelling…

  6. Congruent and Incongruent Semantic Context Influence Vowel Recognition

    ERIC Educational Resources Information Center

    Wotton, J. M.; Elvebak, R. L.; Moua, L. C.; Heggem, N. M.; Nelson, C. A.; Kirk, K. M.

    2011-01-01

    The influence of sentence context on the recognition of naturally spoken vowels degraded by reverberation and Gaussian noise was investigated. Target words were paired to have similar consonant sounds but different vowels (e.g., map/mop) and were embedded early in sentences which provided three types of semantic context. Fifty-eight…

  7. Bite Block Vowel Production in Apraxia of Speech

    ERIC Educational Resources Information Center

    Jacks, Adam

    2008-01-01

    Purpose: This study explored vowel production and adaptation to articulatory constraints in adults with acquired apraxia of speech (AOS) plus aphasia. Method: Five adults with acquired AOS plus aphasia and 5 healthy control participants produced the vowels [iota], [epsilon], and [ash] in four word-length conditions in unconstrained and bite block…

  8. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English.

    PubMed

    Banzina, Elina; Dilley, Laura C; Hewitt, Lynne E

    2016-08-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated the production of SS and UU syllables by Russian learners of English. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found, likely due to a transfer of native phonological features. Next, a cross-modal phonological priming technique combined with a lexical decision task assessed the effect of inaccurate SS and UU syllable productions on native American English listeners' speech processing. Inaccurate UU vowels led to significant inhibition of lexical access, while reduced SS vowels revealed less interference. The results have implications for understanding the role of SS and UU syllables in word recognition and for English pronunciation instruction.

  9. Early integration of vowel and pitch processing: a mismatch negativity study.

    PubMed

    Lidji, Pascale; Jolicoeur, Pierre; Kolinsky, Régine; Moreau, Patricia; Connolly, John F; Peretz, Isabelle

    2010-04-01

    Several studies have explored the processing specificity of music and speech, but only a few have addressed the processing autonomy of their fundamental components: pitch and phonemes. Here, we examined the additivity of the mismatch negativity (MMN) indexing the early interactions between vowels and pitch when sung. Event-related potentials (ERPs) were recorded while participants heard frequent sung vowels and rare stimuli deviating in pitch only, in vowel only, or in both pitch and vowel. The task was to watch a silent movie while ignoring the sounds. All three types of deviants elicited both an MMN and a P3a ERP component. The observed MMNs were of similar amplitude for the three types of deviants and the P3a was larger for double deviants. The MMNs to deviance in vowel and deviance in pitch were not additive. The underadditivity of the MMN responses suggests that vowel and pitch differences are processed by interacting neural networks. The results indicate that vowel and pitch are processed as integrated units, even at a pre-attentive level. Music-processing specificity thus rests on more complex dimensions of music and speech. 2009 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  10. Vowel Harmony Is a Basic Phonetic Rule of the Turkic Languages

    ERIC Educational Resources Information Center

    Shoibekova, Gaziza B.; Odanova, Sagira A.; Sultanova, Bibigul M.; Yermekova, Tynyshtyk N.

    2016-01-01

    The present study comprehensively analyzes vowel harmony as an important phonetic rule in Turkic languages. Recent changes in the vowel harmony potential of Turkic sounds caused by linguistic and extra-linguistic factors were described. Vowels in the Kazakh, Turkish, and Uzbek language were compared. The way this or that phoneme sounded in the…

  11. Discrimination of Phonemic Vowel Length by Japanese Infants

    ERIC Educational Resources Information Center

    Sato, Yutaka; Sogabe, Yuko; Mazuka, Reiko

    2010-01-01

    Japanese has a vowel duration contrast as one component of its language-specific phonemic repertory to distinguish word meanings. It is not clear, however, how a sensitivity to vowel duration can develop in a linguistic context. In the present study, using the visual habituation-dishabituation method, the authors evaluated infants' abilities to…

  12. The Spelling of Vowels Is Influenced by Australian and British English Dialect Differences

    ERIC Educational Resources Information Center

    Kemp, Nenagh

    2009-01-01

    Two experiments examined the influence of dialect on the spelling of vowel sounds. British and Australian children (6 to 8 years) and university students wrote words whose unstressed vowel sound is spelled i or e and pronounced /I/ or /schwa/. Participants often (mis)spelled these vowel sounds as they pronounced them. When vowels were pronounced…

  13. Perceptual, auditory and acoustic vocal analysis of speech and singing in choir conductors.

    PubMed

    Rehder, Maria Inês Beltrati Cornacchioni; Behlau, Mara

    2008-01-01

    This study examined the voices of choir conductors, aiming to evaluate their vocal quality based on the production of a sustained vowel during singing and speaking in order to observe auditory and acoustic differences. Participants were 100 choir conductors, with an equal distribution between genders. Participants were asked to produce the sustained vowel "é" using a singing and a speaking voice. Speech samples were analyzed based on auditory-perceptive and acoustic parameters. The auditory-perceptive analysis was carried out by two speech-language pathologists, specialists in this field of knowledge. The acoustic analysis was carried out with the support of the computer software Doctor Speech (Tiger Electronics, SRD, USA, version 4.0), using the Real Analysis module. The auditory-perceptive analysis of vocal quality indicated that most conductors have adapted voices, presenting more alterations in their speaking voice. The acoustic analysis indicated different values between genders and between the different production modalities. The fundamental frequency was higher in the singing voice, as were the values for the first formant; the second formant presented lower values in the singing voice, with statistically significant results only for women. Overall, the voice of choir conductors is adapted, presenting fewer deviations in the singing voice when compared to the speaking voice. Productions differ based on the voice modality, singing or speaking.

  14. We're Not in Kansas Anymore: The TOTO Strategy for Decoding Vowel Pairs

    ERIC Educational Resources Information Center

    Meese, Ruth Lyn

    2016-01-01

    Vowel teams such as vowel digraphs present a challenge to struggling readers. Some researchers assert that phonics generalizations such as the "two vowels go walking and the first one does the talking" rule do not hold often enough to be reliable for children. Others suggest that some vowel teams are highly regular and that children can…

  15. The Identification of High-pitched Sung Vowels in Sense and Nonsense Words by Professional Singers and Untrained Listeners.

    PubMed

    Deme, Andrea

    2017-03-01

    High-pitched sung vowels may be considered phonetically "underspecified" because of (i) the tuning of the F1 to the f0 accompanying pitch raising and (ii) the wide harmonic spacing of the voice source resulting in the undersampling of the vocal tract transfer function. Therefore, sung vowel intelligibility is expected to decrease as the f0 increases. Based on the literature of speech perception, it is often suggested that sung vowels are better perceived if uttered in consonantal (CVC) context than in isolation even at high f0. In the present study, we further investigate this question. We compare vowel identification in sense and nonsense CVC sequences and show that the positive effect of the context disappears if the number of legal choices in a perception test is similar in both conditions, meaning that any positive effect of the CVC context may only stem from the smaller number of possible responses, i.e., from higher probabilities. Additionally, it is also tested whether the training in production (i.e., singing training) may also lead to a perceptual advantage of the singers over nonsingers in the identification of high-pitched sung vowels. The results show no advantage of this kind. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  16. Typological Asymmetries in Round Vowel Harmony: Support from Artificial Grammar Learning

    PubMed Central

    Finley, Sara

    2012-01-01

    Providing evidence for the universal tendencies of patterns in the world’s languages can be difficult, as it is impossible to sample all possible languages, and linguistic samples are subject to interpretation. However, experimental techniques such as artificial grammar learning paradigms make it possible to uncover the psychological reality of claimed universal tendencies. This paper addresses learning of phonological patterns (systematic tendencies in the sounds in language). Specifically, I explore the role of phonetic grounding in learning round harmony, a phonological process in which words must contain either all round vowels ([o, u]) or all unround vowels ([i, e]). The phonetic precursors to round harmony are such that mid vowels ([o, e]), which receive the greatest perceptual benefit from harmony, are most likely to trigger harmony. High vowels ([i, u]), however, are cross-linguistically less likely to trigger round harmony. Adult participants were exposed to a miniature language that contained a round harmony pattern in which the harmony source triggers were either high vowels ([i, u]) (poor harmony source triggers) or mid vowels ([o, e]) (ideal harmony source triggers). Only participants who were exposed to the ideal mid vowel harmony source triggers were successfully able to generalize the harmony pattern to novel instances, suggesting that perception and phonetic naturalness play a role in learning. PMID:23264713
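    The harmony pattern itself is easy to state programmatically. A tiny illustrative check (hypothetical helper, not part of the study's materials) over the four-vowel inventory described above:

    ```python
    ROUND, UNROUND = set("ou"), set("ie")

    def obeys_round_harmony(word):
        """True if every vowel in `word` is round, or every vowel is
        unround, over the four-vowel inventory [i, e, o, u] above."""
        vowels = {c for c in word if c in ROUND | UNROUND}
        return vowels <= ROUND or vowels <= UNROUND

    # obeys_round_harmony("podu") -> True   (all round)
    # obeys_round_harmony("pedi") -> True   (all unround)
    # obeys_round_harmony("podi") -> False  (mixed rounding)
    ```

    The experimental manipulation concerns not this surface pattern but which vowels trigger it: languages (and, per these results, learners) favor mid-vowel triggers [o, e] over high-vowel triggers [i, u].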

  17. Acoustic levitation technique for containerless processing at high temperatures in space

    NASA Technical Reports Server (NTRS)

    Rey, Charles A.; Merkley, Dennis R.; Hammarlund, Gregory R.; Danley, Thomas J.

    1988-01-01

    High temperature processing of a small specimen without a container has been demonstrated in a set of experiments using an acoustic levitation furnace in the microgravity of space. This processing technique includes the positioning, heating, melting, cooling, and solidification of a material supported without physical contact with a container or other surface. The specimen is supported in a potential energy well, created by an acoustic field, which is sufficiently strong to position the specimen in the microgravity environment of space. This containerless processing apparatus has been successfully tested on the Space Shuttle during the STS-61A mission. In that experiment, three samples were successfully levitated and processed at temperatures from 600 to 1500 °C. Experiment data and results are presented.

  18. Differential processing of consonants and vowels in lexical access through reading.

    PubMed

    New, Boris; Araújo, Verónica; Nazzi, Thierry

    2008-12-01

    Do consonants and vowels have the same importance during reading? Recently, it has been proposed that consonants play a more important role than vowels for language acquisition and adult speech processing. This proposal has started receiving developmental support from studies showing that infants are better at processing specific consonantal than vocalic information while learning new words. This proposal also received support from adult speech processing. In our study, we directly investigated the relative contributions of consonants and vowels to lexical access while reading by using a visual masked-priming lexical decision task. Test items were presented following four different primes: identity (e.g., for the word joli, joli), unrelated (vabu), consonant-related (jalu), and vowel-related (vobi). Priming was found for the identity and consonant-related conditions, but not for the vowel-related condition. These results establish the privileged role of consonants during lexical access while reading.

  19. Speech recognition: Acoustic-phonetic knowledge acquisition and representation

    NASA Astrophysics Data System (ADS)

    Zue, Victor W.

    1988-09-01

    The long-term research goal is to develop and implement speaker-independent continuous speech recognition systems. It is believed that the proper utilization of speech-specific knowledge is essential for such advanced systems. This research is thus directed toward the acquisition, quantification, and representation of acoustic-phonetic and lexical knowledge, and the application of this knowledge to speech recognition algorithms. In addition, we are exploring new speech recognition alternatives based on artificial intelligence and connectionist techniques. We developed a statistical model for predicting the acoustic realization of stop consonants in various positions in the syllable template. A unification-based grammatical formalism was developed for incorporating this model into the lexical access algorithm. We provided an information-theoretic justification for the hierarchical structure of the syllable template. We analyzed segmental durations for vowels and fricatives in continuous speech. Based on contextual information, we developed durational models for vowels and fricatives that account for over 70 percent of the variance, using data from multiple, unknown speakers. We rigorously evaluated the ability of human spectrogram readers to identify stop consonants spoken by many talkers and in a variety of phonetic contexts. Incorporating the declarative knowledge used by the readers, we developed a knowledge-based system for stop identification. The system achieved performance comparable to that of the readers.

  20. Vibro-acoustics for Space Station applications

    NASA Technical Reports Server (NTRS)

    Vaicaitis, R.; Bofilios, D. A.

    1986-01-01

    An analytical procedure has been developed to study noise generation in double-wall and single-wall cylindrical shells due to mechanical point loads. The objective of this study is to develop theoretical procedures for parametric evaluation of noise generation and noise transmission for the habitability modules of the proposed Space Station. The solutions of the governing acoustic-structural equations are obtained utilizing modal decomposition. The numerical results include modal frequencies, deflection response spectral densities, and interior noise sound pressure levels.

  1. The contribution of waveform interactions to the perception of concurrent vowels.

    PubMed

    Assmann, P F; Summerfield, Q

    1994-01-01

    Models of the auditory and phonetic analysis of speech must account for the ability of listeners to extract information from speech when competing voices are present. When two synthetic vowels are presented simultaneously and monaurally, listeners can exploit cues provided by a difference in fundamental frequency (F0) between the vowels to help determine their phonemic identities. Three experiments examined the effects of stimulus duration on the perception of such "double vowels." Experiment 1 confirmed earlier findings that a difference in F0 provides a smaller advantage when the duration of the stimulus is brief (50 ms rather than 200 ms). With brief stimuli, there may be insufficient time for attentional mechanisms to switch from the "dominant" member of the pair to the "nondominant" vowel. Alternatively, brief segments may restrict the availability of cues that are distributed over the time course of a longer segment of a double vowel. In experiment 1, listeners did not perform better when the same 50-ms segment was presented four times in succession (with 100-ms silent intervals) rather than only once, suggesting that limits on attention switching do not underlie the duration effect. However, performance improved in some conditions when four successive 50-ms segments were extracted from the 200-ms double vowels and presented in sequence, again with 100-ms silent intervals. Similar improvements were observed in experiment 2 between performance with the first 50-ms segment and one or more of the other three segments when the segments were presented individually. Experiment 3 demonstrated that part of the improvement observed in experiments 1 and 2 could be attributed to waveform interactions that either reinforce or attenuate harmonics that lie near vowel formants. Such interactions were beneficial only when the difference in F0 was small (0.25-1 semitone). These results are compatible with the idea that listeners benefit from small differences in F0 by

  2. The acoustic and perceptual differences to the non-singer's singing voice before and after a singing vocal warm-up

    NASA Astrophysics Data System (ADS)

    DeRosa, Angela

    The present study analyzed the acoustic and perceptual differences in non-singer's singing voice before and after a vocal warm-up. Experiments were conducted with 12 females who had no singing experience and considered themselves to be non-singers. Participants were recorded performing 3 tasks: a musical scale stretching to their most comfortable high and low pitches, sustained productions of the vowels /a/ and /i/, and singing performance of the "Star Spangled Banner." Participants were recorded performing these three tasks before a vocal warm-up, after a vocal warm-up, and then again 2-3 weeks later after 2-3 weeks of practice. Acoustical analysis consisted of formant frequency analysis, singer's formant/singing power ratio analysis, maximum phonation frequency range analysis, and an analysis of jitter, noise to harmonic ratio (NHR), relative average perturbation (RAP), and voice turbulence index (VTI). A perceptual analysis was also conducted with 12 listeners rating comparison performances of before vs. after the vocal warm-up, before vs. after the second vocal warm-up, and after both vocal warm-ups. There were no significant findings for the formant frequency analysis of the vowel /a/, but there was significance for the 1st formant frequency analysis of the vowel /i/. Singer's formant analyzed via Singing Power Ratio analysis showed significance only for the vowel /i/. Maximum phonation frequency range analysis showed a significant increase after the vocal warm-ups. There were no significant findings for the acoustic measures of jitter, NHR, RAP, and VTI. Perceptual analysis showed a significant difference after a vocal warm-up. The results indicate that a singing vocal warm-up can have a significant positive influence on the singing voice of non-singers.

  3. Application of the acoustic voice quality index for objective measurement of dysphonia severity.

    PubMed

    Núñez-Batalla, Faustino; Díaz-Fresno, Estefanía; Álvarez-Fernández, Andrea; Muñoz Cordero, Gabriela; Llorente Pendás, José Luis

    Over the past several decades, many acoustic parameters have been studied as potential measures of dysphonia. However, current acoustic measures might not be sensitive measures of perceived voice quality. A meta-analysis that evaluated the relationship between perceived overall voice quality and several acoustic-phonetic correlates identified measures that do not rely on extraction of the fundamental period, such as measures derived from the cepstrum, and that can be used on sustained vowels as well as continuous speech samples. A specific and recently developed method to quantify the severity of overall dysphonia is the acoustic voice quality index (AVQI), a multivariate construct that combines multiple acoustic markers to yield a single number that correlates reasonably well with overall vocal quality. This research is based on one pool of voice recordings collected from two sets of subjects: 60 vocally normal and 58 voice-disordered participants. A sustained vowel and a sample of connected speech were recorded and analyzed to obtain the six parameters included in the AVQI using the program Praat. Statistical analysis was completed using SPSS for Windows, version 12.0. A significant difference in AVQI exists (t(95) = 9.5; p<.000) between normal and dysphonic voices. The findings of this study demonstrate the clinical feasibility of the AVQI as a measure of dysphonia severity. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.
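
The group comparison reported above, an independent-samples t-test on a single acoustic severity score, can be sketched as follows. The simulated scores, group means, and spreads are illustrative only, not the study's AVQI data.

```python
import numpy as np

def t_statistic(a, b):
    """Pooled-variance independent-samples t statistic and degrees of freedom."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    t = (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return t, na + nb - 2

rng = np.random.default_rng(2)
normal_scores = rng.normal(2.0, 1.0, 60)     # simulated severity scores, vocally normal group
dysphonic_scores = rng.normal(5.0, 1.5, 58)  # simulated severity scores, disordered group
t, df = t_statistic(dysphonic_scores, normal_scores)
print(f"t({df}) = {t:.1f}")
```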

  4. Interactions of speaking condition and auditory feedback on vowel production in postlingually deaf adults with cochlear implants.

    PubMed

    Ménard, Lucie; Polak, Marek; Denny, Margaret; Burton, Ellen; Lane, Harlan; Matthies, Melanie L; Marrone, Nicole; Perkell, Joseph S; Tiede, Mark; Vick, Jennell

    2007-06-01

    This study investigates the effects of speaking condition and auditory feedback on vowel production by postlingually deafened adults. Thirteen cochlear implant users produced repetitions of nine American English vowels prior to implantation, and at one month and one year after implantation. There were three speaking conditions (clear, normal, and fast), and two feedback conditions after implantation (implant processor turned on and off). Ten normal-hearing controls were also recorded once. Vowel contrasts in the formant space (expressed in mels) were larger in the clear than in the fast condition, both for controls and for implant users at all three time samples. Implant users also produced differences in duration between clear and fast conditions that were in the range of those obtained from the controls. In agreement with prior work, the implant users had contrast values lower than did the controls. The implant users' contrasts were larger with hearing on than off and improved from one month to one year postimplant. Because the controls and implant users responded similarly to a change in speaking condition, it is inferred that auditory feedback, although demonstrably important for maintaining normative values of vowel contrasts, is not needed to maintain the distinctiveness of those contrasts in different speaking conditions.
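
The mel-scaled vowel contrasts above can be sketched with the standard hertz-to-mel conversion and a Euclidean distance in (F1, F2) space. The formant values for the high front and low back vowels below are typical textbook figures, not the study's measurements.

```python
import math

def hz_to_mel(f_hz):
    """Standard O'Shaughnessy mel conversion."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def vowel_contrast(v1, v2):
    """Euclidean distance between two vowels' (F1, F2) points in mel space."""
    return math.dist([hz_to_mel(f) for f in v1], [hz_to_mel(f) for f in v2])

# Hypothetical (F1, F2) values in Hz for /i/-like and /a/-like vowels
vowel_i = (300, 2300)
vowel_a = (750, 1200)
print(round(vowel_contrast(vowel_i, vowel_a), 1))
```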

  5. Influences of Tone on Vowel Articulation in Mandarin Chinese

    ERIC Educational Resources Information Center

    Shaw, Jason A.; Chen, Wei-rong; Proctor, Michael I.; Derrick, Donald

    2016-01-01

    Purpose: Models of speech production often abstract away from shared physiology in pitch control and lingual articulation, positing independent control of tone and vowel units. We assess the validity of this assumption in Mandarin Chinese by evaluating the stability of lingual articulation for vowels across variation in tone. Method:…

  6. Criteria for the Segmentation of Vowels on Duplex Oscillograms.

    ERIC Educational Resources Information Center

    Naeser, Margaret A.

    This paper develops criteria for the segmentation of vowels on duplex oscillograms. Previous vowel duration studies have primarily used sound spectrograms. The use of duplex oscillograms, rather than sound spectrograms, permits faster production (real time) at less expense (adding machine paper may be used). The speech signal can be more spread…

  7. The Basis for Language Acquisition: Congenitally Deaf Infants Discriminate Vowel Length in the First Months after Cochlear Implantation.

    PubMed

    Vavatzanidis, Niki Katerina; Mürbe, Dirk; Friederici, Angela; Hahne, Anja

    2015-12-01

    One main incentive for supplying hearing impaired children with a cochlear implant is the prospect of oral language acquisition. Only scarce knowledge exists, however, of what congenitally deaf children actually perceive when receiving their first auditory input, and specifically what speech-relevant features they are able to extract from the new modality. We therefore presented congenitally deaf infants and young children implanted before the age of 4 years with an oddball paradigm of long and short vowel variants of the syllable /ba/. We measured the EEG in regular intervals to study their discriminative ability starting with the first activation of the implant up to 8 months later. We were thus able to time-track the emerging ability to differentiate one of the most basic linguistic features that bears semantic differentiation and helps in word segmentation, namely, vowel length. Results show that already 2 months after the first auditory input, but not directly after implant activation, these early implanted children differentiate between long and short syllables. Surprisingly, after only 4 months of hearing experience, the ERPs have reached the same properties as those of the normal hearing control group, demonstrating the plasticity of the brain with respect to the new modality. We thus show that a simple but linguistically highly relevant feature such as vowel length reaches age-appropriate electrophysiological levels as fast as 4 months after the first acoustic stimulation, providing an important basis for further language acquisition.

  8. The Developmental Process of Vowel Integration as Found in Children in Grades 1-3.

    ERIC Educational Resources Information Center

    Bentz, Darrell; Szymczuk, Mike

    A study was designed to investigate the auditory-visual integrative abilities of primary grade children for five long vowels and five short vowels. The Vowel Integration Test (VIT), composed of 35 nonsense words having all the long and short vowel sounds, was administered to students in 64 schools over a period of two years. Students' indications…

  9. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    ERIC Educational Resources Information Center

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Halle, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and…

  10. The effect of L1 orthography on non-native vowel perception.

    PubMed

    Escudero, Paola; Wanrooij, Karin

    2010-01-01

    Previous research has shown that orthography influences the learning and processing of spoken non-native words. In this paper, we examine the effect of L1 orthography on non-native sound perception. In Experiment 1, 204 Spanish learners of Dutch and a control group of 20 native speakers of Dutch were asked to classify Dutch vowel tokens by choosing from auditorily presented options, in one task, and from the orthographic representations of Dutch vowels, in a second task. The results show that vowel categorization varied across tasks: the most difficult vowels in the purely auditory task were the easiest in the orthographic task and, conversely, vowels with a relatively high success rate in the purely auditory task were poorly classified in the orthographic task. The results of Experiment 2 with 22 monolingual Peruvian Spanish listeners replicated the main results of Experiment 1 and confirmed the existence of orthographic effects. Together, the two experiments show that when listening to auditory stimuli only, native speakers of Spanish have great difficulty classifying certain Dutch vowels, regardless of the amount of experience they may have with the Dutch language. Importantly, the pairing of auditory stimuli with orthographic labels can help or hinder Spanish listeners' sound categorization, depending on the specific sound contrast.

  11. Unspoken vowel recognition using facial electromyogram.

    PubMed

    Arjunan, Sridhar P; Kumar, Dinesh K; Yau, Wai C; Weghorn, Hans

    2006-01-01

    The paper aims to identify speech using facial muscle activity, without audio signals. The paper presents an effective technique that measures the relative activity of the articulatory muscles. Five English vowels were used as recognition variables. This paper reports using the moving root mean square (RMS) of the surface electromyogram (SEMG) of four facial muscles to segment the signal and identify the start and end of the utterance. The RMS of the signal between the start and end markers was integrated and normalised, representing the relative muscle activity of the four muscles. These features were classified using a back-propagation neural network to identify the speech. The technique was successfully used to classify the five vowels into three classes and was not sensitive to variation in the speed and style of speaking across subjects. The results also show that the technique was suitable for classifying the five vowels into five classes when trained for each subject. It is suggested that such a technology may be used to give simple unvoiced commands when trained for a specific user.
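
A minimal sketch of the moving-RMS segmentation step, assuming a simulated single-channel SEMG signal (this is not the authors' implementation; the window length, threshold, and signal are illustrative):

```python
import numpy as np

def moving_rms(x, win):
    """RMS of x over a sliding window of `win` samples."""
    x2 = np.convolve(x ** 2, np.ones(win) / win, mode="same")
    return np.sqrt(x2)

def segment(rms, thresh):
    """Indices of the first and last samples where the RMS exceeds `thresh`."""
    active = np.flatnonzero(rms > thresh)
    return (active[0], active[-1]) if active.size else (None, None)

rng = np.random.default_rng(1)
fs = 1000                                   # sampling rate (Hz), illustrative
sig = rng.normal(0, 0.05, 2 * fs)           # baseline noise
sig[500:1500] += rng.normal(0, 0.5, 1000)   # simulated muscle burst during the utterance

rms = moving_rms(sig, 50)
start, end = segment(rms, 0.2)
# Integrated and normalised activity between the markers (relative-activity feature)
feature = rms[start:end].sum() / rms[start:end].size
print(start, end, round(feature, 3))
```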

  12. Textual Input Enhancement for Vowel Blindness: A Study with Arabic ESL Learners

    ERIC Educational Resources Information Center

    Alsadoon, Reem; Heift, Trude

    2015-01-01

    This study explores the impact of textual input enhancement on the noticing and intake of English vowels by Arabic L2 learners of English. Arabic L1 speakers are known to experience "vowel blindness," commonly defined as a difficulty in the textual decoding and encoding of English vowels due to an insufficient decoding of the word form.…

  13. Toward the Development of an Objective Index of Dysphonia Severity: A Four-Factor Acoustic Model

    ERIC Educational Resources Information Center

    Awan, Shaheen N.; Roy, Nelson

    2006-01-01

    During assessment and management of individuals with voice disorders, clinicians routinely attempt to describe or quantify the severity of a patient's dysphonia. This investigation used acoustic measures derived from sustained vowel samples to predict dysphonia severity (as determined by auditory-perceptual ratings), for a diverse set of voice…

  14. Introduction to the Special Issue on Advancing Methods for Analyzing Dialect Variation.

    PubMed

    Clopper, Cynthia G

    2017-07-01

    Documenting and analyzing dialect variation is traditionally the domain of dialectology and sociolinguistics. However, modern approaches to acoustic analysis of dialect variation have their roots in Peterson and Barney's [(1952). J. Acoust. Soc. Am. 24, 175-184] foundational work on the acoustic analysis of vowels that was published in the Journal of the Acoustical Society of America (JASA) over 6 decades ago. Although Peterson and Barney (1952) were not primarily concerned with dialect variation, their methods laid the groundwork for the acoustic methods that are still used by scholars today to analyze vowel variation within and across languages. In more recent decades, a number of methodological advances in the study of vowel variation have been published in JASA, including work on acoustic vowel overlap and vowel normalization. The goal of this special issue was to honor that tradition by bringing together a set of papers describing the application of emerging acoustic, articulatory, and computational methods to the analysis of dialect variation in vowels and beyond.

  15. Auditory temporal-order processing of vowel sequences by young and elderly listeners.

    PubMed

    Fogerty, Daniel; Humes, Larry E; Kewley-Port, Diane

    2010-04-01

    This project focused on the individual differences underlying observed variability in temporal processing among older listeners. Four measures of vowel temporal-order identification were completed by young (N=35; 18-31 years) and older (N=151; 60-88 years) listeners. Experiments used forced-choice, constant-stimuli methods to determine the smallest stimulus onset asynchrony (SOA) between brief (40 or 70 ms) vowels that enabled identification of a stimulus sequence. Four words (pit, pet, pot, and put) spoken by a male talker were processed to serve as vowel stimuli. All listeners identified the vowels in isolation with better than 90% accuracy. Vowel temporal-order tasks included the following: (1) monaural two-item identification, (2) monaural four-item identification, (3) dichotic two-item vowel identification, and (4) dichotic two-item ear identification. Results indicated that older listeners had more variability and performed poorer than young listeners on vowel-identification tasks, although a large overlap in distributions was observed. Both age groups performed similarly on the dichotic ear-identification task. For both groups, the monaural four-item and dichotic two-item tasks were significantly harder than the monaural two-item task. Older listeners' SOA thresholds improved with additional stimulus exposure and shorter dichotic stimulus durations. Individual differences of temporal-order performance among the older listeners demonstrated the influence of cognitive measures, but not audibility or age.
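
The constant-stimuli threshold estimation can be sketched as interpolation to a criterion level on a psychometric function. The SOAs, proportions correct, and the 75%-correct criterion below are hypothetical, not the study's data:

```python
import numpy as np

soas = np.array([20.0, 40.0, 80.0, 160.0, 320.0])     # fixed SOAs tested (ms), illustrative
p_correct = np.array([0.28, 0.45, 0.70, 0.92, 0.99])  # identification accuracy at each SOA

def soa_threshold(soa_ms, p, criterion=0.75):
    """Interpolate on log-SOA to the SOA yielding the criterion proportion.

    Requires p to be monotonically increasing (as np.interp expects).
    """
    return float(np.exp(np.interp(criterion, p, np.log(soa_ms))))

thr_ms = soa_threshold(soas, p_correct)
print(round(thr_ms, 1))
```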

  16. Speech Recognition: Acoustic-Phonetic Knowledge Acquisition and Representation.

    DTIC Science & Technology

    1987-09-25

    the release duration is the voice onset time, or VOT. For the purpose of this investigation, alveolar flaps (as in "butter") and glottalized /t/'s…

  17. Comparing Deaf and Hearing Dutch Infants: Changes in the Vowel Space in the First 2 Years

    ERIC Educational Resources Information Center

    van der Stelt, Jeannette M.; Wempe, Ton G.; Pols, Louis C. W.

    2008-01-01

    The influence of the mother tongue on vowel productions in infancy is different for deaf and hearing babies. Audio material of five hearing and five deaf infants acquiring Dutch was collected monthly from month 5-18, and at 24 months. Fifty unlabelled utterances were digitized for each recording. This study focused on developmental paths in vowel…

  18. Measurement and Characterization of Space Shuttle Solid Rocket Motor Plume Acoustics

    NASA Technical Reports Server (NTRS)

    Kenny, Jeremy; Hobbs, Chris; Plotkin, Ken; Pilkey, Debbie

    2009-01-01

    Lift-off acoustic environments generated by the future Ares I launch vehicle are assessed by the NASA Marshall Space Flight Center (MSFC) acoustics team using several prediction tools. This acoustic environment is directly caused by the Ares I First Stage booster, powered by the five-segment Reusable Solid Rocket Motor (RSRMV). The RSRMV is a larger-thrust derivative design from the currently used Space Shuttle solid rocket motor, the Reusable Solid Rocket Motor (RSRM). Lift-off acoustics is an integral part of the composite launch vibration environment affecting the Ares launch vehicle and must be assessed to help generate hardware qualification levels and ensure structural integrity of the vehicle during launch and lift-off. Available prediction tools that use free-field noise source spectra as a starting point for generating lift-off acoustic environments are described in the monograph NASA SP-8072, "Acoustic Loads Generated by the Propulsion System." This monograph uses a reference database of free-field noise source spectra consisting of subscale rocket motor firings oriented in horizontal static configurations. The phrase "subscale" is appropriate, since the thrust levels of rockets in the reference database are orders of magnitude lower than the current design thrust for the Ares launch family. Thus, extrapolation is needed to extend the various reference curves to match Ares-scale acoustic levels. This extrapolation process adds uncertainty to the acoustic environment predictions. As the Ares launch vehicle design schedule progresses, it is important to take every opportunity to lower prediction uncertainty and thereby increase prediction accuracy. Never before in NASA's history has plume acoustics been measured for large-scale solid rocket motors. Approximately twice a year, the RSRM prime vendor, ATK Launch Systems, static fires an assembled RSRM motor in a horizontal configuration at their test facility.

  19. Acoustic analysis of normal Saudi adult voices.

    PubMed

    Malki, Khalid H; Al-Habib, Salman F; Hagr, Abulrahman A; Farahat, Mohamed M

    2009-08-01

    To determine the acoustic differences between Saudi adult male and female voices, and to compare the acoustic variables of the Multidimensional Voice Program (MDVP) obtained from North American adults to a group of Saudi males and females. A cross-sectional survey of normal adult male and female voices was conducted at King Abdulaziz University Hospital, Riyadh, Kingdom of Saudi Arabia between March 2007 and December 2008. Ninety-five Saudi subjects sustained the vowel /a/ 6 times, and the steady state portion of 3 samples was analyzed and compared with the samples of the KayPentax normative voice database. Significant differences were found between Saudi and North American KayPentax database groups. In the male subjects, 15 of 33 MDVP variables, and 10 of 33 variables in the female subjects were found to be significantly different from the KayPentax database. We conclude that the acoustical differences may reflect laryngeal anatomical or tissue differences between the Saudi and the KayPentax database.

  20. Vowelling and semantic priming effects in Arabic.

    PubMed

    Mountaj, Nadia; El Yagoubi, Radouane; Himmi, Majid; Lakhdar Ghazal, Faouzi; Besson, Mireille; Boudelaa, Sami

    2015-01-01

    In the present experiment we used a semantic judgment task with Arabic words to determine whether semantic priming effects are found in the Arabic language. Moreover, we took advantage of the specificity of the Arabic orthographic system, which is characterized by a shallow (i.e., vowelled words) and a deep orthography (i.e., unvowelled words), to examine the relationship between orthographic and semantic processing. Results showed faster Reaction Times (RTs) for semantically related than unrelated words with no difference between vowelled and unvowelled words. By contrast, Event Related Potentials (ERPs) revealed larger N1 and N2 components to vowelled words than unvowelled words suggesting that visual-orthographic complexity taxes the early word processing stages. Moreover, semantically unrelated Arabic words elicited larger N400 components than related words thereby demonstrating N400 effects in Arabic. Finally, the Arabic N400 effect was not influenced by orthographic depth. The implications of these results for understanding the processing of orthographic, semantic, and morphological structures in Modern Standard Arabic are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Reading Arabic Texts: Effects of Text Type, Reader Type and Vowelization.

    ERIC Educational Resources Information Center

    Abu-Rabia, Salim

    1998-01-01

    Investigates the effect of vowels on reading accuracy in Arabic orthography. Finds that vowels had a significant effect on reading accuracy of poor and skilled readers in reading each of four kinds of texts. (NH)

  2. Dynamic Articulation of Vowels.

    ERIC Educational Resources Information Center

    Morgan, Willie B.

    1979-01-01

    A series of exercises and a theory of vowel descriptions can help minimize speakers' problems of excessive tension, awareness of tongue height, and tongue retraction. Eight exercises to provide Forward Facial Stretch neutralize tensions in the face and vocal resonator and their effect on the voice. Three experiments in which sounds are repeated…

  3. Characterization of space dust using acoustic impact detection.

    PubMed

    Corsaro, Robert D; Giovane, Frank; Liou, Jer-Chyi; Burchell, Mark J; Cole, Michael J; Williams, Earl G; Lagakos, Nicholas; Sadilek, Albert; Anderson, Christopher R

    2016-08-01

    This paper describes studies leading to the development of an acoustic instrument for measuring properties of micrometeoroids and other dust particles in space. The instrument uses a pair of easily penetrated membranes separated by a known distance. Sensors located on these films detect the transient acoustic signals produced by particle impacts. The arrival times of these signals at the sensor locations are used in a simple multilateration calculation to measure the impact coordinates on each film. Particle direction and speed are found using these impact coordinates and the known membrane separations. This ability to determine particle speed, direction, and time of impact provides the information needed to assign the particle's orbit and identify its likely origin. In many cases additional particle properties can be estimated from the signal amplitudes, including approximate diameter and (for small particles) some indication of composition/morphology. Two versions of this instrument were evaluated in this study. Fiber optic displacement sensors are found advantageous when very thin membranes can be maintained in tension (solar sails, lunar surface). Piezoelectric strain sensors are preferred for thicker films without tension (long duration free flyers). The latter was selected for an upcoming installation on the International Space Station.
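
The speed-and-direction step described above (after multilateration has fixed the impact coordinates on each membrane) can be sketched geometrically; the coordinates, film separation, and timing below are made-up illustrative values:

```python
import math

def particle_track(p1, p2, separation_m, dt_s):
    """Speed and unit direction from impact points on two parallel films.

    p1, p2: (x, y) impact coordinates on films 1 and 2, in metres.
    separation_m: film spacing; dt_s: time between the two impacts.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    path = math.sqrt(dx**2 + dy**2 + separation_m**2)  # straight-line path length
    speed = path / dt_s
    direction = (dx / path, dy / path, separation_m / path)  # unit vector, film 1 -> film 2
    return speed, direction

# Made-up example: 3 cm lateral offset across a 10 cm film gap in 20 microseconds
speed, direction = particle_track((0.02, 0.01), (0.05, 0.01), 0.10, 2.0e-5)
print(round(speed), direction)
```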

  4. The Sound of Mute Vowels in Auditory Word-Stem Completion

    ERIC Educational Resources Information Center

    Beland, Renee; Prunet, Jean-Francois; Peretz, Isabelle

    2009-01-01

    Some studies have argued that orthography can influence speakers when they perform oral language tasks. Words containing a mute vowel provide well-suited stimuli to investigate this phenomenon because mute vowels, such as the second "e" in "vegetable", are present orthographically but absent phonetically. Using an auditory word-stem completion…

  5. The Role of Vowels in Reading Semitic Scripts: Data from Arabic and Hebrew.

    ERIC Educational Resources Information Center

    Abu-Rabia, Salim

    2001-01-01

    Investigates the effect of vowels and context on reading accuracy of skilled adult native Arabic speakers in Arabic and in Hebrew, their second language. Reveals a significant effect for vowels and for context across all reading conditions in Arabic and Hebrew. Finds that the vowelized texts in Arabic and the pointed and unpointed texts in Hebrew…

  6. Pre-attentive sensitivity to vowel duration reveals native phonology and predicts learning of second-language sounds.

    PubMed

    Chládková, Kateřina; Escudero, Paola; Lipski, Silvia C

    2013-09-01

    In some languages (e.g. Czech), changes in vowel duration affect word meaning, while in others (e.g. Spanish) they do not. Yet for other languages (e.g. Dutch), the linguistic role of vowel duration remains unclear. To reveal whether Dutch represents vowel length in its phonology, we compared auditory pre-attentive duration processing in native and non-native vowels across Dutch, Czech, and Spanish. Dutch duration sensitivity patterned with Czech but was larger than Spanish in the native vowel, while it was smaller than Czech and Spanish in the non-native vowel. An interpretation of these findings suggests that in Dutch, duration is used phonemically but it might be relevant for the identity of certain native vowels only. Furthermore, the finding that Spanish listeners are more sensitive to duration in non-native than in native vowels indicates that a lack of duration differences in one's native language could be beneficial for second-language learning. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Effects of stimulus response compatibility on covert imitation of vowels.

    PubMed

    Adank, Patti; Nuttall, Helen; Bekkering, Harold; Maegherman, Gwijde

    2018-03-13

    When we observe someone else speaking, we tend to automatically activate the corresponding speech motor patterns. When listening, we therefore covertly imitate the observed speech. Simulation theories of speech perception propose that covert imitation of speech motor patterns supports speech perception. Covert imitation of speech has been studied with interference paradigms, including the stimulus-response compatibility paradigm (SRC). The SRC paradigm measures covert imitation by comparing articulation of a prompt following exposure to a distracter. Responses tend to be faster for congruent than for incongruent distracters; thus, showing evidence of covert imitation. Simulation accounts propose a key role for covert imitation in speech perception. However, covert imitation has thus far only been demonstrated for a select class of speech sounds, namely consonants, and it is unclear whether covert imitation extends to vowels. We aimed to demonstrate that covert imitation effects as measured with the SRC paradigm extend to vowels, in two experiments. We examined whether covert imitation occurs for vowels in a consonant-vowel-consonant context in visual, audio, and audiovisual modalities. We presented the prompt at four time points to examine how covert imitation varied over the distracter's duration. The results of both experiments clearly demonstrated covert imitation effects for vowels, thus supporting simulation theories of speech perception. Covert imitation was not affected by stimulus modality and was maximal for later time points.

  8. Speaking fundamental frequency and vowel formant frequencies: effects on perception of gender.

    PubMed

    Gelfer, Marylou Pausewang; Bennett, Quinn E

    2013-09-01

    The purpose of the present study was to investigate the contribution of vowel formant frequencies to gender identification in connected speech, the distinctiveness of vowel formants in males versus females, and how ambiguous speaking fundamental frequencies (SFFs) and vowel formants might affect perception of gender. Multivalent experimental design. Speaker subjects (eight tall males, eight short females, and seven males and seven females of "middle" height) were recorded saying two carrier phrases to elicit the vowels /i/ and /α/ and a sentence. The gender/height groups were selected to (presumably) maximize formant differences between some groups (tall vs short) and minimize differences between others (middle height). Each subject's samples were digitally altered to distinct SFFs (116, 145, 155, 165, and 207 Hz) to represent SFFs typical of average males, average females, and an ambiguous range. Listeners judged the gender of each randomized altered speech sample. Results indicated that female speakers were perceived as female even with an SFF in the typical male range. For male speakers, gender perception was less accurate at SFFs of 165 Hz and higher. Although the ranges of vowel formants overlapped considerably between genders, significant differences in the formant frequencies of males and females were seen. Vowel formants appeared to be important to perception of gender, especially for SFFs in the range of 145-165 Hz; however, formants may be a more salient cue in connected speech than in isolated vowels or syllables. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  9. Acoustic voice analysis of prelingually deaf adults before and after cochlear implantation.

    PubMed

    Evans, Maegan K; Deliyski, Dimitar D

    2007-11-01

    It is widely accepted that many severe to profoundly deaf adults have benefited from cochlear implants (CIs). However, limited research has been conducted to investigate changes in voice and speech of prelingually deaf adults who receive CIs, a population well known for presenting with a variety of voice and speech abnormalities. The purpose of this study was to use acoustic analysis to explore changes in voice and speech for three prelingually deaf males pre- and postimplantation over 6 months. The following measurements, some measured in varying contexts, were obtained: fundamental frequency (F0), jitter, shimmer, noise-to-harmonic ratio, voice turbulence index, soft phonation index, amplitude- and F0-variation, F0-range, speech rate, nasalance, and vowel production. Characteristics of vowel production were measured by determining the first formant (F1) and second formant (F2) of vowels in various contexts, magnitude of F2-variation, and rate of F2-variation. Perceptual measurements of pitch, pitch variability, loudness variability, speech rate, and intonation were obtained for comparison. Results are reported using descriptive statistics. The results showed patterns of change for some of the parameters while there was considerable variation across the subjects. All participants demonstrated a decrease in F0 in at least one context and demonstrated a change in nasalance toward the norm as compared to their normal hearing control. The two participants who were oral-language communicators were judged to produce vowels with an average of 97.2% accuracy and the sign-language user demonstrated low percent accuracy for vowel production.

  10. Auditory temporal-order processing of vowel sequences by young and elderly listeners

    PubMed Central

    Fogerty, Daniel; Humes, Larry E.; Kewley-Port, Diane

    2010-01-01

    This project focused on the individual differences underlying observed variability in temporal processing among older listeners. Four measures of vowel temporal-order identification were completed by young (N=35; 18–31 years) and older (N=151; 60–88 years) listeners. Experiments used forced-choice, constant-stimuli methods to determine the smallest stimulus onset asynchrony (SOA) between brief (40 or 70 ms) vowels that enabled identification of a stimulus sequence. Four words (pit, pet, pot, and put) spoken by a male talker were processed to serve as vowel stimuli. All listeners identified the vowels in isolation with better than 90% accuracy. Vowel temporal-order tasks included the following: (1) monaural two-item identification, (2) monaural four-item identification, (3) dichotic two-item vowel identification, and (4) dichotic two-item ear identification. Results indicated that older listeners were more variable and performed more poorly than young listeners on vowel-identification tasks, although a large overlap in distributions was observed. Both age groups performed similarly on the dichotic ear-identification task. For both groups, the monaural four-item and dichotic two-item tasks were significantly harder than the monaural two-item task. Older listeners’ SOA thresholds improved with additional stimulus exposure and shorter dichotic stimulus durations. Individual differences in temporal-order performance among the older listeners demonstrated the influence of cognitive measures, but not audibility or age. PMID:20370033
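
    A threshold from constant-stimuli data of this kind is often derived by interpolating the percent-correct function at a fixed criterion. The sketch below is illustrative only; the SOA values, percent-correct scores, and the 80% criterion are hypothetical, not data from the study:

```python
def soa_threshold(soas, p_correct, criterion=0.75):
    """Interpolate the SOA (ms) at which percent-correct first reaches the
    criterion, given constant-stimuli data sorted by ascending SOA."""
    points = list(zip(soas, p_correct))
    for (s1, p1), (s2, p2) in zip(points, points[1:]):
        if p1 < criterion <= p2:
            # Linear interpolation between the two bracketing SOAs
            return s1 + (criterion - p1) * (s2 - s1) / (p2 - p1)
    raise ValueError("criterion not crossed within the tested SOA range")

# Hypothetical percent-correct data for a two-item temporal-order task
print(soa_threshold([20, 40, 60, 80], [0.50, 0.70, 0.90, 1.00], criterion=0.80))  # -> 50.0
```

    A psychometric-function fit (e.g. logistic) would be more robust with noisy data; linear interpolation is the simplest defensible estimator.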

  11. Acoustic Emission Detection of Impact Damage on Space Shuttle Structures

    NASA Technical Reports Server (NTRS)

    Prosser, William H.; Gorman, Michael R.; Madaras, Eric I.

    2004-01-01

    The loss of the Space Shuttle Columbia as a result of impact damage from foam debris during ascent has led NASA to investigate the feasibility of on-board impact detection technologies. Acoustic emission (AE) sensing has been utilized to monitor a wide variety of impact conditions on Space Shuttle components, ranging from insulating foam, ablator materials, and ice at ascent velocities to simulated hypervelocity micrometeoroid and orbital debris impacts. Impact testing has been performed on both reinforced carbon composite leading-edge materials and Shuttle tile materials on representative aluminum wing structures. Results of these impact tests will be presented with a focus on the acoustic emission sensor responses to these impact conditions. These tests have demonstrated the potential of employing an on-board Shuttle impact detection system. We will describe the present plans for implementation of an initial, very low frequency acoustic impact sensing system using pre-existing flight-qualified hardware. The details of an accompanying flight measurement system to assess the Shuttle's acoustic background noise environment as a function of frequency will also be described. The background noise assessment is being performed to optimize the frequency range of sensing for a planned future upgrade to the initial impact sensing system.

  12. Center-of-gravity effects in the perception of high front vowels

    NASA Astrophysics Data System (ADS)

    Jacewicz, Ewa; Feth, Lawrence L.

    2002-05-01

    When two formant peaks are close in frequency, changing their amplitude ratio can shift the perceived vowel quality. This center-of-gravity (COG) effect has been studied particularly in back vowels, whose F1 and F2 are close in frequency. Chistovich and Lublinskaja (1979) showed that the effect occurs when the frequency separation between the formants does not exceed 3.5 bark. The COG and critical-distance effects were manifested when a two-formant reference signal was matched by a single-formant target of variable frequency. This study investigates whether the COG effect extends to closely spaced higher formants, as in English /i/ and /I/. In /i/, the frequency separation between F2, F3, and F4 does not exceed 3.5 bark, suggesting the existence of one COG which may affect all three closely spaced formants (F2=2030, F3=2970, F4=3400 Hz). In /I/, each of the F2-F3 and F3-F4 separations is less than 3.5 bark but the F2-F4 separation exceeds the critical distance, indicating two COGs (F2=1780, F3=2578, F4=3400 Hz). We examine COG effects by having listeners match four-formant reference signals, in which the amplitude ratios are varied, with two-formant targets of variable F2 frequency. The double-staircase adaptive procedure is used. [Work supported by an INRS award from NIH to R. Fox.]
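
    The 3.5-bark critical-distance claims above can be checked directly from the quoted formant values. A minimal sketch, assuming Traunmüller's (1990) Hz-to-bark approximation (the abstract does not specify which conversion the authors used):

```python
def hz_to_bark(f_hz):
    """Approximate Hz-to-bark conversion (Traunmueller, 1990)."""
    return 26.81 / (1.0 + 1960.0 / f_hz) - 0.53

def within_critical_distance(f_lo_hz, f_hi_hz, limit_bark=3.5):
    """True if two formant peaks fall within the 3.5-bark critical distance."""
    return hz_to_bark(f_hi_hz) - hz_to_bark(f_lo_hz) <= limit_bark

# /i/: even the full F2-F4 span stays under 3.5 bark -> one center of gravity
print(within_critical_distance(2030, 3400))  # -> True
# /I/: adjacent pairs are close, but F2-F4 exceeds the critical distance -> two COGs
print(within_critical_distance(1780, 2578))  # F2 vs F3 -> True
print(within_critical_distance(1780, 3400))  # F2 vs F4 -> False
```

    With these values the F2-F4 span of /i/ comes out near 3.4 bark and that of /I/ near 4.3 bark, consistent with the one-COG versus two-COG reasoning in the abstract.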

  13. Children's Acoustic and Linguistic Adaptations to Peers With Hearing Impairment.

    PubMed

    Granlund, Sonia; Hazan, Valerie; Mahon, Merle

    2018-05-17

    This study aims to examine the clear speaking strategies used by older children when interacting with a peer with hearing loss, focusing on both acoustic and linguistic adaptations in speech. The Grid task, a problem-solving task developed to elicit spontaneous interactive speech, was used to obtain a range of global acoustic and linguistic measures. Eighteen 9- to 14-year-old children with normal hearing (NH) performed the task in pairs, once with a friend with NH and once with a friend with a hearing impairment (HI). In HI-directed speech, children increased their fundamental frequency range and midfrequency intensity, decreased the number of words per phrase, and expanded their vowel space area by increasing F1 and F2 range, relative to NH-directed speech. However, participants did not appear to make changes to their articulation rate, the lexical frequency of content words, or lexical diversity when talking to their friend with HI compared with their friend with NH. Older children show evidence of listener-oriented adaptations to their speech production; although their speech production systems are still developing, they are able to make speech adaptations to benefit the needs of a peer with HI, even without being given a specific instruction to do so. https://doi.org/10.23641/asha.6118817.
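
    The vowel space area reported above is commonly operationalized as the area of the polygon spanned by mean (F1, F2) values in the formant plane, computed with the shoelace formula. A minimal sketch; the corner-vowel formant values below are hypothetical illustrations, not data from the study:

```python
def polygon_area(points):
    """Shoelace formula: area of a simple polygon from ordered (x, y) vertices."""
    n = len(points)
    signed = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        signed += x1 * y2 - x2 * y1
    return abs(signed) / 2.0

# Hypothetical mean (F1, F2) values in Hz for the corner vowels /i/, /ae/, /a/, /u/,
# ordered around the perimeter of the vowel quadrilateral.
corner_vowels = [(300, 2300), (700, 1800), (750, 1100), (350, 900)]
print(polygon_area(corner_vowels), "Hz^2")  # -> 412500.0 Hz^2
```

    An expanded F1 and F2 range, as the HI-directed speech showed, pushes these vertices apart and increases the resulting area.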

  14. Formant discrimination in noise for isolated vowels

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Kewley-Port, Diane

    2004-11-01

    Formant discrimination for isolated vowels presented in noise was investigated for normal-hearing listeners. Discrimination thresholds for F1 and F2, for the seven American English vowels /i, ɪ, ɛ, æ, ʌ, ɑ, u/, were measured under two types of noise, long-term speech-shaped noise (LTSS) and multitalker babble, and also under quiet listening conditions. Signal-to-noise ratios (SNR) varied from -4 to +4 dB in steps of 2 dB. All three factors, formant frequency, signal-to-noise ratio, and noise type, had significant effects on vowel formant discrimination. Significant interactions among the three factors showed that threshold-frequency functions depended on SNR and noise type. The thresholds at the lowest levels of SNR were highly elevated, by a factor of about 3, compared to those in quiet. The masking functions (threshold vs SNR) were well described by a negative exponential over F1 and F2 for both LTSS and babble noise. Speech-shaped noise was a slightly more effective masker than multitalker babble, presumably reflecting small benefits (1.5 dB) due to the temporal variation of the babble.

  15. Vowels, Syllables, and Letter Names: Differences between Young Children's Spelling in English and Portuguese

    ERIC Educational Resources Information Center

    Pollo, Tatiana Cury; Kessler, Brett; Treiman, Rebecca

    2005-01-01

    Young Portuguese-speaking children have been reported to produce more vowel- and syllable-oriented spellings than have English speakers. To investigate the extent and source of such differences, we analyzed children's vocabulary and found that Portuguese words have more vowel letter names and a higher vowel-consonant ratio than do English words.…

  16. Effects of Weight Loss on Acoustic Parameters After Bariatric Surgery.

    PubMed

    de Souza, Lourdes Bernadete Rocha; Dos Santos, Marquiony Marques; Pernambuco, Leandro Araújo; de Almeida Godoy, Cynthia Meira; da Silva Lima, Deysianne Meire

    2018-05-01

    Patients with morbid obesity may present vocal alterations, since a large accumulation of fat in the vocal tract may interfere with voice production in these individuals. The aim was to verify the neck circumference and the acoustic parameters of voice in obese women, before and after bariatric surgery, and to compare the results with a normal-weight control group. Observational, longitudinal, descriptive study of patients referred to the SCODE (Obesity Surgery and Related Disorders Center) in a university hospital. The sample consisted of 25 morbidly obese women, age range 28-43 years, and 23 non-obese women, aged 21-41 years (control group). To measure the neck circumference, a tape measure was used and all participants were seated upright with the head positioned in the Frankfort horizontal plane. The fundamental frequency was calculated from the sustained emission of the vowel [a] at usual intensity and pitch, to measure the fundamental frequency of the voice, that is, how many times the vocal folds vibrate per second. After the recording, participants were prompted to produce the vowels [a], [i], and [u] sustained at usual intensity and pitch, and a stopwatch was used to measure the maximum phonation time, to verify the balance between the myoelastic and aerodynamic forces of the larynx. Eight months post-surgery, the patients were recruited to be re-evaluated using the same pre-surgical data collection procedures. There was an increase in the mean value of f0. The maximum phonation time of all vowels increased after surgery. Obese individuals with post-surgery weight loss may present neck circumference, fundamental frequency, and maximum phonation time values closer to the mean values of normal-weight individuals. In this study, weight loss was sufficient to adjust the acoustic parameter measurements.

  17. Vowel normalization for accent: An investigation of perceptual plasticity in young adults

    NASA Astrophysics Data System (ADS)

    Evans, Bronwen G.; Iverson, Paul

    2004-05-01

    Previous work has emphasized the role of early experience in the ability to accurately perceive and produce foreign or foreign-accented speech. This study examines how listeners at a much later stage in language development, early adulthood, adapt to a non-native accent within the same language. A longitudinal study investigated whether listeners who had had no previous experience of living in multidialectal environments adapted their speech perception and production when attending university. Participants were tested before beginning university and then again 3 months later. An acoustic analysis of production was carried out, and perceptual tests were used to investigate changes in word intelligibility and vowel categorization. Preliminary results suggest that listeners are able to adjust their phonetic representations and that these patterns of adjustment are linked to the changes in production that speakers typically make, due to sociolinguistic factors, when living in multidialectal environments.

  18. Different Timescales for the Neural Coding of Consonant and Vowel Sounds

    PubMed Central

    Perez, Claudia A.; Engineer, Crystal T.; Jakkamsetti, Vikram; Carraway, Ryan S.; Perry, Matthew S.

    2013-01-01

    Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders. PMID:22426334

  19. Numerical Simulation of the Self-Oscillations of the Vocal Folds and of the Resulting Acoustic Phenomena in the Vocal Tract

    NASA Astrophysics Data System (ADS)

    Švancara, P.; Horáček, J.; Švec, J. G.

    The study presents a three-dimensional (3D) finite element (FE) model of the flow-induced self-oscillation of the human vocal folds in interaction with the acoustics of simplified vocal tract models. The 3D vocal tract models of the acoustic spaces, shaped for simulation of phonation of the Czech vowels [a:], [i:] and [u:], were created by converting data from magnetic resonance images (MRI). For modelling of the fluid-structure interaction, an explicit coupling scheme with separate solvers for the fluid and structure domains was utilized. The FE model comprises vocal-fold pretension before phonation onset, large deformations of the vocal-fold tissue, vocal-fold collisions, fluid-structure interaction, morphing of the fluid mesh according to the vocal-fold motion (Arbitrary Lagrangian-Eulerian approach), unsteady viscous compressible airflow described by the Navier-Stokes equations, and airflow separation. The developed FE model enables study of the relationship between flow-induced vibrations of the vocal folds and acoustic wave propagation in the vocal tract, and can also be used to simulate, for example, pathological changes in the vocal-fold tissue and their influence on voice production.

  20. The Effects of Surgical Rapid Maxillary Expansion (SRME) on Vowel Formants

    ERIC Educational Resources Information Center

    Sari, Emel; Kilic, Mehmet Akif

    2009-01-01

    The objective of this study was to investigate the effect of surgical rapid maxillary expansion (SRME) on vowel production. The subjects included 12 patients, whose speech was considered perceptually normal, who had undergone surgical RME for expansion of a narrow maxilla. They uttered the following Turkish vowels: [a], [ɛ],…

  1. Vowel Representations in the Invented Spellings of Spanish-English Bilingual Kindergartners

    ERIC Educational Resources Information Center

    Raynolds, Laura B.; Uhry, Joanna K.; Brunner, Jessica

    2013-01-01

    The study compared the invented spelling of vowels in kindergarten native Spanish speaking children with that of English monolinguals. It examined whether, after receiving phonics instruction for short vowels, the spelling of native Spanish-speaking kindergartners would contain phonological errors that were influenced by their first language.…

  2. Shallow and deep orthographies in Hebrew: the role of vowelization in reading development for unvowelized scripts.

    PubMed

    Schiff, Rachel

    2012-12-01

    The present study explored the speed, accuracy, and reading comprehension of vowelized versus unvowelized scripts among 126 native Hebrew speaking children in second, fourth, and sixth grades. Findings indicated that second graders read and comprehended vowelized scripts significantly more accurately and more quickly than unvowelized scripts, whereas among fourth and sixth graders reading of unvowelized scripts developed to a greater degree than the reading of vowelized scripts. An analysis of the mediation effect for children's mastery of vowelized reading speed and accuracy on their mastery of unvowelized reading speed and comprehension revealed that in second grade, reading accuracy of vowelized words mediated the reading speed and comprehension of unvowelized scripts. In the fourth grade, accuracy in reading both vowelized and unvowelized words mediated the reading speed and comprehension of unvowelized scripts. By sixth grade, accuracy in reading vowelized words offered no mediating effect, either on reading speed or comprehension of unvowelized scripts. The current outcomes thus suggest that young Hebrew readers undergo a scaffolding process, where vowelization serves as the foundation for building initial reading abilities and is essential for successful and meaningful decoding of unvowelized scripts.

  3. Learning Vowel Categories from Maternal Speech in Gurindji Kriol

    ERIC Educational Resources Information Center

    Jones, Caroline; Meakins, Felicity; Muawiyath, Shujau

    2012-01-01

    Distributional learning is a proposal for how infants might learn early speech sound categories from acoustic input before they know many words. When categories in the input differ greatly in relative frequency and overlap in acoustic space, research in bilingual development suggests that this affects the course of development. In the present…

  4. Children's Perception of Conversational and Clear American-English Vowels in Noise

    ERIC Educational Resources Information Center

    Leone, Dorothy; Levy, Erika S.

    2015-01-01

    Purpose: Much of a child's day is spent listening to speech in the presence of background noise. Although accurate vowel perception is important for listeners' accurate speech perception and comprehension, little is known about children's vowel perception in noise. "Clear speech" is a speech style frequently used by talkers in the…

  5. A mathematical model of vowel identification by users of cochlear implants

    PubMed Central

    Sagi, Elad; Meyer, Ted A.; Kaiser, Adam R.; Teoh, Su Wooi; Svirsky, Mario A.

    2010-01-01

    A simple mathematical model is presented that predicts vowel identification by cochlear implant users based on these listeners’ resolving power for the mean locations of first, second, and∕or third formant energies along the implanted electrode array. This psychophysically based model provides hypotheses about the mechanism cochlear implant users employ to encode and process the input auditory signal to extract information relevant for identifying steady-state vowels. Using one free parameter, the model predicts most of the patterns of vowel confusions made by users of different cochlear implant devices and stimulation strategies, and who show widely different levels of speech perception (from near chance to near perfect). Furthermore, the model can predict results from the literature, such as Skinner, et al. [(1995). Ann. Otol. Rhinol. Laryngol. 104, 307–311] frequency mapping study, and the general trend in the vowel results of Zeng and Galvin’s [(1999). Ear Hear. 20, 60–74] studies of output electrical dynamic range reduction. The implementation of the model presented here is specific to vowel identification by cochlear implant users, but the framework of the model is more general. Computational models such as the one presented here can be useful for advancing knowledge about speech perception in hearing impaired populations, and for providing a guide for clinical research and clinical practice. PMID:20136228

  6. Acoustic analysis of the singing and speaking voice in singing students.

    PubMed

    Lundy, D S; Roy, S; Casiano, R R; Xue, J W; Evans, J

    2000-12-01

    The singing power ratio (SPR) is an objective means of quantifying the singer's formant. SPR has been shown to differentiate trained singers from nonsingers and sung from spoken tones. This study was designed to evaluate SPR and acoustic parameters in singing students to determine whether the singer-in-training shows an identifiable difference between sung and spoken voices. Digital audio recordings were made of both sung and spoken vowel sounds in 55 singing students for acoustic analysis. SPR values were not significantly different between the sung and spoken samples. Shimmer and noise-to-harmonic ratio were significantly higher in spoken samples. SPR analysis may provide an objective tool for monitoring the student's progress.
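
    SPR is commonly computed as the level difference between the strongest spectral peak in the 2-4 kHz band (the singer's-formant region) and the strongest peak in the 0-2 kHz band. A minimal sketch on a synthetic two-partial signal; the band edges and the test signal are illustrative assumptions, not this study's exact procedure:

```python
import numpy as np

def singing_power_ratio(signal, sr):
    """Level difference (dB) between the strongest spectral peak in 2-4 kHz
    and the strongest peak in 0-2 kHz. More negative values indicate a
    weaker singer's-formant region."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    low_peak = spectrum[(freqs >= 0) & (freqs < 2000)].max()
    high_peak = spectrum[(freqs >= 2000) & (freqs <= 4000)].max()
    return 20.0 * np.log10(high_peak / low_peak)

# Synthetic "vowel": a strong 500 Hz partial and a partial at 3 kHz
# with one-tenth the amplitude, i.e. an expected SPR of -20 dB.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 500 * t) + 0.1 * np.sin(2 * np.pi * 3000 * t)
print(round(singing_power_ratio(x, sr), 1))  # -> -20.0
```

    On real recordings one would typically window the signal and average spectra over frames; the single full-length FFT here keeps the example short.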

  7. Perceptual and acoustic study of professionally trained versus untrained voices.

    PubMed

    Brown, W S; Rothman, H B; Sapienza, C M

    2000-09-01

    Acoustic and perceptual analyses were completed to determine the effect of vocal training on professional singers when speaking and singing. Twenty professional singers and 20 nonsingers, acting as the control, were recorded while sustaining a vowel, reading a modified Rainbow Passage, and singing "America the Beautiful." Acoustic measures included fundamental frequency, duration, percent jitter, percent shimmer, noise-to-harmonic ratio, and determination of the presence or absence of both vibrato and the singer's formant. Results indicated that, whereas certain acoustic parameters differentiated singers from nonsingers within sex, no consistently significant trends were found across males and females for either speaking or singing. The most consistent differences were the presence or absence of the singer's vibrato and formant in the singers versus the nonsingers, respectively. Perceptual analysis indicated that singers could be correctly identified with greater frequency than by chance alone from their singing, but not their speaking utterances.

  8. Vowels Development in Babbling of typically developing 6-to-12-month old Persian-learning Infants.

    PubMed

    Fotuhi, Mina; Yadegari, Fariba; Teymouri, Robab

    2017-10-01

    Pre-linguistic vocalizations including early consonants, vowels, and their combinations into syllables are considered as important predictors of the speech and language development. The purpose of this study was to examine vowel development in babblings of normally developing Persian-learning infants. Eight typically developing 6-8-month-old Persian-learning infants (3 boys and 5 girls) participated in this 4-month longitudinal descriptive-analytic study. A weekly 30-60-minute audio- and video-recording was obtained at home from the comfort state vocalizations of infants and the mother-child interactions. A total of 74:02:03 hours of vocalizations were phonetically transcribed. Seven vowels comprising /i/,/e/,/a/,/u/,/o/,/ɑ/, and /ә/ were identified in the babblings. The inter-rater reliability was obtained for 20% of vocalizations. The data were analyzed by repeated measures ANOVA and Pearson's correlation coefficient using SPSS software version 20. The results showed that two vowels /a/ (46.04) and /e/ (23.60) were produced with the highest mean frequency of occurrence, respectively. Regarding front/back dimension, the front vowels were the most prominent ones (71.87); in terms of height, low (46.78) and mid (32.45) vowels occurred maximally. A good inter-rater reliability was obtained (0.99, P < .01). The increased frequency of occurrence of the low and mid front vowels in the current study was consistent with previous studies on the emergence of vowels in pre-linguistic vocalization in other languages.

  9. Vowel production, speech-motor control, and phonological encoding in people who are lesbian, bisexual, or gay, and people who are not

    NASA Astrophysics Data System (ADS)

    Munson, Benjamin; Deboe, Nancy

    2003-10-01

    A recent study (Pierrehumbert, Bent, Munson, and Bailey, submitted) found differences in vowel production between people who are lesbian, bisexual, or gay (LBG) and people who are not. The specific differences (more fronted /u/ and /a/ in the non-LB women; an overall more-contracted vowel space in the non-gay men) were not amenable to an interpretation based on simple group differences in vocal-tract geometry. Rather, they suggested that differences were either due to group differences in some other skill, such as motor control or phonological encoding, or learned. This paper expands on this research by examining vowel production, speech-motor control (measured by diadochokinetic rates), and phonological encoding (measured by error rates in a tongue-twister task) in people who are LBG and people who are not. Analyses focus on whether the findings of Pierrehumbert et al. (submitted) are replicable, and whether group differences in vowel production are related to group differences in speech-motor control or phonological encoding. To date, 20 LB women, 20 non-LB women, 7 gay men, and 7 non-gay men have participated. Preliminary analyses suggest that there are no group differences in speech motor control or phonological encoding, suggesting that the earlier findings of Pierrehumbert et al. reflected learned behaviors.

  10. Differential effects of speech situations on mothers' and fathers' infant-directed and dog-directed speech: An acoustic analysis.

    PubMed

    Gergely, Anna; Faragó, Tamás; Galambos, Ágoston; Topál, József

    2017-10-23

    There is growing evidence that dog-directed and infant-directed speech have similar acoustic characteristics, like high overall pitch, wide pitch range, and attention-getting devices. However, it is still unclear whether dog- and infant-directed speech have gender- or context-dependent acoustic features. In the present study, we collected comparable infant-, dog-, and adult-directed speech samples (IDS, DDS, and ADS) in four different speech situations (Storytelling, Task solving, Teaching, and Fixed sentences situations); we obtained the samples from parents whose infants were younger than 30 months of age and who also had a pet dog at home. We found that ADS was different from IDS and DDS, independently of the speakers' gender and the given situation. Higher overall pitch in DDS than in IDS during free situations was also found. Our results show that both parents hyperarticulate their vowels when talking to children but not when addressing dogs: this result is consistent with the goal of hyperspeech in language tutoring. Mothers, however, exaggerate their vowels for their infants under 18 months more than fathers do. Our findings suggest that IDS and DDS have context-dependent features and support the notion that people adapt their prosodic features to the acoustic preferences and emotional needs of their audience.

  11. The Roles of Vowel Fronting, Lengthening, and Listener Variables in the Perception of Vocal Femininity

    ERIC Educational Resources Information Center

    Shport, Irina A.

    2018-01-01

    Purpose: The goal of this study was to test whether fronting and lengthening of lax vowels influence the perception of femininity in listeners whose dialect is characterized as already having relatively fronted and long lax vowels in male and female speech. Method: Sixteen English words containing four lax vowels were produced by a male…

  12. Acoustical Testing Laboratory Developed to Support the Low-Noise Design of Microgravity Space Flight Hardware

    NASA Technical Reports Server (NTRS)

    Cooper, Beth A.

    2001-01-01

    The NASA John H. Glenn Research Center at Lewis Field has designed and constructed an Acoustical Testing Laboratory to support the low-noise design of microgravity space flight hardware. This new laboratory will provide acoustic emissions testing and noise control services for a variety of customers, particularly for microgravity space flight hardware that must meet International Space Station limits on noise emissions. These limits have been imposed by the space station to support hearing conservation, speech communication, and safety goals as well as to prevent noise-induced vibrations that could impact microgravity research data. The Acoustical Testing Laboratory consists of a 23 by 27 by 20 ft (height) convertible hemi/anechoic chamber and separate sound-attenuating test support enclosure. Absorptive 34-in. fiberglass wedges in the test chamber provide an anechoic environment down to 100 Hz. A spring-isolated floor system affords vibration isolation above 3 Hz. These criteria, along with very low design background levels, will enable the acquisition of accurate and repeatable acoustical measurements on test articles, up to a full space station rack in size, that produce very little noise. Removable floor wedges will allow the test chamber to operate in either a hemi/anechoic or anechoic configuration, depending on the size of the test article and the specific test being conducted. The test support enclosure functions as a control room during normal operations but, alternatively, may be used as a noise-control enclosure for test articles that require the operation of noise-generating test support equipment.

  13. THE INFLUENCE OF LEXICAL FACTORS ON VOWEL DISTINCTIVENESS: EFFECTS OF JAW POSITIONING.

    PubMed

    Munson, Benjamin; Solomon, Nancy Pearl

    2016-11-01

    The phonetic characteristics of words are influenced by lexical characteristics, including word frequency and phonological neighborhood density (Baese-Berke & Goldrick, 2009; Wright, 2004). In our previous research, we replicated this effect with neurologically healthy young adults (Munson & Solomon, 2004). In research with the same set of participants, we showed that speech sounded less natural when produced with bite blocks than with an unconstrained jaw (Solomon, Makashay, & Munson, 2016). The current study combined these concepts to examine whether a bite-block perturbation exaggerated or reduced the effects of lexical factors on normal speech. Ten young adults produced more challenging lexical stimuli (i.e., infrequent words with many phonological neighbors) with shorter vowels and more dispersed F1/F2 spaces than less challenging words (i.e., frequent words with few phonological neighbors). This difference was exaggerated when speaking with a 10-mm bite block, though the interaction between jaw positioning and lexical competition did not achieve statistical significance. Results indicate that talkers alter vowel characteristics in response to both biomechanical and linguistic demands, and that the effect of lexical characteristics is robust to the articulatory reorganization required for successful bite-block compensation.
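
    The F1/F2 dispersion described above is often quantified as the mean Euclidean distance of vowel tokens from the centroid of the talker's F1/F2 space. A small sketch; the token formant values are hypothetical, not data from the study:

```python
import math

def vowel_dispersion(tokens):
    """Mean Euclidean distance (Hz) of (F1, F2) tokens from their centroid."""
    n = len(tokens)
    cf1 = sum(f1 for f1, _ in tokens) / n
    cf2 = sum(f2 for _, f2 in tokens) / n
    return sum(math.hypot(f1 - cf1, f2 - cf2) for f1, f2 in tokens) / n

# Hypothetical (F1, F2) tokens in Hz; larger values mean a more dispersed space
tokens = [(300, 2200), (650, 1700), (700, 1200), (350, 1000)]
print(round(vowel_dispersion(tokens), 1))  # -> 465.5
```

    Under this measure, "more dispersed F1/F2 spaces" for challenging words corresponds to a larger mean distance from the centroid.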

  14. Gender identification from high-pass filtered vowel segments: the use of high-frequency energy.

    PubMed

    Donai, Jeremy J; Lass, Norman J

    2015-10-01

    The purpose of this study was to examine the use of high-frequency information for making gender identity judgments from high-pass filtered vowel segments produced by adult speakers. Specifically, the effect of removing lower-frequency spectral detail (i.e., F3 and below) from vowel segments via high-pass filtering was evaluated. Thirty listeners (ages 18-35) with normal hearing participated in the experiment. A within-subjects design was used to measure gender identification for six 250-ms vowel segments (/æ/, /ɪ/, /ɝ/, /ʌ/, /ɔ/, and /u/), produced by ten male and ten female speakers. The results of this experiment demonstrated that despite the removal of low-frequency spectral detail, the listeners were accurate in identifying speaker gender from the vowel segments, and did so with performance significantly above chance. The removal of low-frequency spectral detail reduced gender identification by approximately 16% relative to unfiltered vowel segments. Classification results using linear discriminant function analyses followed the perceptual data, using spectral and temporal representations derived from the high-pass filtered segments. Cumulatively, these findings indicate that normal-hearing listeners are able to make accurate perceptual judgments regarding speaker gender from vowel segments with low-frequency spectral detail removed via high-pass filtering. Therefore, it is reasonable to suggest the presence of perceptual cues related to gender identity in the high-frequency region of naturally produced vowel signals. Implications of these findings and possible mechanisms for performing the gender identification task from high-pass filtered stimuli are discussed.
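A linear discriminant classification of the kind the abstract mentions can be sketched with a two-class Fisher discriminant. This is a toy illustration in the spirit of the study's analysis, not its implementation; the "high-frequency spectral features" and all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-frequency spectral features (e.g., spectral mean and
# spectral s.d. above F3, in Hz) for ten male and ten female talkers.
male   = rng.normal([5500.0, 400.0], [200.0, 50.0], size=(10, 2))
female = rng.normal([6500.0, 550.0], [200.0, 50.0], size=(10, 2))

# Two-class Fisher linear discriminant: w = Sw^-1 (mu1 - mu0)
X = np.vstack([male, female])
y = np.array([0] * 10 + [1] * 10)
mu0, mu1 = male.mean(axis=0), female.mean(axis=0)
Sw = np.cov(male.T) * 9 + np.cov(female.T) * 9   # pooled within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2                  # midpoint decision boundary

pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

With well-separated simulated classes the discriminant classifies nearly all tokens correctly; real vowel data would of course be messier.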

  15. Cross-Linguistic Differences in the Immediate Serial Recall of Consonants versus Vowels

    ERIC Educational Resources Information Center

    Kissling, Elizabeth M.

    2012-01-01

    The current study investigated native English and native Arabic speakers' phonological short-term memory for sequences of consonants and vowels. Phonological short-term memory was assessed in immediate serial recall tasks conducted in Arabic and English for both groups. Participants (n = 39) heard series of six consonant-vowel syllables and wrote…

  16. Perceptual Training of Second-Language Vowels: Does Musical Ability Play a Role?

    ERIC Educational Resources Information Center

    Ghaffarvand Mokari, Payam; Werner, Stefan

    2018-01-01

    The present study attempts to extend the research on the effects of phonetic training on the production and perception of second-language (L2) vowels. We also examined whether success in learning L2 vowels through high-variability intensive phonetic training is related to the learners' general musical abilities. Forty Azerbaijani learners of…

  17. A wideband fast multipole boundary element method for half-space/plane-symmetric acoustic wave problems

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Chen, Hai-Bo; Chen, Lei-Lei

    2013-04-01

    This paper presents a novel wideband fast multipole boundary element approach to 3D half-space/plane-symmetric acoustic wave problems. The half-space fundamental solution is employed in the boundary integral equations so that the tree structure required in the fast multipole algorithm is constructed for the boundary elements in the real domain only. Moreover, a set of symmetric relations between the multipole expansion coefficients of the real and image domains are derived, and the half-space fundamental solution is modified for the purpose of applying such relations to avoid calculating, translating and saving the multipole/local expansion coefficients of the image domain. The wideband adaptive multilevel fast multipole algorithm associated with the iterative solver GMRES is employed so that the present method is accurate and efficient for both low- and high-frequency acoustic wave problems. As for exterior acoustic problems, the Burton-Miller method is adopted to tackle the fictitious eigenfrequency problem involved in the conventional boundary integral equation method. Details on the implementation of the present method are described, and numerical examples are given to demonstrate its accuracy and efficiency.
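The image-source construction behind a half-space fundamental solution can be sketched as follows. This is a minimal illustration for a rigid (sound-hard) plane at z = 0, where the kernel is the free-field Helmholtz Green's function plus its mirror-image term; the paper's modified kernel and multipole machinery are not reproduced here.

```python
import numpy as np

def halfspace_green(x, y, k):
    """Half-space fundamental solution for a rigid plane at z = 0.

    Superposes the free-field 3D Helmholtz Green's function
    G(r) = exp(i k r) / (4 pi r) with the contribution of the source
    mirrored across the plane, so that dG/dz vanishes on z = 0.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    y_img = y * np.array([1.0, 1.0, -1.0])   # mirror the source in z = 0
    r  = np.linalg.norm(x - y)
    ri = np.linalg.norm(x - y_img)
    g = lambda d: np.exp(1j * k * d) / (4 * np.pi * d)
    return g(r) + g(ri)

# On the plane itself the two path lengths coincide, so the half-space
# kernel is exactly twice the free-field kernel (pressure doubling).
src = [0.0, 0.0, 1.0]
obs = [0.5, 0.2, 0.0]
k = 2 * np.pi * 500 / 343.0                  # 500 Hz in air
r = np.linalg.norm(np.subtract(obs, src))
print(np.isclose(halfspace_green(obs, src, k),
                 2 * np.exp(1j * k * r) / (4 * np.pi * r)))  # True
```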

  18. Acoustic analysis of trill sounds.

    PubMed

    Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri

    2012-04-01

    In this paper, the acoustic-phonetic characteristics of steady apical trills--trill sounds produced by the periodic vibration of the apex of the tongue--are studied. Signal processing methods, namely, zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect the effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are glottal epochs, strength of impulses at the glottal epochs, and instantaneous fundamental frequency of the glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of vocal tract system during the production of trill sounds. Qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help spotting trills in continuous speech are discussed.

  19. Early and Late Spanish-English Bilingual Adults' Perception of American English Vowels

    ERIC Educational Resources Information Center

    Baigorri, Miriam

    2016-01-01

    Increasing numbers of Hispanic immigrants are entering the US (US Census Bureau, 2011) and are learning American English (AE) as a second language (L2). Many may experience difficulty in understanding AE. Accurate perception of AE vowels is important because vowels carry a large part of the speech signal (Kewley-Port, Burkle, & Lee, 2007). The…

  20. Now you hear it, now you don't: vowel devoicing in Japanese infant-directed speech.

    PubMed

    Fais, Laurel; Kajikawa, Sachiyo; Amano, Shigeaki; Werker, Janet F

    2010-03-01

    In this work, we examine a context in which a conflict arises between two roles that infant-directed speech (IDS) plays: making language structure salient and modeling the adult form of a language. Vowel devoicing in fluent adult Japanese creates violations of the canonical Japanese consonant-vowel word structure pattern by systematically devoicing particular vowels, yielding surface consonant clusters. We measured vowel devoicing rates in a corpus of infant- and adult-directed Japanese speech, for both read and spontaneous speech, and found that the mothers in our study preserve the fluent adult form of the language and mask underlying phonological structure by devoicing vowels in infant-directed speech at virtually the same rates as those for adult-directed speech. The results highlight the complex interrelationships among the modifications to adult speech that comprise infant-directed speech, and that form the input from which infants begin to build the eventual mature form of their native language.

  1. Auditory-Perceptual and Acoustic Methods in Measuring Dysphonia Severity of Korean Speech.

    PubMed

    Maryn, Youri; Kim, Hyung-Tae; Kim, Jaeock

    2016-09-01

    The purpose of this study was to explore the criterion-related concurrent validity of two standardized auditory-perceptual rating protocols and the Acoustic Voice Quality Index (AVQI) for measuring dysphonia severity in Korean speech. Sixty native Korean subjects with various voice disorders were asked to sustain the vowel [a:] and to read aloud the Korean text "Walk." A 3-second midvowel portion of the sustained vowel and two sentences (with 25 syllables) were edited, concatenated, and analyzed according to methods described elsewhere. From 56 participants, both continuous speech and sustained vowel recordings had sufficiently high signal-to-noise ratios (35.5 dB and 37 dB on average, respectively) and were therefore subjected to further dysphonia severity analysis with (1) "G" or Grade from the GRBAS protocol, (2) "OS" or Overall Severity from the Consensus Auditory-Perceptual Evaluation of Voice protocol, and (3) AVQI. First, high correlations were found between G and OS (rS = 0.955 for sustained vowels; rS = 0.965 for continuous speech). Second, the AVQI showed a strong correlation with G (rS = 0.911) as well as OS (rP = 0.924). These findings are in agreement with similar studies dealing with continuous speech in other languages. The present study highlights the criterion-related concurrent validity of these methods in Korean speech. Furthermore, it supports the cross-linguistic robustness of the AVQI as a valid and objective marker of overall dysphonia severity. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
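The rank correlations reported above (rS) can be computed, in the no-ties case, as the Pearson correlation of the ranks. A minimal sketch follows; the severity and AVQI values are invented for illustration and are not data from the study.

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation (no-ties case): Pearson r of the ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

# Hypothetical dysphonia data for six voices: CAPE-V Overall Severity
# (0-100) and AVQI scores; numbers are invented for illustration only.
cape_os = [12, 30, 28, 55, 80, 74]
avqi    = [1.8, 2.9, 3.1, 5.0, 7.2, 6.8]
print(f"rS = {spearman(cape_os, avqi):.3f}")   # → rS = 0.943
```

For real data with tied scores, a ties-aware implementation (e.g., midranks) would be needed.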

  2. Effect of urban noise to the acoustical performance of the secondary school’s learning spaces - A case study in Batu Pahat.

    NASA Astrophysics Data System (ADS)

    Tong, Y. G.; Abu Bakar, H.; Mohd. Sari, K. A.; Ewon, U.; Labeni, M. N.; Fauzan, N. F. A.

    2017-11-01

    Classrooms and laboratories are important spaces used for teaching and learning in schools, so good acoustical performance of these spaces is essential to ensure that speech from the teacher is delivered to the students effectively and clearly. The aim of this study was to determine the acoustical performance of teaching and learning spaces in a public school situated near busy traffic roads. The acoustical performance of the classrooms and laboratories at Sekolah Menengah Kebangsaan Convent Batu Pahat was evaluated. The reverberation time and ambient noise of these learning spaces, the main parameters in classroom design criteria, were measured. Field measurements were carried out in six classrooms and four laboratories, unoccupied but furnished, in accordance with international standards. The acoustical performance of the tested learning spaces was poor: the noise criteria and reverberation times inside the measured classrooms and laboratories were higher than the recommended values.
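Reverberation-time criteria of the kind referenced above are often checked at the design stage with Sabine's formula, RT60 = 0.161 V / A, where A is the total absorption in metric sabins. The sketch below uses an invented hard-finished classroom, not measurements from this study.

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine reverberation time: RT60 = 0.161 * V / A, where the total
    absorption A = sum(S_i * alpha_i) is in metric sabins."""
    A = sum(S * alpha for S, alpha in surfaces)
    return 0.161 * volume_m3 / A

# Hypothetical 7 m x 9 m x 3 m classroom with hard finishes
# (surface area in m^2, absorption coefficient alpha)
room = [
    (63.0, 0.02),   # floor: concrete
    (63.0, 0.10),   # ceiling: plaster
    (96.0, 0.03),   # walls: painted masonry
]
rt = sabine_rt60(7 * 9 * 3, room)
print(f"RT60 = {rt:.2f} s")   # well above the ~0.6 s commonly recommended
```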

  3. Rapid Learning of Minimally Different Words in Five- to Six-Year-Old Children: Effects of Acoustic Salience and Hearing Impairment

    ERIC Educational Resources Information Center

    Giezen, Marcel R.; Escudero, Paola; Baker, Anne E.

    2016-01-01

    This study investigates the role of acoustic salience and hearing impairment in learning phonologically minimal pairs. Picture-matching and object-matching tasks were used to investigate the learning of consonant and vowel minimal pairs in five- to six-year-old deaf children with a cochlear implant (CI), and children of the same age with normal…

  4. A speech processing study using an acoustic model of a multiple-channel cochlear implant

    NASA Astrophysics Data System (ADS)

    Xu, Ying

    1998-10-01

    A cochlear implant is an electronic device designed to provide sound information for adults and children who have bilateral profound hearing loss. The task of representing speech signals as electrical stimuli is central to the design and performance of cochlear implants. Studies have shown that the current speech- processing strategies provide significant benefits to cochlear implant users. However, the evaluation and development of speech-processing strategies have been complicated by hardware limitations and large variability in user performance. To alleviate these problems, an acoustic model of a cochlear implant with the SPEAK strategy is implemented in this study, in which a set of acoustic stimuli whose psychophysical characteristics are as close as possible to those produced by a cochlear implant are presented on normal-hearing subjects. To test the effectiveness and feasibility of this acoustic model, a psychophysical experiment was conducted to match the performance of a normal-hearing listener using model- processed signals to that of a cochlear implant user. Good agreement was found between an implanted patient and an age-matched normal-hearing subject in a dynamic signal discrimination experiment, indicating that this acoustic model is a reasonably good approximation of a cochlear implant with the SPEAK strategy. The acoustic model was then used to examine the potential of the SPEAK strategy in terms of its temporal and frequency encoding of speech. It was hypothesized that better temporal and frequency encoding of speech can be accomplished by higher stimulation rates and a larger number of activated channels. Vowel and consonant recognition tests were conducted on normal-hearing subjects using speech tokens processed by the acoustic model, with different combinations of stimulation rate and number of activated channels. 
The results showed that vowel recognition was best at 600 pps and 8 activated channels, but further increases in stimulation rate and

  5. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    PubMed Central

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we consider all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980

  6. Acoustic characteristics of Punjabi retroflex and dental stops.

    PubMed

    Hussain, Qandeel; Proctor, Michael; Harvey, Mark; Demuth, Katherine

    2017-06-01

    The phonological category "retroflex" is found in many Indo-Aryan languages; however, it has not been clearly established which acoustic characteristics reliably differentiate retroflexes from other coronals. This study investigates the acoustic phonetic properties of Punjabi retroflex /ʈ/ and dental /t̪/ in word-medial and word-initial contexts across /i e a o u/, and in word-final context across /i a u/. Formant transitions, closure and release durations, and spectral moments of release bursts are compared in 2280 stop tokens produced by 30 speakers. Although burst spectral measures and formant transitions do not consistently differentiate retroflexes from dentals in some vowel contexts, stop release duration and total stop duration reliably differentiate Punjabi retroflex and dental stops across all word contexts and vocalic environments. These results suggest that Punjabi coronal place contrasts are signaled by the complex interaction of temporal and spectral cues.

  7. Response of space shuttle insulation panels to acoustic noise pressure

    NASA Technical Reports Server (NTRS)

    Vaicaitis, R.

    1976-01-01

    The response of reusable space shuttle insulation panels to random acoustic pressure fields is studied. The basic analytical approach in formulating the governing equations of motion uses a Rayleigh-Ritz technique. The input pressure field is modeled as a stationary Gaussian random process for which the cross-spectral density function is known empirically from experimental measurements. The response calculations are performed in both the frequency and time domains.

  8. Perception of speaker size and sex of vowel sounds

    NASA Astrophysics Data System (ADS)

    Smith, David R. R.; Patterson, Roy D.

    2005-04-01

    Glottal-pulse rate (GPR) and vocal-tract length (VTL) are both related to speaker size and sex; however, it is unclear how they interact to determine our perception of speaker size and sex. Experiments were designed to measure the relative contribution of GPR and VTL to judgements of speaker size and sex. Vowels were scaled to represent people with different GPRs and VTLs, including many well beyond the normal population values. In a single-interval, two-response rating paradigm, listeners judged the size (using a 7-point scale) and sex/age of the speaker (man, woman, boy, or girl) of these scaled vowels. Results from the size-rating experiments show that VTL has a much greater influence upon judgements of speaker size than GPR. Results from the sex-categorization experiments show that judgements of speaker sex are influenced about equally by GPR and VTL for vowels with normal GPR and VTL values. For abnormal combinations of GPR and VTL, where low GPRs are combined with short VTLs, VTL has more influence than GPR in sex judgements. [Work supported by the UK MRC (G9901257) and the German Volkswagen Foundation (VWF 1/79 783).]

  9. Evaluating Computational Models in Cognitive Neuropsychology: The Case from the Consonant/Vowel Distinction

    ERIC Educational Resources Information Center

    Knobel, Mark; Caramazza, Alfonso

    2007-01-01

    Caramazza et al. [Caramazza, A., Chialant, D., Capasso, R., & Miceli, G. (2000). Separable processing of consonants and vowels. "Nature," 403(6768), 428-430.] report two patients who exhibit a double dissociation between consonants and vowels in speech production. The patterning of this double dissociation cannot be explained by appealing to…

  10. Effect of vowel context on test-retest nasalance score variability in children with and without cleft palate.

    PubMed

    Ha, Seunghee; Jung, Seungeun; Koh, Kyung S

    2018-06-01

    The purpose of this study was to determine whether test-retest nasalance score variability differs between Korean children with and without cleft palate (CP) and vowel context influences variability in nasalance score. Thirty-four 3-to-5-year-old children with and without CP participated in the study. Three 8-syllable speech stimuli devoid of nasal consonants were used for data collection. Each stimulus was loaded with high, low, or mixed vowels, respectively. All participants were asked to repeat the speech stimuli twice after the examiner, and an immediate test-retest nasalance score was assessed with no headgear change. Children with CP exhibited significantly greater absolute difference in nasalance scores than children without CP. Variability in nasalance scores was significantly different for the vowel context, and the high vowel sentence showed a significantly larger difference in nasalance scores than the low vowel sentence. The cumulative frequencies indicated that, for children with CP in the high vowel sentence, only 8 of 17 (47%) repeated nasalance scores were within 5 points. Test-retest nasalance score variability was greater for children with CP than children without CP, and there was greater variability for the high vowel sentence(s) for both groups. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. A general radiation model for sound fields and nearfield acoustical holography in wedge propagation spaces.

    PubMed

    Hoffmann, Falk-Martin; Fazi, Filippo Maria; Williams, Earl G; Fontana, Simone

    2017-09-01

    In this work an expression for the solution of the Helmholtz equation for wedge spaces is derived. Such propagation spaces represent scenarios for many acoustical problems where a free field assumption is not eligible. The proposed sound field model is derived from the general solution of the wave equation in cylindrical coordinates, using sets of orthonormal basis functions. The latter are modified to satisfy several boundary conditions representing the reflective behaviour of wedge-shaped propagation spaces. This formulation is then used in the context of nearfield acoustical holography (NAH) and to obtain the expression of the Neumann Green function. The model and its suitability for NAH is demonstrated through both numerical simulations and measured data, where the latter was acquired for the specific case of a loudspeaker on a hemi-cylindrical rigid baffle.

  12. Acoustics and perception of overtone singing.

    PubMed

    Bloothooft, G; Bringmann, E; van Cappellen, M; van Luipen, J B; Thomassen, K P

    1992-10-01

    Overtone singing, a technique of Asian origin, is a special type of voice production resulting in a very pronounced, high and separate tone that can be heard over a more or less constant drone. An acoustic analysis is presented of the phenomenon and the results are described in terms of the classical theory of speech production. The overtone sound may be interpreted as the result of an interaction of closely spaced formants. For the lower overtones, these may be the first and second formant, separated from the lower harmonics by a nasal pole-zero pair, as the result of a nasalized articulation shifting from /ɔ/ to /a/, or, as an alternative, the second formant alone, separated from the first formant by the nasal pole-zero pair, again as the result of a nasalized articulation around /ɔ/. For overtones with a frequency higher than 800 Hz, the overtone sound can be explained as a combination of the second and third formant as the result of a careful, retroflex, and rounded articulation from /ɔ/, via schwa /ə/ to /y/ and /i/ for the highest overtones. The results indicate a firm and relatively long closure of the glottis during overtone phonation. The corresponding short open duration of the glottis introduces a glottal formant that may enhance the amplitude of the intended overtone. Perception experiments showed that listeners categorized the overtone sounds differently from normally sung vowels, which possibly has its basis in an independent perception of the small bandwidth of the resonance underlying the overtone. Their verbal judgments were in agreement with the presented phonetic-acoustic explanation.

  13. A model of acoustic interspeaker variability based on the concept of formant-cavity affiliation

    NASA Astrophysics Data System (ADS)

    Apostol, Lian; Perrier, Pascal; Bailly, Gérard

    2004-01-01

    A method is proposed to model the interspeaker variability of formant patterns for oral vowels. It is assumed that this variability originates in the differences existing among speakers in the respective lengths of their front and back vocal-tract cavities. In order to characterize, from the spectral description of the acoustic speech signal, these vocal-tract differences between speakers, each formant is interpreted, according to the concept of formant-cavity affiliation, as a resonance of a specific vocal-tract cavity. Its frequency can thus be directly related to the corresponding cavity length, and a transformation model can be proposed from a speaker A to a speaker B on the basis of the frequency ratios of the formants corresponding to the same resonances. In order to minimize the number of sounds to be recorded for each speaker in order to carry out this speaker transformation, the frequency ratios are exactly computed only for the three extreme cardinal vowels [i, a, u] and they are approximated for the remaining vowels through an interpolation function. The method is evaluated through its capacity to transform the (F1,F2) formant patterns of eight oral vowels pronounced by five male speakers into the (F1,F2) patterns of the corresponding vowels generated by an articulatory model of the vocal tract. The resulting formant patterns are compared to those provided by normalization techniques published in the literature. The proposed method is found to be efficient, but a number of limitations are also observed and discussed. These limitations can be associated with the formant-cavity affiliation model itself or with a possible influence of speaker-specific vocal-tract geometry in the cross-sectional direction, which the model might not have taken into account.
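The core of the transformation can be sketched with a first-order approximation, not the authors' implementation: if a formant is affiliated to a cavity of length L behaving as a quarter-wave resonator, F ≈ c / (4L), then mapping speaker A onto speaker B amounts to multiplying each formant by the length ratio of its affiliated cavity. Cavity lengths below are invented.

```python
C = 343.0  # speed of sound in air, m/s

def quarter_wave_formant(length_m):
    """Resonance of a closed-open tube of the given length: F = c / (4 L)."""
    return C / (4.0 * length_m)

def transform_formant(f_a, len_a, len_b):
    """Scale speaker A's formant by the affiliated cavity-length ratio A/B."""
    return f_a * (len_a / len_b)

# Hypothetical back-cavity lengths for two speakers: 10 cm vs. 12 cm
f1_a = quarter_wave_formant(0.10)          # 857.5 Hz for the 10 cm cavity
f1_b = transform_formant(f1_a, 0.10, 0.12)
print(round(f1_b, 1))                      # → 714.6
# Consistency check: the transformed formant equals the resonance of
# speaker B's cavity computed directly from its length.
```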

  14. Experimental investigation of acoustic self-oscillation influence on decay process for underexpanded supersonic jet in submerged space

    NASA Astrophysics Data System (ADS)

    Aleksandrov, V. Yu.; Arefyev, K. Yu.; Ilchenko, M. A.

    2016-07-01

    Intensifying the mixing of a gaseous working body ejected through a jet nozzle with the ambient medium is an important scientific and technical problem. Effective mixing can increase the total efficiency of power and propulsion apparatuses. A promising, though poorly studied, approach is the generation of acoustic self-oscillations inside the jet nozzle: this impact might enhance the decay of a supersonic jet and improve the mixing parameters. The paper describes the peculiar properties of acoustic self-excitation in a jet nozzle and presents results of an experimental study performed on a model injector with a set of plates placed in the flow channel, enabling the excitation of acoustic self-oscillations. The study reveals the regularities of underexpanded supersonic jet decay in submerged space for different flow modes. The experimental data support the efficiency of using a jet nozzle with acoustic self-oscillations in gas-fuel supply systems. The experimental results can be used for designing new power apparatuses for the aviation and space industries and for process plants.

  15. A Psychological Experiment on the Correspondence between Colors and Voiced Vowels in Non-synesthetes'

    NASA Astrophysics Data System (ADS)

    Miyahara, Tomoko; Koda, Ai; Sekiguchi, Rikuko; Amemiya, Toshihiko

    In this study, we investigated the nature of cross-modal associations between colors and vowels. In Experiment 1, we examined the patterns of synesthetic correspondence between colors and vowels in a perceptual similarity experiment. The results were as follows: red was chosen for /a/, yellow was chosen for /i/, and blue was chosen for /o/ significantly more than for any other vowels. Interestingly, this pattern of correspondence is similar to the pattern of colored hearing reported by synesthetes. In Experiment 2, we investigated the robustness of these cross-modal associations using an implicit association test (IAT). A clear congruence effect was found: participants responded faster in congruent conditions (/i/ and yellow, /o/ and blue) than in incongruent conditions (/i/ and blue, /o/ and yellow). This result suggests that the weak synesthesia between vowels and colors in non-synesthetes is not merely a matter of conscious choice, but reflects underlying implicit associations.

  16. Space shuttle maneuvering engine reusable thrust chamber program. Task 11: Stability analyses and acoustic model testing data dump

    NASA Technical Reports Server (NTRS)

    Oberg, C. L.

    1974-01-01

    The combustion stability characteristics of engines applicable to the Space Shuttle Orbit Maneuvering System and the adequacy of acoustic cavities as a means of assuring stability in these engines were investigated. The study comprised full-scale stability rating tests, bench-scale acoustic model tests and analysis. Two series of stability rating tests were made. Acoustic model tests were made to determine the resonance characteristics and effects of acoustic cavities. Analytical studies were done to aid design of the cavity configurations to be tested and, also, to aid evaluation of the effectiveness of acoustic cavities from available test results.

  17. The relative degree of difficulty of L2 Spanish /d, t/, trill, and tap by L1 English speakers: Auditory and acoustic methods of defining pronunciation accuracy

    NASA Astrophysics Data System (ADS)

    Waltmunson, Jeremy C.

    2005-07-01

    This study has investigated the L2 acquisition of Spanish word-medial /d, t, r, ɾ/, word-initial /r/, and onset-cluster /ɾ/. Two similar experiments were designed to address the relative degree of difficulty of the word-medial contrasts, as well as the effect of word position on /r/ and /ɾ/ accuracy scores. In addition, the effect of vowel height on the production of [r] and the L2 emergence of the svarabhakti vowel in onset-cluster /ɾ/ were investigated. Participants included 34 L1 English speakers from a range of L2 Spanish levels who were recorded in multiple sessions across a 6-month or 2-month period. The criteria for assessing segment accuracy were based on auditory and acoustic features found in productions by native Spanish speakers. In order to be scored as accurate, the L2 productions had to evidence both the auditory and acoustic features found in native-speaker productions. L2 participant scores for each target were normalized in order to account for the variation of features found across native-speaker productions. The results showed that word-medial accuracy scores followed two significant rankings (from lowest to highest): /r <= d <= ɾ <= t/ and /r <= ɾ <= d <= t/; however, when scores for /t/ included a voice onset time criterion, only the ranking /r <= ɾ <= d <= t/ was significant. These results suggest that /r/ is most difficult for learners while /t/ is least difficult, although individual variation was found. Regarding /r/, there was a strong effect of word position and vowel height on accuracy scores. For productions of /ɾ/, there was a strong effect of syllable position on accuracy scores. Acoustic analyses of taps in onset clusters revealed that only the experienced L2 Spanish participants demonstrated svarabhakti vowel emergence with native-like performance, suggesting that its emergence occurs relatively late in L2 acquisition.

  18. Acoustic cues to perception of word stress by English, Mandarin, and Russian speakers.

    PubMed

    Chrabaszcz, Anna; Winn, Matthew; Lin, Candise Y; Idsardi, William J

    2014-08-01

    This study investigated how listeners' native language affects their weighting of acoustic cues (such as vowel quality, pitch, duration, and intensity) in the perception of contrastive word stress. Native speakers (N = 45) of typologically diverse languages (English, Russian, and Mandarin) performed a stress identification task on nonce disyllabic words with fully crossed combinations of each of the 4 cues in both syllables. The results revealed that although the vowel quality cue was the strongest cue for all groups of listeners, pitch was the second strongest cue for the English and the Mandarin listeners but was virtually disregarded by the Russian listeners. Duration and intensity cues were used by the Russian listeners to a significantly greater extent compared with the English and Mandarin participants. Compared with when cues were noncontrastive across syllables, cues were stronger when they were in the iambic contour than when they were in the trochaic contour. Although both English and Russian are stress languages and Mandarin is a tonal language, stress perception performance of the Mandarin listeners but not of the Russian listeners is more similar to that of the native English listeners, both in terms of weighting of the acoustic cues and the cues' relative strength in different word positions. The findings suggest that tuning of second-language prosodic perceptions is not entirely predictable by prosodic similarities across languages.
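Cue weights in identification tasks like the one above are commonly estimated by fitting a logistic regression of responses on the binary cue manipulations, with coefficient magnitudes read as weights. The sketch below simulates a listener and recovers the weights with plain-gradient logistic regression; it illustrates the general technique, not the study's analysis, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated identification data: each trial marks whether the first
# syllable carries vowel-quality / pitch enhancement (binary cues);
# the simulated "listener" weights vowel quality more heavily.
n = 2000
X = rng.integers(0, 2, size=(n, 2)).astype(float)   # [vowel_quality, pitch]
true_w = np.array([2.0, 0.8])
p = 1 / (1 + np.exp(-(X @ true_w - 1.4)))
y = rng.random(n) < p                               # 1 = "first syllable stressed"

# Plain-gradient logistic regression; coefficient magnitude ~ cue weight
Xb = np.hstack([X, np.ones((n, 1))])                # add intercept column
w = np.zeros(3)
for _ in range(3000):
    grad = Xb.T @ (1 / (1 + np.exp(-Xb @ w)) - y) / n
    w -= 0.5 * grad

print(f"estimated weights: vowel quality {w[0]:.2f}, pitch {w[1]:.2f}")
```

The recovered vowel-quality coefficient exceeds the pitch coefficient, mirroring the intended weighting of the simulated listener.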

  19. Acoustic Cues to Perception of Word Stress by English, Mandarin, and Russian Speakers

    PubMed Central

    Chrabaszcz, Anna; Winn, Matthew; Lin, Candise Y.; Idsardi, William J.

    2017-01-01

    Purpose This study investigated how listeners’ native language affects their weighting of acoustic cues (such as vowel quality, pitch, duration, and intensity) in the perception of contrastive word stress. Method Native speakers (N = 45) of typologically diverse languages (English, Russian, and Mandarin) performed a stress identification task on nonce disyllabic words with fully crossed combinations of each of the 4 cues in both syllables. Results The results revealed that although the vowel quality cue was the strongest cue for all groups of listeners, pitch was the second strongest cue for the English and the Mandarin listeners but was virtually disregarded by the Russian listeners. Duration and intensity cues were used by the Russian listeners to a significantly greater extent compared with the English and Mandarin participants. Compared with when cues were noncontrastive across syllables, cues were stronger when they were in the iambic contour than when they were in the trochaic contour. Conclusions Although both English and Russian are stress languages and Mandarin is a tonal language, stress perception performance of the Mandarin listeners but not of the Russian listeners is more similar to that of the native English listeners, both in terms of weighting of the acoustic cues and the cues’ relative strength in different word positions. The findings suggest that tuning of second-language prosodic perceptions is not entirely predictable by prosodic similarities across languages. PMID:24686836

  20. An assessment of the microgravity and acoustic environments in Space Station Freedom using VAPEPS

    NASA Technical Reports Server (NTRS)

    Bergen, Thomas F.; Scharton, Terry D.; Badilla, Gloria A.

    1992-01-01

    The Vibroacoustic Payload Environment Prediction System (VAPEPS) was used to predict the stationary on-orbit environments in one of the Space Station Freedom modules. The model of the module included the outer structure, equipment and payload racks, avionics, and cabin air and duct systems. Acoustic and vibratory outputs of various source classes were derived and input to the model. Initial results of analyses, performed in one-third octave frequency bands from 10 to 10,000 Hz, show that both the microgravity and acoustic environment requirements will be exceeded in some one-third octave bands with the current SSF design. Further analyses indicate that interior acoustic level requirements will be exceeded even if the microgravity requirements are met.
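    The one-third-octave bands used in the analysis above can be enumerated with the standard base-2 convention (center frequencies at 1000 × 2^(n/3) Hz); a minimal sketch, with illustrative function and parameter names:

```python
import math

def third_octave_centers(f_lo=10.0, f_hi=10000.0, f_ref=1000.0):
    """One-third-octave band centers f_c = f_ref * 2**(n/3) falling in
    [f_lo, f_hi] (base-2 convention; band edges sit at f_c * 2**(±1/6))."""
    n_lo = math.ceil(3 * math.log2(f_lo / f_ref))
    n_hi = math.floor(3 * math.log2(f_hi / f_ref))
    return [f_ref * 2 ** (n / 3) for n in range(n_lo, n_hi + 1)]

centers = third_octave_centers()  # band levels are then summed per band
```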

  1. Acoustic and perceptual aspects of vocal function in children with adenotonsillar hypertrophy--effects of surgery.

    PubMed

    Lundeborg, Inger; Hultcrantz, Elisabeth; Ericsson, Elisabeth; McAllister, Anita

    2012-07-01

    To evaluate the outcome of two types of tonsil surgery (tonsillectomy [TE]+adenoidectomy or tonsillotomy [TT]+adenoidectomy) on vocal function perceptually and acoustically. Sixty-seven children, aged 50-65 months, on the waiting list for tonsil surgery were randomized to TE (n=33) or TT (n=34). Fifty-seven age- and gender-matched healthy preschool children served as controls: 28 of them, aged 48-59 months, served as the control group before surgery, and 29, aged 60-71 months, served as the control group after surgery. Before surgery and 6 months postoperatively, the children were recorded producing three sustained vowels (/ɑ/, /u/, and /i/) and 14 words. The control groups were recorded only once. Three trained speech and language pathologists performed the perceptual analysis using a visual analog scale for eight voice quality parameters. Acoustic analysis of the sustained vowels included average fundamental frequency, jitter percent, shimmer percent, noise-to-harmonic ratio, and the center frequencies of formants 1-3. Before surgery, the children were rated as having more hyponasality and a more compressed/throaty voice (P<0.05) and lower mean pitch (P<0.01) than the control group. They also had higher perturbation measures and lower frequencies of the second and third formants. After surgery, there were no perceptual differences. Perturbation measures decreased but were still higher than those of the control group (P<0.05). Differences in formant frequencies for /i/ and /u/ remained. No differences were found between the two surgical methods. Voice quality is affected perceptually and acoustically by adenotonsillar hypertrophy. After surgery, the voice is perceptually normalized but acoustic differences remain. Outcomes were equal for both surgical methods. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
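    The perturbation measures reported above (jitter and shimmer percent) are commonly defined as the mean absolute difference between consecutive pitch periods (or peak amplitudes) relative to their mean; a sketch of those textbook definitions, not of any analysis software's exact implementation:

```python
def jitter_percent(periods):
    """Local jitter (%): mean |T[i+1] - T[i]| divided by the mean period, x100."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_percent(amplitudes):
    """Local shimmer (%): mean |A[i+1] - A[i]| divided by the mean amplitude, x100."""
    diffs = [abs(b - a) for a, b in zip(amplitudes, amplitudes[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
```

    A perfectly periodic voice yields 0% on both measures; cycle-to-cycle irregularity raises them.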

  2. Within- and across-language spectral and temporal variability of vowels in different phonetic and prosodic contexts: Russian and Japanese

    NASA Astrophysics Data System (ADS)

    Gilichinskaya, Yana D.; Hisagi, Miwako; Law, Franzo F.; Berkowitz, Shari; Ito, Kikuyo

    2005-04-01

    Contextual variability of vowels in three languages with large vowel inventories was examined previously. Here, variability of vowels in two languages with small inventories (Russian, Japanese) was explored. Vowels were produced by three female speakers of each language in four contexts: (Vba) disyllables and in 3-syllable nonsense words (gaC1VC2a) embedded within carrier sentences; contexts included bilabial stops (bVp) in normal rate sentences and alveolar stops (dVt) in both normal and rapid rate sentences. Dependent variables were syllable durations and formant frequencies at syllable midpoint. Results showed very little variation across consonant and rate conditions in formants for /i/ in both languages. Japanese short /u, o, a/ showed fronting (F2 increases) in alveolar context relative to labial context (1.3-2.0 Barks), which was more pronounced in rapid sentences. Fronting of Japanese long vowels was less pronounced (0.3 to 0.9 Barks). Japanese long/short vowel ratios varied with speaking style (syllables versus sentences) and speaking rate. All Russian vowels except /i/ were fronted in alveolar vs labial context (1.1-3.1 Barks) but showed little change in either spectrum or duration with speaking rate. Comparisons of these patterns of variability with American English, French and German vowel results will be discussed.
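    The Bark-scale fronting figures above presuppose a Hz-to-Bark conversion; one common approximation is Traunmüller's (1990) formula, sketched here with illustrative helper names:

```python
def hz_to_bark(f):
    """Traunmueller (1990) approximation of the Bark scale."""
    return 26.81 * f / (1960.0 + f) - 0.53

def fronting_barks(f2_alveolar, f2_labial):
    """F2 increase (fronting) in alveolar vs labial context, in Barks."""
    return hz_to_bark(f2_alveolar) - hz_to_bark(f2_labial)
```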

  3. The Prosodic Licensing of Coda Consonants in Early Speech: Interactions with Vowel Length

    ERIC Educational Resources Information Center

    Miles, Kelly; Yuen, Ivan; Cox, Felicity; Demuth, Katherine

    2016-01-01

    English has a word-minimality requirement that all open-class lexical items must contain at least two moras of structure, forming a bimoraic foot (Hayes, 1995).Thus, a word with either a long vowel, or a short vowel and a coda consonant, satisfies this requirement. This raises the question of when and how young children might learn this…

  4. /ae/ versus /?/: Vowel Fossilization in the Pronunciation of Turkish English Majors: Rehabilitation 1

    ERIC Educational Resources Information Center

    Demirezen, Mehmet

    2017-01-01

    In North American English (NAE) and British English, [ae] and [?] are open vowel phonemes which are articulated by a speaker easily without a build-up of air pressure. Among all English vowels, the greatest problem for most Turkish majors of English is the discrimination of [ae] and [?]. In English, [ae] is called the "short a" or ash,…

  5. The Theory of Adaptive Dispersion and Acoustic-phonetic Properties of Cross-language Lexical-tone Systems

    NASA Astrophysics Data System (ADS)

    Alexander, Jennifer Alexandra

    Lexical-tone languages use fundamental frequency (F0/pitch) to convey word meaning. About 41.8% of the world's languages use lexical tone (Maddieson, 2008), yet those systems are under-studied. I aim to increase our understanding of speech-sound inventory organization by extending to tone-systems a model of vowel-system organization, the Theory of Adaptive Dispersion (TAD) (Liljencrants and Lindblom, 1972). This is a cross-language investigation of whether and how the size of a tonal inventory affects (A) acoustic tone-space size and (B) dispersion of tone categories within the tone-space. I compared five languages with very different tone inventories: Cantonese (3 contour, 3 level tones); Mandarin (3 contour, 1 level tone); Thai (2 contour, 3 level tones); Yoruba (3 level tones only); and Igbo (2 level tones only). Six native speakers (3 female) of each language produced 18 CV syllables in isolation, with each of his/her language's tones, six times. I measured tonal F0 across the vowel at onset, midpoint, and offglide. Tone-space size was the F0 difference in semitones (ST) between each language's highest and lowest tones. Tone dispersion was the F0 distance (ST) between two tones shared by multiple languages. Following the TAD, I predicted that languages with larger tone inventories would have larger tone-spaces. Against expectations, tone-space size was fixed across level-tone languages at midpoint and offglide, and across contour-tone languages (except Thai) at offglide. However, within each language type (level-tone vs. contour-tone), languages with smaller tone inventories had larger tone spaces at onset. Tone-dispersion results were also unexpected. The Cantonese mid-level tone was further dispersed from a tonal baseline than the Yoruba mid-level tone; Cantonese mid-level tone dispersion was therefore greater than theoretically necessary. The Cantonese high-level tone was also further dispersed from baseline than the Mandarin high-level tone, at midpoint.
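    The semitone measures of tone-space size and dispersion above follow the standard log-frequency formula, 12 × log2(f1/f2); a minimal sketch with illustrative helper names:

```python
import math

def semitones(f_high, f_low):
    """F0 distance in semitones: 12 * log2(f_high / f_low)."""
    return 12.0 * math.log2(f_high / f_low)

def tone_space_size(f0_values):
    """Span (ST) between a language's highest and lowest tone targets."""
    return semitones(max(f0_values), min(f0_values))
```

    A one-octave F0 span (e.g., 200 Hz over 100 Hz) is exactly 12 ST by this definition.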

  6. An Amplitude-Based Estimation Method for International Space Station (ISS) Leak Detection and Localization Using Acoustic Sensor Networks

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Madaras, Eric I.

    2009-01-01

    The development of a robust and efficient leak detection and localization system within a space station environment presents a unique challenge. A plausible approach includes the implementation of an acoustic sensor network system that can successfully detect the presence of a leak and determine the location of the leak source. Traditional acoustic detection and localization schemes rely on the phase and amplitude information collected by the sensor array system. Furthermore, the acoustic source signals are assumed to be airborne and far-field. Likewise, there are similar applications in sonar. In solids, there are specialized methods for locating events that are used in geology and in acoustic emission testing that involve sensor arrays and depend on a discernable phase front to the received signal. These methods are ineffective if applied to a sensor detection system within the space station environment. In the case of acoustic signal location, there are significant baffling and structural impediments to the sound path and the source could be in the near-field of a sensor in this particular setting.
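    As a rough illustration of amplitude-only localization (not the method developed in this work), note that under an idealized free-field spherical-spreading assumption amplitude falls as 1/r, so the product of measured amplitude and source distance is the same at every sensor; that invariant permits a grid-search estimate. All names and geometry below are hypothetical:

```python
import math

def locate_by_amplitude(sensors, amps, grid):
    """Toy amplitude-ratio localization: under free-field 1/r spreading,
    A_i * r_i is constant at the true source, so pick the candidate grid
    point minimizing the spread of A_i * r_i. (A sketch only; baffled
    station interiors violate the free-field assumption, as noted above.)"""
    def spread(p):
        prods = [a * math.dist(p, s) for a, s in zip(amps, sensors)]
        mean = sum(prods) / len(prods)
        return sum((q - mean) ** 2 for q in prods)
    return min(grid, key=spread)

# Hypothetical setup: three wall sensors, a source at (1, 1) of strength 10.
sensors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
amps = [10.0 / math.dist((1.0, 1.0), s) for s in sensors]
grid = [(x * 0.5, y * 0.5) for x in range(9) for y in range(9)]
estimate = locate_by_amplitude(sensors, amps, grid)  # -> (1.0, 1.0)
```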

  7. Acoustic source for generating an acoustic beam

    DOEpatents

    Vu, Cung Khac; Sinha, Dipen N.; Pantea, Cristian

    2016-05-31

    An acoustic source for generating an acoustic beam includes a housing; a plurality of spaced-apart piezoelectric layers disposed within the housing; and a non-linear medium filling the space between the layers. Each of the plurality of piezoelectric layers is configured to generate an acoustic wave. The non-linear medium and the plurality of piezoelectric layers have a matching impedance so as to enhance transmission of the acoustic wave generated by each layer through the remaining layers.

  8. A Dynamically Focusing Cochlear Implant Strategy Can Improve Vowel Identification in Noise.

    PubMed

    Arenberg, Julie G; Parkinson, Wendy S; Litvak, Leonid; Chen, Chen; Kreft, Heather A; Oxenham, Andrew J

    2018-03-09

    The standard, monopolar (MP) electrode configuration used in commercially available cochlear implants (CI) creates a broad electrical field, which can lead to unwanted channel interactions. Use of more focused configurations, such as tripolar and phased array, has led to mixed results for improving speech understanding. The purpose of the present study was to assess the efficacy of a physiologically inspired configuration called dynamic focusing, using focused tripolar stimulation at low levels and less focused stimulation at high levels. Dynamic focusing may better mimic cochlear excitation patterns in normal acoustic hearing, while reducing the current levels necessary to achieve sufficient loudness at high levels. Twenty postlingually deafened adult CI users participated in the study. Speech perception was assessed in quiet and in a four-talker babble background noise. Speech stimuli were closed-set spondees in noise, and medial vowels at 50 and 60 dB SPL in quiet and in noise. The signal to noise ratio was adjusted individually such that performance was between 40 and 60% correct with the MP strategy. Subjects were fitted with three experimental strategies matched for pulse duration, pulse rate, filter settings, and loudness on a channel-by-channel basis. The strategies included 14 channels programmed in MP, fixed partial tripolar (σ = 0.8), and dynamic partial tripolar (σ at 0.8 at threshold and 0.5 at the most comfortable level). Fifteen minutes of listening experience was provided with each strategy before testing. Sound quality ratings were also obtained. Speech perception performance for vowel identification in quiet at 50 and 60 dB SPL and for spondees in noise was similar for the three tested strategies. However, performance on vowel identification in noise was significantly better for listeners using the dynamic focusing strategy. Sound quality ratings were similar for the three strategies. Some subjects obtained more benefit than others, with some

  9. From Kratzenstein to Wheatstone: Episodes in the early history of free reed acoustics

    NASA Astrophysics Data System (ADS)

    Cottingham, James P.

    2002-05-01

    In 1780 C. G. Kratzenstein published a paper in St. Petersburg describing a machine which produced vowel sounds using free reeds with resonators of various shapes. This marks a convenient, if arbitrary, starting point for the history of the free reed musical instruments of European origin. These instruments developed rapidly, and by 1850 the accordion, concertina, harmonica, reed organ, and harmonium all had been invented and developed into more or less final form. A key figure in this period is Charles Wheatstone, who not only published papers on acoustical research but was also an inventor and commercially successful manufacturer of musical instruments, most notably the Wheatstone English concertina. Much of Wheatstone's research in acoustics and almost all of his work as an inventor of musical instruments involved free reeds. This paper presents some episodes in the development of the free reed instruments and some examples of acoustical research involving free reeds during the 18th and 19th centuries.

  10. Unidirectional Wave Vector Manipulation in Two-Dimensional Space with an All Passive Acoustic Parity-Time-Symmetric Metamaterials Crystal

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Zhu, Xuefeng; Chen, Fei; Liang, Shanjun; Zhu, Jie

    2018-03-01

    Exploring the concept of non-Hermitian Hamiltonians respecting parity-time symmetry with classical wave systems is of great interest as it enables the experimental investigation of parity-time-symmetric systems through the quantum-classical analogue. Here, we demonstrate unidirectional wave vector manipulation in two-dimensional space, with an all passive acoustic parity-time-symmetric metamaterials crystal. The metamaterials crystal is constructed through interleaving groove- and holey-structured acoustic metamaterials to provide an intrinsic parity-time-symmetric potential that is two-dimensionally extended and curved, which allows the flexible manipulation of unpaired wave vectors. At the transition point from the unbroken to broken parity-time symmetry phase, the unidirectional sound focusing effect (along with reflectionless acoustic transparency in the opposite direction) is experimentally realized over the spectrum. This demonstration confirms the capability of passive acoustic systems to carry the experimental studies on general parity-time symmetry physics and further reveals the unique functionalities enabled by the judiciously tailored unidirectional wave vectors in space.

  11. Phonology, Decoding, and Lexical Compensation in Vowel Spelling Errors Made by Children with Dyslexia

    ERIC Educational Resources Information Center

    Bernstein, Stuart E.

    2009-01-01

    A descriptive study of vowel spelling errors made by children first diagnosed with dyslexia (n = 79) revealed that phonological errors, such as "bet" for "bat", outnumbered orthographic errors, such as "bate" for "bait". These errors were more frequent in nonwords than words, suggesting that lexical context helps with vowel spelling. In a second…

  12. Varying acoustic-phonemic ambiguity reveals that talker normalization is obligatory in speech processing.

    PubMed

    Choi, Ja Young; Hu, Elly R; Perrachione, Tyler K

    2018-04-01

    The nondeterministic relationship between speech acoustics and abstract phonemic representations imposes a challenge for listeners to maintain perceptual constancy despite the highly variable acoustic realization of speech. Talker normalization facilitates speech processing by reducing the degrees of freedom for mapping between encountered speech and phonemic representations. While this process has been proposed to facilitate the perception of ambiguous speech sounds, it is currently unknown whether talker normalization is affected by the degree of potential ambiguity in acoustic-phonemic mapping. We explored the effects of talker normalization on speech processing in a series of speeded classification paradigms, parametrically manipulating the potential for inconsistent acoustic-phonemic relationships across talkers for both consonants and vowels. Listeners identified words with varying potential acoustic-phonemic ambiguity across talkers (e.g., beet/boat vs. boot/boat) spoken by single or mixed talkers. Auditory categorization of words was always slower when listening to mixed talkers compared to a single talker, even when there was no potential acoustic ambiguity between target sounds. Moreover, the processing cost imposed by mixed talkers was greatest when words had the most potential acoustic-phonemic overlap across talkers. Models of acoustic dissimilarity between target speech sounds did not account for the pattern of results. These results suggest (a) that talker normalization incurs the greatest processing cost when disambiguating highly confusable sounds and (b) that talker normalization appears to be an obligatory component of speech perception, taking place even when the acoustic-phonemic relationships across sounds are unambiguous.

  13. On the assimilation-discrimination relationship in American English adults’ French vowel learning

    PubMed Central

    Levy, Erika S.

    2009-01-01

    A quantitative “cross-language assimilation overlap” method for testing predictions of the Perceptual Assimilation Model (PAM) was implemented to compare results of a discrimination experiment with the listeners’ previously reported assimilation data. The experiment examined discrimination of Parisian French (PF) front rounded vowels /y/ and /œ/. Three groups of American English listeners differing in their French experience (no experience [NoExp], formal experience [ModExp], and extensive formal-plus-immersion experience [HiExp]) performed discrimination of PF /y-u/, /y-o/, /œ-o/, /œ-u/, /y-i/, /y-ɛ/, /œ-ɛ/, /œ-i/, /y-œ/, /u-i/, and /a-ɛ/. Vowels were in bilabial /rabVp/ and alveolar /radVt/ contexts. More errors were found for PF front vs back rounded vowel pairs (16%) than for PF front unrounded vs rounded pairs (2%). Overall, ModExp listeners did not perform more accurately (11% errors) than NoExp listeners (13% errors). Extensive immersion experience, however, was associated with fewer errors (3%) than formal experience alone, although discrimination of PF /y-u/ remained relatively poor (12% errors) for HiExp listeners. More errors occurred on pairs involving front vs back rounded vowels in alveolar context (20% errors) than in bilabial context (11% errors). Significant correlations were revealed between listeners’ assimilation overlap scores and their discrimination errors, suggesting that the PAM may be extended to second-language (L2) vowel learning. PMID:19894844

  14. Reading accuracy and speed of vowelized and unvowelized scripts among dyslexic readers of Hebrew: the road not taken.

    PubMed

    Schiff, Rachel; Katzir, Tami; Shoshan, Noa

    2013-07-01

    The present study examined the effects of orthographic transparency on reading ability of children with dyslexia in two Hebrew scripts. The study explored the reading accuracy and speed of vowelized and unvowelized Hebrew words of fourth-grade children with dyslexia. A comparison was made to typically developing readers of two age groups: a group matched by chronological age and a group of children who are 2 years younger, presumably at the end of the reading acquisition process. An additional purpose was to investigate the role of vowelization in the reading ability of unvowelized script among readers with dyslexia in an attempt to assess whether vowelization plays a mediating role for reading speed of unvowelized scripts. The present study found no significant differences in reading accuracy and speed between vowelized and unvowelized scripts among fourth-grade readers with dyslexia. The reading speed of fourth-graders with dyslexia was similar to typically developing second-graders for both the vowelized and unvowelized words. However, fourth-grade children with dyslexia performed lower than the typically developing second-graders in the reading accuracy of vowelized script. Furthermore, for readers with dyslexia, accuracy in reading both vowelized and unvowelized words mediated the reading speed of unvowelized scripts. These results may be a sign that Hebrew-speaking children with dyslexia have severe difficulties that prevent them from developing strategies for more efficient reading.

  15. Effects of Intensive Voice Treatment (the Lee Silverman Voice Treatment [LSVT]) on Vowel Articulation in Dysarthric Individuals with Idiopathic Parkinson Disease: Acoustic and Perceptual Findings

    ERIC Educational Resources Information Center

    Sapir, Shimon; Spielman, Jennifer L.; Ramig, Lorraine O.; Story, Brad H.; Fox, Cynthia

    2007-01-01

    Purpose: To evaluate the effects of intensive voice treatment targeting vocal loudness (the Lee Silverman Voice Treatment [LSVT]) on vowel articulation in dysarthric individuals with idiopathic Parkinson's disease (PD). Method: A group of individuals with PD receiving LSVT (n = 14) was compared to a group of individuals with PD not receiving LSVT…

  16. Acoustical study of the development of stop consonants in children

    NASA Astrophysics Data System (ADS)

    Imbrie, Annika K.

    2003-10-01

    This study focuses on the acoustic patterns of stop consonants and adjacent vowels as they develop in young children (ages 2.6-3.3) over a six-month period. The acoustic properties that are being measured for stop consonants include spectra of bursts, frication noise and aspiration noise, and formant movements. Additionally, acoustic landmarks are labeled for measurements of durations of events determined by these landmarks. These acoustic measurements are being interpreted in terms of the supraglottal, laryngeal, and respiratory actions that give rise to them. Preliminary data show that some details of the child's gestures are still far from achieving the adult pattern. The burst of frication noise at the release tends to be shorter than adult values, and often consists of multiple bursts. From the burst spectrum, the place of articulation appears to be normal. Finally, coordination of closure of the glottis and release of the primary articulator is still quite variable, as is apparent from a large standard deviation in VOT. Analysis of longitudinal data on young children will result in better models of the development of the coordination of articulation, phonation, and respiration for motor speech production. [Work supported by NIH Grants Nos. DC00038 and DC00075.]

  17. Acoustical study of the development of stop consonants in children

    NASA Astrophysics Data System (ADS)

    Imbrie, Annika K.

    2004-05-01

    This study focuses on the acoustic patterns of stop consonants and adjacent vowels as they develop in young children (ages 2.6-3.3) over a 6-month period. The acoustic properties that are being measured for stop consonants include spectra of bursts, frication noise and aspiration noise, and formant movements. Additionally, acoustic landmarks are labeled for measurements of durations of events determined by these landmarks. These acoustic measurements are being interpreted in terms of the supraglottal, laryngeal, and respiratory actions that give rise to them. Preliminary data show that some details of the child's gestures are still far from achieving the adult pattern. The burst of frication noise at the release tends to be shorter than adult values, and often consists of multiple bursts, possibly due to greater compliance of the active articulator. From the burst spectrum, the place of articulation appears to be normal. Finally, coordination of closure of the glottis and release of the primary articulator is still quite variable, as is apparent from a large standard deviation in VOT. Analysis of longitudinal data on young children will result in better models of the development of motor speech production. [Work supported by NIH Grants DC00038 and DC00075.]

  18. Formant-frequency discrimination of synthesized vowels in budgerigars (Melopsittacus undulatus) and humans.

    PubMed

    Henry, Kenneth S; Amburgey, Kassidy N; Abrams, Kristina S; Idrobo, Fabio; Carney, Laurel H

    2017-10-01

    Vowels are complex sounds with four to five spectral peaks known as formants. The frequencies of the two lowest formants, F1 and F2, are sufficient for vowel discrimination. Behavioral studies show that many birds and mammals can discriminate vowels. However, few studies have quantified thresholds for formant-frequency discrimination. The present study examined formant-frequency discrimination in budgerigars (Melopsittacus undulatus) and humans using stimuli with one or two formants and a constant fundamental frequency of 200 Hz. Stimuli had spectral envelopes similar to natural speech and were presented with random level variation. Thresholds were estimated for frequency discrimination of F1, F2, and simultaneous F1 and F2 changes. The same two-down, one-up tracking procedure and single-interval, two-alternative task were used for both species. Formant-frequency discrimination thresholds were as sensitive in budgerigars as in humans and followed the same patterns across all conditions. Thresholds expressed as percent frequency difference were higher for F1 than for F2, and were unchanged between stimuli with one or two formants. Thresholds for simultaneous F1 and F2 changes indicated that discrimination was based on combined information from both formant regions. Results were consistent with previous human studies and show that budgerigars provide an exceptionally sensitive animal model of vowel feature discrimination.
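    The two-down, one-up tracking procedure used above converges near the 70.7%-correct point of the psychometric function (Levitt, 1971); a minimal simulation sketch, with illustrative function and parameter names:

```python
import random

def two_down_one_up(start, step, trials, p_correct_at):
    """Minimal 2-down/1-up staircase: the tracked level (e.g., formant
    frequency difference) decreases after two consecutive correct
    responses and increases after each error. `p_correct_at` maps the
    current level to P(correct), standing in for a simulated listener."""
    level, streak, history = start, 0, []
    for _ in range(trials):
        history.append(level)
        if random.random() < p_correct_at(level):
            streak += 1
            if streak == 2:               # two in a row -> make task harder
                level = max(level - step, step)
                streak = 0
        else:                             # any miss -> make task easier
            level += step
            streak = 0
    return history
```

    The threshold estimate is typically the mean level over the last several reversals of such a track.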

  19. Cross-modal discrepancies in coarticulation and the integration of speech information: the McGurk effect with mismatched vowels.

    PubMed

    Green, K P; Gerdeman, A

    1995-12-01

    Two experiments examined the impact of a discrepancy in vowel quality between the auditory and visual modalities on the perception of a syllable-initial consonant. One experiment examined the effect of such a discrepancy on the McGurk effect by cross-dubbing auditory /bi/ tokens onto visual /ga/ articulations (and vice versa). A discrepancy in vowel category significantly reduced the magnitude of the McGurk effect and changed the pattern of responses. A 2nd experiment investigated the effect of such a discrepancy on the speeded classification of the initial consonant. Mean reaction times to classify the tokens increased when the vowel information was discrepant between the 2 modalities but not when the vowel information was consistent. These experiments indicate that the perceptual system is sensitive to cross-modal discrepancies in the coarticulatory information between a consonant and its following vowel during phonetic perception.

  20. Body mass index and acoustic voice parameters: is there a relationship?

    PubMed

    Souza, Lourdes Bernadete Rocha de; Santos, Marquiony Marques Dos

    2017-05-06

    Specific elements such as weight and body volume can interfere in voice production and consequently in its acoustic parameters, which is why it is important for the clinician to be aware of these relationships. To investigate the relationship between body mass index and average acoustic voice parameters. Observational, cross-sectional descriptive study. The sample consisted of 84 women, aged between 18 and 40 years, with a mean age of 26.83 (±6.88). The subjects were grouped according to body mass index: 19 underweight, 23 normal range, 20 overweight, and 22 obese. The fundamental frequency of the sustained vowel [a] and the maximum phonation time of the vowels [a], [i], and [u] were evaluated using the PRAAT software. The data were submitted to the Kruskal-Wallis test to verify whether there were differences between the groups on the study variables. All variables showed statistically significant results and were subjected to the non-parametric Mann-Whitney test. Regarding the average fundamental frequency, there were statistically significant differences between the underweight group and both the overweight and obese groups, and between the normal-range group and both the overweight and obese groups. The average maximum phonation time revealed statistically significant differences between underweight and obese individuals, between normal-range and obese individuals, and between overweight and obese individuals. Body mass index influenced the average fundamental frequency of the overweight and obese individuals evaluated in this study. Obesity reduced the average maximum phonation time. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  1. Acoustical analysis of the underlying voice differences between two groups of professional singers: opera and country and western.

    PubMed

    Burns, P

    1986-05-01

    An acoustical analysis of the speaking and singing voices of two types of professional singers was conducted. The vowels /i/, /a/, and /o/ were spoken and sung ten times each by seven opera and seven country and western singers. Vowel spectra were derived by computer software techniques allowing quantitative assessment of formant structure (F1-F4), relative amplitude of resonance peaks (F1-F4), fundamental frequency, and harmonic high frequency energy. Formant analysis was the most effective parameter differentiating the two groups. Only opera singers lowered their fourth formant creating a wide-band resonance area (approximately 2,800 Hz) corresponding to the well-known "singing formant." Country and western singers revealed similar resonatory voice characteristics for both spoken and sung output. These results implicate faulty vocal technique in country and western singers as a contributory reason for vocal abuse/fatigue.

  2. The effect of speaking style on a locus equation characterization of stop place of articulation.

    PubMed

    Sussman, H M; Dalston, E; Gumbert, S

    1998-01-01

    Locus equations were employed to assess the phonetic stability and distinctiveness of stop place categories in reduced speech. Twenty-two speakers produced stop consonant + vowel utterances in citation and spontaneous speech. Coarticulatory increases in hypoarticulated speech were documented only for /dV/ and /gV/ productions in front vowel contexts. Coarticulatory extents for /bV/ and /gV/ in back vowel contexts remained stable across style changes. Discriminant analyses showed equivalent levels of correct classification across speaking styles. CV reduction was quantified by use of Euclidean distances separating stop place categories. Despite the sensitivity of locus equation parameters to articulatory differences encountered in informal speech, stop place categories still maintained a clear separability when plotted in a higher-order slope × y-intercept acoustic space.
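
    A locus equation is a linear regression of F2 at consonant release onto F2 at the vowel midpoint, computed per stop category; a minimal sketch with invented formant values:

```python
# Locus equation sketch: regress F2 at CV onset on F2 at the vowel
# midpoint for one stop category; the fitted slope and y-intercept
# index coarticulation and stop place. Values (Hz) are invented.
import numpy as np

f2_midpoint = np.array([2300.0, 1800.0, 1200.0, 900.0, 1500.0])   # in vowel
f2_onset    = np.array([2000.0, 1750.0, 1450.0, 1300.0, 1600.0])  # at release

slope, intercept = np.polyfit(f2_midpoint, f2_onset, 1)
print(f"locus equation: F2_onset = {slope:.2f} * F2_mid + {intercept:.0f} Hz")
# A steeper slope indicates greater vowel-on-consonant coarticulation.
```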

  3. AST Launch Vehicle Acoustics

    NASA Technical Reports Server (NTRS)

    Houston, Janice; Counter, D.; Giacomoni, D.

    2015-01-01

    The liftoff phase induces acoustic loading over a broad frequency range for a launch vehicle. These external acoustic environments are then used in the prediction of internal vibration responses of the vehicle and components which result in the qualification levels. Thus, predicting these liftoff acoustic (LOA) environments is critical to the design requirements of any launch vehicle. If there is a significant amount of uncertainty in the predictions or if acoustic mitigation options must be implemented, a subscale acoustic test is a feasible pre-launch test option to verify the LOA environments. The NASA Space Launch System (SLS) program initiated the Scale Model Acoustic Test (SMAT) to verify the predicted SLS LOA environments and to determine the acoustic reduction with an above deck water sound suppression system. The SMAT was conducted at Marshall Space Flight Center and the test article included a 5% scale SLS vehicle model, tower and Mobile Launcher. Acoustic and pressure data were measured by approximately 250 instruments. The SMAT liftoff acoustic results are presented, findings are discussed and a comparison is shown to the Ares I Scale Model Acoustic Test (ASMAT) results.

  4. Acoustic and Perceptual Effects of Left–Right Laryngeal Asymmetries Based on Computational Modeling

    PubMed Central

    Samlan, Robin A.; Story, Brad H.; Lotto, Andrew J.; Bunton, Kate

    2015-01-01

    Purpose Computational modeling was used to examine the consequences of 5 different laryngeal asymmetries on acoustic and perceptual measures of vocal function. Method A kinematic vocal fold model was used to impose 5 laryngeal asymmetries: adduction, edge bulging, nodal point ratio, amplitude of vibration, and starting phase. Thirty /a/ and /ɪ/ vowels were generated for each asymmetry and analyzed acoustically using cepstral peak prominence (CPP), harmonics-to-noise ratio (HNR), and 3 measures of spectral slope (H1*-H2*, B0-B1, and B0-B2). Twenty listeners rated voice quality for a subset of the productions. Results Increasingly asymmetric adduction, bulging, and nodal point ratio explained significant variance in perceptual rating (R2 = .05, p < .001). The same factors resulted in generally decreasing CPP, HNR, and B0-B2 and in increasing B0-B1. Of the acoustic measures, only CPP explained significant variance in perceived quality (R2 = .14, p < .001). Increasingly asymmetric amplitude of vibration or starting phase minimally altered vocal function or voice quality. Conclusion Asymmetries of adduction, bulging, and nodal point ratio drove acoustic measures and perception in the current study, whereas asymmetric amplitude of vibration and starting phase demonstrated minimal influence on the acoustic signal or voice quality. PMID:24845730
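
    Cepstral peak prominence, the one acoustic measure here that predicted perceived quality, can be sketched as the height of the cepstral peak above a regression line fit to the cepstrum. A minimal version on a synthetic harmonic signal (signal parameters are illustrative, not from the study):

```python
# Minimal cepstral-peak-prominence (CPP) sketch on a synthetic harmonic
# signal: CPP is the height of the cepstral peak (within a plausible F0
# quefrency range) above a regression-line baseline fit to the cepstrum.
import numpy as np

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
f0 = 120.0
# crude "voiced" signal: 20 harmonics with 1/k amplitudes plus faint noise
signal = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 21))
signal = signal + 0.001 * np.random.default_rng(0).standard_normal(t.size)

cep = np.fft.irfft(np.log(np.abs(np.fft.rfft(signal)) + 1e-12))
q = np.arange(cep.size) / fs                  # quefrency in seconds
lo, hi = int(fs / 300), int(fs / 60)          # search a 60-300 Hz F0 range
peak_idx = lo + int(np.argmax(cep[lo:hi]))
a, b = np.polyfit(q[lo:hi], cep[lo:hi], 1)    # regression-line baseline
cpp = cep[peak_idx] - (a * q[peak_idx] + b)
print(f"cepstral peak near {1 / q[peak_idx]:.0f} Hz, CPP = {cpp:.3f}")
```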

  5. Now You Hear It, Now You Don't: Vowel Devoicing in Japanese Infant-Directed Speech

    ERIC Educational Resources Information Center

    Fais, Laurel; Kajikawa, Sachiyo; Amano, Shigeaki; Werker, Janet F.

    2010-01-01

    In this work, we examine a context in which a conflict arises between two roles that infant-directed speech (IDS) plays: making language structure salient and modeling the adult form of a language. Vowel devoicing in fluent adult Japanese creates violations of the canonical Japanese consonant-vowel word structure pattern by systematically…

  6. Cross-language perception of Japanese vowel length contrasts: comparison of listeners from different first language backgrounds.

    PubMed

    Tsukada, Kimiko; Hirata, Yukari; Roengpitya, Rungpat

    2014-06-01

    The purpose of this research was to compare the perception of Japanese vowel length contrasts by 4 groups of listeners who differed in their familiarity with length contrasts in their first language (L1; i.e., American English, Italian, Japanese, and Thai). Of the 3 nonnative groups, native Thai listeners were expected to outperform American English and Italian listeners, because vowel length is contrastive in their L1. Native Italian listeners were expected to demonstrate a higher level of accuracy for length contrasts than American English listeners, because the former are familiar with consonant (but not vowel) length contrasts (i.e., singleton vs. geminate) in their L1. A 2-alternative forced-choice AXB discrimination test that included 125 trials was administered to all the participants, and the listeners' discrimination accuracy (d') was reported. As expected, Japanese listeners were more accurate than all 3 nonnative groups in their discrimination of Japanese vowel length contrasts. The 3 nonnative groups did not differ from one another in their discrimination accuracy despite varying experience with length contrasts in their L1. Only Thai listeners were more accurate in their length discrimination when the target vowel was long than when it was short. Being familiar with vowel length contrasts in L1 may affect the listeners' cross-language perception, but it does not guarantee that their L1 experience automatically results in efficient processing of length contrasts in unfamiliar languages. The extent of success may be related to how length contrasts are phonetically implemented in listeners' L1.
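
    The discrimination accuracy (d') reported above can be illustrated with a minimal sketch; the trial counts are invented, and the simple √2·z(Pc) conversion for two-alternative forced-choice data is used here rather than whatever AXB-specific correction the study applied.

```python
# d-prime sketch for a 2-alternative forced-choice discrimination task,
# using the common approximation d' = sqrt(2) * z(proportion correct).
# AXB designs often use task-specific corrections; this is the simplest form.
from math import sqrt
from scipy.stats import norm

n_trials = 125
n_correct = 100                      # invented example, not the study's data
pc = n_correct / n_trials
d_prime = sqrt(2) * norm.ppf(pc)
print(f"proportion correct = {pc:.2f}, d' = {d_prime:.2f}")
```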

  7. Acoustic passaggio pedagogy for the male voice.

    PubMed

    Bozeman, Kenneth Wood

    2013-07-01

    Awareness of interactions between the lower harmonics of the voice source and the first formant of the vocal tract, and of the passive vowel modifications that accompany them, can assist in working out a smooth transition through the passaggio of the male voice. A stable vocal tract length establishes the general location of all formants, including the higher formants that form the singer's formant cluster. Untrained males instinctively shorten the tube to preserve the strong F1/H2 acoustic coupling of voce aperta, resulting in 'yell' timbre. If tube length and shape are kept stable during pitch ascent, the yell can be avoided by allowing the second harmonic to rise above the first formant, creating the balanced timbre of voce chiusa.

  8. Ethnographic model of acoustic use of space in the southern Andes for an archaeo-musicological investigation

    NASA Astrophysics Data System (ADS)

    Perez de Arce, Jose

    2002-11-01

    Studies of ritual celebrations in central Chile conducted over the past 15 years show that the spatial component of sound is crucial to the whole. The sonic compositions of these rituals generate complex musical structures that the author has termed "multi-orchestral polyphonies." Their origins have been documented from archaeological remains in a vast region of the southern Andes (southern Peru, Bolivia, northern Argentina, north-central Chile). The ritual consists of a combination of dance, space walk-through, spatial extension, multiple movements between listener and orchestra, and multiple relations between ritual and ambient sounds. The characteristics of these observables reveal a complex schematic relation between space and sound. This schema can be used as a valid hypothesis for the study of pre-Hispanic uses of acoustic ritual space. The acoustic features observed in this study are common in Andean ritual and, to some extent, are seen in Mesoamerica as well.

  9. Acoustic and perceptual cues for compound-phrasal contrasts in Vietnamese.

    PubMed

    Nguyen, Anh-Thu T; Ingram, John C L

    2007-09-01

    This paper reports two series of experiments that examined the phonetic correlates of lexical stress in Vietnamese compounds in comparison to their phrasal constructions. In the first series of experiments, acoustic and perceptual characteristics of Vietnamese compound words and their phrasal counterparts were investigated on five likely acoustic correlates of stress or prominence (f0 range and contour, duration, intensity and spectral slope, vowel reduction), elicited under two distinct speaking conditions: a "normal speaking" condition and a "maximum contrast" condition which encouraged speakers to employ prosodic strategies for disambiguation. The results suggested that Vietnamese lacks phonetic resources for distinguishing compounds from phrases lexically and that native speakers may employ a phrase-level prosodic disambiguation strategy (juncture marking), when required to do so. However, in a second series of experiments, minimal pairs of bisyllabic coordinative compounds with reversible syllable positions were examined for acoustic evidence of asymmetrical prominence relations. Clear evidence of asymmetric prominences in coordinative compounds was found, supporting independent results obtained from an analysis of reduplicative compounds and tone sandhi in Vietnamese [Nguyen and Ingram, 2006]. A reconciliation of these apparently conflicting findings on word stress in Vietnamese is presented and discussed.

  10. An acoustic comparison of two women's infant- and adult-directed speech

    NASA Astrophysics Data System (ADS)

    Andruski, Jean; Katz-Gershon, Shiri

    2003-04-01

    In addition to having prosodic characteristics that are attractive to infant listeners, infant-directed (ID) speech shares certain characteristics of adult-directed (AD) clear speech, such as increased acoustic distance between vowels, that might be expected to make ID speech easier for adults to perceive in noise than AD conversational speech. However, perceptual tests of two women's ID productions by Andruski and Bessega [J. Acoust. Soc. Am. 112, 2355] showed that this is not always the case. In a word identification task that compared ID speech with AD clear and conversational speech, one speaker's ID productions were less well identified than AD clear speech, but better identified than AD conversational speech. For the second woman, ID speech was the least accurately identified of the three speech registers. For both speakers, hard words (infrequent words with many lexical neighbors) were also at an increased disadvantage relative to easy words (frequent words with few lexical neighbors) in speech registers that were less accurately perceived. This study will compare several acoustic properties of these women's productions, including pitch and formant-frequency characteristics. Results of the acoustic analyses will be examined with the original perceptual results to suggest reasons for differences in listeners' accuracy in identifying these two women's ID speech in noise.
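
    The "acoustic distance between vowels" mentioned above is commonly quantified as a vowel space area in the F1-F2 plane; a minimal sketch using the shoelace formula, with illustrative corner-vowel formants rather than measurements from the study:

```python
# Vowel space area sketch: area of the polygon spanned by corner vowels
# in the F1-F2 plane, computed with the shoelace formula.
# Formant values (Hz) are illustrative placeholders.
def polygon_area(points):
    """Shoelace formula; points is a list of (F1, F2) pairs in order."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

corner_vowels = [(300, 2300), (750, 1200), (350, 800)]  # /i/, /a/, /u/
print(f"vowel space area = {polygon_area(corner_vowels):.0f} Hz^2")
```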

  11. Contributions of cochlea-scaled entropy and consonant-vowel boundaries to prediction of speech intelligibility in noise

    PubMed Central

    Chen, Fei; Loizou, Philipos C.

    2012-01-01

    Recent evidence suggests that spectral change, as measured by cochlea-scaled entropy (CSE), predicts speech intelligibility better than the information carried by vowels or consonants in sentences. Motivated by this finding, the present study investigates whether intelligibility indices implemented to include segments marked with significant spectral change better predict speech intelligibility in noise than measures that include all phonetic segments paying no attention to vowels/consonants or spectral change. The prediction of two intelligibility measures [normalized covariance measure (NCM), coherence-based speech intelligibility index (CSII)] is investigated using three sentence-segmentation methods: relative root-mean-square (RMS) levels, CSE, and traditional phonetic segmentation of obstruents and sonorants. While the CSE method makes no distinction between spectral changes occurring within vowels/consonants, the RMS-level segmentation method places more emphasis on the vowel-consonant boundaries wherein the spectral change is often most prominent, and perhaps most robust, in the presence of noise. Higher correlation with intelligibility scores was obtained when including sentence segments containing a large number of consonant-vowel boundaries than when including segments with highest entropy or segments based on obstruent/sonorant classification. These data suggest that in the context of intelligibility measures the type of spectral change captured by the measure is important. PMID:22559382
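
    The relative-RMS-level segmentation can be sketched as framing the signal and selecting frames whose level falls within a band relative to the utterance's overall RMS; the thresholds and toy signal below are illustrative, not the paper's exact procedure:

```python
# Sketch of RMS-level segmentation: frame the signal, compute each frame's
# level relative to the overall RMS, and keep frames in a mid-level band
# (where consonant-vowel boundaries often fall). Thresholds are illustrative.
import numpy as np

def rms_segments(x, frame_len, lo_db=-10.0, hi_db=0.0):
    overall_rms = np.sqrt(np.mean(x ** 2))
    n_frames = len(x) // frame_len
    frames = x[: n_frames * frame_len].reshape(n_frames, frame_len)
    frame_rms = np.sqrt(np.mean(frames ** 2, axis=1))
    rel_db = 20 * np.log10(frame_rms / overall_rms + 1e-12)
    keep = (lo_db <= rel_db) & (rel_db < hi_db)   # mid-level band
    return keep, rel_db

fs = 16000
t = np.arange(fs // 2) / fs        # 0.5 s per toy segment
rng = np.random.default_rng(1)
x = np.concatenate([np.sin(2 * np.pi * 150 * t),          # loud, vowel-like
                    0.3 * np.sin(2 * np.pi * 150 * t),     # mid-level
                    0.05 * rng.standard_normal(t.size)])   # quiet, noise-like
keep, rel_db = rms_segments(x, frame_len=400)
print(f"{keep.sum()} of {keep.size} frames fall in the mid-level band")
```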

  12. Effects of Short- and Long-Term Changes in Auditory Feedback on Vowel and Sibilant Contrasts

    ERIC Educational Resources Information Center

    Lane, Harlan; Matthies, Melanie L.; Guenther, Frank H.; Denny, Margaret; Perkell, Joseph S.; Stockmann, Ellen; Tiede, Mark; Vick, Jennell; Zandipour, Majid

    2007-01-01

    Purpose: To assess the effects of short- and long-term changes in auditory feedback on vowel and sibilant contrasts and to evaluate hypotheses arising from a model of speech motor planning. Method: The perception and production of vowel and sibilant contrasts were measured in 8 postlingually deafened adults prior to activation of their cochlear…

  13. Shallow and Deep Orthographies in Hebrew: The Role of Vowelization in Reading Development for Unvowelized Scripts

    ERIC Educational Resources Information Center

    Schiff, Rachel

    2012-01-01

    The present study explored the speed, accuracy, and reading comprehension of vowelized versus unvowelized scripts among 126 native Hebrew speaking children in second, fourth, and sixth grades. Findings indicated that second graders read and comprehended vowelized scripts significantly more accurately and more quickly than unvowelized scripts,…

  14. Consequences of broad auditory filters for identification of multichannel-compressed vowels

    PubMed Central

    Souza, Pamela; Wright, Richard; Bor, Stephanie

    2012-01-01

    Purpose In view of previous findings (Bor, Souza & Wright, 2008) that some listeners are more susceptible to spectral changes from multichannel compression (MCC) than others, this study addressed the extent to which differences in effects of MCC were related to differences in auditory filter width. Method Listeners were recruited in three groups: listeners with flat sensorineural loss, listeners with sloping sensorineural loss, and a control group of listeners with normal hearing. Individual auditory filter measurements were obtained at 500 and 2000 Hz. The filter widths were related to identification of vowels processed with 16-channel MCC and with a control (linear) condition. Results Listeners with flat loss had broader filters at 500 Hz but not at 2000 Hz, compared to listeners with sloping loss. Vowel identification was poorer for MCC compared to linear amplification. Listeners with flat loss made more errors than listeners with sloping loss, and there was a significant relationship between filter width and the effects of MCC. Conclusions Broadened auditory filters can reduce the ability to process amplitude-compressed vowel spectra. This suggests that individual frequency selectivity is one factor which influences benefit of MCC, when a high number of compression channels are used. PMID:22207696
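
    The auditory filter widths discussed here are conventionally summarized with the rounded-exponential (roex) filter shape; as a standard textbook form (not taken from this study):

```latex
% roex(p) filter: g is the normalized deviation from the center
% frequency f_c, and larger p means a sharper (narrower) filter.
W(g) = (1 + pg)\,e^{-pg}, \qquad g = \frac{|f - f_c|}{f_c}, \qquad
\mathrm{ERB} = \frac{4 f_c}{p}
```

    Broader filters (smaller p, larger equivalent rectangular bandwidth) smear adjacent spectral peaks together, which is the mechanism the study links to poorer identification of multichannel-compressed vowels.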

  15. An Index of Phonic Patterns by Vowel Types. AVKO "Great Idea" Reprint Series No. 622.

    ERIC Educational Resources Information Center

    McCabe, Don

    Intended for the use of teachers or diagnosticians, this booklet presents charts that list various phonic patterns, word families, or "rimes" associated with specific vowel patterns. Lists in the booklet are arranged according to the 14 basic vowel phonemes in English (including long a, long e, long i, long aw, short ah, and short u).…

  16. The Locus Equation as an Index of Coarticulation in Syllables Produced by Speakers with Profound Hearing Loss

    ERIC Educational Resources Information Center

    McCaffrey Morrison, Helen

    2008-01-01

    Locus equations (LEs) were derived from consonant-vowel-consonant (CVC) syllables produced by four speakers with profound hearing loss. Group data indicated that LE functions obtained for the separate CVC productions initiated by /b/, /d/, and /g/ were less well-separated in acoustic space than those obtained from speakers with normal hearing. A…

  17. Categorical vowel perception enhances the effectiveness and generalization of auditory feedback in human-machine-interfaces.

    PubMed

    Larson, Eric; Terry, Howard P; Canevari, Margaux M; Stepp, Cara E

    2013-01-01

    Human-machine interface (HMI) designs offer the possibility of improving quality of life for patient populations as well as augmenting normal user function. Despite its pragmatic benefits, auditory feedback remains underutilized for HMI control, in part because of observed limitations in effectiveness. The goal of this study was to determine the extent to which categorical speech perception could be used to improve an auditory HMI. Using surface electromyography, 24 healthy speakers of American English participated in 4 sessions to learn to control an HMI using auditory feedback (provided via vowel synthesis). Participants trained on 3 targets in sessions 1-3 and were tested on 3 novel targets in session 4. An "established categories with text cues" group of eight participants was trained and tested on auditory targets corresponding to standard American English vowels, using auditory and text target cues. An "established categories without text cues" group of eight participants was trained and tested on the same targets using only auditory cuing of target vowel identity. A "new categories" group of eight participants was trained and tested on targets corresponding to vowel-like sounds that are not part of American English. Analyses of user performance revealed significant effects of session and group (established categories groups versus the new categories group), and a trend toward an interaction between session and group. Results suggest that auditory feedback can be effectively used for HMI operation when paired with established categorical (native vowel) targets with an unambiguous cue.

  18. Baryon acoustic oscillations in 2D. II. Redshift-space halo clustering in N-body simulations

    NASA Astrophysics Data System (ADS)

    Nishimichi, Takahiro; Taruya, Atsushi

    2011-08-01

    We measure the halo power spectrum in redshift space from cosmological N-body simulations, and test the analytical models of redshift distortions particularly focusing on the scales of baryon acoustic oscillations. Remarkably, the measured halo power spectrum in redshift space exhibits a large-scale enhancement in amplitude relative to the real-space clustering, and the effect becomes significant for the massive or highly biased halo samples. These findings cannot be simply explained by the so-called streaming model frequently used in the literature. By contrast, a physically motivated perturbation theory model developed in the previous paper reproduces the halo power spectrum very well, and the model combining a simple linear scale-dependent bias can accurately characterize the clustering anisotropies of halos in two dimensions, i.e., line-of-sight and its perpendicular directions. The results highlight the significance of nonlinear coupling between density and velocity fields associated with two competing effects of redshift distortions, i.e., Kaiser and Finger-of-God effects, and a proper account of this effect would be important in accurately characterizing the baryon acoustic oscillations in two dimensions.
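
    At linear order, the large-scale redshift-space enhancement described here is the Kaiser effect; for a halo sample with linear bias b and linear growth rate f (standard linear theory, not the paper's full perturbation-theory model):

```latex
% Kaiser formula: \mu is the cosine of the angle between the wavevector
% and the line of sight; P_lin is the real-space linear power spectrum.
P_s(k,\mu) = \bigl(b + f\mu^2\bigr)^2 P_{\mathrm{lin}}(k)
           = b^2 \bigl(1 + \beta\mu^2\bigr)^2 P_{\mathrm{lin}}(k),
\qquad \beta \equiv \frac{f}{b}
```

    The enhancement grows with β = f/b, so for highly biased samples the fractional boost shrinks even as the overall clustering amplitude rises; the Finger-of-God suppression on small scales is a nonlinear effect outside this formula.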

  19. Comparison of Nasal Acceleration and Nasalance across Vowels

    ERIC Educational Resources Information Center

    Thorp, Elias B.; Virnik, Boris T.; Stepp, Cara E.

    2013-01-01

    Purpose: The purpose of this study was to determine the performance of normalized nasal acceleration (NNA) relative to nasalance as estimates of nasalized versus nonnasalized vowel and sentence productions. Method: Participants were 18 healthy speakers of American English. NNA was measured using a custom sensor, and nasalance was measured using…

  20. Comparing Identification of Standardized and Regionally Valid Vowels

    ERIC Educational Resources Information Center

    Wright, Richard; Souza, Pamela

    2012-01-01

    Purpose: In perception studies, it is common to use vowel stimuli from standardized recordings or synthetic stimuli created using values from well-known published research. Although the use of standardized stimuli is convenient, unconsidered dialect and regional accent differences may introduce confounding effects. The goal of this study was to…

  1. Stromal-epithelial dynamics in response to fractionated radiotherapy

    NASA Astrophysics Data System (ADS)

    Rong, Panying

    such a speaker-adaptive articulatory model, (1) an articulatory-acoustic-aerodynamic database was recorded using articulography and aerodynamic instruments to provide point-wise articulatory data to be fitted into the framework of Childers's standard vocal tract model; (2) the length and transverse dimension of the vocal tract were adjusted to fit the individual speaker by minimizing the acoustic discrepancy between the model simulation and the target derived from the acoustic signal in the database, using the simulated annealing algorithm; (3) the articulatory space of the model was adjusted to fit individual articulatory features by adapting the movement ranges of all articulators. With the speaker-adaptive articulatory model, the articulatory configurations of the oral and nasal vowels in the database were simulated and synthesized. Given the acoustic targets derived from the oral vowels in the database, speech-dependent articulatory adjustments were simulated to compensate for the acoustic outcome caused by VPO. The resultant articulatory configurations correspond to nasal vowels with articulatory adjustment, which were synthesized to serve as the perceptual stimuli for a listening task of nasality rating. The oral and nasal vowels synthesized from the oral and nasal vowel targets in the database also served as perceptual stimuli. The results suggest both acoustic and perceptual effects of the model-generated articulatory adjustment on the nasal vowels /a/, /i/ and /u/. In terms of acoustics, the articulatory adjustment (1) restores the altered formant structures due to nasal coupling, including shifted formant frequency, attenuated formant intensity, and expanded formant bandwidth, and (2) attenuates the peaks and zeros caused by nasal resonances. Perceptually, the articulatory adjustment generated by the speaker-adaptive model significantly reduces the perceived nasality for all three vowels (/a/, /i/, /u/).
The acoustic and perceptual effects of articulatory

  2. Volume I. Percussion Sextet. (original Composition). Volume II. The Simulation of Acoustical Space by Means of Physical Modeling.

    NASA Astrophysics Data System (ADS)

    Manzara, Leonard Charles

    1990-01-01

    The dissertation is in two parts: 1. Percussion Sextet. The Percussion Sextet is a one-movement musical composition approximately fifteen minutes in length. It is for six instrumentalists, each on a number of percussion instruments. The overriding formal problem was to construct a coherent and compelling structure that fuses a diversity of musical materials and textures into a dramatic whole. Particularly important is the synthesis of opposing tendencies contained in stochastic and deterministic processes: global textures versus motivic detail, and randomness versus total control. Several compositional techniques are employed in the composition, aided in part by artificial intelligence techniques programmed on a computer. Finally, the percussion ensemble is the ideal medium to realize the above processes, since it encompasses a wide range of both pitched and unpitched timbres, and since a great variety of textures and densities can be created with a certain economy of means. 2. The simulation of acoustical space by means of physical modeling. This is a written report describing the research and development of a computer program that simulates the characteristics of acoustical space in two dimensions. With the program the user can simulate most conventional acoustical spaces, as well as those physically impossible to realize in the real world. The program simulates acoustical space by means of geometric modeling. This involves defining wall equations, phantom source points, and wall diffusions, and then processing input files containing digital signals through the program, producing output files ready for digital-to-analog conversion. The user of the program is able to define wall locations and wall reflectivity and roughness characteristics, all of which can be changed over time.
Sound source locations are also definable within the acoustical space and these locations can be changed independently at
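
    The "phantom source points" in the geometric model above correspond to the classic image-source idea; a minimal first-order sketch for a 2D rectangular room, with invented geometry and reflectivity:

```python
# First-order image-source sketch in 2D: "phantom" sources are mirror
# images of the real source across each wall of a rectangular room;
# each contributes a delayed, attenuated copy of the signal at the mic.
# Room geometry and wall reflectivity are illustrative.
from math import hypot

def first_order_images(src, room_w, room_h):
    """Mirror the source across the four walls of [0,room_w] x [0,room_h]."""
    x, y = src
    return [(-x, y), (2 * room_w - x, y), (x, -y), (x, 2 * room_h - y)]

c = 343.0                      # speed of sound, m/s
src, mic = (2.0, 3.0), (7.0, 4.0)
reflectivity = 0.8             # same for all four walls here

direct = hypot(mic[0] - src[0], mic[1] - src[1])
print(f"direct path: {direct:.2f} m, delay {direct / c * 1000:.2f} ms")
for img in first_order_images(src, room_w=10.0, room_h=6.0):
    d = hypot(mic[0] - img[0], mic[1] - img[1])
    gain = reflectivity / d    # one reflection, 1/r spherical spreading
    print(f"image at {img}: path {d:.2f} m, "
          f"delay {d / c * 1000:.2f} ms, gain {gain:.3f}")
```

    Higher-order reflections come from recursively mirroring the image sources themselves, which is how such programs build long reverberant tails.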

  3. Vowel Harmony in Palestinian Arabic: A Metrical Perspective.

    ERIC Educational Resources Information Center

    Abu-Salim, I. M.

    1987-01-01

    The autosegmental rule of vowel harmony (VH) in Palestinian Arabic is shown to be constrained simultaneously by metrical and segmental boundaries. The indicative prefix bi- is no longer an exception to VH if a structure is assumed that disallows the prefix from sharing a foot with the stem, consequently blocking VH. (Author/LMO)

  4. Vowel Harmony: A Variable Rule in Brazilian Portuguese.

    ERIC Educational Resources Information Center

    Bisol, Leda

    1989-01-01

    Examines vowel harmony in the "Gaucho dialect" of the Brazilian state of Rio Grande do Sul. Informants from four areas of the state were studied: the capital city (Porto Alegre), the border region with Uruguay, and two areas of the interior populated by descendants of nineteenth-century immigrants from Europe, mainly Germans and…

  5. Effect of sound-related activities on human behaviours and acoustic comfort in urban open spaces.

    PubMed

    Meng, Qi; Kang, Jian

    2016-12-15

    Human activities are important to landscape design and urban planning; however, the effect of sound-related activities on human behaviours and acoustic comfort has not been considered. The objective of this study is to explore how human behaviours and acoustic comfort in urban open spaces can be changed by sound-related activities. On-site measurements were performed at a case study site in Harbin, China, and an acoustic comfort survey was conducted simultaneously. In terms of the effect of sound activities on human behaviours, music-related activities caused 5.1-21.5% of persons passing by the area to stand and watch the activity, while there was little effect on the number of persons who performed exercises during the activity. Human activities generally have little effect on the behaviour of pedestrians when only 1 to 3 persons are involved in the activities, while a marked effect on the behaviour of pedestrians is noted when >6 persons are involved. In terms of the effect of activities on acoustic comfort, music-related activities can increase the sound level by 10.8 to 16.4 dBA, while human activities such as RS and PC can increase the sound level by 9.6 to 12.8 dBA; however, they lead to very different acoustic comfort. Acoustic comfort can differ with the activity: for example, the acoustic comfort of persons who stand and watch can be increased by music-related activities, while the acoustic comfort of persons who sit and watch can be decreased by human sound-related activities. Some sound-related activities can show opposite trends in acoustic comfort between visitors and citizens. Persons with higher income prefer music sound-related activities, while those with lower income prefer human sound-related activities. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    NASA Astrophysics Data System (ADS)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  7. Comparing Measures of Voice Quality From Sustained Phonation and Continuous Speech.

    PubMed

    Gerratt, Bruce R; Kreiman, Jody; Garellek, Marc

    2016-10-01

    The question of which type of utterance (a sustained vowel or continuous speech) is best for voice quality analysis has been extensively studied, but with equivocal results. This study examines whether previously reported differences derive from the articulatory and prosodic factors occurring in continuous speech versus sustained phonation. Speakers with voice disorders sustained vowels and read sentences. Vowel samples were excerpted from the steadiest portion of each vowel in the sentences. In addition to the sustained and excerpted vowels, a third set of stimuli was created by shortening sustained vowel productions to match the duration of the vowels excerpted from continuous speech. Acoustic measures were made on the stimuli, and listeners judged the severity of vocal quality deviation. Sustained vowels and those extracted from continuous speech contain essentially the same acoustic and perceptual information about vocal quality deviation. Perceived and/or measured differences between continuous speech and sustained vowels derive largely from voice source variability across segmental and prosodic contexts, not from variations in vocal fold vibration in the quasisteady portion of the vowels. Approaches to voice quality assessment using continuous speech samples average across utterances and may not adequately quantify the variability they are intended to assess.

  8. Gust Acoustics Computation with a Space-Time CE/SE Parallel 3D Solver

    NASA Technical Reports Server (NTRS)

    Wang, X. Y.; Himansu, A.; Chang, S. C.; Jorgenson, P. C. E.; Reddy, D. R. (Technical Monitor)

    2002-01-01

    The benchmark Problem 2 in Category 3 of the Third Computational Aero-Acoustics (CAA) Workshop is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of an isolated finite-span swept flat-plate airfoil bounded by two parallel walls to an incident gust. The acoustic field generated by the interaction of the gust with the flat-plate airfoil is computed by solving the 3D (three-dimensional) Euler equations in the time domain using a parallel version of a 3D CE/SE solver. The effect of the gust orientation on the far-field directivity is studied. Numerical solutions are presented and compared with analytical solutions, showing a reasonable agreement.

  9. Does letter position coding depend on consonant/vowel status? Evidence with the masked priming technique.

    PubMed

    Perea, Manuel; Acha, Joana

    2009-02-01

    Recently, a number of input coding schemes (e.g., SOLAR model, SERIOL model, open-bigram model, overlap model) have been proposed that capture the transposed-letter priming effect (i.e., faster response times for jugde-JUDGE than for jupte-JUDGE). In their current version, these coding schemes do not assume any processing differences between vowels and consonants. However, in a lexical decision task, Perea and Lupker (2004, JML; Lupker, Perea, & Davis, 2008, L&CP) reported that transposed-letter priming effects occurred for consonant transpositions but not for vowel transpositions. This finding poses a challenge for these recently proposed coding schemes. Here, we report four masked priming experiments that examine whether this consonant/vowel dissociation in transposed-letter priming is task-specific. In Experiment 1, we used a lexical decision task and found a transposed-letter priming effect only for consonant transpositions. In Experiments 2-4, we employed a same-different task, which taps early perceptual processes, and found a robust transposed-letter priming effect that did not interact with consonant/vowel status. We examine the implications of these findings for the front end of models of visual word recognition.

  10. On the nature of consonant/vowel differences in letter position coding: Evidence from developing and adult readers.

    PubMed

    Comesaña, Montserrat; Soares, Ana P; Marcet, Ana; Perea, Manuel

    2016-11-01

    In skilled adult readers, transposed-letter effects (jugde-JUDGE) are greater for consonant than for vowel transpositions. These differences are often attributed to phonological rather than orthographic processing. To examine this issue, we employed a scenario in which phonological involvement varies as a function of reading experience: A masked priming lexical decision task with 50-ms primes in adult and developing readers. Indeed, masked phonological priming at this prime duration has been consistently reported in adults, but not in developing readers (Davis, Castles, & Iakovidis, 1998). Thus, if consonant/vowel asymmetries in letter position coding with adults are due to phonological influences, transposed-letter priming should occur for both consonant and vowel transpositions in developing readers. Results with adults (Experiment 1) replicated the usual consonant/vowel asymmetry in transposed-letter priming. In contrast, no signs of an asymmetry were found with developing readers (Experiments 2-3). However, Experiments 1-3 did not directly test the existence of phonological involvement. To study this question, Experiment 4 manipulated the phonological prime-target relationship in developing readers. As expected, we found no signs of masked phonological priming. Thus, the present data favour an interpretation of the consonant/vowel dissociation in letter position coding as due to phonological rather than orthographic processing. © 2016 The British Psychological Society.

  11. Dual routes for verbal repetition: articulation-based and acoustic-phonetic codes for pseudoword and word repetition, respectively.

    PubMed

    Yoo, Sejin; Chung, Jun-Young; Jeon, Hyeon-Ae; Lee, Kyoung-Min; Kim, Young-Bo; Cho, Zang-Hee

    2012-07-01

    Speech production is inextricably linked to speech perception, yet they are usually investigated in isolation. In this study, we employed a verbal-repetition task to identify the neural substrates of speech processing, with perception and production engaged simultaneously, using functional MRI. Subjects verbally repeated auditory stimuli containing an ambiguous vowel sound that could be perceived as either a word or a pseudoword depending on the interpretation of the vowel. We found that verbal repetition commonly activated the audition-articulation interface bilaterally at the Sylvian fissures and superior temporal sulci. Contrasting word-versus-pseudoword trials revealed neural activity unique to word repetition in the left posterior middle temporal areas and activity unique to pseudoword repetition in the left inferior frontal gyrus. These findings imply that the tasks are carried out using different speech codes: an articulation-based code for pseudowords and an acoustic-phonetic code for words. The findings also support the dual-stream model and imitative learning of vocabulary. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Directional Asymmetries in Vowel Perception of Adult Nonnative Listeners Do Not Change over Time with Language Experience

    ERIC Educational Resources Information Center

    Kriengwatana, Buddhamas Pralle; Escudero, Paola

    2017-01-01

    Purpose: This study tested an assumption of the Natural Referent Vowel (Polka & Bohn, 2011) framework, namely, that directional asymmetries in adult vowel perception can be influenced by language experience. Method: Data from participants reported in Escudero and Williams (2014) were analyzed. Spanish participants categorized the Dutch vowels…

  13. Predicting Reading in Vowelized and Unvowelized Arabic Script: An Investigation of Reading in First and Second Grades

    ERIC Educational Resources Information Center

    Asadi, Ibrahim A.; Khateb, Asaid

    2017-01-01

    This study examined the orthographic transparency of Arabic by investigating the contribution of phonological awareness (PA), vocabulary, and Rapid Automatized Naming (RAN) to reading vowelized and unvowelized words. The results from first and second grade children showed that PA contribution was similar in the vowelized and unvowelized…

  14. A study of acoustic-to-articulatory inversion of speech by analysis-by-synthesis using chain matrices and the Maeda articulatory model

    PubMed Central

    Panchapagesan, Sankaran; Alwan, Abeer

    2011-01-01

    In this paper, a quantitative study of acoustic-to-articulatory inversion for vowel speech sounds by analysis-by-synthesis using the Maeda articulatory model is performed. For chain matrix calculation of vocal tract (VT) acoustics, the chain matrix derivatives with respect to area function are calculated and used in a quasi-Newton method for optimizing articulatory trajectories. The cost function includes a distance measure between natural and synthesized first three formants, and parameter regularization and continuity terms. Calibration of the Maeda model to two speakers, one male and one female, from the University of Wisconsin x-ray microbeam (XRMB) database, using a cost function, is discussed. Model adaptation includes scaling the overall VT and the pharyngeal region and modifying the outer VT outline using measured palate and pharyngeal traces. The inversion optimization is initialized by a fast search of an articulatory codebook, which was pruned using XRMB data to improve inversion results. Good agreement between estimated midsagittal VT outlines and measured XRMB tongue pellet positions was achieved for several vowels and diphthongs for the male speaker, with average pellet-VT outline distances around 0.15 cm, smooth articulatory trajectories, and less than 1% average error in the first three formants. PMID:21476670
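    The cost function described above (a formant distance term plus parameter regularization and continuity terms, minimized with a quasi-Newton method) can be sketched in miniature. The linear forward map below is a toy stand-in for the Maeda synthesizer with chain matrices, and every name and value here is an illustrative assumption, not the study's implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy forward map from 4 articulatory parameters to 3 formants (Hz).
# The real study uses the Maeda model with vocal-tract chain matrices;
# this linear map is purely illustrative.
rng = np.random.default_rng(0)
A = 100.0 * rng.normal(size=(3, 4))
neutral = np.array([500.0, 1500.0, 2500.0])   # formants at p = 0

def synthesize(p):
    return neutral + A @ p

target = np.array([700.0, 1200.0, 2300.0])    # "natural" formants to match
p_prev = np.zeros(4)                          # previous frame's parameters

def cost(p, lam_reg=0.01, lam_cont=0.1):
    # Formant distance + parameter regularization + trajectory continuity,
    # mirroring the structure of the cost function in the abstract.
    dist = np.sum((synthesize(p) - target) ** 2)
    reg = lam_reg * np.sum(p ** 2)
    cont = lam_cont * np.sum((p - p_prev) ** 2)
    return dist + reg + cont

res = minimize(cost, x0=p_prev, method="BFGS")   # quasi-Newton optimization
```

    In the actual inversion, this minimization would run per analysis frame, with the previous frame's solution feeding the continuity term and a codebook search supplying the starting point.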

  15. The contrast between alveolar and velar stops with typical speech data: acoustic and articulatory analyses.

    PubMed

    Melo, Roberta Michelon; Mota, Helena Bolli; Berti, Larissa Cristina

    2017-06-08

    This study used acoustic and articulatory analyses to characterize the contrast between alveolar and velar stops with typical speech data, comparing the parameters (acoustic and articulatory) of adults and children with typical speech development. The sample consisted of 20 adults and 15 children with typical speech development. The analyzed corpus was organized through five repetitions of each target word (/'kapə/, /'tapə/, /'galo/, and /'daɾə/). These words were inserted into a carrier phrase and the participant was asked to name them spontaneously. Simultaneous audio and video data were recorded (tongue ultrasound images). The data were submitted to acoustic analyses (voice onset time; spectral peak and burst spectral moments; vowel/consonant transition and relative duration measures) and articulatory analyses (proportion of significant axes of the anterior and posterior tongue regions and description of tongue curves). Acoustic and articulatory parameters were effective in indicating the contrast between alveolar and velar stops, mainly in the adult group. Both speech analyses showed statistically significant differences between the two groups. The acoustic and articulatory parameters provided signals to characterize the phonic contrast of speech. One of the main findings in the comparison between adult and child speech was evidence of articulatory refinement/maturation even after the period of segment acquisition.
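    The burst spectral moments listed in the acoustic analysis are conventionally the first four moments of the magnitude spectrum treated as a probability distribution over frequency. A minimal sketch, exercised here on a synthetic noise "burst" rather than real speech data:

```python
import numpy as np

def spectral_moments(signal, fs):
    """First four spectral moments: centroid (Hz), standard deviation (Hz),
    skewness, and excess kurtosis, treating the magnitude spectrum as a
    probability distribution over frequency."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    p = spec / spec.sum()
    centroid = np.sum(freqs * p)
    sd = np.sqrt(np.sum(((freqs - centroid) ** 2) * p))
    skew = np.sum(((freqs - centroid) ** 3) * p) / sd ** 3
    kurt = np.sum(((freqs - centroid) ** 4) * p) / sd ** 4 - 3.0
    return centroid, sd, skew, kurt

# Synthetic stand-in for a stop burst: a short white-noise segment.
rng = np.random.default_rng(1)
burst = rng.normal(size=2048)
centroid, sd, skew, kurt = spectral_moments(burst, fs=16000)
```

    For a flat (white) spectrum sampled at 16 kHz the centroid lands near 4 kHz; real alveolar bursts concentrate energy higher in frequency than velar bursts, which is what the moment measures are meant to capture.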

  16. The Effects of Size and Type of Vocal Fold Polyp on Some Acoustic Voice Parameters.

    PubMed

    Akbari, Elaheh; Seifpanahi, Sadegh; Ghorbani, Ali; Izadi, Farzad; Torabinezhad, Farhad

    2018-03-01

    Vocal abuse and misuse can result in vocal fold polyps. Certain features determine the extent to which vocal fold polyps affect acoustic voice parameters. The present study aimed to define the effects of polyp size on acoustic voice parameters and to compare these parameters in hemorrhagic and non-hemorrhagic polyps. In the present retrospective study, 28 individuals with hemorrhagic or non-hemorrhagic polyps of the true vocal folds were recruited to investigate acoustic voice parameters of the vowel /æ/ computed with the Praat software. The data were analyzed using the SPSS software, version 17.0. According to the type and size of polyps, mean acoustic differences and correlations were analyzed with the t test and the Pearson correlation test, respectively, with the significance level set below 0.05. The results indicated that jitter and the harmonics-to-noise ratio had significant positive and negative correlations with polyp size (P=0.01), respectively. In addition, both parameters differed significantly between the two types of polyp investigated. Both the type and size of polyps have effects on acoustic voice characteristics. In the present study, a novel method to measure polyp size was introduced. Further confirmation of this method as a tool to compare polyp sizes requires additional investigation.
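    The correlation pattern reported above (jitter rising and harmonics-to-noise ratio falling with polyp size) can be illustrated with NumPy. The numbers below are fabricated solely for illustration and are not the study's data:

```python
import numpy as np

# Illustrative synthetic values only -- NOT the study's measurements.
polyp_size = np.array([1.2, 1.8, 2.5, 3.1, 3.9, 4.6, 5.2, 6.0])  # e.g., mm^2
jitter     = np.array([0.4, 0.6, 0.7, 1.0, 1.1, 1.4, 1.5, 1.9])  # percent
hnr        = np.array([22., 21., 19., 18., 16., 15., 13., 12.])  # dB

# Pearson correlations, mirroring the abstract's analysis plan:
r_jitter = np.corrcoef(polyp_size, jitter)[0, 1]  # expected positive
r_hnr    = np.corrcoef(polyp_size, hnr)[0, 1]     # expected negative
```

    With real data, the between-polyp-type comparison would additionally use an independent-samples t test (e.g., `scipy.stats.ttest_ind`), as the abstract describes.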

  17. The Effects of Size and Type of Vocal Fold Polyp on Some Acoustic Voice Parameters

    PubMed Central

    Akbari, Elaheh; Seifpanahi, Sadegh; Ghorbani, Ali; Izadi, Farzad; Torabinezhad, Farhad

    2018-01-01

    Background: Vocal abuse and misuse can result in vocal fold polyps. Certain features determine the extent to which vocal fold polyps affect acoustic voice parameters. The present study aimed to define the effects of polyp size on acoustic voice parameters and to compare these parameters in hemorrhagic and non-hemorrhagic polyps. Methods: In the present retrospective study, 28 individuals with hemorrhagic or non-hemorrhagic polyps of the true vocal folds were recruited to investigate acoustic voice parameters of the vowel /æ/ computed with the Praat software. The data were analyzed using the SPSS software, version 17.0. According to the type and size of polyps, mean acoustic differences and correlations were analyzed with the t test and the Pearson correlation test, respectively, with the significance level set below 0.05. Results: The results indicated that jitter and the harmonics-to-noise ratio had significant positive and negative correlations with polyp size (P=0.01), respectively. In addition, both parameters differed significantly between the two types of polyp investigated. Conclusion: Both the type and size of polyps have effects on acoustic voice characteristics. In the present study, a novel method to measure polyp size was introduced. Further confirmation of this method as a tool to compare polyp sizes requires additional investigation. PMID:29749984

  18. The Acquisition of Phonetic Details: Evidence from the Production of English Reduced Vowels by Korean Learners

    ERIC Educational Resources Information Center

    Han, Jeong-Im; Hwang, Jong-Bai; Choi, Tae-Hwan

    2011-01-01

    The purpose of this study was to evaluate the acquisition of non-contrastive phonetic details of a second language. Reduced vowels in English are realized as a schwa or barred- i depending on their phonological contexts, but Korean has no reduced vowels. Two groups of Korean learners of English who differed according to the experience of residence…

  19. Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans

    NASA Astrophysics Data System (ADS)

    Pei, Xiaomei; Barbour, Dennis L.; Leuthardt, Eric C.; Schalk, Gerwin

    2011-08-01

    Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility to 'read the mind', i.e. to interpret internally-generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain (electrocorticography) to discriminate the vowels and consonants embedded in spoken and in imagined words, and we defined the cortical areas that held the most information about discrimination of vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.

  20. The Rise and Fall of Unstressed Vowel Reduction in the Spanish of Cusco, Peru: A Sociophonetic Study

    ERIC Educational Resources Information Center

    Delforge, Ann Marie

    2009-01-01

    This dissertation describes the phonetic characteristics of a phenomenon that has previously been denominated "unstressed vowel reduction" in Andean Spanish based on the spectrographic analysis of 40,556 unstressed vowels extracted from the conversational speech of 150 residents of the city of Cusco, Peru. Results demonstrate that this…

  1. 14 CFR 25.856 - Thermal/Acoustic insulation materials.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Thermal/Acoustic insulation materials. 25.856 Section 25.856 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION....856 Thermal/Acoustic insulation materials. (a) Thermal/acoustic insulation material installed in the...

  2. 14 CFR 25.856 - Thermal/Acoustic insulation materials.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Thermal/Acoustic insulation materials. 25.856 Section 25.856 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION....856 Thermal/Acoustic insulation materials. (a) Thermal/acoustic insulation material installed in the...

  3. 14 CFR 23.856 - Thermal/acoustic insulation materials.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Thermal/acoustic insulation materials. 23.856 Section 23.856 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION... Construction Fire Protection § 23.856 Thermal/acoustic insulation materials. Thermal/acoustic insulation...

  4. 14 CFR 23.856 - Thermal/acoustic insulation materials.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Thermal/acoustic insulation materials. 23.856 Section 23.856 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION... Construction Fire Protection § 23.856 Thermal/acoustic insulation materials. Thermal/acoustic insulation...

  5. 14 CFR 23.856 - Thermal/acoustic insulation materials.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Thermal/acoustic insulation materials. 23.856 Section 23.856 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION... Construction Fire Protection § 23.856 Thermal/acoustic insulation materials. Thermal/acoustic insulation...

  6. The Development of the Acoustic Design of NASA Glenn Research Center's New Reverberant Acoustic Test Facility

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; McNelis, Mark E.; Hozman, Aron D.; McNelis, Anne M.

    2011-01-01

    The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) is leading the design and build of the new world-class vibroacoustic test capabilities at NASA GRC's Plum Brook Station in Sandusky, Ohio. Benham Companies, LLC is currently constructing modal, base-shake sine, and reverberant acoustic test facilities to support the future testing needs of NASA's space exploration program. The large Reverberant Acoustic Test Facility (RATF) will be approximately 101,000 ft3 in volume and capable of achieving an empty-chamber acoustic overall sound pressure level (OASPL) of 163 dB. This combination of size and acoustic power is unprecedented amongst the world's known active reverberant acoustic test facilities. The key to achieving the expected acoustic test spectra for a range of NASA space flight environments in the RATF is the knowledge gained from a series of ground acoustic tests. Data were obtained from several NASA-sponsored test programs, including testing performed at the National Research Council of Canada's acoustic test facility in Ottawa, Ontario, Canada, and at the Redstone Technical Test Center acoustic test facility in Huntsville, Alabama. The majority of these tests were performed to characterize the acoustic performance of the modulators (noise generators) and representative horns required to meet the desired spectra, as well as to evaluate possible supplemental gas jet noise sources. The knowledge obtained in each of these test programs enabled the design of the RATF sound generation system to advance confidently to its final acoustic design and subsequent ongoing construction.
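    The 163 dB OASPL figure in this abstract is an energy sum over frequency bands: band levels combine as OASPL = 10·log10(Σ 10^(Li/10)), not by arithmetic addition. A small sketch of that standard formula (the band levels shown are arbitrary examples, not RATF data):

```python
import math

def oaspl(band_levels_db):
    """Overall sound pressure level from per-band SPLs by energy summation:
    OASPL = 10 * log10(sum(10**(L_i / 10)))."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in band_levels_db))

# Two equal bands combine to +3 dB over either band alone:
print(round(oaspl([160.0, 160.0]), 2))  # → 163.01
```

    This is why a facility spectrum made of many band levels in the 150s of dB can reach an overall level of 163 dB.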

  8. Vibro-Acoustic Analysis of NASA's Space Shuttle Launch Pad 39A Flame Trench Wall

    NASA Technical Reports Server (NTRS)

    Margasahayam, Ravi N.

    2009-01-01

    A vital element of NASA's manned space flight launch operations is the Kennedy Space Center Launch Complex 39's launch pads A and B. Originally designed and constructed in the 1960s for the Saturn V rockets used for the Apollo missions, these pads were modified above grade to support Space Shuttle missions. Below grade, however, each pad's original walls (including the flame trench, a 42-foot-deep, 58-foot-wide, 450-foot-long tunnel designed to deflect flames and exhaust gases) remained unchanged. On May 31, 2008, during the launch of STS-124, over 3,500 of the 22,000 interlocking refractory bricks that lined the east wall of the flame trench, protecting the pad structure, were liberated from pad 39A. The STS-124 launch anomaly spawned an agency-wide initiative to determine the failure root cause, to assess the impact of debris on vehicle and ground support equipment safety, and to prescribe corrective action. The investigation encompassed radar imaging, infrared video review, debris transport mechanism analysis using computational fluid dynamics, destructive testing, and non-destructive evaluation, including vibro-acoustic analysis, in order to validate the corrective action. The primary focus of this paper is on the analytic approach, including static, modal, and vibro-acoustic analysis, required to certify the corrective action and ensure integrity and operational reliability for future launches. Due to the absence of instrumentation (including pressure transducers, acoustic pressure sensors, and accelerometers) in the flame trench, defining an accurate acoustic signature of the launch environment during shuttle main engine/solid rocket booster ignition and vehicle ascent posed a significant challenge. Details of the analysis, including the derivation of launch environments, the finite element approach taken, and analysis/test/launch data correlation, are discussed. Data obtained from the recent launch of STS-126 from Pad 39A was instrumental in validating the corrective action.

  9. Speech production in experienced cochlear implant users undergoing short-term auditory deprivation

    NASA Astrophysics Data System (ADS)

    Greenman, Geoffrey; Tjaden, Kris; Kozak, Alexa T.

    2005-09-01

    This study examined the effect of short-term auditory deprivation on the speech production of five postlingually deafened women, all of whom were experienced cochlear implant users. Each cochlear implant user, as well as age- and gender-matched control speakers, produced CVC target words embedded in a reading passage. Speech samples for the deafened adults were collected on two separate occasions. First, the speakers were recorded after wearing their speech processor consistently for at least two to three hours prior to recording (implant "ON"). The second recording occurred when the speakers had their speech processors turned off for approximately ten to twelve hours prior to recording (implant "OFF"). Acoustic measures, including fundamental frequency (F0), the first (F1) and second (F2) formants of the vowels, vowel space area, vowel duration, spectral moments of the consonants, as well as utterance duration and sound pressure level (SPL) across the entire utterance, were analyzed in both speaking conditions. For each implant speaker, acoustic measures will be compared across the implant "ON" and implant "OFF" speaking conditions, and will also be compared to data obtained from normal-hearing speakers.
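    The vowel space area measure listed among these acoustics is typically computed as the area of the polygon spanned by the corner vowels in the F1-by-F2 plane, via the shoelace formula. A sketch with hypothetical formant values (the corner frequencies below are illustrative, not measured data):

```python
def vowel_space_area(formants):
    """Area of the polygon whose vertices are (F1, F2) pairs of corner
    vowels, via the shoelace formula. Vertices must be listed in order
    around the polygon (clockwise or counterclockwise)."""
    n = len(formants)
    s = 0.0
    for i in range(n):
        f1a, f2a = formants[i]
        f1b, f2b = formants[(i + 1) % n]
        s += f1a * f2b - f1b * f2a
    return abs(s) / 2.0

# Hypothetical corner-vowel formants (Hz) for /i/, /ae/, /a/, /u/:
corners = [(300, 2300), (700, 1800), (750, 1100), (350, 900)]
print(vowel_space_area(corners))  # → 412500.0 (Hz^2)
```

    A shrunken polygon (smaller area) under the implant "OFF" condition would indicate centralized vowels, which is the kind of change such a measure is designed to detect.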

  10. Intersensory Redundancy Facilitates Learning of Arbitrary Relations between Vowel Sounds and Objects in Seven-Month-Old Infants.

    ERIC Educational Resources Information Center

    Gogate, Lakshmi J.; Bahrick, Lorraine E.

    1998-01-01

    Investigated 7-month olds' ability to relate vowel sounds with objects when intersensory redundancy was present versus absent. Found that infants detected a mismatch in the vowel-object pairs in the moving-synchronous condition but not in the still or moving-asynchronous condition, demonstrating that temporal synchrony between vocalizations and…

  11. Developmental weighting shifts for noise components of fricative-vowel syllables.

    PubMed

    Nittrouer, S; Miller, M E

    1997-07-01

    Previous studies have convincingly shown that the weight assigned to vocalic formant transitions in decisions of fricative identity for fricative-vowel syllables decreases with development. Although these same studies suggested a developmental increase in the weight assigned to the noise spectrum, the role of the aperiodic-noise portions of the signals in these fricative decisions has not been as well studied. The purpose of these experiments was to examine more closely developmental shifts in the weight assigned to the aperiodic-noise components of the signals in decisions of syllable-initial fricative identity. Two experiments used noises varying along continua from a clear /s/ percept to a clear /ʃ/ percept. In experiment 1, these noises were created by combining /s/ and /ʃ/ noises produced by a human vocal tract at different amplitude ratios, a process that resulted in stimuli differing primarily in the amplitude of a relatively low-frequency (roughly 2.2-kHz) peak. In experiment 2, noises that varied only in the amplitude of a similar low-frequency peak were created with a software synthesizer. Both experiments used synthetic /a/ and /u/ portions, and efforts were made to minimize possible contributions of vocalic formant transitions to fricative labeling. Children and adults labeled the resulting stimuli as /s/-vowel or /ʃ/-vowel. Combined results of the two experiments showed that children's responses were less influenced than those of adults by the amplitude of the low-frequency peak of fricative noises.

  12. Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model

    NASA Technical Reports Server (NTRS)

    Zhang, Weiguo; Raveendra, Ravi

    2014-01-01

    Reported here is the use of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1, and a comparison of the results with actual measurements of leak sounds made by a one-atmosphere-to-vacuum leak through a small hole in the pressure wall of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). While E-FEM represents a reverberant sound field calculation, of importance to this application is the requirement to also handle the direct-field effect of the sound generation. It was also important to be able to compute the sound fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to this type of application.

  13. Effects of stimulus duration and vowel quality in cross-linguistic categorical perception of pitch directions

    PubMed Central

    Zhu, Yiqing; Wayland, Ratree

    2017-01-01

    We investigated categorical perception of rising and falling pitch contours by tonal and non-tonal listeners. Specifically, we determined the minimum durations needed to perceive both contours and compared them to those needed for production, how stimulus duration affects their perception, whether there is an intrinsic F0 effect, and how first-language background, duration, direction of pitch, and vowel quality interact with each other. Continua of fundamental frequency on different vowels with 9 duration values were created for identification and discrimination tasks. Less time is generally needed to effectively perceive a pitch direction than to produce it. Overall, tonal listeners' perception is more categorical than that of non-tonal listeners. Stimulus duration plays a critical role for both groups, but tonal listeners showed a stronger duration effect and may benefit more from the extra time in longer stimuli for context coding, consistent with the multistore model of categorical perception. Within a certain range of semitones, tonal listeners also required shorter stimulus durations than non-tonal listeners to perceive pitch direction changes. Finally, vowel quality plays a limited role and only interacts with duration in perceiving falling pitch directions. These findings further our understanding of models of categorical perception, the relationship between speech perception and production, and the interaction between the perception of tones and vowel quality. PMID:28671991
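    Identification data from tasks like the one described are commonly summarized by fitting a logistic function to the labeling proportions: the inflection point gives the category boundary, and the slope indexes how categorical perception is. A sketch on synthetic responses (the proportions below are illustrative, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Identification function: proportion of 'rising' responses along the
    continuum; x0 is the category boundary, k the slope (categoricalness)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Synthetic identification proportions over a 9-step pitch continuum.
steps = np.arange(1, 10, dtype=float)
prop_rising = np.array([0.02, 0.03, 0.05, 0.20, 0.55, 0.85, 0.95, 0.98, 0.99])

(x0, k), _ = curve_fit(logistic, steps, prop_rising, p0=[5.0, 1.0])
```

    On this scheme, a "more categorical" group (here, the tonal listeners) would show a larger fitted `k`, i.e., a steeper crossover between the two pitch-direction categories.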

  14. Effects of gender on the production of emphasis in Jordanian Arabic: A sociophonetic study

    NASA Astrophysics Data System (ADS)

    Abudalbuh, Mujdey D.

    Emphasis, or pharyngealization, is a distinctive phonetic phenomenon and a phonemic feature of Semitic languages such as Arabic and Hebrew. The goal of this study is to investigate the effect of gender on the production of emphasis in Jordanian Arabic as manifested on the consonants themselves as well as on the adjacent vowels. To this end, 22 speakers of Jordanian Arabic, 12 males and 10 females, participated in a production experiment where they produced monosyllabic minimal CVC pairs contrasted on the basis of the presence of a word-initial plain or emphatic consonant. Several acoustic parameters were measured including Voice Onset Time (VOT), friction duration, the spectral mean of the friction noise, vowel duration and the formant frequencies (F1-F3) of the target vowels. The results of this study indicated that VOT is a reliable acoustic correlate of emphasis in Jordanian Arabic only for voiceless stops whose emphatic VOT was significantly shorter than their plain VOT. Also, emphatic fricatives were shorter than plain fricatives. Emphatic vowels were found to be longer than plain vowels. Overall, the results showed that emphatic vowels were characterized by a raised F1 at the onset and midpoint of the vowel, lowered F2 throughout the vowel, and raised F3 at the onset and offset of the vowel relative to the corresponding values of the plain vowels. Finally, results using Nearey's (1978) normalization algorithm indicated that emphasis was more acoustically evident in the speech of males than in the speech of females in terms of the F-pattern. The results are discussed from a sociolinguistic perspective in light of the previous literature and the notion of linguistic feminism.

  15. An evolution in listening: An analytical and critical study of structural, acoustic, and phenomenal aspects of selected works by Pauline Oliveros

    NASA Astrophysics Data System (ADS)

    Setar, Katherine Marie

    1997-08-01

    This dissertation analytically and critically examines composer Pauline Oliveros's philosophy of 'listening' as it applies to selected works created between 1961 and 1984. The dissertation is organized through the application of two criteria: three perspectives of listening (empirical, phenomenal, and, to a lesser extent, personal), and categories derived, in part, from her writings and interviews (improvisational, traditional, theatrical, electronic, meditational, and interactive). In general, Oliveros's works may be categorized by one of two listening perspectives. The 'empirical' listening perspective, which generally concerns pure acoustic phenomena, independent of human interpretation, is exemplified in the analyses of Sound Patterns (1961), OH HA AH (1968), and, to a lesser extent, I of IV (1966). The 'phenomenal' listening perspective, which involves human interaction with the pure acoustic phenomena, includes a critical examination of her post-1971 'meditation' pieces and an analytical and critical examination of her tonal 'interactive' improvisations in highly resonant space, such as Watertank Software (1984). The most pervasive element of Oliveros's stylistic evolution is her gradual change from the hierarchical aesthetic of the traditional composer to one in which creative control is more equally shared by all participants. Other significant contributions by Oliveros include the probable invention of the 'meditation' genre, an emphasis on the subjective perceptions of musical participants as a means to greater musical awareness, her musical exploration of highly resonant space, and her pioneering work in American electronic music. Analytical and critical commentary was applied to selected representative works from Oliveros's six compositional categories. The analytical methods applied to Oliveros's works include Wayne Slawson's vowel/formant theory, as described in his book Sound Color, and an original method of categorizing consonants as

  16. Reducing the dimensions of acoustic devices using anti-acoustic-null media

    NASA Astrophysics Data System (ADS)

    Li, Borui; Sun, Fei; He, Sailing

    2018-02-01

    An anti-acoustic-null medium (anti-ANM), a special homogeneous medium with anisotropic mass density, is designed by transformation acoustics (TA). Anti-ANM can greatly compress acoustic space along the direction of its main axis, where the size compression ratio is extremely large. This special feature can be utilized to reduce the geometric dimensions of classic acoustic devices. For example, the height of a parabolic acoustic reflector can be greatly reduced. We also design a brass-air structure on the basis of the effective medium theory to materialize the anti-ANM in a broadband frequency range. Numerical simulations verify the performance of the proposed anti-ANM.

  17. Mobile Communication Devices, Ambient Noise, and Acoustic Voice Measures.

    PubMed

    Maryn, Youri; Ysenbaert, Femke; Zarowski, Andrzej; Vanspauwen, Robby

    2017-03-01

    The ability to move with mobile communication devices (MCDs; i.e., smartphones and tablet computers) may induce differences in microphone-to-mouth positioning and use in noise-packed environments, and thus influence the reliability of acoustic voice measurements. This study investigated differences in various acoustic voice measures among six recording systems in backgrounds with low and increasing noise levels. One chain of continuous speech and sustained vowel samples from 50 subjects with voice disorders (all separated by silence intervals) was radiated and re-recorded in an anechoic chamber with five MCDs and one high-quality recording system. These recordings were acquired in one condition without ambient noise and in four conditions with increased ambient noise. A total of 10 acoustic voice markers were obtained in the program Praat. Differences between MCDs and noise conditions were assessed with the Friedman repeated-measures test and post hoc Wilcoxon signed-rank tests, both for related samples, after Bonferroni correction. (1) Except for median fundamental frequency and seven nonsignificant differences, MCD samples had significantly higher acoustic markers than clinical reference samples in minimal environmental noise. (2) Except for median fundamental frequency, jitter local, and jitter rap, all acoustic measures on samples recorded with the reference system were significantly influenced by room noise levels. Fundamental frequency is resistant to recording system, environmental noise, and their combination. All other measures, however, were affected by both recording system and noise condition, and especially by their combination, often already in the reference/baseline condition without added ambient noise. Caution is therefore warranted regarding the implementation of MCDs as clinical recording tools, particularly when applied for treatment outcome assessment. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
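The multiple-comparison correction applied above can be sketched generically; this is a plain Bonferroni adjustment, not the authors' analysis script, and the p-values are invented for illustration.

```python
def bonferroni(p_values):
    """Bonferroni correction: multiply each raw p-value by the number of
    comparisons performed, capping the adjusted value at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical raw p-values from five post hoc Wilcoxon comparisons
raw = [0.004, 0.012, 0.030, 0.20, 0.001]
adjusted = bonferroni(raw)
```

Equivalently, one can keep the raw p-values and test them against a lowered alpha of 0.05 / m; the two formulations reject exactly the same comparisons.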

  18. Sound in ecclesiastical spaces in Cordoba. Architectural projects incorporating acoustic methodology (El sonido del espacio eclesial en Cordoba. El proyecto arquitectonico como procedimiento acustico)

    NASA Astrophysics Data System (ADS)

    Suarez, Rafael

    2003-11-01

    This thesis is concerned with the acoustic analysis of ecclesiastical spaces and the subsequent implementation of acoustic design methodology in architectural renovations. The approach begins with an adequate architectural design of specific elements (shape, materials, and textures), with the aim of eliminating the acoustic deficiencies that are common in such spaces: those that impair good speech intelligibility and good musical audibility. The investigation is limited to churches in the province of Cordoba, built after the reconquest of Spain (1236) and up until the 18th century. Selected churches are those that have undergone architectural renovations to adapt them to new uses or to make them more suitable for liturgical use. The thesis summarizes the acoustic analyses and the acoustical solutions that have been implemented. The results are presented in a manner that should be useful for the adoption of a model for the functional renovation of ecclesiastical spaces. Such a model would allow those involved in architectural projects to specify the nature of the sound, even though somewhat intangible, within the ecclesiastical space. Thesis advisors: Jaime Navarro and Juan J. Sendra. Copies of this thesis, written in Spanish, may be obtained by contacting the advisor, Jaime Navarro, E.T.S. de Arquitectura de Sevilla, Dpto. de Construcciones Arquitectonicas I, Av. Reina Mercedes, 2, 41012 Sevilla, Spain. E-mail address: jnavarro@us.es

  19. Auditory Spectral Integration in the Perception of Static Vowels

    ERIC Educational Resources Information Center

    Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun

    2011-01-01

    Purpose: To evaluate potential contributions of broadband spectral integration in the perception of static vowels. Specifically, can the auditory system infer formant frequency information from changes in the intensity weighting across harmonics when the formant itself is missing? Does this type of integration produce the same results in the lower…

  20. Drop dynamics in space and interference with acoustic field (M-15)

    NASA Technical Reports Server (NTRS)

    Yamanaka, Tatsuo

    1993-01-01

    The objective of the experiment is to study contactless positioning of liquid drops, excitation of capillary waves on the surface of acoustically levitated liquid drops, and deformation of liquid drops by means of acoustic radiation pressure. Contactless positioning technologies are very important in space materials processing because the melt is processed without contacting the wall of a crucible, which can easily contaminate the melt, especially at high melting temperatures and for chemically reactive materials. Among contactless positioning technologies, acoustic positioning is especially important for materials unsusceptible to electromagnetic fields, such as glasses and ceramics. The shape of a levitated liquid drop in the weightless condition is determined by its surface tension and the internal and external pressure distribution. If the surface temperature is constant and there are neither internal nor external pressure perturbations, the levitated liquid drop forms a perfect sphere. If temperature gradients on the surface and internal or external pressure perturbations exist, the liquid drop assumes various mode shapes with characteristic vibrations. The rotating liquid drop has been studied not only as a classical problem of theoretical mechanics, describing the shapes of the planets of the solar system as well as their arrangement, but also as a contemporary problem of modern non-linear mechanics. In the experiment, we expect to observe various shapes of a liquid drop, such as cocoon, tri-lobed, tetrapod, multi-lobed, and doughnut shapes.

  1. Evidence for Separate Tonal and Segmental Tiers in the Lexical Specification of Words: A Case Study of a Brain-Damaged Chinese Speaker

    ERIC Educational Resources Information Center

    Liang, Jie; van Heuven, Vincent J.

    2004-01-01

    We present an acoustic study of segmental and prosodic properties of words produced by a female speaker of Chinese with left-hemisphere brain damage. We measured the location of the point vowels /a, e, @?, i, y, o, u/ and determined their separation in the vowel plane, and their perceptual distinctivity. Similarly, the acoustic properties of the…
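Separation of point vowels in the (F1, F2) plane, as measured above, is commonly summarized as a vowel space area; a minimal sketch using the shoelace formula follows, with invented formant values (the abstract does not report its actual separation metric).

```python
def polygon_area(points):
    """Shoelace formula for the area of a simple polygon whose vertices
    are given in order (here, vowels ordered around the F1-F2 plane)."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical corner vowels as (F1, F2) in Hz: /i/, /a/, /u/
area = polygon_area([(300, 2300), (750, 1300), (350, 800)])
```

A reduced vowel space area after brain damage would show up directly as a smaller polygon, which is one way such "separation in the vowel plane" is quantified.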

  2. The processing of consonants and vowels during letter identity and letter position assignment in visual-word recognition: an ERP study.

    PubMed

    Vergara-Martínez, Marta; Perea, Manuel; Marín, Alejandro; Carreiras, Manuel

    2011-09-01

    Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event-related potentials (ERPs) were recorded while participants read words and pseudowords in a lexical decision task. The stimuli were displayed under different conditions in a masked priming paradigm with a 50-ms SOA: (i) identity/baseline condition (e.g., chocolate-CHOCOLATE); (ii) vowels-delayed condition (e.g., choc_l_te-CHOCOLATE); (iii) consonants-delayed condition (cho_o_ate-CHOCOLATE); (iv) consonants-transposed condition (cholocate-CHOCOLATE); (v) vowels-transposed condition (chocalote-CHOCOLATE); and (vi) unrelated condition (editorial-CHOCOLATE). Results showed earlier ERP effects and longer reaction times for the delayed-letter compared to the transposed-letter conditions. Furthermore, at early stages of processing, consonants may play a greater role during letter identity processing. Differences between vowels and consonants regarding letter position assignment are discussed in terms of a later phonological level involved in lexical retrieval. Copyright © 2010 Elsevier Inc. All rights reserved.

  3. Angle-Dependent Distortions in the Perceptual Topology of Acoustic Space

    PubMed Central

    2018-01-01

    By moving sounds around the head and asking listeners to report which ones moved more, it was found that sound sources at the side of a listener must move at least twice as much as ones in front to be judged as moving the same amount. A relative expansion of space in the front and compression at the side has consequences for spatial perception of moving sounds by both static and moving listeners. An accompanying prediction that the apparent location of static sound sources ought to also be distorted agrees with previous work and suggests that this is a general perceptual phenomenon that is not limited to moving signals. A mathematical model that mimics the measured expansion of space can be used to successfully capture several previous findings in spatial auditory perception. The inverse of this function could be used alongside individualized head-related transfer functions and motion tracking to produce hyperstable virtual acoustic environments. PMID:29764312

  4. Validation of the Acoustic Voice Quality Index in the Lithuanian Language.

    PubMed

    Uloza, Virgilijus; Petrauskas, Tadas; Padervinskis, Evaldas; Ulozaitė, Nora; Barsties, Ben; Maryn, Youri

    2017-03-01

    The aim of the present study was to validate the Acoustic Voice Quality Index in the Lithuanian language (AVQI-LT) and investigate the feasibility and robustness of its diagnostic accuracy in differentiating normal and dysphonic voice. A total of 184 native Lithuanian subjects with normal voices (n = 46) and with various voice disorders (n = 138) were asked to read aloud a Lithuanian text and to sustain the vowel /a/. A sentence with 13 syllables and a 3-second midvowel portion of the sustained vowel were edited. Both speech tasks were concatenated and perceptually rated for dysphonia severity by five voice clinicians. They rated the Grade (G) from the Grade, Roughness, Breathiness, Asthenia, Strain (GRBAS) protocol and the overall severity from the Consensus Auditory-Perceptual Evaluation of Voice protocol with a visual analog scale (VAS). The average scores (Gmean and VASmean) were taken as the perceptual dysphonia severity level for every voice sample. All concatenated voice samples were acoustically analyzed to obtain an AVQI-LT score. Both auditory-perceptual judgment procedures showed sufficient strength of agreement among the five raters. The results showed significant and marked concurrent validity between both auditory-perceptual judgment procedures and AVQI-LT. The diagnostic accuracy of AVQI-LT was comparable for both auditory-perceptual judgment procedures, with two different AVQI-LT thresholds. The AVQI-LT threshold of 2.97 for the Gmean rating obtained reasonable sensitivity (0.838) and excellent specificity (0.937). For the VASmean rating, an AVQI-LT threshold of 3.48 was determined, with sensitivity of 0.840 and specificity of 0.922. The AVQI-LT is considered a valid and reliable tool for assessing the dysphonia severity level in the Lithuanian-speaking population. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
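The sensitivity and specificity figures reported above come from applying a score threshold to labeled samples; a generic sketch of that computation follows, with invented AVQI-like scores and labels (not the study's data).

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity for a continuous score where values at
    or above the threshold are classified as dysphonic (positive)."""
    tp = sum(1 for s, l in zip(scores, labels) if s >= threshold and l)
    fn = sum(1 for s, l in zip(scores, labels) if s < threshold and l)
    tn = sum(1 for s, l in zip(scores, labels) if s < threshold and not l)
    fp = sum(1 for s, l in zip(scores, labels) if s >= threshold and not l)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical scores; True marks a perceptually dysphonic sample
scores = [1.8, 3.2, 3.1, 3.6, 2.4, 2.2]
labels = [False, False, True, True, True, False]
sensitivity, specificity = sens_spec(scores, labels, 2.97)
```

Sweeping the threshold and recomputing both quantities is exactly how ROC-based cutoffs such as 2.97 and 3.48 are selected.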

  5. Tutorial on architectural acoustics

    NASA Astrophysics Data System (ADS)

    Shaw, Neil; Talaske, Rick; Bistafa, Sylvio

    2002-11-01

    This tutorial is intended to provide an overview of current knowledge and practice in architectural acoustics. Topics covered will include basic concepts and history, acoustics of small rooms (small rooms for speech such as classrooms and meeting rooms, music studios, small critical listening spaces such as home theatres) and the acoustics of large rooms (larger assembly halls, auditoria, and performance halls).

  6. Magnetoactive Acoustic Metamaterials.

    PubMed

    Yu, Kunhao; Fang, Nicholas X; Huang, Guoliang; Wang, Qiming

    2018-04-11

    Acoustic metamaterials with negative constitutive parameters (modulus and/or mass density) have shown great potential in diverse applications ranging from sonic cloaking, abnormal refraction and superlensing, to noise canceling. In conventional acoustic metamaterials, the negative constitutive parameters are engineered via tailored structures with fixed geometries; therefore, the relationships between constitutive parameters and acoustic frequencies are typically fixed to form a 2D phase space once the structures are fabricated. Here, by means of a model system of magnetoactive lattice structures, stimuli-responsive acoustic metamaterials are demonstrated to be able to extend the 2D phase space to 3D through rapidly and repeatedly switching signs of constitutive parameters with remote magnetic fields. It is shown for the first time that effective modulus can be reversibly switched between positive and negative within controlled frequency regimes through lattice buckling modulated by theoretically predicted magnetic fields. The magnetically triggered negative-modulus and cavity-induced negative density are integrated to achieve flexible switching between single-negative and double-negative. This strategy opens promising avenues for remote, rapid, and reversible modulation of acoustic transportation, refraction, imaging, and focusing in subwavelength regimes. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Neural Correlates of Temporal Auditory Processing in Developmental Dyslexia during German Vowel Length Discrimination: An fMRI Study

    ERIC Educational Resources Information Center

    Steinbrink, Claudia; Groth, Katarina; Lachmann, Thomas; Riecker, Axel

    2012-01-01

    This fMRI study investigated phonological vs. auditory temporal processing in developmental dyslexia by means of a German vowel length discrimination paradigm (Groth, Lachmann, Riecker, Muthmann, & Steinbrink, 2011). Behavioral and fMRI data were collected from dyslexics and controls while performing same-different judgments of vowel duration in…

  8. Aural localization of silent objects by active human biosonar: neural representations of virtual echo-acoustic space.

    PubMed

    Wallmeier, Ludwig; Kish, Daniel; Wiegrebe, Lutz; Flanagin, Virginia L

    2015-03-01

    Some blind humans have developed the remarkable ability to detect and localize objects through the auditory analysis of self-generated tongue clicks. These echolocation experts show a corresponding increase in 'visual' cortex activity when listening to echo-acoustic sounds. Echolocation in real-life settings involves multiple reflections as well as active sound production, neither of which has been systematically addressed. We developed a virtualization technique that allows participants to actively perform such biosonar tasks in virtual echo-acoustic space during magnetic resonance imaging (MRI). Tongue clicks, emitted in the MRI scanner, are picked up by a microphone, convolved in real time with the binaural impulse responses of a virtual space, and presented via headphones as virtual echoes. In this manner, we investigated the brain activity during active echo-acoustic localization tasks. Our data show that, in blind echolocation experts, activations in the calcarine cortex are dramatically enhanced when a single reflector is introduced into otherwise anechoic virtual space. A pattern-classification analysis revealed that, in the blind, calcarine cortex activation patterns could discriminate left-side from right-side reflectors. This was found in both blind experts, but the effect was significant for only one of them. In sighted controls, 'visual' cortex activations were insignificant, but activation patterns in the planum temporale were sufficient to discriminate left-side from right-side reflectors. Our data suggest that blind and echolocation-trained, sighted subjects may recruit different neural substrates for the same active-echolocation task. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  9. Acoustic scattering from a finite cylindrical shell with evenly spaced stiffeners: Experimental investigation

    NASA Astrophysics Data System (ADS)

    Liétard, R.; Décultot, D.; Maze, G.; Tran-van-Nhieu, M.

    2005-10-01

    The influence of evenly spaced ribs (internal rings) on the acoustic scattering from a finite cylindrical shell is examined over a dimensionless frequency range. The experimental results are compared with a previous study [J. Acoust. Soc. Am. 110, 2858-2866 (2001)] and a simple scattering/interference calculation.

  10. Acoustics

    NASA Technical Reports Server (NTRS)

    Goodman, Jerry R.; Grosveld, Ferdinand

    2007-01-01

    The acoustics environment in space operations is important to maintain at manageable levels so that the crewperson can remain safe, functional, effective, and reasonably comfortable. High acoustic levels can produce temporary or permanent hearing loss, or cause other physiological symptoms such as auditory pain, headaches, discomfort, strain in the vocal cords, or fatigue. Noise is defined as undesirable sound. Excessive noise may result in psychological effects such as irritability, inability to concentrate, decrease in productivity, annoyance, errors in judgment, and distraction. A noisy environment can also result in the inability to sleep, or sleep well. Elevated noise levels can affect the ability to communicate, understand what is being said, and hear what is going on in the environment; degrade crew performance and operations; and create habitability concerns. Superfluous noise emissions can also make it impossible to hear alarms or other important auditory cues such as an equipment malfunction. Recent space flight experience, evaluations of the requirements in crew habitable areas, and lessons learned (Goodman 2003; Allen and Goodman 2003; Pilkinton 2003; Grosveld et al. 2003) show the importance of maintaining an acceptable acoustics environment. This is best accomplished by having a high-quality set of limits/requirements early in the program, by "designing in" acoustics in the development of hardware and systems, and by monitoring, testing, and verifying the levels to ensure that they are acceptable.

  11. Acoustical considerations for secondary uses of government facilities

    NASA Astrophysics Data System (ADS)

    Evans, Jack B.

    2003-10-01

    Government buildings are, by their nature, public and multi-functional. Whether in meetings, presentations, documentation processing, work instructions, or dispatch, speech communications are critical. Full-time occupancy facilities may require sleep or rest areas adjacent to active spaces. Rooms designed for some other primary use may be used for public assembly, receptions, or meetings. In addition, environmental noise impacts to the building or from the building should be considered, especially where the building is adjacent to hospitals, hotels, apartments, or other noise-sensitive urban land uses. Acoustical criteria and design parameters for reverberation, background noise, and sound isolation should enhance speech intelligibility and privacy. This presentation looks at unusual spaces and unexpected uses of spaces with regard to room acoustics and noise control. Examples of various spaces will be discussed, including an atrium used for reception and assembly, a multi-jurisdictional (911) emergency control center, frequent or long-duration use of emergency generators, renovations of historically significant buildings, and the juxtaposition of acoustically incompatible functions. Brief case histories of acoustical requirements, constraints, and design solutions will be presented, including acoustical measurements, plan illustrations, and photographs. Acoustical criteria for secondary functional uses of spaces will be proposed.

  12. Subscale Acoustic Testing: Comparison of ALAT and ASMAT

    NASA Technical Reports Server (NTRS)

    Houston, Janice D.; Counter, Douglas

    2014-01-01

    The liftoff phase induces acoustic loading over a broad frequency range for a launch vehicle. These external acoustic environments are then used in the prediction of internal vibration responses of the vehicle and components which result in the qualification levels. Thus, predicting these liftoff acoustic environments is critical to the design requirements of any launch vehicle. If there is a significant amount of uncertainty in the predictions or if acoustic mitigation options must be implemented, a subscale acoustic test is a feasible pre-launch test option. This paper compares the acoustic measurements of two different subscale tests: the 2% Ares Liftoff Acoustic Test conducted at Stennis Space Center and the 5% Ares I Scale Model Acoustic Test conducted at Marshall Space Flight Center.

  13. Detecting Nasal Vowels in Speech Interfaces Based on Surface Electromyography

    PubMed Central

    Freitas, João; Teixeira, António; Silva, Samuel; Oliveira, Catarina; Dias, Miguel Sales

    2015-01-01

    Nasality is a very important characteristic of several languages, European Portuguese being one of them. This paper addresses the challenge of nasality detection in surface electromyography (EMG) based speech interfaces. We explore the existence of useful information about velum movement and also assess whether muscles deeper in the face and neck region can be measured using surface electrodes, and the best electrode locations to do so. The procedure we adopted uses Real-Time Magnetic Resonance Imaging (RT-MRI), collected from a set of speakers, to provide a method for interpreting the EMG data. By ensuring compatible data recording conditions and proper time alignment between the EMG and the RT-MRI data, we are able to accurately estimate the time when the velum moves and the type of movement when a nasal vowel occurs. The combination of these two sources revealed interesting and distinct characteristics in the EMG signal when a nasal vowel is uttered, which motivated a classification experiment. Overall results of this experiment provide evidence that it is possible to detect velum movement using sensors positioned below the ear, between the mastoid process and the mandible, in the upper neck region. In a frame-based classification scenario, error rates as low as 32.5% across all speakers and 23.4% for the best speaker have been achieved for nasal vowel detection. This outcome stands as an encouraging result, fostering the grounds for deeper exploration of the proposed approach as a promising route to the development of an EMG-based speech interface for languages with strong nasal characteristics. PMID:26069968

  14. Acoustic emission frequency discrimination

    NASA Technical Reports Server (NTRS)

    Sugg, Frank E. (Inventor); Graham, Lloyd J. (Inventor)

    1988-01-01

    In acoustic emission nondestructive testing, broadband frequency noise is distinguished from narrow banded acoustic emission signals, since the latter are valid events indicative of structural flaws in the material being examined. This is accomplished by separating out those signals which contain frequency components both within and beyond (either above or below) the range of valid acoustic emission events. Application to acoustic emission monitoring during nondestructive bond verification and proof loading of undensified tiles on the Space Shuttle Orbiter is considered.

  15. Stop-like modification of the dental fricative /ð/: An acoustic analysis

    PubMed Central

    Zhao, Sherry Y.

    2010-01-01

    This study concentrates on one of the commonly occurring phonetic variations in English: the stop-like modification of the dental fricative /ð/. The variant exhibits a drastic change from the canonical /ð/; the manner of articulation is changed from one that is fricative to one that is stop-like. Furthermore, the place of articulation of stop-like /ð/ has been a point of uncertainty, leading to confusion between stop-like /ð/ and /d/. In this study, acoustic and spectral moment measures were taken from 100 stop-like /ð/ and 102 /d/ tokens produced by 59 male and 23 female speakers in the TIMIT corpus. Data analysis indicated that stop-like /ð/ is significantly different from /d/ in burst amplitude, burst spectrum shape, burst peak frequency, second formant at following-vowel onset, and spectral moments. Moreover, the acoustic differences from /d/ are consistent with those expected for a dental stop-like /ð/. Automatic classification experiments involving these acoustic measures suggested that they are salient in distinguishing stop-like /ð/ from /d/. PMID:20968372
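The spectral moment measures used above treat a magnitude spectrum as a probability distribution over frequency; a minimal sketch follows, with an invented, symmetric burst spectrum for illustration.

```python
import math

def spectral_moments(freqs, mags):
    """First four spectral moments of a magnitude spectrum, treated as a
    probability distribution over frequency: centroid (mean), variance,
    skewness, and kurtosis."""
    total = sum(mags)
    probs = [m / total for m in mags]
    mean = sum(f * p for f, p in zip(freqs, probs))
    var = sum(((f - mean) ** 2) * p for f, p in zip(freqs, probs))
    sd = math.sqrt(var)
    skew = sum(((f - mean) ** 3) * p for f, p in zip(freqs, probs)) / sd ** 3
    kurt = sum(((f - mean) ** 4) * p for f, p in zip(freqs, probs)) / sd ** 4
    return mean, var, skew, kurt

# Hypothetical burst spectrum: frequency bins (Hz) and magnitudes
freqs = [1000, 2000, 3000, 4000, 5000]
mags = [0.2, 0.5, 1.0, 0.5, 0.2]
m1, m2, m3, m4 = spectral_moments(freqs, mags)
```

A dental burst typically differs from an alveolar one in where the spectral energy is concentrated, which is why the centroid and skewness help separate stop-like /ð/ from /d/.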

  16. Nonlinear frequency compression: Influence of start frequency and input bandwidth on consonant and vowel recognition

    PubMed Central

    Alexander, Joshua M.

    2016-01-01

    By varying parameters that control nonlinear frequency compression (NFC), this study examined how different ways of compressing inaudible mid- and/or high-frequency information at lower frequencies influences perception of consonants and vowels. Twenty-eight listeners with mild to moderately severe hearing loss identified consonants and vowels from nonsense syllables in noise following amplification via a hearing aid simulator. Low-pass filtering and the selection of NFC parameters fixed the output bandwidth at a frequency representing a moderately severe (3.3 kHz, group MS) or a mild-to-moderate (5.0 kHz, group MM) high-frequency loss. For each group (n = 14), effects of six combinations of NFC start frequency (SF) and input bandwidth [by varying the compression ratio (CR)] were examined. For both groups, the 1.6 kHz SF significantly reduced vowel and consonant recognition, especially as CR increased; whereas, recognition was generally unaffected if SF increased at the expense of a higher CR. Vowel recognition detriments for group MS were moderately correlated with the size of the second formant frequency shift following NFC. For both groups, significant improvement (33%–50%) with NFC was confined to final /s/ and /z/ and to some VCV tokens, perhaps because of listeners' limited exposure to each setting. No set of parameters simultaneously maximized recognition across all tokens. PMID:26936574
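The interplay of start frequency (SF) and compression ratio (CR) above can be illustrated with one common formulation of an NFC input-output map, in which log-frequency distance above SF is divided by CR; this is an illustrative assumption, not necessarily the exact rule implemented by the hearing aid simulator in the study.

```python
def nfc_map(f_in, sf, cr):
    """Sketch of a nonlinear frequency compression map: frequencies at or
    below the start frequency (sf, Hz) pass unchanged; above it, the
    log-frequency distance from sf is divided by the compression ratio (cr)."""
    if f_in <= sf:
        return f_in
    return sf * (f_in / sf) ** (1.0 / cr)

# With sf = 1600 Hz and cr = 2, an 8 kHz input lands below 4 kHz
out = nfc_map(8000.0, 1600.0, 2.0)
```

Under this map, lowering SF (e.g., to 1.6 kHz) pulls the compressed region down into the formant range, which is consistent with the vowel recognition penalties the study attributes to second formant shifts.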

  17. Consonants and Vowels: Different Roles in Early Language Acquisition

    ERIC Educational Resources Information Center

    Hochmann, Jean-Remy; Benavides-Varela, Silvia; Nespor, Marina; Mehler, Jacques

    2011-01-01

    Language acquisition involves both acquiring a set of words (i.e. the lexicon) and learning the rules that combine them to form sentences (i.e. syntax). Here, we show that consonants are mainly involved in word processing, whereas vowels are favored for extracting and generalizing structural relations. We demonstrate that such a division of labor…

  18. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
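The spectral evaluation of spatial derivatives described above can be illustrated in one dimension; this sketch shows only the Fourier-derivative idea (multiply the spectrum by ik and invert), not the full staggered-grid, temporally corrected k-space scheme.

```python
import numpy as np

def spectral_derivative(u, dx):
    """Spectral (Fourier) evaluation of du/dx on a periodic grid:
    transform, multiply by i*k, and inverse transform."""
    n = len(u)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# The derivative of sin(x) on a periodic grid should match cos(x)
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
du = spectral_derivative(np.sin(x), x[1] - x[0])
```

For band-limited fields the result is accurate to machine precision, which is why k-space methods tolerate far coarser grids than finite-difference schemes; in three dimensions the same operation becomes a 3D FFT, the all-to-all communication bottleneck the abstract mentions.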

  19. Plasticity of illusory vowel perception in Brazilian-Japanese bilinguals.

    PubMed

    Parlato-Oliveira, Erika; Christophe, Anne; Hirose, Yuki; Dupoux, Emmanuel

    2010-06-01

    Previous research shows that monolingual Japanese and Brazilian Portuguese listeners perceive illusory vowels (/u/ and /i/, respectively) within illegal sequences of consonants. Here, several populations of Japanese-Brazilian bilinguals are tested, using an explicit vowel identification task (experiment 1) and an implicit categorization and sequence recall task (experiment 2). Overall, second-generation immigrants, who first acquired Japanese at home and Brazilian Portuguese during childhood (after age 4), showed a typical Brazilian Portuguese pattern of results (as did simultaneous bilinguals, who were exposed to both languages from birth). In contrast, late bilinguals, who acquired their second language in adulthood, exhibited a pattern corresponding to their native language. In addition, an influence of the second language was observed in the explicit task of experiment 1, but not in the implicit task used in experiment 2, suggesting that second-language experience affects mostly explicit or metalinguistic skills. These results are compared to other studies of phonological representations in adopted children or immigrants, and discussed in relation to the role of age of acquisition and sociolinguistic factors.

  20. Acoustic and laryngographic measures of the laryngeal reflexes of linguistic prominence and vocal effort in German

    PubMed Central

    Mooshammer, Christine

    2010-01-01

    This study uses acoustic and physiological measures to compare laryngeal reflexes of global changes in vocal effort to the effects of modulating such aspects of linguistic prominence as sentence accent, induced by focus variation, and word stress. Seven speakers were recorded by using a laryngograph. The laryngographic pulses were preprocessed to normalize time and amplitude. The laryngographic pulse shape was quantified using open and skewness quotients and also by applying a functional version of principal component analysis. Acoustic measures included the acoustic open quotient and spectral balance in the vowel /e/ during the test syllable. The open quotient and the laryngographic pulse shape indicated a significantly shorter open phase for loud speech than for soft speech. Similar results were found for lexical stress, suggesting that lexical stress and loud speech are produced with a similar voice source mechanism. Stressed syllables were distinguished from unstressed syllables by their open phase and pulse shape, even in the absence of sentence accent. Evidence for laryngeal involvement in signaling focus, independent of fundamental frequency changes, was not as consistent across speakers. Acoustic results on various spectral balance measures were generally much less consistent than results from the laryngographic data. PMID:20136226
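
    The open quotient reported above is, in essence, the fraction of each glottal cycle spent in the open phase. A common way to estimate it from a laryngographic pulse is a criterion-level method: count the samples that sit below a threshold placed between the cycle minimum and maximum. The sketch below is illustrative only; the 35% criterion level and the toy cycle are assumptions, not the preprocessing used in the study.

```python
def open_quotient(egg_cycle, criterion=0.35):
    """Fraction of one laryngographic cycle spent 'open': samples where the
    signal sits below a criterion level between the cycle min and max."""
    lo, hi = min(egg_cycle), max(egg_cycle)
    level = lo + criterion * (hi - lo)
    return sum(1 for v in egg_cycle if v < level) / len(egg_cycle)

# A caricature of one normalized cycle: 6 of 10 samples below the level.
cycle = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
print(open_quotient(cycle))
```

    Shorter open phases, as reported for loud and for lexically stressed speech, would show up directly as smaller values of this quotient.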

  1. Minke whale song, spacing, and acoustic communication on the Great Barrier Reef, Australia

    NASA Astrophysics Data System (ADS)

    Gedamke, Jason

    An inquisitive population of minke whale (Balaenoptera acutorostrata) that concentrates on the Great Barrier Reef during its suspected breeding season offered a unique opportunity to conduct a multi-faceted study of a little-known Balaenopteran species' acoustic behavior. Chapter one investigates whether the minke whale is the source of an unusual, complex, and stereotyped recorded sound, the "star-wars" vocalization. A hydrophone array was towed from a vessel to record sounds from circling whales for subsequent localization of sound sources. These acoustic locations were matched with shipboard and in-water observations of the minke whale, demonstrating that the minke whale was the source of this unusual sound. Spectral and temporal features of this sound and the source levels at which it is produced are described. The repetitive "star-wars" vocalization appears similar to the songs of other whale species and has characteristics consistent with reproductive advertisement displays. Chapter two investigates whether song (i.e., the "star-wars" vocalization) has a spacing function through passive monitoring of singer spatial patterns with a moored five-sonobuoy array. Active song playback experiments to singers were also conducted to further test song function. Through a nearest-neighbor analysis and animated tracks of singer movements, this study demonstrated that singers naturally maintain spatial separation. In response to active song playbacks, singers generally moved away and repeated song more quickly, suggesting that song repetition interval may help regulate spatial interaction and singer separation. These results further indicate the Great Barrier Reef may be an important reproductive habitat for this species. Chapter three investigates whether song is part of a potentially graded repertoire of acoustic signals. Utilizing both vessel-based recordings and remote recordings from the sonobuoy array, temporal and spectral features, source levels, and

  2. Directional radiation pattern in structural-acoustic coupled system

    NASA Astrophysics Data System (ADS)

    Seo, Hee-Seon; Kim, Yang-Hann

    2005-07-01

    In this paper we demonstrate the possibility of designing a radiator using structural-acoustic interaction by predicting the pressure distribution and radiation pattern of a structural-acoustic coupled system composed of a wall and two spaces. When a wall separates spaces, its role in transmitting the acoustic characteristics between them is important. The spaces can be categorized as bounded finite space and unbounded infinite space. The wall considered in this study comprises two plates and an opening, and it separates one space that is highly reverberant from another that is unbounded without any reflection. This rather hypothetical circumstance is selected to study the general coupling problem between finite and infinite acoustic domains. We developed an equation that predicts the energy distribution and energy flow in the two spaces separated by a wall, and computational examples are presented. Three typical radiation patterns are presented: steered, focused, and omnidirectional. A radiation pattern designed using an optimal design algorithm is also presented.

  3. Hemispheric Differences in the Effects of Context on Vowel Perception

    ERIC Educational Resources Information Center

    Sjerps, Matthias J.; Mitterer, Holger; McQueen, James M.

    2012-01-01

    Listeners perceive speech sounds relative to context. Contextual influences might differ over hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets and both speech and non-speech contexts to listeners' right or left…

  4. Validation of the Acoustic Voice Quality Index Version 03.01 and the Acoustic Breathiness Index in the Spanish language.

    PubMed

    Delgado Hernández, Jonathan; León Gómez, Nieves M; Jiménez, Alejandra; Izquierdo, Laura M; Barsties V Latoszek, Ben

    2018-05-01

    The aim of this study was to validate the Acoustic Voice Quality Index 03.01 (AVQIv3) and the Acoustic Breathiness Index (ABI) in the Spanish language. Concatenated voice samples of continuous speech (cs) and sustained vowel (sv) from 136 subjects with dysphonia and 47 vocally healthy subjects were perceptually judged for overall voice quality and breathiness severity. First, to reach a higher level of ecological validity, the proportions of cs and sv were equalized with respect to the duration of the 3-second sv part and the voiced cs part, respectively. Second, concurrent validity and diagnostic accuracy were verified. Ratings from five experts showed moderate reliability for overall voice quality and breathiness severity. It was found that standardizing the cs part at 33 syllables, which represents 3 seconds of voiced cs, allows the two speech tasks to be equalized. A strong correlation was revealed between the AVQIv3 and overall voice quality, and between the ABI and perceived breathiness severity. Additionally, the best diagnostic outcome was identified at thresholds of 2.28 and 3.40 for the AVQIv3 and ABI, respectively. In the Spanish language, the AVQIv3 and ABI yielded valid and robust quantification of abnormal voice quality with respect to overall voice quality and breathiness severity.

  5. A one-year longitudinal study of English and Japanese vowel production by Japanese adults and children in an English-speaking setting

    PubMed Central

    Oh, Grace E.; Guion-Anderson, Susan; Aoyama, Katsura; Flege, James E.; Akahane-Yamada, Reiko; Yamada, Tsuneo

    2011-01-01

    The effect of age of acquisition on first- and second-language vowel production was investigated. Eight English vowels were produced by Native Japanese (NJ) adults and children as well as by age-matched Native English (NE) adults and children. Productions were recorded shortly after the NJ participants’ arrival in the USA and then one year later. In agreement with previous investigations [Aoyama, et al., J. Phon. 32, 233–250 (2004)], children were able to learn more, leading to higher accuracy than adults in a year’s time. Based on the spectral quality and duration comparisons, NJ adults had more accurate production at Time 1, but showed no improvement over time. The NJ children’s productions, however, showed significant differences from the NE children’s for English “new” vowels /ɪ/, /ε/, /ɑ/, /ʌ/ and /ʊ/ at Time 1, but produced all eight vowels in a native-like manner at Time 2. An examination of NJ speakers’ productions of Japanese /i/, /a/, /u/ over time revealed significant changes for the NJ Child Group only. Japanese /i/ and /a/ showed changes in production that can be related to second language (L2) learning. The results suggest that L2 vowel production is affected importantly by age of acquisition and that there is a dynamic interaction, whereby the first and second language vowels affect each other. PMID:21603058

  6. Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners With Hearing Impairment Using Hearing Aids.

    PubMed

    Moradi, Shahram; Lidestam, Björn; Danielsson, Henrik; Ng, Elaine Hoi Ning; Rönnberg, Jerker

    2017-09-18

    We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels (in terms of isolation points, the shortest time required for correct identification of a speech stimulus; accuracy; and cognitive demands) in listeners with hearing impairment using hearing aids. The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity. Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation. Consonants and vowels differed in terms of the benefits afforded by their associative visual cues, as indicated by the degree of audiovisual benefit and reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.

  7. Acoustic Sources of Accent in Second Language Japanese Speech.

    PubMed

    Idemaru, Kaori; Wei, Peipei; Gubbins, Lucy

    2018-05-01

    This study reports an exploratory analysis of the acoustic characteristics of second language (L2) speech which give rise to the perception of a foreign accent. Japanese speech samples were collected from American English and Mandarin Chinese speakers (n = 16 in each group) studying Japanese. The L2 participants and native speakers (n = 10) provided speech samples modeled on six short sentences. Segmental (vowels and stops) and prosodic features (rhythm, tone, and fluency) were examined. Native Japanese listeners (n = 10) rated the samples with regard to degrees of foreign accent. The analyses predicting accent ratings based on the acoustic measurements indicated that one of the prosodic features in particular, tone (defined as high and low patterns of pitch accent and intonation in this study), plays an important role in robustly predicting accent ratings in L2 Japanese across the two first language (L1) backgrounds. These results were consistent with the prediction based on phonological and phonetic comparisons between Japanese and English, as well as Japanese and Mandarin Chinese. The results also revealed L1-specific predictors of perceived accent in Japanese. The findings of this study contribute to the growing literature that examines sources of perceived foreign accent.

  8. Characterization of Pump-Induced Acoustics in Space Launch System Main Propulsion System Liquid Hydrogen Feedline Using Airflow Test Data

    NASA Technical Reports Server (NTRS)

    Eberhart, C. J.; Snellgrove, L. M.; Zoladz, T. F.

    2015-01-01

    High intensity acoustic edgetones located upstream of the RS-25 Low Pressure Fuel Turbo Pump (LPFTP) were previously observed during Space Transportation System (STS) airflow testing of a model Main Propulsion System (MPS) liquid hydrogen (LH2) feedline mated to a modified LPFTP. MPS hardware has been adapted to mitigate the problematic edgetones as part of the Space Launch System (SLS) program. A follow-on airflow test campaign has subjected the adapted hardware to tests mimicking STS-era airflow conditions, and this manuscript describes acoustic environment identification and characterization born from the latest test results. Fluid dynamics responsible for driving discrete excitations were well reproduced using legacy hardware. The modified design was found insensitive to high intensity edgetone-like discretes over the bandwidth of interest to SLS MPS unsteady environments. Rather, the natural acoustics of the test article were observed to respond in a narrowband-random/mixed discrete manner to broadband noise thought to be generated by the flow field. The intensity of these responses was several orders of magnitude lower than that driven by edgetones.

  9. Acoustic characteristics of voice after severe traumatic brain injury.

    PubMed

    McHenry, M

    2000-07-01

    To describe the acoustic characteristics of voice in individuals with motor speech disorders after traumatic brain injury (TBI). Prospective study of 100 individuals with TBI based on consecutive referrals for motor speech evaluations. Subjects were audio tape-recorded while producing sustained vowels and single word and sentence intelligibility tests. Laryngeal airway resistance was estimated, and voice quality was rated perceptually. None of the subjects evidenced vocal parameters within normal limits. The most frequently occurring abnormal parameter across subjects was amplitude perturbation, followed by voice turbulence index. Twenty-three percent of subjects evidenced deviation in all five parameters measured. The perceptual ratings of breathiness were significantly correlated with both the amplitude perturbation quotient and the noise-to-harmonics ratio. Vocal quality deviation is common in motor speech disorders after TBI and may impact intelligibility.
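
    Amplitude perturbation measures of the kind reported above quantify cycle-to-cycle variability in the peak amplitudes of a sustained vowel. The sketch below shows the general idea as a simple percent-shimmer calculation; it is not the clinical amplitude perturbation quotient used in the study, and the input amplitudes are made up for illustration.

```python
def shimmer_percent(peak_amps):
    """Mean absolute cycle-to-cycle amplitude difference, expressed as a
    percentage of the mean peak amplitude: a basic amplitude-perturbation
    measure."""
    diffs = [abs(a - b) for a, b in zip(peak_amps, peak_amps[1:])]
    mean_amp = sum(peak_amps) / len(peak_amps)
    return 100.0 * (sum(diffs) / len(diffs)) / mean_amp

# Per-cycle peak amplitudes extracted from a sustained vowel (made-up values).
print(shimmer_percent([1.00, 1.02, 0.98, 1.01, 0.99]))
```

    Larger values indicate a less stable voice source, which is consistent with the correlation between amplitude perturbation and perceived breathiness reported in the abstract.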

  10. Direction selective structural-acoustic coupled radiator

    NASA Astrophysics Data System (ADS)

    Seo, Hee-Seon; Kim, Yang-Hann

    2005-04-01

    This paper presents a method of designing a structural-acoustic coupled radiator that can emit sound in a desired direction. The structural-acoustic coupled system consists of acoustic spaces and a wall. The wall comprises two plates and an opening, and it separates one space that is highly reverberant from another that is unbounded without any reflection. An equation is developed that predicts the energy distribution and energy flow in the two spaces separated by the wall, and computational examples, including near-field acoustic characteristics, are presented. To design the directional coupled radiator, a Pareto optimization method is adopted. The objective is to maximize the radiation power along the main axis while minimizing the side lobe level, subject to the direction of the main axis and the dimensions of the wall geometry. Pressure and intensity distributions of the designed radiator are also presented.

  11. Comparison of Pitch Strength With Perceptual and Other Acoustic Metric Outcome Measures Following Medialization Laryngoplasty.

    PubMed

    Rubin, Adam D; Jackson-Menaldi, Cristina; Kopf, Lisa M; Marks, Katherine; Skeffington, Jean; Skowronski, Mark D; Shrivastav, Rahul; Hunter, Eric J

    2018-05-14

    The diagnoses of voice disorders, as well as treatment outcomes, are often tracked using visual (eg, stroboscopic images), auditory (eg, perceptual ratings), objective (eg, from acoustic or aerodynamic signals), and patient report (eg, Voice Handicap Index and Voice-Related Quality of Life) measures. However, many of these measures are known to have low to moderate sensitivity and specificity for detecting changes in vocal characteristics, including vocal quality. The objective of this study was to compare changes in estimated pitch strength (PS) with other conventionally used acoustic measures based on the cepstral peak prominence (smoothed cepstral peak prominence, cepstral spectral index of dysphonia, and acoustic voice quality index), and with clinical judgments of voice quality (GRBAS [grade, roughness, breathiness, asthenia, strain] scale), following laryngeal framework surgery. This study involved post hoc analysis of recordings from 22 patients before and after treatment (thyroplasty and behavioral therapy). Sustained vowels and connected speech were analyzed using the objective measures (PS, smoothed cepstral peak prominence, cepstral spectral index of dysphonia, and acoustic voice quality index), and these results were compared with mean auditory-perceptual ratings by expert clinicians using the GRBAS scale. All four acoustic measures changed significantly in the direction that usually indicates improved voice quality following treatment (P < 0.005). Grade and breathiness correlated the strongest with the acoustic measures (|r| ~0.7), with strain being the least correlated. Acoustic analysis of running speech correlates highly with judged ratings. PS is a robust, easily obtained acoustic measure of voice quality that could be useful in the clinical environment to follow treatment of voice disorders. Copyright © 2018. Published by Elsevier Inc.
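
    The cepstral measures compared above all rest on one core computation: the height of the cepstral peak, searched within the plausible F0 quefrency range, above a regression line fit to the cepstrum. The following is a deliberately simplified sketch of that computation, not the validated smoothed-CPP implementation; window choice, the regression in dB, and the search band are all simplifying assumptions.

```python
import numpy as np

def cpp(x, fs, f0min=60.0, f0max=330.0):
    """Height of the cepstral peak (searched in the plausible F0 quefrency
    band) above a straight line fit to the cepstrum over that band."""
    log_mag = np.log(np.abs(np.fft.fft(x * np.hanning(x.size))) + 1e-12)
    cep = np.real(np.fft.ifft(log_mag))            # real cepstrum
    lo, hi = int(fs / f0max), int(fs / f0min)      # quefrency band in samples
    q = np.arange(x.size) / fs                     # quefrency in seconds
    peak = lo + np.argmax(cep[lo:hi])
    slope, intercept = np.polyfit(q[lo:hi], cep[lo:hi], 1)
    return cep[peak] - (slope * q[peak] + intercept)

fs = 8000
pulses = np.zeros(4000)
pulses[::80] = 1.0                                 # 100 Hz pulse-train stand-in
noise = np.random.default_rng(1).standard_normal(4000)
print(cpp(pulses, fs), cpp(noise, fs))             # periodic signal scores higher
```

    A strongly periodic (less dysphonic) voice yields a sharper cepstral peak and therefore a larger prominence, which is why these measures track perceived grade and breathiness.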

  12. Speech training alters consonant and vowel responses in multiple auditory cortex fields

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.

    2015-01-01

    Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927

  13. Acoustic analysis of speech under stress.

    PubMed

    Sondhi, Savita; Khan, Munna; Vijay, Ritu; Salhan, Ashok K; Chouhan, Satish

    2015-01-01

    When a person is emotionally charged, stress can be discerned in their voice. This paper presents a simplified and non-invasive approach to detecting psycho-physiological stress by monitoring acoustic modifications during a stressful conversation. The voice database consists of audio clips from eight different popular FM broadcasts in which the host of the show vexes subjects who are otherwise unaware of the charade. The audio clips are obtained from real-life stressful conversations (no simulated emotions). Analysis is done using the PRAAT software to evaluate the mean fundamental frequency (F0) and formant frequencies (F1, F2, F3, F4) in both the neutral and stressed states. Results suggest that F0 increases with stress, whereas formant frequencies decrease with stress. Comparison of the Fourier and chirp spectra of short vowel segments shows that for relaxed speech the two spectra are similar; for stressed speech, however, they differ in the high-frequency range due to increased pitch modulation.
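
    Mean F0 estimation of the kind performed here is, at its core, a search for the strongest autocorrelation peak within a plausible pitch-period range. The sketch below shows that core idea; it is not PRAAT's actual algorithm, which adds windowing, candidate costs, and interpolation, and the pitch floor and ceiling values are illustrative defaults.

```python
import numpy as np

def f0_autocorrelation(x, fs, f0min=75.0, f0max=500.0):
    """Pick the lag with the strongest autocorrelation inside the plausible
    pitch-period range and convert that lag to Hz."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode='full')[x.size - 1:]  # non-negative lags
    lo, hi = int(fs / f0max), int(fs / f0min) + 1
    return fs / (lo + np.argmax(ac[lo:hi]))

# A pure 200 Hz tone: the period (40 samples at 8 kHz) is recovered exactly.
fs = 8000
t = np.arange(4000) / fs
print(f0_autocorrelation(np.sin(2 * np.pi * 200.0 * t), fs))
```

    On real speech the same search is run frame by frame, and the per-frame estimates are averaged to obtain the mean F0 values compared between neutral and stressed states.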

  14. Finding Words in a Language that Allows Words without Vowels

    ERIC Educational Resources Information Center

    El Aissati, Abder; McQueen, James M.; Cutler, Anne

    2012-01-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring "win" in "twin" because "t" cannot be a word). However, the constraint would be counter-productive in…

  15. An investigation of acoustic noise requirements for the Space Station centrifuge facility

    NASA Technical Reports Server (NTRS)

    Castellano, Timothy

    1994-01-01

    Acoustic noise emissions from the Space Station Freedom (SSF) centrifuge facility hardware represent a potential technical and programmatic risk to the project. The SSF program requires that no payload exceed a Noise Criterion 40 (NC-40) noise contour in any octave band between 63 Hz and 8 kHz as measured 2 feet from the equipment item. Past experience with life science experiment hardware indicates that this requirement will be difficult to meet. The crew has found noise levels on Spacelab flights to be unacceptably high. Many past Ames Spacelab life science payloads have required waivers because of excessive noise. The objectives of this study were (1) to develop an understanding of acoustic measurement theory, instruments, and technique, and (2) to characterize the noise emission of analogous Facility components and previously flown flight hardware. Test results from existing hardware were reviewed and analyzed. Measurements of the spectral and intensity characteristics of fans and other rotating machinery were performed. The literature was reviewed and contacts were made with NASA and industry organizations concerned with or performing research on noise control.

  16. Portuguese Cistercian Churches - An acoustic legacy

    NASA Astrophysics Data System (ADS)

    Rodrigues, Fabiel G.; Lanzinha, João C. G.; Martins, Ana M. T.

    2017-10-01

    The Cistercian Order (11th century) stands out as an apologist of simplicity and austerity of space. According to the Order of Cîteaux, only in an austere space, free of distractions, is true spiritual contemplation achieved. The Order was an aggregating and consolidating pole during the Christian Reconquest. Thus, as with other religious orders, Cîteaux has a vast heritage legacy. This heritage bears witness not only to historical but also to social, political, and spiritual evolution. The legacy rests on the key principles of an austere liturgy, whose requirements were initially based on simplicity of worship and of the connection between man and God. Later, these requirements allowed the liturgy itself, and its relation with believers, to develop. Consequently, an empirical connection can be concisely established between Cistercian churches and the acoustic conditioning of these spaces. This outcome is fundamental to understanding the connection between the liturgy and the conception of Cistercian churches, as well as the constructed space and its history. An analysis of these principles is therefore essential to establish the relation between acoustics and religious building design throughout history. It is also a means of understanding the knowledge of acoustic principles that the Cistercian Order bequeathed to Portugal. This paper presents an empirical approach to the acoustics of Cistercian monastic churches. These are the spaces where the greatest acoustic efforts are concentrated and where the liturgy reaches its greatest importance. On the other hand, Portugal is a country with an important Cistercian legacy spanning several periods of history. Consequently, the Portuguese Cistercian monastic churches are representative of the development of the liturgy, the design of spaces, and the acoustic requirements of their churches since the 12th century until the 21st century and it is of

  17. Effects of hand gestures on auditory learning of second-language vowel length contrasts.

    PubMed

    Hirata, Yukari; Kelly, Spencer D; Huang, Jessica; Manansala, Michael

    2014-12-01

    Research has shown that hand gestures affect comprehension and production of speech at semantic, syntactic, and pragmatic levels for both native language and second language (L2). This study investigated a relatively less explored question: Do hand gestures influence auditory learning of an L2 at the segmental phonology level? To examine auditory learning of phonemic vowel length contrasts in Japanese, 88 native English-speaking participants took an auditory test before and after one of the following 4 types of training in which they (a) observed an instructor in a video speaking Japanese words while she made a syllabic-rhythm hand gesture, (b) produced this gesture with the instructor, (c) observed the instructor speaking those words while she made a moraic-rhythm hand gesture, or (d) produced the moraic-rhythm gesture with the instructor. All of the training types yielded similar auditory improvement in identifying vowel length contrasts. However, observing the syllabic-rhythm hand gesture yielded the most balanced improvement between word-initial and word-final vowels and between slow and fast speaking rates. The overall effect of hand gesture on learning of segmental phonology is limited. Implications for theories of hand gesture are discussed in terms of the role it plays at different linguistic levels.

  18. Study for Identification of Beneficial Uses of Space (BUS). Volume 2: Technical report. Book 4: Development and business analysis of space processed surface acoustic wave devices

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Preliminary development plans, analysis of required R and D and production resources, the costs of such resources, and, finally, the potential profitability of a commercial space processing opportunity for the production of very high frequency surface acoustic wave devices are presented.

  19. Comparative evaluation of Space Transportation System (STS)-3 flight and acoustic test random vibration response of the OSS-1 payload

    NASA Technical Reports Server (NTRS)

    On, F. J.

    1983-01-01

    A comparative evaluation of the Space Transportation System (STS)-3 flight and acoustic test random vibration response of the Office of Space Science-1 (OSS-1) payload is presented. The results provide insight into the characteristics of vibroacoustic response of pallet payload components in the payload bay during STS flights.

  20. Lingual Electromyography Related to Tongue Movements in Swedish Vowel Production.

    ERIC Educational Resources Information Center

    Hirose, Hajime; And Others

    1979-01-01

    In order to investigate the articulatory dynamics of the tongue in the production of Swedish vowels, electromyographic (EMG) and X-ray microbeam studies were performed on a native Swedish subject. The EMG signals were used to obtain an averaged indication of the muscle activity of the tongue as a function of time. (NCR)

  1. Volpe Center Acoustics Facility time-space-position-information system differential global positioning system user's guide, version 1.2

    DOT National Transportation Integrated Search

    2000-07-01

    This document is a user's guide for the Volpe Center Acoustics Facility's (VCAF) Time-Space-Position-Information (TSPI) system. The VCAF TSPI system is a differential global positioning system (dGPS) which may be utilized for highly accurate vehi...

  2. Acoustic markers to differentiate gender in prepubescent children's speaking and singing voice.

    PubMed

    Guzman, Marco; Muñoz, Daniel; Vivero, Martin; Marín, Natalia; Ramírez, Mirta; Rivera, María Trinidad; Vidal, Carla; Gerhard, Julia; González, Catalina

    2014-10-01

    The investigation sought to determine whether any acoustic variable objectively differentiates gender in children with normal voices. A total of 30 children, 15 boys and 15 girls, with perceptually normal voices were examined. They were between 7 and 10 years old (mean: 8.1, SD: 0.7 years). Subjects were required to perform the following phonatory tasks: (1) phonate the sustained vowels [a:], [i:], [u:], (2) read a phonetically balanced text, and (3) sing a song. Acoustic analysis included long-term average spectrum (LTAS), fundamental frequency (F0), speaking fundamental frequency (SFF), equivalent continuous sound level (Leq), linear predictive coding (LPC) to obtain formant frequencies, perturbation measures, harmonics-to-noise ratio (HNR), and cepstral peak prominence (CPP). Auditory-perceptual analysis was performed by four blinded judges to determine gender. No significant gender-related differences were found for most acoustic variables. Perceptual assessment showed good intra- and inter-rater reliability for gender. The cepstrum for [a:], alpha ratio in text, shimmer for [i:], F3 in [a:], and F3 in [i:] were the parameters that composed the multivariate logistic regression model that best differentiated male and female children's voices. Since perceptual assessment reliably detected gender, it is likely that other acoustic markers (not evaluated in the present study) can differentiate gender more clearly. For example, gender-specific patterns of intonation may be a more accurate feature for differentiating gender in children's voices. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
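
    Formant frequencies obtained by linear predictive coding, as in this study, come from fitting an all-pole model to the speech signal and reading resonance frequencies off the pole angles. The following is a bare-bones autocorrelation-method sketch under stated assumptions (single resonator, no pre-emphasis or per-frame windowing, no bandwidth checks), not a production formant tracker.

```python
import numpy as np

def lpc_formants(x, fs, order=2):
    """Autocorrelation-method LPC: solve the Yule-Walker equations for the
    predictor coefficients, then convert complex-pole angles to Hz."""
    r = np.correlate(x, x, mode='full')[x.size - 1:][:order + 1]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])          # predictor coefficients
    poles = np.roots(np.concatenate(([1.0], -a)))
    return sorted(np.angle(p) * fs / (2 * np.pi) for p in poles if p.imag > 0)

# Impulse response of one resonator at 700 Hz: a stand-in for a formant.
fs, freq, radius = 8000, 700.0, 0.95
theta = 2 * np.pi * freq / fs
y = np.zeros(400)
for n in range(400):
    y[n] = ((1.0 if n == 0 else 0.0)
            + (2 * radius * np.cos(theta) * y[n - 1] if n >= 1 else 0.0)
            - (radius * radius * y[n - 2] if n >= 2 else 0.0))
print(lpc_formants(y, fs))                          # close to 700 Hz
```

    With higher model orders the same procedure returns several pole frequencies per frame, from which F1-F3 candidates such as those entered into the regression model are selected.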

  3. Autophonic Loudness of Singers in Simulated Room Acoustic Environments.

    PubMed

    Yadav, Manuj; Cabrera, Densil

    2017-05-01

    This paper aims to study the effect of room acoustics and phonemes on the perception of loudness of one's own voice (autophonic loudness) for a group of trained singers. For a set of five phonemes, 20 singers vocalized over several autophonic loudness ratios, while maintaining pitch constancy over extreme voice levels, within five simulated rooms. There were statistically significant differences in the slope of the autophonic loudness function (logarithm of autophonic loudness as a function of voice sound pressure level) for the five phonemes, with slopes ranging from 1.3 (/a:/) to 2.0 (/z/). There was no significant variation in the autophonic loudness function slopes with variations in room acoustics. The autophonic room response, which represents a systematic decrease in voice levels with increasing levels of room reflections, was also studied, with some evidence found in support. Overall, the average slope of the autophonic room response for the three corner vowels (/a:/, /i:/, and /u:/) was -1.4 for medium autophonic loudness. The findings relating to the slope of the autophonic loudness function are in agreement with the findings of previous studies where the sensorimotor mechanisms in regulating voice were shown to be more important in the perception of autophonic loudness than hearing of room acoustics. However, the role of room acoustics, in terms of the autophonic room response, is shown to be more complicated, requiring further inquiry. Overall, it is shown that autophonic loudness grows at more than twice the rate of loudness growth for sounds created outside the human body. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.

  4. 14 CFR 25.856 - Thermal/Acoustic insulation materials.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Thermal/Acoustic insulation materials. 25....856 Thermal/Acoustic insulation materials. (a) Thermal/acoustic insulation material installed in the.../acoustic insulation materials (including the means of fastening the materials to the fuselage) installed in...

  5. Imaging of heart acoustic based on the sub-space methods using a microphone array.

    PubMed

    Moghaddasi, Hanie; Almasganj, Farshad; Zoroufian, Arezoo

    2017-07-01

    Heart disease is one of the leading causes of death around the world. The phonocardiogram (PCG) is an important bio-signal which represents the acoustic activity of the heart, typically without any spatiotemporal information about the involved acoustic sources. The aim of this study is to analyze the PCG using a microphone array, with which the internal sound sources of the heart can also be localized. This paper proposes a modality by which the locations of the active sources in the heart can be investigated during a cardiac cycle. To this end, an array of six microphones placed on the human chest is employed as the recording setup. The Group Delay MUSIC algorithm, a sub-space-based localization method, is then used to estimate the locations of the heart sources in different phases of the PCG. With the Group Delay MUSIC algorithm, we achieved a mean error of 0.14 cm for the sources of a first heart sound (S1) simulator and 0.21 cm for the sources of a second heart sound (S2) simulator. The acoustical diagrams created for human subjects show distinct patterns in various phases of the cardiac cycle, such as the first and second heart sounds. Moreover, the evaluated source locations for the heart valves match those obtained via four-dimensional (4D) echocardiography applied to a real human case. Imaging of the heart's acoustic map presents a new outlook on the acoustic properties of the cardiovascular system and valve disorders and could thereby, in the future, be used as a new diagnostic tool. Copyright © 2017. Published by Elsevier B.V.
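    The record above applies a sub-space method (Group Delay MUSIC) to microphone-array data. As a rough sketch of the underlying sub-space idea only (plain narrowband MUSIC for direction of arrival on a simulated six-element linear array, not the Group Delay variant or the paper's 3D near-field setup): the eigenvectors of the sensor covariance split into signal and noise subspaces, and source directions appear where steering vectors are orthogonal to the noise subspace.

```python
import numpy as np

rng = np.random.default_rng(1)
mics, snapshots = 6, 400     # six sensors, as in the paper's array
spacing = 0.5                # element spacing in wavelengths (assumed)
true_deg = 25.0              # hypothetical source direction

def steering(deg):
    # Array response of a uniform linear array toward angle `deg`.
    phase = 2j * np.pi * spacing * np.arange(mics) * np.sin(np.deg2rad(deg))
    return np.exp(phase)

# Simulated data: one narrowband source plus sensor noise.
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
noise = 0.1 * (rng.normal(size=(mics, snapshots))
               + 1j * rng.normal(size=(mics, snapshots)))
X = np.outer(steering(true_deg), s) + noise

R = X @ X.conj().T / snapshots        # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)  # ascending eigenvalues
En = eigvecs[:, :-1]                  # noise subspace (1 source assumed)

# MUSIC pseudospectrum: peaks where the steering vector is (nearly)
# orthogonal to the noise subspace.
grid = np.arange(-90.0, 90.0, 0.5)
spectrum = np.array(
    [1.0 / np.linalg.norm(En.conj().T @ steering(d)) ** 2 for d in grid])
estimate = grid[np.argmax(spectrum)]
print(f"estimated direction: {estimate:.1f} deg")
```

    Localizing sources in centimeters on the chest, as the study does, additionally requires a near-field propagation model; the far-field sketch above only shows why the noise-subspace projection yields sharp peaks.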

  6. Application of acoustic surface wave filter-beam lead component technology to deep space multimission hardware design

    NASA Technical Reports Server (NTRS)

    Kermode, A. W.; Boreham, J. F.

    1974-01-01

    This paper discusses the utilization of acoustic surface wave filters, beam lead components, and thin-film metallized ceramic substrate technology as applied to the design of a deep-space, long-life, multimission transponder. The specific design presented is for a second mixer local oscillator module, operating at frequencies as high as 249 MHz.

  7. Broadband acoustic focusing by Airy-like beams based on acoustic metasurfaces

    NASA Astrophysics Data System (ADS)

    Chen, Di-Chao; Zhu, Xing-Feng; Wei, Qi; Wu, Da-Jian; Liu, Xiao-Jun

    2018-01-01

    An acoustic metasurface (AM) composed of space-coiling subunits is proposed to generate acoustic Airy-like beams (ALBs) by manipulating the transmitted acoustic phase. The self-accelerating, self-healing, and non-diffracting features of ALBs are demonstrated using finite element simulations. We further employ two symmetrical AMs to realize two symmetrical ALBs, resulting in highly efficient acoustic focusing. At the working frequency, the focal intensity can reach roughly 20 times that of the incident wave. It is found that the highly efficient acoustic focusing can circumvent obstacles in the propagating path and can be maintained in a broad frequency bandwidth. In addition, simply changing the separation between the two AMs can modulate the focal length of the proposed AM lens. ALBs generated by AMs and the corresponding AM lens may benefit applications in medical ultrasound imaging, biomedical therapy, and particle trapping and manipulation.

  8. The Processing of Consonants and Vowels during Letter Identity and Letter Position Assignment in Visual-Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Vergara-Martinez, Marta; Perea, Manuel; Marin, Alejandro; Carreiras, Manuel

    2011-01-01

    Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event related potentials (ERPs) were recorded while participants read words and pseudowords in…

  9. Contribution of the supraglottic larynx to the vocal product: imaging and acoustic analysis

    NASA Astrophysics Data System (ADS)

    Gracco, L. Carol

    1996-04-01

    Horizontal supraglottic laryngectomy is a surgical procedure to remove a mass lesion located in the region of the pharynx superior to the true vocal folds. In contrast to full or partial laryngectomy, patients who undergo horizontal supraglottic laryngectomy often present with little or no involvement of the true vocal folds. This population provides an opportunity to examine the acoustic consequences of altering the pharynx while sparing the laryngeal sound source. Acoustic and magnetic resonance imaging (MRI) data were acquired in a group of four patients before and after supraglottic laryngectomy. Acoustic measures included the identification of vocal tract resonances and the fundamental frequency of vocal fold vibration. 3D reconstructions of the pharyngeal portion of each subject's vocal tract were made from MRIs taken during phonation, and volume measures were obtained. These measures reveal a variable but often dramatic difference in the surgically altered area of the pharynx and changes in the formant frequencies of the vowel /i/ postsurgically. In some cases the presence of the tumor created a deviation from the expected formant values preoperatively, with postoperative values approaching normal. Patients who also underwent radiation treatment postsurgically tended to have greater constriction in the pharyngeal area of the vocal tract.

  10. Coordinated Control of Acoustical Field of View and Flight in Three-Dimensional Space for Consecutive Capture by Echolocating Bats during Natural Foraging.

    PubMed

    Sumiya, Miwa; Fujioka, Emyo; Motoi, Kazuya; Kondo, Masaru; Hiryu, Shizuko

    2017-01-01

    Echolocating bats prey upon small moving insects in the dark using sophisticated sonar techniques. The direction and directivity pattern of the ultrasound broadcast of these bats are important factors that affect their acoustical field of view, allowing us to investigate how the bats control their acoustic attention (pulse direction) for advanced flight maneuvers. The purpose of this study was to understand the behavioral strategies of acoustical sensing of wild Japanese house bats Pipistrellus abramus in three-dimensional (3D) space during consecutive capture flights. The results showed that when the bats successively captured multiple airborne insects in short time intervals (less than 1.5 s), they maintained not only the immediate prey but also the subsequent one simultaneously within the beam widths of the emitted pulses in both horizontal and vertical planes before capturing the immediate one. This suggests that echolocating bats maintain multiple prey within their acoustical field of view by a single sensing using a wide directional beam while approaching the immediate prey, instead of frequently shifting acoustic attention between multiple prey. We also numerically simulated the bats' flight trajectories when approaching two prey successively to investigate the relationship between the acoustical field of view and the prey direction for effective consecutive captures. This simulation demonstrated that acoustically viewing both the immediate and the subsequent prey simultaneously increases the success rate of capturing both prey, which is considered to be one of the basic axes of efficient route planning for consecutive capture flight. The bat's wide sonar beam can incidentally cover multiple prey while the bat forages in an area where the prey density is high. Our findings suggest that the bats then keep future targets within their acoustical field of view for effective foraging. In addition, in both the experimental results and the numerical simulations

  11. Coordinated Control of Acoustical Field of View and Flight in Three-Dimensional Space for Consecutive Capture by Echolocating Bats during Natural Foraging

    PubMed Central

    Sumiya, Miwa; Fujioka, Emyo; Motoi, Kazuya; Kondo, Masaru; Hiryu, Shizuko

    2017-01-01

    Echolocating bats prey upon small moving insects in the dark using sophisticated sonar techniques. The direction and directivity pattern of the ultrasound broadcast of these bats are important factors that affect their acoustical field of view, allowing us to investigate how the bats control their acoustic attention (pulse direction) for advanced flight maneuvers. The purpose of this study was to understand the behavioral strategies of acoustical sensing of wild Japanese house bats Pipistrellus abramus in three-dimensional (3D) space during consecutive capture flights. The results showed that when the bats successively captured multiple airborne insects in short time intervals (less than 1.5 s), they maintained not only the immediate prey but also the subsequent one simultaneously within the beam widths of the emitted pulses in both horizontal and vertical planes before capturing the immediate one. This suggests that echolocating bats maintain multiple prey within their acoustical field of view by a single sensing using a wide directional beam while approaching the immediate prey, instead of frequently shifting acoustic attention between multiple prey. We also numerically simulated the bats’ flight trajectories when approaching two prey successively to investigate the relationship between the acoustical field of view and the prey direction for effective consecutive captures. This simulation demonstrated that acoustically viewing both the immediate and the subsequent prey simultaneously increases the success rate of capturing both prey, which is considered to be one of the basic axes of efficient route planning for consecutive capture flight. The bat’s wide sonar beam can incidentally cover multiple prey while the bat forages in an area where the prey density is high. Our findings suggest that the bats then keep future targets within their acoustical field of view for effective foraging. In addition, in both the experimental results and the numerical simulations

  12. Vowel Diphthongs. Fun with Phonics! Book 10. Grades 1-2.

    ERIC Educational Resources Information Center

    Daniel, Claire

    This book provides hands-on activities for grades 1-2 that make phonics instruction easy and fun for teachers and children in the classroom. The book offers methods for practice, reinforcement, and assessment of phonics skills. A poem is used to introduce the phonics element of this book, vowel diphthongs. The poem is duplicated so children can…

  13. Effect of Audio vs. Video on Aural Discrimination of Vowels

    ERIC Educational Resources Information Center

    McCrocklin, Shannon

    2012-01-01

    Despite the growing use of media in the classroom, the effects of using of audio versus video in pronunciation teaching has been largely ignored. To analyze the impact of the use of audio or video training on aural discrimination of vowels, 61 participants (all students at a large American university) took a pre-test followed by two training…

  14. The Testing Behind the Test Facility: the Acoustic Design of the NASA Glenn Research Center's World-Class Reverberant Acoustic Test Facility

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; McNelis, Mark E.; Hozman, Aron D.; McNelis, Anne M.

    2010-01-01

    The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) is leading the design and build of the new world-class vibroacoustic test capabilities at the NASA GRC's Plum Brook Station in Sandusky, Ohio, U.S.A. Benham Companies, LLC is currently constructing modal, base-shake sine and reverberant acoustic test facilities to support the future testing needs of NASA's space exploration program. The large Reverberant Acoustic Test Facility (RATF) will be approximately 101,000 ft3 in volume and capable of achieving an empty chamber acoustic overall sound pressure level (OASPL) of 163 dB. This combination of size and acoustic power is unprecedented amongst the world's known active reverberant acoustic test facilities. The key to achieving the expected acoustic test spectra for a range of many NASA space flight environments in the RATF is the knowledge gained from a series of ground acoustic tests. Data was obtained from several NASA-sponsored test programs, including testing performed at the National Research Council of Canada's acoustic test facility in Ottawa, Ontario, Canada, and at the Redstone Technical Test Center acoustic test facility in Huntsville, Alabama, U.S.A. The majority of these tests were performed to characterize the acoustic performance of the modulators (noise generators) and representative horns that would be required to meet the desired spectra, as well as to evaluate possible supplemental gas jet noise sources. The knowledge obtained in each of these test programs enabled the design of the RATF sound generation system to confidently advance to its final acoustic design and subsequent ongoing construction.

  15. The Testing Behind The Test Facility: The Acoustic Design of the NASA Glenn Research Center's World-Class Reverberant Acoustic Test Facility

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; McNelis, Mark E.; McNelis, Anne M.

    2011-01-01

    The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) is leading the design and build of the new world-class vibroacoustic test capabilities at the NASA GRC's Plum Brook Station in Sandusky, Ohio, USA. Benham Companies, LLC is currently constructing modal, base-shake sine and reverberant acoustic test facilities to support the future testing needs of NASA's space exploration program. The large Reverberant Acoustic Test Facility (RATF) will be approximately 101,000 ft3 in volume and capable of achieving an empty chamber acoustic overall sound pressure level (OASPL) of 163 dB. This combination of size and acoustic power is unprecedented amongst the world's known active reverberant acoustic test facilities. The key to achieving the expected acoustic test spectra for a range of many NASA space flight environments in the RATF is the knowledge gained from a series of ground acoustic tests. Data was obtained from several NASA-sponsored test programs, including testing performed at the National Research Council of Canada's acoustic test facility in Ottawa, Ontario, Canada, and at the Redstone Technical Test Center acoustic test facility in Huntsville, Alabama, USA. The majority of these tests were performed to characterize the acoustic performance of the modulators (noise generators) and representative horns that would be required to meet the desired spectra, as well as to evaluate possible supplemental gas jet noise sources. The knowledge obtained in each of these test programs enabled the design of the RATF sound generation system to confidently advance to its final acoustic design and subsequent ongoing construction.

  16. The Testing Behind The Test Facility: The Acoustic Design of the NASA Glenn Research Center's World-Class Reverberant Acoustic Test Facility

    NASA Technical Reports Server (NTRS)

    Hozman, Aron D.; Hughes, William O.; McNelis, Mark E.; McNelis, Anne M.

    2011-01-01

    The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) is leading the design and build of the new world-class vibroacoustic test capabilities at the NASA GRC's Plum Brook Station in Sandusky, Ohio, USA. Benham Companies, LLC is currently constructing modal, base-shake sine and reverberant acoustic test facilities to support the future testing needs of NASA's space exploration program. The large Reverberant Acoustic Test Facility (RATF) will be approximately 101,000 cu ft in volume and capable of achieving an empty chamber acoustic overall sound pressure level (OASPL) of 163 dB. This combination of size and acoustic power is unprecedented amongst the world's known active reverberant acoustic test facilities. The key to achieving the expected acoustic test spectra for a range of many NASA space flight environments in the RATF is the knowledge gained from a series of ground acoustic tests. Data was obtained from several NASA-sponsored test programs, including testing performed at the National Research Council of Canada's acoustic test facility in Ottawa, Ontario, Canada, and at the Redstone Technical Test Center acoustic test facility in Huntsville, Alabama, USA. The majority of these tests were performed to characterize the acoustic performance of the modulators (noise generators) and representative horns that would be required to meet the desired spectra, as well as to evaluate possible supplemental gas jet noise sources. The knowledge obtained in each of these test programs enabled the design of the RATF sound generation system to confidently advance to its final acoustic design and subsequent ongoing construction.

  17. Effects of Computer System and Vowel Loading on Measures of Nasalance

    ERIC Educational Resources Information Center

    Awan, Shaheen N.; Omlor, Kristin; Watts, Christopher R.

    2011-01-01

    Purpose: The purpose of this study was to determine similarities and differences in nasalance scores observed with different computerized nasalance systems in the context of vowel-loaded sentences. Methodology: Subjects were 46 Caucasian adults with no perceived hyper-or hyponasality. Nasalance scores were obtained using the Nasometer 6200 (Kay…

  18. Logopenic and Nonfluent Variants of Primary Progressive Aphasia Are Differentiated by Acoustic Measures of Speech Production

    PubMed Central

    Ballard, Kirrie J.; Savage, Sharon; Leyton, Cristian E.; Vogel, Adam P.; Hornberger, Michael; Hodges, John R.

    2014-01-01

    Differentiation of the logopenic (lvPPA) and nonfluent/agrammatic (nfvPPA) variants of Primary Progressive Aphasia is important yet remains challenging, since it hinges on expert-based evaluation of speech and language production. In this study, acoustic measures of speech in conjunction with voxel-based morphometry were used to determine the success of the measures as an adjunct to diagnosis and to explore the neural basis of apraxia of speech in nfvPPA. Forty-one patients (21 lvPPA, 20 nfvPPA) were recruited from a consecutive sample with suspected frontotemporal dementia. Patients were diagnosed using the current gold standard of expert perceptual judgment, based on the presence/absence of particular speech features during speaking tasks. Seventeen healthy age-matched adults served as controls. MRI scans were available for 11 control and 37 PPA cases; 23 of the PPA cases underwent amyloid ligand PET imaging. Measures, corresponding to perceptual features of apraxia of speech, were periods of silence during reading and relative vowel duration and intensity in polysyllable word repetition. Discriminant function analyses revealed that a measure of relative vowel duration differentiated nfvPPA cases from both control and lvPPA cases (r² = 0.47), with 88% agreement with expert judgment of the presence of apraxia of speech in nfvPPA cases. VBM analysis showed that relative vowel duration covaried with grey matter intensity in areas critical for speech motor planning and programming: the precentral gyrus, supplementary motor area, and inferior frontal gyrus bilaterally, affected only in the nfvPPA group. This bilateral involvement of frontal speech networks in nfvPPA potentially affects access to compensatory mechanisms involving right-hemisphere homologues. Measures of silences during reading also discriminated the PPA and control groups, but did not increase predictive accuracy. Findings suggest that a measure of relative vowel duration from a polysyllable word repetition task

  19. Acoustic and Auditory Perception Effects of the Voice Therapy Technique Finger Kazoo in Adult Women.

    PubMed

    Christmann, Mara Keli; Cielo, Carla Aparecida

    2017-05-01

    This study aimed to verify and correlate acoustic and auditory-perceptual measures of the glottic source after performance of the finger kazoo (FK) technique. This is an experimental, cross-sectional, and qualitative study. We analyzed the vowel [a:] in 46 adult women with neither vocal complaints nor laryngeal alterations, using the Multi-Dimensional Voice Program Advanced and the RASATI scale, before and immediately after performing three series of FK and after 5 minutes of silence. Kappa, Friedman, Wilcoxon, and Spearman tests were used. We found a significant increase in fundamental frequency and reductions in amplitude variation and degree of sub-harmonics immediately after performing FK. Positive correlations were found between aspects of RASATI and measures of frequency and its perturbation, measures of amplitude, the soft phonation index, and the degree and number of unvoiced segments. Negative correlations were found between aspects of RASATI and the voice turbulence index, measures of frequency and its perturbation, and measures of the soft phonation index. There was a fundamental frequency increase, within normal limits, and a reduction in acoustic measures related to the presence of noise and instability. In general, acoustic measures suggestive of noise and instability decreased along with the auditory-perceptual aspects of vocal alteration. This shows that both instruments are complementary and that the acoustic vocal effect was positive. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  20. Accuracy of Acoustic Analysis Measurements in the Evaluation of Patients With Different Laryngeal Diagnoses.

    PubMed

    Lopes, Leonardo Wanderley; Batista Simões, Layssa; Delfino da Silva, Jocélio; da Silva Evangelista, Deyverson; da Nóbrega E Ugulino, Ana Celiane; Oliveira Costa Silva, Priscila; Jefferson Dias Vieira, Vinícius

    2017-05-01

    This study aims to investigate the accuracy of acoustic measures in discriminating between patients with different laryngeal diagnoses. The study design is descriptive, cross-sectional, and retrospective. A total of 279 female patients participated in the research. Acoustic measures of the mean and standard deviation (SD) of the fundamental frequency (F0), jitter, shimmer, and glottal-to-noise excitation (GNE) were extracted from the emission of the vowel /ε/. Isolated acoustic measures do not demonstrate adequate performance in discriminating patients with and without laryngeal alteration. The combination of GNE, SD of F0, jitter, and shimmer improved the ability to classify patients with and without laryngeal alteration. In isolation, the SD of F0, shimmer, and GNE presented acceptable performance in discriminating individuals with different laryngeal diagnoses. The combination of acoustic measures produced a slight improvement in the performance of the classifier in discriminating healthy larynx vs vocal polyp (SD of F0, shimmer, and GNE), healthy larynx vs unilateral vocal fold paralysis (SD of F0 and jitter), healthy larynx vs vocal nodules (SD of F0 and jitter), healthy larynx vs sulcus vocalis (SD of F0 and shimmer), and healthy larynx vs voice disorder due to gastroesophageal reflux (mean F0, jitter, and shimmer). Isolated acoustic measures do not demonstrate adequate performance in discriminating patients with and without laryngeal alteration, although they present acceptable performance in classifying different laryngeal diagnoses. Combined acoustic measures present an acceptable capacity to discriminate between the presence and absence of laryngeal alteration and to differentiate several laryngeal diagnoses. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
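    Jitter and shimmer, used throughout the record above, are cycle-to-cycle perturbation measures. A minimal sketch of their local variants, on hypothetical period and amplitude sequences (the definitions follow the common mean-absolute-difference form; clinical analyzers add voicing checks and smoothing):

```python
import numpy as np

def local_jitter(periods):
    # Mean absolute difference of consecutive glottal periods,
    # relative to the mean period.
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    # Same idea applied to cycle peak amplitudes.
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Hypothetical cycle-to-cycle data for a sustained vowel
# (periods in seconds, amplitudes in arbitrary units):
periods = [0.0050, 0.0051, 0.0049, 0.0050, 0.0052]
amps = [1.00, 0.97, 1.02, 0.99, 1.01]
print(f"jitter:  {100 * local_jitter(periods):.2f}%")
print(f"shimmer: {100 * local_shimmer(amps):.2f}%")
```

    Both are usually reported as percentages; combining them with noise measures such as GNE, as the study does, is what lifts classifier performance above any single measure.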